0632_DiSIEM_700692.md
# 1 Introduction
The Commission is running a flexible pilot under Horizon 2020 called the Open
Research Data (ORD) Pilot. The ORD pilot aims to improve and maximize access
to and re-use of research data generated by Horizon 2020 projects and
considers the need to balance openness and protection of scientific
information, commercialization and Intellectual Property Rights (IPR), privacy
concerns, security, as well as data management and preservation aspects.
As a participating project, DiSIEM is required to develop a Data Management
Plan (DMP), identified as deliverable D8.2. The DMP is a key element of good
data management, describing the data management life cycle for the data to be
collected, processed and/or generated. The goal is to make research data
findable, accessible, interoperable and re-usable (FAIR).
All partners have contributed to the document, completing a project-wide
questionnaire that was then used to determine each partner’s role in creating
and/or processing data.
## 1.1 Organization of the Document
Since each partner will generate and/or manipulate data, the document is
organized with one section per partner (Sections 3-9). Each of these sections
is structured in five subsections:
1. **Dataset description** contains a textual description of the dataset. It aims at explaining, in a short paragraph, what the dataset contains and what its goal is;
2. **Standards and metadata** focuses on explaining the internals of the dataset, namely how a user can find syntactical and semantic information;
3. **Data sharing** addresses the issues related to data access, and privacy concerns, namely if the dataset is going to be indexed, and how and to whom it will be made accessible;
4. **Archiving and presentation** covers the aspects related to data availability, during and beyond the project, as well as the actions taken and planned to support availability;
5. **Data details** goes into the specifics of each partner’s dataset, describing its content.
Besides these per-partner sections, the document also contains a general
description of our overall methodology in terms of data collection and sharing
in Section 2. The summary and conclusions of the Data Management Plan are in
Section 10. In the appendix, we included the questionnaire each partner filled
to prepare the document.
# 2 Methodology
In this section, we explain the general policies we defined for storing and
sharing the datasets produced during the project, as well as the overall
methodology used to produce this document.
## 2.1 DiSIEM Policy for Storage and Sharing of Datasets
One of the most important aspects of the methodology is how datasets are to be
stored and used during the project.
A first general concern is how the produced datasets are stored. The
consortium decided to handle this in three ways, depending on the type of dataset:
* For the public datasets, i.e., the ones we can share outside the consortium, we plan to publish them on the project webpage (or in another public repository to be referred by the project webpage).
* For controlled datasets, i.e., the ones that will be anonymized and shared within the consortium for enabling partners to do exploratory studies, we created a special directory in the project repository for storing them. The idea is to have a subdirectory for each dataset containing not only the dataset files but also an _info.txt_ text file with a brief description and metadata of the dataset (an example layout is sketched after this list).
* For privacy-sensitive datasets, i.e., those that contain critical information from partners and therefore require special care in sharing, we decided that partners need to agree on the specifics of how sharing can be done. This might include the signing of specific agreements and protocols between the involved partners. In any case, this should be done between partners, without any direct influence from the consortium.
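A minimal sketch of the controlled-datasets directory layout described above; the dataset name is hypothetical, and only the per-dataset subdirectory and its _info.txt_ follow from the convention:

```
controlled-datasets/
└── edp-siem-events/          # one subdirectory per dataset (name is hypothetical)
    ├── info.txt              # brief description and metadata of the dataset
    ├── events-part1.csv
    └── events-part2.csv
```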
Controlled datasets will be kept in our project repository, which is
maintained in a dedicated KVM virtual machine hosted by FCUL. This VM can be
directly accessed only by DI-FCUL system administrators and is externally
visible only through the GitLab web interface and the git protocol over
SSL/TLS. All accesses require authentication with valid credentials, and
access control is enforced. We therefore believe an adequate level of
protection is provided for these datasets.
As will be clear in the next sections, the preferred formats for datasets are
CSV (Comma Separated Values, as specified in RFC4180 [1]) and JSON, since both
are text-based and easily parsed by any tool or service being used within the
project.
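As a minimal sketch of why these formats are convenient, the following Python snippet loads both using the standard library alone (the file names are hypothetical):

```python
import csv
import json

# Read an RFC 4180-style CSV file into a list of dicts (header row assumed).
with open("events.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Read a JSON dataset; here we assume a single JSON document per file.
with open("events.json", encoding="utf-8") as f:
    records = json.load(f)

print(len(rows), "CSV rows;", len(records), "JSON records")
```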
## 2.2 Data Collection Methodology
To compile the data management plan, a questionnaire was first prepared,
covering the main questions that need to be answered in the template provided
by the European Commission [2].
In the second phase, each project partner responded to the questionnaire,
filling it with as much detail as possible at this stage of the project.
Completed questionnaires were stored for analysis and traceability in the
project’s git repository.
In the third phase, the Data Management Plan was created as a synthesis of the
questionnaire results, attempting to take advantage of commonalities between
responses to provide a simple view of data management procedures within the
consortium.
Further revisions of the document will be based on updates to the partner
questionnaires. The DMP will therefore be updated at least at the mid-term
and final reports, to accommodate any new data forms and requirements that
cannot be anticipated at this stage of the project.
# 3 Dataset FFCUL
FFCUL is an academic partner in the project and is therefore not expected to
contribute datasets about monitored infrastructures. However, it plans to
contribute some OSINT datasets that might be useful for evaluating the tools
and techniques proposed for processing such kinds of data.
## 3.1 Dataset Description
In principle, FFCUL will provide a collection of tweets classified as
“relevant or not” for a given reference infrastructure, a list of operating
system vulnerabilities collected from NVD and enriched with information from
other databases, and a list of compromised IP addresses collected from several
security feeds on the Internet.
## 3.2 Standards and metadata
The dataset will contain data formatted using the common Comma-Separated
Values (CSV) standard.
## 3.3 Data Sharing
Since all these datasets are collected from public feeds on the Internet,
FFCUL intends to make them publicly available, subject to applicable data
protection legislation.
## 3.4 Archiving and presentation
The datasets will be made available as the companion papers exploring them are
published. The idea is to have papers that use the datasets to validate tools
built within the project. Once the papers are made public, the datasets will
be made available either through the project webpage or through the DI-FCUL
webpage.
## 3.5 Data details
FFCUL will provide three different types of OSINT datasets that can be used to
validate different DiSIEM innovations:
* A collection of tweets gathered from 80 cybersecurity-related accounts such as sans_isc, e_kaspersky, alienvault, vuln_lab, etc. These tweets will be manually classified as relevant or not to some synthetic organization infrastructure;
* A list of operating system vulnerabilities collected from NVD and enriched with information about exploits and patches obtained from other vulnerability databases such as ExploitDB and OSVDB;
* A list of compromised IPs collected from more than a hundred security feeds, organized by publication date and source.
Notice that the operating system vulnerabilities dataset is somewhat similar
to the data offered by the vepRisk tool from City (see next section). In the
future, we will try to integrate these datasets to avoid duplicated effort.
# 4 Dataset CITY
City, being an Academic partner in the project, will be primarily a data
consumer rather than a data producer. We plan to analyse the data provided by
the project partners to evaluate and test our extensions and plug-ins for
diversity and data visualisation.
## 4.1 Dataset Description
We also plan to deploy our own testbed to evaluate and test the extensions
we build for diversity and probabilistic modelling. The data will consist of
synthetically generated network data, as well as data collected from a
University honeypot.
We are also building a tool that gathers public data on vulnerabilities,
patches and exploits. The tool is made available at the following site (the
URL may change in the future):
_http://veprisk.city.ac.uk/sampleapps/vepRisk/_
## 4.2 Standards and metadata
The data from our testbed will consist of network traffic, in the _pcap_
format, as well as the alerts of the Intrusion Detection Systems (IDS) we will
test: Snort, Suricata and Bro. These will be generated in the respective alert
format of the tool vendors.
The data from the vepRisk tool can be downloaded from the site in CSV format.
## 4.3 Data Sharing
Synthetic data from our testbed will be shared with DiSIEM partners without
restriction. Data from honeynets would need to be anonymised first to remove
sensitive, confidential and/or private information. Data from vepRisk is
available from the public page of the tool.
## 4.4 Archiving and presentation
The dataset will be disseminated to the consortium via the Git repository.
## 4.5 Data details
For the vepRisk tool, the data is taken from public databases on
vulnerabilities, patches and exploits; information on these data is available
from the repositories where the data is collected, namely NVD, ExploitDB and
various patch databases (e.g., Microsoft, Ubuntu, etc.).
Regarding our testbed, we expect the data will include network flows (source
and destination IP addresses, source and destination ports, network protocol,
timestamp etc.) and the alerts from the IDS platforms.
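As a minimal sketch of how such flow records could be extracted from testbed captures, the snippet below reads a pcap with scapy and collects flow 5-tuples; the file name is hypothetical, and real traffic would of course carry more detail (e.g., timestamps via `pkt.time`):

```python
from scapy.all import rdpcap
from scapy.layers.inet import IP, TCP, UDP

flows = set()
for pkt in rdpcap("testbed.pcap"):  # hypothetical capture file
    if IP not in pkt:
        continue
    has_l4 = TCP in pkt or UDP in pkt
    flows.add((
        pkt[IP].src,
        pkt.sport if has_l4 else None,
        pkt[IP].dst,
        pkt.dport if has_l4 else None,
        "TCP" if TCP in pkt else "UDP" if UDP in pkt else str(pkt[IP].proto),
    ))

print(f"{len(flows)} distinct flows")
```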
# 5 Dataset EDP
## 5.1 Dataset Description
Having an operational SIEM platform that receives over 10,000 events per
second, EDP – Energias de Portugal, SA has the capability to provide realistic
and meaningful data for analysis. The dataset will consist of a significant
subset of real events, comprising data from multiple and diverse sources,
after adequate pre-processing to ensure that no confidential information is
wrongfully distributed.
## 5.2 Standards and metadata
The dataset will contain data formatted using the common Comma-Separated
Values (CSV) standard, as specified in RFC4180 [1].
## 5.3 Data Sharing
EDP will make data available to the project partners. The specific
information to be shared depends on the needs presented by the partners, as
well as on a risk assessment to guarantee legal and business policy
compliance. The final dataset details will be indicated in a later release of
the DMP.
Information retrieved from EDP’s SIEM platform should not be made publicly
available due to the critical nature of the data and user privacy concerns.
EDP is investigating tools to enable data masking and/or anonymization. We
identified and started performing tests with two such tools: Python Faker
( _http://blog.districtdatalabs.com/a-practical-guide-to-anonymizing-datasets-with-python-faker_ )
and ARX ( _http://arx.deidentifier.org/_ ).
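A minimal sketch of the masking approach under test with Python Faker is shown below; the event field names are hypothetical, and a per-value cache keeps replacements consistent, so the same user or address always maps to the same alias:

```python
from faker import Faker

fake = Faker()
aliases = {}  # original value -> stable fake replacement

def mask(value, generator):
    """Replace a sensitive value with a fake one, consistently per value."""
    if value not in aliases:
        aliases[value] = generator()
    return aliases[value]

event = {"source_username": "j.silva", "source_address": "10.12.3.44"}
masked = {
    "source_username": mask(event["source_username"], fake.user_name),
    "source_address": mask(event["source_address"], fake.ipv4_private),
}
print(masked)
```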
## 5.4 Archiving and presentation
The dataset will be disseminated to the consortium via the official Git
repository.
## 5.5 Data details
The most relevant SIEM events collected in EDP’s platform, with a summary of
the respective field set, are presented in the following table.
(√ = field available for that event source; X = not available)

| Field | Firewall | IPS | User authentication | VPN access | Server access | Antivirus |
|---|---|---|---|---|---|---|
| Event name | √ | √ | √ | √ | √ | √ |
| Source username | X | X | √ | √ | √ | √ |
| Source address | √ | √ | √ | √ | √ | √ |
| Source port | √ | √ | X | X | X | X |
| Source geo country | √ | √ | X | √ | √ | X |
| Destination username | X | X | √ | √ | √ | √ |
| Destination address | X | X | √ | √ | √ | √ |
| Destination port | √ | √ | X | X | X | X |
| Destination geo country | √ | √ | X | √ | √ | X |
| Application protocol | √ | √ | X | X | X | X |
| File name | X | X | X | X | X | √ |
| Policy name | X | X | X | X | X | √ |
**Table 1 – Data details (EDP)**
Field format (an illustrative row follows the list):
* Event name: String (255-character limit);
* Source username: String (255-character limit);
* Source address: IP address (IPv4);
* Source geo country: String (255-character limit);
* Destination username: String (255-character limit);
* Destination address: IP address (IPv4);
* Destination port: Integer from 1 to 65535;
* Destination geo country: String (255-character limit);
* Application protocol: String (255-character limit);
* File name: String (255-character limit);
* Policy name: String (255-character limit).
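For illustration, a single firewall event exported in these formats could look like the entirely hypothetical CSV excerpt below; per Table 1, the username and destination address columns are empty for firewall events:

```
event_name,source_username,source_address,source_port,source_geo_country,destination_address,destination_port,destination_geo_country,application_protocol
"Connection denied",,203.0.113.7,51522,PT,,443,DE,https
```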
# 6 Dataset AMADEUS
## 6.1 Dataset Description
Amadeus can provide real datasets from different log sources: applications,
firewalls, OS syslog, antivirus, proxy, VPN, IDS, DNS, etc. We need to
preprocess and anonymise the data before sharing it with partners.
## 6.2 Standards and metadata
Two data formats will be used for the shared datasets:
1. Comma-Separated Values (CSV);
2. JSON.
Documentation will be provided with each type of dataset shared with the
partners.
## 6.3 Data Sharing
Amadeus datasets will be shared with DiSIEM partners depending on the needs
presented. However, partners must ensure that shared datasets are not made
publicly available under any circumstances, due to legal and business policy
restrictions.
## 6.4 Archiving and presentation
The dataset will be disseminated to the consortium via the official Git
repository, or any secure file sharing method (in the case of privacy-
sensitive data).
## 6.5 Data details
A summary of the datasets to be shared with DiSIEM partners can be found in
the table below:
| Source | Description |
|---|---|
| LSS ASM logs | An administration tool for an authentication and access control management application |
| HTTP access logs | HTTP logs from an e-commerce application |
| Cisco, Palo Alto Networks | Firewall logs |
| McAfee | Antivirus |
| Suricata, Palo Alto, Bro | IDS |
| Cisco VPN | VPN |
**Table 2 – Data details (AMADEUS)**
The next sections provide a description of the data fields for each dataset.
#### 6.5.1 LSS ASM logs
The logs of an administration tool for an authentication and access control
management application. The dataset to be provided is a set of user actions. A
user session is a set of user actions with the same session id (PFX, see table
below):
| Field | Description |
|---|---|
| PFX | Session id |
| Orga | Organisation |
| Action | Type of action performed |
| userId | User issuing the action |
| officeId | Office from which the user is connecting |
| Country | Country code |
| IP | IP address |
| *Browser | Client browser used |
| *browserEngine | Client browser engine |
| *OS | Client operating system |
*These fields are derived from the useragent string.
**Table 3 – LSS ASM logs (AMADEUS)**
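A minimal sketch of reconstructing user sessions from such a log, assuming it is exported as a CSV with the columns of Table 3 (the file name is hypothetical):

```python
import pandas as pd

logs = pd.read_csv("lss_asm_logs.csv")

# Group actions sharing the same session id (PFX) into sessions.
sessions = logs.groupby("PFX").agg(
    user=("userId", "first"),
    office=("officeId", "first"),
    n_actions=("Action", "size"),
    actions=("Action", list),
)
print(sessions.head())
```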
#### 6.5.2 HTTP access logs
This dataset will be extracted from a web server of an e-commerce application.
The fields are the default HTTP request fields with some additional nested
fields extracted from the IP address and the useragent string. More details in
the table below:
| Field | Description |
|---|---|
| Datetime | Timestamp |
| Method | HTTP method |
| Urlpath | URI path |
| Status | HTTP status code |
| http_referrer | HTTP referrer |
| Useragent | Useragent string |
| Accept_language | Accept-Language in the HTTP header |
| Duration | Request processing time |
| Hostname | Target HTTP hostname |
| Referrer_uri_proto | Referrer URI protocol |
| Referrer_hostname | Referrer hostname |
| Referrer_uri_path | Referrer URI path |
| Referrer_params | Referrer parameters |
| Ua | Nested Useragent object |
| remoteclientipaddress | End user or CDN IP address |
| client_ip | Private IP address of HTTP server |
| Geoip | Nested Geo coordinates object |
| isp | Nested ISP object |
| edge_proxy_cip | End user or CDN IP address |
| x_forwarded_for | End user or CDN IP address |
| Jsessionid | The session id of a given request |
**Table 4 – HTTP access logs (AMADEUS)**
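For illustration, a single record of this dataset could look like the hypothetical JSON below; the key names follow Table 4, but the exact spelling, nesting and values are assumptions:

```json
{
  "Datetime": "2017-02-28T14:05:33Z",
  "Method": "GET",
  "Urlpath": "/booking/search",
  "Status": 200,
  "Useragent": "Mozilla/5.0 (...)",
  "Duration": 0.142,
  "Hostname": "shop.example.com",
  "remoteclientipaddress": "203.0.113.7",
  "Geoip": {"country_code": "FR"},
  "Jsessionid": "8f3a2c0e"
}
```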
#### 6.5.3 Suricata IDS
This dataset is extracted from the Open source IDS/NSM engine Suricata. A
brief description of the most relevant fields is provided in the table below:
| Field | Description |
|---|---|
| Category | Threat category |
| Dest | Destination IP address |
| Severity | Threat severity |
| Signature | Threat signature |
| Src | Source IP address |
| Answer | DNS server answer |
| Date | Timestamp |
| Dest_nt_host | Destination IP organization |
| Dest_port | Destination port number |
| Dns | Nested DNS response object |
| http | Nested HTTP request object |
| Eventtype | Suricata event type |
| Message_type | Request/Reply |
| Proto | Transport layer protocol |
| Src_nt_host | Same as Dest_nt_host, for the source |
| Ssl_issuer_common_name | SSL certificate issuer name |
| Ssl_issuer_organization | SSL certificate issuer organization |
| Ssl_publickey | SSL certificate public key |
| Ssl_subject_common_name | SSL subject name |
| Ssl_subject_organization | SSL subject organization |
| Ssl_version | SSL/TLS version |
| TLS | Nested TLS requests object |
**Table 5 – Suricata IDS (AMADEUS)**
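Suricata can emit its events as EVE JSON, one JSON object per line. A minimal sketch of filtering the alert events and printing fields similar to those above (the file name is hypothetical, and EVE key names differ slightly from the table):

```python
import json

with open("eve.json", encoding="utf-8") as f:
    for line in f:
        event = json.loads(line)
        if event.get("event_type") != "alert":
            continue
        alert = event.get("alert", {})
        print(event.get("timestamp"),
              event.get("src_ip"), "->", event.get("dest_ip"),
              alert.get("category"), alert.get("severity"),
              alert.get("signature"))
```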
#### 6.5.4 Cisco Firewall logs
Within the context of a security incident, administrators can use cisco syslog
messages to understand communication relationships, timing, and, in some
cases, the attacker's motives and/or tools.
| Field | Description |
|---|---|
| acl | Access control list |
| Action | The status of the action (e.g., allowed, blocked) |
| Cisco_ASA_action | The status reported by the Cisco Adaptive Security Appliance (e.g., allowed, blocked) |
| Cisco_ASA_message_id | The id of the Cisco message |
| Description | The description of the firewall event |
| Dest_category | The destination category of the event |
| Dest_dns | Destination DNS |
| Dest_mac | Destination MAC address |
| Dest_nt_host | Destination network host |
| Dest_port | Destination port |
| Dest_zone | The destination zone of the event |
| Eventtype | The type of the event |
| Group | The group of servers |
| Message_id | The ID of the message |
| Rule_name | The name of the rule |
| Severity_level | The severity level of the rule |
**Table 6 – Cisco firewall logs (AMADEUS)**
#### 6.5.5 Next-Generation Firewall – Palo Alto Networks (PAN)
This next-generation firewall classifies all traffic, including encrypted
traffic, based on application, application function, user and content.
| Field | Description |
|---|---|
| Action | The action taken by the IDS |
| Application | The application on which the alert was raised |
| Client_ip | The IP of the client |
| Client_location | The location of the client |
| Date | Timestamp |
| Dest_asset_id | The destination asset ID |
| Dest_dns | The DNS of the destination |
| Dest_interface | The destination network interface |
| Dest_ip | The IP of the destination |
| Dest_zone | The zone of the destination |
| Dest_nt_host | Destination network host |
| Eventtype | The type of event (e.g., allowed, blocked) |
| dstPort | Destination port |
| Protocol | The communication protocol being used |
| RuleName | The name of the rule |
| Server_IP | The IP of the server |
**Table 7 – Palo Alto Networks (AMADEUS)**
#### 6.5.6 Palo Alto IDS
This dataset is also extracted from Palo Alto Networks next-generation
firewalls. It contains the events tagged as threats. A description of the most
relevant fields is provided below:
| Field | Description |
|---|---|
| Action | Action taken by the IDS |
| Application | The application that raised the alert |
| Category | Category of the intrusion |
| Client_ip | Client local IP address |
| Client_location | Location of the client in the network |
| Date | Timestamp |
| Dest_ip | Destination IP address |
| Dest_hostname | Destination hostname |
| Dest_interface | Destination network interface |
| Dest_nt_host | Destination IP organization |
| Dest_port | Destination port number |
| DestinationZone | Destination network zone |
| IngressInterface | Ingress network interface |
| Proto | Transport layer protocol |
| Session_id | Communication session id |
| Severity | Severity level (1 to 5) |
| Signature | Vulnerability signature |
| SourceUser | Source username |
| Src_bunit | Source user business unit |
| Src_category | Source category |
| Src_dns | Source DNS server name |
| Src_mac | Source MAC address |
| Src_nt_host | Source IP organization |
| Src_owner | Source IP owner name |
| Src_port | Source port number |
| Src_zone | Source IP network zone |
| Threat:category | Threat category |
| Threat:name | Threat name |
| User | Username |
| User_watchlist | Boolean, true if user is in watch list |
**Table 8 – Palo Alto IDS (AMADEUS)**
#### 6.5.7 McAfee ePO
McAfee ePolicy Orchestrator, centralized security management software for
antivirus products, is the source of this dataset.
| Field | Description |
|---|---|
| Action | Action taken by McAfee Antivirus |
| Category | Threat category |
| Date | Date |
| Dest | Office ID |
| Dest_bunit | Destination business unit |
| Dest_ip | Destination IP address |
| Dest_mac | Destination MAC address |
| Dest_nt_domain | Destination IP network domain |
| Dest_nt_host | Destination hostname |
| Dest_owner | Destination user name |
| Detection_method | Firewall detection method |
| Devent_description | Firewall event description |
| File_name | Suspicious filename |
| Fqdn | Fully qualified domain name |
| Is_laptop | Boolean, 1 if laptop used |
| Logon_user | Username |
| Mcafee_epo_os | OS name |
| Os_build | OS build number |
| Os_version | OS version |
| Process | Process name |
| Product | Component creating the event |
| Severity | Threat severity level |
| Severity_id | A number mapped to severity |
| Src | Source IP address |
| Src_bunit | Source IP business unit |
| Src_category | Source IP category |
| Src_mac | Source MAC address |
| Src_nt_host | Source IP network zone |
| Src_owner | Source IP owner name |
| Src_priority | Same as dest_priority |
| Threat_handled | Boolean for whether the threat is handled |
| Threat_type | Threat type |
| User_email | User email address |
**Table 9 – McAfee ePO (AMADEUS)**
#### 6.5.8 Bro IDS
This dataset is extracted from Bro, an open source network analysis framework.
Below is a description of the Bro events fields.
| Field | Description |
|---|---|
| Body | Threat description |
| Category | Threat category |
| Date | Timestamp |
| Dest | Destination IP address |
| Dest_nt_host | Destination IP network zone |
| Dest_port | Destination port number |
| Eventtype | Bro event type |
| File_desc | Suspicious file |
| O | Organization |
| Src | Source IP address |
| Src_nt_host | Same as dest_nt_host |
| Src_port | Source port number |
| Tag::eventtype | Event type |
| Uid | User ID |
**Table 10 – Bro IDS (AMADEUS)**
#### 6.5.9 Cisco VPN
This dataset contains events from a Cisco VPN server. A description of the
dataset fields is given in the table below.
| Field | Description |
|---|---|
| Assigned_ip | Private IP assigned to the user session |
| Cisco_ASA_user | Username |
| Date | Timestamp |
| Duration | VPN session duration in seconds |
| Eventtype | Cisco VPN event type |
| Group | Remote access group |
| IP | User public IP address |
| Reason | Connection lost reason |
| User_email | User email address |
| User_identity | Full username |
| Username | Username |
**Table 11 – Cisco VPN (AMADEUS)**
# 7 Dataset DigitalMR
DigitalMR works with OSINT and has infrastructure to fetch information to
create datasets. We intend to fetch information from security related blogs
and tweets for a specific timeline of interest. These datasets will be
available during the project.
## 7.1 Dataset Description
Our data consists of openly available content on the Internet from sources
including blogs, forums, news, and social networks like Twitter, Instagram and
Facebook. This data is either scraped from the sources using our specially
built crawlers or fetched using the built-in API of the data sources such as
the ones provided by Twitter and Facebook.
## 7.2 Standards and metadata
The data is in JSON format, which is semi-structured and widely supported by
many applications. Depending on the scope of the project, the dataset can
contain up to 5 million Internet posts.
## 7.3 Data Sharing
Given that the content of the data might contain information such as
usernames, and privacy laws might vary between countries; it is the
responsibility of the user of the dataset to make sure that the applicable
legislations are respected.
## 7.4 Archiving and presentation
The dataset will be shared with the consortium via the official Git repository
in JSON format and will be available for use by the partners.
## 7.5 Data details
Some of the common fields in the data include the following (an example record
is sketched after this list):
* Author Username
* Author profile URL
* Post URL
* Parent tweet URL (for twitter content)
* Location
* Content/Post (actual content of the data)
* Date
* Tags (added by DigitalMR)
* Relevance (added by DigitalMR)
* Sentiment (added by DigitalMR)
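For illustration, a single post record could look like the hypothetical JSON below; the exact key names are assumptions based on the field list above:

```json
{
  "author_username": "example_user",
  "author_profile_url": "https://twitter.com/example_user",
  "post_url": "https://twitter.com/example_user/status/123456",
  "parent_tweet_url": null,
  "location": "London, UK",
  "content": "New phishing campaign targeting bank customers ...",
  "date": "2017-02-28T10:15:00Z",
  "tags": ["phishing", "banking"],
  "relevance": 1,
  "sentiment": "negative"
}
```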
# 8 Dataset FRAUNHOFER
Fraunhofer does not plan to produce any dataset during DiSIEM. Instead, data
provided from the project partners will be analysed using machine learning and
visual analytics methods. This may lead to the development of novel
representations of the event data produced by the SIEM platforms, as well as
the discovery of user- or session-clusters. These results can be used to
develop novel visualization tools for SIEM data.
To represent event sequences, Fraunhofer evaluates several embeddings,
including the bag-of-words approach (event occurrence frequencies within a
given sequence) and the TF-IDF score (term frequency multiplied by the inverse
document frequency) of events with respect to a given sequence database.
Another approach is to define a similarity measure for sequences. To that
end, Fraunhofer developed an embedding of event types into a metric space,
where the distance between events corresponds to their co-occurrence
frequencies within a given sequence database. These feature representations of
event sequences will be used to embed the data in 2D or 3D for visualization,
as well as to find clusters of sequences and users and to predict whether a
sequence is a potential threat.
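A minimal sketch of the bag-of-words/TF-IDF representation described above, treating each event sequence as a “document” of event-type tokens (scikit-learn; the sequences are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Each string is one event sequence; tokens are event types.
sequences = [
    "login fw_deny fw_deny av_alert",
    "login vpn_connect file_access logout",
    "fw_deny fw_deny fw_deny ids_alert",
]
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(sequences)  # one TF-IDF row per sequence

print(vectorizer.get_feature_names_out())
print(X.toarray().round(2))
```

Such rows can then be fed to standard clustering or 2D/3D embedding methods.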
# 9 Dataset ATOS
## 9.1 Dataset Description
The Atos dataset will be generated in a testbed specifically prepared for
DiSIEM. The dataset will consist of:
* Events generated by applications or sensors installed in the testbed (e.g., Snort, OSSEC, netfilter, JBoss, the Linux kernel, etc.), once normalized to the event format used by the XL-SIEM component;
* Alarms generated by the XL-SIEM component.
OSINT data or IoCs from external feeds such as AlienVault Open Threat Exchange
(OTX) could also be used by XL-SIEM in the Atos testbed.
Since the data will be generated in the testbed, no confidential information
will be provided in the dataset.
## 9.2 Standards and metadata
Currently, data generated in Atos testbed can be provided in two formats:
* Comma-Separated Values (CSV);
* JSON.
No documentation or metadata is currently provided with the dataset. The need
for such additional documentation will be analysed for a later release of the
DMP.
## 9.3 Data Sharing
Atos will make data available for the remaining DiSIEM partners. Information
retrieved from Atos’ SIEM platform should not be made publicly available
without previous authorization. The specific information to be shared depends
on the needs presented by the partners, as well as a risk assessment to
guarantee legal and business policy compliance.
## 9.4 Archiving and presentation
The dataset will be disseminated to the consortium via the official Git
repository. Data generated in the Atos testbed can also be shared with DiSIEM
partners over the _Advanced Message Queuing Protocol_ (AMQP), e.g., via a
RabbitMQ server.
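A minimal sketch of publishing a testbed event to a RabbitMQ server over AMQP with the pika client; the host, queue name and event fields are hypothetical:

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="disiem-events", durable=True)

event = {"plugin_id": 1001, "src_ip": "10.0.0.5", "dst_ip": "10.0.0.9"}
channel.basic_publish(exchange="",
                      routing_key="disiem-events",
                      body=json.dumps(event))
connection.close()
```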
## 9.5 Data details
SIEM events to be collected in Atos testbed and the final dataset details will
be indicated in a later release of the DMP.
Some event sources to be considered are:
* Firewall;
* Server access;
* Network Intrusion Detection System.
Currently, SIEM events collected (once normalized by the plugins included in
the XL-SIEM agent for each specific data source) have the following fields:
| Field | Description |
|---|---|
| Type | Type of plugin: detector or monitor |
| Date | Date (timestamp) on which the event is received from the sensor |
| Device | IP address of the XL-SIEM agent generating the event in the normalized format |
| Plugin_id | Identifier of the data source of the event generated |
| Plugin_sid | Type of event within the data source specified in plugin_id |
| Protocol | Protocol (TCP, UDP, ICMP…) |
| Src_ip | IP which the sensor generating the original event identifies as the source of this event |
| Src_port | Source port |
| Dst_ip | IP which the sensor generating the original event identifies as the destination of this event |
| Dst_port | Destination port |
| Log | Event data that the specific plugin considers part of the log and which is not accommodated in the other fields |
| Data | Raw event's payload, although the plugin may use this field for anything else |
| Userdata1 to Userdata9 | Fields defined in the normalized event format to hold relevant information from the specific event's payload. They can contain any alphanumeric information; the choice of field changes how the information is displayed in the event viewer |
| Organization | Identifies the organization where the agent is deployed |
**Table 12 – Data details (Atos)**
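For illustration, a normalized event with the fields of Table 12 could look like the hypothetical JSON record below (JSON being one of the two formats listed in Section 9.2):

```json
{
  "type": "detector",
  "date": "2017-02-28 14:05:33",
  "device": "192.0.2.10",
  "plugin_id": 1001,
  "plugin_sid": 2,
  "protocol": "TCP",
  "src_ip": "10.0.0.5",
  "src_port": 51522,
  "dst_ip": "10.0.0.9",
  "dst_port": 22,
  "log": "Failed password for invalid user admin",
  "userdata1": "ssh",
  "organization": "atos-testbed"
}
```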
# 10 Summary and Conclusions
The Data Management Plan of DiSIEM describes the partners' activities related
to datasets. It contains a summary of all the information available as of
February 28th, 2017. All partners but one intend to create datasets and make
them available within the consortium.
With respect to _dataset descriptions_ , most of the data manipulated by the
DiSIEM project is related to security events collected from SIEM systems and
processed using various exploratory methods.
With respect to _standards and metadata_ , the most prevalent form of data
format is Comma-Separated Values (CSV), a textual description of data that is
highly common and widely used in the SIEM and big data communities. This
format is very easy to manipulate, particularly adapted to sharing over git
(as text files are easily versioned) and is understood by a wide range of
tools.
With respect to _sharing_ , several partners intend to share the datasets for
further research and publication, at least in the academic community. Academic
research and innovation is the main objective of the data managed in the
DiSIEM project. All partners are aware of data sharing limitations due to
privacy concerns and legal obligations. When necessary, information will be
anonymized or truncated in compliance with the applicable legislation.
With respect to _archiving and presentation_ , partners plan to use internal
resources, which are already available at the time of writing.
Since it is very early in the project, this document only presents preliminary
proposals in terms of sharing, volume and archiving. The project is aware of
these aspects and will tackle them by updating the present document as the
specifications of the experiments are developed. The information in this
document is therefore subject to change.
0633_GABLE_732363.md
* **Standards and metadata:** Existing suitable standards and metadata that can be used to share the produced data.
* **Data sharing:** Detailed description of how data will be shared, including access procedures and the identification of the repository where the data will be stored.
* **Archiving and preservation (including storage and backup):** Description of the procedures that will be put in place for long-term preservation of the data.
In order to participate in the ORDP, the GABLE project will share the dataset
described in table 3.1. Full descriptions of the two different datasets that
will be generated and stored during this project are presented below.
3.1.1 Dataset 1
| **Dataset name** | Game interaction data and other sensitive information |
|---|---|
| **Dataset description** | This dataset includes all the video recordings of participants' online gaming sessions during game interaction, and also any other personal data with sensitive information associated to the patient. This data will be generated during the project development, particularly in the piloting phase of the project. |
| **Standards and metadata** | Video recordings of game interactions and documents with the associated personal data will be stored using standard formats (avi, mp4, pdf, docx). |
| **Data sharing** | This data will be accessible only to authorised personnel using an access control system, in order to comply with the ethical and security requirements. To access the data, the system automatically verifies whether the users have authorisation. The characteristics and specific security measures that will be applied to this data were included in deliverables 9.8 and 9.11. |
| **Archiving and preservation (including storage and backup)** | This sensitive data will not be shared and will be stored in a secure server with access control, applying standards aimed at ensuring the levels of security required for handling this sensitive data. The storage system will conserve the data for five years from the beginning of the project. |
Table 3.0: Description of dataset 1
3.1.2 Dataset 2
| **Dataset name** | Statistical information of relevant data generated during the testing of games |
|---|---|
| **Dataset description** | This dataset will include data related to statistical information, for example game scores, produced during the testing of games. This data will not include any personal information of the end users. |
| **Standards and metadata** | This dataset will be a combination of Excel files presenting the numerical data with the statistical information and PDF files with a detailed explanation of the shared data and its corresponding metadata. |
| **Data sharing** | The consortium will share some statistical information of relevant data generated during the testing of games. This information will not contain any personal information of the end users but will provide useful data for future developers of similar games. This information will be published in a public repository. |
| **Archiving and preservation (including storage and backup)** | This data will be shared in a download section of the project website ( _www.projectgable.eu_ ), which will be linked to an external digital repository ( _https://zenodo.org_ or similar). |
Table 3.1: Description of dataset 2
0635_LADIO_731970.md
_Figure 1. Witness-Cam rig tests by SRL and Quine_
The first contributions of this _LADIO_DATASET_Live_Action_ will be
available in June 2017, and we expect to further expand the dataset.
2. _**LADIO_DATASET_Advanced_3D_Reconstruction** _
This dataset will consist of _web-collected data_ , based upon an academic
data set from INP experiments in WP3. It will include:
1. A selection of publicly available 3D object models,
2. A set of synthetically rendered images per object model, coupled with camera poses,
3. A set of web-selected real images of object model instances, coupled with camera poses.
In a), the 3D object models will be taken from sites sharing free 3D models
online, e.g., free3D.com (formerly tf3dm.com). They will be completed by
pre-computed differential geometry information (Gaussian maps, derivatives
w.r.t. the surface parameters, first and second fundamental forms, principal
curvatures, etc.) associated to each vertex of the surface. In LADIO, the 3D
object will be given by a set of 3D depth maps, which describe how the
original object surface is “shortened” by a perspective viewing. We will
provide code along with the dataset for generating the depth map from the 3D
models, given a camera pose (a sketch of this computation is given below). On
the other hand, b) and c) will be used as test intensity images. In b), camera
poses are known and will correspond to ground-truth. In c), the camera poses
will be determined by _manually_ registering the real images to the 3D model
and will be considered as ground-truth.
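A minimal sketch of that computation, under simplifying assumptions (a pinhole camera with intrinsics K and pose R, t; depth is taken per projected vertex rather than by rasterizing triangles, which a real implementation would do):

```python
import numpy as np

def depth_map(vertices, R, t, K, width, height):
    """vertices: (N, 3) world points; R, t: camera pose; K: 3x3 intrinsics."""
    cam = vertices @ R.T + t            # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
    uvw = cam @ K.T                     # perspective projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], cam[inside, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # keep the nearest surface point
    return depth
```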
This data set will allow evaluating the performance of algorithms for
registering 2D images to 3D untextured models. In particular, it will allow
measuring the degree of repeatability of the proposed features. In our case,
the repeatability of a feature is defined as the frequency with which a
feature detected in the depth image is found within pixels of the same
location in the corresponding intensity image.
We will also provide annotations for existing data sets (the original data
cannot be redistributed), explaining how to use them with respect to LADIO:
* IMPART datasets (cvssp.org/impart): multi-modal/multi-view datasets created by Univ. of Surrey and Double Negative within the EU FP7 IMPART project.
* PASCAL3D+ dataset (cvgl.stanford.edu/projects/pascal3d.html)
_Figure 2. Multi-modal data footage and 3D reconstructions for various
indoor/outdoor scenes from IMPART datasets_
The first contributions of this _LADIO_DATASET_Advanced_3D_reconstruction_
will be made available in September 2017, and we expect to further expand the
dataset.
3. _LADIO_DATASET_Multi_Body_
Fully general Multi-Body Structure from Motion is a very difficult, highly
unconstrained problem that perhaps cannot be solved without adopting
additional constraints and priors for the particular situations at hand. We
will therefore investigate several key use cases, looking for the additional
constraints and priors that allow us to find, formulate and solve a
well-defined task.
_Figure 3. Independently moving objects and cameras lead to two disconnected
reconstructions (green top left and red top right), which are obtained in
independent scales (bottom left) but can be put together into a consistent
scale and meaningful relationship (bottom right). [Taken from J. Krcek.
Multi-Body Structure from Motion. MSc thesis. CTU in Prague, 1997.]_
We plan to investigate three cases that correspond to situations we
encountered in LADIO applications:
1. Data: a few objects of similar size and importance moving around at the same time. Task: segment and independently reconstruct individual objects, and try to bring them into a reasonable geometrical relationship (Figure 3). Application: basic research task leading to understanding, formulating and testing different Multi-Body SfM approaches.
2. Data: a main background scene with additional moving (nuisance) objects (cars, pedestrians, etc.). Task: segment moving objects from the background scene and ignore them. Application: reconstructing large outdoor scenes over extended time periods when some objects (often distractors) are moving in the scene.
3. Data: a main background scene changing in time. Task: detect changes in the scene and build a 3D time-dependent model representing the scene accurately at different time moments. Application: reconstructing studio setups where parts of the scene are being gradually restructured.
We will also extend POPART's ground truth data set “ _POPART Virtual Dataset -
Levallois Town Hall_ ” with moving objects as virtual data, corresponding to
our 3 use cases.
_Figure 4. Levallois town hall dataset_
This data set will allow evaluating different scenarios and selecting the most
important one for further development.
The first contributions of this _LADIO_DATASET_Multi_Body_ will be made
available in November 2017, and we expect to further expand the dataset.
# Academic Publications
In accordance with the open access publication obligations in Horizon 2020
projects, and with the global open innovation philosophy of the LADIO project,
the academic partners of the consortium are committed to “open access
publishing” (aka “gold open access”) whenever this option is provided by the
venues where we must publish to reach the highest impact of our results.
Some of the most important venues in the research community are stored in the
IEEE Xplore digital library, and IEEE does not provide any option for making
conference publications available as open access. However, it has become
common practice in some communities, including computer vision, to re-publish
these papers both on personal/institute web pages and on arXiv.org. In spite
of the potential legal threat of publishing slightly different versions of
works on arXiv while transferring copyright of the final work to the IEEE, we
follow this dual approach since it is established practice in the community.
# Standards and metadata
The video files in the data sets are based on ARRIRAW [1] and ISO MP4 [2].
Additionally, included text files describe lens metadata and other parameters.
3D models are stored in the industry standards Alembic [3] and FBX [4].
See also Deliverable 2.2 for more description of file formats.
1. _http://www.arri.com/camera/alexa/workflow/working_with_arriraw/arriraw/format/_
2. MPEG-4 Part 14 (ISO/IEC 14496-14:2003)
3. _http://www.alembic.io/_
4. _http://www.autodesk.com/products/fbx/overview_
# Data sets access and sharing
In the same spirit as the open source contributions of the project, LADIO's
released data sets will be permanently available on _http://ladioproject.eu_ ,
the project's GitHub page _https://github.com/alicevision_ and from Zenodo
_https://zenodo.org/collection/user-ladio_ (to be created) or a similar data
repository.
The data sets will be released to the general public under the _Creative
Commons Attribution-ShareAlike 4.0 International_ license, allowing
researchers and other interested parties to exploit the data sets.
Reminder: under this license, users are free to share and adapt the content
for any purpose, even commercially. Users must give appropriate credit,
provide a link to the license, and indicate if changes were made. If users
remix, transform, or build upon the material, they must distribute their
contributions under the same license as the original. They also may not apply
legal terms or technological measures that legally restrict others from doing
anything the license permits.
# Data sets reference and name
The identifier for these data sets will all be prefixed by 'LADIO_DATASET'.
0636_CURE_767015.md
**DOI data model:** the Application Profile Framework. DOI names (identifying
entities, i.e., microbiome entries per donor or immune data per donor) will be
grouped into application profiles (work packages). Any single DOI name can be
a member of multiple application profiles. Each application profile will
similarly be associated with one or more services (CURE deliverables); each
service will be made available in multiple ways. This makes it possible to
apply a new service to many DOI names, simply by adding that service to the
relevant application profile(s).
Figure 1: Project and data generation flow chart.
**Responsibilities / costing:** Data management responsibilities among
partners will be allocated during the first General Assembly in September
2018. A Data Control Committee (DCC) will be established to take over the role
of data controller in the project. The DCC will be included in the new DMP
version, which will be updated regularly. The DCC will estimate the costs that
will be needed for the project’s data management and propose to the General
Assembly allocation of costs per partner. Costs for open access are eligible
under all H2020 projects.
A detailed dictionary of terms will be available with each dataset.
# ⮚ Section 1: Organisation of data - common steps among partners
Within CURE, there is a broad range of experimental procedures from multiple
disciplines such as immunology, respiratory medicine, metagenomics,
bioinformatics, microbiology, virology, engineering, microbial ecology and
mathematics. However, a series of steps to standardise the handling of data
from discovery to publication has been designed and is presented in the
following section.
_Organization of data at each centre:_
_Laboratory-experimental data_ : All experiments will be described and logged
in hardcopy and/or electronic lab books. Electronic records will be in the
format of text files, Word (.doc), Excel (.xlsx), CSV (comma-separated values)
and RTF (rich text format) documents. Raw figure data will also be saved in
JPEG, TIFF or other high-resolution formats (>300 dpi). Raw data extracted by
various platforms such as real-time PCR cyclers, ELISA readers, Luminex etc.
will be saved in .txt, Excel or other appropriate formats. All experimental
protocols will be described in a clear and sufficiently detailed manner such
that protocols can be reproduced and shared between lab or consortium members,
peer-reviewers upon request, or external researchers following publication.
The protocol files will reside on the local servers and be exchanged as
necessary. All unprocessed data will be saved in one folder named **technical
unprocessed data (TUD)** , which can contain multiple sub-folders per user,
per process and/or per experiment. Any type of processed data will be saved in
one folder named **technical processed data (TPD)** (Figure 2A).
_High throughput sequencing data_ : Raw data produced by any type of
sequencing platform will be saved in a separate folder named **high throughput
sequencing unprocessed data (HTSUD)** . These files will contain the raw
output (FASTQ or FASTA) files after demultiplexing. Processed files will be
saved in a separate folder named **high throughput sequencing processed data
(HTSPD)** (Figure 2b). These files will include any type of microbial or human
annotated FASTQ or FASTA files. Specifically, for metagenomics at least two
subfolders will be included, one per strategy of microbial annotation: (1)
based on de novo assembly (metagenome assembled genomes), coded as the **MAG**
folder, and (2) based on exact **k-mer sequence annotation** ( **KMA** ).
Resulting data like contigs, sequence-derived statistics, dinucleotide odds
ratios and metagenome signatures, taxonomic assignments, etc. will be included
in a folder coded **metagenome sequence data** ( **MSD** ) (Figure 2c).
_Publication data_ : Processed data to be submitted for publication and/or
presentation at scientific meetings and workshops will be included in a
separate folder named **Publication data** . This folder will serve as (a) the
primary folder under any type of reviewing process and (b) the means of
circulating specific information of interest among consortium members which
might not be available in a journal's supplementary data file.
Figure 2: Organisation of experimental data. Common folders, which will be
found at all consortium centres (a) & (b) and, specifically for metagenomics
(University of Manchester) (c).
_Databases:_
A number of databases will be developed, initially corresponding to individual
work package activities and subsequently by merging of these into larger
databases for the analyses and the modelling. A list of provisioned databases
follows:
1. Clinical database A - cross sectional (Task 1.1., NKUA)
1. Clinical, epidemiological, demographic data - coming from questionnaires
(baseline questionnaires, ACT, ACQ)
2. Clinical measurements: lung function (spirometric, impulse oscillometry, exhaled fractional nitric oxide) e-diary cards, Skin prick tests
2. Clinical database B - longitudinal (Task 1.2, NKUA)
1. Clinical, epidemiological, demographic data - coming from questionnaires
(baseline questionnaires, ACT, ACQ)
2. Clinical measurements: lung function (spirometric, impulse oscillometry, exhaled fractional nitric oxide, methacholine provocation test), e-diary cards, Skin prick tests
3. Clinical database C_cross sectional
1. Clinical, epidemiological, demographic data - coming from questionnaires
(baseline questionnaires)
2. Clinical measurements: lung function (spirometric, impulse oscillometry, exhaled fractional nitric oxide), Skin prick tests
4. E-diary and e-spirometry database
a. The data will be stored in Nuvoair's encrypted and compliant health cloud
and shared with clinical partners
5. Immune response database - Innate (Task 2.1, BRFAA)
6. Immune response database - B-cells (Task 2.1, SIAF)
7. Primary epithelial cell response database (Task 2.2, SIAF)
8. In-vitro database - PBMC, immune (Task 2.1, BRFAA)
9. In-vitro database - Epithelial cell lines (Task 2.2, SIAF)
10. Phage database (Task 4.1, ELIAVA)
11. Phage-bacteria interaction information (Task 4.2, 4.3 ELIAVA)
12. Metagenomics database (Task 3.1, UMAN)
13. Metagenomics metadata database (Task 3.2, UMAN)
14. Merged databases:
1. Host response DB 1+3+4+5 (NKUA, BRFAA, SIAF)
2. Clinical metagenomics correlation DB 1+2+10 (NKUA, UMAN)
3. Clinical-microbial-immune interactions DB 12a+11 (NKUA, BRFAA, SIAF, UMAN)
Merged databases will be shared between the partners doing the relevant
analyses and forwarded to work package 5 for the modeling.
# ⮚ Section 2: Flow of human donor information across research disciplines -
encoding of data
⮚ Upon inclusion in the study, each donor will get a unique code ID. This will
be composed of the centre identifier, i.e., 1 for NKUA, and the number of the
donor included; e.g., the first donor to be included at NKUA will be 01. For
the baseline visit, B0 will be added, e.g., 101-B0. For follow-up visits, F and
the sequential number will be added, e.g., 101-F1 for the first visit,
101-F2 … 101-F9 for the ninth visit. Samples that are processed in any
partner centre will be given a new code locally. In practice, it is easier to
work with short codes given per experiment or experimental procedure. For
example, if the sample obtained from donor 101-B0 is processed for
metagenomics, then this sample will be given the relevant coding locally. The
local database will comply with good laboratory practice, high research
standards and the Human Tissue Act. To do so, minimum information will be
logged, including the central code ID, the sample code (or multiple codes used
in multiple experiments), the type of sample (e.g., DNA, protein, RNA,
metagenomes), the type of processing (e.g., cDNA synthesis, whole genome
amplification), the remaining sample volume, and the exact location of storage
(e.g., freezer 1, 2nd drawer, box 3, position 48). The database will be linked
with the material transfer agreements between centres.
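As an illustration of this coding scheme, the following hedged Python sketch builds donor visit codes; the `CENTRE_IDS` mapping is an assumption, since only the NKUA identifier is stated above:

```python
# Hypothetical helper illustrating the donor coding scheme described above:
# centre identifier + two-digit donor number, suffixed with B0 for baseline or
# F1..F9 for follow-up visits (e.g. first NKUA donor at baseline -> "101-B0").
CENTRE_IDS = {"NKUA": 1}  # assumed mapping; extend per consortium centre

def donor_code(centre: str, donor_number: int, visit: str = "B0") -> str:
    """Build a unique donor visit code, e.g. donor_code('NKUA', 1) -> '101-B0'."""
    return f"{CENTRE_IDS[centre]}{donor_number:02d}-{visit}"

assert donor_code("NKUA", 1) == "101-B0"
assert donor_code("NKUA", 1, "F1") == "101-F1"
```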
# ⮚ Section 3: Flow of information in the data exchange backbone
The flow of information was outlined in Figure 8 of the project proposal
(reproduced below) and is described in detail in each work package. The major
foci of information within the consortium are work package 1 (cohort
recruitment and follow-up) and work package 3 (metagenomics). All information
will be integrated in work package 5 to be used in the mathematical models.
# ⮚ Section 4: Data Types, Formats, Standards and Capture Methods
_Overview of Research data_ :
**Clinical data** ;
## Nature of data
The data consists of the following quantitative, raw data:
1. Clinical measurements (questionnaires; lung function: spirometry/impulse oscillometry; airway inflammation data: FeNO; diary cards; skin prick test) (clinical database)
2. Ex-vivo measurements
1. Flow cytometric measurements of nasal swabs (surface markers)
2. Blood cells (composition, response to stimuli)
3. Epithelial cells (viability, response to stimuli)
4. Levels of factors in serum (immunoglobulins, specific IgE)
3. In-vitro data
1. Cell-line response to stimuli (BRFAA, SIAF, NKUA)
4. Microbiological data
1. Bacterial and phage types/characteristics, such as bacterial serotypes and the identification methods applied; phage morphological classes (TEM images), their relatedness data, etc.
2. Phage-bacterial network data
5. Metagenomic data
1. Raw
2. Processed (different ways)
3. Metadata (OTUs, taxonomy, ecological indices)
6. Mathematical data
7. Model outputs
Quantitative data formats are continuous, ordinal, nominal and binary.
### Data collection
Data collection will take place in the study centres, in a specially
configured private space, at a scheduled date and time. The participants will
be informed via telephone call of the details of the appointment. On the day
of recruitment the participant will be given a unique study identifier. To
ensure anonymity, the name of the participant will be kept confidential and
will not be recorded in the questionnaires. The data will be collected in hard
copies (forms) and afterwards uploaded to a secured database by specially
trained study personnel. Prior to any participation, the recruiter will inform
the participant of the study's purpose, answer any questions the participant
might have, and have the participant sign the informed consent. Then the
baseline questionnaire will be completed and the initial samples collected.
Sampling will be repeated at pre-defined time points.
## Personal data
Personal identification data (name, date of birth, telephone number,
department ID and study IDs), in line with the "personal data" definition
according to GCP, will be stored separately and will not be used for analysis.
These will be saved in locked Excel format, on the Coordinator's personal
computer and on a locked hard drive in the Coordinator's office, separately
from other data. Data will be encrypted and processed in an anonymized way.
The data collected in these interviews will be coded with a study number.
The data will not identify the participant by name, only by a number. The
coded data will be available to the study centre. All data protection laws
of the EU and partner countries will be followed. At the end of the study, the
analysis of the complete data will be published anonymously and only in
summary form, so that no single participant can be identified.
### Data protection
All project data will be stored in accordance with the approvals given to the
individual projects by the respective (Greek and Polish) Data Protection
Agencies.
Patient data will be fully anonymized, so that it will be impossible – for the
data controller or for any other person – to (i) identify individuals from the
data, (ii) link records relating to an individual, or (iii) infer information
concerning an individual from the data. Detailed information is also provided
in Section 5 (Storage and security).
### Archiving and preservation – General principles
Data from the CURE project will be archived for long-term preservation at the
earliest five years after project completion, in accordance with the Grant
Agreement. Data will only be entered into long-term storage with the agreement
of the CURE Steering Committee, or when the relevant Data Protection Agency
approval expires.
### Data sharing – general principles
Data will be shared within the CURE consortium, with the acceptance of
participating partners and in compliance with applicable legislation:
1. **Clinical data** . We expect data regarding 300 variables per donor per scheduled visit.
2. **Epidemiological data** relating to microbial exposure (Tasks 1.1 and 1.2). Based on the relevant questionnaires, which will describe the level and type of microbial exposure of each donor, including life in an urban or rural environment, average contact with other people, means of commuting (metro, tram, bus, cycling, walking), job type, going to school or not, pets in the house, etc., we expect data regarding 200 variables per donor.
3. **Metagenomic data** . Unprocessed raw metagenomic data will include 4 FASTQ files per processed sample (swab): (i) metagenome library data from isolated microbial DNA (×2 due to paired-end sequencing), and (ii) metagenome library data from isolated microbial RNA (×2 due to paired-end sequencing). These files will be handed over from the University of Manchester genomics core facility directly to a designated member of the UMAN team, after appropriate demultiplexing and quality control of samples.
Processed metagenomic data will include qualitative and quantitative
information regarding the annotated microbial species identified in each
processed sample. For each sample, two major files will be formatted: (i)
metagenome content based on de novo assembly of contigs, with information
regarding the absolute number of reads and the relative abundance of each
microbial taxon at all taxonomic classification levels; (ii) metagenome
content based on microbial annotation without de novo assembly of genomes,
including the absolute number of reads and the relative abundance of each
microbial taxon at all taxonomic classification levels. These files will be
shared with other consortium centres based on the specific interactions and
flow of information between centres as outlined in the main proposal (T5.1,
T5.2, T1.3, T3.1 and T3.2).
Finally, processed metagenomic data will be produced in order to obtain a
predefined package of metadata to estimate microbial diversity, richness and
abundance using individual- and sample-based rarefaction curves, annotate
inferred biological pathways and processes (GO terms) enriched in the
assembled metagenomes, calculate metagenomic signatures based on the
dinucleotide relative abundance odds ratio (ρ*; a minimal sketch of this
computation is given after this list), identify viral signals (prophages) in
bacterial genomes, detect CRISPR-Cas systems and spacer sequences, construct
co-occurrence microbial interaction networks based on Spearman rank tests, and
include data from any other relevant metagenomics analysis.
4. **Human in vitro data** . Cells from nasopharyngeal brushings of patients with asthma and healthy controls will be characterised using a comprehensive panel of antibodies for monitoring diverse and rare leukocyte populations, including macrophages, dendritic cells (DCs), T cells, B cells, innate lymphoid cells (ILCs), NK cells, neutrophils, eosinophils, mast cells and others, as well as epithelial cells. Major macrophage, dendritic cell (DC) and epithelial cell populations will be sorted (ARIA III, BD) and analysed by transcriptomics using next generation sequencing (RNAseq; Illumina MiSeq) and qPCR. Supernatants will be further analysed for the presence of relevant cytokines and chemokines using Luminex technology. The effect of relevant phage preparations on blood-derived macrophages and DCs and on epithelial cell populations will also be examined in culture. Finally, in co-culture experiments, air-liquid interface-differentiated human epithelial cells will be infected with varying doses of relevant bacteria and treated with various titres of corresponding bacteriophages (single phages related to the same species, or mixtures thereof with overlapping host ranges). Data generated will include measures of epithelial and microbial viability, proliferation, and mediator transcription and production.
5. **Microbial in vitro data** . Phages will be isolated from nasal swabs, sputum, oral or nasal washings, the clinical environment, lysogenic strains, etc. An enrichment methodology will be used for the isolation of bacteriophages. For this purpose, a set of host cultures will be obtained from international culture collections and isolated from the CURE cohort. The isolated phages will then be characterized according to their biological and morphological (TEM images) properties, such as plaque and capsid morphology, host range, single-cell growth parameters (adsorption time, latent period, yield) and genetic features, using PCR, molecular typing (RAPD-PCR) and sequencing analyses. Cross-infection of phage isolates against a panel of bacteria will be performed using spot assays.
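As referenced above, here is a minimal sketch of the dinucleotide relative abundance odds ratio ρ*. It assumes the Karlin-style definition ρ*XY = f*XY / (f*X · f*Y), with frequencies computed over the sequence together with its reverse complement, and an ACGT-only input; it is illustrative only, not the project's actual pipeline:

```python
# Minimal sketch (not the project's pipeline) of the dinucleotide relative
# abundance odds ratio rho*_XY = f*_XY / (f*_X * f*_Y), with frequencies taken
# over the sequence concatenated with its reverse complement. Assumes an
# ACGT-only sequence; junction effects are ignored in this sketch.
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def dinucleotide_odds_ratios(seq: str) -> dict:
    seq = seq.upper()
    both = seq + seq.translate(COMPLEMENT)[::-1]  # append reverse complement
    mono = Counter(both)
    di = Counter(both[i:i + 2] for i in range(len(both) - 1))
    n_mono, n_di = sum(mono.values()), sum(di.values())
    f1 = {base: count / n_mono for base, count in mono.items()}
    return {xy: (count / n_di) / (f1[xy[0]] * f1[xy[1]])
            for xy, count in di.items()}

print(dinucleotide_odds_ratios("ATGCGCATTA"))
```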
**Mathematical data** . (1) _Phage-bacteria interactions_ : A cross-infection
matrix will be constructed and represented as a network where phages and
bacteria are nodes and edges indicate that a phage can infect and lyse a
specific host strain. The degree of interactions for each node will be used to
hierarchically cluster phage species based on their ability to infect multiple
bacterial species. To examine whether the phage-bacterial interactions are
deterministic and predictable (ecological and evolutionary drivers) or random,
four key types of Phage-Bacteria Infection Networks (PBINs) will be
investigated: random, one-to-one, nested (NTS and NODF) and modular (Bipartite
Recursively Induced Modules, BRIM). Two widely used null models, the Bernoulli
random network and the probabilistic degree network, will be used to measure
the statistical significance of patterns in the PBIN. (2) _Mathematical
modelling_ : Dirichlet multinomial mixture models, least angle regression and
model weighting approaches, and Lotka-Volterra models will predict the
dynamics of (i) the bacterial and phage strains involved in health and disease
and (ii) other microbial strains commonly found in the respiratory tract.
Stochastic optimisation (e.g., stochastic tunnelling) will be used to identify
optimal control strategies. Mathematical data processing will be implemented
in R and MATLAB.
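For illustration only, the sketch below integrates a generalized Lotka-Volterra system of the kind named above with SciPy; the project plans to use R and MATLAB, and all parameter values here are invented:

```python
# Illustrative sketch of a generalized Lotka-Volterra model for
# phage-bacteria dynamics, integrated with SciPy. Parameters are invented
# for demonstration, not fitted to CURE data.
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(y, t, r, A):
    """dy_i/dt = y_i * (r_i + sum_j A_ij * y_j)."""
    return y * (r + A @ y)

r = np.array([0.8, -0.4])            # bacterial growth, phage decay rates
A = np.array([[0.0, -0.02],          # bacteria lysed by phage
              [0.01, 0.0]])          # phage gain from infection
t = np.linspace(0, 100, 1000)
y0 = np.array([50.0, 5.0])           # initial densities

trajectory = odeint(lotka_volterra, y0, t, args=(r, A))
```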
**_Standards and best practices in next generation sequencing_ ** : The
metadata spreadsheet for NGS data will meet GEO's standards. It will provide
comprehensive information about the study design, sample information, the
protocol, and the data processing pipeline. The repository of shared code will
include a Wiki page and a README file describing the setup of the computing
environment, the usage of the software, and a demo. A recent overview
( _https://doi.org/10.1093/gigascience/gix047_ ) discusses in detail the
landscape of data standards available for the description of essential steps
in metagenomics, including (i) **material sampling** , (ii) **material
sequencing** , (iii) **data analysis** , and (iv) **data archiving and
publishing** . We will follow the proposed Metagenomics Data Model, providing
information regarding the: (1) study: information about the scope of a
sequencing effort that groups together all data of the project; (2) sample:
information about the provenance and characteristics of the sequenced samples;
(3) experiment: information about the sequencing experiments, including
library and instrument details; (4) run: an output of a sequencing experiment
containing sequencing reads represented in data files; and (5) analysis: a set
of outputs computed from primary sequencing results, including sequence
assemblies and functional and taxonomic annotations.
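A hedged example of a record following the five levels of this data model is shown below; all field names and values are illustrative assumptions, not the official schema:

```python
# Illustration of the five-level Metagenomics Data Model described above
# (study / sample / experiment / run / analysis). Field names and values
# are assumptions for demonstration only.
record = {
    "study": {"accession": "CURE_META_01",
              "scope": "nasal microbiome, asthma cohort"},
    "sample": {"donor_code": "101-B0", "material": "nasal swab",
               "collection_date": "2019-01-15"},
    "experiment": {"library_strategy": "metagenomic WGS",
                   "instrument": "Illumina", "layout": "paired-end"},
    "run": {"files": ["101-B0_R1.fastq.gz", "101-B0_R2.fastq.gz"],
            "checksum_type": "md5"},
    "analysis": {"pipeline": "in-house",
                 "outputs": ["assembly.fasta", "taxonomy.tsv"]},
}
```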
**_Sampling_ ** : We will modify the Minimum Information about a Metagenome
Sequence (MIMS) standard ( _http://wiki.gensc.org/index.php?title=MIGS/MIMS_ )
based on our tissue-specific output. MIMS is a data reporting standard
developed by the Genomic Standards Consortium (GSC), designed for the accurate
reporting of contextual information for samples associated with metagenomic
sequencing; it is also largely applicable to metatranscriptomics studies.
**_Sequencing_ ** : Once a sample is collected and its provenance recorded, it
is subjected to preparation steps for nucleotide sequence analysis. Equally
critical for the downstream metagenomic data analysis and interpretation is
the reporting of sequencing library preparation protocols and parameters as
well as sequencing machine configurations. We will use existing MIxS fields to
describe mandatory information (mandatory descriptors for new generation
nucleotide sequencing experiments as currently captured by International
Nucleotide Sequence Database Collaboration (INSDC) databases) and non-
mandatory descriptors as outlined in MIMS.
**_Experiment and Run_ ** : Variable parameters of the library preparation and
instrumentation are captured in the metadata objects Experiment and Run. Each
Experiment should refer to Study and Sample objects, to provide context for
the sequencing, and is referred to from the Run objects, which point to the
primary sequencing reads.
Examples of the Experiment and Run XML are available, e.g., from the European
Nucleotide Archive (ENA):
[http://www.ebi.ac.uk/ena/submit/preparing-xmls#experiment]
[http://www.ebi.ac.uk/ena/submit/preparing-xmls#run]
The primary data (the reads) are stored in files of various formats, which can
be standard (Binary Alignment/Map [BAM], Compression Reduced Alignment/Map
[CRAM], or FASTQ) or platform-specific, as with standard flowgram format
(SFF), PacBio, Oxford Nanopore, or Complete Genomics formats. Information on
the read data format must be indicated in the description of the sequencing.
The minimum information encapsulated in read data files includes base calls
with quality scores. Quality requirements on read data files are file-format
specific and are summarized, e.g., in the ENA data submission documentation. A
freely available diagnostic tool for the validation of CRAM and BAM files is
Picard ValidateSamFile. Validation of FASTQ files is less straightforward,
since there is no single FASTQ specification. Recommended usage of FASTQ can
be found, e.g., in the ENA guidelines. An open resource for managing next
generation sequencing datasets is NGSUtils, which also contains tools for
operations on FASTQ files. As sequencing technologies change over time, the
formats and associated validation tools may well change, so a comprehensive
list of formats and tools is likely to become outdated. The key point is to
adopt a widely used format and to check for file format and integrity (e.g.,
checksums).
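As a minimal sketch of the integrity check recommended above, the following computes a streaming MD5 checksum for a read data file; the file path and the recorded checksum are placeholders:

```python
# Minimal sketch of a file integrity check: record and verify checksums for
# read data files. The file path below is a placeholder.
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 checksum of a file in streaming fashion."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: verify a FASTQ file against a previously recorded checksum.
# assert md5sum("101-B0_R1.fastq.gz") == recorded_checksum
```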
**_Analysis:_ ** There are currently no standards for reporting how
metagenomics datasets have been analysed. While systematic analysis workflows,
such as those offered by EMG, Integrated Microbial Genomes with Microbiomes,
META-pipe, and MG-RAST, provide a standard that is documented (albeit in
different ways), many published datasets are analysed by in-house bespoke
pipelines. A schematic overview of a best practice for analysis metadata
collection is shown in Figure 3A (adapted from
_https://academic.oup.com/gigascience/article/6/8/1/3869082_ ). An
overarching set of metadata relating to the analysis will encapsulate generic
information such as the analysis centre, the name of the bioinformatician, the
analysis objectives, the name of the overall analysis (if appropriate), and
the date on which the analysis was performed. It will also contain appropriate
pointers to the run data, run sequence metadata, and associated sample data.
Underneath the overarching analysis metadata is a collection of analysis
components that describe each stage of the analysis (Figure 3B). Each
component can be divided into three sections: input(s), analysis algorithm,
and output(s).
Figure 3: Schematic overview of a best practice for analysis metadata
collection
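To make the component structure concrete, here is a small illustrative sketch (field names are our assumptions) of one analysis component recording its inputs, the algorithm with its parameters, and its outputs:

```python
# Sketch of one analysis component as described above: each stage records
# its inputs, the algorithm with its parameters, and its outputs. Field
# names are assumptions, not an official schema.
from dataclasses import dataclass, field

@dataclass
class AnalysisComponent:
    inputs: list
    algorithm: str
    parameters: dict = field(default_factory=dict)
    outputs: list = field(default_factory=list)

qc_step = AnalysisComponent(
    inputs=["101-B0_R1.fastq.gz", "101-B0_R2.fastq.gz"],
    algorithm="read quality trimming",
    parameters={"min_quality": 20},
    outputs=["101-B0_R1.trimmed.fastq.gz", "101-B0_R2.trimmed.fastq.gz"],
)
```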
Archived components will be tailored to the analysis but will at least include
operational taxonomic unit counts and assignments, functional assignment
counts, and read/sequence positional information for the aforementioned
assignments. Such data files are already made available from MG-RAST and EMG,
and those from other sources are accepted for archiving within ENA. If
metagenomic assemblies have been performed, then these should have an
appropriate structure of contigs, scaffolds, or chromosomes with an
appropriate format as detailed, e.g., in the ENA data submission
documentation. Due to the overheads of producing an assembly, these should be
archived, ideally with an INSDC database.
# ⮚ Section 5: Short-term Storage and security
1\. **Clinical-epidemiological data** : All clinical protocols, procedures and
questionnaires will be described in a clear and detailed manner that will
allow reproducibility, follow-up and sharing between clinical centres/members,
consortium members or peer reviewers upon request. The protocol files will
reside within the local hospital intranet's secure servers. Questionnaires and
procedures derived during the study from participants, including unprocessed
data, will be kept as hard copies in a specific locked drawer in the PI's
office and will be saved in a specific folder under the name "Cure_WP1" in PDF
format. The folder will contain sub-folders named WP1a/WP1b/WP1c, with further
sub-folders per questionnaire/procedure in the respective centres (Athens:
Research Laboratory of the Allergy Unit, 2nd Pediatric Clinic, and Sotiria
Hospital, Athens, Greece; and Lodz). Clinical data will be stored locally at
the donor inclusion centres for the duration of the project. Only study ID
numbers will be used to identify participants in the data files and
samples/procedures, according to protocol labelling. Consent forms will be
saved in hard copies and in e-PDF format in the "Cure_WP1" folder, as
described above.
The electronic unprocessed formats will be saved and monitored by the
respective research team (on the personal computer of the PI or of the medical
professional responsible for the study/data manager) in the respective centres,
or according to local custom. Datasets will be named with date and version.
Raw data as produced by instruments (spirometry, impulse oscillometry, skin
prick tests, SCORAD, CAP results) will be saved in hard copies and in locked
electronic PDF format, as described above.
Data files will be named according to CURE, the type of data and the date of
creation/revision. Data variables in individual files will be given an
abbreviated variable name, together with a longer, explanatory variable label.
Raw data from spirometry will be processed with the GLI 2012 predictive values
Excel form before being uploaded to the e-database.
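A minimal sketch of this naming rule follows; the separator and date format are assumptions for illustration:

```python
# Hypothetical illustration of the file naming rule above (project, data
# type, creation/revision date). The separator and ISO date format are
# assumptions, not a convention stated in the plan.
from datetime import date

def data_file_name(data_type: str, revision: date, ext: str = "pdf") -> str:
    return f"CURE_{data_type}_{revision.isoformat()}.{ext}"

print(data_file_name("spirometry", date(2019, 3, 1)))
# -> CURE_spirometry_2019-03-01.pdf
```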
## _The e-database_
Processed raw data or metadata will be uploaded and stored in the e-database
in the REDCap system. A designated member of the research team will be
responsible for entering the information into the electronic database.
The database will be created with the use of the REDCap programme. REDCap
(Research Electronic Data Capture) is a secure, web-based application designed
to support data capture for research studies. A primary objective of REDCap
is to provide high-quality data by keeping the number of errors and the amount
of missing data as low as possible. The administrator is responsible for
controlling and allocating database access.
Access to the database requires user authentication with a password.
Recruiters create a new record using a unique code for each patient, or edit
an existing one. The match between the code and the patient is known only to
the recruiter of each country, ensuring the patient's anonymity. The database
will be updated once a month in order to reduce the time for the circulation
of essential information among centres.
All records are stored on a secure encrypted server. At the end of the study,
the administrator exports data to common data analysis packages (Microsoft
Excel, PDF, SAS or SPSS) for analysis. The language of the database is
English. The database will be located at: _www.allergy1.gr/cure_ . The server
backs up automatically so that data cannot be lost, and the functionality of
the database allows data to be restored up to a month back. Only authorized
people will have access to the platform. Access will be given by the
administrator, who provides the corresponding person with a username via email
and an auto-generated password. In case a recruiter forgets their password,
there is a "forgot the password" option, after which they automatically
receive an email to set a new password. The password is known only to the
recruiter; no one else can log in to the database. Afterwards, for the needs
of the analysis, data will be exported to a data analysis package by the
administrator.
## Quality and security control of the e-database
The database will be locked after the completion of data entry. Quality
control checks will be applied to the inserted data in order to ensure data
quality. In order to avoid the input of incorrect data, the fields in the
e-database will accept specific character types, numerical or text. For each
field, the kind of value it will accept will be set by default. The option of
a drop-down list for multiple-choice options will be available in the
corresponding fields.
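These field-level checks are configured within REDCap itself; purely as an illustration, the sketch below mirrors the same idea (typed fields and drop-down choices) in Python, with invented field names:

```python
# Illustrative only (not REDCap code): field-level validation mirroring the
# typed fields and drop-down choices described above. Field names invented.
FIELD_RULES = {
    "act_score": {"type": int, "choices": None},
    "visit": {"type": str, "choices": {"B0", "F1", "F2", "F3"}},
}

def validate(field: str, value):
    rule = FIELD_RULES[field]
    if not isinstance(value, rule["type"]):
        raise TypeError(f"{field}: expected {rule['type'].__name__}")
    if rule["choices"] and value not in rule["choices"]:
        raise ValueError(f"{field}: {value!r} not an allowed choice")
    return value

validate("visit", "B0")  # passes; validate("visit", "X9") would raise
```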
The donor's medical file will be linked to the CURE donor code within the
database through a separate key to protect anonymity. Only the designated
medical professional and the PI of the recruitment centre will have the key to
link the donor's name to the code, in order to follow up and send reminders
for visits. The CURE clinical database will be hosted locally at the NKUA and
will be accessible to members of the consortium through the private area at:
_https://www.cureasthma.eu/about-us_ (Figure 4).
Figure 4: Schematic representation of CURE clinical data storage and sharing
protocol
**Metagenomic data** : Metagenomic data will be generated at UMAN. The
University of Manchester has an existing comprehensive data policy which will
be followed
( _http://documents.manchester.ac.uk/DocuInfo.aspx?DocID=33802_ ). This
policy applies to any data ("Relevant Data") that is created or acquired in
research (funded or unfunded) involving staff and/or students of the
University ("Relevant Research").
UMAN IT Services provides centrally hosted and administered data storage for
research staff/students — the Research Data Storage Service. Storage will be
available to each academic-led research project at no charge (up to 8 TB);
further storage will be charged for. (This storage is commonly referred to as
Isilon.) All data will be stored on the Isilon servers and backed up to
encrypted hard disks. Files stored within Isilon are accessible from desktop
and laptop machines on campus and may also be accessed from on-campus research
computing systems. For off-campus access, VPN will be used. Files stored on
this service are secure; for example, files corrupted or accidentally deleted
can be recovered for up to 35 days.
No personal data will be available, since all files will be given local codes
per experiment and run, as described above.
2\. **Human in vitro data** : Human in vitro data will be generated at SIAF
and BRFAA. Data storage and security will comply with local standards. Data
stored on personal computers will be backed up in real time with CrashPlan
( _www.crashplan.com_ ). NGS data will be submitted to GEO
(https://www.ncbi.nlm.nih.gov/geo/) and assigned a unique accession number.
The unique accession number is commonly referenced when the data are used in a
publication. Programming language code will be made available as a GitHub
repository (https://github.com/) with a permanent URL. Usage by other
researchers can be monitored by download count. Other data will be deposited
into DRYAD (http://datadryad.org/) and assigned a DOI. DRYAD provides usage
statistics for the data. All digital repositories chosen will conform to the
FAIR Data Principles. We will choose digital repositories maintained by
non-profit organisations. Large raw and intermediate data produced with High
Performance Computing will be stored on the Science Cloud of the University of
Zürich and on the storage server of FGCZ.

**Microbiological data:** Microbiological data will be generated at ELIAVA
and ELIBIO. Detailed strain characteristics, including data on isolation date,
source, bacteriology and genetic identification, serology, etc., will be
stored in Excel files. The detailed characteristics of the isolated phages,
including TEM images, physiological parameters and genetic data, will be
stored in text, JPG and Excel files.
# ⮚ Section 6: Deposit, Long-Term Preservation and accessibility
Results from the project will be disseminated by publication in high-quality,
internationally recognised peer-reviewed journals. We will make use of Open
Access (OA) options to ensure that our results are freely available to the
entire scientific community and the public. The raw data files (FASTQ) will be
deposited as freely accessible in the European Nucleotide Archive and the
NCBI, and will be available from the time of publication. The wider scientific
community will be able to access the data and use them for data mining and
discovery. The metadata will also be available in metagenomic repositories
(EBI Metagenomics). GitHub will be used to archive code, and DRYAD to archive
data and code related to a specific publication.
| **Dataset ID** | **NKUA_C_S** |
| --- | --- |
| Partner name | National and Kapodistrian University of Athens |
| Purpose of data | Clinical database A - cross sectional (Task 1.1, NKUA) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 60 |
| Number of different variables | 300 |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **NKUA_L** |
| --- | --- |
| Partner name | National and Kapodistrian University of Athens |
| Purpose of data | Clinical database B - longitudinal (Task 1.2, NKUA) |
| Longitudinal data | Yes |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 78 |
| Number of different variables | 300 |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **NKUA_SC** |
| --- | --- |
| Partner name | National and Kapodistrian University of Athens |
| Purpose of data | Clinical database C – cross-sectional (Task 1.3, NKUA) |
| Cross-sectional data | Yes |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 30 |
| Number of different variables | 300 |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **BRFAA_Innate** |
| --- | --- |
| Partner name | Biomedical Research Foundation, Academy of Athens |
| Purpose of data | Immune response database - Innate (Task 2.1, BRFAA) |
| Longitudinal data | |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **SIAF_B_Cells** |
| --- | --- |
| Partner name | Swiss Institute of Allergy and Asthma Research |
| Purpose of data | Immune response database - B-cells (Task 2.1, SIAF) |
| Longitudinal data | |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **SIAF_Epithelial** |
| --- | --- |
| Partner name | Swiss Institute of Allergy and Asthma Research |
| Purpose of data | Primary epithelial cell response database (Task 2.2, SIAF) |
| Longitudinal data | |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **BRFAA_Adaptive** |
| --- | --- |
| Partner name | Biomedical Research Foundation, Academy of Athens |
| Purpose of data | In-vitro database - PBMC, immune (Task 2.1, BRFAA) |
| Longitudinal data | |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **BRFAA_Epithelial** |
| --- | --- |
| Partner name | Biomedical Research Foundation, Academy of Athens |
| Purpose of data | In-vitro database - Epithelial cell lines (Task 2.2, SIAF) |
| Longitudinal data | |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **ELV_INST** |
| --- | --- |
| Partner name | The Eliava Institute of Bacteriophage, Microbiology and Virology |
| Purpose of data | Phage database (Task 4.1, ELIAVA) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | Not applicable |
| Number of different variables | |
| Time period the data cover | 4 years |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **ELV_INST** |
| --- | --- |
| Partner name | The Eliava Institute of Bacteriophage, Microbiology and Virology |
| Purpose of data | Phage-bacteria interaction (Task 4.2, 4.3 ELIAVA) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | Not applicable |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **UMAN_base** |
| --- | --- |
| Partner name | The University of Manchester |
| Purpose of data | Metagenomics database (Task 3.1, UMAN) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 70 |
| Number of different variables | >300 |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **UMAN_Meta** |
| --- | --- |
| Partner name | The University of Manchester |
| Purpose of data | Metagenomics metadata database (Task 3.2, UMAN) |
| Longitudinal data | Yes |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 150 |
| Number of different variables | >300 per visit (max of 5 visits per person) |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **Merged_Host_response** |
| --- | --- |
| Partner name | NKUA, BRFAA, SIAF |
| Purpose of data | Host response DB 1+3+4+5 (NKUA, BRFAA, SIAF) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **Merged_Clinical_Metagenomics** |
| --- | --- |
| Partner name | NKUA, UMAN |
| Purpose of data | Clinical metagenomics correlation DB 1+2+10 (NKUA, UMAN) |
| Longitudinal data | Yes |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 250 |
| Number of different variables | >300 per unique patient per visit |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |

| **Dataset ID** | **Merged_Microbial_Immune** |
| --- | --- |
| Partner name | NKUA, UMAN |
| Purpose of data | Clinical-microbial-immune interactions DB 12a+11 (NKUA, BRFAA, SIAF, UMAN) |
| Longitudinal data | No |
| Will these data be integrated with data from other centres? | Yes |
| Number of unique patients that the database will involve | 200 |
| Number of different variables | |
| How frequently will the database be updated? | Monthly |
| Database in English? | Yes |
| Quality checks on dataset | Yes |
The two main routes to open access are:
Self-archiving / 'green' open access – the author, or a representative,
archives (deposits) the published article or the final peer-reviewed
manuscript in an online repository before, at the same time as, or after
publication. Some publishers request that open access be granted only after an
embargo period has elapsed.
Open access publishing / 'gold' open access - an article is immediately
published in open access mode. In this model, the payment of publication costs
is shifted away from subscribing readers. The most common business model is
based on one-off payments by authors. These costs, often referred to as
Article Processing Charges (APCs) are usually borne by the researcher's
university or research institute or the agency funding the research.
Final CURE data will be stored for 3 years after the end of the project, to
allow for maximum publication of findings before release, unless further
processing is needed for intellectual property protection.
The project's steering committee will be responsible for deciding the final
amount and type of data and metadata which will be stored after the end of
CURE. The committee will also decide which datasets will be publicly available
through CURE's website. Decisions on the above will be taken based on:

* Potential publications and presentations in scientific meetings and conferences.
* Potential PhD projects to be concluded.
* Data that can be used as preliminary data for new EU-funded or international grant applications, awards and fellowships.
* Potential patents that might arise from CURE.
* Completeness of datasets. Priority will be given to DOIs with complete datasets across the different CURE work packages, especially WP1, WP2 and WP3.
* Availability of storage space.

# Ethics and Intellectual Property
## ETHICS
All data-related processes, from collection and sharing to data research and
sustainability, will be in compliance with the legal requirements established
by the GDPR (General Data Protection Regulation).
CURE is a biomedical research project engaged in studying asthma using
clinical data of patients. Data relating to health constitutes a sensitive
category of data. The processing of health data for research is subject to the
rules of data protection and requires legitimisation. Such legitimisation may
be given by law, or by the patient by means of informed consent. Also,
since the project is engaged in biomedical research, the undertaking of such
research requires approval by the Ethics Committee after assessment "of its
scientific merit, including assessment of the importance of the aim of
research, and multidisciplinary review of its ethical acceptability."
(Additional Protocol to the Convention on Human Rights and Biomedicine,
concerning Biomedical Research, Strasbourg, 25.I.2005)
In CURE, the use of sensitive data may be considered to be done in an
ethically and legally compliant way, since the partners that contribute data
to the project have the informed consent of the patients and approvals by the
Ethics Committee.
## INTELLECTUAL PROPERTY
The management of Intellectual Property (IP) in CURE is governed by the Grant
Agreement (GA) and Consortium Agreement (CA). In particular, Article 23a of
the GA makes it an obligation of the project partners to take measures to
implement the Commission Recommendation on the management of intellectual
property in knowledge transfer activities. One of the principles regarding
collaborative research is to develop an IP strategy and to manage IP related
issues as early as possible. IP-related issues include _“allocation of the
ownership of intellectual property which is generated in the framework of the
project (hereinafter “foreground”), identification of the intellectual
property which is possessed by the parties before starting the project
(hereinafter “background”) and which is necessary for project execution or
exploitation purposes, access rights to foreground and background for these
purposes, and the sharing of revenues.”_
According to this principle, and as also set forth by Article 24.1 of the GA,
the CURE partners identified and agreed on the background which they will
contribute to the project, as well as on the terms on which such background
may be contributed and used in the project, in the agreement on background
(Attachment 1 to the CA).
The ownership of results is in general also regulated by the project GA/CA.
The basic principle that governs ownership of research results, as laid down
by Article 8.1 CA, is that the results are owned by the party that generates
them. In cases when work producing the results has been performed by more
than one party and it is not possible to separate and/or define the
contributions of each, the contributing parties shall have joint ownership of
such results, and each shall enjoy the rights in relation to the exploitation
of a joint work, as laid down in Article 8.2 CA.
In addition, one of the guiding principles in relation to managing ownership
in research results is that “ _the ownership of the foreground should stay
with the party that has generated it, but can be allocated to the different
parties on the basis of a contractual agreement concluded in advance,
adequately reflecting the parties' respective interests, tasks and financial
or other contributions to the project_ .” In essence, these rules advocate and
delegate the management of IP and ownership to contractual arrangements. If
needed, the CURE consortium will set up contractual arrangements aimed to
address the IP and ownership related aspects of the project.
## Resourcing
Original databases will be kept locally for the duration of the project and
beyond, under the responsibility of the corresponding partner. Merged
databases will be kept at the University of Athens until uploaded onto free
public servers, as described above.
Table of Contents
1\. DATA MANAGEMENT LIFECYCLE
2\. DATA MANAGEMENT PLAN
2.1 DATASET REFERENCE AND NAME
2.2 DATASET DESCRIPTION
2.3 STANDARDS AND METADATA
2.4 DATA SHARING
2.5 ARCHIVING AND PRESERVATION (INCLUDING STORAGE AND BACKUP)
# Data Management Lifecycle
HOBBIT will continuously collect various datasets (i.e., not limited to
specific domains) as the basis for benchmarks. These data will initially be
provided by the project's industrial partners, and later on by members of the
HOBBIT community.
To make the data **discoverable** and **accessible** , besides providing the
generated benchmarks as **dump files** that can be loaded from the project
repository, HOBBIT will also provide a **SPARQL endpoint** that will serve all
the benchmark datasets. The HOBBIT SPARQL endpoint will enable platform users
to run their own queries against one or more benchmarks to obtain tailored
benchmarks that fit each user's needs exactly.
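As an illustration of such tailored querying, the sketch below uses the SPARQLWrapper library against a placeholder endpoint; the URL and the query are assumptions for demonstration, not the project's published endpoint:

```python
# Illustrative only: querying a HOBBIT-style SPARQL endpoint for a tailored
# subset of a benchmark dataset. The endpoint URL is a placeholder.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/hobbit/sparql")  # placeholder URL
sparql.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 100
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```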
**Figure 1. Data Management Lifecycle Overview**
To **keep the dataset submission process manageable** , we host an instance of
the _CKAN_ open source data portal software, extended with custom metadata
fields for the HOBBIT project. For the time being, this instance is hosted at
_http://hobbit.iminds.be_ . When the benchmarking platform itself goes
online, the CKAN instance will be moved there, to accommodate more space for
datasets. Users who want to add a dataset of their own first need to request
to be added to an organization on the CKAN instance (see
_http://projecthobbit.eu/contacts/_ ), after which they can add datasets to
this organization.
Datasets will be kept available on the HOBBIT platform for **at least the
lifetime of the project** , unless they are removed by their owners. After the
project, the HOBBIT platform will be maintained by the HOBBIT Association, and
so will the datasets. **Owners may add or remove** a dataset at any time.
**Figure 2. Screenshot of the current CKAN deployment.**
# Data Management Plan
In conformity with the guidelines of the Commission, we will provide the
following information for every dataset submitted to the project. This
information will be obtained either through automatically generating it (e.g.,
for the identifier), or by asking whoever provides the dataset upon
submission.
## Dataset Reference and Name
The datasets submitted will be identified and referenced using a URL. This
URL can then be used to access the dataset (either through a dump file or the
SPARQL endpoint), and also serves as an identifier to which metadata can be
attached.
## Dataset Description
The submitter will be asked to provide a short textual, human-interpretable
description of the dataset, at least in English, and optionally in other
languages as well. Additionally, a machine-interpretable description will also
be provided (see 2.3 Standards and Metadata).
## Standards and Metadata
Since we are dealing with Linked Data sets, it makes sense to adhere to a
Semantic Web context for the description of the datasets as well. Therefore,
we will use W3C recommended vocabularies such as _DCAT_ to provide metadata
about each dataset. The metadata that is currently associated with the
datasets includes:
* Title
* URL
* Description
* External Description
* Tags
* License
* Organization
* Visibility
* Source
* Version
* Contact
* Contact Email
* Applicable Benchmark
Currently, this metadata is stored in the CKAN instance’s database. However,
the plan is to convert this information to DCAT and make it available for
querying once the benchmarking platform is running.
**Figure 3. DCAT ontology overview (source: _https://www.w3.org/TR/vocab-
dcat/_ ) **
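A hedged sketch of such a conversion with rdflib is shown below; the dataset URI and field values are invented for illustration:

```python
# Sketch (illustrative values) of converting CKAN-style metadata fields to
# DCAT with rdflib, as planned above. The dataset URI is a placeholder.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("http://example.org/hobbit/dataset/sample")  # placeholder URI

g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Sample benchmark dataset")))
g.add((ds, DCTERMS.description, Literal("Synthetic data for benchmarking.")))
g.add((ds, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/4.0/")))
g.add((ds, DCAT.keyword, Literal("benchmark")))
g.add((ds, DCAT.distribution, URIRef("http://example.org/hobbit/dataset/sample.nt")))

print(g.serialize(format="turtle"))
```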
## Data Sharing
Industrial companies are normally unwilling to make their internal data
available for competitions, because this could significantly reduce their
competitiveness. However, HOBBIT aims to pursue a policy of making data
**open, as much as possible** . Therefore, a number of mechanisms are put in
place.
As per the original proposal, HOBBIT will deploy a standard data management
plan that includes (1) employing **mimicking algorithms** that will compute
and reproduce variables that characterize the structure of company data, and
(2) feeding these characteristics into **generators that will be able to
generate data similar to real company data** without having to make the real
company data available to the public. The mimicking algorithms will be
implemented in such a way that they can be used within companies and simply
return parameters that can be used to feed the generators. This preserves
Intellectual Property Rights (IPR) and will circumvent the hurdle of making
real industrial data public, by allowing deterministic synthetic data
generators to be configured so as to compute data streams that display the
same variables as industry data while being fully open and available for
evaluation without restrictions.
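Conceptually, the mimicking approach can be sketched as follows; the Gaussian summary model is a simplifying assumption for illustration (real mimicking algorithms capture far richer structure), and only summary parameters leave the company:

```python
# Conceptual sketch of the mimicking approach described above: summary
# parameters are extracted inside the company, and a deterministic generator
# reproduces data with the same characteristics. The Gaussian model is an
# assumption for illustration only.
import numpy as np

def extract_parameters(private_data: np.ndarray) -> dict:
    """Run inside the company: return only non-sensitive summary statistics."""
    return {"mean": private_data.mean(axis=0), "cov": np.cov(private_data.T)}

def generate_synthetic(params: dict, n: int, seed: int = 42) -> np.ndarray:
    """Run anywhere: deterministically generate data matching the parameters."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(params["mean"], params["cov"], size=n)

private = np.random.rand(1000, 3)          # stand-in for company data
synthetic = generate_synthetic(extract_parameters(private), n=1000)
```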
Since we will provide a mimicked version of the original dataset in our
benchmarks, **open access will be the default behaviour** . However, on a
case-by-case basis, datasets might be **protected** (i.e., visible only to
specific user groups) on request of the data owner, and in agreement with the
HOBBIT platform administrators.
## Archiving and Preservation (Including Storage and Backup)
HOBBIT will also support the functionality of accessing and querying past
versions of an evolving dataset, where all different benchmark versions will
be publicly available as dump files as well as from the project SPARQL
endpoint. The data will be stored on the benchmarking platform server(s), at
least for the duration of the project. After the project, this responsibility
is transferred to the HOBBIT Association, which will be tasked with the
long-term preservation of the datasets.
# 1\. Introduction
## 1.1. About the data management plan
The Data Management Plan (DMP) describes the rules and procedures for
collecting and using data, as well as the rules governing the sharing and
dissemination of information amongst the consortium or to the public. This
document has been developed in accordance with the Informed Consent Procedures
and Templates D1.1, the Informed Consent Form and Information Sheets for
Stakeholder Interviews D7.4, and the H2020 Guidelines on Data Management 1 .
## 1.2. Document organisation
Chapter 2 describes the institutional and national data management policy.
Chapter 3 deals with the specific procedures applicable in INSPEX. Data
management related to the _INSPEX Who's Who_ , the _INSPEX User's Needs
Survey_ and the _Use of Real Pictures for the Personas_ (see Deliverable D1.1)
is reported in chapter 3, together with data management related to
_Stakeholder Interviews_ (see Deliverable D7.4). Data management related to
_validation in real-life conditions_ is briefly described in section 3.5; this
will be extended and fully described in Deliverable D1.8 – Requirements on
Data Protection and Ethics. Management of other data generated by the project
is summarized in section 3.6. Data management related to _Communication and
Dissemination_ is summarised in section 3.7, in accordance with Deliverable
D7.3 – Communication and Dissemination workplan. Note that rules regarding
dissemination of own results and another party's unpublished Results and
Background are defined in the Consortium Agreement. Finally, section 3.8
describes the _INSPEX Website Privacy Policy and Other Legal Mentions_ as
reported in D1.1.
# 2\. Institutional/National Data Management Policy
Data collection and processing will be carried out in accordance with the
ethical rules and standards of HORIZON 2020, and will comply with both the
ethical principles and relevant national, EU and international regulations
including the European Convention on Human Rights, namely:
* The European Convention on Human Rights (and especially the requirements from the case-law of the European Court of Human Rights on article 8);
* The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Council of Europe, CETS No. 108, 1981);
* The Charter of Fundamental Rights of the European Union (especially articles 7 & 8);
* Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Data Protection Directive) to be replaced by the _General Data Protection Regulation_ (GDPR) as from May 25 2018;
* Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications).
INSPEX will comply in full with specific national legislation for countries
involved in the validation of prototypes developed in the course of the
project as well as any organisational procedures that are in place in premises
and places where the prototype validation will be conducted.
**Note that Deliverable 1.8 – Requirements on Data Protection and Ethics,
delivered at M24, will provide the legal requirements regarding data
protection. These requirements will be updated and disseminated e.g. through
the work of the Ethical & Legal Advisory Board. **
# 3\. Specific procedures
_This section summarises the main procedures already defined in the INSPEX
project to collect data, especially in D1.1 and D7.4. It also specifies how
data will be managed in the course of the project._
## 3.1. INSPEX Who’s Who Consent Form and Procedure
The INSPEX Who’s Who puts a name to the face of the participants in the INSPEX
project and provides their organisation and location coordinates. _Deliverable
1.1 Informed consent form and procedures_ describes the rules for obtaining
the participants’ consent prior to the constitution of the Who’s Who in order
to comply with their rights to their image and to comply with data protection
rules including data subjects’ rights.
The printed consent form must be signed by the concerned person before the
taking of the picture. The signed forms are then securely stored by CEA. They
can be kept in an electronic form if compliant with specific national and
European rules regarding electronic archives.
* _The pictures and scanned consent forms will be stored in CEA, on a dedicated server, as long as the project is active._
* _They are used internally._
* _At the end of the project (i.e. after the final review), they will be erased._
## 3.2. INSPEX User’s Needs Survey Notice of Information and Procedures
The determination of the user’s needs requires the collection of information
from potential users of the INSPEX system (VIB community members).
_Deliverable 1.1 Informed consent form and procedures_ provides the notice of
information to be given when doing so and includes the outline of the survey
and the possible procedures to collect this information. In order to consider
all circumstances, there is a full and a shorter version of the notice of
information.
The questions to be asked have been carefully reviewed with the cooperation of
the Legal and Ethical Advisory Board and of all the partners involved in the
interviews. A strict procedure has been defined to avoid collecting personal
data.
* _The surveys collected are scanned and sent to GoSense. Any email containing the scanned surveys is erased. The surveys are stored in GoSense, on a dedicated server, as long as the project is active._
* _The surveys are used for analysis of user-needs, as reported in D1.3._
* _At the end of the project, they will be erased._
## 3.3. Notice of Information and Consent Form for Using Real Pictures for
the Personas
“Personas” are used in D1.3 – Use cases and Applications, preliminary version
(VIB use-case) – due at M6 (June 2017) in order to “personify” users’ needs. A
Persona represents a group of potential end-users. The use of real persons’
pictures to illustrate the Personas makes them more lifelike, which helps to
get better results when defining the users’ requirements.
Deliverable D1.1 – Informed consent form and procedures – defines the rules
for using such images, such as the necessity to obtain the consent of the
persons who willingly give their image to the “Personas”. Deliverable D1.1
also provides the notice of information to give to these persons and the
consent form for using their pictures. It fixes the procedure for collecting
these consent forms.
* _The consent forms and pictures will be stored in CEA, on a dedicated server, as long as the project is active._
* _The interviews will be used for analysis of user-needs, as reported in D1.3._
* _At the end of the project, they will be erased._
## 3.4. INSPEX Consent Form and Procedure for Interviews of Stakeholders
In order to explore the possible application domains and identify other
stakeholders that might be interested in INSPEX outcomes at the sub-modules,
modules, devices and system levels, interviews of key potential stakeholders
will be conducted. Deliverable 7.4 – Informed consent form and Information
sheet for stakeholder – provides the information that must be given to
potential participants so that they can decide whether or not they take part
in the INSPEX market exploration study. It also describes the procedure to get
the consent and to collect information from the participants.
The questions to be asked have been carefully reviewed with the cooperation of
the Legal and Ethical Advisory Board and of all the involved partners. A very
strict procedure has been put into place to avoid collecting personal data.
* _The consent forms and interviews will be scanned and sent to CEA. Any email containing the scanned interviews and consent forms will be erased. These documents will be stored in CEA, on a dedicated server, as long as the project is active._
* _The interviews will be used in the market exploration analysis, as reported in D7.7 and D7.9._
* _At the end of the project, they will be erased._
## 3.5. Notice of Information and Consent Form for validation in real-life
conditions
The validation of the INSPEX system requires real-life experiments with
potential end-users. _Information sheet, informed consent form and procedures_
will be defined in D6.4, while the different tests carried out will be defined
in D6.5 – Finite Prototype Validation Plan. All the procedures and documents
will be defined with the support of the Legal and Ethical Advisory Board, in
particular to deal with personal data and anonymization.
* _Data collected and consent forms are stored in CEA, on a dedicated server, as long as the project is active._
* _Data will be used for validation of user-needs, and the analysis will be reported in D6.7 – Final smart integrated prototype validation results._
* _At the end of the project, they will be erased._
_Note that the procedures regarding management of data generated by validation
in real-life conditions will be fully described in D1.8 - Requirements on Data
Protection and Ethics._
## 3.6. Other Data generated by the project
In the course of the INSPEX project, various experimental measurements will be
conducted by the partners, especially regarding submodules characterisation
and/or verification. Data generated by such tests are the sole property of the
partner identified as the “submodule owner”. Even if data are shared among the
consortium for research activities, they belong to the submodule owner and
cannot be disclosed or disseminated. Dissemination of these results is
strictly forbidden, and the rules surrounding _Dissemination of another Party’s
unpublished Results or Background_ are described in the Consortium Agreement,
see section 8.3.2.
In the course of INSPEX, software and firmware will be developed. Being a
result per se, access to Software is also defined in the Consortium Agreement,
see section 9, in particular section 9.8.3.
## 3.7. Communication and Dissemination Workplan
Deliverable 7.3 contains the INSPEX _Communication and Dissemination Workplan_
which describes the ways to achieve two key activities in the project:
_communicate_ about the project and _disseminate_ the project results.
More specifically, the _Communication and Dissemination Workplan_ describes
the communication and dissemination objectives pursued by the project, the
communication and dissemination methodology and the contribution of each
partner to the realization of these objectives and their timeline, without
prejudice to the possibility of participating in interesting but unforeseen
events. The Workplan also contains monitoring and analysis mechanisms and will
be updated each year.
_The Consortium Agreement (sections 8.3.1 – Dissemination of own Results, and
8.3.2 – Dissemination of another Party’s unpublished Results or Background)
defines how the dissemination of results is handled within the consortium._
## 3.8. INSPEX Website Privacy Policy and Other Legal Mentions
_Deliverable 1.1 Informed consent form and procedures_ provides the INSPEX
website privacy policy and the other legal mentions, which should also appear
on the website regarding information society services, intellectual property
rights and liability.
# 4. Conclusion: data management plan for the INSPEX project consortium
This deliverable provides a specification of the data management plan employed
by the INSPEX project consortium. The objective was to define a clear
procedure and policy that ensures all data collected complies with current
data management policies at an institutional, national and European level. In
addition, the plan will be used to ensure that consistent and reliable data
management processes support the demonstration, validation and successful
delivery of the project.
As recommended in the H2020 Data Management Plan guidelines, this document
will evolve during the lifespan of the project, based on new requirements or
constraints that may arise.
Other documents, especially D1.8 – Requirements on Data Protection and Ethics
– and D6.4 – Information sheet, informed consent form and procedures – will
complement the present deliverable regarding testing in real-life conditions.
1. **Introduction**
This Data Management Plan (DMP) describes the data management life cycle for
all datasets that will be collected, processed or generated by the ExaNoDe
project. This document outlines how research data will be handled during the
project and after the project completion. The DMP is not a fixed document, but
it evolves during the lifespan of the ExaNoDe project. This is the first
version of the DMP which has been aligned with the amendment reference No
AMD-671578-13. The DMP will be updated according to project needs.
Several categories of datasets are identified within the ExaNoDe project:
* **Reporting material** such as Consortium Agreement, Grant Agreement, deliverables or any other kind of material exchanged with the European Commission.
* **Scientific publications, presentations and dissemination material** that describe the research work within ExaNoDe.
* **Technical datasets** including the technical work such as source code of tools, libraries, RTL codes, netlist, design scripts, etc.
* **Evaluation datasets** that accompany the scientific publications and/or deliverables and usually provide more information than the one included in the publications.
This DMP addresses the points below on a dataset by dataset basis and reflects
the current status of reflection within the consortium about the data that
will be produced:
* **Data set reference and name:** identifier for the data set to be produced.
* **Data set description:** description of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful.
* **Standards and metadata:** reference to existing suitable standards of the discipline.
* **Data sharing:** description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.).
* **Archiving and preservation (including storage and backup):** description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved.
The following sections provide the current status of reflection within the
consortium about the data that will be produced for the various identified
categories.
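To make this structure concrete, the five aspects above can also be captured as a simple machine-readable record. The following minimal Python sketch is purely illustrative (it is not part of the ExaNoDe tooling), and every field value is a hypothetical placeholder rather than an actual ExaNoDe dataset:

```python
# A minimal sketch, assuming each dataset is described by the five DMP
# aspects listed above; all values are hypothetical placeholders.
dataset_entry = {
    "reference_and_name": "ExaNoDe_example_dataset",
    "description": "what is generated or collected, its origin, "
                   "nature, scale and to whom it could be useful",
    "standards_and_metadata": "e.g. pdf file, plain text files (.h, .c)",
    "data_sharing": "e.g. Codendi repository for consortium members",
    "archiving_and_preservation": "e.g. available at least two years "
                                  "after the end of the project",
}

# Such records could be collated into the per-category tables below.
for aspect, value in dataset_entry.items():
    print(f"{aspect}: {value}")
```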
2. **Reporting material**
Several non-technical documents for reporting the project progress with the
European Commission will be produced within ExaNoDe, such as the Consortium
Agreement, Grant Agreement, and meeting minutes.
A list of these datasets is provided in Table 1.
<table>
<tr>
<th>
**Type**
</th>
<th>
**Reference and name**
</th>
<th>
**Description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Document**
</td>
<td>
Grant Agreement number: 671578
</td>
<td>
ExaNoDe Grant Agreement
electronically signed by all consortium members
</td>
<td>
pdf file
</td>
<td>
Codendi repository (1) for ExaNoDe consortium members, EC
repository (2) for
Commission services.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Document**
</td>
<td>
Consortium Agreement ref. 22773
</td>
<td>
ExaNoDe
Consortium Agreement
signed by all consortium legal entities
</td>
<td>
Paper copy
</td>
<td>
All partners have the same hard copy of the CA. An electronic copy is shared
on Codendi repository (1) .
</td>
<td>
Hard copy must be maintained and preserved by each partner even after the end
of the project (at least 6 years).
</td> </tr>
<tr>
<td>
**Document**
</td>
<td>
Amendment Reference No AMD-671578-13
</td>
<td>
ExaNoDe amendment
electronically signed by the Commission and the Coordinator
</td>
<td>
pdf file
</td>
<td>
Data sharing on: Codendi
repository (1) for
ExaNoDe consortium members, EC
repository (2) for
Commission services.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Guide**
</td>
<td>
Project manual
</td>
<td>
ExaNoDe
project manual
</td>
<td>
Microsoft Office documents
</td>
<td>
Data sharing on Codendi repository (1) for
ExaNoDe consortium members.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Minutes**
</td>
<td>
Meeting minutes
</td>
<td>
ExaNoDe
meeting minutes
</td>
<td>
pdf files
</td>
<td>
Data sharing on Codendi repository (1) for
ExaNoDe consortium members.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Document**
</td>
<td>
Periodic
Progress
Report
</td>
<td>
ExaNoDe
Periodic
Progress Reports (for check meetings with the
Commission and formal reviews)
</td>
<td>
pdf files
</td>
<td>
Data sharing on Codendi repository (1) for
ExaNoDe consortium members.
An electronic copy is provided to the reviewers and the Project Officer before
each check meeting or review.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Presentations**
</td>
<td>
Project meeting, check meeting and formal review presentations
</td>
<td>
ExaNoDe
presentations made during the project meetings, the check meetings and the
formal reviews
</td>
<td>
pdf files
</td>
<td>
Data sharing on Codendi repository (1) for
ExaNoDe consortium members.
An electronic copy is provided to the reviewers and the Project Officer before
each check meeting or review.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Document**
</td>
<td>
Memorandum-of-Understanding
</td>
<td>
Memorandum-of-Understanding signed between the coordinators of the projects ExaNoDe, ExaNeSt and EcoScale.
</td>
<td>
pdf file
</td>
<td>
Data sharing on Codendi repository (1) for
ExaNoDe consortium members.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr> </table>
# Table 1: Reporting datasets
**Notes from table:**
1. Codendi is a project management environment ( _https://minalogic.net/account/login.php_ ) used for ExaNoDe, which offers easy, secure and consortium-limited access to ExaNoDe project datasets. Figure 1 shows the document menu which can be used to upload and download reporting materials including technical deliverables.
**Figure 1: Codendi document repository of the ExaNoDe project**
2. Research Participant Portal to manage the ExaNoDe project and communicate with the European Commission throughout the project’s life cycle:
_http://ec.europa.eu/research/participants/portal/desktop/en/home.html_
Apart from the above non-technical reporting datasets, ExaNoDe deliverables are
considered to be scientific reports of the work within the project. Several
deliverables will be publicly available and posted on the ExaNoDe website in
PDF format. Deliverables will be stored in the Codendi repository in Microsoft
Word format. The table below lists the deliverables based on their type and
dissemination level.
<table>
<tr>
<th>
**Type**
</th>
<th>
**Reference and name**
</th>
<th>
**Dissemination level**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Reports**
</td>
<td>
D1.1, D1.3,
D3.4, D3.9, D4.1, D4.2, D4.7, D6.4,
D6.7
</td>
<td>
Confidential
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Others**
</td>
<td>
D3.5, D4.3,
D4.5, D5.1,
D5.3, D5.4
</td>
<td>
Confidential
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**Demonstrators**
</td>
<td>
D4.4, D4.6
</td>
<td>
Confidential
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services.
</td>
<td>
Dataset will be available on Codendi at least two years after the end of the
project.
</td> </tr>
<tr>
<td>
**ORDP: Open Research Data Pilot**
</td>
<td>
D1.4
</td>
<td>
Public
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services, ExaNoDe website (4) for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Reports**
</td>
<td>
D2.1, D2.2,
D2.3, D2.4,
D2.5, D2.6, D2.7, D3.1, D3.2, D3.6,
D3.7, D5.2,
D5.5, D6.2,
D6.5
</td>
<td>
Public
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services, ExaNoDe website (4) for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Demonstrators**
</td>
<td>
D3.3, D3.8
</td>
<td>
Public
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services, ExaNoDe website (4) for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Others**
</td>
<td>
D6.1, D6.3, D6.6
</td>
<td>
Public
</td>
<td>
Codendi repository for ExaNoDe consortium members, EC continuous reporting
repository (3) for Commission services, ExaNoDe website (4) for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr> </table>
# Table 2: Deliverables
**Notes from table:**
(3) Figure 2 shows the continuous reporting menu which is used for reporting
the ExaNoDe documentation with the European Commission.
**Figure 2: Continuous reporting repository of the ExaNoDe project:**
**_http://ec.europa.eu/research/participants/portal/desktop/en/home.html_ **
(4) _http://exanode.eu/_
# 3 Scientific publications, presentations and dissemination materials
There is a need to maximise the impact of European collaborative projects, and
this is one of the primary goals of the European Commission’s funding schemes,
aiming at promoting Europe’s strategic position in target technical fields.
The ExaNoDe consortium considers dissemination activities to be as important
as the technical work carried out within each task, to maximize the impact of
the project and get feedback from outside the project environment to drive the
work performed in a successful manner.
In the ExaNoDe project, the partners plan to give great importance to
dissemination, by presenting posters or publishing scientific papers at
international conferences or in journals, by participating in events such as
workshops organized by the European community, and by providing dissemination
materials via the project website, flyers, and so on. For this purpose, a large
amount of dissemination data will be generated throughout the project.
For more information on the planned dissemination activities, deliverable D6.2
“Dissemination strategy Document” presents the plan for the dissemination of
the ExaNoDe project outcomes, and the project manual [1] sets the basic rules
for publications, presentations and copyright usage. This section of the DMP
describes in detail how dissemination data will be handled within the ExaNoDe
project.
The following table provides the available scientific publications,
presentations and dissemination materials, as at M18 of the ExaNoDe project:
<table>
<tr>
<th>
**Type**
</th>
<th>
**Reference and name**
</th>
<th>
**Description**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Article**
</td>
<td>
ExaNoDe
@ HPC
Wire 2016
</td>
<td>
HPC Wire article on
“EU Projects Unite on Heterogeneous ARM-based Exascale Prototype” published on
Feb 24th 2016.
</td>
<td>
HPC Wire repository
</td>
<td>
</td> </tr>
<tr>
<td>
**Poster**
</td>
<td>
ExaNoDe
@ DATE
2016
</td>
<td>
Poster presented at
DATE 2016 in
Dresden by CEA
</td>
<td>
Codendi repository for ExaNoDe consortium members; ExaNoDe website for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Poster**
</td>
<td>
ExaNoDe
@
HiPEAC
2017
</td>
<td>
Poster presented at
HiPEAC 2017 in
Stockholm by CEA
</td>
<td>
Codendi repository for ExaNoDe consortium members; ExaNoDe website for public
access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Presentation**
</td>
<td>
ExaNoDe
@ ISC
2016
</td>
<td>
Presentation at workshop on
International
Cooperation at ISC 2016 in Frankfurt by BSC.
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Presentation**
</td>
<td>
ExaNoDe
@ MontBlanc workshop
2017
</td>
<td>
Presentation at MontBlanc project workshop in
Barcelona by CEA.
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Publication**
</td>
<td>
ExaNoDe
@ ISC
2017
</td>
<td>
Position paper submitted at ISC 2017 in Frankfurt by VOSYS.
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Publication**
</td>
<td>
ExaNoDe
@
MEMSYS
2016
</td>
<td>
Publication by BSC at MEMSYS 2016
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Publication**
</td>
<td>
ExaNoDe
@
IWOMP
2016
</td>
<td>
Publication by UOM at IWOMP 2016
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Publication**
</td>
<td>
ExaNoDe
@ ISPASS
2016
</td>
<td>
Publication by UOM at ISPASS 2016
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Publication**
</td>
<td>
ExaNoDe
@ PACT
2016
</td>
<td>
Publication by UOM at PACT 2016
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Poster**
</td>
<td>
ExaNoDe @
ACACES
2016
</td>
<td>
Poster presented at
ACACES 2016 in
Fiuggi by FORTH
</td>
<td>
Codendi repository for ExaNoDe consortium members.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Website**
</td>
<td>
ExaNoDe
project website
</td>
<td>
exanode.eu
</td>
<td>
Website subcontractor premises
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Flyer**
</td>
<td>
ExaNoDe
flyer
</td>
<td>
Public information about the project, its objectives and future achievements
</td>
<td>
Website for public access.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr> </table>
**Table 3: Scientific publications, presentations and dissemination
materials**
Table 3 will be amended throughout the project, in order to include newly
generated dissemination data.
The publications and related research data will be publicly provided by
research data repositories respecting the policies and rules set out by the
publishers (journals or conferences). The partners will use an open access
repository, connected to the tools proposed by the European Commission (e.g.
OpenAIRE), to grant access to the publications and to bibliographic metadata
in a standard format including information requested by the European
Commission. Moreover, some posters shown in Table 3 will be posted on the
ExaNoDe website.
All scientific publications involving BSC will be uploaded to the UPCommons
open access repository of _Universitat Politècnica de Catalunya_ (UPC).
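As an illustration of such bibliographic metadata in a standard format, the sketch below uses Dublin Core field names together with the `info:eu-repo` vocabulary commonly used by OpenAIRE-compliant repositories. Only the grant agreement number (671578) is taken from this document; the title, authors and date are hypothetical placeholders:

```python
# A minimal sketch of an OpenAIRE-style bibliographic record; all values
# except the H2020 grant agreement number (671578) are hypothetical.
record = {
    "dc:title": "Example ExaNoDe publication",           # placeholder
    "dc:creator": ["Author, A.", "Author, B."],          # placeholders
    "dc:date": "2017-06-01",                             # placeholder
    "dc:type": "info:eu-repo/semantics/article",
    "dc:rights": "info:eu-repo/semantics/openAccess",
    # Links the publication to the funding project for EC reporting:
    "dc:relation": "info:eu-repo/grantAgreement/EC/H2020/671578",
}
print(record["dc:relation"])
```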
**4 Technical Datasets**
The technical datasets generated by the ExaNoDe project include:
* Mini-application and performance-critical kernel codes (from WP2 “Co-Design for Exascale HPC systems”);
* Firmware, OS, virtual machine, parallel programming models and runtime libraries codes (from WP3 “Enablement of Software Compute Node”);
* System-on-Chip design databases: RTL code, hard-macro design, gate netlist, design scripts, design environment, GDS2 file (from WP4 “Compute node design and manufacture”);
* Board design datasets (from WP5 “System Integration & Evaluation”).
The following technical datasets are foreseen for ExaNoDe project:
<table>
<tr>
<th>
**Type**
</th>
<th>
**Reference and name**
</th>
<th>
**Description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Code**
</td>
<td>
Mini-apps or kernels of performance-critical algorithms
</td>
<td>
Source or binary code of the mini-applications or performance-critical kernels.
</td>
<td>
Binary files or plain text files.
</td>
<td>
No such datasets have been generated at M18 of the ExaNoDe project. Data
sharing, and archiving policies will be described in an updated DMP.
</td> </tr>
<tr>
<td>
**Binary code**
</td>
<td>
Virtual machine enhanced checkpoint with post-copy
</td>
<td>
Source code of the virtual machine
memory
snapshot based on post-copy.
</td>
<td>
plain text files (e.g., .h,
.c)
</td>
<td>
The code will be released in the form of
diff patches to the open source Qemu and
Linux communities.
</td>
<td>
Source code available in various mailing lists archives, and in QEMU code tree
once upstream.
</td> </tr>
<tr>
<td>
**Binary code**
</td>
<td>
Virtual machine incremental checkpoint
</td>
<td>
Binary release of the Virtual Machine incremental checkpointing feature.
</td>
<td>
Binary file
</td>
<td>
Binary will be released to the ExaNoDe consortium for integration and final
project prototype demonstration.
</td>
<td>
Binary released to
ExaNoDe
consortium for use only within the project lifetime.
</td> </tr>
<tr>
<td>
**Binary code**
</td>
<td>
ExaNoDe firmware
</td>
<td>
Realization of
UNIMEM support on experimental prototypes of
the ExaNoDe project (hardware design for
FPGAs together with Linux device drivers)
</td>
<td>
Binary files
</td>
<td>
The firmware, in the form of binary code (FPGA bitstreams, Linux device
drivers and kernel configuration), will be installed in the shared
testbed, hosted at FORTH premises,
available for use by partners.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Binary code**
</td>
<td>
ExaNoDe operating system
</td>
<td>
Enhancements and additions to the Linux kernel (together with API libraries)
for exposing Unimem
platform functionality to programming models and enduser applications.
</td>
<td>
Binary files
</td>
<td>
The operating system, in the form of binary code (Linux with Unimem support
and low-level API libraries), will be installed in the shared
testbed, hosted at FORTH premises,
available for use by partners.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Source code**
</td>
<td>
UNIMEMoptimized MPI library
</td>
<td>
MPI library that has been optimized for
use with
UNIMEM
memory scheme
</td>
<td>
Implementation of the MPI standard consisting of plain text files (e.g., .h, .c)
</td>
<td>
The UNIMEM MPI source code will be maintained by BSC and FORTH and available
from a git or SVN repository hosted at BSC. The code will be freely
downloadable with an open source licence.
</td>
<td>
The UNIMEM
MPI source code will be available at least three years after the end of the
project.
</td> </tr> </table>
<table>
<tr>
<th>
**Source code**
</th>
<th>
GPI
</th>
<th>
PGAS-based distributed one-sided and asynchronous programming model.
</th>
<th>
GPI implements the GASPI standard, to be found at: www.gaspi.de
</th>
<th>
The GPI code will be maintained at
Fraunhofer's premises and released with an open source licence.
</th>
<th>
Dataset will be available at least five years after the end of the project.
</th> </tr>
<tr>
<td>
**Source code**
</td>
<td>
Mercurium
</td>
<td>
OmpSs
compiler
</td>
<td>
plain text files (e.g., .h,
.c)
</td>
<td>
The Mercurium compiler source code is maintained by BSC and freely
downloadable from a git repository hosted at BSC, with the LGPL licence.
</td>
<td>
The Mercurium source code will be available at least 3 years after the end of
the project.
</td> </tr>
<tr>
<td>
**Source code**
</td>
<td>
Nanos 6
</td>
<td>
OmpSs runtime system
</td>
<td>
plain text files (e.g., .h,
.c)
</td>
<td>
The Nanos 6 runtime system source code is maintained by BSC in a git
repository hosted at BSC. Before the end of the project the code will be
freely downloadable with an open source licence.
</td>
<td>
The Nanos 6 source code will be available at least 3 years after the end of
the project.
</td> </tr>
<tr>
<td>
**Source code**
</td>
<td>
OpenStream
</td>
<td>
OpenStream compiler and runtime system
</td>
<td>
Plain text files (e.g., .h,
.c)
</td>
<td>
The OpenStream source code is maintained by UoM and publicly available through
a dedicated
portal
( _www.openstream.info_ ) and git repository hosted at UoM. The runtime
system code is freely available under the GPLv2 license; the compiler is based on
the GNU C Compiler
and inherits its licenses (mostly GPLv2 and 3).
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Source code**
</td>
<td>
Thermal management
</td>
<td>
Power capping and thermal management
</td>
<td>
plain text files (e.g., .h,
.c, .py)
</td>
<td>
The thermal management runtime code is maintained by ETHZ in a git repository
hosted at ETHZ. Before the end of the project the code will be freely
downloadable with an open source licence.
</td>
<td>
The thermal management
source code will be available at least 3 years after the end of the project.
</td> </tr>
<tr>
<td>
**Chiplet RTL code**
</td>
<td>
ExaNode_Chiplet_RTL
</td>
<td>
RTL code of chiplet design
</td>
<td>
VHDL, Verilog…
files
</td>
<td>
The RTL code will be maintained at CEA's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**ExaConv RTL code**
</td>
<td>
ExaNode_ExaConv_RTL
</td>
<td>
RTL code of the
Convolution Hardware operator
</td>
<td>
VHDL, Verilog…
files
</td>
<td>
The RTL code will be maintained at ETHZ's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**UoM RTL code + Rx/Tx cells**
</td>
<td>
ExaNode_UoM_Macros
</td>
<td>
Rx/Tx Hard macro
</td>
<td>
VHDL, Verilog…
files
</td>
<td>
The RTL code + Hard Macro will be maintained at UoM’s premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Chiplet netlist**
</td>
<td>
ExaNode_Chiplet_Netlist
</td>
<td>
Mapping of the RTL code of the chiplet design onto 28FDSOI technology
</td>
<td>
Verilog netlist
</td>
<td>
The RTL code will be maintained at CEA's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project. Snapshot backup policy is in place + copy in another room on a daily basis with 6 months retention of the data
</td> </tr>
<tr>
<td>
**Design scripts**
</td>
<td>
ExaNode_Design_Scripts
</td>
<td>
Scripts and constraints used to generate both netlist and GDS
starting from
RTL code
</td>
<td>
sh, csh, tcl, sdc …. files
</td>
<td>
The design scripts will be maintained at CEA's premises.
</td> </tr>
<tr>
<td>
**GDS2 file**
</td>
<td>
ExaNode_GDS2
</td>
<td>
Output of the design flow, file sent to the foundry
</td>
<td>
GDS2
</td>
<td>
The GDS2 code will be maintained at CEA's premises.
</td> </tr>
<tr>
<td>
**Verification environment**
</td>
<td>
ExaNode_Verif_Env
</td>
<td>
Testbench and associated files used to validate the RTL code both functionally
and in test mode
</td>
<td>
sh, csh, tcl,
C …. files
</td>
<td>
The verification environment platform will be maintained at CEA’s premises.
</td> </tr>
<tr>
<td>
**FPGA RTL**
**code**
</td>
<td>
ExaNode_Chiplet_FPGA_RTL
</td>
<td>
RTL code for FPGAs chiplet programming
</td>
<td>
VHDL,
Verilog files
</td>
<td>
The RTL code will be maintained at FORTH's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**FPGA scripts**
</td>
<td>
ExaNode_Chiplet_FPGA_Scripts
</td>
<td>
Script for program load
onto the FPGAs chiplet
</td>
<td>
sh, csh, tcl, C files
</td>
<td>
The set of scripts will be maintained at FORTH’s premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**FPGA Design**
</td>
<td>
ExaNode_CFPGA1_RTL
</td>
<td>
RTL code and
Vivado files for
MCM Compute FPGA1 programming
</td>
<td>
VHDL,
Verilog,
Vivado files
</td>
<td>
The RTL code and Vivado files will be maintained at FORTH's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**FPGA Design**
</td>
<td>
ExaNode_CFPGA2_RTL
</td>
<td>
RTL code and
Vivado files for
MCM Compute FPGA2 programming
</td>
<td>
VHDL,
Verilog,
Vivado files
</td>
<td>
The RTL code and Vivado files will be maintained at FORTH's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**FPGA scripts**
</td>
<td>
ExaNode_CFPGA_Scripts
</td>
<td>
Script for program load onto the MCM FPGAs
</td>
<td>
sh, csh, tcl, C files
</td>
<td>
The set of scripts will be maintained at FORTH’s premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Schematics**
</td>
<td>
ExaNode Demonstration board schematics
</td>
<td>
Schematics in ORCAD format
</td>
<td>
.opj
</td>
<td>
The schematics will be maintained at FORTH's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project
</td> </tr>
<tr>
<td>
**PCB Layout**
</td>
<td>
ExaNode Demonstration board layout
</td>
<td>
Layout in
Allegro format
</td>
<td>
.brd and gerber files
</td>
<td>
The PCB layout files will be maintained at FORTH's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project
</td> </tr> </table>
# Table 4: Technical Datasets within ExaNoDe
Some of the technical datasets will be maintained at the partners’ premises
and made available for ExaNoDe partners for prototype integration. This could
also be done through the CVS repository of the Codendi web-based project
management environment for ExaNoDe. It offers easy, secure and
consortium-limited access to ExaNoDe project datasets.
Moreover, several of the technical datasets will be provided under open-source
licenses and are publicly available for download from a git or SVN repository
hosted at the partner’s premises (see Table 4 for related datasets).
Aiming to reach the widest possible audience, to fulfil the dissemination and
communication strategy, and to achieve sustainability beyond the end of the
project, the ExaNoDe website can provide descriptions of and links to the
various individual sites hosting the source code licensed under any kind of
open-source license. Such decisions will be taken along the project lifecycle
and the DMP will be updated accordingly.
# 5 Evaluation Datasets
The evaluation datasets accompany the scientific publications and
deliverables. These datasets include evaluation and performance measurements
of the ExaNoDe architecture and prototype. We foresee the following evaluation
datasets:
<table>
<tr>
<th>
**Type**
</th>
<th>
**Reference and name**
</th>
<th>
**Description**
</th>
<th>
**Standards and metadata**
</th>
<th>
**Data sharing**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
**Simulation results**
</td>
<td>
Chiplet simulation results
</td>
<td>
Simulation results from the verification testbench
</td>
<td>
Completed
verification plan
</td>
<td>
The simulation results will be available at CEA's premises.
</td>
<td>
Dataset will be available at least two years after the end of the project.
Snapshot backup policy is in place + copy in another room on a daily basis
with 6 months retention of
the data
</td> </tr>
<tr>
<td>
**Measurements**
</td>
<td>
Critical kernels performances
</td>
<td>
Performance measurements of mini-applications or critical kernels
</td>
<td>
Measurement result files
</td>
<td>
The results will be available from the Codendi SVN repository for project
internal sharing.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Measurements**
</td>
<td>
Synthetic benchmark performances
</td>
<td>
Performance measurements of synthetic benchmarks
</td>
<td>
Measurement result files
</td>
<td>
The results will be available from the Codendi SVN repository for project
internal sharing.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Measurements**
</td>
<td>
Micro-benchmarks for UNIMEM mechanisms
</td>
<td>
Performance measurements of synthetic benchmarks using the UNIMEM firmware on the ExaNoDe testbench
</td>
<td>
Measurement result files
</td>
<td>
The results will be available from the Codendi SVN repository for project
internal sharing.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr>
<tr>
<td>
**Measurements**
</td>
<td>
Benchmark performances
</td>
<td>
Performance measurements of communication benchmarks
</td>
<td>
Measurement result files
</td>
<td>
The results will be available from the Codendi SVN repository for project
internal sharing.
</td>
<td>
Dataset will be available at least two years after the end of the project.
</td> </tr> </table>
**Table 5: Evaluation datasets**
6. **Concluding Remarks**
This deliverable presents the Data Management Plan (DMP) of the ExaNoDe
project aligned with the amendment reference No AMD-671578-13. It describes
the data management life cycle for all datasets that will be collected,
processed or generated by the ExaNoDe project. Several categories of datasets
have been identified. Category by category, this DMP addresses data sharing and
archiving and reflects the current status of reflection within the consortium
about the data that will be produced.
The DMP evolves during the lifespan of the ExaNoDe project and it will be
updated according to project needs.
7. **References and Applicable Documents**
[1] Wray F. ExaNoDe Project Manual, V1.3 Dec. 2016
# Executive Summary
The current deliverable is directly connected with the work performed under
Task 7.4 “Standardization and concertation actions” and serves as the
initial plan for the collection, organization, storing and sharing of the
knowledge and data created within the project. The described data management
plan is based on several inputs, namely: a) the MyAirCoach Description of
Action (DOA) document, b) guidelines of the European Commission for the data
management of H2020 research projects, c) the outcomes of the plenary project
meetings and d) the input from several informal discussions among the project
consortium members.
The data management requirements and standardization guidelines specified in
this document are expected to form a reference manual to be used throughout
the project. In this way, MyAirCoach is aiming to develop a stable, reliable
and easy-to-use platform which will form an open repository for asthma
research and extend beyond the framework of the current project’s timeline.
Finally, it is important to underline that the current deliverable will be a
living document which will be continuously adapted depending on the needs of
the project research and development objectives, and based on the direct input
from members of the consortium and actual system users. This document is the
third updated and final version of the document adapted and extended to the
needs and requirements raised in the third year and the six-month extension
period of the MyAirCoach project.
# 1 Introduction
The MyAirCoach project is aiming to support the research in the field of
personalized self-management of health and more specifically develop an
innovative solution for the effective and efficient management of asthma. In
this direction and based on the project’s description of work a number of
datasets are going to be collected and utilized for the support of both the
development and research tasks of the project. Therefore, it is considered of
fundamental importance to define the framework for the collection,
organization and sharing of such information in order to increase its
long-term usability for the project partners and, more importantly, for the
entire research community.
Firstly, the current deliverable is aiming to provide concise summaries of the
types of datasets that are expected to be used during the project. These
datasets will form the basis for the design, development and testing of the
MyAirCoach system and in addition will be used for the academic research
activities foreseen by the consortium.
In the second part of the document important issues of the MyAirCoach Data
Management Plan (DMP) are discussed in order to outline the specific
requirements and guidelines that should be followed throughout the project’s
timeline.
The proposed plan was designed to allow the efficient dissemination of results
and the stimulation of research without jeopardizing any ethical requirements
of the project or decreasing the commercial value of the overall MyAirCoach
solution.
More specifically the MyAirCoach data management plan is aiming to:
1. Outline the responsibilities for data protection and sharing within an ethical and legal framework.
2. Avoid interfering with the protection of the intellectual property created by the project.
3. Support open access to the project’s research outcomes and scientific publications
4. Support the openness of data related to both the publications and the development processes of the project
5. Define a documentation framework for the annotation of the collected knowledge towards increased discoverability and validation
6. Allow the creation of an online platform that will support all the above functionalities
Finally, the first version of the online MyAirCoach open portal is presented
with special focus on the access to open data by both registered and external
users. As the development tasks of the project will be evolved this platform
will be enhanced with additional functionalities regarding the data management
capabilities but also with additional datasets and links with data from other
external sources.
# 2 MyAirCoach Principles of Data Management
## 2.1 Data Management Requirements
This section describes the requirements and principles that form the
basis upon which the MyAirCoach data management plan has been defined. More
specifically the current deliverable has been based on the guidelines of the
EU Commission regarding the openness of the data generated from a project that
has been funded by the H2020 framework _** 1 ** _ . According to these
guidelines the scientifically-oriented data that are going to be generated by
the MyAirCoach project will be formed so that they can be easily
**discoverable** , **accessible** , **assessable** and **intelligible** ,
**usable** beyond the original purpose of their collection and usage but also
**interoperable** to appropriate quality standards.
Furthermore, due to the health-oriented nature of the project, two additional
but equally important attributes will be taken into account: **data security**
and the **preservation of the participants’ privacy**. In this direction, all
the collected medical and sensitive data of patients will be protected from
any unauthorized access and will also be carefully anonymized in order to be
shared through the proposed open data management platform of the project (a
minimal sketch of such a step is shown below).
In any case the publication of data should always conform to the ethical
guidelines of the MyAirCoach project as they were already defined in D8.5
“Ethics, Safety and mHealth Barriers Manual” deliverable.
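As a purely illustrative sketch of such a pseudonymization step (not the project’s actual anonymization pipeline), the following Python fragment replaces a direct patient identifier with a keyed-hash pseudonym and keeps only fields cleared for sharing; the field names and the secret key are hypothetical:

```python
# A minimal pseudonymization sketch, assuming a keyed hash is acceptable
# for replacing direct identifiers before sharing; illustrative only.
import hashlib
import hmac

SECRET_KEY = b"key held by the data controller"  # hypothetical secret

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_for_sharing(record: dict) -> dict:
    """Drop direct identifiers and keep only fields cleared for sharing."""
    shared_fields = ("age_group", "peak_flow", "inhaler_usage")  # hypothetical
    shared = {k: record[k] for k in shared_fields if k in record}
    shared["subject"] = pseudonymize(record["patient_id"])
    return shared

print(prepare_for_sharing(
    {"patient_id": "P-042", "name": "J. Doe", "age_group": "40-49", "peak_flow": 410}
))
```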
## 2.2 EU Commission Guidelines for data management
The EU Commission has published some guidelines for appropriate data
management plans in Horizon 2020 projects. This guide is structured as a
series of questions that should be ideally clarified for all datasets produced
in any H2020 project. The following Table 1 presents the different aspects of
these questions, to which the MyAirCoach project aims to conform.
**Table 1: EU Commission Data Management Plan Guidelines and Assurance of
MyAirCoach**
**Conformance**
<table>
<tr>
<th>
**Aspect**
</th>
<th>
**Question**
</th> </tr>
<tr>
<td>
**Discoverable**
</td>
<td>
DMP question: are the data and associated software produced and/or used in the
project discoverable (and readily located), identifiable by means of a
standard identification mechanism (e.g. Digital Object Identifier)?
</td> </tr>
<tr>
<td>
**Accessible**
</td>
<td>
DMP question: are the data and associated software produced and/or used in the
project accessible and in what modalities, scope, licenses (e.g. licensing
framework for research and education, embargo periods, commercial
exploitation, etc.)?
</td> </tr>
<tr>
<td>
**Assessable and intelligible**
</td>
<td>
DMP question: are the data and associated software produced and/or used in the
project assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review, are data
provided in a way that judgments can be made about their reliability and the
competence of those who created them)?
</td> </tr>
<tr>
<td>
**Usable beyond the original purpose for which it was collected**
</td>
<td>
DMP question: are the data and associated software produced and/or used in the
project usable by third parties even a long time after the collection of the
data (e.g. is the data safely stored in certified repositories for long term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
</td> </tr>
<tr>
<td>
**Interoperable to specific quality standards**
</td>
<td>
DMP question: are the data and associated software produced and/or used in the
project interoperable allowing data exchange between researchers,
institutions, organizations, countries, etc. (e.g. adhering to standards for
data annotation, data exchange, compliant with available software
applications, and allowing re-combinations with different datasets from
different origins)?
</td> </tr> </table>
## 2.3 Principles of medical information security
In order to adapt the requirements for openness of data without jeopardizing
the rights of the participating patients, the principles for the security of
medical information (provided by the British Medical Association _** 2 ** _ )
were adopted as defined below:
**Table 2: Principles of medical information security**
<table>
<tr>
<th>
**Principle**
</th>
<th>
Description
</th> </tr>
<tr>
<td>
**Access control.**
</td>
<td>
Each identifiable clinical record shall be marked with an access control list
naming the people or groups of people who may read it and append data to it.
The system shall prevent anyone not on the access control list from accessing
the record in any way.
</td> </tr>
<tr>
<td>
**Record opening**
</td>
<td>
A clinician may open a record with herself and the patient on the access
control list. Where a patient has been referred, she may open a record with
herself, the patient and the referring clinician(s) on the access control
list.
</td> </tr>
<tr>
<td>
**Control**
</td>
<td>
One of the clinicians on the access control list must be marked as being
responsible. Only she/he may alter the access control list, and she may only
add other health care professionals to it.
</td> </tr>
<tr>
<td>
**Consent and notification**
</td>
<td>
The responsible clinician must notify the patient of the names on his record's
access control list when it is opened, of all subsequent additions, and
whenever responsibility is
</td> </tr>
<tr>
<td>
</td>
<td>
transferred. Her/his consent must also be obtained, except in emergency or in
the case of statutory exemptions.
</td> </tr>
<tr>
<td>
**Persistence**
</td>
<td>
No one shall have the ability to delete clinical information until the
appropriate time period has expired.
</td> </tr>
<tr>
<td>
**Attribution**
</td>
<td>
All accesses to clinical records shall be marked on the record with the
subject's name, as well as the date and time. An audit trail must also be kept
of all deletions.
</td> </tr>
<tr>
<td>
**Information flow**
</td>
<td>
Information derived from record A may be appended to record B if and only if
B's access control list is contained in A's.
</td> </tr>
<tr>
<td>
**Aggregation control**
</td>
<td>
There shall be effective measures to prevent the aggregation of personal
health information. In particular, patients must receive special notification
if any person whom it is proposed to add to their access control list already
has access to personal health information on a large number of people.
</td> </tr>
<tr>
<td>
**Trusted Computing Base**
</td>
<td>
Computer systems that handle personal health information shall have a
subsystem that enforces the above principles in an effective way. Its
effectiveness shall be subject to evaluation by independent experts.
</td> </tr> </table>
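To make the access-control and information-flow principles above concrete, the following minimal Python sketch encodes two of them; the record structure and names are hypothetical and are not part of the MyAirCoach implementation:

```python
# A minimal sketch of the BMA access-control and information-flow rules,
# assuming each record carries an explicit access control list (ACL).
from dataclasses import dataclass, field

@dataclass
class ClinicalRecord:
    acl: set = field(default_factory=set)  # who may read/append (Access control)
    responsible: str = ""                  # only this clinician may alter the ACL

def can_access(record: ClinicalRecord, user: str) -> bool:
    # Access control: anyone not on the ACL is prevented from access.
    return user in record.acl

def may_flow(src: ClinicalRecord, dst: ClinicalRecord) -> bool:
    # Information flow: data derived from record A may be appended to
    # record B if and only if B's ACL is contained in A's.
    return dst.acl <= src.acl

a = ClinicalRecord(acl={"dr_smith", "patient_1", "nurse_lee"}, responsible="dr_smith")
b = ClinicalRecord(acl={"dr_smith", "patient_1"}, responsible="dr_smith")
assert may_flow(a, b) and not may_flow(b, a)
```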
MyAirCoach followed a comprehensive strategy to protect and strengthen data
privacy before the GDPR _**3** _ , _**4** _ came into force. The GDPR took
effect on May 25th, 2018, almost simultaneously with the end of the project;
hence, due to the limited time, the MyAirCoach project could not fully adopt
the regulation. On the other hand, CNET, the partner responsible for data
collection, is a GDPR-compliant organization. In consequence, all patients’
data included in the project are fully protected based on GDPR data privacy
regulations.
## 2.4 Actors
An important step towards the accurate and relevant definition of the data
management plan is the identification of all related actors that may be
involved in the formation and usage of the project’s online open access
repository. The following Table 3 presents the actors of the MyAirCoach online
platform for accessing and uploading datasets. Each category has its own
distinctive characteristics that should be taken into consideration. The basic
actors are patients and health care professionals who are the ones directly
involved in the management and control of the asthma disease. Researchers
dealing with aspects of asthma are also included along with external users who
will include commercial entities such as health-oriented technology providers
and entrepreneurs.
**Table 3: MyAirCoach open access actors**
<table>
<tr>
<th>
**Actors**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
Patients
</td>
<td>
People who have asthma and are subjects of clinicians’ care
</td> </tr>
<tr>
<td>
Patient families
</td>
<td>
People in the close environment of patients who are given, by the patients,
the right to access their medical record
</td> </tr>
<tr>
<td>
Health care professional
</td>
<td>
Doctors, nurses, trainers, administrative personnel having direct contact with
and responsibility for patients
</td> </tr>
<tr>
<td>
Researchers
</td>
<td>
Research institutes, individual researchers, and in general people
investigating aspects of asthma
</td> </tr>
<tr>
<td>
External
</td>
<td>
Third party users of MyAirCoach data for technology development purposes, such
as commercial entities and entrepreneurs
</td> </tr> </table>
## 2.5 Self-Audit Process
The Caldicott Report _** 5 ** _ will serve as a guideline for the self-audit
processes of the datasets produced within MyAirCoach. The Caldicott report
sets out a number of general principles that health and social care
organizations should use when reviewing their use of client information. The
report makes several recommendations, one of which is the appointment of
Caldicott guardians, i.e. members of staff with a responsibility to ensure
patient data is kept secure. It is now a requirement for every NHS
organization to have a Caldicott guardian.
Within the myAirCoach project, the ethical advisory board as well as the Advisory
Patient Forums will be in charge of the execution of the defined data
management plan and will act as a “Caldicott guardian” supervising the
compliance with legal and ethical requirements in terms of information
security and data protection. In addition to the datasets produced by the
project, the users of the myAirCoach system will be able to upload their own
datasets. Thus, the existence of an auditing mechanism is deemed very critical
in order to avoid the publication of non-validated clinical data or data
collected from campaigns that do not comply with the ethical manual of the
MyAirCoach project.
**Figure: Self-Audit Process for MyAirCoach Datasets**
The steps of the Self-Audit process that will be implemented are summarized
below:
* Self-Audit Planning
  * Plan and Set-up Self-Audit
  * Collect Relevant Documents
* Identification, Classification and Assessment of Datasets
  * Analyze Documents
  * Identify Data Sets
  * Classify Data Sets
  * Assess Data Sets
* Report of Results and Recommendations
  * Collate and analyze information from the audit
  * Report on the compliance with the Data Management Plan
  * Identify weaknesses and decide on corrective actions
## 2.6 Risk Assessment
Data management is directly connected with issues of privacy and, as such, it
should aim at the efficient and early identification of risks and their
timely resolution through appropriate strategies. Initially, the data objects
need to be categorized based on the identifying and sensitive information that
they contain in order to select the appropriate mitigation strategies.
MyAirCoach will be using the Harvard Research Data Security Policy (HRDSP)
scale _** 6 ** _ for the characterization of the risks associated with the
privacy of participants.
After categorizing the data objects, the risks related to each category should
be determined. The risk analyses and mitigation strategies will be considered
separately for every dataset, so that the data finally published are
categorized as Level 1.
**Table 4: Categorization of datasets in regards to privacy**
<table>
<tr>
<th>
HRDSP
</th>
<th>
Description
</th>
<th>
MyAirCoach
publication rights
</th> </tr>
<tr>
<td>
**Level 1**
</td>
<td>
De-identified research information about people and other non-confidential
research information
</td>
<td>
Can be published on the open access
platform
</td> </tr>
<tr>
<td>
**Level 2**
</td>
<td>
Benign information about individually identifiable people
</td>
<td>
Can be shared within the consortium
</td> </tr>
<tr>
<td>
**Level 3**
</td>
<td>
Sensitive information about individually identifiable people
</td>
<td>
Can be shared within the consortium
</td> </tr>
<tr>
<td>
**Level 4**
</td>
<td>
Very sensitive information about individually identifiable people
</td>
<td>
Can be used by the responsible clinical partner only
</td> </tr>
<tr>
<td>
**Level 5**
</td>
<td>
Extremely sensitive information about individually identifiable people
</td>
<td>
Can be used by the responsible clinical partner only
</td> </tr> </table>
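The mapping of Table 4 can be stated compactly in code; the sketch below assumes the HRDSP level of a dataset has already been determined during the risk assessment:

```python
# A minimal sketch encoding the publication rights of Table 4 per HRDSP level.
PUBLICATION_RIGHTS = {
    1: "open access platform",              # de-identified data only
    2: "consortium only",
    3: "consortium only",
    4: "responsible clinical partner only",
    5: "responsible clinical partner only",
}

def sharing_scope(hrdsp_level: int) -> str:
    """Return where a dataset of the given HRDSP level may be shared."""
    return PUBLICATION_RIGHTS[hrdsp_level]

# Only Level 1 (de-identified) data may reach the open platform:
assert sharing_scope(1) == "open access platform"
```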
## 2.7 Context Categorization of Data
The research data that will be collected or generated during the project
lifecycle can be categorized into four groups regarding their context and
informational weight. Table 5 presents a summary of the categories
identified for the categorization of data collected within the MyAirCoach
project.
**Table 5: Context categorization of myAirCoach Data**
| Category | Description | Examples |
| --- | --- | --- |
| Raw Collected Data | Obtained data that has not been subjected to any quality assurance or control | Measurements collected from sensors/devices (e.g. smart bracelets, sensor-enhanced MyAirCoach inhaler) |
| Verified/Validated Collected Data | Raw data that has been evaluated for completeness, correctness, and conformance/compliance against the standard operating procedure (verified), as well as reviewed for specific analytic quality (validated) | Annotated sensor measurements, images from patients' tomographies, documents from test campaigns, asthma action plans, etc. |
| Analyzed Collected Data | Validated data that has been analyzed, through statistical operations, based on a specific target or application scenario | Patient models, assessments of inhaler usage, patients' nutritional assessments, etc. |
| Generated Data | The data needed to validate the results presented in scientific publications (pseudo-code, libraries, system design, etc.) | Multi-parametric indicators of asthma control, algorithmic approaches for the detection of inhaler actuations, workflow for the deployment of User Centered Design in mHealth applications |
# 3 MyAirCoach Data Management Plan
This chapter aims to provide a detailed description of all foreseen
MyAirCoach datasets, using the Data Management Plan template established by
the European Commission for Horizon 2020 [1]. The definition of all related
aspects of the dataset categories (Table 6) reflects the importance of the
long-term preservation of data and the requirement for the widest possible
sharing of the knowledge created by EU projects.
**Table 6: H2020 Template for Data Management Plan [1]**
| Aspect | Description |
| --- | --- |
| **Data set reference and name** | Identifier for the data set to be produced |
| **Data set description** | Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse. |
| **Standards and metadata** | Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created. |
| **Data sharing** | Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related). |
| **Archiving and preservation (including storage and backup)** | Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximated end volume is, what the associated costs are and how these are planned to be covered. |
In order to indicate the position of the datasets within the MyAirCoach
project and to describe their envisioned use toward the project objectives, a
number of fields were added to the above template, as indicated in Table 7.
**Table 7: MyAirCoach additional aspects of Data Management**
| Aspect | Description |
| --- | --- |
| **Relation to the objectives of MyAirCoach** | Introduced in order to summarize how the specific dataset is going to be used within the project and how it is expected to contribute to the successful delivery of the project objectives. |
| **Related Work Packages** | List of all the tasks and work packages of the project's description of work that are related to the specific type of data. |
| **Ethical issues and requirements** | Description of any ethical requirements and suggestions for mitigation strategies in the case of identified risks. |
In order to facilitate the easy use of the datasets across different
platforms and operating systems, a naming scheme has been proposed for all
uploaded files. More specifically, the following convention has been selected
for the purposes of MyAirCoach and for the files uploaded on the online open
access repository:
_**"[Dataset prefix]_[ID]_[Date]_[Author].[ext]"**_

| Field | Meaning |
| --- | --- |
| _Dataset prefix_ | The prefix of the specific type of dataset, as identified in Table 8 |
| _ID_ | The identification number as assigned by the online system |
| _Date_ | The date of upload on the online system, in the format YYMMDD |
| _Author_ | The author's username |
| _ext_ | The file extension pertaining to the format used |

The selected names should not include spaces or symbols, with the only
exception of the underscore.
Table 8 summarizes the prefixes for the foreseen categories of MyAirCoach
datasets, alongside a short description of the nature of each dataset
category.
**Table 8: Naming Prefixes of Dataset Categories**
| No | Naming Prefix | Description |
| --- | --- | --- |
| 01 | _DS_InhalerUsage_ | Datasets related to inhaler usage measurements, including both the time and technique of use |
| 02 | _DS_Physiology_ | Datasets of physiology assessments, including both sensor measurements and doctor diagnoses and comments |
| 03 | _DS_PhysicalActivity_ | Datasets related to the lifestyle of asthma patients, with special focus on activity levels |
| 04 | _DS_Nutritional_ | Datasets containing information regarding nutritional aspects of asthma patients |
| 05 | _DS_ExhaledNO_ | Datasets of exhaled nitric oxide measurements of asthma patients and healthy subjects |
| 06 | _DS_Environmental_ | Datasets of environmental measurements |
| 07 | _DS_Tomography_ | Datasets of patient tomography of the lungs |
| 08 | _DS_LungSimulationResults_ | Results from the simulation of lungs, describing the flow of air within the airways and the deposition of particles on the airway walls; tables of numerical data and analysis results |
| 09 | _DS_PatientModels_ | Datasets containing indicative patient models to be used for the multi-parametric description of asthma |
| 10 | _DS_EducationAndTraining_ | Datasets of educational and training content describing the disease of asthma and the proper use of different types of inhalers |
| 11 | _DS_ActionPlans_ | Datasets of asthma action plans and medication strategies prescribed by doctors |
| 12 | _DS_UserRequirements_ | Datasets containing outcomes and information related to the assessment of user requirements and feedback sessions within the UCD processes |
| 13 | _DS_TestCampaigns_ | Datasets collected during the test campaigns of the project, categorized with regard to the collection site |
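As a sketch of how the convention and the prefixes of Table 8 could be enforced at upload time, the following Python fragment (illustrative only; the four-digit ID width and the example values are assumptions) composes and validates file names:

```python
# Illustrative sketch of the naming convention
# "[Dataset prefix]_[ID]_[Date]_[Author].[ext]" with date as YYMMDD and
# underscore as the only allowed symbol. Prefixes are those of Table 8.
import re
from datetime import date
from typing import Optional

PREFIXES = {"DS_InhalerUsage", "DS_Physiology", "DS_PhysicalActivity",
            "DS_Nutritional", "DS_ExhaledNO", "DS_Environmental",
            "DS_Tomography", "DS_LungSimulationResults", "DS_PatientModels",
            "DS_EducationAndTraining", "DS_ActionPlans", "DS_UserRequirements",
            "DS_TestCampaigns"}

NAME_RE = re.compile(r"^(DS_[A-Za-z]+)_(\d+)_(\d{6})_([A-Za-z0-9]+)\.([A-Za-z0-9]+)$")

def build_name(prefix: str, file_id: int, author: str, ext: str,
               upload_date: Optional[date] = None) -> str:
    """Compose a file name following the convention (4-digit ID is an assumption)."""
    upload_date = upload_date or date.today()
    assert prefix in PREFIXES, "unknown dataset prefix (see Table 8)"
    return f"{prefix}_{file_id:04d}_{upload_date:%y%m%d}_{author}.{ext}"

def is_valid(name: str) -> bool:
    """Check a file name against the convention and the known prefixes."""
    m = NAME_RE.match(name)
    return bool(m) and m.group(1) in PREFIXES

print(build_name("DS_InhalerUsage", 42, "jsmith", "csv", date(2016, 3, 1)))
# -> DS_InhalerUsage_0042_160301_jsmith.csv
print(is_valid("DS_ExhaledNO_0007_160512_mdoe.csv"))  # True
```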
Indicative datasets generated within the myAirCoach project are available
online through the open access platform at
_https://myaircoach.iti.gr:40001/myaircoach/app/#/opendata_. An indicative
set of inhaler usage measurements is described in Appendix 2.
## 3.1 Datasets of Inhaler Usage Measurements
| **Name** | Dataset of Inhaler Usage Measurements |
| --- | --- |
| **Naming Prefix** | _DS_InhalerUsage_ |
| **Summary** | This type of dataset will include measurements and data collected regarding the use of inhalers by patients. More specifically, it is expected to include sound and acceleration measurements from sensors attached to the inhaler device. |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | MyAirCoach aims to develop novel algorithmic approaches for the automatic detection of inhaler actuations and the analysis of the technique of use. It is therefore considered of fundamental importance to produce a dataset from testing sessions which will be used not only for the training of machine learning approaches but also for the validation of results. |
| **Related Work Packages** | **WP3** Smart sensor based inhaler prototype and WBAN; **WP4** Computational models, intelligent information processing and DSS module; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Raw data will be collected by sensing elements attached to the inhaler devices. The annotation of collected data for the detection of actuation events and the characterization of inhaler technique will be done by experienced researchers. |
| **Nature and scale of data** | The data of this category will be in the form of time series describing measured parameters during the actual use of an inhaler. CSV (Comma Separated Values) is the advised file format in this category, since it allows easy use of the data both through programming languages and spreadsheet software packages (e.g. OpenOffice Calc, Microsoft Excel); in this case, timestamps for every measurement or the sampling rate should be defined. For the specific case of sound measurements, commonly used sound formats can also be considered, with WAV being the advised option. The annotation files are advised to be stored in CSV format corresponding to the actual time series, or in XML format identifying the start and stop positions of events and user actions (e.g. breath-in, inhaler actuation). |
| **Use by researchers and healthcare professionals** | The datasets in this category can support research in the field of biomedical signal processing and serve as a basis for the comparative validation of different algorithmic approaches. Furthermore, this type of dataset can be used for testing the accuracy of possible commercial products that rely on the same sensing capabilities. Finally, the annotation of the data relating to patients' inhaler technique can be used as an indicator of common errors patients make while using their inhaler. |
| **Indicative existing similar dataset** | No online available datasets have been identified in this category, for any method of sensing. |
| **Indicative scientific publications** | Only a very small number of publications are available in this field of study, mainly focusing on the understanding of Dry Powder Inhalers (DPIs) [7], [8], with only one identified exception of a scientific article monitoring the use of a Metered Dose Inhaler (MDI) [9]. |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents, along with metadata describing the demographics of the samples from which the data were generated and a detailed description of the data collection process. Indicative metadata include: (a) description of the experimental setup and procedure that led to the generation of the dataset, (b) documentation of the variables recorded in the dataset. The metadata will be in a format that may be easily parsed with open source software. |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | No specific type of software required; reading capabilities for CSV and WAV files suffice. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | The collected data should be carefully anonymized to preserve the privacy of participants. Sound measurements should be carefully reviewed, deleting any sections where participants speak and reveal important aspects of their way of life or identify themselves. |
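To illustrate how a published dataset of this category could be consumed, the following Python sketch (the file names follow the naming convention above; the CSV column names "timestamp" and "acceleration" are hypothetical) loads an acceleration time series and reports the duration of the accompanying sound recording:

```python
# Minimal sketch of consuming an inhaler usage dataset: a CSV time series
# of acceleration samples plus a WAV recording of the inhalation sound.
# Column and file names are illustrative assumptions, not project outputs.
import csv
import wave

def load_time_series(path: str) -> list:
    """Read (timestamp, acceleration) pairs from a CSV file."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        return [(float(r["timestamp"]), float(r["acceleration"])) for r in reader]

def wav_duration_seconds(path: str) -> float:
    """Duration of a sound measurement, derived from frame count and rate."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

series = load_time_series("DS_InhalerUsage_0042_160301_jsmith.csv")
audio_s = wav_duration_seconds("DS_InhalerUsage_0042_160301_jsmith.wav")
print(f"{len(series)} acceleration samples, {audio_s:.1f}s of audio")
```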
## 3.2 Datasets of Physiology Assessments
| **Name** | Dataset of Physiology Assessments |
| --- | --- |
| **Naming Prefix** | _DS_Physiology_ |
| **Summary** | This type of dataset will include the different types of physiological measurements collected within the project, such as data from wearable smart sensors (e.g. heart rate or respiratory rate). Furthermore, this category will also include the physiological assessments performed in the healthcare environment by trained practitioners (especially all assessments performed in the project's test and evaluation campaigns). |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | MyAirCoach aims to propose a novel modelling approach for the personalized as well as the overall understanding of the asthma disease. It is therefore of crucial importance to collect an adequate amount of data in order to define a modelling framework that effectively covers the most important aspects of the disease. Furthermore, the MyAirCoach project aims to develop decision support tools and risk prediction functionalities based on the physiological condition of the patient. In this regard, the collected data will be used for the training and validation of the algorithmic approaches enabling such functionalities. |
| **Related Work Packages** | **WP2** Test campaigns, measurements and clinical analysis; **WP4** Computational models, intelligent information processing and DSS module; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Patients' physiology can be assessed either manually by the corresponding doctors based on medical examinations, or automatically by the myAirCoach system based on the analysis of health data extracted by the utilized sensors. |
| **Nature and scale of data** | Data will be represented based on the openEHR framework, using the available archetypes when possible or developing new archetypes when required. |
| **Use by researchers and healthcare professionals** | The datasets in this category can support research in the field of medical decision support and can form the basis for the comparative validation of different algorithmic approaches. Furthermore, the aggregated data can be used for the validation or comparison of commercial medical decision support tools. Finally, this type of dataset can be used for the development of alternative modelling approaches of the asthma disease, or for the extension of the project outcomes to other respiratory medical issues. |
| **Indicative existing similar dataset** | No online available datasets have been identified in this category, for any method of sensing. |
| **Indicative scientific publications** | Although a variety of scientific publications are available on the study of physiological parameters in relation to asthma, a unified approach to using the diverse information of electronic medical records as envisioned by the MyAirCoach project has not been identified. |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents, along with metadata describing the demographics of the samples from which the data were generated and a detailed description of the data collection process. Indicative metadata include: (a) description of the experimental setup and procedure that led to the generation of the dataset, (b) documentation of the variables recorded in the dataset. The metadata will be in a format that may be easily parsed with open source software. |
| **Existing suitable standards** | The openEHR open standard specification for health informatics, describing the management, storage, retrieval and exchange of health data in electronic health records (EHRs) [10]; openEHR is currently identified as the main data representation framework to be followed by the MyAirCoach system. The HL7 framework (and related standards) for the exchange, integration, sharing, and retrieval of electronic health information [11]. |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | The data will only be accessible through the use of software components and products that support openEHR [10]. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | The collected data should be carefully anonymized to preserve the privacy of participants. All doctors' comments accompanying the assessments should be carefully reviewed, deleting any sections that could be used to identify the respective patient. |
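The review of free-text comments required above can be partially supported by software. The following sketch is purely illustrative: the patterns are simplistic placeholders and do not replace manual review or a vetted de-identification tool, but they show how obvious direct identifiers in doctors' comments could be flagged and redacted before a record leaves the clinical site:

```python
# Illustrative sketch: flag and redact obvious direct identifiers in
# doctors' free-text comments. The patterns below are hypothetical
# placeholders; a real deployment would use a vetted de-identification
# tool plus human review, as required by the project's ethics process.
import re

IDENTIFIER_PATTERNS = [
    re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+"),  # titled names
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),                # dates, e.g. DOB
    re.compile(r"\+?\d[\d\s-]{7,}\d"),                   # phone-like numbers
]

def contains_identifier(comment: str) -> bool:
    """Return True if the comment matches any identifier pattern."""
    return any(p.search(comment) for p in IDENTIFIER_PATTERNS)

def redact(comment: str) -> str:
    """Replace flagged spans with a neutral token."""
    for p in IDENTIFIER_PATTERNS:
        comment = p.sub("[REDACTED]", comment)
    return comment

print(redact("Reviewed by Dr. Smith on 01/02/2016, patient stable."))
# -> Reviewed by [REDACTED] on [REDACTED], patient stable.
```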
## 3.3 Datasets of Lifestyle Assessments
| **Name** | Dataset of Lifestyle Assessments |
| --- | --- |
| **Naming Prefix** | _DS_PhysicalActivity_ |
| **Summary** | This type of dataset will include different types of assessments and data related to the lifestyle behavior and activity levels of patients, as collected within the project during the measurement campaigns and through the sensing devices used by the project (i.e. smart health bracelets or smartphones). |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | MyAirCoach will try to contribute to the understanding of the effects of patients' lifestyle, and especially of their activity levels, on the asthma condition, and to outline significant correlations that may help doctors better support their patients and medical researchers understand the condition of asthma through a multi-parametric view. |
| **Related Work Packages** | **WP2** Test campaigns, measurements and clinical analysis; **WP4** Computational models, intelligent information processing and DSS module; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Patients' activity levels can be assessed either manually by the corresponding doctors based on specialized questionnaires, or automatically by the myAirCoach system based on the analysis of health data extracted by the utilized sensors. |
| **Nature and scale of data** | This type of dataset will include responses to questionnaires as used in the measurement campaigns or through the final version of the MyAirCoach system. In addition, this category may include measurements of activity as assessed by the sensing devices of the project, namely smart health bracelets and smartphone sensors. |
| **Use by researchers and healthcare professionals** | This dataset will help medical researchers identify correlations between the activity level of patients and the risk of asthma exacerbations. Furthermore, the collected data can be used for the validation and comparison of algorithmic approaches studying the activity levels of people through the use of acceleration measurements from modern smart devices. |
| **Indicative existing similar dataset** | No online available datasets have been identified in this category, for any method of sensing. |
| **Indicative scientific publications** | No directly relevant scientific publications have been identified. |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents and of all the parameters and procedures selected during the deployment of the campaigns, or of the characteristics of the sensors used for assessment through sensing devices. |
| **Existing suitable standards** | No existing standards identified |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | The data will only be accessible through the use of software components and products that support openEHR [10]. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | The collected data should be carefully anonymized to preserve the privacy of participants. All doctors' comments accompanying the assessments should be carefully reviewed, deleting any sections that could be used to identify the respective patient. |
## 3.4 Datasets of Nutritional Assessments
| **Name** | Dataset of Nutritional Assessments |
| --- | --- |
| **Naming Prefix** | _DS_Nutritional_ |
| **Summary** | This type of dataset will include different types of assessments related to the nutritional habits of asthma patients, as collected within the project (e.g. questionnaires). |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | MyAirCoach will try to contribute to the understanding of the role of the nutritional habits of asthma patients in the evolution of their disease, and to outline significant correlations that may help doctors better support their patients and medical researchers understand the condition of asthma through a multi-parametric view. |
| **Related Work Packages** | **WP2** Test campaigns, measurements and clinical analysis; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Data collected and conclusions drawn from the measurement campaigns of the project. |
| **Nature and scale of data** | This category of datasets will mainly include anonymized responses to questionnaires, as used in the measurement campaigns or assessed through the final MyAirCoach system. |
| **Use by researchers and healthcare professionals** | The datasets of this category aim to become a useful component for the study of the asthma condition by medical researchers, and hopefully to be extended by the input of other projects in the field of asthma-related research. |
| **Indicative existing similar dataset** | No online available datasets have been identified in this category. |
| **Indicative scientific publications** | No directly relevant scientific publications have been identified. |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents and of all the parameters and procedures selected during the deployment of the campaigns. |
| **Existing suitable standards** | No existing standards identified |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | The data will only be accessible through the use of software components and products that support openEHR [10]. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | The collected data should be carefully anonymized to preserve the privacy of participants. All doctors' comments accompanying the assessments should be carefully reviewed, deleting any sections that could be used to identify the respective patient. |
## 3.5 Datasets of Exhaled Nitric Oxide Measurements
| **Name** | Dataset of Exhaled Nitric Oxide Measurements |
| --- | --- |
| **Naming Prefix** | _DS_ExhaledNO_ |
| **Summary** | This type of dataset will include measurements and data collected regarding the concentration of Nitric Oxide (NO) in the exhaled breath of patients. In the framework of the MyAirCoach project, exhaled NO will be measured by the NIOX device developed by AEROCRINE. |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | Measurement of the fractional nitric oxide (NO) concentration in exhaled breath (FeNO) is a quantitative, non-invasive, simple, and safe method of measuring airway inflammation that provides a complementary tool to other ways of assessing airways disease, including asthma [12]. There are various devices used for measuring the amount of FeNO in the breath; the National Institute for Health and Care Excellence (NICE) has assessed three devices, including the NIOX device of AEROCRINE [13]. The MyAirCoach project aims to analyze the FeNO measurements of patients for the better understanding of their asthma condition, the personalization of medication approaches and the prediction of dangerous exacerbation incidents. |
| **Related Work Packages** | **WP2** Test campaigns, measurements, clinical analysis; **WP3** Smart sensor based inhaler prototype and WBAN; **WP4** Computational models, intelligent information processing and DSS module; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Raw data will be collected by the NIOX devices of AEROCRINE. |
| **Nature and scale of data** | The data of this category will be in the form of time series describing measured parameters during the exhalation of patients. CSV (Comma Separated Values) is the advised file format in this category, since it allows easy use of the data both through programming languages and spreadsheet software packages (e.g. OpenOffice Calc, Microsoft Excel); in this case, timestamps for every measurement or the sampling rate should be defined. |
| **Use by researchers and healthcare professionals** | The datasets in this category can support research in the field of biomedical signal processing and serve as a basis for the comparative validation of different algorithmic approaches for the analysis of FeNO measurements. Furthermore, if the collected data cover an adequate number of patients with accurately assessed levels of asthma control, the analysis of FeNO measurements can reveal important asthma indicators. |
| **Indicative existing similar dataset** | National Health and Nutrition Examination Survey [14] |
| **Indicative scientific publications** | Exhaled Nitric Oxide for the Diagnosis of Asthma in Adults and Children: A Systematic Review [15]; Exhaled nitric oxide levels to guide treatment for adults with asthma [16]; Exhaled nitric oxide levels to guide treatment for children with asthma [17] |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents, along with metadata describing the demographics of the samples from which the data were generated and a detailed description of the data collection process. Indicative metadata include: (a) description of the experimental setup and procedure that led to the generation of the dataset, (b) documentation of the variables recorded in the dataset. The metadata will be in a format that may be easily parsed with open source software. |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | No specific type of software required; reading capabilities for CSV files suffice. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | The collected data should be carefully anonymized to preserve the privacy of participants. |
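As an indicative re-use of this category, the following Python sketch reads a FeNO CSV time series and flags elevated values. The column names and file name are hypothetical, and the 25/50 ppb cut-offs loosely follow commonly cited adult clinical guidance; they are shown only to demonstrate how such a dataset could be consumed, not as project policy:

```python
# Illustrative sketch: classify FeNO measurements from a CSV export.
# Column names, file name and the cut-offs below are assumptions.
import csv

LOW_PPB, HIGH_PPB = 25.0, 50.0  # indicative adult cut-offs, not project policy

def classify(feno_ppb: float) -> str:
    """Bucket a FeNO value into low / intermediate / high."""
    if feno_ppb < LOW_PPB:
        return "low"
    if feno_ppb <= HIGH_PPB:
        return "intermediate"
    return "high"

with open("DS_ExhaledNO_0007_160512_mdoe.csv", newline="") as f:
    for row in csv.DictReader(f):
        value = float(row["feno_ppb"])
        print(row["timestamp"], value, classify(value))
```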
## 3.6 Datasets of Environmental Measurements
| **Name** | Datasets of Environmental Measurements |
| --- | --- |
| **Naming Prefix** | _DS_Environmental_ |
| **Summary** | This type of dataset will cover the assessment of environmental parameters such as air temperature and humidity, as well as levels of pollution and concentrations of common asthma irritants when possible. |
| **Positioning within the MyAirCoach project** | |
| **Relation to the project objective** | Asthma is a multi-parametric condition that is significantly affected by the conditions in the environment of patients. In order to cover this usually neglected view of the asthma disease, the MyAirCoach project aims to use measurements collected from the environment of patients in order to outline important indicators of asthma attacks and to contribute to the avoidance of such harmful incidents by warning patients and suggesting mitigation actions. |
| **Related Work Packages** | **WP2** Test campaigns, measurements, clinical analysis; **WP3** Smart sensor based inhaler prototype and WBAN; **WP4** Computational models, intelligent information processing and DSS module; **WP6** Evaluation |
| **Description of Dataset Category** | |
| **Origin of Data** | Raw data will be collected from online resources of environmental conditions and from sensing components of the MyAirCoach project. |
| **Nature and scale of data** | The data of this category will be in the form of time series describing the conditions in the patient's environment, or in a specific location. CSV (Comma Separated Values) is the advised file format in this category, since it allows easy use of the data both through programming languages and spreadsheet software packages (e.g. OpenOffice Calc, Microsoft Excel); in this case, timestamps for every measurement or the sampling rate should be defined. |
| **Use by researchers and healthcare professionals** | The datasets in this category can support research in the field of biomedical signal processing, as they hold the promise of correlating clinical indicators of asthma attacks with environmental parameters. |
| **Indicative existing similar dataset** | London Air Quality Network – King's College London [18]; Air Quality – The City of London [19]; Air quality information and campaigns – Manchester City Council [20]; GreatAir Manchester – The air quality website for the Greater Manchester region [21]; Weather data for research and projects – University of Reading [22]; Historical monthly open data for UK meteorological stations – Met Office [23]; UK Humidity open datasets [24] |
| **Indicative scientific publications** | Effect of Atmospheric Conditions on Asthma Control and Gene Expression in the Airway [25]; Synoptic weather types and aeroallergens modify the effect of air pollution on hospitalizations for asthma in Canadian cities [26] |
| **Standards and Metadata** | The dataset will be accompanied by detailed documentation of its contents, along with metadata describing the samples from which the data were generated and a detailed description of the data collection process. Indicative metadata include: (a) description of the experimental setup and procedure that led to the generation of the dataset, (b) documentation of the variables recorded in the dataset. The metadata will be in a format that may be easily parsed with open source software. |
| **Data Sharing** | |
| **Access type** | In accordance with the ethical and legal requirements regarding data obtained from human participants, the dataset will initially be available to the Consortium Members, and only after its careful anonymization. Personal information regarding the participants will be kept strictly private. As the project progresses and the collected data are used for the research and development processes of the project, the data will become available on the project's open data platform, after approval by the ethics committee of the MyAirCoach project. The inclusion of a subject's data in the public part of this dataset will be done on the basis of appropriate informed consent to data publication. |
| **Access procedure** | In the first stage of dataset sharing, as soon as the dataset reaches an anonymized form, it will be shared among the consortium through the wiki page of the project. In the second stage of dataset publication, the anonymized data will be published through the open data platform of the project, to be used by registered users and subsequently by any interested party aiming to use them for research and development. |
| _**Embargo periods (if any)**_ | No preset embargo periods; the appropriate time of publication will be selected based on the research and development timeline of the project, the protection of intellectual property and the proper safeguarding of the privacy of participants. |
| _**Technical mechanisms for dissemination**_ | The public part of the datasets in this category will be accessible through the project's open data platform. |
| _**Necessary S/W and other tools for enabling re-use**_ | No specific type of software required; reading capabilities for CSV files suffice. |
| _**Repository where data will be stored (institutional, etc., if already existing and identified)**_ | The dataset will be accommodated at the wiki page of the MyAirCoach project, as well as at an Open Data Platform of the final system. |
| **Archiving and preservation (including storage and backup)** | |
| _**For how long should the data be preserved?**_ | The public part of the dataset will be preserved online for as long as there are regular downloads within the online platform of the MyAirCoach system; after that, it will be made accessible on request, in order to reduce any issues regarding the overall performance of the system. The private part of the dataset will be preserved by the responsible MyAirCoach partner at least until the end of the project. |
| **_Approximated end volume of data_** | Unknown |
| _**Indicative associated costs for data archiving and preservation**_ | Probably two dedicated hard disk drives will be allocated for the dataset: one for the public part and one for the private part. There are no further costs associated with the preservation of the data. |
| _**Indicative plan for covering the above costs**_ | Small one-time costs covered within the MyAirCoach project. |
| **Ethical issues and requirements** | In the case that the data are related to a patient rather than to a specific geographic location, they should be carefully anonymized. |
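A typical re-use of this category is the temporal alignment of environmental samples with patient measurements taken at irregular times. The sketch below is illustrative only (the file names, the "pm10" and "humidity" columns, and the tolerance window are assumptions); it uses pandas.merge_asof to pair each patient record with the nearest environmental sample within 30 minutes:

```python
# Illustrative sketch: align an environmental time series with patient
# measurements by timestamp. File and column names are hypothetical.
import pandas as pd

env = pd.read_csv("DS_Environmental_0003_160301_station1.csv",
                  parse_dates=["timestamp"]).sort_values("timestamp")
patient = pd.read_csv("DS_InhalerUsage_0042_160301_jsmith.csv",
                      parse_dates=["timestamp"]).sort_values("timestamp")

# merge_asof requires both frames sorted by the key; "nearest" pairs each
# patient record with the closest environmental sample within the tolerance.
aligned = pd.merge_asof(patient, env, on="timestamp",
                        tolerance=pd.Timedelta("30min"),
                        direction="nearest")
print(aligned[["timestamp", "pm10", "humidity"]].head())
```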
## 3.7 Datasets of Patient Tomography
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of Patient Tomography
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_Tomography_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
A dataset of patient lung/chest tomographies will be collected and utilized
within the MyAirCoach project in order to support the understanding and
prediction of asthma condition of patients. Images resulting from modalities
such as Computed Tomography (CT) will be used to the understanding of
important asthma related parameters and will serve as a basis for the
simulation of airflows within the lung airways.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
The MyAirCoach project is aiming to utilize Computational Fluid Dynamics and
Fluid Particle Tracing for the understanding of the flow of inhaled medication
and
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
irritant particles inside the airways of the patient lungs. In this direction
the availability or realistic geometric models of human lungs will be of
fundamental importance in order to reach realistic results.
</th> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP2** Test Campaigns, measurements and clinical analysis
**WP4** Computational models, intelligent information processing and DSS
module
**WP6** Evaluation
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
There are three types of patient tomographies used for asthma: Computed
Tomography (CT), Positron Emission Tomography (PET) and Magnetic Resonance
Imaging
(MRI).
**Computed tomography (CT)** scan provides a high degree of anatomical detail
and has been used in the diagnosis of various airway diseases. **High
resolution computed tomography (HRCT)** is a special type of CT which allows
visualization of airways and parenchyma in much greater detail than
conventional CT or plain radiography. In asthma it is very useful particularly
when it is difficult to achieve an effective control of disease, and in severe
deterioration. **Positron Emission Tomography (PET)** can be also used in
asthma diagnosis and especially in the assessment of lung inflammation in
patients with atopic asthma,. **Chest Magnetic Resonance Imaging (MRI)** is a
more safe and non-invasive method providing even higher resolution than the
previously mentioned tomography approaches.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
Patient tomographies are actually images of patients’ lungs or chest and will
be in DICOM (Digital Imaging and Communications in Medicine) format providing
the capability to share medical images easily and quickly.
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The datasets in this category can support research in the field of medical
image processing and extraction of lung geometry and can form the basis for
the comparative validation of different algorithmic approaches.
Furthermore, the current type of datasets can be used for the extraction of
significant asthma indicators that are based on the geometry of the lungs, and
therefore contribute to the enhancement of modelling approaches and the
medical research of asthma.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
Open-Access Medical Image Repositories _** 27 ** _
Public Medical Image Databases – Cornell University _** 28 ** _
DICOM sample image sets _** 29 ** _
MRI and CT Data from The Visible Human Project _** 30 ** _
Bone and Joint CT-SCAN Data – International Society of Biomechanics _** 31 ** _
Sample DICOM Data - TRIPOD _** 32 ** _
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
A variety of scientific publications are available on the application of novel
image processing approaches to tomographic data and the extraction of the
geometry of the airways.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will follow the DICOM standard _** 33 ** _
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
In accordance with the ethical and legal requirements regarding data obtained
from human participants, the dataset will initially be made available to the
Consortium Members, and only after its careful anonymization. Personal
information regarding the participants will be kept strictly private.
As the project progresses and the collected data are used in the research and
development processes of the project, they will become available on the
project’s open data platform, after approval by the ethics committee of the
MyAirCoach project. The inclusion of a subject’s data in the public part of
this dataset will be done on the basis of appropriate informed consent to data
publication.
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
In the first stage of dataset sharing, and as soon as the dataset reaches an
anonymized form, it will be shared among the consortium through the wiki page
of the project.
In the second stage of dataset publication, the anonymized data will be
published through the open data platform of the project, in order to be used by
registered users and subsequently by any interested party aiming to use them
for research and development.
Anonymized DICOM images will also be considered for public release through the
DICOM Library _** 34 ** _ .
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
The data will only be accessible through the use of software components and
products that support the openEHR standard _** 10 ** _ .
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the wiki page of the MyAirCoach project,
as well as at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
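The ethics requirements above imply a de-identification step before any DICOM image leaves the clinical site. The following minimal sketch, assuming the open-source pydicom library is acceptable for this purpose, illustrates such a step; the list of tags to blank is illustrative only and is far shorter than a validated anonymization profile (e.g., DICOM PS3.15 Annex E).

```python
# Minimal de-identification sketch (illustrative, not the project's
# validated anonymization profile): read a DICOM slice with pydicom
# and blank direct identifiers before sharing.
from pathlib import Path

import pydicom

# Illustrative subset of directly identifying attributes.
IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName"]

def anonymize_slice(src: Path, dst: Path) -> None:
    ds = pydicom.dcmread(src)          # parse the DICOM file
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:
            setattr(ds, keyword, "")   # blank the identifying value
    ds.remove_private_tags()           # drop vendor-specific private tags
    ds.save_as(dst)                    # write the anonymized copy

if __name__ == "__main__":
    anonymize_slice(Path("ct_slice.dcm"), Path("ct_slice_anon.dcm"))
```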
## 3.8 Lung Simulation Results and Related Analysis
<table>
<tr>
<th>
**Name**
</th>
<th>
Lung Simulation Results and Related Analysis
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_LungSimulationResults_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
Results from the simulation of lungs, describing the flow of air within the
airways and the deposition of particles in the airway walls, together with
tables of numerical data and analysis results.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
One of the fundamental objectives of the MyAirCoach project is the
understanding of the breathing process of asthma patients and the
identification of statistically significant differences with healthy subjects.
In this direction, the outcomes and simulation results of these processes will
be published under the MyAirCoach open data repository, so as to be used by
the medical research community and stimulate more efforts in the same direction.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP2** Test Campaigns, measurements and clinical analysis
**WP4** Computational models, intelligent information processing and DSS
module
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
Simulation outcomes from the lung modeling within the tasks of WP4
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
Videos of particle tracing analysis and image visualizations of air dynamics
within lung airways
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The datasets of this category aim to become a useful resource for the
understanding of the breathing process of asthma patients and for the study of
the differentiating factors in the geometry of the airways that may increase
the possibility of an asthma attack due to an increased density of deposited
particles.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
Results included in scientific publications do not provide an adequate level
of detail, and full raw data results are usually excluded due to space
limitations. Furthermore, no online available videos of particle tracing have
been identified, except for sporadic dissemination articles containing videos.
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No aggregated online available resources have been identified in this
category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will be accompanied by detailed documentation of the selected
simulation parameters, as well as analysis results and conclusions. The data
will also be accompanied by a link to the open-access version of any
publications related to these results.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<th>
**Access type**
</th>
<th>
In accordance with the ethical and legal requirements regarding data obtained
from human participants, the dataset will initially be made available to the
Consortium Members, and only after its careful anonymization. Personal
information regarding the participants will be kept strictly private.
As the project progresses and the collected data are used in the research and
development processes of the project, they will become available on the
project’s open data platform, after approval by the ethics committee of the
MyAirCoach project. The inclusion of a subject’s data in the public part of
this dataset will be done on the basis of appropriate informed consent to data
publication.
</th> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
In the first stage of data sharing, and as soon as the data reach an
anonymized form, they will be shared among the consortium through the wiki
page of the project.
In the second stage of data publication, the anonymized data will be
published through the open data platform of the project, in order to be used by
registered users and subsequently by any interested party aiming to use them
for research and development.
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
Any type of video and image viewing software. Spreadsheet editing software may
be required when analysis results are also attached.
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the wiki page of the MyAirCoach project,
as well as at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset (i.e., the connection of a lung model with an
actual patient) will be preserved by the responsible MyAirCoach partner at
least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
## 3.9 Datasets of MyAirCoach Patient Models
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of MyAirCoach Patient Models
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_PatientModels_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
This type of dataset will cover the generalized patient models produced within
the project’s framework, which will be designed based on the results of the
measurement campaigns.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
One of the main objectives of MyAirCoach is the development of a personalized
and accurate approach for the modelling of the asthma condition of patients.
Parallel to this goal, generalized patient models will be created to help
medical researchers study asthma through the combination of asthma patients’
behavioural patterns and computational simulation approaches.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP2** Test Campaigns, measurements and clinical analysis
**WP4** Computational models, intelligent information processing and DSS
module
**WP6** Evaluation
</td> </tr>
<tr>
<th>
**Description of Dataset Category**
</th> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
Generalized models of asthma patients will be created within the MyAirCoach
project, as described in T4.1 “Patient modelling and formal representation”
and T4.3 “Multiscale computational modeling of airways and respiratory
system”, and based on the outcomes of WP2
“Test campaigns, measurements and clinical analysis”.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
The dataset could take the form of XML-based representations of the parameters
involved in the myAirCoach Virtual Models, in OWL or UsiXML (an illustrative
XML sketch is given at the end of this section). Furthermore, the clinical
component of the models could be based on the format of electronic health
records, such as the openEHR framework.
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The datasets of this category aim to become a useful resource for the study of
the asthma condition by medical researchers, on the basis of computational
approaches and simulation.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
No online available datasets have been identified in this category.
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No relevant scientific publications have been identified for this category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will be accompanied by detailed documentation of its contents
and of all the variables involved in the myAirCoach Patient Models.
Guidelines for Virtual Human Modelling derived from the VUMS cluster and the
Veritas Project _** 35 ** _ will be used, along with related XSD and XML
specifications. The adoption and extension of the existing representation
format (OWL or UsiXML) developed in the context of the VERITAS project will
also be investigated.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
In accordance with the ethical and legal requirements regarding data obtained
from human participants, the dataset will initially be made available to the
Consortium Members, and only after its careful anonymization. Personal
information regarding the participants will be kept strictly private.
As the project progresses and the collected data are used in the research and
development processes of the project, they will become available on the
project’s open data platform, after approval by the ethics committee of the
MyAirCoach project. The inclusion of a subject’s data in the public part of
this dataset will be done on the basis of appropriate informed consent to data
publication.
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
In the first stage of dataset sharing, and as soon as the dataset reaches an
anonymized form, it will be shared among the consortium through the wiki page
of the project.
In the second stage of dataset publication, the anonymized data will be
published through the open data platform of the project, in order to be used by
registered users and subsequently by any interested party aiming to use them
for research and development.
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
The data will only be accessible through the use of software components and
products that support XML-based data representations.
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the wiki page of the MyAirCoach project,
as well as at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
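As a purely illustrative sketch of what an XML-based representation of patient-model parameters could look like (the element and attribute names below are assumptions, not the actual VUMS/VERITAS or UsiXML schema):

```python
# Illustrative sketch of an XML patient-model fragment; element and
# attribute names are assumptions, not the actual project schema.
import xml.etree.ElementTree as ET

def build_patient_model(model_id: str, fev1_percent: float) -> ET.Element:
    root = ET.Element("PatientModel", attrib={"id": model_id})
    clinical = ET.SubElement(root, "ClinicalParameters")
    # FEV1 (% of predicted) as one example of a modelled clinical variable.
    ET.SubElement(clinical, "Parameter",
                  attrib={"name": "FEV1", "unit": "percent_predicted",
                          "value": str(fev1_percent)})
    return root

if __name__ == "__main__":
    model = build_patient_model("generalized-asthma-01", 78.5)
    print(ET.tostring(model, encoding="unicode"))
```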
## 3.10 Dataset of Educational and Training Content
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of Educational and Training Content
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_EducationAndTraining_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
Material related to the education of patients regarding the asthma disease,
its pathophysiology, symptoms, risk factors and indicators.
Material related to the training of patients regarding the proper use of
different types of inhalers.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
A very important parameter for the increased involvement of asthma patients in
the management of their disease is their understanding of its fundamental
nature and their ability to detect and correctly interpret symptoms of reduced
control. Furthermore, the efficient training of patients regarding the proper
use of their inhaler is expected to increase their adherence to the prescribed
medication and help them optimize their inhaler technique.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP1** User Needs, system requirements, architecture
**WP2** Test Campaigns, measurements and clinical analysis
**WP6** Evaluation
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
A dataset of educational and training content will be generated during the
myAirCoach project lifecycle in order to support patients and clinicians in
better asthma management. Registered users of the myAirCoach system will also
have the capability to upload similar content following the established
template.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
Educational content will include information about the asthma disease, such as
associated risks, allergens, physiology, etc. Training content will include
multimedia data concerning the proper management and treatment of the disease
(e.g. proper use of the inhaler). Data can be in the form of documents, pdf
files, videos, images, presentations, etc.
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The material collected under the current category will be useful for patients,
doctors, clinicians and health institutes, as well as for researchers
investigating issues related to asthma, helping them support patients in
effectively managing the disease and correctly using their medication.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
Asthma Handouts – Sutter Health _** 36 ** _
Asthma Education Materials – Neighborhood Health
Plan _** 37 ** _
Instructions for Inhaler and Spacer Use _** 38 ** _
Inhalation protocols _** 39 ** _
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No relevant scientific publications have been identified for this category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will be accompanied with detailed documentation of its contents.
Existing common formats for documents, pdf files, videos, images and
presentations will be utilized (e.g. pdf, doc, png).
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
Widely open to the entire asthma community
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
Open access within the MyAirCoach website and the open data platform of the
MyAirCoach System
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries (e.g. Microsoft Office, Open Office, Adobe Reader, …)
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the project’s website and wiki, as well as
at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
## 3.11 Dataset of Asthma Action Plans
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of Asthma Action Plans
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_ActionPlans_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
This dataset will include templates of action plans and will be used not only
for the design and development of the related electronically enhanced action
plans of MyAirCoach, but will also serve as a repository for practitioners to
use in their clinical practice.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
Action plans are the main tool for defining the methodology that a patient
should follow for the effective management of his/her asthma. The asthma
action plan shows the patient’s daily treatment, such as what kind of
medicines to take and when to take them. It also describes how to control
asthma long term and how to handle worsening asthma, or attacks. Moreover, the
plan explains when to call the doctor or go to the emergency room. Asthma
action plans are typically documents: traditionally provided in paper form,
they are based on a variety of templates chosen by doctors for their easy
understanding by patients.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP2** Test Campaigns, measurements and clinical analysis
**WP4** Computational models, intelligent information processing and DSS
module
**WP6** Evaluation
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
Templates of action plans will be collected during the measurement campaigns
of the project and also from online resources towards the formation of a
unified repository that will cover different medication approaches and also
different languages.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
Electronic documents of action plans or detailed description of interactive
electronically enhanced approaches (doc/docx or pdf files)
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The current dataset can be used by healthcare professionals in order to review
a spectrum of action plan templates and provide their prescribed medication
regimen using the template best fitted to the needs of the specific patient.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
No online available datasets have been identified in this category.
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No relevant scientific publications have been identified for this category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
There is no widely accepted template for asthma action plans. In this regard,
the MyAirCoach project aims to document the available approaches and provide a
detailed review comparing their strengths and weaknesses. Although this review
will serve as the guideline for the design of the related MyAirCoach
components, it is also expected to help healthcare professionals in their
daily practice.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
Widely open to the entire asthma community
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
Open access within the MyAirCoach website and the open data platform of the
MyAirCoach System
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
The dataset will be designed to allow easy reuse with commonly available tools
and software libraries (e.g. Microsoft Office, Open Office, Adobe Reader, …)
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the project’s website and wiki, as well as
at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
## 3.12 Datasets of Collected User Requirements
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of Collected User Requirements
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_UserRequirements_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
The design and implementation of the MyAirCoach system will be based on the
collection and the analysis of user requirements so as to increase the
usability and usefulness of the final system. The collected requirements, user
inputs and analysis results can be a valuable asset for the development of
devices and software systems supporting the self-management of asthma.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
The development of the MyAirCoach system will be based on a User Centered
Approach, which began with the initial collection of user requirements and
will continue throughout the project.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
Related to the entire project
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<td>
**Origin of Data**
</td>
<td>
Data collected and conclusions drawn from the User Centered Design approach of
the project.
</td> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
The current category may include all previously defined types of datasets of
user feedback as they will be assessed during the UCD processes defined in
D1.2 “User Requirements, use cases, UCD methodology and final protocols for
evaluation studies”
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The datasets of this category aim to become a useful resource for the
development of asthma-oriented self-management software tools and devices.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
No online available datasets have been identified in this category.
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No relevant scientific publications have been identified for this category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will be accompanied by detailed documentation of its contents and
of all the parameters and procedures selected during the deployment of the
user-feedback collection sessions.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
In accordance with the ethical and legal requirements regarding data obtained
from human participants, the dataset will initially be made available to the
Consortium Members, and only after its careful anonymization. Personal
information regarding the participants will be kept strictly private.
As the project progresses and the collected data are used in the research and
development processes of the project, they will become available on the
project’s open data platform, after approval by the ethics committee of the
MyAirCoach project. The inclusion of a subject’s data in the public part of
this dataset will be done on the basis of appropriate informed consent to data
publication.
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
In the first stage of dataset sharing, and as soon as the dataset reaches an
anonymized form, it will be shared among the consortium through the wiki page
of the project.
In the second stage of dataset publication, the anonymized data will be
published through the open data platform of the project, in order to be used by
registered users and subsequently by any interested party aiming to use them
for research and development.
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
Dependent on the dataset as it will be defined during the deployment of
measurement campaigns and the practice of the responsible clinical partner.
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the wiki page of the MyAirCoach project,
as well as at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
## 3.13 Datasets of MyAirCoach Measurement Campaigns
<table>
<tr>
<th>
**Name**
</th>
<th>
Datasets of MyAirCoach Measurement Campaigns
</th> </tr>
<tr>
<td>
**Naming Prefix**
</td>
<td>
_DS_MeasurementCampaigns_
</td> </tr>
<tr>
<td>
**Summary**
</td>
<td>
In the context of the project, two measurement campaigns are scheduled for the
initial clinical analysis of the asthma condition and for the evaluation and
optimization of the integrated MyAirCoach system. Three different pilot sites
in Europe (London, Manchester, Leiden) will participate in these processes and
help in the collection of important data and conclusions regarding asthma
management and the related parts of the healthcare system.
The current collection of datasets is intended to gather the produced results
in a common reference framework and to support easy access and future
reference.
</td> </tr>
<tr>
<td>
**Positioning within the MyAirCoach project**
</td> </tr>
<tr>
<td>
**Relation to the project objective**
</td>
<td>
The measurement campaigns of the MyAirCoach project will form the information
basis for the design and development of the majority of envisioned system
components and also for the validation of the overall usefulness of the final
integrated version of MyAirCoach.
</td> </tr>
<tr>
<td>
**Related Work Packages**
</td>
<td>
**WP2** Test Campaigns, measurements and clinical analysis
**WP6** Evaluation
</td> </tr>
<tr>
<td>
**Description of Dataset Category**
</td> </tr>
<tr>
<th>
**Origin of Data**
</th>
<th>
Data collected and conclusions drawn from the measurements campaigns of the
project.
</th> </tr>
<tr>
<td>
**Nature and scale of data**
</td>
<td>
The current category may include all previously defined types of datasets, in
addition to documents or any other types of data collected by the clinical
partners during the campaigns.
</td> </tr>
<tr>
<td>
**Use by researchers and healthcare professionals**
</td>
<td>
The datasets of this category aim to become a useful resource for the study of
the asthma condition by medical researchers, and will hopefully be extended by
the input of other projects in the field of asthma-related research.
</td> </tr>
<tr>
<td>
**Indicative existing similar dataset**
</td>
<td>
No online available datasets have been identified in this category.
</td> </tr>
<tr>
<td>
**Indicative scientific publications**
</td>
<td>
No relevant scientific publications have been identified for this category.
</td> </tr>
<tr>
<td>
**Standards and Metadata**
</td> </tr>
<tr>
<td>
**Existing suitable standards**
</td>
<td>
The dataset will be accompanied by detailed documentation of its contents and
of all the parameters and procedures selected during the deployment of the
campaigns.
</td> </tr>
<tr>
<td>
**Data Sharing**
</td> </tr>
<tr>
<td>
**Access type**
</td>
<td>
In accordance with the ethical and legal requirements regarding data obtained
from human participants, the dataset will initially be made available to the
Consortium Members, and only after its careful anonymization. Personal
information regarding the participants will be kept strictly private.
As the project progresses and the collected data are used in the research and
development processes of the project, they will become available on the
project’s open data platform, after approval by the ethics committee of the
MyAirCoach project. The inclusion of a subject’s data in the public part of
this dataset will be done on the basis of appropriate informed consent to data
publication.
</td> </tr>
<tr>
<td>
**Access procedure**
</td>
<td>
In the first stage of dataset sharing, and as soon as the dataset reaches an
anonymized form, it will be shared among the consortium through the wiki page
of the project.
In the second stage of dataset publication, the anonymized data will be
published through the open data platform of the project, in order to be used by
registered users and subsequently by any interested party aiming to use them
for research and development.
</td> </tr>
<tr>
<td>
_**Embargo periods (if any)** _
</td>
<td>
No preset embargo periods. The appropriate time of publication will be
selected based on the research and development timeline of the project, the
protection of intellectual property, and the proper safeguarding of the
privacy of participants.
</td> </tr>
<tr>
<td>
_**Technical mechanisms for dissemination** _
</td>
<td>
The public part of the datasets in this category will be accessible through
the project’s open data platform.
</td> </tr>
<tr>
<td>
_**Necessary S/W and other tools for enabling re-use** _
</td>
<td>
Dependent on the dataset as it will be defined during the deployment of
measurement campaigns and the practice of the responsible clinical partner.
</td> </tr>
<tr>
<td>
_**Repository where data will be stored (institutional, etc., if already
existing and identified)** _
</td>
<td>
The dataset will be accommodated at the wiki page of the MyAirCoach project,
as well as at an Open Data Platform of the final system.
</td> </tr>
<tr>
<td>
**Archiving and preservation (including storage and backup)**
</td> </tr>
<tr>
<td>
_**For how long should the data be preserved?** _
</td>
<td>
The public part of the dataset will be preserved online for as long as there
are regular downloads within the online platform of the MyAirCoach system.
After that, it will be made accessible on request, in order to reduce any
impact on the overall performance of the system.
The private part of the dataset will be preserved by the responsible
MyAirCoach partner at least until the end of the project.
</td> </tr>
<tr>
<td>
**_Approximated end volume of data_ **
</td>
<td>
Unknown
</td> </tr>
<tr>
<td>
_**Indicative associated costs for data archiving and** _
**_preservation_ **
</td>
<td>
Two dedicated hard disk drives will probably be allocated for the dataset: one
for the public part and one for the private part. Beyond this, there are no
costs associated with the preservation of the data.
</td> </tr>
<tr>
<td>
_**Indicative plan for** _
**_covering the above costs_ **
</td>
<td>
Small one-time costs covered within the MyAirCoach project.
</td> </tr>
<tr>
<td>
**Ethical issues and requirements**
</td> </tr>
<tr>
<td>
</td>
<td>
The collected data should be carefully anonymized to preserve the privacy of
participants.
All doctors’ comments accompanying the assessments should be carefully
reviewed, and any sections that could be used to identify the respective
patient should be deleted.
</td> </tr> </table>
# 4 MyAirCoach Open Access Platform
In order to provide the required framework for the sharing of information
generated by the MyAirCoach project, the knowledge portal of the project was
created, where all partners can upload and share documents and data within the
consortium. After the assurance of anonymity and the protection of the privacy
of patients, data can be published through the dissemination channels of the
project, and mainly through the project’s website.
Furthermore, open access to the MyAirCoach data should continue to be
available even after the completion of the project timeline and the deployment
of the MyAirCoach system. In this direction, an open access platform was
created to cover the above-described types of datasets.
## 4.1 MyAirCoach Open Access Demonstrator
The open access platform of MyAirCoach is designed as a component of the final
online platform of MyAirCoach and as such offers two fundamental views. The
first is addressed to registered members of the system, such as healthcare
professionals, who, in addition to the data of their own patients, will be
able to access anonymized health records and the knowledge generated within
the MyAirCoach project. Furthermore, these users will be able to upload data
to the open access framework and share them with the entire asthma and
research community.
The second view of the system is intended for users who need access to the
datasets and publications of MyAirCoach without registering. In this case,
only anonymized data will be made available to them, and they will not be able
to upload any type of data to the system.
Figure 1 illustrates the login page of the MyAirCoach platform, showing the
two different ways of accessing the data of the project.
**Figure 1: Login page of the MyAirCoach Platform**
After login, users are presented with the functionalities of the system, which
differ based on whether or not the user is registered with the MyAirCoach
system. The open data option leads to an introductory page describing the
purpose of the repository and how it can be used by anyone interested in the
study and understanding of asthma.
**Figure 2: Home page of the Open Data functionalities of MyAirCoach**
Selecting Data in the top menu of the web page leads the user to the main part
of the open data repository, where he/she can access the documents, datasets
and anonymized patient records.
**Figure 3: Documents repository of the MyAirCoach**
The documents repository of the platform will be used in order to access the
outcomes of the project; more specifically, it will include:
* **MyAirCoach deliverables** as they will be produced throughout the project, summarizing the important results and strategies selected
* **Scientific publications** as they will translate the results of the project into scientific knowledge to be used by medical researchers and information technology specialists
* **Dissemination material for asthma disease** as it will be used by the project for the dissemination of the objectives and results of the project, as well as for the growth of the MyAirCoach user base
Figure 3 illustrates the current version of the documents repository, as it
can also be found on the project’s website.
In order to support the usability, usefulness and accessibility of the data, a
metadata template was used for the description of every uploaded document, as
shown in Figure 4 (an illustrative sketch of such a record is given after the
figure). It should be underlined that only the creators of the document and
the system administrator have the right to edit and change the provided
information or to delete the document from the repository.
**Figure 4: Indicative example of document metadata**
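Purely as an illustration of such a metadata record (the field names below are assumptions, not the exact fields of the MyAirCoach template shown in Figure 4):

```python
# Illustrative sketch of a document-metadata record; field names are
# assumptions, not the exact MyAirCoach template.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DocumentMetadata:
    title: str
    creator: str            # only the creator or the administrator may edit/delete
    upload_date: date
    category: str           # e.g. deliverable, publication, dissemination material
    description: str
    keywords: list[str] = field(default_factory=list)

record = DocumentMetadata(
    title="Data Management Plan",
    creator="consortium-partner",
    upload_date=date(2016, 1, 1),
    category="deliverable",
    description="Data management plan of the project.",
    keywords=["DMP", "open data"],
)
print(json.dumps(asdict(record), default=str, indent=2))
```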
Furthermore, following the same approach, registered users are given the
ability to upload a new document to the platform, with the explicit
requirement of filling in the most important parameters of the document
description.
The following figures describe the same functionalities as above, but for the
case of the datasets that will be uploaded on the MyAirCoach open access
platform. More specifically, the currently available categories of datasets
include:
* **Inhaler usage measurements** as they relate to the measurements taken during the actual use of inhalers by patients
* **Physiology measurements** as they relate to the physiological assessments of healthcare professionals or measurements of physiological parameters through the use of sensing devices in the patient’s environment
* **Exhaled NiOX measurements** as they relate to the use of modern Fractional exhaled Nitric Oxide (FeNO) devices in the clinical environment or in the patient’s home environment
* **Nutritional assessments** as they relate to the collection of data related to the nutritional habits of patients or the guidelines of doctors
* **Lifestyle measurements** as they relate to the collection of data from questionnaires and sensing devices regarding the activity levels of patients, and also the advice of healthcare professionals in this area
* **Environmental measurements** as they relate to the collection of information regarding environmental conditions and pollution levels in the vicinity of asthma patients
* **Patient tomography data** as they relate to the 3D imaging of patient lungs and respiratory tract
* **Lung modelling results** from the simulations conducted within the project, which will provide useful information on the flow of air within the lungs as well as the deposition of particles in the airway walls
* **Patient models** as they relate to the modeling framework of MyAirCoach and the general and anonymized patient models produced within the project’s framework
* **Educational and training content** documents and interactive material aiming to educate patients regarding the condition of asthma and help them use their inhalers correctly
* **Asthma action plans** action plan templates in document form or interactive computer/smartphone based approaches for the description of the prescribed methodology for the effective self-management of asthma
**Figure 5: Dataset repository of the MyAirCoach platform**
**Figure 6: Template for the uploading of datasets on the MyAirCoach
platform**
Figure 7 presents the available open datasets of the MyAirCoach project, which
include results of modelling simulations and annotated sound datasets for the
training of machine learning algorithms for the detection of important steps
of inhaler technique.
Finally, the open data repository of MyAirCoach provides access to anonymised
Virtual Patient Records. Data of this type will be accessible directly through
the platform, and it will also be possible for users of the system to download
them in a standardized data format such as openEHR or HL7 (an illustrative
download sketch is given after Figure 7). The following figure presents a list
of test patient records created for the purposes of the current demonstrator.
**Figure 7: MyAirCoach repository of anonymised Virtual Patient records**
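Purely as an illustration of the intended download path (the endpoint URL and resource layout below are assumptions, not the platform’s published API), fetching an anonymised record as HL7 FHIR JSON could look as follows:

```python
# Illustrative sketch of downloading an anonymised patient record as
# HL7 FHIR JSON; the endpoint URL and resource layout are assumptions.
import requests

BASE_URL = "https://example.org/myaircoach/open-data"  # hypothetical endpoint

def fetch_patient_record(record_id: str) -> dict:
    # Request the record as a FHIR Patient resource in JSON form.
    resp = requests.get(f"{BASE_URL}/fhir/Patient/{record_id}",
                        headers={"Accept": "application/fhir+json"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    record = fetch_patient_record("test-0001")
    # Anonymised records carry no direct identifiers, only clinical fields.
    print(record.get("resourceType"), record.get("gender"))
```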
As seen in Figure 7, the user can access the Virtual Patient profiles.
Selecting a profile opens a view of the patient’s electronic health record,
separated into tabs of different health assessments (Figure 8). The summary
view sorts the assessments by time, aiming to allow doctors to better
understand the overall evolution of the patient’s health. The open data
platform is also used to visualise important parameters of the collected
datasets and to help understand how the MyAirCoach repository will evolve
throughout the timeline of the project.
**Figure 8: Profile view of the patient’s record**
The document and dataset charts in the Charts view include pie charts for the
visualisation of the relative percentage of the defined types of documents or
datasets, as well as the number of datasets uploaded as a function of time.
Figure 9 and Figure 10 show indicative examples of these visualisations, based
on the testing data and the evaluation of the platform before the integration
with the MyAirCoach system (an illustrative plotting sketch is given after the
figure captions below).
Furthermore, informative diagrams are also available as a summary of the
available anonymised patient records, as seen in Figure 11. As presented, the
initial version of the charts includes the distribution of demographic data
across the entire dataset (age and gender), as well as the distribution of
important clinical parameters as assessed in the last exam of the patient.
**Figure 9: Charts for the visualization of uploaded documents**
**Figure 10: Charts for the visualization of uploaded datasets**
**Figure 11: Charts for the visualization of available anonymised patient
records**
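A minimal sketch of the kind of charts shown in Figures 9, 10 and 11 (the category counts below are made-up placeholders, not actual repository statistics):

```python
# Illustrative sketch of the Charts view of Figures 9-11; the counts
# are made-up placeholders, not actual MyAirCoach statistics.
import matplotlib.pyplot as plt

dataset_counts = {          # hypothetical number of uploads per category
    "Inhaler usage": 12,
    "Physiology": 8,
    "Exhaled NO": 5,
    "Environmental": 9,
    "Lung modelling": 4,
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.pie(dataset_counts.values(), labels=dataset_counts.keys(), autopct="%1.0f%%")
ax1.set_title("Relative share of dataset types")
ax2.bar(dataset_counts.keys(), dataset_counts.values())
ax2.set_title("Uploaded datasets per category")
ax2.tick_params(axis="x", rotation=45)
fig.tight_layout()
plt.show()
```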
## 4.2 Conformance to EU Commission Guidelines
The following table summarizes the proposed solutions of MyAirCoach for
addressing the data management aspects described by the EU Commission.
**Table 9: Conformance with the EU Commission Data Management Plan Guidelines**
<table>
<tr>
<th>
**Aspect**
</th>
<th>
**MyAirCoach Solution**
</th> </tr>
<tr>
<td>
**Discoverable**
</td>
<td>
The documents and datasets of the project will be made available through a
diverse and side number of dissemination channels in order to support their
discoverability. Furthermore, all scientific publications of the project will
provide links to the respective datasets on the online open data platform of
MyAirCoach
</td> </tr>
<tr>
<td>
**Accessible**
</td>
<td>
The knowledge created within MyAirCoach, both in terms of documents and
datasets, will be easily accessible from the website of the project and the
open data repository as demonstrated in the previous section
</td> </tr>
<tr>
<td>
**Assessable and**
**intelligible**
</td>
<td>
The metadata provided for each document and dataset uploaded on the MyAirCoach
platform, together with the provided search tool, will allow their easy access
and understanding, so that they can be used by researchers and be subjected to
scientific review.
</td> </tr>
<tr>
<td>
**Usable beyond the original purpose for which it was**
**collected**
</td>
<td>
The inclusion of a diverse set of datasets and documents in the same platform
is expected to increase the visibility of the available data and also support
their use beyond their initial purpose and by researchers outside the
project’s consortium.
</td> </tr>
<tr>
<td>
**Interoperable to specific quality**
**standards**
</td>
<td>
The suggested file formats for every type of document and dataset reflect the
project’s objective to remove any standardization barriers that may prevent
users from accessing the data. Furthermore, the selected file formats are
supported by free software packages and open source programming libraries that
allow their use without additional costs.
</td> </tr> </table>
## 4.3 Conformance to Principles of Medical Information Security
The following table summarizes the proposed solutions of MyAirCoach for
addressing issues of medical information security.
**Table 10: Conformance with the Harvard Research Data Security Policy**
<table>
<tr>
<th>
**Principle**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
**Access control.**
</td>
<td>
The medical records of patients will only be accessible to their doctors and
to family members identified by the patient. Furthermore, after the informed
consent of the patient, an anonymized version of their record will be made
available.
</td> </tr>
<tr>
<td>
**Record opening**
</td>
<td>
MyAirCoach records will be accessible by the patients themselves. In addition,
the open data repository will also be available to all users.
</td> </tr>
<tr>
<td>
**Control**
</td>
<td>
The uploading or editing of data will be subject to a detailed permission
scheme, and all uploaded data will be attributed to the name of their creator.
</td> </tr>
<tr>
<td>
**Consent and notification**
</td>
<td>
Informed consent of patients will be required before any type of publication
or sharing of information within the consortium or with external users.
</td> </tr>
<tr>
<td>
**Persistence**
</td>
<td>
No health-record deletion functionality will be provided to any type of user.
If a user requires the deletion of his/her health record or uploaded data, a
request should be sent to the ethics committee of the project for review.
</td> </tr>
<tr>
<td>
**Attribution**
</td>
<td>
All uploaded data and changes will be marked with the user ID of the
respective creator. An audit trail will be kept when deletions are performed,
after the approval of the ethics committee of the project.
</td> </tr>
<tr>
<td>
**Information flow**
</td>
<td>
No information flow will be available between records within the MyAirCoach
framework.
</td> </tr>
<tr>
<td>
**Aggregation control**
</td>
<td>
Patients will have control over which users have access to their medical
record, either through the anonymized or the detailed view.
</td> </tr>
<tr>
<td>
**Trusted Computing Base**
</td>
<td>
Information technology experts will supervise the proper function of the
system and report any risks for privacy and data security.
</td> </tr> </table>
# 5 Conclusions
The purpose of the current deliverable of the MyAirCoach project is to support
the data management life cycle for all data that will be collected, processed
or generated by the project. The data management plan of the project consists
of a detailed analysis of the datasets that the partners of the MyAirCoach
project plan to collect and use. Foreseen datasets contain inhaler usage
measurements, physiology assessments, exhaled Nitric Oxide measurements,
environmental measurements, patient tomography data, virtual models etc.
Each dataset was analyzed separately, with emphasis on the nature of the data,
its accessibility and possible access types, as well as any ethical issues
that may arise from manipulating sensitive personal information. This
deliverable will serve as a guide for building the infrastructure to
efficiently manage, store and distribute the data collected, especially
concerning the portions of the MyAirCoach datasets that will be made publicly
available.
Furthermore, a detailed demonstrator of the project's online open data
platform is presented, showing the main functionalities implemented in the
project and how it is integrated with the online version of the MyAirCoach
system. Finally, the user-centred design and development processes of
MyAirCoach, together with the planned evaluation tasks, have allowed the open
data platform to be optimized for use by researchers outside the project
consortium and after the completion of project activities.
# Appendix 1: Deposit License Agreement
In order to guarantee the proper functioning of the online open data
repository of MyAirCoach, a Licence Agreement was prepared based on the
corresponding document of the 3TU Datacentrum.
The following parties are involved in this Licence Agreement:
1. The organization or person authorized to transfer and deposit the digital dataset/document(s), hereafter referred to as the Depositor;
2. The organization that is authorized to archive and manage the digital dataset/document(s), hereafter referred to as the Repository.
The Depositor is: the person or legal entity registered as such with the Repository.
The Repository is: the MyAirCoach open access repository.
This Licence Agreement is subject to the following provisions:
### 1\. Licence
1. The Depositor grants the Repository a non-exclusive license for digital data files, hereafter referred to as ‘dataset/document’.
2. The Repository is authorized to include the dataset/document in its data archive. The Repository shall transfer the content of the dataset/document to an available carrier, through any method and in any form.
3. The Repository is authorized to make the dataset/document (or substantial parts thereof) available to third parties by means of online transmission. In addition, the Repository has the right, on the instruction of third parties or otherwise, to make a copy of the dataset/document or to grant third parties permission to download a copy.
### 2\. The Depositor
1. The Depositor declares that he is a holder of rights to the dataset/document, or the only holder of rights to the dataset/document, under the Databases Act and, where relevant, the Copyright Act, or otherwise, and/or is entitled to act in the present matter with the permission of other parties that hold rights.
2. By depositing a dataset/document the Depositor does not transfer ownership. The Depositor retains the right to deposit the dataset/document elsewhere in its present or future version(s). The Depositor retains all moral rights in the dataset/document including the right to be acknowledged as creator.
3. The Depositor indemnifies the Repository against all claims made by other parties against the Repository with regard to the dataset/document, the transfer of the dataset/document, and the form and/or content of the dataset/document.
### 3\. The dataset/document
1. The dataset/document to which the license relates consists of all the databases, documentation and other data files and documents that form part of this dataset/document, which have been transferred by the Depositor.
2. The Depositor declares that the dataset/document corresponds to the specification provided.
3. The Depositor declares that the dataset/document contains no data or other elements that are contrary to European law.
4. The Depositor will supply the dataset/document by means of a method and medium deemed acceptable by the Repository.
### 4\. The Repository
1. The Repository shall ensure, to the best of its ability and resources, that the deposited dataset/document is archived in a sustainable manner and remains legible and accessible.
2. The Repository shall, as far as possible, preserve the dataset/document unchanged in its original software format, taking account of current technology and the costs of implementation. The Repository has the right to modify the format and/or functionality of the dataset/document if this is necessary in order to facilitate the digital sustainability, distribution or re-use of the dataset/document.
3. If the access category “Temporary restriction: Embargo”, as specified at the end of this Agreement, is selected, the Repository shall, to the best of its ability and resources, ensure that effective technical and other measures are in place to prevent unauthorized third parties from gaining access to and/or consulting the dataset/document or substantial parts thereof.
### 5\. Removal of dataset/documents
1. If sufficiently weighty grounds exist, the Repository has the right to remove the dataset/document from the archive wholly or in part, or to restrict or prevent access to the dataset/document on a temporary or permanent basis. The Repository shall inform the Depositor in such cases.
### 6\. Availability to third parties
1. The Repository shall make the dataset/document available to third parties in accordance with the access conditions agreed with the Depositor: "Open access", or the “Temporary restriction: Embargo”.
2. The Repository shall make the dataset/document available only to third parties who have agreed to comply with the General Conditions of Use.
3. Notwithstanding the above, the Repository can make the dataset/document (or substantial parts thereof) available to third parties:
* if the Repository is required to do so by legislation or regulations, a court decision, or by a regulatory or other institution
* if this is necessary for the preservation of the dataset/document and/or the data archive
* (to a similar institution) if the Repository ceases to exist and/or its activities in the field of data archiving are terminated
4. The Repository shall publish the metadata and make them freely available, on the basis of the documentation that the Depositor provides with the dataset/document. The term metadata refers to the information that describes the digital files.
5. The general information about the research and the metadata relating to the dataset/document shall be included in the Repository’s databases and publications that are freely accessible to all persons.
### 7\. Provisions relating to use by third parties
1. The Repository shall require third parties to whom the dataset/document (or substantial parts thereof) is made available to include in the research results a clear reference to the dataset/document from which data have been used. The reference must comply with the General Conditions of Use.
2. The Repository shall require parties to which a dataset/document is made available to grant a non-exclusive license for the dataset/document(s) they create using the dataset/document that has been made available.
### 8\. Liability
1. The Repository accepts no liability in the event that all or part of a dataset/document is lost.
2. The Repository accepts no liability for any damage or losses resulting from acts or omissions by third parties to whom the Repository has made the dataset/document available.
3. The Repository accepts no responsibility for mistakes, omissions, or legal infringements within the deposited dataset/document.
### 9\. Term and termination of the Agreement
1. This Agreement shall come into effect on the date on which the Repository receives the dataset/document (hereafter the deposit date) and shall remain valid for an indefinite period. If the repository decides not to include the dataset/document in its data archive, this Agreement is cancelled. The Repository notifies the Depositor of publication or non-inclusion of the dataset/document in its data archive. Cancellation of this Agreement is subject to a period of notice of six months, and notice shall be given in writing. It is possible to change the agreed access category at any time during the term of the Agreement.
2. Notwithstanding point (a), this Agreement shall end when the dataset/document is removed from the data archive in accordance with Article 5 of this Agreement.
3. If the Repository ceases to exist or terminates its data-archiving activities, the Repository shall attempt to transfer the data files to a similar organization that will continue the Agreement with the Depositor under similar conditions if possible.
### 10\. Jurisdiction
MyAirCoach open data platform is entitled, but not obliged, to act
independently against violations of the Copyright Act and/or any other
intellectual property right of the holder(s) of rights to the dataset/document
and/or the data from the dataset/document.
### 11\. Applicable law
European law is applicable to this agreement.
**The Depositor hereby agrees to the above provisions and the general code(s)
of conduct referred to in this document.**
# Appendix 2: Dataset of Inhaler Usage Measurements
Indicative datasets generated within the myAirCoach project are available
online through the open access platform via the following link
_https://myaircoach.iti.gr:40001/myaircoach/app/#/opendata_ . An indicative
set of Inhaler Usage Measurements is described in this Appendix.
Specifically, seven recordings were performed with the smart inhaler device as
a dataset example for the open data repository. In more detail:
1. inhaler_recording_1530271926.wav includes only two drug activation events,
2. inhaler_recording_1530271845.wav includes only an exhalation and an activation event,
3. inhaler_recording_1530272215.wav includes an exhalation, a drug activation, an inhalation and, after 6 seconds, an exhalation event,
4. inhaler_recording_1530272070.wav includes an exhalation, a drug activation, an inhalation and, after 3 seconds, an exhalation event,
5. inhaler_recording_1530278275.wav contains only an inhalation event in the first six seconds,
6. inhaler_recording_1530271999.wav includes an exhalation, a drug activation, an inhalation and, after 3 seconds, an exhalation event,
7. inhaler_recording_1530272143.wav includes an exhalation, a drug activation, an inhalation and, after 3 seconds, an exhalation event.
For the differentiation of inhaler events, four classes are defined: drug
actuation, denoted D and marked in red; exhalation, denoted E and marked in
green; inhalation, denoted I and marked in blue; and noise and other sounds,
denoted N and marked in gray.
<table>
<tr>
<th>
Class #
</th>
<th>
Class description
</th>
<th>
Class short
</th>
<th>
Class colour
</th> </tr>
<tr>
<td>
1
</td>
<td>
Drug actuation
</td>
<td>
D
</td>
<td>
Red
</td> </tr>
<tr>
<td>
2
</td>
<td>
Exhalation
</td>
<td>
E
</td>
<td>
Green
</td> </tr>
<tr>
<td>
3
</td>
<td>
Inhalation
</td>
<td>
I
</td>
<td>
Blue
</td> </tr>
<tr>
<td>
4
</td>
<td>
Noise & other sounds
</td>
<td>
N
</td>
<td>
Gray
</td> </tr> </table>
For the sake of self-completeness: for the identification of inhaler events, a
series of features is extracted, including the Continuous Wavelet Transform
(CWT), spectrogram, cepstrogram, Mel-Frequency Cepstral Coefficients (MFCC)
and Zero-Crossing Rate (ZCR). The classification of the feature vectors is
performed using a Random Forest classifier.
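The following minimal sketch illustrates this kind of pipeline on one-second audio blocks (the block length used in the figures below). It is not the project's actual implementation: only the MFCC and ZCR features are shown (CWT, spectrogram and cepstrogram features are omitted), and the file names, feature dimensions and classifier settings are illustrative assumptions.

```python
# Minimal sketch of the described pipeline, assuming mono WAV input.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

CLASSES = {1: "D (drug actuation)", 2: "E (exhalation)",
           3: "I (inhalation)", 4: "N (noise & other sounds)"}

def block_features(path, block_s=1.0):
    """Split a recording into one-second blocks; extract MFCC + ZCR per block."""
    y, sr = librosa.load(path, sr=None, mono=True)
    n = int(block_s * sr)
    feats = []
    for start in range(0, len(y) - n + 1, n):
        block = y[start:start + n]
        mfcc = librosa.feature.mfcc(y=block, sr=sr, n_mfcc=13).mean(axis=1)
        zcr = librosa.feature.zero_crossing_rate(block).mean()
        feats.append(np.concatenate([mfcc, [zcr]]))
    return np.array(feats)

# Training would use blocks labelled with the four classes above (annotated
# recordings assumed to be available), e.g.:
# X = np.vstack([block_features(f) for f in labelled_files])
# clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# pred = clf.predict(block_features("inhaler_recording_1530271926.wav"))
```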
The sound recordings are visualized and depicted according to the following
figures, where the different events detected (Drug actuation, Exhalation,
Inhalation, Noise & other sounds) are denoted with the colour scale described.
**Figure 12 : inhaler_recording_1530271926.wav includes only two drug
activation events. Each block corresponds to one second of audio.**
**Figure 13 : inhaler_recording_1530271845.wav includes only an exhalation and
an activation event. Each block corresponds to one second of audio.**
**Figure 14 : inhaler_recording_1530272215.wav including an exhalation, a drug
activation, and inhalation and after 6 seconds an exhalation event. Each block
corresponds to one second of audio.**
**Figure 15 : inhaler_recording_1530272070.wav including an exhalation, a drug
activation, and inhalation and after 3 seconds an exhalation event. Each block
corresponds to one second of audio.**
**Figure 16 : inhaler_recording_1530278275.wav contains only an inhalation
event in the first six seconds. Each block corresponds to one second of
audio.**
**Figure 17 : inhaler_recording_1530271999.wav including an exhalation, a drug
activation, and inhalation and after 3 seconds an exhalation event. Each block
corresponds to one second of audio.**
**Figure 18 : inhaler_recording_1530272143.wav including an exhalation, a drug
activation, and inhalation and after 3 seconds an exhalation event. Each block
corresponds to one second of audio.**
**[Source file: 0645_REVEAL_732599.md]**
# 1. Introduction
This document describes the data collected during the REVEAL project and how
it has been made Open Access in accordance with the H2020 Open Research Data
Pilot.
For more information about REVEAL data, including the FAIR data policy, data
volume, allocation of resources, data security and ethical aspects, see REVEAL
Deliverable D1.5 Data Management Plan.
# 2. Data Summary
## 2.1. Locomotion Data
In the first year of the REVEAL project, experiments were carried out during
game development to inform the choice of the most effective VR usability and
immersion techniques. Data was collected in order to evaluate alternative
virtual reality locomotion techniques for use in the REVEAL project.
Locomotion is essential to the creation of Environmental Narrative games (the
primary goal of REVEAL's technologies), but the resulting feelings of motion
sickness are an unresolved problem within the research literature.
Questionnaires were used to collect the preferences of participants in terms
of their self-reported levels of motion sickness and immersion for different
locomotion techniques, and the software recorded interaction data about
players' performance in the game. Approximately 12 MB of game interaction data
was recorded in JSON format, detailing all player movement within the game
together with positional co-ordinates that allow a player's overall movement
through the level to be visualised graphically. Questionnaire data was
collated and entered into an Excel spreadsheet for statistical analysis
(~50 KB). This data could be useful to other researchers interested in
performing a meta-analysis of studies investigating virtual reality locomotion
techniques.
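As an illustration of how such JSON interaction logs could be re-used, the short sketch below plots a player's trajectory through a level. The file name and JSON keys ("events", "position", "x", "z") are assumptions for illustration only; the actual REVEAL log schema is not reproduced here.

```python
# Illustrative only: JSON keys and file name are assumed, not REVEAL's schema.
import json
import matplotlib.pyplot as plt

with open("locomotion_session.json") as f:  # hypothetical session log
    session = json.load(f)

# Assume each logged event carries the player's position in level co-ordinates.
xs = [e["position"]["x"] for e in session["events"]]
zs = [e["position"]["z"] for e in session["events"]]

plt.plot(xs, zs, linewidth=0.8)
plt.xlabel("x (level co-ordinates)")
plt.ylabel("z (level co-ordinates)")
plt.title("Player movement through the level")
plt.show()
```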
## 2.2. Game Rating Data
A separate dataset was collected of publicly available game-rating data for
PlayStation VR titles on the PlayStation Network Store, consisting of
anonymous scores from players rating games from 0 to 5. This was collected to
complement an analysis of existing locomotion techniques employed by
commercial developers for PlayStation VR. It represents a snapshot of publicly
available data at a specific point in the evolution of a new console
peripheral (PSVR). This was less than 15 KB of data stored in Excel format and
would be useful to future researchers seeking to examine the evolution of
virtual reality platforms.
## 2.3. Knowledge Test Data
In the second year of the REVEAL project, Knowledge Test Data was collected to
evaluate the effectiveness of Educational Environmental Narrative games on
learning. A knowledge test was used to examine how well the player had learned
the topics in the in-game story. At least 50 MB of game interaction data was
recorded in JSON format, detailing all interactions that the player carries
out in the
game, including picking up items and unlocking elements of the story. Again,
questionnaire and knowledge test data was collated and entered into an Excel
spreadsheet for statistical analysis (~200 KB).
## 2.4. Game Evaluation and Interaction Data
At the same time as the knowledge test, additional questionnaires were used to
collect data from the players on how present they felt within the game, how
engaged they were and their cognitive interest, whilst the software recorded
interaction data about players' performance in the game.
## 2.5. Museum Studies Data
In year two of REVEAL there was also a series of museum-based studies to see
whether there is potential for using narrative-based video games in museums,
which are considered places for "informal learning", and what such technology
could bring to the visiting experience. The data collected consisted of
post-game interviews and questionnaires. The studies were designed and
conducted in accordance with the GDPR, with appropriate consent, Participant
Information Sheets and anonymization.
# 3. Open Access
The REVEAL data are shared using the Creative Commons, CC-BY licence:
_https://creativecommons.org/licenses/by/4.0/_
The data are stored in the Sheffield Hallam University Research Data Archive,
in accordance with REVEAL Deliverable D1.5 Data Management Plan.
## 3.1. Locomotion Data
The REVEAL Locomotion Data is Open Access at _https://shurda.shu.ac.uk/95/_
The associated publication is also Open Access via the same link:
HABGOOD, Jacob, MOORE, David, WILSON, David and ALAPONT, Sergio (2018). Rapid,
continuous movement between nodes as an accessible virtual reality locomotion
technique. In: IEEE VR 2018 Conference. IEEE, 371-378.
## 3.2. Game Rating Data
The REVEAL Game Rating Data is Open Access at _https://shurda.shu.ac.uk/94/_
The associated publication is also Open Access via the same link:
HABGOOD, Jacob, WILSON, David, MOORE, David and ALAPONT, Sergio (2017). HCI
Lessons From PlayStation VR. In: Proceeding CHI Play '17 extended abstracts.
New York, ACM, 125-135.
## 3.3. Knowledge Test Data
The scholarly article that this data relates to has not yet been published.
Upon publication of the article, the data will be made Open Access and this
document will be updated accordingly.
## 3.4. Game Evaluation and Interaction Data
The scholarly article that this data relates to has not yet been published.
Upon publication of the article, the data will be made Open Access and this
document will be updated accordingly.
## 3.5. Museum Studies Data
The scholarly article that this data relates to has not yet been published.
Upon publication of the article, the data will be made Open Access and this
document will be updated accordingly.
**[Source file: 0646_AMADEUS_737054.md]**
## 1.1. Purpose of the data collection/generation
The European Commission (EC) is running a flexible pilot under Horizon 2020
called the Open Research Data Pilot (ORD pilot). This pilot is part of the
Open Access to Scientific Publications and Research Data Program in H2020. The
ORD pilot aims to improve and maximise access to and re-use of research data
generated by Horizon 2020 projects, and takes into account the need to balance
openness and protection of scientific information, commercialisation and
Intellectual Property Rights (IPR), privacy concerns, and security, as well as
data management and preservation questions. The EC provided a document with
guidelines for project participants in the pilot. The guidelines address
aspects like research data quality, sharing and security.
According to the guidelines, participating projects are required to develop a
Data Management Plan (DMP). The DMP describes the types of data that will be
generated or gathered during the project, the standards that will be used, the
ways the data will be exploited and shared for verification or reuse, and how
the data will be preserved. In addition, beneficiaries must ensure their
research data are findable, accessible, interoperable and reusable (FAIR).
**This document describes the initial Data Management Plan (DMP) for the
AMADEUS project**. It addresses project administration data collected as part
of the execution and management of disruptive research that could reach the
market in the coming years.
AMADEUS DMP will be set according to the article 29.3 of the Grant Agreement
“Open Access to Research Data”. Project participants must deposit their data
in a research data repository and take measures to make the data available to
third parties. The third parties should be able to access, mine, exploit,
reproduce and disseminate the data. This should also help to validate the
results presented in scientific publications. In addition, Article 29.3
suggests that participants will have to provide information, via the
repository, about tools and instruments needed for the validation of project
outcomes.
On the other hand, Article 29.3 incorporates the obligation of participants to
protect results, security obligations, obligations to protect personal data,
and confidentiality obligations prior to any dissemination. It concludes: "
_As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex I, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access_."
In line with this, the AMADEUS consortium will decide what information is made
public according to aspects such as potential conflicts with
commercialization, IPR protection of the knowledge generated (by patents or
other forms of protection), or risks to achieving the project
objectives/outcomes.
AMADEUS DMP will follow the structure of a DMP given by the DMPonline tool.
**The AMADEUS Consortium will use the repository ZENODO** (an OpenAIRE and CERN
collaboration). Motivations to use this repository are:
* It allows researchers to deposit both publications and data, while providing tools to link them.
* In order to increase the visibility and impact of the project, the AMADEUS Community has been created in ZENODO, so all beneficiaries of the project can link their uploaded papers to the Community.
* The repository has backup and archiving capabilities.
* ZENODO assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citable.
* The repository allows different access rights.
All the above makes ZENODO a good candidate as a unified repository for all
foreseen project data (presentations, publications, images, videos and
measurement data) from AMADEUS.
## 1.2. OBJECTIVES OF AMADEUS PROJECT
The targeted breakthrough of the AMADEUS project is to develop novel materials
and devices that enable a new kind of ultra-high-temperature latent heat
thermal energy storage (UHT-LHTES) system, using a new kind of phase change
materials (PCMs) with extremely high latent heat (2-4 MJ/kg) and melting point
(up to 2000 ºC). To this end, the Consortium will investigate the
silicon-boron (Si-B) system, exploring different SixBy stoichiometries and
additives (e.g. Mn, Cr, etc.) to find the optimum Si-B based alloy for LHTES.
The Consortium will also address the most relevant technological challenges
concerning the use of these materials, such as the refractory linings of the
container, advanced thermal insulation casing, and a new kind of solid-state
conversion device able to operate at those ultra-high temperatures: the
(still conceptual) hybrid thermionic-photovoltaic (TIPV) converter. The
specific objectives of the project are:
* **Objective 1** \- Synthesize Si-B based alloys with latent heat above 2 MJ/kg optimized for LHTES applications
* **Objective 2** \- Fabricate an optimal PCM casing enabling long term reliability at temperatures up to 2000 ºC
* **Objective 3** \- Demonstrate the proof of concept of a thermionic-photovoltaic converter
* **Objective 4** \- Demonstrate the proof of concept of the novel energy storage concept
## 1.3. Dissemination Policy
The AMADEUS project is pioneering research that is of key importance to the
energy storage industry. Effective exploitation of the research results
depends on the proper management of intellectual property. Therefore, the
AMADEUS consortium will follow the strategy outlined in Figure 1. When the
research findings result in a groundbreaking innovation, the members of the
consortium will consider two forms of protection: withholding the data for
internal use, or applying for a patent in order to commercially exploit the
invention and obtain financial gain in return. In the latter case,
publications will therefore be delayed until the patent filing. On the
contrary, if the technology developments are not going to be withheld or
patented, the results will be published for knowledge-sharing purposes.
Figure 1: Schema on the dissemination policy of the Consortium.
The scientific and technical results of the AMADEUS project are expected to be
of maximum interest for the scientific community. Through the duration of the
project, all intended disseminations or protections must be noticed 45 days in
advance in order to get the permission or objection from the Consortium. Once
the relevant protections (e.g. IPR) are secured, the AMADEUS partners may
disseminate (subject to their legitimate interests) the obtained results and
knowledge to the relevant scientific communities through contributions in
journals and international conferences in the field of Materials Science,
Energy or Physics.
## 1.4. Types, formats, size and origin of data generated/collected
In AMADEUS project, Open Research Data Pilot applies to two types of data:
* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Other data, including associated metadata, to be developed by the project. This refers to specifications of the AMADEUS system and the services it supports, the datasheets and performances of the technological developments of the project, the field trial results with the KPIs (Key Performance Indicators) used to evaluate the system performances, meeting presentations, demonstrator videos, pictures from set-ups, lab records, schemes, technical manuals, among others.
The format of the data generated will be mainly electronic, but some primary
data records may also be handwritten, for example when beneficiaries keep lab
notes on a daily basis. The AMADEUS project will ensure that all electronic
files follow the FAIR policy, as explained later. To ensure accessibility, the
main formats of electronic data will be those included in the IANA MIME Media
Types.
The expected size of the data generated will be reasonable according to the
normal practices of the beneficiaries' research; we do not expect to deal with
large files.
Regarding the origin of the data, the majority will come from the software
used for simulations, experimental setups, and the equipment used.
## 1.5. Data Utility
Open Research Data from AMADEUS will allow other researchers to make use of
that information to validate the results, thus serving as a starting point for
their own investigations, as intended by the EC through its open access
policy.
## 1.6. Consortium Awareness
The DMP is used by AMADEUS partners as a reference for data management
(providing metadata, storing and archiving) within the project each time new
project data is produced.
The project partners are introduced to the DMP and its use as part of WP1
activities. Relevant questions from partners will also be addressed within
WP1. The work package will also provide support to the project partners on
using ZENODO as the data management tool.
The coordinator will ensure the Research Open Data policy by verifying
periodically the information uploaded to ZENODO repository and AMADEUS
community.
# FAIR DATA
With the endorsement of the FAIR principles and their incorporation into the
guidelines for DMPs in H2020, the FAIR principles hereby serve as a template
for full-lifecycle data management. Although the FAIR principles do not
constitute an independent lifecycle data model, they ensure that the most
important components of a full lifecycle model are covered.
As stated before, our Consortium will use the ZENODO repository for Open
Research Data purposes, since ZENODO facilitates linking publications and
underlying data through persistent identifiers and data citations. Therefore,
the FAIR data policy we are following is that established by this repository.
## Making data findable, including provisions for metadata
### Discoverability: Metadata Provision
Metadata are created to describe the data and aid discovery. According to
ZENODO repository all metadata is stored internally in JSON-format according
to a defined JSON schema. Metadata is exported in several standard formats
such as MARCXML, Dublin Core, and DataCite Metadata Schema (according to the
OpenAIRE Guidelines).
Beneficiaries will complete all metadata that is mandatory for the repository,
as well as metadata recommended by the repository but mandatory for the
AMADEUS Consortium, and may provide additional metadata where appropriate.
Table 1 outlines a general overview of the metadata.
**Table 1. Information on metadata generated at ZENODO.**
<table>
<tr>
<th>
**Metadata**
</th>
<th>
**Category**
</th>
<th>
**Additional Comments**
</th> </tr>
<tr>
<td>
Type of data
</td>
<td>
Mandatory
</td>
<td>
</td> </tr>
<tr>
<td>
DOI
</td>
<td>
Mandatory
</td>
<td>
If not filled, ZENODO will assign a DOI automatically. Keep the same DOI if
the document is already identified with one.
</td> </tr>
<tr>
<td>
Publication Date
</td>
<td>
Mandatory
</td>
<td>
</td> </tr>
<tr>
<td>
Title
</td>
<td>
Mandatory
</td>
<td>
</td> </tr>
<tr>
<td>
Authors
</td>
<td>
Mandatory
</td>
<td>
</td> </tr>
<tr>
<td>
Description
</td>
<td>
Mandatory
</td>
<td>
A description of the dataset including the procedures followed to obtain those
results (e.g., software used for simulations, experimental setups, equipment
used, etc.)
</td> </tr>
<tr>
<td>
Keywords
</td>
<td>
Mandatory
</td>
<td>
Frequently used keywords, plus AMADEUS
</td> </tr>
<tr>
<td>
Access rights
</td>
<td>
Mandatory
</td>
<td>
Open Access. Other permissions can be considered when appropriate.
</td> </tr>
<tr>
<td>
Terms for Access Rights
</td>
<td>
Optional
</td>
<td>
Creative Commons licences will be detailed here. AMADEUS will open the data
under the Attribution, ShareAlike, Non-Commercial and No-Derivatives licences.
</td> </tr>
<tr>
<td>
Communities
</td>
<td>
Mandatory
</td>
<td>
_Next Generation Materials and Solid State_
_Devices for Ultra High Temperature Energy_ _Storage and Conversion_
</td> </tr>
<tr>
<td>
Funding
</td>
<td>
Mandatory
</td>
<td>
European Union (EU), Horizon 2020, FETOPEN, Grant Nº 737054, AMADEUS
</td> </tr> </table>
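As a hedged illustration of how a beneficiary might supply these fields programmatically, the sketch below uses the public Zenodo REST API (see https://developers.zenodo.org). The access token, file name, community identifier and metadata values are placeholders, not project specifics.

```python
# Sketch of a Zenodo deposit carrying the Table 1 metadata; values are placeholders.
import requests

params = {"access_token": "YOUR_ZENODO_TOKEN"}  # placeholder token

# 1. Create an empty deposition.
dep = requests.post("https://zenodo.org/api/deposit/depositions",
                    params=params, json={}).json()

# 2. Upload the data file into the deposition's file bucket.
with open("dataset.csv", "rb") as fp:  # hypothetical file
    requests.put(f"{dep['links']['bucket']}/dataset.csv", data=fp, params=params)

# 3. Attach the mandatory metadata of Table 1.
metadata = {"metadata": {
    "upload_type": "dataset",                    # type of data
    "title": "Example AMADEUS dataset",
    "publication_date": "2018-01-01",
    "creators": [{"name": "Surname, Name", "affiliation": "Beneficiary"}],
    "description": "Procedure followed to obtain the results "
                   "(software used for simulations, setups, equipment).",
    "keywords": ["AMADEUS"],                     # plus frequently used keywords
    "access_right": "open",
    "communities": [{"identifier": "amadeus"}],  # assumed community identifier
    "grants": [{"id": "737054"}],                # H2020 grant number
}}
requests.put(dep["links"]["self"], params=params, json=metadata)

# 4. Publish; ZENODO mints a DOI automatically if none was supplied.
requests.post(dep["links"]["publish"], params=params)
```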
### Identifiability of data
Beneficiaries will maintain the Digital Object Identifier (DOI) when the
publication/data has already been identified by a third party with this
number. Otherwise ZENODO will provide each dataset with a DOI.
### Naming convention
**AMADEUS does not establish a naming convention for uploading data to the
repository**. Since the mandatory metadata in the ZENODO repository includes a
description of the dataset, we ensure that third parties can access the data
easily by properly describing each dataset. Likewise, our policy of not
changing data names will keep the data consistent and traceable in each
author's local back-up devices.
### Approach towards search keyword
ZENODO allows keywords to be introduced for each dataset. Each author will
introduce relevant keywords, and **all datasets generated by the Consortium
will also be identified with the keyword AMADEUS**.
## Making data openly accessible
### Types of data made openly available
**The underlying data related to the scientific publications will be made
publicly available by means of ZENODO.** This will allow that other
researchers can make use of that information to validate the results, thus
being a starting point for their investigations, as expected by the EC through
its open access policy.
Since a large amount of data is generated in a European project such as
AMADEUS, the Consortium will make a selection of the relevant information,
disregarding data that is not needed for the validation of the published
results.
**Beneficiaries will be able to choose, in addition to the data underlying
publications, what other data they make available in open access mode**. The
reason for this optionality is to ensure the proper development of the
research, since a project pursuing a novel energy storage system could face
exploitation difficulties in the medium term if certain data were opened to
third parties.
For “other data” (those not linked to a paper) the beneficiary must
communicate to the rest of the consortium its intent to open the data through
ZENODO according to Art 29.1 of GA “A beneficiary that intends to disseminate
its results must give advance notice to the other beneficiaries of — unless
agreed otherwise — at least 45 days, together with sufficient information on
the results it will disseminate”.
### Methods or software tools needed to access the data
All our data are openly accessible since we use standard formats according to
the IANA MIME Media Types.
### Deposition of data and associated metadata, documentation and code
As explained in 1.1 we will use ZENODO repository for the purpose of data,
metadata and documentation deposition.
## Making data interoperable
Interoperability means allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc. (i.e. adhering to standards for
formats, being as compliant as possible with available (open) software
applications, and in particular facilitating re-combination with different
datasets from different origins).
The AMADEUS Consortium ensures the interoperability of the data by using data
in standard formats according to the IANA MIME Media Types, and by using the
ZENODO repository with its standardized JSON schema for metadata.
## Increase data re-use (through clarifying licenses)
Data (with accompanying metadata) will be shared no later than the publication
of the main findings and will be available online in ZENODO. The maximum time
allowed for sharing underlying data is the maximum embargo period established
by the EC: six months.
AMADEUS open research data will be free to re-use under the Creative Commons
Attribution, ShareAlike, Non-Commercial and No-Derivatives licences.
Data will be accessible for re-use without limitation during and after the
execution of the AMADEUS project. After the end of the project, the data will
remain in the repository.
Publications and/or other data related to the project but generated after its
end will also be uploaded.
# ALLOCATION OF RESOURCES
AMADEUS will use ZENODO to make data openly available, so there is no cost for
the infrastructure. The cost of the personnel devoted to data management is
charged under the Programme.
Each beneficiary will devote its own personnel resources to uploading data to
ZENODO and following the instructions contained in this document. The
Coordinator will name a person responsible for verifying and controlling the
data opened by partners, ensuring that the policy described in this document
is fulfilled.
# DATA SECURITY
ZENODO has a technical infrastructure that ensures data security and long-term
preservation. The interested reader can check the terms at
http://about.zenodo.org/infrastructure/.
# ETHICAL ASPECTS
No ethical aspects affect the AMADEUS research, so we consider all data to be
outside ethical considerations.
On the other hand, in order to guarantee that no sensitive data are archived
without the consent of the Consortium, partners will apply the good practice
of communicating any kind of disclosure 45 days beforehand.
**Disclaimer**
‘Next Generation Materials and Solid State Devices for Ultra High Temperature
Energy Storage and Conversion' AMADEUS is a Collaborative Project (CP) funded
by the European Commission under Horizon 2020. Contract: 737054, Start date of
Contract: 01/01/2017; Duration: 36 months (3 years).
The authors are solely responsible for this information and it does not
represent the opinion of the European Commission. The European Commission is
not responsible for any use that might be made of the data appearing therein.
**[Source file: 0654_EFICONSUMPTION_712179.md]**
# 1\. EXECUTIVE SUMMARY
The data management plan (DMP) is a written document that describes the data
the consortium expects to acquire or generate during the course of the
EFICONSUMPTION project, under _Article 29_ of Grant Agreement Number 712179.
According to this Grant Agreement, open access to scientific publications is
mandatory (_Article 29.2_), with the exemption shown in _Article 29.3_ [1].
The DMP is a living document that will evolve during the course of the
project. It defines how the data will be managed, described, treated and
stored, during and after the end of the project. In addition, it describes the
mechanisms for sharing and preserving the data and for using the results at
the end of the project.
A description of the existing data relevant to the project and a discussion of
their integration will be provided, together with a description of the
metadata related to the subject of the project.
The document will provide a description of how the results will be shared,
including access procedures, embargo periods and technical mechanisms for
dissemination. It also foresees whether access will be provided via the two
main routes of open access to publications: self-archiving and open access
publishing.
Finally, the document will show the procedures for archiving and preserving
the data, including the procedures foreseen once the project has finished.
The application of this document will be the responsibility of CYSNERGY. This
document will be updated throughout the lifecycle of the EFICONSUMPTION
project, extending the information given now or including new issues or
changes in the project procedures. The DMP will be updated as a deliverable
whenever significant changes arise (new datasets, changes in consortium
policies or external factors) [1]. As a minimum, the DMP will be updated and
sent as part of the mid-term report and the final report. Every time the
document is updated, the draft version will be sent to all project partners
for review. Once approved, the definitive version will be sent to the
consortium.
# 2\. DATA SET REFERENCE, NAME AND DESCRIPTION
This section gives a description of the information to be gathered, and the
nature and scale of the data generated or collected during the project. These
data are listed below.
EFICONSUMPTION's project parameters and data are divided into confidential and
non-confidential information (the relationship between the electrical
quantities listed is illustrated in the sketch after the table):
<table>
<tr>
<th>
_Confidential Information_:
</th>
<th>
_Non-Confidential Information_:
</th> </tr>
<tr>
<td>
* Names of the Proof of Concept (POC) entities and their industrial details: company names and addresses.
* POC companies' contact persons and titles.
* Typology of the electricity supply contract of POC customers.
* Electricity bills and unitary consumptions of the main receptors.
* Production scheduling of customers.
* Lay-outs of customers' plants.
* Electrical diagrams of customers.
* Units of production per month.
* Units of services per month.
* Specific consumption of electricity in kWh/unit per period of time.
* Specific algorithms for electrical energy efficiency modelling, calculated during the different POCs.
* Technical and financial data of the POC companies and entities.
</td>
<td>
* Anonymous examples of real graphs, showing the electrical energy efficiency with 3D surfaces and 2D lines.
* Anonymous examples of algorithms for electrical energy efficiency modelling and saving measurement.
* Accumulated energy consumption and expenses per period of time.
* Instant apparent, active and reactive power of real installations.
* Instant and accumulated CO2 emissions with their reductions.
* Stored instant current per phase.
* Stored instant voltage per phase.
* Stored instant cos phi per phase.
* Conclusions and recommendations for the improvement of electrical energy efficiency in industrial plants and buildings.
* Technical actions to be implemented in the different sectors.
</td> </tr> </table>
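As a brief illustration of how the instant electrical quantities above relate to one another, the following sketch computes apparent, active and reactive power for a single phase; the numeric values are examples only, not project data.

```python
# Single-phase power relations: S = V*I, P = S*cos(phi), Q = sqrt(S^2 - P^2).
import math

v_rms = 230.0    # stored instant voltage per phase [V] (example value)
i_rms = 12.5     # stored instant current per phase [A] (example value)
cos_phi = 0.92   # stored instant cos phi per phase (example value)

s = v_rms * i_rms            # apparent power [VA]
p = s * cos_phi              # active power [W]
q = math.sqrt(s**2 - p**2)   # reactive power [var]

# Accumulated energy over a period is the integral of active power; with one
# sample of active power p_k [W] every dt seconds: kWh = sum(p_k) * dt / 3.6e6
print(f"S = {s:.0f} VA, P = {p:.0f} W, Q = {q:.0f} var")
```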
# 3\. STANDARDS AND METADATA
The main objectives of the EFICONSUMPTION project are not scientific
publications. However, _Open Access (OA) will be implemented in peer-review
publications (scientific research articles published in academic journals),
conference proceedings and workshop presentations carried out during and after
the end of the project. In addition, non-confidential PhD or Master Thesis and
presentations will be disseminated in OA._
The publications issued during the project will include the grant number, the
acronym and a reference to the H2020 Programme funding, including the
following sentence:
"Project EFICONSUMPTION has received funding from the European Union's Horizon
2020 research and innovation programme under grant agreement No 712179". In
addition, all documents generated during the project will include in their
metadata the project reference: EFICONSUMPTION H2020 712179.
Each paper will include the terms Horizon 2020 and European Union (EU), the
name of the action, the acronym and the grant number, the publication date,
the duration of the embargo period (if applicable) and a persistent identifier
(e.g. DOI).
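A hedged sketch of what such bibliographic metadata could look like is given below, following the OpenAIRE `info:eu-repo` convention for EC funding references; the field names follow Dublin Core terms, and the DOI and dates are placeholders.

```python
# Illustrative bibliographic metadata for one publication; values are placeholders.
publication_metadata = {
    "dc:title": "Example EFICONSUMPTION publication",
    "dc:identifier": "https://doi.org/10.xxxx/example",  # persistent identifier
    "dc:date": "2017-01-01",                             # publication date
    "dcterms:available": "2017-07-01",                   # end of embargo, if any
    # OpenAIRE funding reference: EC programme H2020, grant agreement 712179
    "dc:relation": "info:eu-repo/grantAgreement/EC/H2020/712179",
    "dc:description": ("Project EFICONSUMPTION has received funding from the "
                       "European Union's Horizon 2020 research and innovation "
                       "programme under grant agreement No 712179."),
}
```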
The purpose of the requirement on metadata is to maximise the discoverability
of publications and to ensure the acknowledgment of EU funding. Bibliographic
data mining is more efficient than mining of full text versions. The inclusion
of information relating to EU funding as part of the bibliographic metadata is
necessary for adequate monitoring, production of statistics, and assessment of
the impact of Horizon 2020 [2].
# 4\. DATA SHARING
All the scientific publications of a Horizon 2020 project are automatically
aggregated to the OpenAIRE portal (provided they reside in a compliant
repository). Each project has its own page on OpenAIRE (_Figure 1_) featuring
project information, related project publications and datasets, and a
statistics section. CYSNERGY will ensure that any scientific papers derived
from the EFICONSUMPTION project are available as soon as possible in OpenAIRE,
taking embargo periods (if any) into account.
_Figure 1_: EFICONSUMPTION information on the OpenAIRE website (www.openaire.eu)
CYSNERGY will periodically check that the list of publications is complete. If
there are articles not listed, the portal must be notified.
The steps to follow to publish an article and the subsequent OA process are:
* The final peer-reviewed manuscript is added to an OA repository.
* The reference and the link to the publication should be included in the publication list of the progress Report.
When the publication is ready, the author has to send it to the coordinator,
who will report to the EC through the publication list included in the
progress reports. Once the EC has been notified by the coordinator about the
new publication, the EC will automatically aggregate it at the OpenAIRE
portal.
# 5\. ARCHIVING AND PRESERVATION
In order to achieve an efficient access to research data and publications in
EFICONSUMPTION project, Open Access (OA) model will be applied. Open access
can be defined as the practice of providing on-line access to scientific
information that is free of charge to the end-user. Open Access will be
implemented in peer-review publications (scientific research articles
published in academic journals), conference proceedings and workshop
presentations carried out during and after the end of the project. In
addition, non-confidential PhD or Master Thesis and presentations will be
disseminated in OA.
Open access is not a requirement to publish, as researchers will be free to
publish their results or not. This model will not interfere with the decision
to exploit research results commercially e.g. through patenting [3].
The publications made during the EFICONSUMPTION project will be deposited in
an open access repository (including those that are not intended to be
published in a peer-review scientific journal). The repository used by the
project partners will be:
* ZENODO, which will be used by CYSNERGY
As stated in the Grant Agreement (Article 29.3): _“As an exception, the
beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action´s main objective, as described
in Annex I, would be jeopardized by making those specific parts of the
research data openly accessible. In this case, the data management plan must
contain the reasons for not giving access”._
This rule will be followed only in some specific cases, where it is necessary
to preserve the main objective of the project.
_Figure 2_: Scheme of the decision on IP protection (research results feed the
dissemination and data management plans; a decision to disseminate/share leads
to publications and deposited research data via Gold or Green OA with access
and use free of charge, while a decision to exploit/protect leads to patenting
or another form of protection, and/or restricted access and use).
According to the “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020” [2], there are two main routes of open access
to publications:
* **Self-archiving (also referred to as “green open access”):** in this type of publication, the published article or the final peer-reviewed manuscript is archived (deposited) by the author \- or a representative - in an online repository before, alongside or after its publication. Some publishers request that open access be granted only after an embargo period has elapsed.
* **Open access publishing (also referred to as “gold open access”):** in this case, the article is immediately provided in open access mode as published. In this model, the payment of publication costs is shifted away from readers paying via subscriptions. The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges, APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or to the funding agency supporting the research.
As a conclusion, the process involves two steps, firstly CYSNERGY will deposit
the publications in the repositories and then they will provide open access to
them.
Depending on the open access route selected self-archiving (Green OA) or open
access publishing (Gold OA), these two steps will take place at the same time
or not. In case of self-archiving model, embargo period will have to be taken
into account (if any).
## 5.1. Green Open Access (self-archiving)
This model implies that researchers deposit the peer-reviewed manuscript in a
repository of their choice (e.g. ZENODO). Depending on the journal selected,
the publisher may require an embargo period of between 6 and 12 months.
The process to follow for EFICONSUMPTION project is:
1. CYSNERGY prepares a publication for a peer-review journal.
2. After the publication has been accepted for publishing, the partner will send the publication to the project coordinator.
3. CYSNERGY will notify the publication details to the EC, through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The publication may be stored in a repository (with restricted access) for a period of between 6 and 12 months (embargo period) as a requirement of the publisher.
5. Once the embargo period has expired, the journal gives Open Access to the publication and the partner can give Open Access in the repository.
_Figure 3_: Steps to follow in Green open access publishing within the
EFICONSUMPTION project (the partner prepares the publication, notifies the
project coordinator and stores the publication in a repository with restricted
access; after the embargo period the partner gives open access to the
publication; the coordinator notifies the EC, leading to publication in
OpenAIRE).
## 5.2. Gold Open Access (open access publishing)
When using this model, the costs of publishing are not borne by readers but
are paid by the authors. This means that these costs will be borne by the
university or research institute with which the researcher is affiliated, or
by the funding agency supporting the research. These costs can be considered
eligible during the execution of the project.
The process foreseen in the EFICONSUMPTION project is:
1. The partner prepares a publication for a peer-reviewed journal.
2. When the publication has been accepted for publishing, the partner sends the publication to the project coordinator.
3. The coordinator will notify the publication details to the EC through the publication list of the progress report. Then, the publication details will be updated in OpenAIRE.
4. The partner pays the corresponding fee to the journal and gives Open Access to the publication. This publication will be stored in an Open Access repository.
_Figure 4_: Steps to follow in Gold open access publishing within the
EFICONSUMPTION project (the partner prepares the publication and notifies the
project coordinator; the partner pays the fees and gives open access to the
publication; the coordinator notifies the EC, leading to publication in
OpenAIRE).
# 6\. BIBLIOGRAPHY
1. European Commission, "Guidelines on Data Management in Horizon 2020", Version 2.1, 15 February 2016.
2. European Commission, "Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020", Version 2.0, 30 October 2015.
3. European Commission, "Fact sheet: Open Access in Horizon 2020", 9 December 2013.
**[Source file: 0655_QuIET_767187.md]**
The initial version of the Data Management Plan (DMP) will be updated as
required during the course of the project.
# RESEARCH DATA COLLECTION AND PROCESSING
## Types of data produced
Fundamentally, four types of data will be generated within the QuIET project:
1. Data related to the chemical synthesis and initial characterization of molecules and molecular assemblies that are predicted to present quantum interference (QI) effects leading to high thermopower (associated with WP1). This basically includes the synthetic protocols describing precisely the procedures and reactions followed to obtain new compounds, as well as the analytical data that characterize the new compounds, prove their identity and document their purity (usually mass spectrometry, elemental analysis, NMR, UV-VIS spectroscopy, IR spectroscopy, X-ray solid structure analysis, etc.).
2. Data related to the thorough electrical and thermoelectric characterization of the molecules and molecular assemblies synthetized in 1 (associated mainly to WP2). This includes additional analytical and experimental data (e.g. numerical data, tables, signals, images, graphs, spectra, etc.) generated using sophisticated techniques such as STM, AFM, MCBJ, etc. that document and describe the compound physico-chemical properties and performance.
3. Theoretical and modelling data files related to the structure and electron and phonon contributions to the thermoelectric performance of molecules and molecular assemblies (associated mainly to WP3).
4. Data related to the performance and manufacturability of optimized device configurations (associated mainly to WP4). This includes documentation on process flow and various characterization data on prototype devices.
## Data collection
The data of how a compound is synthesized and the details of each conductance,
thermopower and thermal conductance measurements are initially collected in
laboratory notebooks which clearly document the design of experiments, the
step-by-step procedures followed, materials, equipment and methodology used as
well as the results and conclusions obtained. Laboratory notebooks are
reasonably organized handwritten personal drafts with sequentially marked
pages where drawings, calculations, text, plots, images or even different
ideas and the reasons for choices amongst alternatives are collated in
chronological order. Each entry is marked with the date and is sufficiently
detailed to allow other researchers to reproduce what was performed at any
later date.
In the synthesis of novel molecules, in many cases, a particular step of a
reaction sequence is repeated numerous times while the reaction conditions are
gently altered in order to optimize the outcome of the reaction. Once a
successful procedure is found, it is repeated several times in order to check
for reproducibility. After the reaction turns out to be reliable and
reproducible, the documentation in the lab notebook serves as draft for
writing a synthetic protocol. The analytical data identifying and describing a
new compound are obtained from the corresponding analytical tool (NMR-
spectrometer, mass spectrometer, elemental composition analyser, UV-VIS
spectrometer, IR-spectrometer, etc.). The raw data are usually FIDs, which are
transformed into a plot of the recorded spectrum. From the plot of the
spectrum the listed analytical data are extracted and examined.
Similarly, the determination of the conductance and thermopower of a molecular
junction requires, in addition to a proper deposition of the molecules on the
electrodes, a large number of repeated measurements to ensure reproducibility.
The protocols used in data analysis are described in detail in the laboratory
notebooks.
Standard protocols for equipment use are typically optimised and followed by
users to ensure that data and results obtained are reliable and consistent. In
addition, project staff is adequately trained in the techniques they operate
to ensure they generate **high quality and standardized data** , which is a
prerequisite for meaningful use and re-use of data.
The data generated and the methods used will be scrutinised in weekly lab
meetings to ensure procedures have been carried out correctly, that
appropriate controls have been applied, that all information is suitably
recorded and that therefore there can be a high level of confidence in the
data generated. Quality assurance will further be strengthened through the
discussions held at the QuIET consortium meetings.
## Data processing
The different types of data obtained will be processed using the following
standard software:
* Text: ASCII, Word, PDF;
* Numerical: ASCII, STATA, Excel, Origin, Matlab;
* Multimedia: ppt, jpeg, tiff, mpeg, mcad, Quicktime, PaintShop;
* Models: 2PACD;
* Software: Gaussian 09, Dalton, GaussView 5.0, MathCad, Mathematica, Matlab, Python;
* Domain-specific: CIF (for crystallography files); Instrument-specific: Labview Data Format
# RESEARCH DATA STORAGE AND PRESERVATION
## Data organization and labelling
All datasets generated during the project will be suitably and systematically
organised in a database. A directory structure of folders and subfolders will
be created for each series of experiments performed to allow any project team
member to easily find and track files. Within each directory, we will store
all relevant information and details related to an experiment (i.e. metadata),
such as:
1. Chemical data and procedures followed. These will be linked with the lab notebook number and page number where details of the experiment are recorded (likewise, file names/locations of analytical readings will also be recorded in lab notebooks to allow electronic records to be easily linked to the raw data);
2. A PowerPoint report with details of all the analysis performed for each experiment and main results obtained, including on which instrument, which student(s) were in control of the experiment, exact dates, and directories on computers where the raw data is stored to allow the corresponding raw data records to be easily found;
3. Published references related to the experiment.
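Purely as a hypothetical illustration (the directory and function names below are invented, not a QuIET convention), a per-experiment skeleton holding the metadata items listed above could be created with a few lines of Python:

```python
from pathlib import Path

def create_experiment_dir(root: str, experiment: str) -> Path:
    """Create a per-experiment directory skeleton for the metadata items
    described above: chemical data, the analysis report and references."""
    exp = Path(root) / experiment
    for sub in ("chemical-data", "analysis-report", "references", "raw-data"):
        (exp / sub).mkdir(parents=True, exist_ok=True)  # idempotent creation
    return exp

create_experiment_dir("QuIET-datasets", "WP4_2019-03_junction-A")
```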
Files and folders will have an appropriately descriptive title/name. As QuIET
is a large project involving large research institutions and well-established
teams which already have advanced data policies, workflows and naming
conventions established, it is not reasonable to apply a one-model-fits-all
approach. Therefore, all partners have agreed to use a set of
essential minimum information (including project, WP, date, institution and
experiment description) which shall ensure cross-platform coherence.
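Again as an illustration only – the helper and field values below are hypothetical, not an agreed QuIET standard – these minimum fields could be assembled into a machine-friendly file name along the following lines:

```python
from datetime import date

def build_filename(project: str, wp: str, institution: str,
                   description: str, day: date, extension: str) -> str:
    """Assemble a file name from the agreed minimum metadata fields:
    project, work package, date, institution and experiment description."""
    stamp = day.strftime("%Y%m%d")        # sortable date keeps files ordered
    desc = description.replace(" ", "-")  # avoid spaces in file names
    return f"{project}_{wp}_{stamp}_{institution}_{desc}.{extension}"

# Example output: QuIET_WP4_20190312_IBM_junction-conductance.csv
print(build_filename("QuIET", "WP4", "IBM",
                     "junction conductance", date(2019, 3, 12), "csv"))
```

Keeping the date in a sortable form and avoiding spaces in names are small choices that ease searching across platforms.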
The final datasets selected for deposit in the chosen data repository (see
details in section 4) will also be accompanied by a README file listing the
contents of the files and outlining the structure and file-naming convention
used, so that potential users can easily understand the database itself.
In this way, we will ensure a high-quality, standardized and traceable
workflow throughout the data generation process, complying with the Findable
and Interoperable principles of the EC for data management 1 . For
Accessibility, Use and Re-use, please refer to section 4 of this document.
## Data storage and security
All electronic data generated during research activities (e.g. Data recorded
by a particular spectrometer or analytical tool) are stored on the equipment
itself and/or in secure central servers which can usually exclusively be
accessed by the person/group that has recorded the data or relevant
collaborators (password protected limited access).
Data in lab notebooks will also be recorded in electronic form and backed up
regularly to secure against loss or damage of the notebook.
All QuIET partners have servers with high security standards enabling data to
be stored safely. Maintenance of datasets stored in partners’ servers will be
carried out according to each of the partner’s institutions’ backup policy. In
addition to that, data is stored and backed-up regularly in portable hard
drives for which each Principal Investigator (PI) is responsible.
Data will be stored indefinitely, provided that storage space is available to
the PI. The amount of data produced is manageable and affordable, as the cost
of storage has dropped steadily in the past few years. In case of space
restrictions, data will be eliminated 10 years after publication.
## Data preservation
In addition, in the cases where the consortium decides to share sets of
results/data generated by the project (see additional details on the QuIET
open access policy in section 4), the QuIET consortium has decided to transfer
these datasets to the ZENODO repository ( _www.zenodo.org_ ).
This online repository is hosted at CERN and was created through the European
Commission’s OpenAIRE project with the aim of uniting all the research results
arising from EC funded projects. It is an easy-to-use and innovative service
that enables researchers based at institutions of all sizes to share results
in a wide variety of formats across all fields of science. Namely, ZENODO
enables users to:
* Easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science;
* Display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission;
* Easily access and reuse shared research results;
* Define the different licenses and access levels that will be provided for the different datasets.
Furthermore, ZENODO assigns a unique Digital Object Identifier (DOI) to all
publicly available uploads, which is particularly relevant for the research
data (in the case of publications, this identifier will be assigned by the
publisher), in order to make content easily findable and uniquely citable (The
DOI can be included as part of a citation in publications, allowing the
datasets underpinning a publication to be swiftly identified and accessed).
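For illustration, such deposits can also be scripted against ZENODO's public REST API. The sketch below (Python with the `requests` library) follows the API's documented create–upload–publish flow; the access token, file name and metadata values are placeholders, not project specifics.

```python
import requests

BASE = "https://zenodo.org/api"
TOKEN = "..."  # personal access token (placeholder)

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions",
                    params={"access_token": TOKEN}, json={}).json()

# 2. Upload the dataset file into the deposition's file bucket.
with open("dataset.zip", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/dataset.zip",
                 data=fh, params={"access_token": TOKEN})

# 3. Attach minimal descriptive metadata (illustrative values).
meta = {"metadata": {"title": "QuIET example dataset",
                     "upload_type": "dataset",
                     "description": "Example deposit.",
                     "creators": [{"name": "Doe, Jane"}]}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=meta)

# 4. Publish; ZENODO then mints the DOI returned in the response.
pub = requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish",
                    params={"access_token": TOKEN}).json()
print(pub.get("doi"))
```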
ZENODO will also ensure secure and sustainable short- and long-term archiving
and storage of research data, as these are placed in the same cloud infrastructure
as research data from CERN's Large Hadron Collider. It uses digital
preservation strategies to store multiple online replicas and backs up data
files and metadata on a nightly basis. Items deposited in ZENODO, including
all the scientific publications, will be archived and retained for the
lifetime of the repository, which is currently the lifetime of the host
laboratory CERN (with an experimental programme now established for at least
the next 20 years).
Therefore, this repository fulfils the main requirements imposed by the EC for
data sharing, archiving and preservation of the data generated in QuIET.
# RESEARCH DATA SHARING AND USE
As the QuIET project performs pioneering research that will be of key
importance to implementing the QI functionality in technologically-relevant
platforms, it is essential to have an effective intellectual property
management and exploitation strategy. To this aim, an _exploitation and impact
board_ , chaired by Dr. Gotsmann from IBM, has been set up to monitor and
identify the most relevant outcomes of the QuIET project and implement a
knowledge management system, which can be summarized as follows:
All experiments performed and associated data and metadata (creator, date,
subject, file names, format, brief description and relationship among them,
methodologies, workflow and analysis performed, as explained above) will be
recorded and described in internal reports containing text, calculations,
drawings, plots, and images which will be circulated among consortium members
for analysis and discussion (at present, email is used; if needed, a place for
sharing on IBM Box will be created). If the research findings are ground-
breaking results or innovations, the members of the consortium may decide:
1. To **withhold from publication** **the data and/or results** **with exploitation potential** for a) internal use & further research purposes; b) patent filing (or other forms of IPR) or c) direct or indirect (through transfer or licensing) commercial exploitation. In this case, publication and disclosure of results (or parts of them) will be therefore delayed until the owner(s) deem it convenient as established in the Consortium Agreement.
2. To actively **publish, disseminate and share** the knowledge and the most relevant results/ processed data generated. These results/data will mainly be disclosed and disseminated through publication in high impact journals and/or though oral/poster presentations in relevant conferences and workshops.
Additional data which document, support and validate the research
findings (i.e. metadata) will also be provided in the **supporting information
of the publication**. Therefore, the most important data will be publicly
available as long as the journals and/or publishing companies exist, which
ensures availability of the data in the long term. This will allow
validation/replication of our research results presented in the scientific
publications and enable new discoveries with our data.
Other raw data and data from the lab notebooks will not be published nor made
publicly accessible and will remain in the group of the responsible Principal
Investigator: first, because raw and lab notebook data are usually not in a
form that would allow them to be made publicly available; secondly, because
they are part of a group's intellectual property, and it would not make sense
to document ongoing research publicly, as different groups worldwide are
permanently competing for the best synthetic approaches and scientific concepts.
Usually an idea is only shared when at least the first successful steps towards
its realization are published. Therefore, the team's approach and/or restrictions
to data sharing will be outlined in each publication and data sharing will be
analysed on a case-by-case basis.
The data that will not be protected or exploited by consortium members
and which can be useful for the research community will be made available via
the ZENODO centralised repository, as stated above. It shall only be used for
research, training and non-profit purposes. Therefore, requesters will be
asked to explain the use they will make of the data and to sign a
dataset license agreement limiting its usage and distribution (how and on what
terms each dataset can be accessed will also be indicated in the project data
repository).
Where data or resources are provided to an external user, it will be
stipulated that the external user, prior to publishing any work using the
data/resources from any of the QuIET members, must consult the IP owner(s) to
determine whether it would be justified for the applicable PI and project team
members to be included as authors on that publication.
# OPEN ACCESS TO PUBLICATIONS
In the case of peer-reviewed publications, beneficiaries must also ensure open
access (free of charge online access) for any user. There are two ways
considered by the EC to comply with this requirement: Publishing directly in
open access mode (‘gold open access’) or self-archiving a machine-readable
electronic copy of the published article or the final peer-reviewed manuscript
accepted for publication in an online repository (‘green open access’) 2 .
The QuIET consortium will give priority to publishing results in high "impact
factor" journals and will then decide on the modality of open access to be
provided depending on the conditions of the editor. We will use the
Sherpa/Romeo tool ( _http://www.sherpa.ac.uk/romeo/index.php_ ;
_http://www.sherpa.ac.uk/romeoinfo.html_ ) to verify each journal's policy on
the version of the article for which deposit is permitted (see colour code
below).
Whenever possible, QuIET articles will be deposited in an open repository
('green' OA) as soon as possible and at the latest on publication. Most
publishers allow authors to deposit a copy of the article in a repository,
sometimes with a period of restricted access (embargo). In Horizon 2020, the
embargo period imposed by the publisher must be no longer than 6 months (or 12
months for the social sciences and humanities). This embargo period will
therefore be taken into account by the QuIET consortium when choosing the open
access modality for the fulfilment of the open access obligations established
by the EC. In other cases, gold open access will be applied and the costs of
the 'article processing charges' (APCs) will be covered by the project budget.
The table below reflects the conditions of the main journals where the QuIET
publications will be sent:
For depositing scientific publications, there are several options
considered/suggested by the EC in the frame of the Horizon 2020 programme:
* Institutional repository of the research institutions involved (e.g. _http://eprints.lancs.ac.uk_ )
* Subject-based/thematic repository
* Centralised repository (e.g. the ZENODO)
As with data depositing, the QuIET consortium has chosen ZENODO (
_www.zenodo.org_ ) as the central repository for scientific publications.
Additionally, according to the EC recommendation, whenever possible the QuIET
consortium will retain the ownership of the copyright for their work through
the use of a ‘License to Publish’, which is a publishing agreement between
author and publisher. With this agreement, authors can retain copyright and
the right to deposit the article in an Open Access repository, while providing
the publisher with the necessary rights to publish the article.
In line also with the Grant and Consortium Agreements, a beneficiary that
intends to disseminate its results must give advance notice to the other
beneficiaries of at least 45 days, together with sufficient information on the
results it will disseminate. Any other beneficiary may object within 30 days
of receiving notification, if it can show that its legitimate interests in
relation to the results or background would be significantly harmed. In such
cases, the dissemination may not take place unless appropriate steps are taken
to safeguard these legitimate interests. Moreover, all publications and
associated metadata will acknowledge the project's EU funding by including the
following text:
_“This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 767187”._
# OTHER DATA AND OUTCOMES GENERATED BY THE PROJECT
This section describes the QuIET strategy and practices regarding the
provision of Open Access to dissemination and communication materials (e.g.
website, social media, flyers, brochures, videos, public presentations,
newsletters, press releases, tutorials, and other audio-visual material) and
public deliverables produced. All these items will be available at the QuIET
project public website as well as at the ZENODO repository. The CORDIS website
will also host all public deliverables of the project as submitted to the
European Commission: _https://cordis.europa.eu/project/rcn/211921_en.html_
All other deliverables, marked as confidential in the Grant Agreement, will
only be accessible for the members of the consortium and the European
Commission services. The Project Coordinator will store a copy of them.
QuIET does not handle personal data and therefore it does not pose ethical
issues.
# RESPONSIBILITIES FOR THE IMPLEMENTATION OF THE DMP
Each consortium partner must respect the policies set out in this data
management plan (DMP). Each member will be responsible for data and metadata
generation and validation, for data security and quality assurance, for the
archiving, storage and backup of the data produced in their respective host
institutions as well as for sharing it with the rest of the consortium
members. WP and task leaders, supported by the Project Coordinator will be
responsible for checking the quality of these data.
The coordinator is responsible for supervising the proper implementation of the
DMP and will be able to advise on best practice in data management and security.
The coordinator will also be responsible for collecting all the public data and
uploading it to the public website and to ZENODO.
# FINAL REMARKS
This deliverable reflects the current state of the discussions, plans and
ambitions of the QuIET partners with regards to the available and expected
research data and will be updated as work progresses.
The QuIET consortium will continuously work on selected aspects of all FAIR
principles, aiming to improve the Findability, Accessibility,
Interoperability and Reusability of the data generated within the project 1 .
The outcomes of this work shall be presented in the forthcoming versions of the
DMP.
The data management strategy presented in this deliverable is closely related
to the QuIET project dissemination and exploitation strategy, which will be
developed and presented in detail in the Dissemination and Exploitation Plan
(Deliverables 5.4, 5.6 and 5.12 to be prepared at months 12, 24 and 42
respectively).
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0656_PRISMA_710059.md
|
# 1\. Introduction
This deliverable concerns the **Data management plan** (DMP) for the EU-H2020-
PRISMA project.
PRISMA stands for Piloting RRI in Industry: a roadmap for tranSforMAtive
technologies.
The **main objectives** of the PRISMA project are:
* Integration of Responsible Research and Innovation (RRI) in the CSR (corporate social responsibility) policies of 8 companies in the field of transformative technologies
* Providing evidence on how the RRI approach and attention for the gender dimension can improve the innovation process and its outcomes
* The development and dissemination of a roadmap that helps industries to implement RRI in their innovation processes as part of their CSR policy in order to deal with uncertain and sometimes partly unknown risks and public and ethical concerns of transformative technologies.
The main data we will collect as part of this project are: case studies at
companies, benchmarks (like CSR-policies), results of stakeholder meetings,
interviews and surveys.
This DMP describes:
* Which data will be collected in the different work packages (par. 2)
* Which repository we will use (par. 3)
* How the data will be documented/metadata (par. 4, with more details in Annex 1)
* How the data will be shared during and after the project, and which data will be excluded from open access because of privacy reasons and commercial interest (par. 5 and Annex 2)
* Governance of the DMP (par. 6)
We consider the data management plan to be a living document that – if need
be- will be updated over the course of the project. The data management plan
has interdependencies with the informed consent forms which we use for our
research activities.
This DMP was written in close consultation with the _4TU.Centre for Research
Data_ (associated with the TU Delft), whose mission is 'to ensure the
accessibility of technical scientific research during and after completion of
research to give a quality boost to contemporary and future research'.
This DMP is a living document and will be reviewed periodically.
# 2\. Data collection
The main data collected within this project are:
* Literature studies
* Data on the 8 companies participating in the pilot
* Around 50 (indicative number) interviews (transcripts, videos)
* Results of surveys among stakeholders
* Results from workshops.
The following table provides an overview of the data collected for each work
package:
<table>
<tr>
<th>
**WP**
</th>
<th>
Work Package Title
</th>
<th>
**Lead**
</th>
<th>
**Research Data and reports Collected**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Design of RRI- pilots with industry
</td>
<td>
RIVM
</td>
<td>
* From Literature: An inventory of the specific challenges that are posed by transformative technologies for RRI and strategies and tools to deal with this on the basis of the existing
literature
* From literature **:** Overview most appropriate RRI tools (literature **)**
* Pilot company specific data (company policies, gender issues, internal procedures, planning, etc.) and pilot-company needs, wishes, possibilities and constraints (interviews, best practices).
* Interviews with selected companies:
identification of CSR policy of pilot companies and of possibilities to
integrate RRI in that CSR policy
* From literature: description of the technology domain as a contextual basis for the pilot studies.
* Case descriptions of the innovations developed in the pilot projects.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Implementation of pilots
</td>
<td>
UWAR
</td>
<td>
* Report on kick-off workshops (synthesis)
* Report: Analysis of different view between technology and business ethicist
* Final report on pilots
</td> </tr>
<tr>
<td>
3
</td>
<td>
Evaluation of pilots
</td>
<td>
TU
Delft
</td>
<td>
* Report: Assessment of added value of RRI in industry based on pilots and additional projects
* Report Comparative analysis of the eight
pilots
* Feedback received from pilots
</td> </tr>
<tr>
<td>
4
</td>
<td>
Stakeholder dialogues
</td>
<td>
KIT
</td>
<td>
Report on development of a dialogue and stakeholder mapping strategy. Report
on Mapping of stakeholders
Report on stakeholder dialogue workshops
Report on Dialogue Integration/feedback
</td> </tr>
<tr>
<td>
5
</td>
<td>
Roadmap
</td>
<td>
AIRI
</td>
<td>
Report: analysis of economic impact of RRI adoption
Report wrap-up of all the results and outputs of the WP2, WP3, WP4 and
</td> </tr>
<tr>
<td>
6
</td>
<td>
Dissemination
</td>
<td>
TU
Delft
</td>
<td>
Reports on the 3 open stakeholders' workshops, which aim at clear
recommendations for the RRI-CSR roadmap.
</td> </tr>
<tr>
<td>
7
</td>
<td>
Management
</td>
<td>
TU
Delft
</td>
<td>
n.a.
</td> </tr> </table>
In summary, the data collected will be mainly qualitative and consist of
records of interviews and workshops as well as reports on findings.
Overview WP’s PRISMA project
# 3\. Data Storage and Back-up during the project
## 3.1. Repository selected for storage during the project
For data storage _during the project_ we will use the following repository:
_DataverseNL_
DataverseNL is provided by _4TU Centre for Research Data_ to researchers and
lecturers of the four technical universities in the Netherlands to store and
share the data that they create or compile during their research. DataverseNL
accepts data in all disciplines and formats.
Screenshot _DataverseNL_
In section 4.2 we will provide more details on the 4TU Centre for Research
Data and the repository.
## 3.2 A few words about the repository Dataverse.nl
* Dataverse is an open source web application to share, preserve, cite, explore, and analyze research data. It facilitates making data available to others, and allows you to replicate others' work more easily. Researchers, data authors, publishers, data distributors, and affiliated institutions all receive academic credit and web visibility.
* The Dataverse software is being developed at Harvard's _Institute for Quantitative_ _Social Science (IQSS)_ , along with many collaborators and contributors worldwide. One of these contributors is the Dutch national institute _DANS_ (Data Archiving and Networked Services).
* The mission of DANS is to promote **sustained access** to digital research data files and to encourage researchers to **archive** and **reuse** data.
* The TU Delft – and all its partners – can make use of this national repository without any restrictions.
* For data and other material in DataverseNL, a back-up is made each night and stored at 2 locations in the Netherlands. A back-up is kept for 3 months (retention time).
* This repository was also selected because:
* it allows roles and reading rights (admin, curator, contributor, read only) to be easily defined and created;
* it can be used by all the researchers (in- and outside the TU Delft);
* it is tailored for research in the alpha and gamma domains (i.e. the humanities and social sciences).
**Below is a screenshot of _https://dataverse.nl/dvn/_**
# 4\. Data Archive (long term storage)
## 4.1 Repository
When the project has ended and data are ready to be archived and shared, they
will be transferred to a repository with a commitment to long-term
preservation. For this purpose we will use the ‘4TU.ResearchData’ data archive
which is a certified data repository for technical-scientific research data.
Each dataset deposited at ‘4TU.ResearchData’ is assigned a Digital Object
Identifier (DOI) which allows easy citation and discoverability.
## 4.2 A few words about '4TU.ResearchData'
The TU Delft is one of the founding members of '4TU.ResearchData' (also
known as the 4TU Centre for Research Data). Its mission is to ensure the
accessibility of technical scientific research during and after completion of
research to give a quality boost to contemporary and future research.
The organization offers the knowledge, experience and the tools to archive
research data in a standardized, secure and well-documented manner. It
provides the research community with:
* A long-term archive for storing scientific research data
* Permanent access to, and tools for reuse of research data
* Advice and support on data management
4TU.ResearchData currently hosts thousands of datasets. To see examples please
visit: _http://data.4tu.nl_ .
# 5\. Data Documentation (metadata)
It goes without saying that generating metadata is highly important during
data collection in order to find and re-use the appropriate data.
When using DataverseNL for data storage and sharing during research, it is
required to add the cataloguing information (metadata) when submitting a
dataset. The metadata fields are designed for compliance with the _Data
Documentation Initiative_ (DDI), an internationally recognized standard for
describing data.
For adding metadata to the datasets, we have developed a format which you can
find in _Annex 1_ (NB: slightly modified from the 4TU Research Data Centre
metadata form).
As far as the _process_ is concerned, the following can be added:
* The responsible researcher of each case will take care of adding the meta-level information to the database by using the 'add new data' form as attached (see par. 4).
* The responsible researcher uploads the respective original research documents to the selected repository. If this is not possible (for instance because the research data was not collected during the PRISMA project or is owned by someone who is not part of the project), then at least the metadata has to be provided, with information on where the original, full document is located.
* The naming practices of files have to be distinct in order to achieve a clear structure in the database.
Each file name will begin with a short name given to the case under consideration.
After this comes the content-related part of the file name, such as "CEO
interview transcript" or "short case description", and the version information
referring to the day, month and year of the last alteration of the file. Then
come the affiliation, the version (dr = draft, fv = final version) and the
reviewer initials if needed.
So, as an example (see also the sketch after the list below):

PRISMA_WP3_"case"_"content"_"date"_"affiliation"_dr#_"reviewer initials".filetype
* Both draft and final versions can thus be stored in the database, but you have to include dr# or fv in the name.
* The responsible researcher ensures that the access rights to each document are correct.
* The partners can search the data using the search tools provided for the repository. The search result will show the metadata and provide an 'upload' link to the original research documents.
* Non-open data will be accessible to project participants only through username/password.
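To make the naming convention above concrete, here is a small, purely illustrative Python sketch that checks a file name against that pattern; the example tokens and the regex itself are assumptions, not a PRISMA-mandated tool:

```python
import re

# Hedged sketch: each component is treated as a simple underscore-free token;
# real file names may need looser rules.
NAME_RE = re.compile(
    r"^PRISMA"
    r"_WP(?P<wp>\d)"
    r"_(?P<case>[^_]+)"
    r"_(?P<content>[^_]+)"
    r"_(?P<date>\d{8})"                 # day-month-year of last alteration
    r"_(?P<affiliation>[^_]+)"
    r"_(?P<version>dr\d+|fv)"           # draft number or final version
    r"(?:_(?P<reviewer>[A-Z]{2,3}))?"   # optional reviewer initials
    r"\.(?P<filetype>\w+)$"
)

m = NAME_RE.match("PRISMA_WP3_acme_CEO-interview-transcript_01092016_TUD_dr2_JS.docx")
if m:
    print(m.groupdict())  # {'wp': '3', 'case': 'acme', ...}
```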
More about Data Access and rights in the next paragraph.
# 6\. Data & Access rights
_General rules_
* In general, data ownership is jointly shared among consortium partners. Commercial exploitation of data is not foreseen.
* Data gathered in the surveys and workshops will be made openly available after the project has finished and scientific papers have been published, and once it has been anonymized in such a way that it cannot be traced back to individual respondents, directly nor indirectly. 1
* During the course of the project, these data will be stored and made available in and via Dataverse.nl (see par. 4), which complies fully with H2020 requirements.
* Data that we do not produce in the project (e.g. existing cases, existing survey data, informed consent forms, existing data from statistical offices) will not be made openly available.
* Research Data that is not privacy sensitive will be available open access through the data center mentioned above, after the project has finished and scientific papers have been published.
* We will work with Informed Consent (IC) forms for surveys, interviews, videos and workshops. These IC-forms will not form part of the dataset; however, we will publish the templates we used. Informed consent forms will be excluded from the open datasets and can only be accessed by the researcher and the WP-leader. In line with the above, we will make interview summary reports available, but not the interview recordings.
* Data gathered in the **case studies** will also only be made openly available as long as it does not harm the competitiveness of the business being studied. See also Annex 2 for
procedures.
* All final publications, presentations and selected videos will – in principle – be published under a CC 4.0 licence.
_Access during the project_
* WP-members can access and review all draft products which form part of their WP. The PI and the WP-leaders also have access (read only), considering the strong link between the WPs.
* Only the researcher is allowed to delete his or her own information/products.
* IC-forms and transcripts of interviews can only be accessed by the researcher involved and the WP-leader. If a dispute may arise, the PI will get access to these materials.
# 7\. Governance
To safeguard compliance with all aforementioned data management decisions, the
following governance measures are applied:
* WP leaders are responsible for adhering to the above specifications for their respective work package. For the overall project, TUD will be responsible for complying with the data management plan. All consortium partners are responsible for making sure personnel working on the project have read the data management plan and internalized the principles. Data management will be on the agenda in all executive board meetings as of September 2016. The 4TU Research data centre will give advice.
* The data management plan is considered a living document. New versions of the DMP should be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. Updates to the data management plan will be communicated by TU Delft.
To evaluate the efficacy of the data management plan, we will conduct an
evaluation in M18. The evaluation will at least include:
* Is the metamodeling still consistent with what is being done in WP 1, 2, 3, 4 and 5? Is updating the meta-model required?
* Do the survey data include meaningful metadata (i.e. labels) that are understandable for outsiders?
* Are all personal data anonymized?
* Do the data gathered not harm privacy or the commercial interests of the company case studies?
* Do the informed consent forms align with the DMP (anonymized storage of data)?
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0659_MARINERGI_739550.md
|
# 1\. Introduction
## 1.1. Introduction and overview of MARINERG-i
The H2020 MARINERG-i project is coordinated by the MaREI Centre at University
College Cork, Ireland. The consortium comprises 13 partners from 12
countries (Germany, Belgium, Denmark, Spain, France, the Netherlands, Ireland,
Italy, Norway, Portugal, the United Kingdom and Sweden). MARINERG-i brings
together all the European countries with significant testing capabilities in
offshore renewable energy. The MARINERG-i project is a first step in forming an
independent legal entity of distributed testing infrastructures, united to
create an integrated centre for delivering Offshore Renewable Energy.
MARINERG-i will produce a scientific and business plan for an integrated
European Research Infrastructure (RI), designed to facilitate the future
growth and development of the Offshore Renewable Energy (ORE) sector. These
outputs are designed to ensure that the MARINERG-i RI model attains the
criteria necessary for a successful application to the European
Strategy Forum on Research Infrastructures (ESFRI) roadmap in 2020.
## 1.2. Data Plan Description
This document is the _Initial_ Data Management Plan (DMP), which forms the
basis for deliverable D1.10. DMPs are living documents which need to be revised
to include more detailed explanations and finer granularity, and updated to
take account of any relevant changes during the project lifecycle (data
types/partners etc.). This edition will therefore be followed by two further
volumes: "The Detailed DMP" (D1.11) and "Final Review DMP" (D1.12).
The format for all three volumes is based on the template taken from the DMP
Online web-tool and conforms to the "Horizon 2020 DMP" template provided by the
European Commission (Horizon 2020).
# 2\. Data Summary
## 2.1. Purpose of the data collection/generation
It is important to note that MARINERG-i will not create any new scientific
data e.g. from experimental investigations or actual testing of devices.
However, the discovery phase of the work programme (WP 2 and WP3) does involve
detailed information gathering in order to profile multiple attributes of the
participating testing centres and their infrastructure; which may in practice
be regarded as a form of highly granular metadata. Alongside and associated
with this, there is a requirement to compile and include in a database (WP7
Stakeholder Engagement; WP6 Financial Framework) personal contact details and
other potentially private, proprietary, financial or otherwise sensitive
information, which will be maintained as confidential. Derived synthetic, statistical, or
anonymised information will also be produced which is destined for release in
the public domain. Further details of proposed data collection and use are
contained in D7.3 Stakeholder Database. Details of the procedures for
collection and use as well as their compliance with ethics and data protection
legislation are provided in D10.1 and D10.2.
## 2.2. Relation to the objectives of the project
The collection of data will be undertaken as a primary function of four key
work packages (WP 1, 2, 6 & 7) which together form the Discovery phase of the
overall work plan, the general scheme of which is as follows:
* Discovery Phase – Engagement with stakeholders, mapping, profiling of RIs and e-infrastructures
* Development Phase – Design and Science plan, Finance, Value statements
* Implementation Phase – Business plan and implementation plan including roadmap.
Data and information collected during the discovery phase will feed into and
inform the subsequent phases of development and implementation. Specifically,
the objectives for WP2 & WP3 listed below and the deliverables listed in Table 1
(D2.1–D3.4) provide an obvious and clear rationale for the collection and
operational use of several main categories of data within the project. Also
listed in Table 1 is deliverable D7.3, the stakeholder database. This database
will contain names, contact details, contact status and a range of other
information pertinent to the stakeholder mapping and engagement process, which
is a key objective within WP7.
WP 2 Objectives
The facilities to be included in MARINERG-i will be selected so as to
contribute to the strengthening of European scientific and engineering
excellence and expertise in MRE research (wave, tidal, wind and integrated
systems) and to represent an indispensable tool to foster innovation across a
large variety of MRE structures and systems and through all key stages of
technology development (TRLs 1-9). In order to achieve this, a profiling of
the European RIs is to be conducted on both strategic and technical levels,
considering both the infrastructures' scientific and engineering capabilities.
Both existing facilities and future infrastructures should be identified and
characterized so as to account for future expansion and development. In
parallel, users' requirements for MRE testing and scientific research at RIs
should be identified so as to optimize and align service offerings to match
user needs with more efficiency, consistency, precision and accuracy.
All this information will be efficiently compiled so as to provide the basis
to inform the development of the design study and science plan to be conducted
under WP 4.
WP 3 Objectives
The set of resources, especially facilities, made available under MARINERG-i
currently have individual information systems and data repositories for
operation, maintenance and archival purposes. Access to these systems may be
generally quite restricted at present, constrained by issues relating to
ownership, IP, quality and other standards, liability, data complexity and
volume. Even where access is possible, uptake may not be extensive in the
absence of suitable policies and effective mechanisms for browsing,
negotiation and delivery. A primary objective of WP3 is to instigate a program
to radically improve all aspects pertaining to the curation, management,
documentation, transport and delivery of data and data products produced by
the infrastructure. All this information will be efficiently compiled so as to
provide the basis to inform the development of the Design Study and Science
Plan to be conducted under WP4.
_Table 1: List of deliverables from WP 2, 3, 6 & 7._
<table>
<tr>
<th>
**Deliverable Number**
</th>
<th>
**Deliverable Name**
</th>
<th>
**WP Number**
</th>
<th>
**Lead beneficiary**
</th>
<th>
**Type**
</th>
<th>
**Dissemination level**
</th> </tr>
<tr>
<td>
D2.1
</td>
<td>
MRE RI End-users requirements profiles
</td>
<td>
WP2
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D2.2
</td>
<td>
MRE RI Engineering and science baseline and future needs profiles
</td>
<td>
WP2
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.1
</td>
<td>
MRE e-Infrastructures End-Users requirements profiles
</td>
<td>
WP3
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
MRE e-Infrastructures baseline and future needs profile
</td>
<td>
WP3
</td>
<td>
3 - IFREMER
</td>
<td>
Other
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.3
</td>
<td>
Draft Report MRE e-Infrastructures strategic and technical alignment
</td>
<td>
WP3
</td>
<td>
1 - UCC_MAREI
</td>
<td>
Report
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr>
<tr>
<td>
D3.4
</td>
<td>
Final Report MRE e-Infrastructures strategic and technical alignment
</td>
<td>
WP3
</td>
<td>
1 - UCC_MAREI
</td>
<td>
Report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D6.1
</td>
<td>
Report on all RI costs and revenues
</td>
<td>
WP6
</td>
<td>
4 - WAVEC
</td>
<td>
Report
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D7.3
</td>
<td>
Stakeholder database
</td>
<td>
WP7
</td>
<td>
5 - Plocan
</td>
<td>
Database
</td>
<td>
Confidential, only for members of the consortium (including the Commission Services)
</td> </tr> </table>
## 2.3. Types and formats of data generated/collected
As stated in the previous section, there are two main types of data being
collected:
1. Data relating to the profiling of the Research Infrastructures (RIs) and existing e-infrastructures
2. Contact details for MARINERG-i stakeholders
The specifics in terms of type and format for collecting, analysing and storing
data are still under consideration; however, it is anticipated that initial
collection will mostly be simple text generated locally by subjects using
forms and questionnaires in MS Word/Excel, or alternatively through a
centralised system with an online interface. Images in various graphical
formats will also form a significant element of the data collected.
Collections will also consider other forms of documentation: specifications;
standards; templates; rule-sets; manuals; guides; various types of framework
documents; legal statutes; contracts; strategic and operational plans; etc.
More detailed specifications/conventions governing key parameters for all of
the above will be provided to data providers/gatherers in advance to ensure
current and future interoperability and compatibility.
The fields currently being used to collate stakeholder contact information are
listed in Table 2 below:
Table 2 Stakeholder database field structure and type
<table>
<tr>
<th>
Field
No
</th>
<th>
Field Header
</th>
<th>
Field type
</th> </tr>
<tr>
<td>
1
</td>
<td>
Order #
</td>
<td>
number
</td> </tr>
<tr>
<td>
2
</td>
<td>
Date
</td>
<td>
Number
</td> </tr>
<tr>
<td>
3
</td>
<td>
Category Stakeholders
</td>
<td>
Text
</td> </tr>
<tr>
<td>
4
</td>
<td>
If Other category, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
5
</td>
<td>
Name of the Organisation Stakeholder
</td>
<td>
Text
</td> </tr>
<tr>
<td>
6
</td>
<td>
Acronym Stakeholder
</td>
<td>
Text
</td> </tr>
<tr>
<td>
7
</td>
<td>
Address
</td>
<td>
Text
</td> </tr>
<tr>
<td>
8
</td>
<td>
Country
</td>
<td>
Text
</td> </tr>
<tr>
<td>
9
</td>
<td>
Web
</td>
<td>
Text
</td> </tr>
<tr>
<td>
10
</td>
<td>
Phone(s)
</td>
<td>
number
</td> </tr>
<tr>
<td>
11
</td>
<td>
E-mail
</td>
<td>
Text
</td> </tr>
<tr>
<td>
12
</td>
<td>
Contact Person
</td>
<td>
Text
</td> </tr>
<tr>
<td>
13
</td>
<td>
Role in the Organisation
</td>
<td>
Text
</td> </tr>
<tr>
<td>
14
</td>
<td>
MARINERG-i partner providing the information
</td>
<td>
Text
</td> </tr>
<tr>
<td>
15
</td>
<td>
Contact providing the information
</td>
<td>
Text
</td> </tr>
<tr>
<td>
16
</td>
<td>
Energy sectors
</td>
<td>
Text
</td> </tr>
<tr>
<td>
17
</td>
<td>
If Other Sector, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
18
</td>
<td>
R&D&I Area
</td>
<td>
Text
</td> </tr>
<tr>
<td>
19
</td>
<td>
If Other R&D&I Area, please include it here
</td>
<td>
Text
</td> </tr>
<tr>
<td>
20
</td>
<td>
Does the stakeholder provide permission to receive info from MARINERG-i?
</td>
<td>
Text
</td> </tr>
<tr>
<td>
21
</td>
<td>
Further Comments
</td>
<td>
Text
</td> </tr> </table>
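Although the project has made no implementation commitment yet, the record defined in Table 2 maps naturally onto a simple typed structure. The Python sketch below is only illustrative: the field names are paraphrased from the table and the types are assumptions (e.g. the table's numeric date field is modelled as a proper date):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StakeholderRecord:
    """One row of the stakeholder database, mirroring Table 2."""
    order: int
    entry_date: date
    category: str                 # stakeholder category (free text)
    organisation: str
    acronym: str
    address: str
    country: str
    web: str
    phones: str
    email: str
    contact_person: str
    role: str
    providing_partner: str        # MARINERG-i partner providing the info
    providing_contact: str
    energy_sectors: str
    rdi_area: str                 # R&D&I area
    permission_to_contact: bool   # field 20: consent to receive info
    other_category: Optional[str] = None
    other_sector: Optional[str] = None
    other_rdi_area: Optional[str] = None
    comments: str = ""
```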
## 2.4. Re-Use of existing data
The RI profiling information to be gathered will augment and greatly extend
the existing generic baseline information gathered under the Marinet FP7 project
in the respective research infrastructures of the MARINERG-i partnership, and
some new information that has been added recently through the Marinet2 H2020
project. The latter is currently accessible through the Eurocean Research
Infrastructures Database (RID) online portal system
(http://rid.eurocean.org/), where it is accessible to RI managers to update.
Content for the stakeholders' database will initially be obtained from
existing Marinet and PLOCAN databases; re-use permission will be obtained from
the individuals concerned. Since this is a live database, additional contact
information is being added primarily via our website, where interested
stakeholders can sign up to be included as well as receive newsletters and
invitations to events. In addition, partners will email their contacts
informing them about the project and encouraging them to join our mailing
list/stakeholder database.
## 2.5. Expected size of the data
The total volume of data to be collected is not expected to exceed 100 GB.
## 2.6. Data utility: to whom will it be useful
As stated above, the data being collated and generated in the project are
primarily for use by the partners within the project in order to prepare
specific outputs relevant to the key objectives. Summary, derived and/or
synthetic data products of a non-sensitive nature will be produced for
inclusion in reports and deliverables, some of which will be of interest to a
wider range of stakeholders and interested parties, including but not limited
to the following: national authorities, EU authorities, the ORE industry,
potential MARINERG-i node participants, international authorities, academic
researchers, and other EU and international projects and initiatives.
# 3\. FAIR Data
## 3.1. Metadata and making data findable
There is no specific aim in the MARINERG-i project to generate formally
structured or relational databases. The activity conducted as part of WP2 and
WP3 will require the use of existing databases and the collation of information
from institutions' portals and through a questionnaire to be distributed to
potential stakeholders.
Hence metadata will be based on existing metadata formats and standards
developed for the existing services. Additional metadata will be created for
specific fields if necessary, after elaboration of the questionnaires.
More specifically, the profiling of the Research Infrastructures will for a
large part be based on the information available on the Eurocean service, and
on services such as SeaDataNet for the e-infrastructures.
Definition of naming conventions and keywords will be based on the same
approach. Specific metadata related to the stakeholders' database will be
created according to the fields presented in Table 2.
MARINERG-i is aware of the EC guidance metadata standards directory
[http://rdalliance.github.io/metadata-directory/standards/]. However, given the
nature of the data being compiled and the early stage of the project
lifecycle, no firm decisions have yet been made regarding the use of particular
metadata formats or standards. This will be considered and dealt with further
in the subsequent iteration of this document (D1.11), including the following
aspects: discoverability of data (metadata provision); identifiability of data
and standard identification mechanisms; persistent and unique identifiers;
naming conventions; approaches towards search keywords; approaches for clear
versioning; and standards for metadata creation.
## 3.2. Open accessibility
Produced data will for a large part be based on processing of existing
datasets already available in open access. This data will be made openly
available.
Restrictions could apply to datasets or information provided by stakeholders in
cases where they specify such restrictions (for instance personal contact
details that should not be made openly available for confidentiality reasons).
Publicly available data will be made available through the MARINERG-i web
portal.
No specific software should be required apart from standard open source office
tools required to read formats such as "txt", "ascii", ".docx", ".doc",
".xls", ".xlsx", "PDF", "JPEG", "PNG", "avi", "mpeg", etc.
Data and metadata should be deposited on the MARINERG-i server.
## 3.3. Interoperability
The vocabulary used for all technical data types will be the standard
vocabulary used in marine research and offshore renewable experimental testing
programmes such as Marinet, Marinet2 and Equimar.
For the other data types, interoperable formats will be chosen, where
possible making use of their own domain-specific semantics.
## 3.4. Potential to increase data re-use through clarifying licenses
The project does not foresee the need to make arrangements for licensing data
collected.
Data should and will be made available to the project's partners throughout the
duration of the project and after the end of the project (at least until the
creation of the ERIC) and, where possible, made available to external users
after completion of the project.
Some of the data produced and used in the project will be usable by third
parties after completion of the project, except for data for which restrictions
apply, as indicated above.
It is expected that information, e.g. as posted on the website, will be
available and reusable for at least 4 years, although the project does not
guarantee the currency of such data past the end of the project.
# 4\. Allocation of Resources
## 4.1. Explanation for the allocation of resources
Data management can be considered as operating on two levels in MARINERG-i.
The first is at the point of acquisition where responsibility is vested in
those acquiring the data to do so consciously and in accordance with this DMP
and associated Ethics requirements as set out in D 10.1 /D10.2. The second
level is where processed and analysed synthetic data products are passed to
the coordinators for approval and publication.
Data security, recovery and long-term storage will be covered in the
subsequent iteration of the DMP (D1.11).
# 5\. Ethical Aspects
Details pertaining to the ethical aspects of data collected under
MARINERG-i are covered in D10.1/D10.2. This will, as a minimum, include
provision for obtaining informed consent for data sharing and long-term
preservation, to be included in questionnaires dealing with personal data.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0660_MARCO_730272.md
|
# 1 Abstract
MARCO is part of the Horizon 2020 Open Research Data Pilot, the Pilot project
of the European Commission which aims to improve and maximise access to and
reuse of research data generated by projects.
The Data Management Plan (DMP) of MARCO describes the life cycle of all data
collected and processed in MARCO. The DMP is one of the starting points for
the discussion with the community about the MARCO data management strategy and
reflects the procedures planned by the related work packages that conduct
survey/interview/focus groups.
The elements listed in this document have been also presented in the Project
Quality Plan, deliverable released at the beginning of the project, accessible
to all project members via the shared workspace (MARCO ECCP).
# 2 Open access to publications
MARCO will make all its public deliverables available on the project website:
_http://marco-h2020.eu/_
# 3 Data set description
In order to achieve the project objectives, a characterisation of users’ needs
on climate services is required, through qualitative research, namely
through questionnaires and extensive surveys.
On the one hand, the analysis of users’ needs for climate services includes
public and private organisations and individuals.
On the other hand, this action aims at gaining a deep understanding of the
needs of users of climate services, the main purchase drivers, and the
decision-making process that will trigger a shift from ‘make’ to ‘buy’
inducing a market growth by externalisation.
The questionnaire/interview will include closed and open questions, and will be
designed for the project Work Package on Climate Service Providers (WP3)
or Potential & Actual Demand (WP4), to facilitate the gap analysis; it will be
reviewed by the partners and the Advisory Expert Committee (a group of 7 experts
in different sections of the climate services sector). A large number of
specific interviews will be conducted for the case studies in WP5.
The widespread online survey reaches a wide, geographically dispersed audience
(via the ClimateKIC network, the MARCO Stakeholder Network and the
partners' own networks). It will target both current customers and potential
ones (who may currently be users of weather services for instance). The
results from the online survey will then feed into the stakeholder analysis
that will finally be able to identify the users and related applications with
greater market potential.
# 4 Protocols for surveys and interviews
## MARCO Survey Data
The survey data will be anonymized so that personal identification will not be
possible; it will then be analysed and the results will be integrated in the
project reports. As most of the deliverables are public, they will be
accessible via the project official website. Survey participants’ answers will
be treated confidentially so that personal identification will not be
possible.
It should be noted that the surveys created for the relevant WPs do not
require personal data.
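The DMP does not prescribe a particular anonymisation mechanism. Purely as an illustrative sketch, one common approach is to replace any respondent identifier with a salted one-way hash before analysis, so answers can still be grouped per respondent without exposing identities:

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept out of the published dataset

def pseudonymise(respondent_id: str) -> str:
    """Replace a direct identifier with an irreversible token so that
    answers can be grouped per respondent without revealing who they are."""
    return hashlib.sha256((SALT + respondent_id).encode()).hexdigest()[:12]

print(pseudonymise("jane.doe@example.org"))  # e.g. '3f1c9a0b7d2e'
```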
## MARCO Interview Data
Interviews can be audio-recorded and transcribed on a case-by-case basis. The
partners who follow this audio recording and/or transcription will employ the
line of action described below for MARCO-specific interviews. In most cases,
an analysis of content will be performed.
If interview recordings/transcripts are produced, they will not be made
open access, as the consortium cannot guarantee anonymity to the interviewees
if full transcripts are published. Some interviews might be carried out in
national languages, making it rather easy to identify the national background
of the persons interviewed and, possibly, to identify the person herself. This
risk is plausible, since the persons interviewed are experts from a certain
field, so it is likely that interviews could at least be traced back to
certain institutions. Due to such conditions, publishing
recordings/transcripts could discourage interviewees from openly talking to
the consortium, which certainly would affect the research.
## MARCO workshop data
Workshops will be organized to engage the sector’s stakeholders, to understand
the stakeholders’ needs and expectations. Workshop group reflections could be
recorded as a matter of convenience for analysis. Audio records will not be
made open access.
# 5 Processing operations
The data on survey participants will be processed and be used to organise
interviews, focus group discussions and workshops, provided that each
participant accepted the conditions through a consent form signed before the
interview/focus group/workshop participation. This consent is available in the
Project Quality Plan.
Research partners will process the participants' personal data in compliance
with the relevant personal data protection laws. Moreover, each partner must
respect the specificities of its own national laws on the protection of
personal data. Individual research participants will be recruited through
relevant organisations that research partners will use.
Research participants will take part in:
* one-on-one interviews, which may be recorded;
* focus group discussion;
* collaborative workshops;
* and surveys, which are be conducted online or live, during conferences using the survey form, by the MARCO members.
The collected data will in general be organisation-based (type of
organisation, number of employees, department – optional – and country).
In the case transcripts are produced, the organisation conducting these
interviews will give the participants the opportunity to review the
transcripts and content analyses of their interviews. Upon request from one of
the consortium members, the transcripts will be stored electronically on the
project secured server (MARCO ECCP – Electronic Content Collaboration
Platform), in a data repository which will only be accessible for project team
members directly engaged in the corresponding research work.
The creation of, and the access to this data repository are facilitated by
LGI.
# 6 Data Sharing
Data will only be used for MARCO project, and will not further be used for
other purposes, unless survey participants explicitly agree. Requesting such
explicit agreement may make sense if follow-up actions and developments after
the project end are anticipated, for instance through a future CS market
observatory. The survey participants may withdraw any time they wish from the
study and the information that they provided will be deleted upon request.
They also have the right to refuse the use of their personal data.
# 7 Archiving and preservation (including storage and backup)
If personal data is required, it will be maintained securely on the servers of
the organisations participating in MARCO project, which can only be accessible
by the MARCO researchers. Once the project has been completed, personal data
of research participants and recordings will be retained for a period of seven
years after the closure of the study on the servers.
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0661_VATech_674491.md
|
**DESCRIPTION**

This document outlines how data are being handled both during and after the
project. Through this plan many aspects of data management are considered,
such as data analysis, preservation or exploitation. This ensures that data
are well-managed in the present, and prepared for preservation in the future.
**DATA MANAGEMENT PLAN**
1. **Types of data generated throughout the project.**
* Technical data:
* Results of laboratory tests (confidential).
* Internal reports for customer (confidential).
* Product data sheets: Junior and Senior (public).
* Commercial data.
* Market needs (confidential). They cannot be public in order to preserve final customer’s privacy.
* Dissemination Data
* Scientific publications: EuCAP 2016 and 2017 (public).
* Multimedia material of Virtual Antenna technology - video and/or presentation (public).
2. **Standards.**
Fractus has its ISO 9001 certified procedures for project and product
management, design, and qualification.
3. **Exploitation of data. Accessibility for verification and re-use.**
Public data will be shared both with customers and the general audience through
scientific publications, tutorial videos, webinars, etc., as explained in
WP6 and associated deliverables.
Regarding accessibility, folders in Fractus’ server are organized by project
code “Number_Name”. Folders are divided into 2 categories: Product/Service
(numbers from 1 to 499) and Research/Innovation (numbers from 500). The
administrator gives permissions to users.
For the VATech project, there is a specific folder, divided into different
folders with each work package. All data generated outside this folder and
related to one of the work packages is dumped into it. For example,
information regarding intellectual property.
4. **Data preservation and security.**
The Fractus server collects the data, which is stored daily according, in general, to the following backup plan:

* Differential global backup (hard disk – daily)
* Laboratory global backup
* Users global backup
* Department global backup
Global copies of each week are stored in the fireproof cabinet, forming a row sorted by date, so that the first row is the latest copy and the last one is the oldest. They are all "internal" copies.

Data are accessible in Fractus’ server to permitted employees.
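As an illustration of the differential step in this plan, a daily job might copy only files whose content changed since the previous global backup. The sketch below is a minimal illustration under assumed paths and an assumed JSON manifest of file hashes; it is not Fractus' actual backup tooling.

```python
import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("/srv/projects")       # assumed source tree
BACKUP = Path("/mnt/backup/daily")   # assumed backup target
MANIFEST = BACKUP / "manifest.json"  # file hashes from the previous backup

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def differential_backup() -> None:
    """Copy only files that are new or changed since the recorded state."""
    BACKUP.mkdir(parents=True, exist_ok=True)
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    new = {}
    for path in SOURCE.rglob("*"):
        if path.is_file():
            rel = str(path.relative_to(SOURCE))
            new[rel] = file_hash(path)
            if old.get(rel) != new[rel]:  # new or modified since last run
                dest = BACKUP / rel
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(path, dest)
    MANIFEST.write_text(json.dumps(new, indent=2))

if __name__ == "__main__":
    differential_backup()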
**0663_DE-ENIGMA_688835.md**
# 2\. Introduction
The DE-ENIGMA DB will be the very first of its kind to be released for
research of behaviours shown by children with autism spectrum conditions
(ASC). It will contain (manually / semi-automatically) annotated audio-visual
recordings with respect to facial points, facial gestures, body postures and
gestures, various vocalisations, verbal cues, continuously valued target
affective states, valence, arousal, interest, stress, and prototypic examples
(templates) of rapport behaviour.
The Data Management Plan has been developed and agreed upon by the consortium
members, based on the DE-ENIGMA commitment to open access and to advancement
of the state of the art in the field by means of release of as much data and
as many software tools as possible.
# 3\. Objectives
The Multi-Modal Human-Robot Interaction for Teaching and Expanding Social
Imagination in Autistic Children (DE-ENIGMA) project aims to build robotic
technologies that can robustly and accurately track and recognise children’s
facial, bodily, and vocal behaviours and naturalistic interactions “in the
wild”, and react appropriately based on the observed child’s behaviour, with
the ultimate goal of helping autistic children and young people to enhance
their social communication skills in structured teaching with a therapist and
in everyday interactions.
Specifically, the project seeks to develop multimodal human-robot interaction
(HRI) methods that learn from interactions to:
1. model the child’s behaviour,
2. map multimodal input to estimate the child’s affect, interest, physical response and rapport, and
3. adapt the interaction to the current context (the child’s cultural background, the task, and his / her level of interest and stress) in order to maximise the effectiveness of teaching socio-emotional skills and social imagination to autistic children.
The robot will learn to understand the child’s vocalisation, their choice of
words, facial gestures, head and body gestures and postures and how these
modalities are combined to convey meaning. It will also examine the best ways
of changing the robot’s interactive behaviours in cases where there is lack of
engagement, lack of rapport, and increased behavioural responses by the child.
Learning models of the aforementioned behaviour suitable for machine analysis
depends on having suitable data recordings to learn from. Hence, an important
aspect of the DE-ENIGMA project lies in collecting suitable datasets of enough
labelled examples for building robust tools.
# 4\. Dataset
4.1. Data set reference and name
The dataset collected during the project will be denoted as the “DE-ENIGMA
DB”.
## 4.2. Data set description
A database of annotated audio, 2D and 3D recordings of interactions between
autistic children and (a) the robot, (b) the researcher and (c) their parents
made in structured teaching settings will be collected in the DE-ENIGMA
project.
In this project, we aim to recruit a total of 128 children on the autism
spectrum, half (n = 64) from London and South East of UK, and the other half
from Serbia. In each culture group, 32 children will be aged between 5 and 8
years and the other 32 between 9 and 12 years. The children from the two
cultures will take part in identical experiment settings. Namely, for each
culture, half of the children will be involved in robot-led teaching and the
other half will be involved in researcher / clinician-led teaching.
During the experiment, children within each age group will be randomly
assigned to either robot-led or researcher / clinician-led teaching
intervention, which will be implemented across multiple short sessions (10-15
minutes long) every 1-2 days for a maximum period of 3 weeks. We will follow
Howlin et al.’s (1998) approach to teaching perception, expression,
understanding, and social imagination related to four affective states:
surprise, happiness, anger and sadness. Specifically, the children will work
through the following Howlin et al.’s 6 Phases of teaching at their own pace,
with feedback given either by the robot or the researcher / clinician:
1. Matching across same static emotional images.
2. Matching across different static emotional images.
3. Matching from dynamic “real” emotional displays to static images.
4. Identifying dynamic “real” emotional displays and expressing that emotion.
5. Identifying dynamic “real” emotional displays and expressing that emotion in the same way.
6. Understanding own / others’ emotional states.
After the intervention, the children will also take part in two teaching
sessions with their parents in the same format as the earlier robot-led or
researcher / clinician-led sessions. These additional sessions will allow us
to examine whether the child has retained the skills learned during the
intervention and generalised such skills across instructional partners (in this
case, parents).
In the data collection experiment, all robot-led sessions will be facilitated
through a “Wizard of Oz” (WoZ) setup. Namely, the robot will be controlled
directly by the researcher / clinician using a small keypad hidden from the
child’s view. Nevertheless, the robot will also perform a set of idle
animations autonomously, such as eye-blinks, head-turns, and minor hand
movements, to achieve a more “life-like” appearance.
All teaching sessions will be recorded in 3 modalities: audio, 2D video, and
3D video. For each modality, the following devices will be used:
1. Audio: 4 professional omnidirectional microphones will be used as the main data sources. Among these microphones, two will be mounted close to the child and the researcher / clinician respectively.
Another one will be mounted on the ceiling of the room directly above the
experiment setup. And the last one will be a wireless microphone carried
either by the child or by the researcher / clinician (in case the child is unwilling or unable to carry the microphone). In addition to these
professional microphones, we will also use the 2D and 3D video cameras’ built-
in consumer-grade microphones to make extra audio recordings. Each 2D camera
has 2 built-in microphones (except the one mounted on the robot’s chest that
only has 1), and the 3D camera (Microsoft Kinect) has 4 built-in microphones.
Therefore, for each session, a total of 18 (in researcher / clinician-led sessions) or 19 (in robot-led sessions) distinct audio recordings will be made: 4 from the professional microphones, 10 or 11 from the webcams’ built-in microphones, and 4 from the Kinect.
2. 2D Video: 5 (in researcher / clinician-led sessions) or 6 (in robot-led sessions) 720p HD webcams will be used to make video recordings at approximately 30 frames per second. The placement of these cameras is as follows: 2 cameras will be mounted at the opposite corners of the room to record from overview perspectives; 3 cameras will be placed close to the researcher / clinician and the child to capture their facial expressions (1 facing the researcher / clinician, 1 facing the child, and 1 facing both); and, in robot-led sessions, 1 camera will be mounted on the robot to capture the scene from the robot’s perspective. In addition, the 2D video captured by the 3D camera will also be recorded. These add up to 6 (in researcher / clinician-led sessions) or 7 (in robot-led sessions) 2D video recordings per session.
3. 3D Video: We will use one Microsoft Kinect to record 2D and 3D video and sound data. We will record the monocular image, the registered depth field of the scene, and the 4-channel sound, at a sample rate of approximately 30 Hz.
The sensor placement is further illustrated in Figure 1. Note that the two
overview webcams at the corners and the microphone on the ceiling are not
visible in this picture.
Figure 1. Sensor placement in the experiment setup for robot-led sessions.
All recorded data streams will be time-stamped and synchronised. Specifically, the internal clock of all data capturing machines will be synchronised to coordinated universal time (UTC) using the network time protocol (NTP). These clocks will then serve as the reference clock to time-stamp all recorded data on either a per-frame basis (for 2D and 3D video data) or a per-buffer basis (for audio data).
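As an illustration of this time-stamping scheme, the sketch below stamps incoming frames with the NTP-disciplined system clock. The frame source and output layout are hypothetical, not the project's actual recording software; the capture machine is assumed to run NTP so that its system clock approximates UTC.

```python
import csv
import time

def record_timestamps(frame_source, out_path="timestamps.csv"):
    """Stamp each incoming frame with the NTP-disciplined system clock (UTC)."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame_index", "utc_epoch_seconds"])
        for index, frame in enumerate(frame_source):
            # time.time() returns seconds since the Unix epoch (UTC);
            # with NTP running, this is the shared reference clock.
            writer.writerow([index, f"{time.time():.6f}"])

# Demo with a dummy source producing three "frames" roughly 30 Hz apart:
def dummy_frames(n=3, fps=30):
    for _ in range(n):
        time.sleep(1 / fps)
        yield object()

record_timestamps(dummy_frames())
```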
The DE-ENIGMA database will also include annotations of the recordings in
terms of facial landmarks and gestures, body postures and gestures, vocal and
verbal cues, continuously valued emotion dimensions, and rapport behaviours.
The data will be annotated in an iterative fashion, starting with a sufficient
number of examples to be annotated in a semi-automated manner and to be used
to train the algorithms in WP2–WP4, and ending with a large database of
annotated facial and bodily behaviour recorded in the wild.
## 4.3. Ethical issues
The DE-ENIGMA project has obtained full ethical approval from UCL IOE Research
Ethics Committee and the Ethics Committee of Serbian Institute of Mental
Health. More details about the ethics approval can be found in DE-ENIGMA
Deliverable 1.1.
## 4.4. DB design and metadata
The DE-ENIGMA database will be organised into a flat list of folders, each
storing the data recorded during a single teaching session. The folders will
be named sequentially, reflecting the order of the sessions being conducted.
The layout of the folder’s content will be as follows.
1. An index file detailing the meta-data of all files saved in the folder. This file will be parsed by the database web portal to generate the overall database catalogue.
2. The participant’s demographic information and their parents’ answers to various pre-intervention questionnaires. All information will be saved in Java-script object notation (JSON) files with a strictly defined semantics to support automatic search and filtering. The participants’ information will be anonymised by replacing their name with a randomly generated unique identifier (ID).
3. A set of 6 or 7 AVI files storing the 2D videos recorded during the session. All video data will be recorded at a frame rate of approximately 30 frames per second and will have a resolution of at least 1280 x 720 pixels. Each AVI file will be accompanied by a text file containing all frames’ time-stamp.
4. A set of 18 or 19 WAV files storing the audio data recorded during the session, all sampled at 44.1 kHz, except the data recorded from the Microsoft Kinect which is sampled at 16 kHz. Similar to the video recording, each WAV file will also be accompanied by a text file containing time-stamp information.
5. Time-stamped images together with depth field and 4-channel sound captured using the Kinect device stored as multiple files, including raw data for each modality, and RGB to depth mapping information.
6. A folder containing all available annotations as described in the previous section. Each type of labels will be saved in its own subfolder, of which the exact folder structure and / or file format may vary. "ReadMe" files will be included in the subfolders to explain the specific data organisation method.
Along with the data, a comprehensive help document will be provided to give
detailed explanation on the format and semantics of all files included in the
database.
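To make the layout concrete, the following sketch generates a per-session index file of the kind described above. The field names, file-name patterns, and JSON structure are illustrative assumptions rather than the final DE-ENIGMA format; the anonymised participant ID is produced with a random UUID, as the text prescribes a randomly generated unique identifier.

```python
import json
import uuid
from pathlib import Path

def build_session_index(session_dir: Path) -> None:
    """Write an index.json listing the files recorded in one teaching session."""
    index = {
        "session_id": session_dir.name,
        "participant_id": uuid.uuid4().hex,  # random ID replacing the child's name
        "files": {
            "video_2d": sorted(p.name for p in session_dir.glob("*.avi")),
            "audio": sorted(p.name for p in session_dir.glob("*.wav")),
            "timestamps": sorted(p.name for p in session_dir.glob("*_timestamps.txt")),
        },
    }
    (session_dir / "index.json").write_text(json.dumps(index, indent=2))

session = Path("session_001")
session.mkdir(exist_ok=True)  # demo folder; real session folders already exist
build_session_index(session)
```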
## 4.5. Data sharing
A web-portal will be developed for the DE-ENIGMA database, allowing easy
access and search of the available recordings according to various types of evidence (i.e. annotations of key cues like facial actions, expressions, rapport) and
according to various metadata (gender, age, cultural background, occlusions,
etc.). This will facilitate investigations during and beyond the project in
the field of machine analysis of autistic children’s behaviours as well as in
other research fields.
The DE-ENIGMA database will be made available to researchers for academic-use
only. To comply with clauses stated in the Informed Consent signed by the
recorded participants, all non-academic / commercial use of the data is
prohibited. To enforce this restriction, an end-user license agreement (EULA)
has been prepared (see Appendix). Only researchers who have signed the EULA
will be granted access to the database. In order to ensure secure transfer of
data from the database to an authorised user’s PC, data will be protected by
SSL (Secure Sockets Layer) with an encryption key. If, at any point, the administrators of the DE-ENIGMA database and / or DE-ENIGMA researchers have reasonable doubt that an authorised user is not acting in accordance with the signed EULA, he / she will be denied access to the database.
To increase the impact of the DE-ENIGMA project, we plan to organise data-
based research competitions. Partners of the project have done so previously
at the INTERSPEECH major speech conference (the INTERSPEECH ComParE 2009-2016
annual competitions) and premier ACM Multimedia venue (the AVEC series on
Audio / Visual Emotion Challenge has been organised six times up to now by
members of the consortium) and the premier IEEE Int’l Conf. Computer Vision
(satellite events on facial landmark localisation in static images and in
videos, in ICCV 2013 and ICCV 2015 respectively). These events have reached up
to 65 registered teams per event by now, thus generating significant impact in
the field. The number of downloads of data made available for these
competitions exceeds 2500 per data set. To be able to organise such events,
part of the data and labels need to be hidden temporarily from the outer
community. We plan to use parts of the DE-ENIGMA database to organise such
data-based research competitions.
## 4.6. Archiving and preservation (including storage and backup)
The DE-ENIGMA database will be stored on a data server hosted by the
Department of Computing, Imperial College London. The web-portal of the
database will be attached to the DE-ENIGMA project website. Both services will
continue to function indefinitely after the end of the project without additional
cost. As a fail-safe measure, an additional backup copy of the DE-ENIGMA
database will be created and saved in external hard-drives.
## 4.7. Data destruction policy
The central repository of DE-ENIGMA database will be maintained indefinitely.
However, it is inevitable that parts of the DE-ENIGMA data may be stored
temporarily at other locations. For instance, during the data collection
experiment, pieces of raw data may reside on the data capturing machines’
local disk before they can be transferred to the central repository. To
prevent unauthorized access to the DE-ENIGMA data, all local copies of the data
will be permanently deleted once they are no longer in use. In addition, all
disks used to store these temporary copies will be labelled. At the end of the
project, these disks will be formatted and filled with random data repeatedly
(~10 times) to render their previous data content unrecoverable.
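As an illustration of the repeated random-overwrite step, the sketch below wipes a single file by overwriting it with random data several times before deletion. Wiping a whole disk would target the block device instead, and this is an illustration rather than the project's actual wiping procedure; note also that on journaling file systems and SSDs, in-place overwriting is not guaranteed to reach every physical block.

```python
import os
from pathlib import Path

def wipe_file(path: Path, passes: int = 10) -> None:
    """Overwrite a file with random data several times, then delete it."""
    size = path.stat().st_size
    with path.open("r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)
                f.write(os.urandom(chunk))  # cryptographically random bytes
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())            # force this pass onto the disk
    path.unlink()

# Demo on a small throwaway file:
p = Path("temp_copy.raw")
p.write_bytes(b"sensitive recording data")
wipe_file(p)
```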
# 5\. Conclusion
The goal is for the DE-ENIGMA DB to become a publicly available, multilingual dataset of annotated atypical facial, bodily, vocal and verbal interactive behaviour recordings made in naturalistic settings, serving as a benchmark for efforts in automatic analysis of audio-visual behaviour in the wild.
# 6\. EULA
**End User License Agreement**
**DE-ENIGMA Database** (www.de-enigma.eu)
By signing this document, the user (he or she who will make use of the database or the database interface) agrees to the following terms.
With database, we denote both the actual data as well as the interface to the
database.
## 1\. Commercial use
The user may not use the database for any non-academic purpose. Non-academic
purposes include, but are not limited to:
* proving the efficiency of commercial systems
* training or testing of commercial systems
* using screenshots of subjects from the dataset in advertisements
* selling data from the dataset
* creating military applications
* developing governmental systems used in public spaces
## 2\. Responsibility
This document must be signed by a person with a permanent position at an
academic institution (the signee). Up to five other researchers affiliated
with the same institution for whom the signee is responsible may be named at
the end of this document which will allow them to work with this dataset.
## 3\. Distribution
The user may not distribute the database or portions thereof in any way, with
the exception of using small portions of data for the exclusive purpose of
clarifying academic publications or presentations. **Only data from
participants who gave consent to have their data used in publications and
presentations may be used for this purpose.** Note that publications will have
to comply with the terms stated in article 5.
## 4\. Access
The user may only use the database after this End User License Agreement
(EULA) has been signed and returned to the Centre for Research in Autism and
Education at UCL Institute of Education, University College London. The user
may return the signed EULA by traditional mail or by email in portable
document format (pdf).
The signed EULA can be sent to any of the following addresses:
Traditional mail:
Prof. Liz Pellicano
Centre for Research in Autism and Education (CRAE)
UCL Institute of Education
55-59 Gordon Square
London WC1H 0NU
United Kingdom
E-mail (pdf of EULA, after signing):
[email protected]
The user may not grant anyone access to the database by giving out their user
name and password.
## 5\. Publications
Publications include not only papers, but also presentations for conferences
or educational purposes.
**The user may only use data of subjects in publications if that particular
subject has explicitly granted permission for this. This is specified with
every database element.**
All documents and papers that report on research that use any of the DE-ENIGMA
Database will acknowledge this as follows:
“(Portions of) the research in this paper uses the DE-ENIGMA database
collected jointly by a European academic consortium consisting of Prof. Liz
Pellicano and her team at University College London, Prof. Evers and her team
of University of Twente, Prof. Maja Pantic and her team at Imperial College
London, Suncica Petrovic and her team at the Serbian Society for Autism, Prof.
Schuller and his team at the University of Passau, Prof. Sminchisescu and his
team at the Institute of Mathematics of the Romanian Academy, within the scope
of the ‘DE-ENIGMA: Multi-Modal Human-Robot Interaction for Teaching and
Expanding Social Imagination in Autistic Children’ project, financially
supported by the European Council under the European Council’s Horizon 2020
Work Programme (H2020-ICT-2015-688835) / Grant Agreement No. 688835”.
The user will send a copy of any document or papers that reports on research
that uses the DE-ENIGMA Database to Prof. Liz Pellicano or to
<[email protected]>.
## 6\. Academic research
The user may only use the database for academic research.
## 7\. Warranty
The database comes without any warranty. Professor Maja Pantic and the iBUG
Group at Imperial College London, who oversee the database, cannot be held
accountable for any damage (physical, financial or otherwise) caused by the
use of the database. The iBUG Group at Imperial College London will try to
prevent any damage by keeping the database virus free.
## 8\. Misuse
If, at any point, the administrators of the DE-ENIGMA database and/or the Centre for Research in Autism and Education and/or the iBUG Group at Imperial College London have reasonable doubt that the user is not acting in accordance with this EULA, s/he will be notified and will immediately be denied access to the database.
User: ___________________________________________
User’s Affiliation: _________________________________
User’s address: ____________________________________
User’s e-mail: _____________________________________
Additional Researcher 1______________________________
Additional Researcher 2______________________________
Additional Researcher 3______________________________
Additional Researcher 4______________________________
Additional Researcher 5______________________________
## Signature: Date/place
_______________________
**0667_TELMI_688269.md**
# 2 Introduction
As a project whose core concept is centred on the combination of multiple
modalities to assist in music education, TELMI will involve the acquisition
and management of multimodal data in several different scenarios (teacher
performances, student performances, interviews, questionnaires, etc).
It is in the interest of the consortium to have an open data policy to the
maximum degree affordable, both to standardise best practices in recording,
storage and data sharing, and to motivate and facilitate the advancement of
research in multiple fields.
Where traditional research data (such as training or benchmarking datasets for
machine learning algorithms) are concerned, sharing helps in improving the
performance and quality of research results, avoiding the duplication of
efforts associated with dataset creation and fostering collaboration across
institutions both within the EU and abroad.
The main objectives and goals of this deliverable are:
* To outline the potential types of datasets that will be publicly shared for the duration of the TELMI project. We anticipate the release of several datasets within the timespan of the TELMI project, and we expect their contents and purpose to vary greatly depending on the activity that produces them.
* To lay a common foundation for data management across the consortium and ensure interoperability of data & metadata among the partners. In order to minimize the effort needed to share the collected data, we must ensure that the data management practices of each member of the TELMI consortium are aligned, both within the consortium as well as with the current practices in the Music Education domain. To this end, this deliverable documents these practices and proposes a series of data formats and metadata standards.
* To gauge each partner’s willingness to openly share datasets, and catalogue different sharing strategies. While the value of open data sharing is undeniable, it is necessary to ensure that sharing practices are in line with the main objectives and strategic planning of the consortium partners regarding confidentiality when datasets may contain sensitive or personal information. For that purpose, each type of dataset is accompanied by the outline of the sharing strategy.
# 3 List of Prospective Datasets to be Shared
## 3.1 Raw Data Resulting from Multimodal Performances Acquisition
This consists of the raw data (motion capture, audio, video, and possibly
sensors) directly captured by the TELMI recording platform during music
performances by students, teachers and masters.
### 3.1.1 Description
The provisional overall architecture of the platform for multimodal recordings
is shown in the following figure (first release of the recording platform
expected at Month 8).
* The performer’s movements are captured by a Qualisys Motion Capture system endowed with thirteen cameras.
* Two further broadcast quality video cameras observe the scene, one from the front and one from the side.
* A Kinect for Windows v2 sensor further observes the scene from the front, providing video and depth map data.
* The performer wears a set of markers and rigid bodies composed of a fixed number of markers, tracked by the Qualisys Motion Capture System (see Deliverable D3.1 for more details).
* Tracking of the violin and of the bow is performed with real and virtual markers (see Deliverable D3.1 for more details).
* Microphones are placed both in the environment and on the music instrument (see Deliverable D3.1 for more details).
* A set of Inertial Measurement Units (IMUs) to measure hands and trunk movements may be included in case they are deemed relevant for experiments.
Synchronization is guaranteed by the EyesWeb platform 1 (see figure below).
EyesWeb generates the reference clock used by all the recorders. The generated
reference clock is sent to each device in a compatible format. In particular,
the Qualisys Motion Capture system receives the reference clock encoded in an
audio stream using the SMPTE format. Also the two broadcast video-cameras and
the _Audio recorder_ use SMPTE encoded as an audio signal. The _IMU recorder_
receives the reference clock via network, through the OSC protocol.
To guarantee synchronization, EyesWeb keeps track of every recorded frame or
sample, and of the timestamp when the data was received. Not all streams can
be hardware-synchronized (e.g., with a genlock signal). To afford this
problem, a software synchronization is performed by EyesWeb that storages the
absolute time at which the data was received. This information is then used
when playing back the data. IMU sensors or Kinect are examples of devices
which are synchronized in this way. Further recordings will also be carried
out with cheaper motion capture technologies (e.g., Polhemus, see Deliverable
D3.1) and with low-cost devices (e.g., Kinect and common video cameras) in
order to enable downscaling of prototypes to low cost devices.
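The idea behind this software synchronization can be pictured with a small sketch: each non-genlocked stream stores the absolute time at which every sample arrived, and playback selects the sample whose receive time is nearest to a requested instant. The class and method names below are illustrative, not EyesWeb internals.

```python
import bisect
import time

class SoftSyncStream:
    """Stores samples together with the absolute time they were received."""

    def __init__(self, name):
        self.name = name
        self.timestamps = []  # absolute receive times, monotonically increasing
        self.samples = []

    def on_data(self, sample):
        self.timestamps.append(time.time())
        self.samples.append(sample)

    def sample_at(self, t):
        """Return the sample whose receive time is closest to t (for playback)."""
        i = bisect.bisect_left(self.timestamps, t)
        if i == 0:
            return self.samples[0]
        if i == len(self.timestamps):
            return self.samples[-1]
        before, after = self.timestamps[i - 1], self.timestamps[i]
        return self.samples[i] if after - t < t - before else self.samples[i - 1]

# Demo: an IMU-like stream receiving three samples
imu = SoftSyncStream("imu")
for value in (0.1, 0.2, 0.3):
    imu.on_data(value)
    time.sleep(0.01)
print(imu.sample_at(imu.timestamps[1]))  # -> 0.2
```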
Recordings will follow the ethical procedures established in the TELMI
Consortium. Where data is to be used in the public database then performers
will provide both research consent and release copyright ownership of the
recordings to the TELMI consortium for use in the public database, project
dissemination, and marketing. Musicians will have the option to release
copyright under the condition of anonymity (with identifying features removed
from video recordings), otherwise they must explicitly grant the project the
use of their likeness and identifying information (see also Sections 3.3.3 and
3.4.3).
### 3.1.2 Types of Data (Generated and Collected) and use of Standards
The following types of data will be produced during the recording sessions:
* MoCap data from the Qualisys and the Polhemus motion capture systems
* Videos and ambient audio from two professional video cameras
* Instrument audio from the player’s instrument
* Video, Audio, IR, Depth Information and MoCap data from a Kinect for Windows v2 sensor
* Optional IMU data (Accelerometer, Gyroscope, Magnetometer) from XOSC IMU Sensors
#### 3.1.2.1 MoCap Data
Mocap Data will be saved and stored as QTM and TSV files: the QTM format is a
binary and proprietary format by Qualisys, whereas TSV is a plain text format
that can be read by any text editor and is used in EyesWeb XMI.
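As an example, a plain-text TSV export of this kind can be read with a few lines of Python. The sketch assumes a simple layout with a single header row of column names followed by tab-separated numeric rows; actual Qualisys exports may carry additional header metadata, and the column name shown is hypothetical.

```python
import csv

def read_mocap_tsv(path):
    """Read a tab-separated mocap file into a list of per-frame dicts."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter="\t")
        # Assumes every column after the header holds numeric values.
        return [
            {name: float(value) for name, value in row.items()}
            for row in reader
        ]

# frames = read_mocap_tsv("recording.tsv")
# frames[0]["Marker1_X"]  # hypothetical column name
```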
#### 3.1.2.2 Video and ambient audio
The Video and audio streams will be stored using the following encoding:
* AVI file format
* 1280x720 50FPS video with MPEG4 codec
* 320 Kbps stereo audio with MP3 codec (ambient audio in the first channel and encoded SMPTE signal in the second channel)
#### 3.1.2.3 Instrument audio
Audio streams will be stored as stereo AIFF or WAV files containing the
instrument signal in the first channel and the encoded SMPTE signal in the
second channel.
#### 3.1.2.4 Video, Audio, IR, Depth and Mocap data from Kinect for Windows v2 sensor
The Kinect Video will be stored as a 1920x1080 30FPS (variable fps) AVI video
file with mpeg4 codec. IR and Depth streams are stored as a 512x424 30FPS AVI
video file with mpeg4 codec; audio will be stored as a single channel AIFF
file; MoCap Data will be stored as TSV files.
#### 3.1.2.5 IMU data (Accelerometer, Gyroscope, Magnetometer) from XOSC Sensors
In case recordings include IMU Data, this will be stored as plain text files
containing timestamps and data streams of each sensor.
### 3.1.3 Data Sharing and reuse
Raw Data will be stored internally by UNIGE. Cleaned and ready-to-use data
will be made available for public access. EyesWeb patches will be made
available to playback the publicly-available data, and to convert it to other
commonly used formats. As an example, the data files (IMU or MOCAP sensors)
can be exported to the CSV format, to be imported in the RepoVizz database.
The audio-video files can be converted to different formats (e.g., MOV, MP4,
MPEG).
### 3.1.4 Archiving and Preservation
Raw data will be stored internally on a dedicated NAS server. The NAS is configured for RAID 5 redundancy, tolerating a single disk failure with no data loss.
Moreover, a copy of the data is preserved and archived on an offline portable
hard-disk.
## 3.2 Music Education Datasets and Users Feedback
Over the course of the TELMI project, data will be collected for the purpose
of guiding and implementing the pedagogical framework of the project and
evaluating the efficacy of the TELMI systems. These efforts will be led by the
RCM via TELMI Work Package 2: Music Performance Pedagogy.
### 3.2.1 Description
In establishing the pedagogical framework for the project, data will be
collected from violin students and teachers regarding their current teaching
and learning practices and use of technology and where technology may be
developed to address the challenges they face. These data will be collected,
analysed, stored, and disseminated following standard research practices
outlined by the British Psychological Society (BPS) and their Code of Human
Research Ethics, including guidelines outlining the obtaining of informed
consent and of maintaining participant anonymity. Where data is to be used in
the public database then musicians will be asked to sign copyright ownership
of the files and, if desired, permission to use identifying information, to
the TELMI partners as described below.
### 3.2.2 Types of Data (Generated and Collected) and use of Standards
The following data types will be collected:
* **Audio/video recordings (interviews and workshops):** recorded via hand-held recorders into .mp3/.mp4 format. Transcribed to text file (.doc) by project partners or by external services (e.g. www.rev.com).
* **Consent/copyright forms:** delivered, signed, collected, and securely stored in hardcopy.
* **Recordings (performance):** recordings of performance via audio, video, or motion capture will be processed as described in Sections 3.2 and 3.4.
* **Questionnaires:** collected in hardcopy or electronically via the online platform SurveyMonkey ( _www.surveymonkey.com_ ). The first of these questionnaires can be found in Appendix B of D2.1, Review of Violin Methods with Complementing Review of Technologies. Data will be stored as .xls, with quantitative data processed via IBM SPSS and qualitative data via NVivo.
* **Violin exercises:** collected as electronic PDFs, converted to .xml format for use in the public database (see 3.4). Exercises will be drawn primarily from the public domain (where composers have been deceased for a period exceeding 70 years, following EU copyright regulations) and, where required, licensing purchased from the publishers.
### 3.2.3 Data Sharing and reuse
A clear division will be maintained between data collected for research
purposes and data intended for public users of the TELMI system.
1. Research Data only: following the guidelines of the BPS, consent forms approved by the Conservatoires UK Research Ethics council will be delivered to participants that guarantee that their anonymity will be maintained within data collected. They will be informed that these data can be used within and in the public dissemination of the project, but all identifying information will be removed. This will include questionnaires and recordings of workshops and interviews.
2. Where audio and video recordings are collected to be used for public dissemination in the database, the musicians will provide both research consent and release copyright ownership of the recordings to the TELMI consortium for use in the public database, project dissemination, and marketing. Copyright forms will be adapted from those used by the RCM Studios. Musicians will have the option to release copyright under the condition of anonymity (with identifying features removed from video recordings), otherwise they must explicitly grant the project the use of their likeness and identifying information.
Where possible, research data will be collected and disseminated following the
open data policy of the
Royal Society. 2 Empirical data will be made publicly available in an
anonymized format through the TELMI Public Database (see 3.4 below) or, if
that is not suited for purpose, a publicly available repository such as Dryad
or Figshare. The data will not made publically available in cases where the
nature of the information coven might compromise the participants' anonymity.
In such cases, we would consider releasing extracts of the data to third
parties upon request (e.g. for verification).
### 3.2.4 Archiving and Preservation
Data will be stored on the project databases as outlined in 3.4 below.
## 3.3 Public Database Repository Data
During the TELMI project a set of multi-modal recordings of performances will
be captured including teachers and students. This data will be used in the
TELMI prototypes as well as to refine analysis algorithms developed during the
project. The raw data acquired from performances will be analyzed and enriched
with feature extraction techniques to build the public datasets to be hosted
online.
For this public database, the repovizz platform [1] will be mainly used.
Repovizz (http://repovizz.upf.edu) is an integrated online system capable of
structural formatting and remote storage, browsing, exchange, annotation and
visualization of synchronous multimodal, time-aligned data. Motivated by a
growing need for data-driven collaborative research, repoVizz aims to resolve
commonly encountered difficulties in sharing or browsing large collections of
multi-modal data. At its current state, repovizz is designed to hold
timealigned streams of heterogeneous data: audio, video, motion capture,
physiological signals, extracted descriptors, annotations et cetera. Most
popular formats for audio and video are supported, while Broadcast WAVE or CSV
formats are adopted for streams other than audio or video (e.g., motion
capture or physiological signals). The data itself are structured via
customized XML files, allowing the user to (re-) organize multi-modal data in
any hierarchical manner, as the XML structure only holds metadata and pointers
to data files. Datasets are stored in an online database, allowing the user to
interact with the data remotely through a powerful HTML5 visual interface
accessible from any standard web browser; this feature can be considered a key
aspect of repovizz since data can be explored, annotated or visualized from
any location or device. Data exchange and upload/download is made easy and
secure via a number of data conversion tools and a user/permission management
system. The repovizz platform is physically hosted at an internal server in
the DTIC-UPF infrastructure.
### 3.3.1 Description
All datasets in repovizz (public database) include a description field that
can contain the information mentioned above. For additional information, a web
page will be generated containing more structured information and additional
fields of all datasets generated during the project containing professional
musicians pieces. This web page will contain cross links to the datasets
stored in repovizz.
### 3.3.2 Types of Data (Generated and Collected) and use of Standards
The data gathered will mainly consist of music exercises and pieces that
are commonly used as learning material for violin training. RCM will be
responsible for selecting the pieces and exercises to be recorded and the
professional musicians that will record them.
For audio data, professional microphones and bridge pickups will be used; for video, low-cost and professional cameras will be used; additionally, MoCap data will be acquired using an electromagnetic field sensor and an optical motion capture system.
Once the different data streams from the different modalities are recorded, they need to be time-synchronized with each other and formatted to be compatible with the formats accepted in repovizz. The following formats are used
for each type of data:
* Audio: any common audio format that can be decoded by ffmpeg (wav, mp3, ogg, flac, aac, etc.). Once uploaded to repovizz, original audio streams are kept on the server but are additionally converted to WAV files at a sampling rate of 44.1 kHz and 16 bits for audio feature extraction, and web-friendly mp3 and ogg files are generated.
* Video: any common video format that can be decoded by ffmpeg (mp4, avi, mkv, mov, webm, etc.). Once uploaded to repovizz, original video streams are kept on the server but are additionally converted to webm and mp4 at a resolution of 720p for compatibility with standard HTML5 browsers.
* Time varying Signals / Descriptors: csv containing a header line as defined in repovizz tutorial [2]
* Musical Scores: music xml (compatible with musescore open source software)
* Mocap Data: multiple csv files for each marker coordinate as defined in repovizz tutorial [2]
* Annotations: txt files containing lines with time and label information as defined in repovizz tutorial
[2]
### 3.3.3 Data Sharing and reuse
In the case of TELMI, the public database stored in the repovizz
infrastructure will serve as a sharing and visualization platform, allowing
third parties to download data as well as visualize it in a user-friendly way by simply opening a URL in a browser.
Data sharing and reuse of data will be guaranteed once the data is uploaded to
the public database (repovizz). As repovizz is an online, web-based solution, it is easy to share datasets and individual streams within each dataset.
A RESTful API [3] allows users to access and use all data stored in repovizz programmatically.
Using the API users can browse, search, list, and download datasets and
individual streams contained inside.
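As a sketch of such programmatic access, a client could query the API over HTTP as below. The base URL, endpoint paths and parameters are placeholders for illustration only; the authoritative routes are those documented in the repovizz API reference [3].

```python
import requests

BASE = "http://repovizz.upf.edu/api"  # hypothetical base URL

def search_datasets(query: str) -> dict:
    """Search public datasets matching a keyword (illustrative endpoint)."""
    response = requests.get(f"{BASE}/search", params={"q": query}, timeout=30)
    response.raise_for_status()
    return response.json()

def download_stream(dataset_id: str, stream_id: str, out_path: str) -> None:
    """Download a single data stream from a dataset (illustrative endpoint)."""
    url = f"{BASE}/datasets/{dataset_id}/streams/{stream_id}"
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)

# results = search_datasets("violin")
```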
### 3.3.4 Archiving and Preservation
All data acquired during the TELMI project and uploaded to the public database (repovizz) will be made available no later than 6 months after project completion. This embargo period is requested to allow time for additional analysis and for further publication of research findings. The data will not be deleted after this period, as the repovizz platform may be further maintained with other funds.
## 3.4 Additional Guidelines for the Data Management Plan
Besides simply providing a sharing mechanism, data management in H2020 poses a
series of requirements for the access mechanisms to the data, as well as the
documentation and characteristics of the data itself. Below we outline our
plan to satisfy these requirements.
### 3.4.1 Discoverability and Accessibility
All data sets stored on the public database are searchable online, and
uniquely identified using a randomly generated ID. Datasets in the public database can also be cross-linked through a unique URL.
This unique URL can be included in related publications, deliverable
documents, or detailed description documents or web pages that explain their
contents and meaningful context information.
Barring specific restrictions imposed by the TELMI partners to ensure that
there is no conflict with their strategic planning, the aforementioned
datasets will be released under a Creative Commons (CC) license (specific CC
license details will be analysed individually as they depend on the contents
of each dataset).
### 3.4.2 Additional Archiving and Preservation Requirements
The Public Database will be hosted in DTIC - UPF server’s infrastructure, and
it takes advantage of the UPF’s storage and backup facilities:
* Data is backed up on a type-class basis: mission-critical (user’s data, virtual machines, scientific output, etc) and static (scientific datasets, intermediate files, HPC filesystems, etc).
  * Mission-critical data is backed up:
    * three times per day, locally (00:00, 08:00, 16:00), retained for three days. Granularity: 9 (3x3);
    * once per day, remotely (00:00), to the Jaume Primer remote datacenter, retained for two weeks. Granularity: 15 (15x1).
  * Static data is backed up:
    * twice per day (00:00, 18:00), locally, retained for one week. Granularity: 7 (7x1).
* Backups are processed automatically based on snapshot technology on a time-scheduled basis.
* Standard recovery processes are available: Samba sharing (previous versions), NFS sharing, Qtree and volume restore.
The raw data collected and archived at UNIGE are stored on a dedicated NAS
server, with RAID 5 configuration. Data is backed up on an offline hard disk for
additional redundancy.
### 3.4.3 Compliance with Ethics Requirements and Protection of Personal Data
All the TELMI consortium members are well aware of the ethical aspects of the
project, and will take into account rules and legislation at national and
institutional level in their respective countries when collecting potentially
sensitive personal data. In order to enable participants to make informed
decisions, detailed documentation (e.g. informed consent and/or terms of
participation) will be prepared on a case-by-case basis, outlining the
information that will be gathered, the intended use within the project, and
any applicable risks associated with a potential public dissemination (e.g.
Video data reveals the identity of the user). All data will only be stored,
used and/or shared when participants and/or legal entities to which ownership
of the data can be credited have given their express informed consent for
publication; participants and/or legal entities will be given the option to
allow the public dissemination of the data and/or to disseminate the data in
an anonymous or traceable way. See section 3.2 Music Education Datasets and
Users Feedback above for further details regarding the collection of data from
users.
The treatment of personal information and sensitive data will be done in
accordance to the Ethics and Security specifications outlined in section 5 of
the TELMI proposal. All personal information and data intended to be private
will be stored in an administrative database housed in a secure location with
appropriate protections by the partner(s) responsible for its collection
and/or generation. All participants will be assigned a unique ID, by means of
which the anonymized data (when applicable) will be shared within the
consortium. All the master records and traceable tokens that could enable the
mapping between the anonymized data and the real identity of a given user will
be protected by appropriate safety measures and only accessible to the
principal investigator of TELMI within the partner(s) institution(s)
responsible for the data collection and/or generation.
# 4 Conclusion
This deliverable presents a series of guidelines and best practices regarding
the Data Management Plan within the consortium, one for each type of dataset
that we are planning to release within the timeframe of TELMI. While future
updates of the Data Management Plan are expected to add specificity and depth,
this deliverable lays the foundation for data collection, generation and
management practices, as well as the sharing conditions of the datasets.
**0669_COHESIFY_693427.md**
Introduction
1\. Data Summary
2\. FAIR data
2.1. Making data findable, including provisions for metadata
2.2. Making data openly accessible
2.3. Making data interoperable
2.4. Increase data re-use (through clarifying licences)
3\. Allocation of resources
4\. Data security
5\. Ethical aspects
# Introduction
This deliverable describes the Data Management Plan (DMP) for the COHESIFY
project, funded by the EU’s Horizon 2020 Programme under Grant Agreement
693427. The purpose of the DMP is to set out the main elements of the data
management policy that will be used by the consortium with regard to the
datasets that will be generated by the project. The DMP lists the COHESIFY
datasets and describes the key data management principles, notably in terms of
data standards and metadata, sharing, archiving and preservation.
This is the second version of the COHESIFY DMP, which can be updated further
throughout the course of the project. It draws on Horizon 2020 guidance and
institutional guidance of the lead partner (European Policies Research Centre,
University of Strathclyde). The main reasons for this update are to take account of the changes to European Commission guidance, which provide a new
structure; and to reflect a decision to opt-out of the open data provisions
under two datasets (Interviews with stakeholders, and focus groups with
citizens).
The structure of the DMP is as follows. First, the dataset references and
names are specified, including the responsible partner/s and work package/s.
The next section describes each dataset, including the data sources, file
formats and estimated volume to plan for storage and sharing. The third
section sets out the data standards and metadata approach. Data sharing,
archiving and preservation provisions are addressed in sections 5 and 6
respectively. The final section sets out the ethical considerations.
# Data Summary
COHESIFY will produce nine datasets during the lifetime of the project. The
data is quantitative and qualitative in nature and will be analysed from a
range of methodological perspectives for project development and scientific
purposes with results disseminated through scientific conferences and
publications. A list of the datasets is provided in table 1, identifying the
name, content, partner responsible for generating the data and associated work
package.
Table 1: COHESIFY Datasets
| No. | Name | Dataset (DS) file name | Responsible partner | Work Package |
| --- | --- | --- | --- | --- |
| 1 | Territorial context | COHESIFY1_POLIMI_Territorial_context | POLIMI | 2 |
| 2 | Territorial typology | COHESIFY2_DUT_POLIMI_Territorial_typology | DUT, POLIMI | 2 |
| 3 | Implementation | COHESIFY3_EUREG_Implementation | EUREG | 3 |
| 4 | Party manifestos | COHESIFY4_MANN_Party_manifestos | MANN | 2 |
| 5 | Stakeholder survey | COHESIFY5_EUREG_CUT_Stakeholder_survey | EUREG, CUT | 3, 4 |
| 6 | Interviews | COHESIFY6_EUREG_CUT_interviews | EUREG, CUT | 3, 4 |
| 7 | Media frames | COHESIFY7_CUT_Media_frames | CUT | 4 |
| 8 | Citizens survey | COHESIFY8_STRATH_Citizens_survey | STRATH | 5 |
| 9 | Focus groups | COHESIFY9_STRATH_Focus_groups | STRATH | 5 |
The COHESIFY project will apply a mixed methods approach collecting both
qualitative and quantitative data. Primary data will be mainly collected in
the case study countries/regions (through surveys, interviews and focus
groups), while secondary data will be collected from publicly available EU and
national sources (such as Eurostat/Eurobarometer, academic and policy
literature, party political programmes and policy documents and online media).
A brief description of each dataset is provided in table 2, including the data
source, file formats and estimated volume to plan for storage and sharing.
Table 2: COHESIFY dataset descriptions

| Dataset | Description | Source | File format | Volume |
| --- | --- | --- | --- | --- |
| 1. Territorial context | A dataset of territorial contextual variables for analysis and to inform the case studies using public datasets. | Public datasets | CSV, DTA, SAV | 0.5 MB |
| 2. Territorial typology | A territorial typology for analysis and to inform the case studies using public datasets. | Public datasets | CSV, DTA, SAV | 0.5 MB |
| 3. Implementation | A dataset of territorial funding and implementation data to inform the case study analysis using public datasets. | Public datasets | CSV, DTA, SAV | 0.5 MB |
| 4. Party manifestos | A dataset of political programmes (e.g. election manifestos, coalition agreements) constructed to analyse the framing of Cohesion policy at the regional level, using an existing database and publicly available data. | Public datasets | CSV, DTA, SAV | 10 MB |
| 5. Stakeholder survey | An online survey to assess stakeholders’ views of Cohesion policy implementation, performance and communication in the case study regions/countries, using a semi-structured questionnaire. | Original survey | CSV | 3 MB |
| 6. Interviews | Interviews to assess stakeholder views of Cohesion policy implementation, performance and communication in the case study regions/countries. | Original interviews | PDF, Word DOC | 20 MB |
| 7. Media frames | A dataset of newspaper articles focusing on Cohesion policy (regional/national/European), extracted through a crawling technique to analyse the media framing of Cohesion policy in relation to citizens’ attitudes to the EU. | Newspaper articles | PDF | 20 MB |
| 8. Citizens survey | A representative citizens survey conducted in each of the case study regions to measure perceptions of Cohesion policy and attitudes to and identification with the EU. | Original survey | CSV, DTA, SAV | 20 MB |
| 9. Focus groups | Focus groups to explore citizens’ perceptions of Cohesion policy and identification with the EU in each of the case study regions. | Original focus groups | PDF, Word DOC | 15 MB |
# FAIR data
## Making data findable, including provisions for metadata
The main purpose of the data collection is to assess the impact of Cohesion
policy on citizens’ attitudes to the EU. The findings will be made available
via the project deliverables, website and scientific publications and will be
of use to academics and policymakers with an interest in EU Cohesion policy,
public opinion and communication.
The top level folder for each dataset will be named according to the following
convention syntax:
* ProjectAcronymDatasetID_ResponsiblePartner_DatasetName
* e.g. COHESIFY1_POLIMI_Territorial_context
All dataset names have been listed in Table 1, in the Data Summary section.
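A small helper makes the convention concrete (a sketch; the function simply concatenates the three parts shown above):

```python
def dataset_folder_name(acronym: str, dataset_id: int, partner: str, name: str) -> str:
    """Build a top-level folder name following
    ProjectAcronymDatasetID_ResponsiblePartner_DatasetName."""
    return f"{acronym}{dataset_id}_{partner}_{name.replace(' ', '_')}"

# Reproduces the example from the text:
assert dataset_folder_name("COHESIFY", 1, "POLIMI", "Territorial context") == \
    "COHESIFY1_POLIMI_Territorial_context"
```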
DOIs will be assigned to datasets for effective and persistent citation. The
DOIs can be used in any relevant publications to direct readers to the
underlying dataset.
The COHESIFY project aims to collect and document the data in a standardised
way to ensure that, at the end of the project, the datasets can be understood,
interpreted and shared in isolation alongside accompanying metadata and
documentation. The specific metadata contents, formats and internal
relationships will be defined in future versions of the COHESIFY DMP. The
minimum metadata elements will be consistent with the ‘Datacite’ metadata
schema. Specific considerations for each dataset are described in Table 3.
Table 3: Data standards and metadata
| Dataset | Standards and metadata |
| --- | --- |
| 1. Territorial context | Territorial datasets will be collected from various websites for analysis in WP2. The dataset will be finalised in January 2017. The metadata to be created is to be confirmed. |
| 2. Territorial typology | Territorial datasets will be collected from various websites and integrated for analysis in WP2 and WP3. The dataset will be finalised in January 2017. The metadata to be created is to be confirmed. |
| 3. Implementation | Territorial datasets will be collected from various websites and integrated for analysis in WP3. The dataset will be finalised in January 2017. The metadata to be created is to be confirmed. |
| 4. Party manifestos | A dataset of political programme documents in the case study countries will be constructed using existing databases and additional data collected by the consortium. Existing and suitable standards applied in political science will be used. The party manifesto data will be collected by November 2016. The metadata to be created is to be confirmed. |
| 5. Stakeholder survey | An online survey of Cohesion policy stakeholders will be undertaken in the selected case study countries/regions (WP3 and 4). The data collection tool will contain traditional survey-type questions, such as Likert items, but also open-ended questions. The stakeholder survey will be conducted between January and March 2017. The metadata to be created is to be confirmed. |
| 6. Interviews | Semi-structured interviews will be conducted by each partner with Cohesion policy stakeholders (WP3 and 4). The interviews can be conducted face-to-face or by telephone/Skype. The data will be held in the form of qualitative and anonymized interview transcripts typed up according to agreed standards in Word or PDF documents. The interviews will be conducted between January and October 2017. The metadata to be held is to be confirmed. |
| 7. Media frames | Data will be selected and extracted from the Lexis-Nexis database to build a random stratified sample of newspaper articles (regional, national, European) for framing analysis. The selection of newspaper articles will be conducted during the period September–December 2016. The metadata to be held is to be confirmed. |
| 8. Citizens survey | A survey of citizens in the case study countries will be undertaken by a specialist survey company adhering to international market research standards. Respondents aged 18-65 will be chosen from standard list-assisted random digit dialling (RDD) and interviewed using a telephone interviewing (CATI) technique. The overall sample size will be 9000, with equally-sized sub-samples of 500 in each region covered in the case studies (16-20 regions in total, in 10 Member States). The survey should be carried out between May and July 2017. The metadata to be held is to be confirmed. |
| 9. Focus groups | Focus groups will be organised by each partner in the case study regions (3-5 groups in 16-20 cases, with 6-8 participants per group). The current preference for recruitment is random selection based on snowball sampling. The principle of segmentation will be applied to control/match the composition of participants and facilitate discussion. The dataset will comprise qualitative and anonymized transcripts of the focus groups using agreed formats and standards. The focus groups will be conducted during May–November 2017. The metadata to be held is to be confirmed. |
## Making data openly accessible
Data will be made accessible and available for re-use and secondary analysis,
after taking account of data confidentiality, anonymity and protection
requirements. Horizon 2020 guidance on DMPS includes an opt-out option for
open data requirements, which can apply to all or part of the data, under the
following circumstances:
* participation is incompatible with the need for confidentiality in connection with security issues;
* participation is incompatible with rules on protecting personal data;
* participation would mean that the project’s main aim might not be achieved;
* the project will not generate/collect any research data; or
* there are other legitimate reasons.
In the COHESIFY project, an opt-out of the open data provisions will apply to
two datasets:
* DS4 (Interviews with stakeholders). Open access to interview transcripts is incompatible with the need for confidentiality and protecting anonymity, and would risk the achievement of the project’s aims. Even with the anonymisation of direct identifiers (names), the participants could still be easily identifiable given the small number of individuals and/or types of actors represented in monitoring committees (e.g. managing bodies, NGOs, trade unions, local government associations). Further, the interview questions address sensitive topics, including illegal activities such as mismanagement or fraudulent use of public resources, and the participants’ organisational role in increasing citizens’ political support and identification with the EU. As a result, open access to this data is likely to reduce participation in the project and the reliability of responses, and to hinder the achievement of the project’s goals unless complete confidentiality is granted.
* DS7 (Focus groups with citizens). Open access to focus group data is incompatible with the need for confidentiality and protecting anonymity. Key topics under investigation are sensitive, such as citizens’ territorial identities (including ethnicity/race) and political opinions about the EU (including illegal activities such as mismanagement or fraudulent use of public resources). The removal of identifiers from audio recordings is also impractical.
The remaining original survey datasets will be anonymised (DS5 and DS8). A
decision will be taken by the project steering committee as to the appropriate
length of time after project completion for granting access to the research
data. During embargo periods, information about the restricted data will be
published in the data repository, and details of when the data will become
available will be included in the metadata.
The datasets will be shared and preserved via the University of Strathclyde’s
research information management system (‘PURE’). Data will be made openly
available via the ‘KnowledgeBase’ website, the public web portal of research
outputs that are stored in PURE (http://pure.strath.ac.uk/portal/).
The collected data will be used for scientific evaluation and findings will be
published through scientific channels. Open access to these publications will
be provided depending on the form and cost of the open access.
All of the data is easily accessible through widely available software.
## Making data interoperable
The Zenodo metadata schema and Strathclyde’s Pure metadata schema both include
the minimum DataCite metadata elements (Title, Publisher, PublicationYear,
Contributor, DOI). Furthermore, dataset records in both repositories will
include keywords and a free text description. The Pure metadata schema also
maps to the minimal Dublin Core metadata standard.
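To make the mapping concrete, the sketch below shows what a minimal dataset record carrying these elements might look like. The field names follow the DataCite elements listed above; the DOI, keywords and description values are purely illustrative placeholders, not a real published record.

```python
# Minimal sketch of a dataset record exposing the DataCite elements named
# above, plus the keywords and free-text description that both repositories
# add. All values are illustrative placeholders.
record = {
    "Title": "COHESIFY citizens survey (anonymised)",
    "Publisher": "University of Strathclyde",
    "PublicationYear": 2018,
    "Contributor": "COHESIFY consortium",
    "DOI": "10.5281/zenodo.0000000",  # placeholder identifier
    "keywords": ["Cohesion policy", "EU identification", "CATI survey"],
    "description": "Anonymised survey of citizens in the case study regions.",
}
```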
## Increase data re-use (through clarifying licences)
Open data will be shared under a CC-BY licence to foster the widest possible
reuse.
Open data supporting published articles will be made available for reuse no
later than the date of publication of the article. Other data deemed to be of
value will be shared within 3 months of the end of the project unless a
restriction is required.
Data will be made available for a minimum of 10 years. Where possible, the
University of Strathclyde will update file formats to avoid file obsolescence
over time.
# Allocation of resources
The datasets are small in volume and Strathclyde’s data repository is free at
the point of use so that no costs for archiving and preservation need to be
considered.
STRATH is responsible for general coordination and supervision of the data
management plan. Datasets will be uploaded by STRATH as the project
coordinator. Datasets will be uploaded at the end of the project, within 3
months of the closing of project activities (M27). Each partner is responsible
for preparing their datasets in accordance with the FAIR principles envisaged
in the DMP.
# Data security
Data will be transferred between partners using Strathcloud
(http://www.strath.ac.uk/it/services/strathcloud/).
Data stored on the University of Strathclyde’s storage is dual sited and
replicated between two data centres which are physically separated by several
hundred metres. Data links between the data centres are provided by dual
disparate fabrics, providing added resilience. Additionally, the central IT
service provides tape-based backup to a third and fourth site.
Data security is provided by access controls defined at a user level. The
University invested in new and upgraded storage in 2014 and the systems are in
line with existing best practices for robust information systems architecture.
Data will be archived and preserved in the University of Strathclyde’s PURE
information management system. This provides options for making data openly
available and for restricting access to other data as required.
Data in Pure will be preserved in line with the University Data Deposit Policy
(UOS 2014). The data will be preserved indefinitely and there are currently no
costs for archiving data in Pure. The PI will have overall responsibility for
implementing the data management plan. The University’s data management
personnel will advise on aspects of data archiving and preservation.
# Ethical aspects
COHESIFY will comply with established EU regulations and corresponding
national laws on data privacy, confidentiality and consent. COHESIFY has
gained ethical approval from the University of Strathclyde’s ethics committee
and these ethical principles, described in a previous ethics deliverable, will
be followed in implementing the data management plan.
People will be advised that by participating in the research they are
consenting to making data openly available. Where possible and necessary,
participants will be given the opportunity to participate in the research
without their related anonymous data being made openly available.
# Background
The purpose of this Data Management Plan (DMP) is to provide an analysis of
the main elements of the data management policy that will be used by the
project with regard to all the datasets that will be generated by the project.
The DMP is not a fixed document, but will evolve during the lifespan of the
project.
The DMP will address the points below on a dataset by dataset basis and should
reflect the current status of reflection within the consortium about the data
that will be produced.
The approach to the DMP follows that outlined in the “ _Guidelines on Data
Management in Horizon 2020_ ” (Version 2.1, 15 February 2016).
**Data set reference and name:** Identifier for the data set to be produced.
**Data set description:** Description of the data that will be generated or
collected, its origin (in case it is collected), nature and scale and to whom
it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse.
**Standards and metadata:** Reference to existing suitable standards of the
discipline. If these do not exist, an outline on how and what metadata will be
created.
**Data sharing:** Description of how data will be shared, including access
procedures, embargo periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use, and
definition of whether access will be widely open or restricted to specific
groups. Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
**Archiving and preservation (including storage and backup):** Description of
the procedures that will be put in place for long-term preservation of the
data. Indication of how long the data should be preserved, what is its
approximated end volume, what the associated costs are and how these are
planned to be covered.
# Admin Details
**Project Title:** Audio Commons: An Ecosystem for Creative Reuse of Audio
Content
**Project Number:** 688382
**Funder:** European Commission (Horizon 2020)
**Lead Institution:** Universitat Pompeu Fabra (UPF)
**Project Coordinator:** Prof Xavier Serra
**Project Data Contact:** Sonia Espi, [email protected]
**Project Description:** The democratisation of multimedia content creation
has changed the way in which multimedia content is created, shared and
(re)used all over the world, yielding significant amounts of user-generated
multimedia resources, big part shared under open licenses. At the same time,
creative industries need to reduce production costs in order to remain
competitive. There is, therefore, an opportunity for creative industries to
incorporate such content in their productions, but there is a lack of
technologies for easily accessing and incorporating that type of content in their
creative workflows. In the particular case of sound and music, a huge amount
of audio material like sound samples, soundscapes and music pieces, is
available and released under Creative Commons licenses, both coming from
amateur and professional content creators. We refer to this content as the
'Audio Commons'. However, there exist no practical ways in which Audio Commons
can be embedded in the production workflows of the creative industries, and
licensing issues are not easily handled across the production chain. As a
result, most of this content remains unused in professional environments. The
aim of this project is to create an ecosystem of content, technologies and
tools to bring the Audio Commons to the creative industries, enabling
creation, access, retrieval and reuse of Creative Commons audio content in
innovative ways that fit the requirements of the use cases considered (e.g.,
audiovisual, music and video games production). Furthermore, we tackle rights
management challenges derived from the content reuse enabled by the created
ecosystem and research about emerging business models that can arise from it.
Our project will benefit creative industries by providing new and innovative
creativity supporting tools and reducing production costs, and will benefit
content creators by offering a channel to expose their works to professional
environments and to allow them to (re)licence their content.
# Dataset Information
Individual Dataset Information
**Data set reference and name**
DS 2.1.1: Requirements survey
## Data set description
Results from survey of creative industry content users in Task 2.1: "Analysis
of the requirements from creative industries". This data supports Deliverable
D2.1: "Requirements report and use cases", and has over 660 responses. WP: WP2
/ Task: Task 2.1 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Text document (CSV file)
**Data sharing**
Anonymized form to be made available with DOI.
## Archiving and preservation (including storage and backup)
To be uploaded on Zenodo or other suitable research data repository. Estimated
final size (Bytes): 700K
DS 2.2.1: Audio Commons Ontology
## Data set description
Definition of Audio Commons Ontology, the formal ontology for the Audio
Commons Ecosystem. Data form of D2.2: Draft ontology specification and D2.3:
Final ontology specification. WP: WP2 / Task: Task 2.2 Responsible: QMUL
**Standards and metadata**
OWL Web Ontology Language
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Stored on project document server (& GitHub).
Estimated final size (Bytes): 10K
DS 2.3.1: ACE interconnection evaluation results
## Data set description
Results of evaluation of technological solutions for the
orchestration/interconnection of the different actors in the Audio Commons
ecosystem. Supporting data for deliverable D2.5: Service integration
technologies.
[Depending on the form of evaluation, this dataset may not be produced]
WP: WP2 / Task: Task 2.3 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Project document store.
Estimated final size (Bytes): 100K
DS 2.5.1: ACE Service evaluation results
## Data set description
Results of continuous assessment of ontologies, API specification and service
orchestration through the lifetime of the project, including API usage
statistics.
WP: WP2 / Task: Task 2.5 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Project document store.
Estimated final size (Bytes): 1M
DS 2.6.1: ACE Service
## Data set description
Freesound and Jamendo content exposed in the Audio Commons Ecosystem. Not
strictly a “dataset”, rather a service providing access to data. WP: WP2 /
Task: Task 2.6 Responsible: MTG-UPF (& Jamendo)
**Standards and metadata**
Audio Commons Ontology
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
Dynamic service availability, no plans to provide a “snapshot”.
Estimated final size (Bytes): N/A
DS 3.3.1: Business model workshop notes and interviews
## Data set description
Notes/transcripts from workshop and structured interviews in Task 3.3
"Exploration of Business Models in the ACE". This data will support
Deliverables D3.4 and D3.5.
WP: WP3 / Task: Task 3.3
Responsible: Surrey-CoDE
**Standards and metadata**
Text documents
## Data sharing
Data collected and stored according to ethics policy and approval. Anonymized
text data to be made available as Appendix to Deliverable D3.4: Report on
business models emerging from the ACE.
## Archiving and preservation (including storage and backup)
Stored on project document server.
Estimated final size (Bytes): 100K
DS 4.2.1: Semantic annotations of musical samples
## Data set description
Results of semantically annotating musical properties such as the envelope,
the particular note being played in a recording, or the instrument that plays
that note. Supporting data for deliverables D4.4, D4.9, D4.10, D4.11
WP: WP4 / Task: Task 4.2 Responsible: MTG-UPF (& QMUL)
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
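For illustration only, a record of the kind this text describes might look as follows when built in Python and serialised to JSON (one of the standard formats named above). The property names here are invented placeholders drawn from the properties mentioned in the dataset description (envelope, note, instrument); the real vocabulary is fixed by the Audio Commons Ontology.

```python
# Illustrative sketch of a semantic annotation for one musical sample.
# Property names are placeholders; the actual vocabulary comes from the
# Audio Commons Ontology definition.
import json

annotation = {
    "file": "freesound:123456",  # placeholder content identifier
    "note": "A4",                # the particular note being played
    "instrument": "violin",      # the instrument playing that note
    "envelope": {"attack_s": 0.02, "decay_s": 0.35},
}
print(json.dumps(annotation, indent=2))
```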
**Data sharing**
Public: Access via Audio Commons API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 10kBytes per file x 500k files = 5 GBytes.
Estimated final size (Bytes): 5 GBytes
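The size estimates quoted for this and the later annotation datasets are simple products of a per-file annotation size and a file count. A small helper, sketched below with the figures stated in this document, makes the arithmetic explicit:

```python
def estimate_storage(bytes_per_file: float, n_files: int) -> str:
    """Return a human-readable estimate of bytes_per_file * n_files."""
    total = bytes_per_file * n_files
    for unit in ("Bytes", "KBytes", "MBytes", "GBytes"):
        if total < 1000:
            return f"{total:g} {unit}"
        total /= 1000
    return f"{total:g} TBytes"

print(estimate_storage(10_000, 500_000))   # DS 4.2.1 -> 5 GBytes
print(estimate_storage(300_000, 500_000))  # DS 4.3.1 -> 150 GBytes
print(estimate_storage(100_000, 200_000))  # DS 5.5.1 -> 20 GBytes
```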
DS 4.3.1: Semantic annotations of musical pieces
## Data set description
Results of music piece characterisations such as bpm, tonality or structure.
The specific selection of audio properties to include in the semantic
annotation will depend on the requirements of the Audio Commons Ontology.
Supporting data for deliverables D4.4, D4.9, D4.10, D4.11 WP: WP4 / Task: Task
4.3 Responsible: QMUL (& MTG-UPF)
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Public: Access via Audio Commons API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 300kBytes per file x 500k files = 150 GBytes.
Estimated final size (Bytes): 150 GBytes
DS 4.4.1: Evaluation results of annotations of musical samples
## Data set description
Results of evaluation of automatic methods for the semantic annotation of
music samples. Results may include human evaluations via listening tests, if
required. Supporting data for deliverables D4.4, D4.10
WP: WP4 / Task: Task 4.4 Responsible: MTG-UPF (& QMUL)
**Standards and metadata**
IPython notebooks and/or Tabular (e.g. CSV)
## Data sharing
Statistical analysis: Public. Listening tests: Data collected and stored
according to ethics policy and approval; anonymized result data publicly
available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely on paper.
Estimated final size (Bytes): 100K
DS 4.5.1: Evaluation results of annotations of musical pieces
## Data set description
Results of evaluation of automatic methods for the semantic annotation of
music pieces. Results may include human evaluations via listening tests, if
required. Supporting data for deliverables D4.5, D4.11
WP: WP4 / Task: Task 4.5 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
Statistical analysis: Public. Listening tests: Data collected and stored
according to ethics policy and approval; anonymized result data publicly
available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 4.6.1: Evaluation results of musical annotation interface
## Data set description
Results of evaluation of interface for manually annotating musical content, in
terms of its usability and its expressive power for annotating music samples
and music pieces. The evaluation will be carried out with real users and in
combination with the evaluation of Task 5.4. Supporting data for deliverable
D4.9 WP: WP4 / Task: Task 4.6 Responsible: MTG-UPF
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 4.7.1: Outputs of integrated annotation technology: Musical content
## Data set description
Annotations of Freesound and Jamendo content. Success in Task 4.7 will result
in at least 70% of Freesound (musical content) and Jamendo content annotated
with Audio Commons metadata as defined in the Audio Commons Ontology. This
will incorporate datasets DS 4.2.1 and DS 4.3.1. WP: WP4 / Task: Task 4.7
Responsible: MTG-UPF & Jamendo
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
ACE Server
Estimated final size (Bytes): 150 GBytes
DS 5.1.1: Timbral Hierarchy Dataset
## Data set description
Data relate to Deliverable D5.1 which: (i) generated a hierarchy of terms
describing the timbral attributes of audio; (ii) determined the search
frequency for each of these terms on the _www.freesound.org_ audio database.
WP: WP5 / Task: Task 5.1
Responsible: Surrey-IoSR (& MTG-UPF)
**Standards and metadata**
Data comprises Excel and CSV files, Python code, figures and documentation.
**Data sharing**
Public. DOI:10.5281/zenodo.167392
## Archiving and preservation (including storage and backup)
Project document server, Zenodo.
Estimated final size (Bytes): 6.5M
DS 5.2.1: Timbral listening tests
## Data set description
Audio files, test interfaces, and results of listening experiments on timbre
perception, carried out to inform the specification of required enhancements
to existing metrics, and of modelling approaches for significant timbral
attributes not covered by the prototype system.
WP: WP5 / Task: Task 5.2 Responsible: Surrey-IoSR
**Standards and metadata**
Various (Datasets include multiple audio files as well as test interfaces, and
other ancillary files)
## Data sharing
Data collected and stored anonymously according to ethics policy and approval.
To be made publicly available.
## Archiving and preservation (including storage and backup)
Initially: Institute of Sound Recording (IoSR).
Project document server.
Estimated final size (Bytes): 1.3GB
DS 5.3.1: Evaluation results of automatic annotation of non-musical content
## Data set description
Audio files, test interfaces, and results of evaluation of automatic methods
for the semantic annotation of non-musical content, including listening tests
where appropriate. Annotations will be evaluated against the timbral
descriptor hierarchy defined in Task 5.1. Supporting data for Deliverables
D5.3, D5.7
WP: WP5 / Task: Task 5.3
Responsible: Surrey-CVSSP (& Surrey-IoSR)
**Standards and metadata**
Various (Datasets include multiple audio files as well as test interfaces, and
other ancillary files)
## Data sharing
Data collected and stored anonymously according to ethics policy and approval.
To be made publicly available.
## Archiving and preservation (including storage and backup)
Project document server.
Estimated final size (Bytes): 30MB
DS 5.4.1: Evaluation results of non-musical annotation interface
## Data set description
Results of evaluation of the interface for manually annotating non-musical
content, in terms of its usability and its expressive power.
The evaluation will be carried out with real users
and in combination with the evaluation of Task 4.6. Supporting data for
deliverable D5.5. WP: WP5 / Task: Task 5.4 Responsible: MTG-UPF
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 5.5.1: Outputs of integrated annotation technology: Non-musical content
## Data set description
Annotations of Freesound and Jamendo content. Success in Task 5.5 will result
in at least 70% of
Freesound (non-musical) content annotated with Audio Commons metadata as
defined in the Audio Commons Ontology. This will incorporate datasets DS 4.2.1
and DS 4.3.1.
WP: WP5 / Task: Task 5.5 Responsible: MTG-UPF
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 100kBytes per file x 200k files = 20 GBytes.
Estimated final size (Bytes): 20 GBytes
DS 6.4.1: Evaluation results of ACE for Creativity Support
## Data set description
Results of holistic evaluation of the ACE in the context of Creativity
Support. This will include the results of novel methods to assess how the ACE
system and tools facilitate creative flow, discovery, innovation and other
relevant dimensions of creative work. Supporting data for Deliverables 6.8,
6.12. WP: WP6 / Task: Task 6.4
Responsible: QMUL (with Industrial Partners)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 6.5.1: Evaluation results of ACE in music production
## Data set description
Results of evaluation of ACE in music production, measuring its utility in
typical music production workflows. The results will include usability data
from beta testers available from Waves and students of Queen Mary’s Media and
Arts Technology (MAT) programme. Supporting data for Deliverable 6.4.
WP: WP6 / Task: Task 6.5 Responsible: QMUL (with Waves)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 6.6.1: Evaluation results of search and retrieval interfaces for accessing music pieces
## Data set description
Results of evaluation of search and retrieval interfaces for accessing Audio
Commons music pieces. The data will support assessment of how ACE supports
information seeking activities in creative music production using the web-
based interfaces created in Task 6.6. Supporting data for Deliverable D6.5.
WP: WP6 / Task: Task 6.6 Responsible: QMUL (with Jamendo)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 6.7.1: Evaluation results of ACE in sound design and AV production
## Data set description
Results of evaluation of ACE in sound design and audiovisual production. The
results will include usability data from beta testers available from
AudioGaming and students from Surrey’s Film and Video Production Engineering
BA (Hons). Supporting data for Deliverable D6.6.
WP: WP6 / Task: Task 6.7
Responsible: QMUL (with AudioGaming)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
DS 7.1.1: Website statistics
## Data set description
Website visitor data and alignment with associated project events. Success in
Task 7.1 will yield 50 daily unique visitors to the AudioCommons web portal
(excluding bots), increasing by at least 50% during time periods influenced by
AudioCommons events.
WP: WP7 / Task: Task 7.1 Responsible: MTG-UPF
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
During project: Private (maintained on Google Analytics).
At end of project: Public (following removal of any personally identifiable
information).
## Archiving and preservation (including storage and backup)
During project: Maintained on Google Analytics.
At end of project: Downloaded to web server, backed up on project document
server.
Storage estimate: 2k / day x 1300 days = 3 MB
Estimated final size (Bytes): 3 MBytes
DS 7.5.1: List of Key Actors in the creative community
## Data set description
A list of Key Actors in the creative community will be built and maintained to
facilitate dissemination activities in Task 7.5. This includes personally
identifiable information such as contact details and interests, and will be
maintained according to data protection policies.
WP: WP7 / Task: Task 7.5 Responsible: MTG-UPF
**Standards and metadata**
Text document
**Data sharing**
Project partners only.
## Archiving and preservation (including storage and backup)
Stored on project document server, in compliance with data protection
policies. Estimated final size (Bytes): 100K
# Introduction
netCommons operates only marginally with personal data, while in most cases
the data produced by the project refer to technical measures or experiments
not involving human beings. Nonetheless netCommons seeks the adoption of
proper security and confidentiality standards for the data collected as well
as proper Open Access (OA) policies to maximize the impact of the research
carried out, as we are well aware that at the heart of modern research is an
extensive scientific dialogue, with a timely sharing of data and experiences.
Proper data sharing accelerates innovation, allows researchers to build on
previous work improving the quality of the results, fosters collaboration and
avoids duplication of work. The necessity of Open Access and Open Research
Data (ORD) adoption has gained momentum and it is influencing the political
choices of all the main public agencies funding and sponsoring research. The
European Commission (EC) is no exception to this general international trend,
which was first spawned in the U.S. by the National Institutes of Health
(NIH). The commitment of the EC toward Open Access of the research results is
reflected in official guidelines [1] and in the wording of Grant Agreements
(GAs) (e.g., art. 29.2). In addition, from the specific nature of netCommons
and from its being part of the “societal challenges” programme, we derive a
particular emphasis on the involvement of citizens, economic stakeholders,
governmental agencies and charities. All these considerations require the
adoption of liberal standards for the scientific dissemination of information,
in accordance with the mandate in Art. 29.2 of the netCommons grant agreement.
In order to avoid problems and misunderstandings and to streamline the whole
process of data collection and of dissemination of results, this document seeks
to define clear guidelines on how to treat data and on how to disseminate the
results. This document has been extended following the EC Guidelines [2].
# netCommons Open Access Policy
netCommons is part of the H2020 Open Data Pilot [3], thus the access policy for
the project results must deal both with the publications produced by the
project and with the data upon which these publications are based. Moreover,
given the interdisciplinary approach of the project and its societal
importance, we foresee additional data to support general findings and to
build a base for dissemination of the project outcomes, as well as setting the
ground to build the advocacy capabilities and support the impact-oriented
actions of netCommons.
One of the key challenges for a Collective Awareness Platforms for
Sustainability and Social Innovation (CAPS) research project like netCommons is to produce
scientific knowledge that is persistent, that goes beyond the restricted
scientific communities and that fosters the benefit of the individuals, of the
communities and of the European society at large. Furthermore, having its
roots in Internet Science [4, 5], netCommons findings are conceived to foster
and benefit the development of Community Networks (CNs) also beyond the
European Union.
These ambitious goals require a thorough dissemination activity of the
research results, and a careful management of general data, including the
information collected, to maximize the impact of the project efforts. For this
reason, netCommons has opted for, and included in the Consortium Agreement, a
fully open model of results and documents dissemination, including
deliverables that are all public.
The remaining part of this chapter deals separately with two topics:
1. Open Access to scientific publications,
2. Open Access to research data.
## Open Access to scientific publications
One of the cornerstones of our dissemination strategy is to secure a timely
and regular publication of the scientific findings in peer–reviewed, high
impact journals and conferences. This will ensure a proper consideration of
netCommons results in the scientific communities of interest. All scientific
publications will be available in Open Access, providing archival Portable
Document Format (PDF) versions of the published document. As specified in the
H2020 Guidelines on Open Access publishing [1], by this term we mean the
practice of providing free and unrestricted access to scientific publications
to read and download.
According to the contractual obligations specified in the GA Art. 29.2, “
_Each beneficiary must ensure open access (free of charge online access for
any user) to all peer-reviewed scientific publications relating to its
results._ ” We will obviously comply with this obligation and we are
implementing a specific policy and best practice based on OpenAIRE (Sect. 2.2) to
ensure an almost automatic propagation of the Open Access version of each
publication to repositories compliant with the Open Archives Initiative
Protocol for Metadata Harvesting (OAI-PMH) standard [6] and to the web site of
the project.
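As an illustration of what such OAI-PMH propagation involves, the sketch below issues a ListRecords request for Dublin Core metadata. It assumes Zenodo's OAI-PMH endpoint (https://zenodo.org/oai2d) and the third-party `requests` package; any OAI-PMH-compliant repository answers the same verbs. This is a minimal sketch, not the project's actual harvesting code.

```python
# Minimal OAI-PMH harvesting sketch: list Dublin Core records from an
# OAI-PMH-compliant repository (Zenodo's endpoint is assumed here).
import xml.etree.ElementTree as ET

import requests

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(
    "https://zenodo.org/oai2d",
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print the title of every record in the first page of results.
for record in root.iter(f"{OAI}record"):
    title = record.find(f".//{DC}title")
    if title is not None:
        print(title.text)
```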
### Documents Subject to Open Access
The Open Access policy for scientific publications _applies whenever a partner
of the project or a group of partners decides to produce a scientific
publication_ containing the results of a research activity. This decision is
taken on the following grounds:
* the publication is scientifically relevant and brings forth significant advances in the state of the art of the interested discipline;
* (if applicable) the data contained in the publications fulfill the requirements specified by the Ethical Committees of the partner/partners that collected the data.
It must be noted that, due to the societal and open nature of netCommons
research, as provided by the Consortium Agreement (CA) in Sect. 8 and in
particular in Sect. 8.3.1, netCommons Parties are not subject to prior notice
to the Consortium or any other legal body.
The Open Access policy does not apply to partial results which are produced at
intermediate steps of the project and are not deemed scientifically relevant.
### Green and Gold Open Access
The H2020 guidelines [1] refer to the two main procedures to enforce Open
Access to scientific literature.
**The Green Open Access:** this procedure is based on re-publishing (often
indicated as self-archiving) of the published article or the final
peer–reviewed manuscript without the graphical imprints of the commercial
publisher. Some journals also allow the deposit of the published version with
the publisher imprinting. The manuscript is archived into an OAI-PMH-compliant
repository (Sect. 2.1.3) by the authors; some publishers could require an _embargo_
period of time before the paper is made concretely available to the public:
netCommons will try to minimize both the use of publishers that require an
embargo and the duration of the embargo, which will in any case abide by the
requirements of the Commission [1] as stated in Art. 29.2 of the GA.
**The Gold Open Access:** the article is provided in Open Access directly by
the publisher, which normally (but not always) enables also re-publishing with
the same means of the ’green’ method. We note that while some publishers (most
notably the International Federation for Information Processing (IFIP))
maintain a fully open Digital Library (DL) without any fees, many others
require a fairly expensive fee to publish in Open Access. Many scientific
communities regret and discourage ’pay-to-publish’ procedures, especially in
mixed publication venues (i.e., journals that allow both traditional and OA
publications) where authors must declare their desire to publish in Open
Access before the peer-review.
### Open Access Repositories
A repository for scientific publications is generally defined as an online
archive, but this condition is not enough to make a repository Open Access.
The best-known Open Access repository is probably arXiv (http://arxiv.org),
maintained by Cornell University. The H2020 guidelines give full freedom
on the choice of the repository: it can be an Institutional Repository or a
subject–based centralized repository. If the Institution the authors belong to
does not have a specific infrastructure of this kind, the EU is funding the
OpenAIRE effort (http://www.openaire.eu), which provides APIs to a
comprehensive list of public repositories and in general means to foster Open
Access policies. OpenAIRE plays a central role in netCommons best practices
for Open Access, since it provides means to automatically link the
repositories of most institutions, and it can thus be used to provide suitable
visibility and linking to all the published material. In particular the Zenodo
(http://www.zenodo.org/) repository is strictly related to OpenAIRE and is
maintained by CERN, thus providing a suitable means for archival for all
European institutions that cannot (or have not yet) set up an institutional
repository. Other lists of repositories and further information on Open Access
are available at http://roar.eprints.org and http://www.opendoar.org/.
### Accepted version and published version
An _accepted paper_ is a version which has been revised by the author to
incorporate review suggestions, and which has been accepted by the publisher
for publication.
The final, _published version_ is the reviewed and accepted article, with
copy-editing, proofreading and formatting added by the publisher.
## Implementation of the Open Access Policy to Publications
The Open Access policy will be applied both to peer-reviewed publications
(i.e., publications that are evaluated by “peers”) and to other types of
publications such as books, white papers, and all other documents that the
consortium deems valuable of dissemination. In the following we refer to the
first type of publications as “peer-reviewed (PR)” and to the second as “non
peer-reviewed (NPR)”. Deliverables will be initially available through the
project web site with the very appealing format described in detail and
available on the specific area[7]. After review it will be decided if they
deserve dissemination through OAI-PMH compliant repositories(Sect. 2.1.3).
### Procedures for PR publications
The authors of netCommons publications have the freedom to opt for either a
Green or a Gold policy. In case of a _Green Open Access policy_ the
procedure is as follows:
1. As soon as the paper is accepted, the draft of the accepted paper is stored in one or more repositories of the authors’ choice among those supported by OpenAIRE along with bibliographic metadata;
2. The paper publication is notified to the project coordinator and to the exploitation and dissemination list ([email protected]);
3. Within a few days the manuscript becomes visible automatically through OpenAIRE reporting the proper reference to netCommons;
4. A script parses OpenAIRE daily (or weekly) to retrieve novel manuscripts and upload them automatically on the netCommons web site in the proper section;
5. If requested by the publisher, the paper is left unpublished for the duration of the embargo period; such period cannot exceed 6 months or 1 year in exceptional cases;
6. After the embargo period expires, the Open Access is granted to every one via the repository;
This procedure guarantees the highest visibility and dissemination as well as
consistent and coordinated referencing, linking and availability.
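As an indication of how the automatic step 4 of this procedure could work, the sketch below queries the public OpenAIRE search API for publications linked to a grant. The endpoint and the `projectID` parameter reflect the OpenAIRE HTTP API at the time of writing, the grant number used is assumed to be netCommons' (for illustration only), and the response parsing is deliberately simplified.

```python
# Minimal sketch of the web-site update script: fetch publications that
# OpenAIRE links to the netCommons grant (number 688768 assumed here)
# and list their titles as candidates for mirroring on the project site.
import xml.etree.ElementTree as ET

import requests

resp = requests.get(
    "http://api.openaire.eu/search/publications",
    params={"projectID": "688768"},  # assumed grant number, for illustration
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Title elements in the OpenAIRE response are assumed to carry no XML
# namespace prefix; adjust the tag if the response schema differs.
for title in root.iter("title"):
    print(title.text)
```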
In case of a _Gold Open Access policy_ the procedure is:
1. As soon as the paper is accepted, and according to the publisher’s Open Access policy, the draft of the accepted paper is stored in a repository of the authors’ choice among those supported by OpenAIRE along with bibliographic metadata;
2. The paper publication is notified to the project coordinator and to the exploitation and dissemination list ([email protected]);
3. Within a few days the manuscript becomes visible automatically through OpenAIRE reporting the proper reference to netCommons;
4. A script parses OpenAIRE daily (or weekly) to retrieve novel manuscripts and upload them automatically on the netCommons web site in the proper section;
5. After the final publication, the authors also add the publisher digital library information to ensure that the Gold Open Access policy is correctly advertised and accomplished; the publisher may request a different version to be uploaded.
The costs incurred for publication are eligible for reimbursement as long as
they are incurred before the end of the project; however, netCommons will try
to avoid all venues that apply publication fees that can raise suspicion that
the publication does not follow an ethically consistent peer-review process. If
the publication of a work supported by netCommons with a publisher that does
not comply with EU rules is deemed by the Management Board of the utmost
importance for its dissemination, the netCommons Coordinator will write a
formal request to the publisher to comply with EU regulations.
### Procedures for NPR Publications
The researchers in netCommons will publish all NPR under one of the Creative
Commons licenses and they will adopt an Open Access policy also for NPR
publications such as technical reports and white papers.
The procedure is in this case simple and similar to the Gold Open Access
case:
1. When a technical report is published (e.g., on an institutional website), the authors store a version of the paper, along with the available metadata, in one or more repositories of their choice among those supported by OpenAIRE;
2. The paper publication is notified to the project coordinator and to the exploitation and dissemination list ([email protected]);
3. Within a few days the manuscript becomes visible automatically through OpenAIRE reporting the proper reference to netCommons;
4. A script embedded in the netCommons web site and compliant with the OpenAIRE APIs parses OpenAIRE daily (or weekly) to retrieve novel manuscripts and upload them automatically to the netCommons web site in the proper section.
Exceptions may apply to these rules and procedures for contributions to
newspapers and dissemination magazines.
### Current Policies by some of the Major Scientific Publishers
Clearly, the choice of whether to take a Green or a Gold Open Access policy is
also determined by the specific publisher and by the scientific field.
Self-archiving is today compatible with the most important publishers, as long
as it is limited to the _accepted version_ of the paper, but publishers such as
IFIP and the Association for Computing Machinery (ACM) go well beyond this, as described
below. With other publishers, the evaluation should be made on a case by case
basis. Details on most publishers and journal policies can be found on the
Sherpa Romeo portal (http://www.sherpa.ac.uk/romeo/index.php). In the
extreme case in which self archiving is prohibited and commercial open access
options are not available, the authors should avoid the journal.
For the authors’ convenience and for general reference, we report here the
current policy contained in the copyright agreement or on web-pages of some of
the most relevant publishers at the moment of writing, though it is strongly
recommended to check the single journal OA policy on the Sherpa Romeo database
and/or on the journal website. The information in the following sub-sections
is mostly taken verbatim from publishers’ web pages, and thus may contain
advertisement-like information and, in general, the publishers’ visions, which
are not necessarily shared or agreed upon by the netCommons consortium.
#### Elsevier
The Elsevier policy on author rights can be found at
http://www.elsevier.com/about/company-information/policies/sharing. Elsevier
supports Green Open Access, but maintains a number of journals
(http://www.elsevier.com/embargoperiodlist) with an embargo policy. Though
these journals can be used for netCommons publications, we suggest avoiding
those that have an embargo period longer than 12 months. In any case, journals
subject to embargo also allow preprints to be shared in private
repositories. Citing from Elsevier’s Frequently Asked Questions (FAQs) page:
Q. Have you removed an author’s right to self-archive in their institutional
repository?
A. No. We have removed the need for an institution to have an agreement with
us before any systematic posting can take place in its institutional
repository. Authors may share accepted manuscripts immediately on their
personal websites and blogs, and they can all immediately self-archive in
their institutional repository too. We have added a new permission for
repositories to use these accepted manuscripts immediately for internal use
and to support private sharing, and after an embargo period passes then
manuscripts can be shared publicly as well.
Regarding the author rights on the _accepted versions_ of the manuscripts of
journals not subject to embargo, we find the following wording:
Authors can share their accepted manuscript:
Immediately
* via their non-commercial personal homepage or blog by updating a preprint in arXiv or RePEc with the accepted manuscript
* via their research institute or institutional repository for internal institutional uses or as part of an invitation-only research collaboration work-group ...
After the embargo period
* via non-commercial hosting platforms such as their institutional repository
* via commercial sites with which Elsevier has an agreement
In all cases accepted manuscripts should:
* link to the formal publication via its DOI
* bear a CC-BY-NC-ND license ...
* ...
The CC-BY-NC-ND license can easily be obtained through the website
http://creativecommons.org/licenses/ and is explicitly recommended by the EC to
_enable open access in its broadest sense_ .
#### ACM
The ACM policy can be found at
https://www.acm.org/publications/policies/copyright_policy. ACM today adopts a very flexible scheme that ACM
itself summarizes as follows:
_“Authors have the option to choose the level of rights management they
prefer. ACM offers three different options for authors to manage the
publication rights to their work._
* _Authors who want ACM to manage the rights and permissions associated with their work, which includes defending against improper use by third parties, can use ACM’s traditional copyright transfer agreement._
* _Authors who prefer to retain copyright of their work can sign an exclusive licensing agreement, which gives ACM the right but not the obligation to defend the work against improper use by third parties._
* _Authors who wish to retain all rights to their work can choose ACM’s author-pays option, which allows for perpetual Open Access through the ACM Digital Library. Authors choosing the author-pays option can give ACM non-exclusive permission to publish, sign ACM’s exclusive licensing agreement or sign ACM’s traditional copyright transfer agreement. Those choosing to grant ACM a non-exclusive permission to publish may also choose to display a Creative Commons License on their works.”_
We note that even in the case of the traditional copyright transfer, all ACM
publications allow Green Open Access without any embargo. Generally, the
publisher’s version/PDF cannot be used, but the author’s refereed post-print
can be uploaded for non-commercial use on the author’s personal website,
institutional repository, open access repository, the employer’s website or
the funder’s mandated repository. Publisher copyright and source must always
be acknowledged, and there must be a link to the publisher version with a
statement that this is the definitive version and Digital Object Identifier
(DOI). A set statement must be added on the website/in the repository:
© ACM, YYYY. This is the author’s version of the work. It is posted here by
permission of ACM for your personal use. Not for redistribution. The
definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)}
http://doi.acm.org/10.1145/nnnnnn.nnnnnn
Statement reported on the Sherpa Romeo web site
(http://www.sherpa.ac.uk/romeo/pub/21/ as of June 30th 2016).
#### IEEE
The Institute of Electrical and Electronics Engineers (IEEE) specifies its
policy in a document that can be found in the association website[8]. In
summary:
Generally, authors have the right to post the accepted version of IEEE-
copyrighted articles on their own personal servers or the servers of their
institutions without permission from IEEE, provided that the posted version
includes a prominently displayed IEEE copyright notice (see below) and, when
published, a full citation to the original IEEE publication, including a
Digital Object Identifier (DOI) and a link to the article abstract in IEEE
Xplore. Authors shall not post the final, published versions of their articles.
The following copyright notice must be displayed on the initial screen
displaying IEEE-copyrighted material:
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
Upon submission of an article to IEEE, an author is required to transfer
copyright in the article to IEEE, and the author must update any previously
posted version of the article with a prominently displayed IEEE copyright
notice. Upon publication of an article by IEEE, the author must replace any
previously posted electronic versions of the article with either (1) the full
citation to the IEEE work with a Digital Object Identifier (DOI), or (2) the
accepted version only with the DOI (not the IEEE-published version).
(see http://www.ieee.org/publications_standards/publications/rights/rights_policies.html;
http://www.ieee.org/documents/ieeecopyrightform.pdf)
IEEE also has an open access program, a _Gold Access Policy_, which at the
moment is limited to the societies’ journals. In any case, IEEE always allows
its authors to follow the mandates of funding agencies and to post the accepted
version into publicly available repositories, limiting the embargo to what is
admitted by the funding agency.
#### Springer
Generally, authors can archive post-print (i.e., final draft post-refereeing)
on the author’s personal website immediately and on any open access repository
12 months after publication. Publisher’s version/PDF cannot be used;
published source must be acknowledged and there must be a link to the
publisher version, with a set phrase to accompany link to published version.
Articles in some journals can be made Open Access on payment of an additional
charge (see http://www.sherpa.ac.uk/romeo/pub/74/ as of June 30th 2016).
As far as Springer LNCS is concerned (see
http://www.sherpa.ac.uk/romeo/pub/2765/ as of June 30th 2016), authors can archive post-print (i.e., final
draft post-refereeing) on author’s personal website, institutional repository
or funder’s designated repository. Publisher’s version/PDF cannot be used;
published source must be acknowledged and there must be a link to the
publisher version with DOI and a set phrase to accompany link to published
version.
If Springer Open is chosen (see http://www.sherpa.ac.uk/romeo/pub/948/ as of
June 30th 2016), authors can archive post-print (i.e., final draft post-
refereeing) and publisher’s version/PDF. The published source must be
acknowledged; authors retain copyright and a Creative Commons Attribution
License must be attributed.
#### IFIP
All information in the IFIP Digital Library (http://dl.ifip.org) is
available in Gold Open Access on a free-to-read basis. However, the full text
of print publications from the IFIP publisher may be available only for a fee
for a period of time after publication (see
http://www.ifip.org/index.php?option=com_content&task=view&id=143&Itemid=460
as of June 30th 2016).
Some IFIP journals published by Springer and Elsevier have a paid Open Access
option, such as:
Journal: Computers and Security (ISSN: 0167-4048)
Journal: International Journal of Critical Infrastructure Protection (ISSN:
1874-5482)
Journal: Entertainment Computing (ISSN: 1875-9521, ESSN: 1875-953X)
* Authors can archive post-print (i.e., final draft post-refereeing) on author’s personal website immediately and on open access repository after an embargo period of between 12 months and 48 months; it must link to publisher version with DOI and must be released with a Creative Commons Attribution Non-Commercial No Derivatives License
* Authors cannot archive publisher’s version/PDF;
* Permitted deposit due to Funding Body, Institutional and Governmental policy or mandate, may be required to comply with embargo periods of 12 months to 48 months.
Journal: Education and Information Technologies (ISSN: 1360-2357, ESSN:
1573-7608)
* Authors can archive post-print (i.e., final draft post-refereeing) on the author’s personal website immediately and on any open access repository 12 months after publication. It must link to the publisher version; the published source must be acknowledged with a set phrase to accompany the link to the published version;
* Authors cannot archive publisher’s version/PDF.
#### SAGE
Journals published by “SAGE-Hindawi Access to Research” have a paid Open
Access option. Authors retain the copyright of their article, which is freely
distributed under the Creative Commons Attribution License, permitting the
unrestricted use, distribution, and reproduction of the article in any medium,
provided the original work is properly cited. In order to cover the costs of
publication, Article Processing Charges are required for accepted manuscripts.
(http://www.hindawi.com/memberships/ as of June 30th 2016).
In subscription journals published by “SAGE Publications (UK and US)”, authors
can deposit the version of the article accepted for publication (version 2) in
their own institution’s repository. Authors may not post the accepted version
(version 2) of the article in any repository other than those listed above
(i.e., you may not deposit in the repository of another institution or a
subject repository) until 12 months after first publication of the article in
the journal. Authors may not post the published article (version 3) on any
website or in any repository without permission from SAGE. When posting or re-
using the article authors must provide a link to the appropriate DOI for the
published version of the article on SAGE Journals (http://online.sagepub.com).
(See https://uk.sagepub.com/en-gb/eur/the-green-route-%E2%80%93-open-access-archiving-policy
as of June 30th 2016.)
In Sage Pure Gold Open Access Journals, all articles provide worldwide,
barrier-free access to the full-text of articles online, immediately on
publication under a creative commons licence. All articles are rigorously
peer-reviewed retaining the quality hallmarks of the academic publishing
process that authors would experience in publishing in any traditional SAGE
journal. Most SAGE pure Gold Open Access journals are supported by the payment
of an article processing charge (APC) by the author, institution or research
funder of the accepted manuscript.
Some journals (8 titles:
http://www.sherpa.ac.uk/romeo/journals.php?id=1581&fIDnum=|&mode=simple&letter=ALL&la=en)
published by SAGE Publications (UK and
US) with the 12 month Embargo option let authors post on any non-commercial
repository or website the version of their article that was accepted for
publication – ‘version 2’. The article may not be made available earlier than
12 months after publication in the Journal issue and may not incorporate the
changes made by SAGE after acceptance. When posting or re-using the article,
authors should provide a link/URL from the article posted to the SAGE Journals
Online site where the article is published: http://online.sagepub.com, and
make the following acknowledgment:
The final, definitive version of this paper has been published in <journal>,
Vol/Issue, Month/Year by SAGE Publications Ltd. All rights reserved. © [The
Author(s)]. Authors may not post the final version of the article as published
by SAGE or the SAGE-created PDF (‘version 3’).
See https://mc.manuscriptcentral.com/societyimages/wes/WES_ExclusiveLicense.pdf
as of June 30th 2016.
## Open Research Data
An interesting novelty of H2020 is the platform known as Open Research Data
Pilot for the dissemination of the data that could be used by different
researchers to replicate the experiments or the analysis presented in the
scientific publications. Given its scope netCommons obviously participates in
this pilot.
The topic of Open Research Data publication is much less debated, understood
and agreed upon compared to scientific publication Open Access. In particular,
licensing of Data (open or not) is far more difficult, as Data are not
subject to standard Intellectual Property rules. For instance, most
Creative Commons licenses (https://creativecommons.org/share-your-work/)
may not apply to data, as "derivative work" on Data is not clearly defined and
manipulating a data set for purposes other than rendering may be
inappropriate; sometimes even rendering and statistical analysis may change
the actual meaning of the Data published. Similarly, licences like the Open
Database License (ODbL) v1.0 (http://opendatacommons.org/licenses/odbl/) may
not apply in many cases, both for technical inconsistency (e.g., the wording
"intermixing with other datasets" is a technically inconsistent definition)
and for semantic ambiguities. Furthermore, it is not clear at the
time of writing, if for netCommons it is acceptable that all produced Data can
be released also for commercial purposes.
An additional issue with Open Research Data is that very few
Institutions support an Institutional Repository for Data. Also in this case
the Zenodo repository can be used as for scientific publications; however, we
deem that it is still not possible to detail general procedures for the
publication of Open Research Data.
Given this situation, in the first part of the project, netCommons will
carefully select on a case-by-case basis the most appropriate license and the
most appropriate level of aggregation and detail, as well as the most
appropriate set of repositories where the Data produced during the research
can be archived and made public. We are confident that at M24, in D7.3, which
is the revised version of the Data Management Plan, we will be able to set a
more standardized procedure for Open Research Data publication within
netCommons. Chapter 3 in any case details all the procedures that netCommons
will follow to ensure the protection of personal data and of the individuals
that may be involved in netCommons research.
# Data Security and Privacy Provisions in netCommons
## Scope of the privacy/security model
The activities of netCommons only marginally involve direct interaction
with people and do not require the direct collection of any personal or
sensitive information. People involved in Community Networks follow the usual
policy of these associations, and netCommons in general does not require CNs to
transfer any personal or sensitive data. Nonetheless, UoW will run some surveys, and
some interviews or video recording may be useful during the project.
In such cases the netCommons practice will always abide by the principle of
informed consent and by the ethical annex reported as Sect. 5 of the project
Document of Work (Annex 1 to the Grant Agreement). Furthermore, the actions and
operations of the researchers will always comply with the national legislation
and with the internal regulations of the partners involved in the project. In
particular, Regulation (EU) 2016/679 of the European Parliament and of the
Council of 27 April 2016 on the protection of natural persons with regard to
the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation) will be
strictly followed by the project security model. In the Spanish case, Data
Protection Act 15/1999 (of 13 December 1999), adapted to the European
directives by Royal Decree 1720/2007 of 21 December 2007,
will be applicable to the project. The specific goal of this document is to
present and discuss the issues related to the treatment of the collected data
in electronic form, their storage on different media (Hard Disks, Storage
Units, CD/DVD/SD/USB peripherals), and their distribution using network
connections.
## General Principles
The privacy protection and operational model of netCommons rests on three
pillars:
* Data anonymity;
* Informed consent;
* Circulation of the information limited to the minimum required for processing and preparing the anonymous open data sets.
_Data anonymity_ will be guaranteed whenever possible. The only exception to
anonymity is, in some cases, for the researcher directly interacting with
the participants in surveys. When data must be presented in non-aggregate form
for research purposes, the data will be anonymized following the best
practices of non-invertible hashing functions applied to all personal
information. Furthermore, provisions will be taken to avoid the possibility of
information linkage.
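As a purely illustrative aid, the sketch below shows one way such non-invertible hashing could be applied to personal identifiers; the field names and the project-wide secret salt are hypothetical, and the actual scheme will be chosen by the researchers handling the data.

```python
import hashlib
import hmac

# Hypothetical project-wide secret salt, kept apart from the shared data;
# without it, hashes cannot be recomputed from guessed identifiers.
SECRET_SALT = b"netcommons-survey-salt"

def pseudonymize(identifier: str) -> str:
    """Map a personal identifier to a non-invertible participant code."""
    digest = hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened opaque code used in records

# The same identifier always maps to the same code, so survey records can
# be linked across sessions without ever storing the identity itself.
record = {"participant": pseudonymize("alice@example.org"), "answer": 4}
print(record)
```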
The _informed consent_ policy requires that each participant will provide
his/her informed consent prior to the start of any activity involving him/her.
Appendix A reports a template of the informed consent form that will be
completed by participants in surveys and interviews. Public distribution of
elements of information that can reveal the identity of the users (e.g.,
videos or pictures) for scientific dissemination purposes will be explicitly
authorized by the participant as part of this process.
To achieve a _limited circulation_ of the information, the database containing
in anonymous form the data collected from the users (e.g., the results of
questionnaires and of laboratory experiments) will be distributed to the
partners, if needed at all, through protected and encrypted Internet
connections; the raw data will only be shared if it is required for the
development. The researchers will never pass on or publish the data without
first protecting participants’ identities. No irrelevant information will be
collected; at all times, the gathering of private information will follow the
principle of proportionality by which only the information strictly required
to achieve the project objectives will be collected. In all cases, the right
of data cancellation will allow all users to request the removal of their data
from the project repository at any time.
The final, fully anonymized datasets will be published as Open Data as
described in Sect. 2.3.
## Security Framework
In order to accomplish the creation of a security framework it is essential to
focus on the issues of access and identity authentication, authorization and
auditing (AAA). Therefore, our main objective is to develop a base security
system that standardizes the processes of Authentication, Authorisation and
Auditing of the various information sources involved.
### Authentication
The Data Protection Act requires that any operator who is granted access to
sensitive data must be authenticated. Authentication technology should be
strict when dealing with sensitive and confidential data available to the
users of the platform. To do this, a username and a password will be used so
that the person who wants to access the raw data of surveys and interviews
confirms that he or she has authorized access to the system. If deemed
necessary due to the sensitivity of the collected data, which is not foreseen
now in netCommons, we will use an RSA encryption mechanism, with each operator
receiving a personal private key.
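A minimal sketch of the username/password check described above, assuming a salted key-derivation function so that stored credentials cannot be reversed; the credential store and parameters are illustrative only.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; the iteration count is an example choice.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

# Hypothetical credential store: username -> (salt, password hash).
_salt = os.urandom(16)
USERS = {"operator1": (_salt, hash_password("correct horse battery", _salt))}

def authenticate(username: str, password: str) -> bool:
    """Return True only if the password matches the stored salted hash."""
    if username not in USERS:
        return False
    salt, stored = USERS[username]
    return hmac.compare_digest(stored, hash_password(password, salt))

print(authenticate("operator1", "correct horse battery"))  # True
print(authenticate("operator1", "wrong password"))         # False
```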
### Authorization
The objective of the authorization is to determine the rights of a user of an
information system. For each researcher, we will specify which content can be
accessed based on functionality, security and confidentiality criteria.
### Accounting and Auditing
netCommons should not deal with sensitive data; in any case, logging of
accesses to personal data will be enforced to prevent abuses, and, if
necessary, proper auditing measures as provided by the Data Protection Act
shall be put in action.
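By way of illustration only, the following minimal sketch shows how such access logging could be implemented; the log location, record fields and format are assumptions, not a prescribed netCommons mechanism.

```python
import logging
from datetime import datetime, timezone

# Append-only audit trail of accesses to personal-data records.
logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_access(operator: str, dataset: str, action: str) -> None:
    """Record who touched which dataset, when, and how."""
    stamp = datetime.now(timezone.utc).isoformat()
    logging.info(f"{stamp}\t{operator}\t{dataset}\t{action}")

# Example audit entries for a hypothetical survey dataset.
log_access("operator1", "survey_2016_raw", "read")
log_access("operator1", "survey_2016_raw", "export-anonymized")
```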
## Summary of Technological solutions
We report below a table of the main technological solutions used for the
different security issues mentioned in Sect. 3.2.
<table>
<tr>
<th>
GOAL
</th>
<th>
Technological Solution
</th> </tr>
<tr>
<td>
Guaranteeing complete anonymity where required
</td>
<td>
The collected data will be labeled with participant codes. Participant consent
forms will be held separately and will not reference the participant code.
These will be paper based and held in a locked filing cabinet on the
researcher's site.
</td> </tr> </table>
<table>
<tr>
<th>
Safe keeping of the documentation on informed consent
</th>
<th>
The informed consent will be provided by the interested subject by filling an appropriate form as reported in Appendix A. The authorized personnel must keep this physical document under lock and key.
Information on the interested person can also be stored in electronic form in a database or in a spreadsheet. The spreadsheet or the database will be encrypted and its access will be password-protected and granted only to authorized operators.
</th> </tr> </table>
<table>
<tr>
<th>
Remote access
</th>
<th>
In the general case, the "raw" data related to the participants in the project
will be handled only by the researchers interacting with the participant and
made available to the rest of the consortium only in anonymous form. In
particular, any personal data contained in the collected data will be handled
only by the researchers interacting with the participant. If, for special
cases, some other researcher should need access to the "raw" data, the
interested participants will be informed; only after their consent is extended
to the requesting researcher can he/she have access to the data. In this case,
if the access is remote, researchers in the consortium can have access through
an Internet SSL connection.
</th> </tr> </table>
# Conclusions
The topics of Open Access and Open Research Data are among the key debates
open in the scientific world, especially in the case of research projects
funded with public money. netCommons is not only a research project funded by
the EC, but it is also a project that deals with societal challenges, socio-
economic sustainability, the construction of a commons, and techno-legal
provisions. As such, its effort to disseminate and propagate results and
findings must be, and indeed is, maximal.
This deliverable described the policy that netCommons has so far discussed,
approved and set for its own best practices in scientific Open Access
dissemination and in data collection and management to achieve Open Research
Data.
Regarding Open Access to publications, on the one hand, we have clearly
identified that most leading scientific publishers provide appropriate
licenses and means to achieve Open Access, either through Green Open Access
(i.e., re-publication on OAI-PMH compliant repositories) or through Golden
Open Access. On the other hand, Golden Open Access is still very often
ambiguous about the peer-review process, and the publication fees required of
authors are hardly justifiable by the cost of electronic publishing, especially
in light of modern editing technologies that hardly require any intervention
of the publisher on the material provided by the authors. In conclusion,
netCommons does not see any obstacle to achieving complete Open Access
dissemination for all its scientific publications.
Regarding Open Research Data accessibility and licensing, we have found that
the situation is far less clear, and that most Institutions are still largely
unaware of the problem and do not provide appropriate repositories. At
the same time, the concepts of license and of derivative work for Open Data are
not as mature as they are for publications, where the concept of copyright and
the notions of intellectual property and creative work are well
understood both at the technical and the legal level. Indeed, in many cases
Data cannot be classified as a creative work, and the intellectual property of
Data does not yet have a commonly accepted technical and legal definition.
Furthermore, the publication of data must comply with legal provisions on
privacy and individual protection. All the same, netCommons deems that data
collected and used for scientific research (especially if receiving public
funding) must be made available to the scientific community for validation
and falsification of results and theories, and to the public community at large
for transparency and control. In the initial part of the project netCommons
will decide where to publish Open Research Data, and under which license, on a
case-by-case basis, guaranteeing in any case that published data is correctly
indexed by the OpenAIRE platform.
# Bibliography
1. The European Commission, "Participants Portal on-line Manual: Open Access & Data Management," http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/open-access_en.htm, accessed in June 2016.
2. European Commission – Directorate-General for Research & Innovation, "Guidelines on Data Management in Horizon 2020 – Version 2.1," http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf, Feb. 15, 2016.
3. OpenAIRE Consortium, "What is the Open Research Data Pilot?" https://www.openaire.eu/opendatapilot, accessed in June 2016.
4. D. Lazer et al., "Computational Social Science," _Science_, vol. 323, pp. 721–723, Feb. 6, 2009.
5. The Network of Excellence in Internet Science Consortium, "Project Web Site," http://www.internet-science.eu/network-excellence-internet-science, accessed in June 2016.
6. C. Lagoze, H. Van de Sompel, M. Nelson, and S. Warner, "The Open Archives Initiative Protocol for Metadata Harvesting, Protocol Version 2.0," http://www.openarchives.org/OAI/openarchivesprotocol.html, Jan. 8, 2015.
7. netCommons Consortium, "Deliverables page," http://netcommons.eu/?q=content/deliverables-page, 2016.
8. IEEE, "An FAQ on IEEE Policy Regarding Authors' Rights to Post Accepted Versions of Their Articles," https://www.ieee.org/documents/author_version_faq.pdf, 2015.
# A. Appendix: Template of the Informed Consent Form
<table>
<tr>
<th>
Informed Consent Form
This survey/interview is part of the EU Horizon 2020 research project
“netCommons: network infrastructure as commons”: http://www.netcommons.eu.
Scholars from the five EU-based institutions involved in the netCommons
project carry out the survey research. The study does not have any commercial
purposes, the involved researchers do not have any monetary benefits by
conducting the study and the results will be published in the form of a report
and research papers based on the survey. Furthermore, the collected data will
be published in anonymous form as open data. The open data will not contain
any personal identifiers, which is data that we are not interested to collect,
do not ask for and do not publish. We will not ask you to provide personally
sensitive data in this survey and all the answers provided will be used only
in aggregate and anonymous form.
By signing this form, you confirm the following:
* I agree to the digital recording of the interview/survey
* I agree that the answers I give are stored in digital form in a database in such a way that I am not personally identifiable (anonymous or pseudonymous form)
* I have been given the opportunity to ask questions about the project
* I understand that my taking part is voluntary. I can withdraw from the study at any time during the interview/survey and I do not have to give any reasons for why I no longer want to take part.
* I understand my personal details such as my name, email, phone number and address will only be used by the researcher to contact me if necessary and will not be revealed to people outside the project. In any case such information will be completely deleted at the end of the project.
* I understand that my words may be quoted in publications, reports, web pages, and other research outputs in anonymous or pseudonymous form only (no name or other personal identifiable data will be mentioned).
The person responsible for the treatment of the data used in this
survey/interview is:
Prof. ..........................................................................
University of ..................................................................
E-mail .........................................................................
Phone ..........................................................................
If you have any questions, don't hesitate to contact him/her. I agree to these
terms and want to participate in the interview/survey.
Yes No
</th> </tr> </table>
This work is licensed under a Creative Commons “Attribution-ShareAlike
3.0 Unported” license.
0677_MAGENTA_731976.md
<table>
<tr>
<th>
**Employer and affiliation of the contact**
</th>
<th>
CEA Saclay,
DRF / IRAMIS / SPEC
</th> </tr>
<tr>
<td>
**Project start date**
</td>
<td>
1 January 2017
</td> </tr>
<tr>
<td>
**Project duration**
</td>
<td>
48 months
</td> </tr> </table>
# History of the document
<table>
<tr>
<th>
Version number
</th>
<th>
Date
</th>
<th>
Description of the modification
</th>
<th>
Author/Reviewer
</th> </tr>
<tr>
<td>
</td>
<td>
04/06/2017
</td>
<td>
Initial draft
</td>
<td>
Dr Sawako Nakamae
</td> </tr>
<tr>
<td>
V1
</td>
<td>
08/06/2017
</td>
<td>
First IPR revision
</td>
<td>
Mrs Dijana Samson
</td> </tr>
<tr>
<td>
V2
</td>
<td>
13/06/2017
</td>
<td>
Second IPR revision
</td>
<td>
Dr Sawako Nakamae
</td> </tr>
<tr>
<td>
V3
</td>
<td>
23/06/2017
</td>
<td>
3rd IPR revision
</td>
<td>
Mr Edd Jones/ S.
Nakamae/ D. Samson
</td> </tr>
<tr>
<td>
V4
</td>
<td>
28/06/2017
</td>
<td>
Revision by Consortium members
</td>
<td>
Dr Sawako Nakamae
</td> </tr>
<tr>
<td>
V5
</td>
<td>
05/04/2018
</td>
<td>
Update of the document for the first periodic reporting time:
* Correction: EU emblem and acknowledgment of EU funding
* Modification Part 4.1: data set naming rule
* Modification Parts 4.4; 4.5 and 4.6 : Length of data preservation
</td>
<td>
Dr Sawako Nakamae / Ms Delphine Meyer
</td> </tr> </table>
# Abbreviations and Acronyms (to be updated throughout the project)
DMP: Data management plan
GA: Grant Agreement
CA: Consortium Agreement
ORDP: Open research data pilot
WP: Work package
MTE: Magneto-thermoelectric
MTD: Magneto-thermodiffusion
FF: Ferrofluid
IL: Ionic Liquids
DoA: Description of Actions
# 1\. Summary
MAGENTA is a research & innovation project that aims to bring a paradigm
change in TE-technology by exploiting the magneto-thermoelectric (MTE)
property of ionic-liquid based ferrofluids. The **primary objectives** are
**:** **1) to provide** **founding knowledge of novel MTE phenomena in
ferrofluids** , **2)** **to build** **application-specific MTE prototypes**
for their use in targeted industrial sectors (cars and portable electronics)
and **3)** **to build an innovation ecosystem around the novel MTE technology
in the field of waste-heat recovery research and development.**
During the course of the project, MAGENTA will generate data in a wide range
of R&D activities from materials synthesis (ionic liquids, magnetic
nanoparticles and ferrofluids), Magneto-ThermoElectric (MTE), Magneto-
ThermoDiffusion (MTD) measurements, theoretical and numerical analysis to
prototype device testing and validation. Since the MAGENTA technology is at an
early stage, it is important that these findings (data, publications, trial
results) are disseminated in a timely manner and opened to scrutiny by other
researchers, potential future partners and the wider research and regulatory
community.
As a project participating in the Open Research Data Pilot (ORDP) in Horizon
2020, MAGENTA will make its research data findable, accessible, interoperable
and reusable (FAIR). Nevertheless, data sharing in the open domain can be
restricted, taking in account “the need to balance openness and protection of
scientific information, commercialization and Intellectual Property Rights
(IPR), privacy concerns, security as well as data management and preservation
questions” as stated in Guidelines on FAIR Data Management in Horizon 2020
published by the European Commission.
The DMP’s purpose is, therefore, to provide the main elements of the data
management policy to be used by the Consortium regarding its complete research
data cycle. It describes: types and formats of data to be generated or
collected and how, the standards to be applied, the data-preservation methods, and
the data-sharing policies for reuse. The DMP reflects the exploitation and IPR
requirements as defined in the Consortium agreement.
The present document is the 1st version of the MAGENTA DMP, containing a
summary of the datasets; i.e., types, formats and sources (WPs and partner
names) and specific conditions to be applied for sharing and reuse. As a
living document, the DMP will be modified and refined through updates as the
project implementation progresses and/or significant changes occur. At
minimum, it will be updated in the context of the periodic
reporting/evaluation of the project.
# 2\. Data Management Plan – Overview
The DMP covers the complete research data cycle of MAGENTA as described in
Figure 1. In Step 1 of the DMP (green oval in Figure 1), MAGENTA will produce
raw data (generated through measurements and simulations, collected through
market research, etc.). The data will then be processed and analyzed into
more usable forms (i.e., reports, publishable documents, data tables, codes,
etc.). In Step 2 (blue oval), the data will be preserved using appropriate
naming rules and metadata schemes. The project’s _open access policy_ (see
following sections) will be applied to determine which datasets shall be made
accessible (shared) for re-use in Step 3 (yellow oval). The publicly accessible
datasets can then be re-used by the public for verification.
_Figure 1: Research data life-cycle (adapted from www.data-archive.ac.uk/create-manage/life-cycle)_
## 2.1. Research data types and open access policy of MAGENTA
MAGENTA will produce data in a wide range of R&D activities that are
summarized in Table 1. Note that this list may require modifications
(addition or removal of datasets) in later versions of the DMP depending
on the project developments. Once generated (or collected), these data will be
stored in several formats, which are: Documents, Images, Data, and Numerical
codes.
_Table 1: Types of data to be generated in MAGENTA_
<table>
<tr>
<th>
</th>
<th>
**Data description**
</th>
<th>
**Main Partners**
</th>
<th>
**WPs**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Ionic liquids (IL)
</td>
<td>
_SOLV_ , GUT, CNR
</td>
<td>
WP2
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Magnetic nanoparticles and ferrofluid (MNP&FF)
</td>
<td>
_CNR,_ CNRS, NCSRD
</td>
<td>
WP3, WP5, WP6
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Magneto-Thermodiffusion (MTD)
</td>
<td>
_CNRS,_ CNR, CEA
</td>
<td>
WP4, WP6
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Magneto-Thermoelectric (MTE)
</td>
<td>
_CEA_ , HESSO, GUT
</td>
<td>
WP5, WP6
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Prototype (PT)
</td>
<td>
_CFR_ , GEM, CTECH, CEA
</td>
<td>
WP7
</td> </tr> </table>
Among the datasets described in Table 1 above, the following categories of
outputs are declared "ORDP" in the Grant Agreement (Annex 1, Part A, Section
1.3.2) and will be made "Open Access" (to be provided free of charge for public
sharing). These will be included in the Open Research Data Pilot and thus be
managed according to the present DMP.
* Public deliverables specifically declared as 'ORDP' in the grant agreement:
  * D4.2: Database on MTD property in IL-FFs
  * D6.1: Single MNPs and FF structures
  * D6.2: Molecular descriptor database on IL and IL/FFs
  * D6.3: Analytical model on TE and TD effects
* Articles published in Open Access scientific journals
* Conference and Workshop abstracts/articles
For all data types, the Consortium will examine the aspects of potential
conflicts against commercialization and the IPR protection issues of the
knowledge generated before deciding which information needs to be made public
and when. The decision process, summarized in the figure below, will be
overseen by the “Dissemination, Exploitation & Communication” subcommittee
headed by CTECH and CEA (see Project Management Plan, Deliverable identifier:
PMP-D.1.1-v1, submitted on February 28, 2017).
_Figure 2: Open access to research data and publication decision diagram (from Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020)_
As stated in the Grant Agreement (Article 29.3) “As an exception, the
beneficiaries do not have to ensure open access to specific parts of their
research data if the achievement of the action's main objective would be
jeopardized by making those specific parts of the research data openly
accessible.” Such an exception applies to MAGENTA when the project findings
present high innovation level (possibility of commercialization, etc.). In
this case, the consortium will consider two forms of protection: 1) to
withhold the data for internal use, or 2) to apply for a patent in order to
commercially exploit the invention and obtain financial gain in return. In the
former case, appropriate IPR protection measures (e.g., NDA) must be taken for
data sharing outside the consortium. In the latter case, publications will be
delayed until the patent filing is completed. Otherwise, the results will be
made “Open Access” by depositing the research data into an online repository
service (see Section data repository options) or by publishing in journals
(document, reports, articles, etc.) adhering to suitable “Open Access”
(‘green’ or ‘gold’). In parallel, public deliverables will be stored at one
(or both) of the following locations: The MAGENTA website
(https://www.magenta-h2020.edu) after approval by the consortium, and the
MAGENTA page on CORDIS website where all public deliverables submitted to the
European Commission are hosted.
In the following section, details on the five datasets identified in MAGENTA
are given. They will be updated as more data are produced in the project.
# 3\. Datasets
**3.1. Ionic liquids:**
<table>
<tr>
<th>
**Data set reference and name***
</th>
<th>
DS_IL
</th> </tr>
<tr>
<td>
Purpose and relation to the objectives of the project *****
</td>
<td>
The datasets include information on ionic liquid synthesis protocol, molecular
structure and physical property calculation results, property measurement
results. The data will be used for producing novel ionic liquid based
ferrofluids.
</td> </tr>
<tr>
<td>
**Data types***
</td>
<td>
Document, data, images, codes
</td> </tr>
<tr>
<td>
**File formats***
</td>
<td>
Documents and images: All common electronic document formats
(.docx, .pdf, .tex, etc.)
Data: text format tables that are readable by common data analysis software,
or encrypted for specific data treatment software (to be defined)
Numerical codes: written in programming languages such as
Fortran 77, Fortran 90, C, C++, Perl and Bash
</td> </tr>
<tr>
<td>
**Reuse of existing data***
</td>
<td>
Processed and aggregated data will be shared with partners not collecting the
data, for the advancement of the project.
</td> </tr>
<tr>
<td>
**Data production methods***
</td>
<td>
The dataset will be generated by partner laboratories through experimental
trials, measurements, and numerical simulations.
The dataset will also include summaries of project meetings and discussions
between partners, and relevant publications in scientific journals.
</td> </tr>
<tr>
<td>
Expected size of the data *****
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Data utility *****
</td>
<td>
The collected dataset will be used for identifying ionic liquids with optimal
thermoelectric properties. It will also be used to design and synthesize novel
ionic liquid based ferrofluids.
</td> </tr>
<tr>
<td>
Potential for reuse*
</td>
<td>
In addition to the project itself, the dataset will be useful for other
research groups working on related subjects in the area of ionic liquids.
</td> </tr>
<tr>
<td>
**Diffusion principles***
</td>
<td>
The dataset generated will be shared among project partners through private
section of MAGENTA website, as well as through
</td> </tr>
<tr>
<td>
</td>
<td>
a secure file-sharing platform CoRe (see section 4.2) overseen by CEA and
CTECH. Consortium will determine which data shall be made publicly available
according to Open Access Decision scheme (see Section 1). Institutional as
well as public data repositories (ZENODO) will be used along with open access
publications in scholarly journals.
</td> </tr> </table>
## 3.2. Magnetic Nanoparticles and Ferrofluids
<table>
<tr>
<th>
**Data set reference and name***
</th>
<th>
DS_MNP&FF
</th> </tr>
<tr>
<td>
Purpose and relation to the objectives of the project *****
</td>
<td>
The datasets concern various aspects of magnetic nanoparticle
(MNP) synthesis and their dispersions in ionic liquids (ferrofluids, FF). Both
experimental and theoretical methods will be used, covering:
* Several magnetic materials to be used as nanoparticles
* Surface coating methods
* Dispersion in ionic liquids (ferrofluids) and their stability
(including the compatibility with redox couples)
* Magnetic properties of MNPs and FFs
* Electrostatic nature of FFs
* Theoretical and numerical modelling of above results
These datasets will guide researchers in identifying optimal ionic liquid-
based ferrofluids for use as magneto-thermoelectric liquids.
</td> </tr>
<tr>
<td>
**Data types***
</td>
<td>
Document, data, images, codes
</td> </tr>
<tr>
<td>
**File formats***
</td>
<td>
Documents and images: All common electronic document formats
(.docx, .pdf, .tex, etc.)
Data: text format tables that are readable by common data analysis software,
or encrypted for specific data treatment software (to be defined)
Numerical codes: written in programming languages such as
Fortran (for the atomistic and mesoscopic simulations), and the VASP (Vienna
Ab-initio Simulation Package) package for the electronic structure
calculations, etc.
</td> </tr>
<tr>
<td>
**Reuse of existing data***
</td>
<td>
Processed and aggregated data will be shared with partners not collecting the
data, for the advancement of the project.
</td> </tr>
<tr>
<td>
**Data production methods***
</td>
<td>
The dataset will be generated by partner laboratories through experimental
trials, measurements, and theoretical/numerical simulations.
The dataset will also include summaries of project meetings and discussions
between partners, and relevant publications in scientific journals.
</td> </tr>
<tr>
<td>
Expected size of the data *****
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Data utility *****
</td>
<td>
The collected dataset will give a practical guide on which MNPs and their
coating conditions can be used to create stable IL-based FFs. These IL-FF’s
magnetic, physico-chemical and electrostatic nature will be compared to their
corresponding magnetothermodiffusion and magneto-thermoelectric properties.
</td> </tr>
<tr>
<td>
Potential for reuse*
</td>
<td>
As only a few examples of IL-based ferrofluids exist, the dataset will be
useful for other research groups trying to produce novel IL-FFs.
The surface coating effect on magnetic properties of MNPs can also be
exploited in areas of research beyond thermoelectricity, such as hyperthermia.
</td> </tr>
<tr>
<td>
**Diffusion principles***
</td>
<td>
The dataset generated will be shared among project partners through the
private section of MAGENTA website, as well as through a secure file-sharing
platform CoRE (see Section 4.2) overseen by CEA and CTECH. Consortium will
determine which data shall be made publicly available according to Open Access
Decision scheme (see Section 1). Institutional as well as public data
repositories (Zenodo) will be used along with open access publications in
scholarly journals.
</td> </tr> </table>
**3.3. Magneto-thermodiffusion:**
<table>
<tr>
<th>
**Data set reference and name***
</th>
<th>
DS_MTD
</th> </tr>
<tr>
<td>
Purpose and relation to the objectives of the project *****
</td>
<td>
The datasets are produced in 3 distinct areas.
* Instrumental: High temperature Forced Rayleigh Scattering spectroscopy device development
* Experimental: MTD measurements on IL-FFs
* Theoretical: Analytical and numerical modelling of MNP movements under thermal gradient
The thermodiffusion of MNPs is believed to play a key role in the production
of high thermoelectric coefficients in FFs. The purpose here is to
experimentally observe the MTD behavior of MNPs at high temperature and to
provide theoretical understanding of such phenomena.
</td> </tr>
<tr>
<td>
**Data types***
</td>
<td>
Documents, images, data, codes
</td> </tr>
<tr>
<td>
**File formats***
</td>
<td>
Documents and images: All common electronic document formats
(.docx, .pdf, .tex, etc.)
Data: text format tables that are readable by common data analysis software,
or encrypted for specific data treatment software (to be defined)
Numerical tools such as Mathematica and COMSOL will be used.
</td> </tr>
<tr>
<td>
**Reuse of existing data***
</td>
<td>
Processed and aggregated data will be shared with partners not collecting the
data, for the advancement of the project.
</td> </tr>
<tr>
<td>
**Data production methods***
</td>
<td>
The dataset will be generated by partner laboratories through experimental
trials, measurements, and theoretical calculations.
The dataset will also include summaries of project meetings and discussions
between partners, and relevant publications in scientific journals.
</td> </tr>
<tr>
<td>
Expected size of the data *****
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Data utility *****
</td>
<td>
The collected dataset will be compared to the MTE dataset in order to
understand the impact of MTD in increasing (or decreasing) the FF's
thermoelectric efficiency. This and the MTE dataset will then be used to
identify the optimal IL-FFs for use in the prototype thermoelectric cells.
</td> </tr>
<tr>
<td>
Potential for reuse*
</td>
<td>
In addition to the project itself, the dataset will be useful for other
research groups working in the general field of colloids and nanofluids.
</td> </tr>
<tr>
<td>
**Diffusion principles***
</td>
<td>
The dataset generated will be shared among project partners through the
private section of MAGENTA website, as well as through a secure file-sharing
platform CoRE (see Section 4.2) overseen by CEA and CTECH. Consortium will
determine which data shall be made publicly available according to Open Access
Decision scheme (see Section 1). Institutional as well as public data
repositories (Zenodo) will be used along with open access publications in
scholarly journals.
</td> </tr> </table>
**3.4. Magneto-thermoelectric:**
<table>
<tr>
<th>
**Data set reference and name***
</th>
<th>
DS_MTE
</th> </tr>
<tr>
<td>
Purpose and relation to the objectives of the project *****
</td>
<td>
The dataset consists of 3 types of research work:
* Instrumental: Development of high temperature and underfield thermoelectric property measurement cell for liquid materials
* Experimental: Magneto-thermoelectric property measurement results (Seebeck coefficient, capacitance, power output).
* Theoretical: analytical and numerical modelling of MTE phenomena in IL-FFs
We aim to identify IL-FFs with optimal MTE performance and provide theoretical
understanding of the observed phenomena. These correspond to the first of the
three objectives of the project.
</td> </tr>
<tr>
<td>
**Data types***
</td>
<td>
Document, data, codes
</td> </tr>
<tr>
<td>
**File formats***
</td>
<td>
Documents: All common electronic document formats (.docx, .pdf,
.tex, etc.)
Data: text format tables that are readable by common data analysis software,
or encrypted for specific data treatment software (to be defined). Other
possible formats include: jpg (snapshots), mp4 (simulation movies), png, tiff,
xcf and svg (vector graphics)
Numerical codes: written in programming languages such as Fortran 77, Fortran
90, C, C++, Perl and Bash.
</td> </tr>
<tr>
<td>
**Reuse of existing data***
</td>
<td>
Processed and aggregated data will be shared with partners not collecting the
data, for the advancement of the project, adhering to the access rights
conditions for results and background as described in the CA – Section 9.
</td> </tr>
<tr>
<td>
**Data production methods***
</td>
<td>
The dataset will be generated by partner laboratories through experimental
trials, measurements, and numerical simulations.
The dataset will also include summaries of project meetings and discussions
between partners, and publications in scientific journals.
</td> </tr>
<tr>
<td>
Expected size of the data *****
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Data utility *****
</td>
<td>
The collected dataset will be used for identifying IL-FFs with optimal
magneto-thermoelectric properties, to be tested in the prototype devices
within the project.
</td> </tr>
<tr>
<td>
Potential for reuse*
</td>
<td>
In addition to the project itself, the dataset will be useful for other
research groups working on related subjects such as thermogalvanic cells,
thermally charged ionic supercapacitors and electrochemical cells.
</td> </tr>
<tr>
<td>
**Diffusion principles***
</td>
<td>
The dataset generated will be shared among project partners through the
private section of MAGENTA website, as well as through a secure file-sharing
platform CoRE (see Section 4.2) overseen by CEA and CTECH. Consortium will
determine which data shall be made publicly available according to Open Access
Decision scheme (see Section 1). Institutional as well as public data
repositories (Zenodo) will be used along with open access publications in
scholarly journals
</td> </tr> </table>
**3.5. Prototype:**
<table>
<tr>
<th>
**Data set reference and name***
</th>
<th>
DS_PT
</th> </tr>
<tr>
<td>
Purpose and relation to the objectives of the project *****
</td>
<td>
The datasets contain technical specifications of 'prototype' thermocells to be
produced in WP7. These include feasibility assessments, device development,
validation, performance optimization and market research. This is one of the
final objectives of the project.
</td> </tr>
<tr>
<td>
**Data types***
</td>
<td>
Documents, images, data, codes and computer-aided design (CAD) drawings
</td> </tr>
<tr>
<td>
**File formats***
</td>
<td>
Documents and images: All common electronic document formats
(.docx, .pdf, .tex, etc.)
Data: text format tables that are readable by common data analysis software,
or encrypted for specific data treatment software (to be defined).
CAD formats (.dwg, .stp, .igs, etc.)
Mesh file formats for computational fluid dynamics (.msh, etc.)
</td> </tr>
<tr>
<td>
**Reuse of existing data***
</td>
<td>
Processed and aggregated data will be shared with partners not collecting the
data, for the advancement of the project, adhering to the access rights
conditions for results and background as described in the CA – Section 9.
</td> </tr>
<tr>
<td>
**Data production methods***
</td>
<td>
The dataset will be generated by partner laboratories through experimental
trials, measurements, and numerical simulations.
The dataset will also include summaries of project meetings and discussions
between partners, as well as presentations at conferences, science fairs and
technological showcasing events.
</td> </tr>
<tr>
<td>
Expected size of the data *****
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Data utility *****
</td>
<td>
The data generated within this dataset are likely to generate patents.
</td> </tr>
<tr>
<td>
Potential for reuse*
</td>
<td>
All reuse of data in DS_PT will be restricted; its terms and conditions will be
determined by the IPR team.
</td> </tr>
<tr>
<td>
**Diffusion principles***
</td>
<td>
The dataset generated will be shared among project partners through private
section of MAGENTA website, as well as through a secure file-sharing platform
CoRe, overseen by CEA and CTECH.
Deliverables associated with these datasets are declared
"confidential" in the Grant Agreement. Thus, the DS_Prototype will not be
shared with the public, or with third parties, without proper licensing and
other IPR measures (e.g., a Non-Disclosure Agreement).
</td> </tr>
<tr>
<td>
</td>
<td>
In case of diffusion (publications, demonstrations, etc.) the Consortium will
determine which data shall be made publicly available according to Open Access
Decision scheme (see Section 1). Once the Open Access decision is granted,
these data will be made public through data repositories (ZENODO) and/or open
access publications in scholarly journals.
</td> </tr> </table>
# 4\. FAIR Data: common provisions for datasets 1, 2, 3 and 4
The following FAIR methods to make MAGENTA’s data “findable, accessible,
interoperable and reusable” apply to Datasets 1 through 4. The deliverables
associated with the Prototype dataset are declared "confidential" in the Grant
Agreement. Thus, the DS_PT (prototype) dataset will not be shared with the
public or with third parties without proper licensing and other IPR measures
(e.g., a Non-Disclosure Agreement). If the Consortium determines that some parts of DS_PT
can be made publicly available, then they will comply with the provisions
described in this section.
## 4.1. Making data findable
<table>
<tr>
<th>
**Metadata** *
</th>
<th>
Metadata is data on the research data themselves. It enables other researchers
to find data in an online repository and is, as such, essential for the
reusability of the dataset. By adding rich and detailed metadata, other
researchers, can better determine whether the dataset is relevant and useful
for their own research. In the online depositories used by MAGENTA partners,
metadata (type of data, location, etc.) will be uploaded in a standardized
form. This metadata will be kept separate from the original raw research data.
As described in the project Grant Agreement (Article 29.2), the bibliographic
metadata include all of the following:
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable
* a persistent identifier
Note: All publications resulting from MAGENTA actions must acknowledge the
financial support by EU by the statement: “MAGENTA project has received
funding from the European Union’s Horizon 2020 research and innovation
programme under grant agreement No 731976.”
</th> </tr>
<tr>
<td>
Persistent and unique identifier *****
(ex: DOI Digital Object Identifier)
</td>
<td>
DOI and Creative Common’s license numbers will be used as persistent
identifiers on open data repositories.
</td> </tr>
<tr>
<td>
Naming conventions* (see 1.1)
</td>
<td>
Files and folders at data repositories will be versioned and structured by
using a name convention consisting of project name, dataset name and ID; ex.
“MAGENTA_DS_nn_au_cc_type_kw_mmyy_vn”, with the
following information listed in the title, in compliance with the project’s
PMP:
• nn = dataset name (IL, MNPFF, MTD, MTE, Proto)
* au = Author = Partner acronym
* cc = CO or PU
* type = data type (doc, img, data, etc.)
* kw = keyword (EMIMPF6, FF-BMIMTFI, etc.)
* mmyy = production date (0317 = March 2017)
* vn = version number (01.01, 03.01, etc.)
An illustrative sketch of this naming rule is given after this table.
</td> </tr>
<tr>
<td>
Search keywords *****
</td>
<td>
Example keywords (will be modified with the project advancement)
DS_IL: Synthesis, simulation, structure
DS_MNP&FF: Synthesis, structure, magnetism, stability, simulation
DS_MTD: Device, Soret coefficient, Diffusion coefficient, field effect,
simulation, theory
DS_MTE: Thermogalvanic, Supercapacitance, theory, simulation, Power output
DS_PT: Device, feasibility, simulation, power output
</td> </tr>
<tr>
<td>
Version numbers *****
</td>
<td>
Individual file names will contain version numbers that will be incremented at
each revision.
</td> </tr> </table>
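As announced in the naming-convention cell above, the helper below is a small illustrative sketch that assembles file names following the stated rule; the 'v' prefix before the version number and the validation checks are assumptions, not part of the convention as written.

```python
from datetime import date

DATASETS = {"IL", "MNPFF", "MTD", "MTE", "Proto"}
ACCESS = {"CO", "PU"}  # confidential / public

def magenta_filename(nn, au, cc, dtype, kw, when: date, version):
    """Build 'MAGENTA_DS_nn_au_cc_type_kw_mmyy_vn' per the naming rule."""
    if nn not in DATASETS:
        raise ValueError(f"unknown dataset name: {nn}")
    if cc not in ACCESS:
        raise ValueError(f"access flag must be CO or PU, got: {cc}")
    mmyy = when.strftime("%m%y")  # e.g. 0317 = March 2017
    # The 'v' prefix on the version segment is an assumption.
    return f"MAGENTA_DS_{nn}_{au}_{cc}_{dtype}_{kw}_{mmyy}_v{version}"

# Example usage with values drawn from the convention's own examples:
print(magenta_filename("MTD", "CNRS", "PU", "data",
                       "EMIMPF6", date(2017, 3, 1), "01.01"))
# -> MAGENTA_DS_MTD_CNRS_PU_data_EMIMPF6_0317_v01.01
```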
**4.2. Making data accessible**
<table>
<tr>
<th>
Data openly available*
</th>
<th>
The MAGENTA project datasets will be first stored and organized in a database
by the data owners (personal computer, or on the institutional secure server)
and on the project database (project website’s private section and CoRe). Some
datasets, for which the Consortium declares no confidentiality or IPR issues,
will also be stored in ZENODO, the open access repository of the Open Access
Infrastructure for Research in Europe (OpenAIRE).
In such cases, the data access policy will be unrestricted. An embargo period
may apply if collected datasets are linked to a green open access publication.
</th> </tr>
<tr>
<td>
Tools to read or reuse data*
</td>
<td>
Most data are produced in common electronic document/data/image formats
(.docx, .pdf, .tex, .jpg, .eps, ASCII etc.) that do not require specific
software.
Numerical codes may require specific compilers. (to be specified)
</td> </tr>
<tr>
<td>
Ways to make data available*
</td>
<td>
Data objects will be deposited in ZENODO by CTECH under:
* Open access to data files and metadata, with data files provided over standard protocols such as HTTP (a sketch of such a deposit follows this table).
* Use and reuse of data permitted.
To protect the copyright of the project knowledge, Creative Commons license
will be used in some cases.
</td> </tr>
<tr>
<td>
**Data and publication repository***
</td>
<td>
For preservation and sharing of internal data and datasets, MAGENTA will use:
</td> </tr>
<tr>
<td>
</td>
<td>
* Individual researchers' data storage media
* Partner’s individual institutions’ secure data repositories
* Project website’s private section
( _https://www.magenta-h202.eu_ member only section)
* Dedicated collaborative data/file sharing space on CoRe: The CoRe platform is a SharePoint-based data/file sharing service administered by CNRS, the Centre National de la Recherche Scientifique. CoRe guarantees service availability 7 days a week and 24 hours a day except during blocking incidents, after which service will be re-established within 5 hours. The service may be affected during system maintenance periods, which will be communicated to the users.
For Open Access data and publications, MAGENTA will use:
* MAGENTA website’s public section
* OpenAIRE
* ZENODO ( _https://zenodo.org_ ) for ORDP data and
datasets
* Open archive HAL page dedicated to MAGENTA publications on HAL-CEA, a repository for self-archiving of scientific publications of the CEA's researchers and laboratories, providing free access to articles, conferences, reports, theses, etc. (https://hal-cea.archives-ouvertes.fr/HAL-MAGENTA/)
* Other national or institutional open access archiving platforms used by consortium partners. The links
toward these platforms (websites) will be included in the HAL-MAGENTA site
(see above)
* Open access journals
</td> </tr>
<tr>
<td>
Access procedures*
</td>
<td>
All data deposited on ZENODO will be accessible without restriction to the
public. For other data, potential users must contact the IPR team or the data
owner in order to gain access. If necessary, appropriate IPR procedure (such
as non-disclosure agreement) will be used.
</td> </tr> </table>
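To make the deposit workflow above concrete, the following sketch outlines how a dataset and its bibliographic metadata could be pushed to ZENODO through its public REST deposit API; the token, file name and metadata values are placeholders, and exact API usage should be checked against the current ZENODO documentation.

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"  # placeholder, never commit

# 1. Create an empty deposition.
r = requests.post(f"{ZENODO}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload a data file to the deposition's file bucket (file name is
#    hypothetical, following the project naming convention).
with open("MAGENTA_DS_MTD_CNRS_PU_data_EMIMPF6_0317_v01.01.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/dataset.csv",
                 data=fh, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach the bibliographic metadata required by the Grant Agreement.
metadata = {"metadata": {
    "title": "MAGENTA MTD dataset (illustrative)",
    "upload_type": "dataset",
    "description": "Example record; MAGENTA has received funding from the "
                   "European Union's Horizon 2020 programme, GA No 731976.",
    "creators": [{"name": "Surname, Name", "affiliation": "CEA"}],
    "keywords": ["Horizon 2020", "MAGENTA", "magneto-thermodiffusion"],
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
# A final POST to the deposition's 'publish' action would mint the DOI.
```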
**4.3. Making data interoperable**
<table>
<tr>
<th>
**Standards, vocabularies, or methodologies for data and metadata***
</th>
<th>
Controlled vocabularies will be used in descriptive metadata fields to support
consistent, accurate, and quick indexing and retrieval of relevant data.
Keywords (see section 4.1) and their synonyms will be used for indexing and
subject headings of the data and metadata. As controlled vocabularies change
within different disciplines of Science, these keywords will be updated during
the course of the project to increase the interoperability of the project’s
data and metadata.
</th> </tr>
<tr>
<td>
Inter-disciplinary interoperability *****
</td>
<td>
In order to ensure the interoperability, all datasets will use the same
standards for data and metadata capture/creation
</td> </tr> </table>
**4.4. Increase data re-use**
<table>
<tr>
<th>
Data licensing*
</th>
<th>
Creative Commons licensing will be used to protect the ownership of the
datasets. Both ShareAlike and NonCommercial-ShareAlike licenses will be
considered for the parts of the datasets that the Consortium has decided to
make public.
</th> </tr>
<tr>
<td>
Date of data release*
</td>
<td>
Immediately after the Consortium decision to make data Open Access. However, an
embargo period may be applied if the data (or parts of the data) are used in
articles published in "Green" open access scholarly journals. The maximum
embargo period recommended by the European Commission is 6 months.
</td> </tr>
<tr>
<td>
Access to third parties*
</td>
<td>
For datasets deposited on a public data repository (ZENODO) the access is
unlimited.
</td> </tr>
<tr>
<td>
**Restricted re-use : exception to the general diffusion principles***
</td>
<td>
Restrictions on re-use policy are applied for all protected data (see Figure
2: Open access to research data and publication decision diagram), whose re-
use will be limited within the project partners.
Other restrictions include:
* The “embargo” period imposed by scholarly journals publication policy (Green Open access)
* Some or all of the following restrictions may be applied with Creative Commons licensing of the dataset:
* Attribution: requires users of the dataset to give appropriate credit, provide a link to the license, and indicate if changes were made.
* NonCommercial: prohibits the use of the dataset for commercial purposes by others.
* ShareAlike: requires the others to use the same license as the original on all derivative works based on the original data.
</td> </tr>
<tr>
<td>
Data quality assurance processes*
</td>
<td>
The Quality and Risk committee (composed of WP leaders) holds monthly video-
conference meetings to ensure the proper conduct of the project's data management.
</td> </tr>
<tr>
<td>
Length of time for reuse*
</td>
<td>
At least 1 year after the project.
</td> </tr> </table>
<table>
<tr>
<th>
Costs for making data FAIR and how to cover these costs*
</th>
<th>
* Fees associated with the publication of scientific articles containing the project's research data in "Gold" Open Access journals. The cost sharing, in case of multiple authors, shall be decided among the authors on a case-by-case basis.
* Project website operation: to be determined
* Data archiving at ZENODO: free of charge
* Copyright licensing with Creative Commons: free of charge
</th> </tr>
<tr>
<td>
**Data manager responsible during the project** *
</td>
<td>
During the project data will be updated regularly as new results are submitted
by partners. The data/metadata on a CoRe server will be backed up monthly.
</td> </tr>
<tr>
<td>
**Responsibilities of partners**
</td>
<td>
Every partner is responsible for the data they produce. Any fee incurred for
Open Access through scientific publication of the data will be the
responsibility of the data-owning partner(s) (authors), in compliance with the
CA, Article 8.4.2.1: During the Project and for a period of 5 years after the
end of the Project, the dissemination of own Results by one or several Parties
including but not restricted to publications and presentations, shall be
governed by the procedure of Article 29.1 of the Grant Agreement subject to
the following provisions. etc...
</td> </tr>
<tr>
<td>
Potential value of long term preservation*
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Costs of long term preservation*
</td>
<td>
Data preservation of at least 5 years after the project is required by the
Grant Agreement (Article 31.3). The associated costs for dataset preparation
for archiving will be covered by the project itself. Long-term preservation
will be provided and associated costs covered by a selected disciplinary
repository.
</td> </tr> </table>
# 6\. Archiving and preservation
<table>
<tr>
<th>
Data at the end of the project
</th>
<th>
January 1st, 2021
</th> </tr>
<tr>
<td>
Data selection*
</td>
<td>
To be decided by the Consortium at the end of the project
</td> </tr>
<tr>
<td>
Estimated final volume
</td>
<td>
To be determined
</td> </tr>
<tr>
<td>
Recommended preservation duration*
</td>
<td>
The MAGENTA project database will be designed to remain operational for at
least 5 years after the project end.
</td> </tr>
<tr>
<td>
Long term preservation storage*
</td>
<td>
The final dataset will be transferred to the ZENODO repository, which ensures
sustainable archiving of the final research data. Additional data storage will
be ensured by individual partner institution’s data repositories and at CoRe.
</td> </tr> </table>
# 7\. Data security*
<table>
<tr>
<th>
Provisions for data security*
</th>
<th>
MAGENTA will use methods that emphasize easy access and extended contact and
trust building among participants. The following guidelines will be followed
in order to ensure the security of the data:
• Store data in at least two separate locations to avoid loss of data;
</th> </tr>
<tr>
<td>
</td>
<td>
* Encrypt data if it is deemed necessary by the participating researchers (see the sketch after this table);
* Limit the use of USB flash drives;
* Label files in a systematically structured way in order to ensure the coherence of the final dataset;
* The CoRe platform offered by CNRS guarantees service availability 7 days a week and 24 hours a day except during blocking incidents, after which service will be re-established within 5 hours. The service may be affected during system maintenance periods, which will be communicated to the users.
</td> </tr>
<tr>
<td>
Security of long term preservation*
</td>
<td>
Long term data preservation security will be ensured by partner institution’s
data repositories.
</td> </tr> </table>
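As referenced in the guidelines above, here is a minimal sketch of encrypting a file at rest, assuming the third-party Python `cryptography` package is used (an assumption; partners may equally rely on institutional tools).

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data
# (e.g. on the institutional secure server, not next to the files).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a (hypothetical) measurement file before copying it to a
# second storage location, per the two-locations guideline.
with open("mte_measurements.csv", "rb") as fh:
    token = fernet.encrypt(fh.read())
with open("mte_measurements.csv.enc", "wb") as out:
    out.write(token)

# Decryption reverses the process for authorized operators holding the key.
plaintext = fernet.decrypt(token)
```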
# 8\. Ethical aspects*
<table>
<tr>
<th>
Impact of ethical or legal issues*
</th>
<th>
No ethical issue has been identified.
</th> </tr>
<tr>
<td>
**9\. Other issues***
</td>
<td>
</td> </tr>
<tr>
<td>
Other data management procedures*
</td>
<td>
No other issues to report at this time.
</td> </tr> </table>
0678_VISUALMEDIA_687800.md
# 1\. INTRODUCTION
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated **under** the research project. It is a
document outlining how research data will be handled during **the initiative**
, and even after the **action** is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved. The DMP is not a fixed document; it evolves
and gains more precision and substance during the lifespan of the project.
## 1.2 Document description
The Data Management Plan intends to identify the dataset which is going to be
processed, to define a general protocol to create, manage and guarantee free
access to results and data collected within the project lifespan. This
document will be periodically updated along the duration of the project.
Due to the project's nature, the type of data managed in the project can't be
considered sensitive beyond some contact details and answers to
questionnaires. In Visualmedia, the amount of information will be relatively
small, since interest groups are established and focused on media professionals
and data collection is only addressed for consultation matters.
More detailed versions of the DMP will be then submitted in case any
significant change may occur such as the generation of new data sets or any
potential change in the consortium agreement.
# 2\. DATA COLLECTION
## 2.1 Data description
In the Visualmedia project there are six different sorts of data that will be
gathered and produced during the project lifetime.
− **Personal Data:** contact details from stakeholders and project partners
who are taking part in either the requirements definition, any consultation
procedures or else becoming a member of the On-line Community or CIAG.
− **Questionnaires:** forms created in order to collect feedback from industry
professionals and end users about some aspects of the project that the
consortium wishes to confirm and validate.
− **Interviews:** after answering questionnaires, it is expected that more
complex parts of the system will be studied in depth, with the aim of obtaining
a clear idea of customers' expectations.
− **Cognitive Walkthroughs:** during the evaluation phase, end users will be
using the developed software solution; while using it, they will comment on its
functionality, in order to identify flaws and room for improvement of the
product.
− **Graphic information:** pictures, videos, etc. that are shared among end-
users when implementing the technology in their own virtual studios.
− **Deliverables:** these documents were described in the Description of Work
and accepted by the EC. According to the Workplan, these reports will be
published on the Project website to be accessible for the general public. Some
of the deliverables will contain aggregated data obtained by means of
questionnaires and interviews, summing up the gathered feedback without
revealing personal information from participants.
**Figure 1. Types of Data** (deliverables, graphic information, interviews,
questionnaires, contact information)
Most of the datasets will be part of the information generated under the
following tasks, since these work packages involve contacting and getting
feedback from stakeholders and end users. Information obtained in WP2 and WP5
will mainly consist of the output resulting from questionnaires and interviews
distributed to stakeholders. However, data within WP7 is generally made up of
personal contact details from potential end-users to whom forthcoming results
could be of interest.
<table>
<tr>
<th>
**WP/Task nr.**
</th>
<th>
**WP/Task Description**
</th>
<th>
**Responsible**
</th>
<th>
**Output**
</th> </tr>
<tr>
<td>
WP2
</td>
<td>
User Consultations & Requirements Definitions
</td>
<td>
NTNU
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 2.2
</td>
<td>
Identification of functionality requirements
</td>
<td>
NTNU
</td>
<td>
Questionnaires/ Interviews
</td> </tr>
<tr>
<td>
Task 2.3
</td>
<td>
Identification and monitoring of user needs and interests
</td>
<td>
NTNU
</td>
<td>
Questionnaires/ Interviews
</td> </tr>
<tr>
<td>
WP5
</td>
<td>
System Verification and Validation
</td>
<td>
NTNU
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 5.4
</td>
<td>
Test Sessions and Data Collection
</td>
<td>
NTNU
</td>
<td>
Interviews/ Questionnaires/ Cognitive Walkthroughs (D5.2)
</td> </tr>
<tr>
<td>
Task 5.5
</td>
<td>
Data analysis and feedback
</td>
<td>
NTNU
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
WP7
</td>
<td>
Dissemination
</td>
<td>
Brainstorm
</td>
<td>
Deliverable
</td> </tr>
<tr>
<td>
Task 7.1
</td>
<td>
Promotional Activities
</td>
<td>
Brainstorm
</td>
<td>
Contact details
</td> </tr>
<tr>
<td>
WP8
</td>
<td>
Commercial Exploitation and Business Planning
</td>
<td>
Signum
</td>
<td>
Contact details
</td> </tr>
<tr>
<td>
Task 8.1
</td>
<td>
Establish and Manage Commercial Impact Advisory Group
</td>
<td>
Signum
</td>
<td>
Contact details
</td> </tr> </table>
**Table 1. Work Packages data outcome**
## 2.2. Participants
As explained in deliverable D2.1 User Consultation Protocol and Tools, users
in the **Visualmedia** project are composed of:
− _End-users_ participating in the project. As stated in D2.1, the user
partners in the consortium are: Bayerischer Rundfunk (Germany), BlueSky TV
(Greece), Hallingdølen (Norway), Radio Televisión Española (RTVE, Spain),
Setanta Sports (Ireland), and Televiziunea Română (TVR, Romania). The end-
users are considered those persons who will actually use the Visualmedia
product.
− _Commercial impact advisory group (CIAG)_ , formed from a group of
_professionals_ from the media industry who are not directly connected to the
project, with whom a deeper analysis of the commercial potential of the
**Visualmedia** product will be exchanged and discussed.
− _Users from outside the consortium._ They are stakeholders from 13 countries
not included in the consortium but who are members of the Online Community of
Interest and may become future sales representatives of the resulting product.
**Figure 2. Different participants’ groups involved in the Visualmedia
project** (end users within the consortium, CIAG, users outside the
consortium)
## 2.3. Tools
### 2.3.1. Questionnaires
This is one of the main tools for collecting the data for establishing the
user requirements and for validation. These forms have been designed by NTNU.
Two different types of questionnaires were created: an online questionnaire
distributed to all users (i.e. end-users, CIAG, and users outside the
consortium), and another questionnaire devoted to the participants in the
internal workshops at each user's site.
### 2.3.2. Interviews
To complement the data from the questionnaires, a series of face-to-face
interviews was organized by the team of researchers from NTNU. Notes were
written down during each interview, an audio recording was made in order to
register all the information produced, and the interview situations were
photographed. A project-internal summary of each interview was produced, and
each user partner checked and approved their own interview summary.
Additionally, Skype meetings with the members of the CIAG were held, during
which notes were taken, audio recordings were made, and internal summaries
were produced and put into D2.2. This was to ensure that the requirements
identified with the end-users match the demands of experts outside the
project, and to ensure market mindshare.
### 2.3.3. Production diaries and data collection
During the validation process, user experiences will be collected in the form
of demo descriptions. Data collection will be based on actual user experiences
after the end-users have used the Visualmedia system to create their own demos.
The emphasis lies on the practical experience and actual demos.
All end users are committed to document their work when implementing demo
material using Visualmedia. The feedback will be collected during March–June
2017.
The end users are going to be provided with a template in which they will
document the processes, materials, experiences etc. in each of the demos
they’ll make. These templates will act as diaries that also will show each end
user’s personal development process as they gain more knowledge along the way.
It will be important that end users also share data in the form of photos,
videos and other visual material. The materials are intended to be submitted
via e-mail or, if necessary, some other transfer method. In addition to
writing diaries, the end users' experiences will also be collected by means of
Skype interviews, cognitive walkthroughs, and questionnaires. The data
collection should be planned and organized based on each end user's individual
needs. A detailed plan will be written in D5.1, and all the collected material
will be combined into a final report (D5.2).
## 2.4. Evaluation and analysis of the data
Apart from the feedback (questionnaires, interviews, audio recordings, etc.),
there will also be data in video format (real-time and non-real-time) that
will be twofold: first, videos of the installation process and handling of
the product, available mostly for project-internal usage; second, the
prototype productions themselves from all the user partners. All this is in
order to analyse the quality of the productions and refine the technology
components, as well as to advise the users in the proper use of the
technology.
The conclusions obtained by means of questionnaires, interviews, etc., which
cannot be considered sensitive, will be made public. The collected material
will be processed into both written and visual form (charts, still photos
from demos, etc.) in the final reports in order to support the further
development of **Visualmedia**.
End users are expected to provide questionnaires, photos, etc. throughout the
different stages of the project: first about the expectations and the user
cases, then about the demo performance, and finally a concluding report about
the final demo products (was the final product what you had expected in terms
of quality, better or worse, and how/why, etc.).
# 3\. DOCUMENTATION AND METADATA
As explained in previous sections of the DMP, data produced in Visualmedia
will mostly be the outcome of analysing questionnaires and interviews to
better understand the users' expectations and their perception of the
potential of the product.
The information handled within this project may not be particularly suitable
for reuse, since it has been designed for the Visualmedia case. Despite this
fact, the conclusions resulting from the research are going to be openly
published and summarised in the approved deliverables, whose final versions
will be accessible on the project website.
In a first stage, information is foreseen to be saved and backed up on
personal computers. Additionally, file nomenclature will follow personal
criteria. Regarding file versioning, the intention is to follow the project
policies detailed in D1.1 - Project Handbook.
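As an illustration of such a versioning policy, the following is a minimal sketch assuming a hypothetical `<name>_v<NN>.<extension>` convention (the authoritative rules remain those of D1.1):

```python
import re

# Hypothetical convention: <document-name>_v<two-digit version>.<extension>,
# e.g. "D5.2_data_collection_v03.docx"; the authoritative rules are in D1.1.
VERSIONED = re.compile(r"^(?P<stem>.+)_v(?P<ver>\d{2})(?P<ext>\.\w+)$")

def next_version(filename: str) -> str:
    """Return the file name with its version number incremented by one."""
    match = VERSIONED.match(filename)
    if match is None:
        raise ValueError(f"'{filename}' does not follow the versioning convention")
    version = int(match.group("ver")) + 1
    return f"{match.group('stem')}_v{version:02d}{match.group('ext')}"

print(next_version("D5.2_data_collection_v03.docx"))  # D5.2_data_collection_v04.docx
```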
In a second stage, the consortium has chosen the Google Drive platform to
upload and share information, making it accessible to all project partners.
The server can thereby act at the same time as a backup copy.
Concerning personal contact details, which will have been previously approved
by informed consent, only some contact information from people participating
in the On-line Community will be published on the project website and in
deliverables. CIAG members authorise the project consortium to publish their
contact details and photo in the corresponding section of the website.
Information collected via questionnaires and interviews will be published
collectively, never revealing any personal opinion.
At this stage of the project, the main formats of files containing information
are described in the following table. However, this information is subject to
future changes, which will be duly reflected in the next versions of the DMP:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**File Format**
</th> </tr>
<tr>
<td>
Questionnaires
</td>
<td>
Microsoft Word, Pages, PDF
</td> </tr>
<tr>
<td>
Interviews
</td>
<td>
AVI, mp4, jpeg, png
</td> </tr>
<tr>
<td>
Videos
</td>
<td>
avi, mpeg
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Microsoft Word
(compatible versions), Pages, PDF
</td> </tr>
<tr>
<td>
Webinars, Demo Sessions
</td>
<td>
AVI, FLT, mp4
</td> </tr>
<tr>
<td>
Contact Details
</td>
<td>
Microsoft Word
</td> </tr> </table>
**Table 2. File formats**
# 4\. ETHICS AND LEGAL COMPLIANCE
On the one hand, NTNU, as the partner responsible for the User Consultation
and Validation process deliverables, is in charge of data security and legal
compliance. As a public institution, the university acts in accordance with
its internal Information Security Policies and complies with the national
legislation on this matter.
Brainstorm is a company certified under ISO 9001 and is committed to taking
the necessary measures to guarantee data protection.
In deliverables, answers from respondents will not be singled out
individually; it will therefore be impossible for external people to identify
individual respondents' answers. Data will be analysed as a whole. However,
the questionnaires were not anonymous, as every respondent gave their name
and contact information. This information will not be revealed at any time.
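As an illustration of analysing the data as a whole without singling out respondents, the following is a minimal sketch (hypothetical column names and values; pandas assumed):

```python
import pandas as pd

# Hypothetical questionnaire export: respondents gave their names and contact
# details, which must never appear in published results.
responses = pd.DataFrame({
    "name":        ["A. Smith", "B. Jones", "C. Kim", "D. Lee"],
    "email":       ["a@tv.example", "b@tv.example", "c@tv.example", "d@tv.example"],
    "role":        ["producer", "producer", "engineer", "engineer"],
    "ease_of_use": [4, 5, 3, 4],  # answers on a 1-5 scale
})

# Drop the identifying columns, then report only aggregate statistics.
anonymised = responses.drop(columns=["name", "email"])
summary = anonymised.groupby("role")["ease_of_use"].agg(["count", "mean"])
print(summary)
# Note: very small groups may still be identifying; in practice such groups
# would be merged before anything is published.
```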
# 5\. STORAGE AND BACK UP
Initially, data have been stored on Google Drive, where all the information
is uploaded in order to be accessible by all the consortium partners. Google
Drive is being used to back up the data and, at the same time, as a
repository among partners to facilitate data exchange. Regarding
deliverables, they will be uploaded to the project website.
The owner of the data storage for questionnaires and interviews will be NTNU,
but only for practical reasons, since they will be in charge of leading the
questionnaire and interview collection. Concerning demo session videos and
webinars, Brainstorm will assume the responsibility of keeping the
information safe. Last but not least, personal information will be kept on a
personal computer with private access.
# 6\. DATA SHARING
Public deliverables will be uploaded and accessible in due course in the
Outcomes section of the project website.
Graphic material such as demonstrations, webinars and session videos will be
uploaded to the project's YouTube channel to be openly accessible to the
general public.
# 7\. SELECTION AND PRESERVATION
At this stage, the intention is to preserve and keep the data for at least 5
years after the end of the project.
# 8\. RESPONSIBILITIES AND RESOURCES
As a collaborative project, data management responsibility is divided among
different persons/organisations, depending on the role they have adopted in
the project:
<table>
<tr>
<th>
**Type of Data**
</th>
<th>
**Resource**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
Questionnaires/ Interviews
</td>
<td>
Google Drive/External hard drive
</td>
<td>
Andrew Perkis (NTNU)
</td> </tr>
<tr>
<td>
Stakeholders contact details
</td>
<td>
Google Drive
</td>
<td>
Francisco Ibáñez (Brainstorm)
</td> </tr>
<tr>
<td>
Demonstrations, Webinars, user cases
</td>
<td>
YouTube channel
</td>
<td>
Javier Montesa (Brainstorm)
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
Google Drive/ Website
</td>
<td>
Francisco Ibáñez (Brainstorm)
</td> </tr> </table>
**Table 3. Storage resources**
Taking into consideration the nature of the data handled in the project, no
exceptional measures are foreseen to be needed in order to carry out our
plan. Moreover, no additional expertise will be required for data management.
Regarding the work to be done on data storage and backup, the project has
agreed to appoint task leaders to take care of ensuring commitment to the
plan.
<table>
<tr>
<th>
**Task name**
</th>
<th>
**Responsible person name**
</th> </tr>
<tr>
<td>
Data capture
</td>
<td>
Veli-Pekka Räty (NTNU)
</td> </tr>
<tr>
<td>
Metadata production
</td>
<td>
Sebastian Arndt (NTNU)
</td> </tr>
<tr>
<td>
Data storage & back up
</td>
<td>
Andrew Perkis (NTNU)
</td> </tr>
<tr>
<td>
Data archiving & sharing
</td>
<td>
Francisco Ibáñez (Brainstorm)
</td> </tr> </table>
**Table 4. Task leaders**
0681_SESAME_671596.md
# 1 Introduction
## 1.1 Preamble
The _**SESAME Project**_ 1 ( _**Grant Agreement (GA) No. 297291**_ ),
_hereinafter mentioned as the “Project”_ , is an active part of the 5G-PPP
initiative and targets innovations around three central elements in 5G, as
follows:
1. The placement of network intelligence and applications in the network edge through Network Functions Virtualization (NFV) and Edge Cloud Computing;
2. The substantial evolution of the Small Cell concept, already mainstream in 4G but expected to deliver its full potential in the challenging high dense 5G scenarios, _and_ ;
3. The consolidation of multi-tenancy in communications infrastructures, allowing several operators/service providers to engage in new sharing models of both access capacity and edge computing capabilities.
SESAME proposes the Cloud- _Enabled_ Small Cell (CESC) concept, a new
multi-operator enabled Small Cell that integrates a virtualized execution
platform (i.e., the Light DC ( _Data Center_ )) for deploying Virtual Network
Functions (VNFs), supporting powerful “ _self-x”_ 2 management and executing
novel applications and services inside the access network infrastructure. The
Light DC will feature low-power processors and hardware accelerators for
time-critical operations and will build a highly manageable clustered edge
computing infrastructure. This approach will allow new stakeholders to dynamically enter
the value chain by acting as neutral host providers in high traffic areas
where densification of multiple networks is not practical. The optimal
management of a CESC deployment is a key challenge of SESAME, for which new
orchestration, NFV management, virtualization of management views per tenant,
“self-x” features and radio access management techniques will be developed.
After designing, specifying and developing the architecture and all the
involved CESC modules, SESAME will culminate with a prototype with all
functionalities for proving the concept in relevant use cases. Besides, CESC
will be formulated consistently and synergistically with other 5G-PPP
components through coordination with the corresponding projects.
## 1.2 Framework for Data Management
An old tradition and a new technology have “converged” to realize an
exceptional public good. The “old tradition” is the willingness of scientists
and scholars to publish -or to make known- the results of their research in
scholarly journals without payment, for the sake of inquiry and knowledge and
for the promotion of innovation. The new technology is the Internet, which has
modified our lives in the way we work, study, entertain ourselves and perceive
the modern digital world. The Internet has fundamentally changed the practical and
economic realities of distributing scientific knowledge and cultural heritage.
For the first time ever, the Internet now offers the chance to constitute a
global and interactive representation of human knowledge, including cultural
heritage and the guarantee of worldwide access.
The “public good” they can thus make possible is the world-wide electronic
distribution of the peer- _reviewed_ journal literature, together with a
“completely free” and/or unrestricted access to it by all scientists,
scholars, teachers, students, and other curious minds. Removing access
barriers to this literature will accelerate research, enrich education, share
the learning of the rich with the poor and the poor with the rich, make this
literature _as useful as it can be_ , and lay the foundation for uniting
humanity in a common intellectual conversation and quest for knowledge.
According to the provisions of the SESAME Grant Agreement (GA) 3 , all
involved partners “ _must implement the Project effort as described in the
respective Annex 1 and in compliance with the provisions of the GA and all
legal obligations under applicable EU, international and national law”._
Effective research data management is an important and valuable component of
the responsible conduct of research. This document provides a data management
plan (DMP), which describes how data will be collected, organised, managed,
stored, secured, backed up, preserved and, where applicable, shared.
The scope of the present DMP is to make the SESAME data easily discoverable,
accessible, assessable and intelligible, useable beyond the original purpose
for which it was collected as well as interoperable to specific quality
standards.
# 2 Project Management Structure and Procedures
Being a **large contribution Project** of 30 months' duration, comprising
**20 partners** 4 , and of complexity comparable to traditional large IP
projects, SESAME has a management structure that has been carefully designed
based on the coordinator’s and partners’ experience in running large
EC-funded projects: a **comprehensive and lightweight management
structure.**
The main goal of the management structure, as shown in _**Figure 1** _
(below), is to ensure that the Project will reach its objectives within the
scheduled time, making use of the budgeted resources, while complying with
the Commission’s regulations and applied procedures.
The well-defined project management (PM) structure ensures a proper level of
co-ordination and cooperation amongst the consortium members. Additionally,
project management has the following responsibilities: Project administration,
project organization, management of the technical progress of the project
according to plans, co-ordination with the other EC projects in the 5G-PPP 5
and other interested parties. The Project Coordinator ( _OTE_ ) already has
previous experience in managing large European projects, which fully qualifies
it to lead such an initiative. An intensive horizontal (between WPs) and
vertical (between project management and partners) communication and
collaboration has been put in place, for the proper and within due time
execution of all related actions.
The SESAME- _based_ management activities comprise administrative and
technical issues, including the legal framework and the organizational
structure of the complete Project. Furthermore, a roadmap of meetings and
workshops and related activities as well as quality assurance procedures and
steering tools are described. The goal of the project management activities is
also to “identify and address” potential issues, risks or conflicts emerging
across partners, and manage the intellectual property related to both prior
knowledge as well as project achievements.
The SESAME partners have significant experience with collaborative projects
and have been -or are- already working together with other consortia. All
partners have a long-term strategic interest in the field, and most of them
have contributed significantly to the R&D topics at the core of the 5G-PPP
vision in previous/running projects. Main criteria for the selection of each
partners’ role were excellence in the field, reliability, experience and
commitment, as discussed in more details in the context of the Project’s GA.
SESAME consists of eight (-8-) distinct Work Packages, as described in
_Section 3.1.2_ of the corresponding DoW. A visual representation of the
interdependencies between the work packages is given in the Gantt and Pert
diagrams, as both appear in _Section 3.1.1_ and in _Section 3.1.2_ of the DoA,
_correspondingly_ . The advanced research parts in the Project will be managed
by using an agile management, based on decision points and concrete
milestones.
In the rest of this Section, we explicitly describe the governance part that
identifies the key roles and bodies, the management process, knowledge and
innovation management, including the risk assessments.
**Figure 1: SESAME Management Structure**
### 2.1.1 Management bodies and Organization
The management bodies employed in SESAME comprise persons, committees and
other entities that are responsible for making management decisions,
implementing management actions, and their interrelation. The management
bodies are illustrated in _**Figure 1** _ and include:
* PM - Project Manager (Dr. Ioannis Chochliouros, _OTE_ , for administrative management);
* TM - Technical and Scientific Manager (Dr. Anastasios Kourtis, _NCSRD_ , for technical management);
* IA - Innovation Architect (Dr. Nick Johnson, _IPA_ , for knowledge, innovation & exploitation management);
* SM - Standardization Manager (Mr. Mick Wilson, _FLE_ , for standardisation and exploitation management);
* Diss&Comm (Dissemination and Communication) Leader (Dr. Tinku Rasheed, _CNET_ , for dissemination and communication management);
* GA - General Assembly (one representative per partner, administrative management);
* PB - Project Board, executive committee acting as decision-implementation body;
* AB - Advisory Board (chaired by PM, for international visibility beyond Europe);
* WPLs - Work Package Leaders, and;
* TLs - Task Leaders.
Their detailed role and duties are described in the next subsections.
#### (i) Project Manager (PM)
The Project Manager (PM) for SESAME is Dr. Ioannis Chochliouros, who is a
senior manager and department head at OTE. Dr. Chochliouros is leader of _OTE
Research Programs_ within the _Fixed & Mobile Technology Strategy and Core
Network Division, _ within _OTE_ , since 2005. Dr. Chochliouros who is also
exercising the role of the Project Coordinator (PC) has substantial and proven
experience in the coordination of both scientific and RTD projects involving
many partners and complex research goals and has been involved in decision-
making positions in at least 45 (European, national and international)
research projects. The main role of the PM is the charge of the overall
administrative management of the Project, being the single point of contact
with the EC. The PM is responsible for the following tasks _(amongst others
tasks as explicitly defined by the EC Grant Agreement and the partner’s
Consortium Agreement)_ : (i) Monitor Project progress on a daily basis, for
continuous rating of achievements, objectives, tasks, WPs with global view of
the overall Project, ensuring a smooth running of activities and collaboration
among all partners, identifying problems and consequences for future research;
(ii) Provide the Project Management Plan which describes the project
management structure, procedures for communication, documentation, payments
and cost statements, procedures to control Project progress and risk
management; (iii) Quality procedures and quality assurance (QA); (iv)
Coordination between the EC and the consortium, communicating all information
in connection with the Project to the EC; (v) Document transmission to the EC,
including all contractual documents and reports related to the administrative,
financial, scientific, and technical progress of the Project; (vi) Coordinate
and manage the Project’s Advisory Board together with the TM; (vii)
Participate in the 5G-PPP programme-level Steering Board (SB) as recommended
by the 5G-PPP program. In summary, the PC is the legal, contractual, financial
and administrative manager of the Project.
#### (ii) Technical and Scientific Manager (TM)
The Technical and Scientific Manager (TM) for SESAME is Dr. Anastasios
Kourtis, Research Director at _NCSRD_ . He has more than 30 years of
experience in managing and successfully executing research and industrial
projects, in particular, at _NCSRD_ , he has been an active player from the
start of the EC framework programs and most recently within FP7, where he is
currently PM of T-NOVA 6 (FP7 ICT) and TM for VITAL 7 (H2020 ICT),
CloudSat 8 (ESA) projects. He has a strong background on wireless and wired
broadband network infrastructures, multimedia applications, Quality of Service
(QoS), network management (NM) and network virtualization. The TM is in charge
of the overall scientific and technical management and progress of the
Project. He is responsible for the correct execution of the technical
activities of the contract, as described in the respective GA. His tasks
comprise in particular ensuring timely release, technical high quality and
accuracy of technical deliverables. The TM is the “promoter” of the technical
achievement of the Project, in association with the PM and the Diss&Comm
Manager (i.e., the WP8 Leader), to ensure appropriate Project visibility. He
works in close cooperation with the WP leaders and will receive the support of
the PM. The TM will also participate in the programme-level Technology Board
(TB) established by the 5G-PPP), towards technical planning of joint
activities and monitoring the progress against the technical KPIs.
#### (iii) Innovation Architect (IA)
SESAME has appointed a dedicated Innovation Architect (IA), who will chair the
_Knowledge and Innovation Management (KIM)_ _Team_ activities in the Project,
together with the Standardisation Manager and the Technical Manager. The role
of the innovation Architect is to study and analyse both market and technical
aspects, and “bridge” the Project research achievements to a successful
implementation and deployment in the real world. The Innovation Architect for
SESAME will be Dr. Nick Johnson, the CTO of _IPA_ . Nick Johnson brings
several years of market and mobile-industry experience and background, and has
a successful track record in productising research and innovation activities and
patents, and has the experience and capabilities to recognise (and foster)
_“how advanced scientific results can be transformed into products and market
opportunities”._ Indeed, the Innovation Architect will assist and advise the
Project in best responding to emerging market opportunities. In turn, by
thoroughly following the evolution of the sector, the new emerging
technologies and products from SESAME, and the changing needs, the Innovation
Architect will help bring all this into the Project, utilising his
position as chair of the KIM activities.
#### (iv) Standardisation Manager (SM)
SESAME has appointed a dedicated Standardization Manager (SM), who will
coordinate the standardisation activities of the Project. SESAME has thus
appointed Mr. Mick Wilson, from _FLE_ , to undertake the corresponding SM
role. The main activity of the SM is to monitor and plan the standardization
strategy, together with the Innovation Architect and the Technical and
Scientific Manager, and to periodically “monitor and assess” the
standardization potential of the scientific results coming from the Project.
Mr. Wilson brings several years of experience in Standardization within
_Fujitsu Laboratories UK Ltd._ , and has both the knowledge and the ability to
quickly “identify” opportunities for standardisation and to match-make between
the proper Standards Developing Organisation (SDO) for SESAME-specific
innovations. The SM will periodically report to the KIM team about the
progress of standardization and open-source development activities within
SESAME, which will then be reported to the EC and further, presented to the
5G-PPP _WG on Standardization_ with the aim of creating joint opportunities
for targeting specific SDO’s which need collective strategy from the 5G-PPP
board, in order to “push” European interests globally.
#### (v) Dissemination & Communication Leader (DissComm Leader)
SESAME has appointed a Dissemination & Communication Leader to coordinate the
promotional activities and dissemination of the Project. This role will be
handled by Dr. Tinku Rasheed, from _CNET_ , who is also the WP8 Leader. The
_Diss&Comm leader_ will be in charge of all the dissemination-related
priorities in SESAME, and he will also pursue the strategy to have optimum
visibility within the 5G-PPP initiative, and beyond, to secure a wide
dissemination and awareness of SESAME. The Diss&Comm leader will work closely
with the WP8 task leaders, and the PB in order to regularly update and inform
about the Diss&Comm activities and will also execute the planned Diss&Comm
strategy in a coherent manner together with the PB members.
#### (vi) General Assembly (GA)
The General Assembly (GA) is the decision-making body of the Project, chaired
by the PM and composed of one representative per partner (each having one
vote), allowing for the participation of each partner in the collective
decisions of the Project. The GA is responsible for the strategic orientation
of the Project, that is: overall direction of all activities, reorientation
whenever necessary, budget revision and measures taken to manage defaulting
partners. To ensure the Project is advancing in time and quality with the work
plan, and is adapting as necessary to external changes, the GA will analyse
performance indicators and all other relevant information provided by the
Project Board and take into account the evolution of the context in which the
Project is carried out, notably scientific, legal, societal, and economic
aspects, etc. The GA meets twice a year, unless intermediate meetings are in
the Project’s interest. In this case, GA meetings are held by decision of the
PM or by the request of at least 50% of its members. In between meetings, the
GA can take decisions by electronic means. The GA tries to reach consensus,
but in case this is not possible the GA makes decisions upon simple majority
with a deciding vote for the PM representative, _in case of a tie_ .
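The voting rule can be stated compactly. The following is a minimal sketch, with a hypothetical function name, assuming one vote per representative (the PB, described below, applies the same rule):

```python
def assembly_decision(votes_for: int, votes_against: int, pm_votes_for: bool) -> bool:
    """Simple majority with the PM representative holding the casting vote
    in case of a tie, as described above."""
    if votes_for != votes_against:
        return votes_for > votes_against
    return pm_votes_for  # tie: the PM's casting vote decides

print(assembly_decision(votes_for=10, votes_against=10, pm_votes_for=True))  # True
```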
#### (vii) Project Board (PB)
The Project Board (PB), composed of a reduced number of members, will
facilitate the management and monitoring of the Project. It is made up of the
WP leaders, and will be chaired by the PM with the assistance of the TM, who
will deputise for the PM. Compared to the GA, the PB is “more focused” on the
operational management and can have more regular meetings, _when necessary_ .
It also prepares the decisions to be taken by the GA, ensures that these
decisions are properly implemented, and surveys ethical issues. The PB is also
in charge of the financial management of the WPs. It is also the
responsibility of the PB, as well as of the WPLs, to identify and assess risks
and provide contingency plans. The PB is composed of the following people,
each of them having both scientific excellence and strong experience in large
collaborative research and development projects; Dr. Ioannis Chochliouros (
_OTE_ , PM, PB Chair, WP1 Leader), Mrs. Maria Belesioti ( _OTE_ , WP2 Leader),
Neil Piercy ( _IPA_ , WP3 Leader), Antonino Albanese ( _ITL_ , WP4 Leader),
Miguel Angel Puente ( _ATOS_ , WP5 Leader), Dr. Eduard Escalona ( _i2CAT_ ,
WP6 Leader), Dr. Anastasios Kourtis ( _NCSRD_ , TM, PB Deputy, WP7 Leader),
Dr. Tinku Rasheed ( _CNET_ , WP8 Leader).
The PB also defines the communication strategy to update partners about the
Project status, the planning and all other issues that are important to them,
to give maximum transparency to all involved partners and to increase the
synergy of the intended cooperation. Interactive management meetings and
technical meetings have an important role in the framework of the
communication strategy. All information -such as minutes of meetings, task
reports and relevant publications- will be communicated to the PM. It is the
strategy of the consortium to guarantee a fast and complete flow of
information. All partners have the means to communicate by using electronic
mail. The PB has bi-weekly meetings (with extra meetings held based on
purpose), either by conference call or during Project’s face-to-face Plenary
Meetings. The PB makes decisions upon simple majority with a deciding vote for
the PM representative, _in case of a tie_ .
#### (viii) Advisory Board (AB)
The SESAME consortium will appoint an Advisory Board in order to monitor the
SESAME- _related_ developments world-wide and ensure visibility of the Project
beyond Europe. The consortium plans to invite a maximum of 35 members to the
AB, which is to be chaired by the PM. The PM and the PB will periodically
organise remote conferences with the AB members to update them on the Project
activities and will gather information through semestral inputs. The AB
members will be invited to annual workshops of SESAME and, further, they will
be invited to participate to the final Project demos. While preparing the
proposal, the SESAME consortium has already received promising inputs (a few
letters of support are already included in the Annex, Section A2, of the DoA).
The AB is composed of the following members: _AT &T _ (Dr. Steven Wright);
_Samsung_ (Dr. Maziar Nekovee); _Fujitsu Japan_ (TBD); _ETRI Korea_ (Dr. Seung
Bang), and; _University of Melbourne, Australia_ (Prof. Tansu Alpcan). More
stakeholders will be incorporated if the consortium desires to further
strengthen its visibility.
#### (ix) Work Package Leaders (WPLs)
Each work package is led by the WP Leader (WPL), who is responsible for making
the day-to-day technical and management decisions that solely affect that WP.
The WP leaders’ responsibilities include: (i) Leading and coordinating the
task activities involved in the WP through the Task Leaders; (ii) Initial
quality checking of the WP work and deliverables; (iii) Handling
resource/skills balance within the WP subject to agreement of the PB to
changes; (iv) Participating in the PB meetings; (v) Highlighting to the PB of
potential threats to the technical success of the Project, and; (vi) Reporting
progress to the PB and raise amendments, issues and red flags to the TM if
needed.
#### (x) Task Leaders (TLs)
Each Task is led by the Task Leader (TL), who is responsible for the
activities performed in his/her task, coordinating the technical work, and
making the day-to-day technical decisions that solely affect his/her Task. TLs
should report (internally) to the WPL at least once a month on the progress of
their task.
### 2.1.2 Management procedures
Technical and operative decisions will be taken as far as possible informally,
and through achieving consensus. The various procedures are designed to ensure
that the Project runs smoothly, by ensuring that the goals are clearly defined
and understood, the WPs represent a sensible division of the work and comprise
the necessary expertise to fulfil the objectives, responsibilities are clearly
assigned, and there are transparent lines of communication among the
participants. A Consortium Agreement provides explicitly the rules and terms
of reference for any issue of legal nature concerning the co-operation among
the parties as well as the Intellectual Property Rights (IPR) of individual
partners and the consortium “ _as a whole_ ”.
For administrative, technical or operative decisions for which no consensus
can be reached, the Project will rely on the Project Board.
For decisions regarding budget redistribution, consortium composition or major
decisions on the workplan the Project Board is the highest decision making
body in the Project. Any project management decision, either technical or
administrative, taken by the Project Board is mandatory for all project
members, and may not be overruled within the Project.
##### 2.1.2.1.1 Reporting to the EC
SESAME follows the procedures presented in the Project guide to ensure on-
time, transparent and high-quality reporting to the EC. Project reporting as
well as internal intermediary reporting follows a planning approach with
several verifications. This method allows delivery of high-quality reports,
providing very accurate insight into the status of the Project. The following
reporting will be done: (i) Periodic reports will be provided to the EC
(M12+2, M24+2, M30+2); (ii) In between the periodic reports there will be
internal semestral reports for the PM to keep track of the project
performance. The periodic report is mandatory in all European projects.
Deliverables and milestones follow a procedure with fixed regular reminders,
peer review by two (-2-) partners not involved in the specific reporting,
checking by the relevant WPL, followed by final validation by the PM and the
PB. This procedure results in on-time, high-quality deliverables and
milestones.
Periodic Progress Reports (PPRs) will be collated for the reporting periods,
prior to each project review, and submitted to the Project Officer by
the PM. These reports detail the work performed by the partners, the
achievements, collaborations, resources spent/planned, and future plans and,
together with the Financial Statements, will serve as the main Project
Management documentation.
_**Decision making:** _ The GA provides a forum for discussing management
issues and major technical issues. Decisions of the GA are binding for the
Project. All reports, such as the periodic reports, any management reports and
the deliverables will be discussed and approved before sending them to the EC.
Procedures for making decisions at a managerial level, to be taken by the GA,
are detailed in the Consortium Agreement. Day-to-day decisions at the
technical level are to be taken by the corresponding WP Leader(s) where
needed, after consultation with the PM. The Project Board meetings, which will
involve the PM and the principal partners, will _-if necessary-_ decide on
major issues by a majority vote with the PM having the casting vote. All
decisions will be taken unanimously, if feasible. If the members cannot come
to an agreement, a voting procedure _-as detailed in the CA-_ will take place.
It is envisaged that full majority would be necessary to achieve a decision.
The consortium has planned to physically meet for face-to-face (F2F) meetings
at least 3 times a year, where most of the technical meetings (including GA
meeting, Joint WP meetings, KIM team meetings, etc.) will be co-located over a
period of 2-3 days, at the premises of the project partners (chosen under the
principle of giving equal opportunity to each partner to host meetings).
##### 2.1.2.1.2 Progress Monitoring and Quality Assurance
In order to guarantee an optimal allocation of resources to the Project
activities, tasks as well as responsibilities and partner involvement have
been well defined. The management procedures for monitoring progress and
responding to changes have been documented in the Quality Assurance Plan
(i.e., the deliverable D1.2, submitted in M2) and executed regularly. This
constitutes a cyclic monitoring process to be implemented in the course of the
Project. The cycle time will be of six calendar months. The PM is ultimately
responsible for the quality control (QC) of the deliverables to the EC,
coordinating closely on technical quality checks with the TM. Consequently,
the PM can request remedial action and additional reports, should any doubt
regarding progress, timescales or quality of work make this necessary. Every
contractual deliverable, prior to its submission to the EC, will be the
subject of a peer review by persons not directly involved in either the
subject matter or the creation of that deliverable. Where necessary, the PM
could request further work from the partners on a deliverable, to ensure that it
complies with the project’s contractual requirements.
The PM will organise regular assessment meetings with all the partners, in
addition to the PB meetings. These meetings will serve as preparation for the
EC review and the necessary periodic reports. The purpose of these meetings
will be to report on the progress so far and to redefine (if necessary) the
Description of the Action (DoA) for the remaining part of the GA. The PB will
regularly handle risk management and contingency plans. The PM and the PB will
jointly be in charge for preparing for the regular project reviews with the
EU. Specific access will be setup for the project reviewers (to the Project
intranet, code repository and the KIM database) to review the Project
progress. The consortium proposes that the EU organise three reviews during
the Project lifecycle.
**SESAME internal information flows:** The strategy will be to keep the
partners fully informed about the Project status, the planning and other
issues that are important with regard to maximising the transparency and
increasing synergy of co-operation and efficiency. The communication between
partners having closely related work will be more frequent and informal (in
ad-hoc meetings, phone conferences and by e-mail) including onsite visits of
personnel involved when appropriate. Informal technical interim reports
covering topics such as technical requirements, architectural issues,
progressing techniques, measurements/simulation practices and so on will be
developed if needed and will be distributed among the Project partners. In
increasing level of formality, WPLs will regularly call for WP phone calls. As
a reference, WP- _level_ phone calls will be conducted on a monthly basis. The
corresponding WPL will be responsible for fixing the agenda, which will
usually include time slots for discussions on upcoming Deliverables. The
Deliverable Editor will lead this part of the discussion, while the WPL will
lead the general technical discussions around the on-going tasks. After the
phone call, the WPL will release the minutes in copy to the TM. In this way,
each WPL will report regularly to the TM and will give an overview of the work
progress and any arising issues. These lines of communication will ensure that
any major deviation from the work plan will be spotted immediately and prompt
appropriate corrective action can be taken.
The formal flow of information will take place during Technical meetings
(face-to-face), which will be conducted approximately three times a year. The
objectives of these meetings will be to discuss technical issues and overall
project progress. Representatives will report to the rest of partners, thus
highlighting any divergence from the proposed plan and schedule. The PM will
be responsible (with the assistance of TM and WPLs) for the preparation of the
agendas, co-ordination of the meetings, and production of the minutes.
On the other hand, a project collaborative infrastructure, accessible through
the web, has been set up and used for the distribution of documents among
partners. This infrastructure will enable all partners to deposit and retrieve
all relevant information regarding the Project. Furthermore, it will include
the capability of collaborative editing of documents, thus improving joint
document management within the project. The Project Coordinator has
established and will maintain this infrastructure. More detailed information
is given in the related SESAME deliverable _D1.1 (“Project Website”)._
**Deliverables handling:** Deliverables will be elaborated as a joint effort
among the partners involved in the related WP. Their completion will be under
the responsibility of the relevant WPL, who will be assisted by the
Deliverable Editor identified in the workplan and will count on the
contributions from the other partners. The Deliverable Editor will establish a
calendar for the elaboration of the document well in advance of the submission
deadline, considering several rounds of contributions and rounds for
discussion and refinement. Once the Deliverable Editor and WPL feel that the
document is completed, it will be forwarded to the TM, who will check that it
is compliant with the quality assurance (QA) directives. If needed, the
document will return to the WP domain for complete alignment with the desired
quality. Once approved by the TM, the document will be forwarded to PB for
formal approval before submission to EC. If comments arise from PB, again the
document will return to WP domain and a new iteration will be established.
When defining the calendar, the following periods need to be considered: (i)
PB validation process starts 10 days in advance of official deliverable
submission deadline; (ii) TM review process starts 20 days in advance of
official deliverable submission deadline. Therefore, 10 days are available for
the TM to review and comment on the document and for the WP to address the
comments, if any, before the document is forwarded to PB. Editorial guidelines (not only
for Deliverables but for all types of documents used in the project),
templates and document naming policies will be defined and will be available
in the document management platform.
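The 20-day and 10-day rules above translate directly into a review calendar for any deliverable. The following is a minimal sketch, with a hypothetical deadline used purely for illustration:

```python
from datetime import date, timedelta

def review_calendar(submission_deadline: date) -> dict:
    """Derive the internal review milestones from the official submission
    deadline, following the 20-day (TM review) and 10-day (PB validation)
    rules described above."""
    return {
        "TM review starts": submission_deadline - timedelta(days=20),
        "PB validation starts": submission_deadline - timedelta(days=10),
        "submission to EC": submission_deadline,
    }

# Hypothetical deadline, for illustration only:
for milestone, day in review_calendar(date(2016, 6, 30)).items():
    print(f"{milestone}: {day:%d %B %Y}")
```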
**Information dissemination outside SESAME domain:** One of the objectives of
SESAME is to raise awareness and have an impact on a wider community.
Consequently, a specific task (T8.1) has been considered in the workplan and a
specific dissemination plan with concrete goals for dissemination that will
oblige each individual partner to undertake certain activities and actions
will be defined, as in the related deliverable _D8.1 (“Plans for
Dissemination, Communication, Standardization and Exploitation, Interaction
with 5G-PPP”_ ). The dissemination processes detail the SESAME
ambitions and means, and describe the overall processes encompassing plans,
execution, review and approval, reporting and impact analysis. These will be
followed as specified in the CA. Decision on the dissemination level of the
project foreground will be made by the PB. Any objection to the planned
dissemination actions shall be made in accordance with the Grant Agreement.
**Technical problems and conflict resolution:** Technical problems will be
discussed on the level of each WP. The WPL will lead discussions and
make decisions, while ensuring that the work plan is respected. The WPL shall
report to the TM technical problems or solutions that have or may have an
influence on other WPs. If a problem cannot be solved on the level of the WP,
the TM is responsible for taking a decision to solve the problem amicably. In
the unlikely event of a conflict not being resolved at TM level, the PM and PB
will be responsible for mediating in the conflict and facilitating an end to
the conflict. They will act in accordance with what is established in the
Consortium Agreement.
**Consortium Agreement (CA):** As mandated by EU project contractual
obligations, all partners of the consortium needed to sign a Consortium
Agreement before the contract with the European Commission was executed. The
role of the Project Management (and especially of the PM together with the PB) is
to modify and/or update the pre-established CA, based on the possibly changing
change of technical boundary conditions, etc.). The purpose of the actual CA
is to specify the internal organization of the work between the partners, to
organise the management of the Project, to define rights and obligations of
the partners, including -but not limited to- their respective liability and
indemnification as to the work performed under the Project, and more generally
to define and rule the legal structure and functioning of the consortium.
Moreover, the CA also addresses issues such as appropriate management of
knowledge in the sense of protection of know-how and more generally of any
knowledge and relevant intellectual property rights in any way resulting from
the Project. The CA also has the purpose to integrate or
“supplement” some of the provisions of the Grant Agreement, for example those
concerning Access Rights; as to the ruling of certain matters, the CA may set
out specific rights and obligations of the partners, which may integrate or
supplement, but which will under no circumstance be in conflict with those of
the GA.
# 3 Knowledge Management and Protection Strategy
## 3.1 Management of Knowledge
Information flows within the Project both vertically and horizontally. The
“vertical flow” of information comprises principally the administrative issues
(e.g., financial progress reports, consolidated reports, meeting minutes and
cost claims/advance payments), whereas the scientific and technical
information flow is generally more appropriate to a less formal and horizontal
process. The core of the information exchange is the SESAME web portal that is
visible to SESAME partners (also known as the _Collaborative Working
Environment_ ). Any collaborating partners will acquire free access on a
confidential basis to all items displayed in the KM database, unless
additional ad-hoc restrictions have been negotiated, in advance. This platform
also includes basic workflow tools to automate and simplify the working
procedures. For the Project partners, the website provides full access to all
achievements in detail, whereas the annual report, publications, and sequence
search sections will also be open to the public. The project summary, general
information and public reports will be made available to everybody on
the Internet, also as a means to effectively communicate and coordinate, _if
possible_ , with parties outside the consortium (such as other related 5G-PPP
projects or the European Commission (EC)). The EC will receive a special
access code to access the necessary reports as well as to access prototypes on
the review process, _if and/or where necessary_ . The database and periodic
reports will greatly help in assembling the Annual and Interim reports for the
Commission.
More detailed information about the exact repositories of the Project,
corresponding to a public website accessed by any third party and to a private
website accessed by authorised physical and/or legal persons is given in the
already submitted deliverable _D1.1 (“Project Website”)_ .
SESAME will continuously host a comprehensive public website
( _http://www.sesame-h2020-5g-ppp.eu/_ ) that will contain all relevant
information about the Project.
A public section allows sharing information and documents among all partners,
also including any other “third party” (i.e., physical and/or legal persons)
that may express interest to access such data and receive information about
the scope and the achievements of the SESAME- _based_ effort. The public
section presents the specific aims, the vision and objectives as well as the
goals, the plan, the development(s) and the intended achievements of the
Project. It is also used to publish the public deliverables and the papers (as
well as other works and/or relevant presentations) that are to be presented or
accepted in international conferences, workshops, meetings and other similar
activities, towards supporting a proper dissemination and exploitation policy
of the Project.
Furthermore, it includes references to the related 5G-PPP context, as promoted
by the European Commission, and potentially affecting progress of the SESAME
effort. In addition, the public part includes an indicative description of the
profiles of the involved SESAME partners as well as a part for links to other
informative areas. There is also an explicit link to a private part of the
website, accessible only by the partners (or the “ _beneficiaries_ ”) of the
Project, by using specific credentials
( _http://programsection.oteresearch.gr_ ). _**Figure 2**_ provides an
indicative snapshot of the existing part of the public website.
The private part of the website serves as the “project management and
collaboration platform” bearing (among others) advanced document management
features (e.g. document versioning/history, document check-in/out/locking,
etc.) and a powerful search functionality to ensure efficient work and
collaboration among partners.
The SESAME consortium is always proactively taking supplementary measures to
raise awareness and encourage the implementation of the technical, business,
social and all other concepts developed, through the development of the public
website.
**Figure 2: SESAME Public Section - _Welcome Screen_**
## 3.2 Ethics and Management of IPRs
The SESAME consortium is to respect the framework that is structured by the
joint provisions of:
* The _European Directive 95/46/EC_ (“ _Protection of personal data”_ ) 9 , and;
* _Opinion 23/05/2000 of the European Group on Ethics in Science and New Technologies concerning “Citizens Rights and New Technologies: A European Challenge”_ 10 .
The SESAME partners will also abide by professional ethical practices and
comply with the _Charter of Fundamental Rights of the European Union_ 11 .
The SESAME consortium recognises the importance of IPRs under a basic
philosophy as discussed in the following sections: The general architecture
and scientific results defined during the course of the Project are public
domain research, intended to be used in international fora to advance
technological development and scientific knowledge. Basic methods,
architectures and functionalities should be available for scrutiny, peerreview
and adaptation. Only this way can industry and standardisation groups accept
the results of SESAME and this is a procedure already applied in many similar
cases of research projects, until today. IPR will be managed in line with a
principle of equality of all the partners towards the foreground knowledge and
in full compliance with the general Commission policies regarding ownership,
exploitation rights and confidentiality.
Valuable IPRs that might come up during the course of the Project from the
work in the areas of new technological innovations with direct product use,
shall be protected by the consortium and/or single partner entity within the
Project. The IPRs shall be shared with reasonable rules, and the _H2020_
contract rules shall be strictly adhered to.
For handling patents, the consortium will also apply proven methods used in
previous EC projects. The partners will inform the consortium of technologies,
algorithms, etc. that they offer for use in the WPs that they have patented,
are in the process of patenting, or consider patenting. Similarly, if
patentable methods and techniques are generated within Project- _based_
activities, the patenting activities will aim to protect the rights of all
partners participating in these specific activities. Lists of patents related
to the Project, whether adopted, applied or generated will be maintained for
reference, and are to be included in reports submitted to the Commission. The
Consortium Agreement (CA) provides rules for handling confidentiality and IPR
to the benefit of the Consortium and its partners. All the Project
documentation will be stored electronically and as paper copies. Classified
Documents will be handled according to proper rules with regard to
classification (as described above), numbering and locked storing and
distribution limitations.
In general, knowledge, innovations, concepts and solutions that are not going
to be protected by patent applications by the participants will be made public
after agreement between the partners, to “allow others to benefit” from these
results and exploit them. However, where results require patents to show the
impact of SESAME, we will perform freedom-to-operate searches to determine that
this does not infringe on patents belonging to others.
The policy that will govern IPR management in the scope of SESAME is driven by
the following principles, which will be detailed in the Consortium Agreement:
(i) Policy for ownership and protection of knowledge; (ii) dissemination and
use policy; (iii) access rights for use of knowledge; (iv) confidentiality;
(v) ownership of results / joint ownership of results / difficult cases (i.e.,
pre-existing know-how so closely linked with a result that the two are
difficult to distinguish); (vi) legal protection of results (patent rights);
(vii) commercial exploitation of results and any necessary access rights;
(viii) commercial obligation; (ix) relevant patents, know-how and information
sublicense; (x) pre-existing know-how excluded from the contract.
Nevertheless, specific IPR cases that need a concrete solution beyond the
principles fixed above may also arise. In such conflict situations, the
General Assembly will be the body responsible for arbitrating a solution.
Furthermore, the IPR strategy and its updates will be monitored by the
Knowledge and Innovation Management (KIM) team; during the periodic meetings,
any IPR updates will be presented and approved upon consensus of the KIM team.
# 4 Open Access Policy
Usually, academic research seems to be focused on questions of essential
scientific interest, the so-called _basic research_ . This is generally
intended merely to disclose new scientific and technical knowledge through
publications. On the other hand, the _applied research_ performed by industry
is normally aimed at commercialising the resulting innovation and is therefore
intended to increase company value. To this end, research results are
protected through patents and trade secrets 12 . According to this
distinction, publication is the most suitable means of knowledge dissemination
for research organizations/universities (ROs), as it permits the fastest and
most open diffusion of research results. On the contrary, patents offer
industry the strongest protection to commercialise their innovation and
recover the costs of their research investments. However, this scenario has
changed critically, and expectations of _“how ROs create and manage their
knowledge”_ are changing rapidly, as this is increasingly considered by
academic personnel as a source of income. This is also due to the fact that
universities are encouraged to collaborate with private companies on research
projects in different areas, which constitutes an expansion of their research
interests into other sectors, such as biotechnology, nanotechnology, ICT and
so forth. As a consequence, the boundary between scientific and applied
research has blurred and, while the industry dissemination approach did not go
through any significant transformation, the ROs' strategy moved away from
traditional “publishing”. ROs have in fact started focusing on the opportunity
to patent 13 research results, and to extract as much value as possible from
intellectual property (IP).
The two main means to bring technical and scientific knowledge to the public
are patent applications 14 and journal publications 15 , 16 . With the
advent of the Internet, two alternative means are also available for
scientists and research companies either to maximise their IP value or to
disseminate scientific and technical knowledge: defensive publications 17
and the **_Open Access_ model ** 18 . The public Internet is an emerging
functional medium for globally distributing knowledge, and it is also able to
significantly modify the nature of scientific publishing as well as the
existing system of quality assurance.
Enabling societal actors to interact in the research cycle improves the
quality, relevance, acceptability and sustainability of innovation outcomes by
integrating society’s expectations, needs, interests and values. Open access
is a key feature of Member States’ policies for responsible research and
innovation by making the results of research available to all and by
facilitating societal engagement. Businesses can also benefit from wider
access to scientific research results. Small and medium-sized enterprises in
particular can improve their capacity to innovate. Policies on access to
scientific information can also facilitate access to scientific information
for private companies. Open access to scientific research data 19 enhances
data quality, reduces the need for duplication of research, speeds up
scientific progress and helps to combat scientific fraud 20 .
In the context of the SESAME Project, expected publications are to be
published according to the _**Open Access (OA)** _ principles 21 . The
consortium will make use of both “green” (or self-archiving) and “gold” open
access options to ensure Open Access to most, if not all, publications that
are to be produced during the lifetime of the Project.
Almost all the top publications in the fields related to the Project are
expected to be published via IEEE, Springer, Elsevier or ACM, which provide
authors with both “gold” (either through hybrid publication or through open
access journals) and “green” open access options.
Major achievements of the Project will be considered for publication in the
“gold” open access modality in order to increase the target audience. This
implies publication in Open Access Journals or in Hybrid Journals with an OA
agreement. The Article Processing Charges (APCs) that apply will be covered by
the Project budget. For self-archiving (“green” open access), peer-reviewed
scientific research articles for dissemination will be published in scholarly
journals that allow self-archiving options compatible with “green” open
access, where the published article or the final peer-reviewed manuscript is
archived (deposited) by the author (or a representative in the case of
multiple authors) in an online repository before, alongside or after its
publication. SESAME will give preference to those journals that allow
pre-print self-archiving, in order to maximise the visibility of Project
outcomes.
In fact, the SESAME consortium follows the EU guidelines mandating open access
to all peer-reviewed scientific publications. In order to comply effectively
and to guide the partners towards this goal, an _**Open Access publication
policy and strategy**_ will be put in place in the Project's governing
documentation and will be enforced and monitored by the Quality Manager (i.e.,
the Project Coordinator).
According to this kind of policy, all scientific journal articles resulting
from the Project will be made “open access” (with any exception to be approved
by the Project Coordinator and validated by the EU Project Officer (PO)).
Further, other scientific publications appearing in conference proceedings and
other peer-reviewed books, monographs or other “grey literature” will be made
available to the general public through open access archives with very
flexible licensing (e.g., Creative Commons licenses) for the scientific
community; open access archives such as arXiv (www.arxiv.org), ResearchGate
(www.researchgate.net) or CiteSeerX (citeseerx.ist.psu.edu) can be used for
this purpose 22 .
In an effort to maximise the expected impact of the scientific results, the
associated data and the software (SW) code produced in the Project, the SESAME
consortium will create a dedicated code/data repository in a collaborative
open source code management tool (e.g., GitHub 23 ) for SESAME to release all
the mature software and other data associated with the scientific
publications. This will allow the broader community to access the open source
software and the related data and/or tools used to derive the scientific
results presented in the articles and magazines.

19

Economic and Social Research Council (2010). _ESRC research data policy_.
Available at: www.esrc.ac.uk/aboutesrc/information/data-policy.aspx.

20

High Level Expert Group on Scientific Data (2010, October). Final Report:
“Riding the wave: How Europe can gain from the rising tide of scientific
data”. Available at:
http://cordis.europa.eu/fp7/ict/e-infrastructure/docs/hlg-sdi-report.pdf.

21

See the further detailed discussion about “Open Access” below, in the
continuation of the present section.

22

Publication outputs will be placed either on arXiv or an analogous archive (in
accordance with the Registry of Open Access Repositories (ROAR)), and links
from the project website to these Open Access publications will be published
in a timely manner, in order to maximise the impact and visibility of SESAME
results and activities.

23

**GitHub** is a web-based Git repository hosting service. It offers all of the
distributed revision control and source code management (SCM) functionality of
Git as well as adding its own features. Unlike Git, which is strictly a
command-line tool, GitHub provides a web-based graphical interface as well as
desktop and mobile integration. It also provides access control and several
collaboration features such as bug tracking, feature requests, task management
and wikis for every project. (See, for example: Williams, A. (2012, July).
_GitHub pours Energies into Enterprise - Raises $100 Million From Power VC
Andreessen Horowitz_, TechCrunch. Available at:
http://techcrunch.com/2012/07/09/github-pours-energies-into-enterprise-raises-100-million-from-power-vc-andreesenhorowitz/.)
GitHub offers both plans for private repositories and free accounts, which are
usually used to host open-source software projects
(https://github.com/about/press). In recent years, GitHub has become the
largest code host in the world, with more than 5M developers collaborating
across 10M repositories. Numerous popular open source projects (such as Ruby
on Rails, Homebrew, Bootstrap, Django or jQuery) have chosen GitHub as their
host and have migrated their code base to it. GitHub offers tremendous
research potential. As of 2015, GitHub reports having over 11 million users
and over 29.4 million repositories (https://github.com/about/press), thus
making it the largest host of source code in the world. [An interesting
approach to the latter point is discussed in: Gousios, G., Vasilescu, B.,
Serebrenik, A. and Zaidman, A. (2014). _Lean GHTorrent: GitHub Data on
Demand_, in MSR-14 Proceedings (May 31 - June 01, 2014), Hyderabad, India. ACM
Publications.] For a wider informative scope about GitHub, also see the
discussion presented in: https://en.wikipedia.org/wiki/GitHub.
For a variety of reasons, this sort of free and unrestricted online
availability within the OA framework can be economically feasible; it offers
any potential reader astonishing power to find and make use of relevant
literature, while it provides authors and their works massive new visibility,
readership and impact 24 .
SESAME will also produce specific outcomes in terms of the implementation of
individual software components, which will be used in scientific publications
together with the data collected during experiments done within the complete
Project lifetime. To make the software and data used in publications available
to the related (academic, business or other) community, such software and data
will be made open source or subject to very flexible licensing, and will be
made available through different channels. This potentially includes the
creation of repositories in open source code management tools, such as GitHub
or an equivalent one, in which to store the developed software that is in a
“mature” stage, updated from time to time as new stable releases of the code
become available. Furthermore, since the SESAME Consortium aims to maximise
the impact inside the related SDN and NFV communities, the software will also
be made available inside open source initiatives (for example: OpenDaylight,
OPNFV, etc.) whenever possible and according to the provisions of both the GA
and the CA documents. With this intended policy, the SESAME Consortium will
disseminate Project-based achievements to an audience as wide as possible, and
will thus allow other parties to replicate the results presented in scientific
publications.
Open Access (OA) refers to the practice of granting free Internet access to
research articles. This model is deemed to be an efficient system for broad
dissemination of and access to research data 25 and publications, which can
indeed accelerate scientific progress. Although this model foresees that the
knowledge dissemination is on free-of-cost basis, this does not mean that the
publication process is entirely free of costs. The underlying philosophy, in
fact, focuses on the shift of costs from the reader to the author/publisher,
in order to readily access and disseminate publications.
**Open Access (OA)** can be defined 26 as the practice of providing on-line
access to scientific information that is “free of charge” to the end-user and
that is re-usable. The term “scientific” refers to all academic disciplines;
in the context of research and innovation activities, “scientific information”
can refer to: _(a)_ Peer- _reviewed_ scientific research articles (published
in scholarly journals) 27 , or; _(b)_ research data (i.e.: data underlying
publications, curated data and/or raw data).
24

Today, there is a strong and worldwide motivation for professional
associations, universities, libraries, foundations and others to
consider/assess open access as a suitable means of further
advancing/promoting their specific missions. However, achieving open access
will require new cost recovery models and financing mechanisms, but the
significantly lower overall cost of dissemination is a critical reason to be
confident that the goal is attainable.

25

Organisation for Economic Co-operation and Development (OECD) (2007). _OECD
principles and guidelines for access to research data from public funding_.
Paris, France: OECD. Available at: www.oecd.org/dataoecd/9/61/38500813.pdf.

26

European Commission (2015, October 30). _Guidelines on Open Access to
Scientific Publications and Research Data in Horizon 2020. Version 2.0._
Brussels, Belgium: European Commission, Directorate-General for Research &
Innovation. Available at:
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilotguide_en.pdf.

27

Under the “open access” conceptual framework, the literature that should be
freely accessible online is that which scholars offer to the world without
expectation of payment but, mainly, with the pure aim of promoting scientific
research and innovation. Mainly, this category includes not only their peer-
reviewed journal articles, but it also incorporates any un-reviewed preprints
that they might intend to “put online for comments” or to “alert” colleagues
to important research findings. There are several degrees and kinds of wider
and easier access to this literature. By “open access” to this literature, it
is meant its free availability on the public Internet, permitting any third
users to read, download, copy, distribute, print, search, or link to the full
texts of these articles, crawl them for indexing, pass them as data to
software, or use them for any other lawful purpose, without financial, legal,
or technical barriers other than those inseparable from gaining access to the
Internet itself. The only restriction on potential intended reproduction and
distribution, and the only role for copyright in this domain, should be to
give authors control over the integrity of their work and the right to be
properly acknowledged and cited.
Establishing open access as a valuable practice ideally requires the active
commitment of each individual producer of scientific knowledge. Open access
contributions include original scientific research
results, raw data and metadata, source materials, digital representations of
pictorial and graphical materials and scholarly multimedia material.
Open access contributions have to satisfy/fulfil two conditions 28 : (i) The
author(s) and right holder(s) of such contributions grant(s) to all users a
free, irrevocable, worldwide, right of access to, and a license to copy, use,
distribute, transmit and display the work publicly and to make and distribute
derivative works, in any digital medium for any responsible purpose, subject
to proper attribution of authorship (community standards will continue to
provide the mechanism for enforcement of proper attribution and responsible
use of the published work, as they do now), as well as the right to make small
numbers of printed copies for their personal use; (ii) A complete version of
the work and all supplemental materials, including a copy of the permission as
stated above, in an appropriate standard electronic format is deposited (and
thus published) in at least one online repository using suitable technical
standards (such as the Open Archive definitions) that is supported and
maintained by an academic institution, scholarly society, government agency,
or other well established organization that seeks to enable open access,
unrestricted distribution, interoperability, and long-term archiving.
The philosophy underlying the open access model is to introduce barrier-free,
cost-free access to scientific literature for readers 29 . In the past,
restrictions to free access of scientific publications were accepted, as the
subscription model was the only practically possible option, as printed
journals were the only means of disseminating validated scientific results 30
. While open access advocates free _dissemination_ of scientific knowledge,
this does not necessarily imply that no costs are involved in the publishing
process. Open access does not indulge in the illusion of an entirely cost-free
publication process. Communication of scientific results has always been paid
out of research funds, one way or another, either directly or indirectly, via
institutional overhead charges. That does not change in an open access model.
The OA model focuses on taking the burden of costs off the subscriber’s
shoulders, often by shifting the costs from the reader to the author, so that
payment for the process of peer review and publishing is made on behalf of the
author, rather than the reader.
Conformant to the OA-based approach, the following options can be
distinguished: _open access to scientific publications_, discussed in Section
4.1.1 below, and _open access to research data_, discussed in Section 4.1.2
below.
### 4.1.1 Open Access to Scientific Publications
**_Open access to scientific publications_ ** refers to “free-of-charge”
online access for any potential user. Legally binding definitions of “open
access” and “access” in this context do not practically exist, but
authoritative definitions of open access can be found in key political
declarations on this subject, for instance the _Budapest Declaration of 2002_
(http://www.budapestopenaccessinitiative.org/read) or the _Berlin Declaration
31 of 2003_ (http://openaccess.mpg.de/67605/berlin_declaration_engl.pdf).
28

According to the detailed context proposed by the _Berlin Declaration on Open
Access to Knowledge in the Sciences and Humanities_.

29

P. Van Eecke, J. Kelly, P. Bolger and M. Truyens (2009). Monitoring and
analysis of technology transfer and intellectual property regimes and their
use. Results of a study carried out on behalf of the European Commission (DG
Research). Mason Hayes+Curran, Brussels-Dublin, August 2009.

30

M.J. Velterop (2004). Open Access: Science Publishing as Science Publishing
Should Be. _Serials Review, 30_, pp. 308-309.

31

Following the spirit of the _Declaration of the Budapest Open Access
Initiative_, the _Berlin Declaration on Open Access to Knowledge in the
Sciences and Humanities_ was made in order to promote the Internet as a
functional instrument for a global scientific knowledge base and human
reflection, and to specify measures which research policy makers, research
institutions, funding agencies, libraries, archives and museums need to
consider for this purpose. According to the proposed framework, new
possibilities of knowledge dissemination, not only through the classical form
but also and increasingly through the open access paradigm via the Internet,
had to be supported. “Open access” has been defined as a comprehensive source
of human knowledge and cultural heritage that has been approved by the
scientific community. In order to realize the vision of a global and
accessible representation of knowledge, the future Web needs to be
sustainable, interactive and transparent. Content and software tools need to
be openly accessible and compatible.
These definitions describe “access” in the context of open access as including
not only basic elements such as “the right to read, download and print”, but
also “the right to copy, distribute, search, link, crawl, and mine”.
There are two main routes towards open access to publications:
* **Self-archiving / “green” open access** means that the published article or the final peer- _reviewed_ manuscript is archived (deposited) by the author -or an authorized representative in case of multiple authors- in an online repository before, alongside or after its publication. Some publishers request that open access be granted only after an “embargo” period has elapsed 32 .
Scholars and researchers need the tools and the assistance to deposit their
refereed journal articles in open electronic archives, a practice usually
called _“self-archiving”_. When these archives conform to standards
created by the Open Archives Initiative 33 , then search engines and other
tools can “treat the separate archives as one”. Users then need not know which
archives exist or where they are located in order to find and make use of
their contents.
* **Open access publishing / “gold” open access** means that an article is immediately provided in open access mode as published. In this specific model, the payment of publication costs is shifted away from readers paying via subscriptions 34 . The business model most often encountered is based on one-off payments by authors. These costs (often referred to as Article Processing Charges - APCs) can usually be borne by the university or research institute to which the researcher is affiliated, or by the funding agency supporting the research. In other cases, the costs of open access publishing are covered by subsidies or other funding models.
Scholars and researchers need the means to initiate a new generation of
journals committed to open access and, _consequently_ , to help existing
journals that elect _to make the transition to open access_ . Since journal
articles should be disseminated as widely as possible, such new journals will
no longer invoke copyright to restrict access to and use of the material they
publish. Instead, they will use copyright and other tools to ensure permanent
open access to all the articles they publish. Because price is a barrier to
access, these new journals will not charge subscription or access fees, and
will turn to other methods for covering their expenses. There are many
alternative sources of funds for this purpose, including the foundations and
governments that fund research, the universities and laboratories that employ
researchers, endowments set up by discipline or institution, friends of the
cause of open access, profits from the sale of add-ons to the basic texts,
funds freed up by the demise or cancellation of journals charging traditional
subscription or access fees, or even contributions from the researchers
themselves. There is no need to favor one of these solutions over the others
for all disciplines or nations, and no need to stop looking for other
alternatives.
_**Hybrid model** _ – While several existing scientific publishers have
converted to the open access publishing model, such conversion may not be
viable for every publisher. A third _("hybrid")_ model of open access
publishing has therefore arisen. In the hybrid model, publishers offer authors
the choice of paying the article processing fee and having their article made
freely available online, or they can elect not to pay and then only journal
subscribers will have access to their article. The hybrid model offers
publishers of traditional subscription-based journals a way to experiment with
open access and allow the pace of change to be dictated by the authors
themselves 35 .
32

_**Green OA**_ foresees that the authors deposit (self-archive) the final
peer-reviewed manuscript in a repository (open archive) to be made available
in open access mode, usually after an embargo period allowing publishers to
recoup the publishing costs (e.g. via subscriptions or pay per download).

33

For more relevant information see, for example: http://www.openarchives.org.

34

In this other model, named _**Gold OA**_, the costs of publishing are not
borne by the readers, so that research articles are immediately available free
of charge upon publication.

35

http://www.powershow.com/view1/1a76ee-ZDc1Z/Five_years_on_powerpoint_ppt_presentation.
Public institutions are also very interested in the OA system. The European
Commission is strongly committed to optimising the impact of publicly-funded
scientific research, both at European level (FP7, Horizon 2020) and at Member
State level 37 . Indeed, the European Commission acts as the coordinator
between Member States and within the European Research Area (ERA) in order for
the results of publicly-funded research to be disseminated more broadly and
faster, to the benefit of researchers, innovative industry and citizens. OA
can also boost European research and, in particular, offers SMEs access to the
latest research for utilisation. The central underlying reasons for an OA
system are that:
* The results of publicly-funded research should be publicly available;
* OA enables research findings to be shared with the wider public, helping to create a knowledge society across Europe composed of better-informed citizens;
* OA enhances knowledge transfer to sectors that can directly use that knowledge to produce better goods and services. Many constituencies outside the research community itself can make use of research results. These include small and medium-sized companies that do not have access to the research through company libraries, organizations of professional (legal practices, family doctor practices, etc.), the education sector and so forth.
**_Misconceptions about open access to scientific publications:_ ** In the
context of research funding, open access requirements in no way imply an
obligation to publish results. The decision on whether or not to publish lies
entirely with the grantees. Open access becomes an issue only _if_ publication
is elected as a means of dissemination. Moreover, OA does not interfere with
the decision to exploit research results commercially, e.g. through patenting.
Indeed, the decision on whether to publish open access must come after the
more general decision on whether to publish directly or to first seek
protection. More information on this issue is available in the European IPR
Helpdesk fact sheet _“Publishing vs. patenting”_ 39 . This is also illustrated
in _**Figure 3**_, below, showing open access to scientific publications and
research data in the wider context of dissemination and exploitation.
**Figure 3: Open access to scientific publication and research data in the
wider context of dissemination and exploitation**
### 4.1.2 Open Access to Research Data
**Open access to research data** refers to the right to access and re-use
digital research data under the terms and conditions set out in the Grant
Agreement.
The term “research data” refers to information, in particular facts or
numbers, collected to be examined and considered and as a basis for reasoning,
discussion, or calculation. In a research context, possible examples of data
may comprise statistics, results of experiments, measurements, observations
resulting from fieldwork, survey results, interview recordings and images. The
focus is primarily upon research data that is available in digital form.
Openly accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated free of charge for the user.
# 5 Data Management Plan
## 5.1 European Community Strategic Framework for DMP
The European Commission recognised early on that research data is as important
as publications 26 . It therefore announced in 2012 that it would experiment
with open access to research data 27 . Broader and more rapid access to
scientific papers and data will make it easier for researchers and businesses
to build on the findings of publicly-funded research 28 .
As a first step, the Commission has decided to make open access to scientific
publications a general principle of _Horizon 2020_ , the EU's Research &
Innovation funding programme for 2014-2020 29 . In particular, as of the
year 2014, all articles produced with funding from _Horizon 2020_ had to be
accessible according to the following options:
Articles either had to be made immediately accessible online by the publisher
(“gold” open access), in which case up-front publication costs can be eligible
for reimbursement by the European Commission, or researchers had to make their
articles available through an open access repository no later than six months
(12 months for articles in the fields of social sciences and humanities) after
publication (“green” open access). The Commission has also recommended that
Member States take a similar approach to the results of research funded under
their own domestic programmes 30 . This will boost Europe's innovation
capacity and give citizens quicker access to the benefits of scientific
discoveries. Intelligent processing of data is also essential for addressing
societal challenges.
The _Pilot on Open Research Data in Horizon 2020_ 31 does for scientific
information what the _Open Data Strategy_ 32 does for public sector
information: It aims to improve and maximise access to and re-use of research
data generated by projects for the benefit of society and the economy.
The _G8 definition of_ _Open Data_ 33 states that _data should be easily
discoverable, accessible, assessable, intelligible, useable, and wherever
possible interoperable to specific quality standards, while at the same time
respecting concerns in relation to privacy, safety, security and commercial
interests_ 34 .
The SESAME project intends to participate in the _H2020 Open Research Data
Pilot 53_, which complements well the Project's views on Open Access and open
source 54 , and on providing a transparent view of the scientific process,
particularly relevant for science driven by public funds.
This Pilot is an opportunity to see how different disciplines share data in
practice and to understand remaining obstacles, and it is also part of the
Commission’s commitment to openness in _Horizon 2020_ 55 .
Projects participating in the _Pilot on Open Research Data in Horizon 2020_
are required to deposit the research data described below 56 :
* The data, including associated metadata 57 , needed to validate the results presented in scientific publications as soon as possible;
* Other data 58 , including associated metadata, as specified and within the deadlines laid down in a _**data management plan (DMP) 59 ** _ .
Projects should deposit preferably in a research data repository and take
measures to enable third parties to access, mine, exploit, reproduce and
disseminate — free of charge for any user 60 .
The **main requirements of the _Open Data Pilot_ ** are listed as follows:
* Develop (and update) a Data Management Plan;
* Deposit in a research data repository;
* Make it possible for third parties to access, mine, exploit, reproduce and disseminate data – free of charge for any user;
* Provide information on the tools and instruments needed to validate the results (or provide the tools).
To participate in this initiative, the present _Deliverable D8.2_, consisting
of a first draft of the project's Data Management Plan, has been produced in
month 6 (M6) of the Project by WP8, and will be further evolved as the Project
goes on.
53

More information about the related Commission’s initiative can be found at:
http://europa.eu/rapid/press-release_IP-131257_en.htm.
54
Generally, open source refers to a computer program in which the source code
is available to the general public for use and/or modification from its
original design. Open-source code is meant to be a collaborative effort, where
programmers improve upon the source code and share the changes within the
community. Typically this is not the case, and code is merely released to the
public under some license. Others can then download, modify, and publish their
version (fork) back to the community. Today you find more projects with forked
versions than unified projects worked by large teams. For further reading see,
for example: Lakhani, K.R., von Hippel, E. (June 2003). How Open Source
Software Works: Free User to User Assistance. _Research Policy 32(6),_
pp. 923-943. [doi:10.1016/S0048-7333(02)00095-1], as well as other informative
references in https://en.wikipedia.org/wiki/Open_source.
55
The _Pilot on Open Research Data_ in _Horizon 2020_ will give the Commission a
better understanding of what supporting infrastructure is needed and of the
impact of limiting factors such as security, privacy or data protection or
other reasons for projects opting out of sharing. It will also contribute
insights in how best to create incentives for researchers to manage and share
their research data. The Pilot will be monitored throughout _Horizon 2020_
with a view to developing future Commission policy and EU research funding
programs.
56

https://www.openaire.eu/h2020-oa-data-pilot.
57
“Associated metadata” refers to the metadata describing the research data
deposited.
58
For instance, curated data not directly attributable to a publication, or raw
data.
59
A DMP may be also referred to as a “Data Sharing Plan”.
60
For example, the _**OpenAIRE project**_ provides the _**Zenodo repository**_
(http://www.zenodo.org), which could be used for depositing data. Also see the
OpenAIRE FAQ (http://www.zenodo.org/faq) for general information on Open
Access and European Commission funded research.
## 5.2 DMP in the Conceptual Framework of the _H2020_
All project proposals submitted to “_Research and Innovation actions_” as well
as “_Innovation actions_” had to include a section on research data
management, which is evaluated under the criterion “Impact”. Where relevant,
applicants had to provide a short, general outline of their policy for data
management, addressing the following four issues:
1. _What types of data will the project generate/collect?_
2. _What standards will be used?_
3. _How will this data be exploited and/or shared/made accessible for verification and re-use? (If data cannot be made available, this has to be explained why)._
4. _How will this data be curated and preserved?_
The described policy should reflect the current state of consortium agreements
regarding data management and be consistent with those referring to
exploitation and protection of results. The data management section can be
considered also as a checklist for the future and as a reference for the
resource and budget allocations related to data management.
Data Management Plans (DMPs) are introduced in the Horizon 2020 Work Programs
according to the following concept: “ _A further new element in Horizon 2020
is the use of Data Management Plans (DMPs) detailing what data the project
will generate, whether and how it will be exploited or made accessible for
verification and reuse, and how it will be curated and preserved. The use of a
Data Management Plan is required for projects participating in the Open
Research Data Pilot. Other projects are invited to submit a Data Management
Plan if relevant for their planned research”._
Projects taking part in the _Pilot on Open Research Data_ are required to
provide a first version of the DMP as an early deliverable within the first
six months of the respective project. Projects participating in the above
Pilot as well as projects who submit a DMP on a voluntary basis because it is
relevant to their research should ensure that this deliverable is mentioned in
the proposal. Since DMPs are expected to mature during the corresponding
project, more developed versions of the plan can be included as additional
deliverables at later stages. The purpose of the DMP is to support the data
management life cycle for all data that will be collected, processed or
generated by the project.
References to research data management are included in Article 29.3 of the
_Model Grant Agreement_ (an article that applies to all projects participating
in the _Pilot on Open Research Data in Horizon 2020_).
A _**Data Management and Sharing Plan** _ 61 is usually submitted where a
project -or a proposal- involves the generation of datasets that have clear
scope for wider research use and hold significant long-term value 62 . In
short, plans are required in situations where the data outputs “form a
resource” from which researchers and other users would be able to generate
additional benefits. This would include all projects where the primary goal is
to create a database resource. It would also include other research generating
significant datasets that could be shared for added value - for example, those
where the data has clear utility for research questions beyond those that the
data generators are seeking to address. In particular, it would cover datasets
that might form "community resources" as defined by the _ Fort Lauderdale
Principles 6 3 _ and th e _Toronto statement 6 _ _ 4 _ . As noted in the
_Toronto statement_ , community resources will typically have the following
attributes: (i) Largescale (requiring significant resources over time); (ii)
broad utility; (iii) creating reference datasets, and; (iv)associated with
community buy-in. For studies generating small-scale and limited data outputs,
a data management and sharing plan will not normally be required. Generally,
the expected approach for projects of this type would be to make data
available to other researchers on publication, and where possible to deposit
data in appropriate data repositories in a timely manner. While a formal data
management and sharing plan need not be submitted in such cases, applicants
may find the guidance below helpful in planning their approaches for managing
their data.
61

See, for example: “_Guidance for researchers: Developing a data management and
sharing plan_”. Available at:
http://www.wellcome.ac.uk/About-us/Policy/Spotlight-issues/Data-sharing/Guidance-for-researchers/index.htm.

62

Also see: Framework for creating a data management plan, ICPSR, University of
Michigan, US. Available at:
http://www.icpsr.umich.edu/icpsrweb/content/datamanagement/dmp/framework.htm.

63

For more related information, see:
http://www.wellcome.ac.uk/About-us/Publications/Reports/Biomedicalscience/WTD003208.htm.

64

Toronto International Data Release Workshop Authors (2009). _Nature, 461_,
168-170 (September 10, 2009) [doi:10.1038/461168a]. Available at:
http://www.nature.com/nature/journal/v461/n7261/full/461168a.html.
## 5.3 Principles and Guidelines for Developing a DMP
A DMP, as a document outlining how research data will be handled during a
research project and after it is completed, is very important in all respects
for projects participating in the _Horizon 2020 Open Research Data Pilot_, as
well as for almost any other research project. Especially where a project
participates in the above-mentioned Pilot, the DMP should always include clear
descriptions and rationale for the access regimes that are foreseen for the
collected data sets 65 .
This principle is further clarified in the following paragraph of the Model
Grant Agreement: “ _As an exception, the beneficiaries do not have to ensure
open access to specific parts of their research data if the achievement of the
action's main objective, as described in Annex I, would be jeopardised by
making those specific parts of the research data openly accessible. In this
case, the data management plan must contain the reasons for not giving
access”._
A DMP describes the data management life cycle for all data sets that will be
collected, processed or generated by the corresponding research project. It is
a document outlining how research data will be handled during a research
project, and even after the project is completed, describing what data will be
collected, processed or generated and following what methodology and
standards, whether and how this data will be shared and/or made open, and how
it will be curated and preserved 66 . The DMP is not a fixed document; it
evolves and gains more precision and substance during the lifespan of the
project 67 .
The first version of the DMP is expected to be delivered within the first 6
months of the respective project. This DMP deliverable should be in compliance
with the template provided by the Commission, as presented in the following
_Section 5.3.1_ . More elaborated versions of the DMP can be delivered at
later stages of the project. The DMP would need to be updated at least by the
mid-term and final review to fine-tune it to the data generated and the uses
identified by the consortium since not all data or potential uses are clear
from the start. New versions of the DMP should be created whenever important
changes to the project occur due to inclusion of new data sets, changes in
consortium policies or external factors. Suggestions for additional
information in these more elaborated versions are provided below in the
subsequent _Section 5.3.2_ **.**
DMPs should follow relevant national and international recommendations for
best practice and should be prepared in consultation with relevant
institutional and disciplinary stakeholders. They should anticipate
requirements throughout the research activity, and should be subject to
regular review and amendment as part of normal research project management.
### 5.3.1 Template for DMP
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the
applicants with regard to all the datasets that will be generated by the
project 68 . The DMP is not a fixed document, but evolves during the
lifespan of the project.
The DMP should address 69 the points below on a dataset-by-dataset basis and
should reflect the current status of reflection within the consortium about
the data that will be produced.
▪ **Data set reference and name**
Identifier for the data set to be produced.
65

UK Data Archive (2011, May). _Managing and Sharing Data. Best Practice for
Researchers_. University of Essex, UK. Available at:
http://www.data-archive.ac.uk/media/2894/managingsharing.pdf.

66

Brunt, J. (2011). _How to Write a Data Management Plan for a National Science
Foundation (NSF) Proposal_. Available at:
http://intranet2.lternet.edu/node/3248.

67

Support on research data management for projects funded under _Horizon 2020_
has been planned through projects funded under the _Research Infrastructures
Work Programme 2014-15_.

68

An interesting conceptual approach is also proposed in: Donnelly, M. & Jones,
S. (2011). _DCC Checklist for a Data Management Plan_ v3.0. Digital Curation
Centre (DCC), UK. Available at: http://www.dcc.ac.uk/webfm_send/431.

69

Also see: Jones, S. (2011). “How to Develop a Data Management and Sharing
Plan”. _DCC How-to Guides._ Edinburgh: Digital Curation Centre. Available
online: http://www.dcc.ac.uk/resources/how-guides.

▪ **Data set description**
Description of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse.
Plans should cover all research data expected to be produced as a result of a
project or activity, from “raw” to “published”. They may include, _inter alia_
, details of: (i) an analysis of the gaps identified between the currently
available and required data for the research; (ii) anticipated data volume;
(iii) anticipated data type and formats including the format of the final
data; (iv) measures to assure data quality; (v) standards (including metadata
standards) and methodologies that will be adopted for data collection and
management, and why these have been selected; (vi) relationship to data
available from other sources, and; (vii) anticipated further/secondary use(s)
for the completed dataset(s).
▪ **Standards and metadata**
Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created.
What disciplinary norms are to be adopted in the project? What is the data
about? Who created it and why? In what forms is it available? Metadata answers
such questions to enable data to be found and understood, ideally according to
the particular standards of the project-specific scientific discipline.
DMPs should specify the principles, standards and technical processes for data
management, retention and preservation that will be adopted. These may be
determined by the area of research and/or funder requirements. Processes
should be supported by appropriate standards addressing confidentiality and
information security, legal compliance, monitoring and quality assurance, data
recovery and data management reviews where suitable. In order to maximise the
potential for re-use of data, where possible, researchers should generate and
manage data using existing widely accepted formats and methodologies. DMPs
should provide suitable quality assurance concerning the extent to which “raw”
data may be modified. Where ‘raw’ data are not to be retained, the processes
for obtaining “derived” data should be specified and conform to the accepted
procedures within the research field.
Researchers should ensure that appropriately structured metadata, using a
recognised or _de facto_ standard schema where these exist, describing their
research data are created and recorded in a timely manner. The metadata should
include information about regulatory and ethical requirements relating to
access and use. Protocols for the use, calibration and maintenance of
equipment, together with associated risk assessments, should be clearly
documented to ensure optimal performance and research data quality. Where
protocols change, they should be version controlled and the current version
should be available and readily accessible. Documentation may include:
Technical descriptions, code commenting; project-build guidelines; audit trail
supporting technical decisions; resource metadata. Not all types of
documentation will be relevant to all projects and the quantity of
documentation proposed should be proportionate to the anticipated value of the
data.
▪ **Data sharing**
Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.) 35 .
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, rules of personal data, intellectual property, commercial,
privacy-related, security-related).
By default as much of the resulting data as possible should be archived as
_Open Access_ . Therefore, legitimate reasons for not sharing resulting data
should be explained in the DMP.
Planning for data sharing should begin at the earliest stages of project
design and well in advance of beginning the research. Any potential issues
which could limit data sharing should be identified and mitigated from the
outset. Data management plans should therefore address how the research data
will be shared. Any reason for not eventually sharing data should be explained
with a justification citing for example legal, ethical, privacy or security
considerations.
▪ **Archiving and preservation (including storage and backup)**
Description of the procedures that will be put in place for long-term
preservation of the data. Indication of how long the data should be preserved,
what is its approximated end volume, what the associated costs are and how
these are planned to be covered.
Funding bodies are keen to ensure that publicly funded research outputs can
have a positive impact on future research, for policy development, and for
societal change. They recognise that impact can take quite a long time to be
realised and, _accordingly_ , expect the data to be available for a suitable
period beyond the life of the project. It has to be pointed out that it is not
simply enough to ensure that the bits are stored, but also to consider the
usability of the project-specific data. In this respect, it has to be
considered to preserve software or any code produced to perform specific
analyses or to render the data as well as being clear about any proprietary or
open source tools that will be needed to validate and use the preserved data.
Data management plans should provide for all retained data and related
materials to be securely preserved in such a way as to allow them to be
accessed, understood and used by any others having appropriate authorization
in future.
Data held electronically should be backed up regularly and duplicate copies
held in alternative locations in a secure and accessible format where
appropriate.
### 5.3.2 Additional Guidance for DMP
This additional guidance can be applied to any project that produces, collects
or processes research data, and is included as a reference for elaborating
DMPs in _Horizon 2020_ projects. The guide is structured as a series of
questions that should ideally be clarified for all datasets produced in the
project.
Scientific research data should be easily:
###### 1\. Discoverable
DMP question: Are the data and associated software produced and/or used in the
project discoverable (and readily located), identifiable by means of a
standard identification mechanism (e.g. Digital Object Identifier)?
###### 2\. Accessible
DMP question: Are the data and associated software produced and/or used in the
project accessible and in what modalities, scope, licenses 36 (e.g.
licensing framework for research and education, embargo periods, commercial
exploitation, etc.)?
###### 3\. Assessable and intelligible
DMP question: Are the data and associated software produced and/or used in the
project assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review (e.g. are the minimal datasets handled
together with scientific papers for the purpose of peer review; is data
provided in a way that judgments can be made about their reliability and the
competence of those who created them)?
###### 4\. Useable beyond the original purpose for which it was collected
DMP question: Are the data and associated software produced and/or used in the
project useable by third parties even a long time after the collection of the
data (e.g. is the data safely stored in certified repositories for long term
preservation and curation; is it stored together with the minimum software,
metadata and documentation to make it useful; is the data useful for the wider
public needs and usable for the likely purposes of non-specialists)?
###### 5\. Interoperable to specific quality standards
DMP question: Are the data and associated software produced and/or used in the
project interoperable allowing data exchange between researchers,
institutions, organizations, countries, etc. (e.g. adhering to standards for
data annotation, data exchange, compliant with available software
applications, and allowing recombinations with different datasets from
different origins)?
## 5.4 Structuring of a SESAME DMP
Different types of data raise very different considerations and challenges,
and there are significant differences between fields in terms of, for example,
the availability of repositories and level of established good practice for
data sharing. Data generated by the Project will mostly consist of
measurement and traffic data from various simulations, emulations in the CESC
platform, and the proof of concept (PoC) experimentation in the SESAME test-
bed(s). Without going into full details of the DMP here, there are several
standards that can be used to store such data as well as providing the meta-
data necessary for third parties to utilise the data.
The overall goal is, as much as possible, to use not only open formats to
store the data but also open source software to provide the scripts and other
metadata necessary to re-use it.
Similar to the software generated by the Project, some of the data generated
may pertain to components, software, or figures considered as confidential by
one or more of the partners. The particular data affected by this will be
described in the DMP and the reasons for maintaining confidentiality will be
provided.
According to the discussion provided in the previous _Section 5.3_ , a
suitable Data Management Plan (DMP) includes the following major components,
as shown in _**Figure 4** _ , below:
**Figure 4: Structure of a Data Management Plan (DMP)**
For the case of the SESAME Project, the context becomes as it appears in
_**Figure 5** _ , below:
**Figure 5: Essential Components of the SESAME Data Management Plan (DMP)**
In the following _Sections 5.4.1-5.4.5_ we discuss, one-by-one, the essential
characteristics -or “modules”- of the SESAME DMP, based on the concept of
_**Figure 5** _ .
### 5.4.1 Data Set Reference and Naming
The following structure is proposed for SESAME data set identifier:
SESAME_[Name]_[Type]_[Place]_[Date]_[Owner]_[Target User]
Where we identify the following fields:
* _“Name”_ is a short name for the data.
* _“Type”_ describes the type of data (e.g. code, publication, measured data).
* _“Place”_ describes the place where the data were produced.
* _“Date”_ is the date in format “YYYY-MM-DD”.
* _“Owner”_ is the owner or owners of the data (if any).
* _“Target user”_ is the target audience of the data (this is an optional identifier).
* The underscore (“_”) is used as the separator between the fields.
For example,
_“SESAME_Field_Experiment_data_Athens_2015-06-31_OTE_Internal.dat”_ is a data
file from a field experiment in Athens, Greece that has been performed on
2015-06-31 and owned by the project partner OTE with extension .dat (MATLAB
72 ). More information about the data is provided in the metadata (see the
following section).
All the data fields in the identifier above, apart from the target user, are
mandatory. If the owner (or owners) cannot be specified, the field should be
set to _“Unspecified-owner”_.
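As an illustration only, the following minimal Python sketch assembles such an
identifier from its fields; the function name and argument handling are
hypothetical conveniences, not part of the SESAME convention itself:

```python
from datetime import date

def sesame_identifier(name, dtype, place, when, owner=None, target_user=None):
    """Assemble a SESAME data set identifier from its mandatory and optional fields."""
    fields = [
        "SESAME",
        name,                          # short name for the data
        dtype,                         # e.g. code, publication, measured data
        place,                         # place where the data were produced
        when.isoformat(),              # date in YYYY-MM-DD format
        owner or "Unspecified-owner",  # fallback when no owner can be given
    ]
    if target_user:                    # the only optional field
        fields.append(target_user)
    return "_".join(fields)

# Reproduces the example above (before the file extension is added):
# "SESAME_Field_Experiment_data_Athens_2015-06-30_OTE_Internal"
print(sesame_identifier("Field_Experiment", "data", "Athens",
                        date(2015, 6, 30), "OTE", "Internal"))
```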
### 5.4.2 Data Set Description and Metadata
The previous _Section 5.4.1_ has defined a data set identifier. The data set
description is fundamentally an expanded description of the identifier with
more details.
The data set description, organized as metadata, covers the same fields as the
identifier but in more detail. Depending on the file format, it will either be
incorporated as a part of the data file or provided as a separate file, in its
simplest form in plain text. In the case of a separate metadata file, it will
have the same name as the data set with the added suffix _“METADATA”_.
For example, the metadata file name for the data file from the previous
section will appear as follows:
_“SESAME_Field_Experiment_data_Athens_2015-06-30_OTE_Internal_METADATA.txt”_
The metadata file can also designate a number of files (e.g. a number of log
files). The SESAME Project may thus consider providing the metadata in XML
73 or JSON 74 format, where convenient for parsing and further processing.
The Project will develop several data types related to the VNF (Virtual
Network Function) Descriptors, NS (Network Service) Descriptors, VNF
Catalogues, etc., which will be encoded into the metadata format appropriately
in order to keep the description and filtering of the data types consistent.
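To make this concrete, here is a minimal sketch of what a JSON metadata file
mirroring the identifier fields could look like; all field names and values
are illustrative assumptions rather than a fixed SESAME schema:

```python
import json

identifier = "SESAME_Field_Experiment_data_Athens_2015-06-30_OTE_Internal"

# Hypothetical metadata mirroring the identifier fields, plus free-text details.
metadata = {
    "identifier": identifier,
    "name": "Field_Experiment",
    "type": "measured data",
    "place": "Athens, Greece",
    "date": "2015-06-30",
    "owner": "OTE",
    "target_user": "Internal",
    "description": "Measurement data from a field experiment on the SESAME test-bed.",
    "files": [identifier + ".dat"],  # a metadata file may designate several files
}

# The metadata file keeps the data set name, with the added "METADATA" suffix.
with open(identifier + "_METADATA.json", "w") as f:
    json.dump(metadata, f, indent=2)
```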
72 MATLAB (matrix laboratory) is a multi-paradigm numerical computing
environment and fourth-generation programming language. More information can
be found at: _https://en.wikipedia.org/wiki/MATLAB_.

73 Extensible Markup Language (XML) is a mark-up language that defines a set
of rules for encoding documents in a format which is both human-readable and
machine-readable. It is defined by the W3C’s XML 1.0 Specification and by
several other related specifications, all of which are free open standards.
More related information can be found at: _https://en.wikipedia.org/wiki/XML_.

74 JavaScript Object Notation (JSON) is an open standard format that uses
human-readable text to transmit data objects consisting of attribute-value
pairs. It is the primary data format used for asynchronous browser/server
communication (AJAJ), largely replacing XML. Though it originally derived from
the JavaScript scripting language, JSON is a language-independent data format.
Code for parsing and generating JSON data is readily available in many
programming languages. More detailed information can be found at:
_https://en.wikipedia.org/wiki/JSON_.
### 5.4.3 Data Sharing
SESAME will use the _zenodo.org_ repository for storing the related Project
data and a SESAME account will be created for that purpose. _Zenodo.org_ is a
repository supported by CERN and the EU OpenAIRE project 37 . It is open,
free, searchable and structured, with flexible licensing that allows for
storing all types of data: datasets, images, presentations, publications and
software. Researchers working for European funded projects can participate by
depositing their research output in a repository of their choice 38 , 39 ,
publishing in a participating Open Access journal, or depositing directly in
the OpenAIRE repository _Zenodo_, indicating the project it belongs to in the
metadata 40 . Dedicated pages per project are visible on the OpenAIRE
portal. Project-based research output, whether publications, datasets or
project information, is accessible through the OpenAIRE portal. Extra
functionalities are offered too, such as statistics, reporting tools and
widgets, making OpenAIRE a useful support service for researchers,
coordinators and project managers. On this portal, each project has a
dedicated page featuring: _(i)_ Project information; _(ii)_ App & Widget box;
_(iii)_ Publication list; _(iv)_ Datasets, and _(v)_ Author information.
In addition, we identify the following beneficial features:
* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com, where the Project code will be stored. GitHub provides a free and flexible tool for code development and storage.
* _Zenodo_ assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily -and uniquely- citable.
All the above features make _Zenodo_ a good candidate as a _unified_
repository for all foreseen project data (presentations, publications, code
and measurement data) coming from SESAME. Information on using _Zenodo_ by the
Project partners with application to the SESAME data will be circulated within
the consortium and addressed within the respective work package (WP8). The
process of making the SESAME data public and publishable at the repository
will follow the procedures described in the SESAME Consortium Agreement. For
the code, the Project partners will follow the internal _“Open Source
Management Process”_ document. All the public data of the project will be
openly accessible at the repository. Non-public data will be archived at the
repository using the “closed access” option.
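For partners automating their uploads, the following minimal sketch shows one
way a deposit could be made through Zenodo's documented REST deposit API; the
token, file name and metadata values are placeholders, and the "access_right"
field is where the open/closed distinction described above would be set:

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # placeholder: personal access token of the SESAME Zenodo account

# 1. Create an empty deposition.
dep = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={}).json()

# 2. Upload the data file into the deposition's file bucket.
with open("measurements.dat", "rb") as fp:
    requests.put(dep["links"]["bucket"] + "/measurements.dat",
                 data=fp, params={"access_token": TOKEN})

# 3. Attach minimal metadata; "closed" keeps non-public data archived only.
meta = {"metadata": {"title": "SESAME field experiment data",
                     "upload_type": "dataset",
                     "description": "Measurement data from the SESAME test-bed.",
                     "access_right": "open"}}   # or "closed" for non-public data
requests.put(f"{ZENODO_API}/{dep['id']}", params={"access_token": TOKEN}, json=meta)

# 4. Publishing the deposition is what triggers the DOI assignment.
requests.post(f"{ZENODO_API}/{dep['id']}/actions/publish",
              params={"access_token": TOKEN})
```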
### 5.4.4 Archiving and Preservation
The _Guidelines on Data Management in Horizon 2020_ require defining
procedures that will be put in place for long-term preservation of the data
and backup. The _zenodo.org_ repository possesses these archiving capabilities
including backup and will be used to archive and preserve the SESAME Project
data.
Further, the SESAME Project data will also be stored in a project-managed
repository tool, _SharePoint_ 41 , which is managed by the Project
Coordinator and offers flexible live data storage. This repository will link
directly to the project website, where access information for the different
data types can be provided. This will permit users and research collaborators
to have easy and convenient access to the Project research data.
### 5.4.5 Use of DMP within the Project
The SESAME Project partners will use this plan as a reference for data
management (naming, providing metadata, storing and archiving) within the
project each time new project data are produced.
The SESAME partners are introduced to the DMP and its use as part of WP8
activities. Relevant questions from partners will also be addressed within
WP8. The work package will also provide support to the project partners on
using _Zenodo_ as the data management tool. The DMP will be used as a live
document in order to update the project partners about the use, monitoring and
updates of the shared infrastructure.
# Executive Summary
PoliVisu aims to establish the use of big data and data visualisation as an
integral part of policy making, particularly, but not limited to, the local
government level and the mobility and transport policy domain. The project’s
relation with data is therefore essential and connatural to its experimental
research objectives and activities.
Additionally, the consortium has adhered to the H2020 ORDP (Open Research Data
Pilot) convention with the EC, which explicitly caters for the delivery of a
DMP (Data Management Plan).
According to the PoliVisu DoA (2017), data management planning, monitoring and
reporting is part of WP2 - the Project and Quality Management work package -
and foresees the delivery of four consecutive editions of the DMP at months 6,
12, 24 and 36.
This first edition, however, is not a mere collection of principles, as it
sets the stage for the ongoing and upcoming activities dealing with data,
before and even after the project is completed. As per the DoA description: “ _DMP_
_describes the data management lifecycle for all data sets that will be
collected, processed or generated by the research project. It is a document
outlining how research data will be handled during a research project, and
even after the project is completed, describing what data will be collected,
processed or generated and following what methodology and standards, whether
and how this data will be shared and/or made open, and how it will be curated
and preserved”._
We basically envisage three main data usage scenarios, which jointly compose
PoliVisu’s data management lifecycle:
* Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity)
* Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation
* Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline
The structure of this document is as follows:
* **Section 1** presents PoliVisu’s data management lifecycle and frames the DMP within the EU H2020 Guidelines and FAIR data handling principles, thus setting the stage for the following parts.
* **Section 2** is a brief overview of the legal framework, including the EU regulation on personal data protection (GDPR), the H2020 provisions for open access to research data, the specific provisions of the PoliVisu Grant Agreement and Consortium Agreement and some special provisions for big data management.
* The core of the DMP is **Section** **3** , in which the data usage scenarios are presented and the key issues to be examined in relation to each scenario are discussed. These issues include decisions on e.g. data anonymization, privacy and security protection measures, licensing etc.
* **Section 4** concludes the document by anticipating the expected contents of future editions of the DMP.
For completeness of information, the reader interested in getting to know how
the PoliVisu consortium plans to deal with data may also refer, in addition to
this DMP, to the following, already or soon to be published, deliverables:
D1.1 (Ethical Requirement No. 4), D1.3 (Ethical Requirement No. 3), D2.2
(Project Management
Plan), D2.3 (Quality and Risk Plan), D6.1 (Pilot Scenarios), D7.1 (Evaluation
Plan) and D8.1 (Impact
Enhancement Road Map).
# Introduction
Visualisation and management of (big) data in a user friendly way for public
administration bodies is one of the primary goals of the PoliVisu project. The
intention is to support integration of (big) data into policy and decision
making processes. The project’s relation with data is therefore essential and
connatural to its experimental research objectives and activities.
Additionally, the consortium adhered to the H2020 ORDP (Open Research Data
Pilot) convention with the EC, which explicitly caters for the delivery of a
DMP (Data Management Plan).
According to the PoliVisu DoA (2017), data management planning, monitoring and
reporting is part of WP2 - the Project and Quality Management work package -
and foresees the delivery of four consecutive editions of the DMP at months 6,
12, 24 and 36. This first edition, however, is not a mere collection of
principles, as it sets the stage for the ongoing and upcoming activities
dealing with data, before and even after the project is completed.
## The PoliVisu Data Management Lifecycle
As per the DoA description, the PoliVisu DMP “ _describes_ _the data
management lifecycle for all data sets that will be collected, processed or
generated by the research project. It is a document outlining how research
data will be handled during a research project, and even after the project is
completed, describing what data will be collected, processed or generated and
following what methodology and standards, whether and how this data will be
shared and/or made open, and how it will be curated and preserved”._
This paragraph summarizes the management procedures that will be followed when
dealing with the data of relevance for the PoliVisu project, and which will be
further described in Section 3 of this document.
We envisage **three main data usage scenarios** :
1. Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity);
2. Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation;
3. Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline.
For each of the above scenarios, the key issues to be examined are displayed
by the following logic tree:
**Figure 1 – The PoliVisu Data Management Life Cycle**
For each dataset (or even data point) handled in the project, the first level
of control/decision making must deal with its **nature** , notably whether
it has been (or will be) deemed Confidential, or Anonymised and Public (apart
from very special occasions, handled in the third logical category displayed
in the picture, these two latter properties go together).
Depending on the assessment of nature, the resulting mandatory **action**
**lines** can be summarized as follows (a minimal code sketch of this decision
logic is given after the list):
* For any acknowledged **Confidential** 1 dataset (or data point), the Consortium and/or each Partner in charge of its handling shall control (if existing) or define (if not) the **Licensing** **rules** and the **Privacy** **and security measures** (to be) adopted in the process.
* For any acknowledged **Anonymised** **and Public** dataset (or data point), the only relevant discipline to be clarified is the set of **Open** **Access rules** that apply to the case. This set is hardly controversial for PoliVisu, as the ORDP convention has been adopted, as specified above. Note that the use of open data across the PoliVisu pilots, including e.g. Open Transport Maps or Open Land Use Maps, falls in this category.
* Any dataset (or data point) that does not belong to any of the former two categories is subject to an additional level of action by the Consortium and/or Partner in charge, leading to its classification as either Confidential or Anonymised and Public. In that regard, the two, mutually exclusive action items belonging to this level are:
○ the **anonymisation** **for publication** action, leading to the migration
to the second category of data, or
○ the adoption of appropriate **privacy** **and security measures** (very
likely the same applied to the category of Confidential data) in case
anonymisation is not carried out for whatever legitimate reason. Note that in
this latter case, i.e. without anonymisation, **no** **licensing rules are
applicable** (the PoliVisu consortium rejects the commercialisation of the
personal profiles of human beings as an unethical practice).
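As announced above, here is a minimal Python sketch of this decision logic;
the `dataset` attributes and the returned action strings are hypothetical
paraphrases of the action lines, not project APIs:

```python
def required_actions(dataset):
    """Map a dataset's (or data point's) assessed nature onto the
    mandatory action lines of Figure 1."""
    if dataset.is_confidential:
        return ["control or define the licensing rules",
                "control or define the privacy and security measures"]
    if dataset.is_anonymised_and_public:
        return ["apply the applicable Open Access rules (ORDP)"]
    # Residual category: two mutually exclusive action items.
    return ["anonymise for publication (moves it to the public category)",
            "or apply privacy and security measures; no licensing rules apply"]
```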
## Reference Framework and Perimeter of the DMP
The following picture – borrowed from the official EU H2020 information portal
2 – clearly identifies the positioning of the DMP in the context of projects
that – like PoliVisu – have voluntarily adhered to the Pilot on Open Research
Data in Horizon 2020 3 .
**Figure 2: Open access to scientific publications and research data in the
wider context of a project’s dissemination and exploitation (source: European
Commission, 2017)**
As can be seen, a DMP holds the same status and relevance as the project’s
Dissemination Plan 4 . More specifically, in the former document, one should
retrieve the full list of research data and publications that the project will
deliver, use or reuse, as well as the indication of whether some data will be
directly exploited by the Consortium, having been patented or protected in any
other possible form. In the latter document, one should retrieve the
Consortium’s detailed provisions for all data and publications that can be
shared with interested third parties, with or without the payment of a fee 3
.
In particular, the following definitions – all taken from the aforementioned
EU H2020 portal – shall apply to our discourse:
* **Access** : “ _the_ _right to read, download and print – but also the right to copy, distribute, search, link, crawl and mine_ ”;
* **Research Data** : “ _[_ _any] information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form_ ”;
* **Scientific Publications** : “ _journal_ _article[s],_ … _monographs, books, conference proceedings, [and] grey literature (informally published written material not controlled by scientific publishers)”_ , such as reports, white papers, policy/position papers, etc.;
* **Open Access Mandate** : “ _comprises 2 steps: depositing publications in repositories [and] providing open access to them_ ”. Very importantly, these steps “ _may_ _or may not occur simultaneously_ ”, depending on conditions that will be explained below:
* **“Green”** **Open Access (aka Self-Archiving)** : it is granted when the final, peer-reviewed manuscript is deposited by its authors in a repository of their choice. Then open access must be ensured within at most 6 months (12 months for publications in the social sciences and humanities). Thus, open access may actually follow with some delay (due to the so-called “embargo period”);
* **“Gold”** **Open Access (aka Open Access Publishing)** : it is granted when the final, peer-reviewed manuscript is immediately available on the repository where it has been deposited by its authors (without any delay or “embargo period”). Researchers can also decide to publish their work in open access journals, or in hybrid journals that both sell subscriptions and offer the option of making individual articles openly accessible. In the latter case, the so-called “article processing charges” are eligible for reimbursement during the whole duration of the project (but not after the end of it).
In the PoliVisu **DoA** (2017), the following provisions for Open Access
were defined, which have become part of the Grant Agreement (GA) itself:
_“PoliVisu_ _will follow the Open Access mandate for its publications and
will participate in the Open Research Data pilot, so publications must be
published in Open Access (free online access). Following the list of
deliverables, the consortium will determine the appropriate digital objects
that will apply to the Data Management Plan. Each digital object, including
associated metadata, will be deposited in the institutional repository of
Universitat Politècnico Milano, whose objective is to offer Internet access
for university's scientific, academic and corporate university in order to
increase their visibility and make it accessible and preservable.”_
Evidently, these provisions belong to the **“Green” Open Access** case.
As far as patenting or other form of protection of research results is
concerned (the bottom part of Figure 2), the ground for this has been paved by
the PoliVisu Consortium Agreement (2017) - following the DoA, which recognises
that _“formal_ _management of knowledge and intellectual property rights
(IPR) is fundamental for the effective cooperation within the project lifetime
and the successful exploitation of the PoliVisu Framework and tools within and
after the end of the project”_ . Further steps towards a clarification of the
licensing mechanisms will be taken in the context of the 3 foreseen editions
of the Business and Exploitation Plan in the context of WP8 (deliverables D8.3
due at month 12, D8.6 due at month 24 and D8.10 due at month 34). As a general
principle, GA article 26.1, according to which “ _Results are owned by the
Party that generates them_ ”, is faithfully adopted in the PoliVisu Consortium
Agreement (CA) as its article 8.1. In addition, article 8.2 specifies that “
_in case of joint ownership, each of the joint owners shall be entitled to
Exploit the joint Results as it sees fit, and to grant non-exclusive licences,
without obtaining any consent from, paying compensation to, or otherwise
accounting to any other joint owner, unless otherwise agreed between the joint
owners_ ”.
We take the above provisions also as a **guideline** **for the attribution of
data management responsibilities** , as far as PoliVisu research results are
concerned. In short, we posit that **ownership** **goes hand in hand with the
responsibility for data management** : the latter rests with the same project
partner(s) who generate new data, individually or jointly. In case of reuse of
existing data, i.e. data owned by someone else (a third party or another
PoliVisu partner), the individual or joint responsibility is to **check** **the
nature of the data** (as specified in Figure 1 above) and **undertake** **the
consequent actions** , as further described in Section 3 below.
## Alignment to the Principles of FAIR Data Handling
Generally speaking, a good DMP under H2020 should comply with the FAIR Data
Handling Principles. FAIR stands for Findable, Accessible, Interoperable and
Re-usable, as referred to a project’s research outputs – notably those made
available in digital form.
The FAIR principles, however, do not belong to H2020 or the EC but have
emerged in January 2014, as the result of an informal working group convened
by the Netherlands eScience Center and the Dutch Techcentre for the Life
Sciences at the Lorentz Center in Leiden, The Netherlands 4 .
Very pragmatically, the European Commission (2016) considers the FAIR
principles fulfilled if a DMP includes the following information:
1. _“The handling of research data during and after the end of the project”_
2. _“What data will be collected, processed and/or generated”_
3. _“Which methodology and standards will be applied”_
4. _“Whether data will be shared/made open access”, and_
5. _“How data will be curated and preserved (including after the end of the project)”._
In the case of PoliVisu, the above information is provided in Section 3 of
this document, which consists of five paragraphs, respectively:
1. Data summary ( _typologies and contents of data collected and produced_ )
2. Data collection ( _which procedures for collecting which data_ )
3. Data processing ( _which procedures for processing which data_ )
4. Data storage ( _data_ _preservation and archiving during and after the project_ )
5. Data sharing ( _including provisions for open access_ )
The following table matches the aforementioned EC requirements with the
contents dealt with in Section 3 paragraphs.
**Table 1. Alignment between this DMP and the EC’s requirements**
<table>
<tr>
<th>
**This document’s Section 3 TOC**
**EC requirements**
</th>
<th>
**3.1 Data Summary**
</th>
<th>
**3.2 Data**
**Collection**
</th>
<th>
**3.3 Data**
**Processing**
</th>
<th>
**3.4 Data**
**Storage**
</th>
<th>
**3.5 Data**
**Sharing**
</th> </tr>
<tr>
<td>
**A. “The handling of research data during and after the end of the project”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**B. “What data will be collected, processed and/or generated”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**C. “Which methodology and standards will be applied”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**D. “Whether data will be shared/made open access”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**E. “How data will be curated and preserved (including after the end of the
project)”**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
This Introduction has presented PoliVisu’s data management lifecycle and
framed the DMP within the EU H2020 Guidelines and FAIR data handling
principles. The remainder of this document is structured as follows:
* **Section 2** is a brief overview of the legal framework, including the EU regulation on personal data protection (GDPR), the H2020 provisions for open access to research data, the specific provisions of the PoliVisu grant agreement and consortium agreement and some special provisions for big data.
* **Section 3** presents and discusses the data usage scenarios in the framework outlined in the above Table and examines the key issues in relation to each scenario. These issues include decisions on e.g. data anonymization, privacy and security protection measures, licensing etc.
* **Section 4** concludes the document by anticipating the expected contents of future editions of the DMP.
* In **Annex** **I** the interested reader can find a running list of utilized / relevant data sources, which will be further updated over the course of the project.
# Legal framework
This section briefly overviews the key normative references making up the DMP
external context. The next paragraphs respectively deal with:
1. The PSI Directive and its recent modifications and revisions proposals (dated April 2018);
2. The General Data Protection Regulation, coming into force in May this year;
3. The terms of the H2020 Open Research Data Pilot (ORDP) the PoliVisu consortium has adhered to;
4. The resulting, relevant provisions of both the Grant and the Consortium Agreements;
5. The special provisions for big data management mentioned in the DoA and thus become binding for all partners;
6. A general outline of PoliVisu’s licensing policy.
## The PSI Directive
The Directive 2003/98/EC on the re-use of Public Sector Information (PSI)
entered into force on 31 December 2003. It was revised by the Directive
2013/37/EU, which entered into force on 17 July 2013. The consolidated text
resulting from the merge of these two legislative documents is commonly
known as the PSI Directive, and can be consulted on the Eur-Lex website 7 .
On 25 April 2018, the EC adopted a proposal for a revision of the PSI
Directive, which was presented as part of a package of measures aiming to
facilitate the creation of a common data space in the EU. This review also
fulfils the revision obligation set out in Article 13 of the PSI Directive.
The proposal has received a positive opinion from the Regulatory Scrutiny
Board and is now being discussed with the European Parliament and the Council.
It comes as the result of an extensive public consultation process, an
evaluation of the current legislative text and an impact assessment study done
by an independent contractor 8 .
The current PSI Directive and its expected evolution are noteworthy and useful
for defining the context of the PoliVisu project in general and of this DMP in
particular. Thanks to the PSI Directive and its modifications and
implementations 9 , the goal of making government data and information
reusable has become shared at the broad European level. In addition, awareness
has been growing remarkably that, as a general principle, the datasets where
PSI is stored must be opened by default. However, fifteen years after the
publication of the original PSI Directive, there are still barriers to
overcome (better described in the aforementioned impact assessment study) that
prevent the full reuse of government data and information, including data
generated by the public utilities and transport sectors as well as the results
of publicly funded R&D projects, two key areas of attention for PoliVisu and
this DMP.
7 _https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02003L0098-20130717_

8 Available online at: _http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51491_

9 For instance, the INSPIRE Directive (2007/2/EC) builds mechanisms for data
and corresponding Web services discoverability on top of the PSI Directive.
See: _https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32007L0002&from=en_

## The EU Personal Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 sets out the new General Data Protection Regulation
(GDPR) framework in the EU, notably concerning the processing of personal data
belonging to EU citizens by individuals, companies or public sector/non-
government organisations, irrespective of their localization. It is therefore
a primary matter of concern for the PoliVisu consortium.
The GDPR was adopted on 27 April 2016, but will become enforceable on 25 May
2018, after a two-year transition period. By then, it will replace the current
Data Protection Directive (95/46/EC) and its national implementations. Being a
regulation, not a directive, GDPR does not require Member States to pass any
enabling legislation and is directly binding and applicable.
The GDPR provisions do not apply to the processing of personal data of
deceased persons or of legal entities. They do not apply either to data
processed by an individual for purely personal reasons or activities carried
out at home, provided there is no connection to a professional or commercial
activity. When an individual uses personal data outside the personal sphere,
for socio-cultural or financial activities, for example, then the data
protection law has to be respected.
On the other hand, the legislative definition of personal data is quite broad,
as it includes any information relating to an individual, whether it relates
to his or her private, professional or public life. It can be anything from a
name, a home address, a photo, an email address, bank details, posts on social
networking websites, medical information, or a computer’s IP address.
While the specific requirements of GDPR for privacy and security are
separately dealt with in other PoliVisu Deliverables (such as D1.1 on POPD
Requirement No. 4 due by month 6 and D1.2 on POPD Requirement No. 6 delivered
at month 3, as well as D4.5 & D4.6 on Privacy rules and data anonymization,
due by months 24 & 30 respectively), it is worth noting here that the PoliVisu
consortium has formed a working group composed of the partner organisations’
Data Protection Officers (DPOs). The DPO function and role was introduced by
the GDPR and further defined by a set of EC guidelines, issued on 13 December
2016 and revised on 5 April 2017 10 .
The GDPR text is available on the Eur-Lex website 11 .
## Open Access in Horizon 2020
As partly anticipated in Section 1, the EC has launched in H2020 a flexible
pilot for open access to research data (ORDP), aiming to improve and maximise
access to and reuse of research data generated by funded R&D projects, while
at the same time taking into account the need to balance openness with privacy
and security concerns, protection of scientific information, commercialisation
and IPR. This latter need is crystallised into an opt-out rule, according to
which it is possible at any stage - before or after the GA signature - to
withdraw from the pilot, but legitimate reasons must be given, such as
IPR/privacy/data protection or national security concerns.
With the Work Programme 2017, the ORDP has been extended to cover all H2020
thematic areas by default. In particular, this has generated the obligation
for all consortia to deliver a Data Management Plan (DMP), in which they
specify what data the project will generate, whether it will be withheld from
free disclosure for e.g. exploitation-related purposes, how it will be made
accessible for verification and reuse, and how it will be curated and
preserved.
The ORDP applies primarily to the data needed to validate the results
presented in scientific publications. Other data can however be provided by
the beneficiaries of H2020 projects on a voluntary basis.
The costs associated with the Gold Open Access rule, as well as the creation
of the DMP, can be claimed as eligible in any H2020 grant.
As already mentioned, the PoliVisu consortium has adhered to the **Green Open
Access** rule.
10 See: _http://ec.europa.eu/newsroom/document.cfm?doc_id=44100_

11 _https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32016R0679_
## Grant Agreement and Consortium Agreement provisions
The key GA and CA provisions worth mentioning in relation to our discourse on
data management have been already introduced to a great extent in the previous
Sections. Now we simply reproduce the corresponding articles.
### Grant Agreement
_24.1 Agreement on background_
The beneficiaries must identify and agree (in writing) on the background for
the action (‘agreement on background’).
‘Background’ means any data, know-how or information — whatever its form or
nature (tangible or intangible), including any rights such as intellectual
property rights — that: (a) is held by the beneficiaries before they acceded
to the Agreement, and (b) is needed to implement the action or exploit the
results.
_26.1 Ownership by the beneficiary that generates the results_ Results are
owned by the beneficiary that generates them.
‘Results’ means any (tangible or intangible) output of the action such as
data, knowledge or information — whatever its form or nature, whether it can
be protected or not — that is generated in the action, as well as any rights
attached to it, including intellectual property rights.
_26.2 Joint ownership by several beneficiaries_ Two or more beneficiaries own
results jointly if: (a) they have jointly generated them and (b) it is not
possible to:
1. establish the respective contribution of each beneficiary, or
2. separate them for the purpose of applying for, obtaining or maintaining their protection.
_29.1 Obligation to disseminate results_
Unless it goes against their legitimate interests, each beneficiary must — as
soon as possible — ‘disseminate’ its results by disclosing them to the public
by appropriate means (other than those resulting from protecting or exploiting
the results), including in scientific publications (in any medium).
_29.2 Open access to scientific publications_
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results.
_29.3 Open access to research data_
Regarding the digital research data generated in the action (‘data’), the
beneficiaries must:
(a) deposit in a research data repository and take measures to make it
possible for third parties to access, mine, exploit, reproduce and disseminate
— free of charge for any user — the following: (i) the data, including
associated metadata, needed to validate the results presented in scientific
publications as soon as possible;
(ii) other data, including associated metadata, as specified and within the
deadlines laid down in the 'data management plan');
(b) provide information — via the repository — about tools and instruments at
the disposal of the beneficiaries and necessary for validating the results
(and — where possible — provide the tools and instruments themselves).
(...)
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective, as described in Annex 1, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access.
_39.2 Processing of personal data by the beneficiaries_
The beneficiaries must process personal data under the Agreement in compliance
with applicable EU and national law on data protection (including
authorisations or notification requirements). The beneficiaries may grant
their personnel access only to data that is strictly necessary for
implementing, managing and monitoring the Agreement.
### Consortium Agreement
_Attachment 1: Background included_
According to the Grant Agreement (Article 24) Background is defined as “data,
know-how or information (…) that is needed to implement the action or exploit
the results”. Because of this need, Access Rights have to be granted in
principle, but Parties must identify and agree amongst them on the Background
for the project. This is the purpose of this attachment 5 .
(...)
As to EDIP SRO, it is agreed between the Parties that, to the best of their
knowledge, The following background is hereby identified and agreed upon for
the Project: (...)
Algorithms for the analysis of data characterizing the traffic flow from
automatic traffic detectors. Mathematical model of traffic network of roads in
the Czech Republic, including car traffic matrix.
(...)
As to HELP SERVICE REMOTE SENSING SRO, it is agreed between the Parties that,
to the best of their knowledge, The following background is hereby identified
and agreed upon for the Project: (...) Metadata Catalogue Micka.
Senslog Web Server.
HSLayers NG.
Mobile HSLayers NG Cordova.
VGI Apps.
(...)
As to GEOSPARC NV, it is agreed between the Parties that, to the best of their
knowledge, The following background is hereby identified and agreed upon for
the Project: (...) geomajas (http://www.geomajas.org).
INSPIRE>>GIS view & analysis component.
(...)
As to INNOCONNECT SRO, it is agreed between the Parties that, to the best of
their knowledge, The following background is hereby identified and agreed upon
for the Project: (...) WebGLayer library (available at http://webglayer.org/).
(...)
As to CITY ZEN DATA, it is agreed between the Parties that, to the best of
their knowledge, The following background is hereby identified and agreed upon
for the Project: (...) Warp10 platform (www.warp10.io).
(...)
As to ATHENS TECHNOLOGY CENTER SA, it is agreed between the Parties that, to
the best of their knowledge, The following background is hereby identified and
agreed upon for the Project: (...)
TruthNest, which will be integrated as a service within PoliVisu through an
API to be provided by ATC
(...)
As to SPRAVA INFORMACNICH TECHNOLOGII MESTA PLZNE, PRISPEVKOVA ORGANIZACE, it
is agreed between the Parties that, to the best of their knowledge, The
following background is hereby identified and agreed upon for the Project:
(...)
Mathematical model of traffic network of roads in the city of Pilsen,
including a car traffic matrix (so- called CUBE software:
http://www.citilabs.com/software/cube/).
(...)
As to MACQ SA, it is agreed between the Parties that, to the best of their
knowledge, The following background is hereby identified and agreed upon for
the Project: (...)
M3 Demo version in Macq's cloud for development, not allowed to put online or
in production. Excluded: background and especially data which is not owned by
Macq or which it is not allowed to share.
(...)
As to PLAN4ALL ZS, it is agreed between the Parties that, to the best of their
knowledge, The following background is hereby identified and agreed upon for
the Project: (...) Smart Points of Interest (http://sdi4apps.eu/spoi/).
Open Transport Map (http://opentransportmap.info/).
Open Land Use Map (http://sdi4apps.eu/open_land_use/).
(...)
As to STAD GENT, it is agreed between the Parties that, to the best of their
knowledge, The following background is hereby identified and agreed upon for
the Project: (...)
Any software developed for the publication, analysis, harmonisation and/or
storage of data by the City, its ICT partner Digipolis, or any subcontractor
thereof.
(...)
## The PoliVisu licensing policy
There is at the moment no single licensing policy within the PoliVisu
consortium, either for the software (the so-called Playbox) or for its
individual components, some of which belong to the Background mentioned in the
previous subparagraph. This is probably a topic of discussion for later
project stages. Likewise, there has been no explicit consideration of the data
licensing issue at the broad consortium level yet, which can be attributed to
the relatively early stage of the project’s lifespan and the limited number of
plenary meetings held so far.
However, a few building blocks can already be identified, based on the
discussion in this document, the GA provisions quoted above (as well as others
not yet quoted), and the individual partners’ declarations in the CA. These
provisions were implicitly accepted by the PoliVisu consortium members upon
signature of the aforementioned documents and are therefore fully enforceable.
They are summarized in the table below.
**Table 2. Building blocks of the PoliVisu licensing policy**
<table>
<tr>
<th>
**Typology of data**
</th>
<th>
**Licensees**
</th>
<th>
**During the project period**
</th>
<th>
**After the project period**
</th>
<th>
**Legal references**
</th> </tr>
<tr>
<td>
Pre-existing (e.g. part of the Background knowledge of PoliVisu, as listed in
the CA Attachment 1)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Under fair and reasonable conditions
</td>
<td>
GA Art. 25.2
GA Art. 25.3
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
As per the Background commercial licence
</td>
<td>
As per the Background commercial licence
</td>
<td>
CA Attachment 1
</td> </tr>
<tr>
<td>
Sourced from third parties for the execution of project activities (e.g.
portions of large datasets)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Within the scope of the third party’s license
</td>
<td>
General rules on IPR and license details
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
No right to sublicense
</td>
<td>
No right to sublicense
</td>
<td>
General rules on IPR and license details
</td> </tr>
<tr>
<td>
Freely available in the state of art (e.g. Open
Data)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
</td>
<td>
Royalty free usage
</td>
<td>
Within the scope of the data owner’s license
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
Royalty free usage
</td>
<td>
Royalty free usage
</td>
<td>
Within the scope of the data owner’s license
</td> </tr>
<tr>
<td>
Newly produced 6 during the project (i.e. part of the Foreground knowledge
of PoliVisu)
</td>
<td>
Other members of the
PoliVisu consortium
</td>
<td>
Royalty free usage
No right to sublicense
</td>
<td>
Under fair and reasonable conditions
</td>
<td>
GA Art. 26.2
</td> </tr>
<tr>
<td>
Any interested third party
</td>
<td>
Open access at flexible conditions
</td>
<td>
Open access at flexible conditions
</td>
<td>
GA Art. 29.3
</td> </tr> </table>
## Special provisions for big datasets
The PoliVisu DoA describes how big data from different sources – notably
available at city level, in relation to the nature of the identified project
pilots, dealing with mobility and traffic flows – can distinctively contribute
to the three processes of policy experimentation belonging to its Framework:
design, implementation and (real time) evaluation of policy solutions 7 .
Big data, as defined in ISO/IEC CD 20546, is data stored in "extensive datasets
− primarily in the characteristics of volume, variety, velocity, and/or
variability − that require a scalable architecture for efficient storage,
manipulation, and analysis". This may include ‘smart data’, i.e. coming from
sensors, social media, and other human related sources. This obviously raises
questions about data security and privacy, which are explicitly and
extensively dealt with in a dedicated WP (1) and will ultimately become part
of a policy oriented manual, issued in two consecutive editions as
Deliverables D7.4 (due at month 24) and D7.6 (due at month 32).
In another WP (4), the PoliVisu DoA extensively deals with the smart data
infrastructure for cities that is now going to be developed within the
project. This is based on the Warp 10 big data architecture and will set up
various data processing and analytical steps. The general principle and modus
operandi is that any (big) data can be used in any application, can be
analysed and correlated with other sources of data and can be used to provide
detection of patterns to understand the effective functioning of
infrastructures, transport systems, services or process within a city. The
processed and analysed big data will be published as map services. Free and
open source geospatial tools and services will be used to generate OGC
standards (especially WMS-T and WFS), TMS and vector tile based open formats
for integration in GIS applications.
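As a minimal illustration of how third parties could consume such a published
map service, the following sketch uses OWSLib, a common open-source OGC
client; the endpoint URL, layer name and bounding box are hypothetical
placeholders:

```python
from owslib.wms import WebMapService

# Hypothetical WMS endpoint; any OGC-compliant service published by the
# project's geospatial stack could be queried the same way.
wms = WebMapService("https://example.org/geoserver/wms", version="1.3.0")

img = wms.getmap(layers=["polivisu:traffic_volumes"],   # hypothetical layer
                 srs="EPSG:4326",
                 bbox=(13.2, 49.6, 13.5, 49.8),         # illustrative lon/lat window
                 size=(800, 600),
                 format="image/png",
                 transparent=True)

with open("traffic_volumes.png", "wb") as out:
    out.write(img.read())
```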
The existing OTN traffic modelling tool will be automated and ported to a big
data processing cloud to yield near-real-time traffic calculations. The
process will be calibrated to make the traffic model algorithms more accurate
(in space and time) using real time and historical traffic sensor data. System
interfaces and GUI will be developed to interact with the traffic modelling
software.
Existing crowdsourcing tools (such as Waze and plzni.to) will be adopted and
complemented with standard interfaces, protocols and data models to turn user
generated data into actionable evidence for policy making. New modules will be
designed for the SensLog open source library to support its integration with
big data technologies.
Data analytics functions and algorithms will be implemented to support policy
making processes. Social
Media analytics will be based on TruthNest. This tool will be extended with a
monitoring mechanism for Twitter contents that gathers any information on
mobility trends automatically and in real-time and sends alerts to users on
possible events.
Open source geospatial software (such as WebGLayer) will be used to realise
the big data visualisation. The tool will be extended with support for line
and area features. Advanced visualisation components will be added in the form
of multiple linked views, filters through interactive graphs, parallel
coordinates relationship analysis, map-screen extent filters, and area
selection. Focus will be set on the visualisation and filtering of mobility
related information and the comparison between different scenarios, time
periods and locations, in particular on mobile and touch devices.
The appropriate metadata will be defined for supporting the different tools
and processes in real life decision making conditions. This includes the
structures, services, semantics and standards to support big data, sensor
data, advanced analytics and linked data. Two open source metadata tools will
be considered in the project: GeoNetwork and Micka. The consortium will
contribute to the definition of integrated metadata standards in the OGC
metadata workgroup.
Considering the above scenario, as well as the DoA statement that “PoliVisu
will treat the data as confidential and will take every precaution to
guarantee the privacy to participants, i.e., ensuring that personal data will
be appropriately anonymised and be made inaccessible to third parties” (Part
B, p. 102), the natural implication is that a number of anonymisation,
aggregation, and blurring techniques must be tested well in advance and
applied to sourced and produced datasets, depending on the requirements of the
various project pilots. The results of this effort will be released as two WP4
Deliverables, notably a White Paper on data anonymisation issued in two
consecutive editions, D4.5 at month 24 and D4.6 at month 30.
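Purely as an illustration of two such techniques, the sketch below combines
keyed pseudonymisation with threshold-based aggregation; the field names, key
handling and threshold are hypothetical, and the actual approaches will be
those vetted in D4.5/D4.6:

```python
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"..."  # hypothetical project-held key, never released with the data

def pseudonymise(device_id: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymisation)."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()[:16]

def aggregate_counts(records, min_group_size=5):
    """Aggregate per-hour traffic counts per road segment, suppressing small
    groups that could allow re-identification (a simple k-anonymity-style
    threshold; the value 5 is illustrative only)."""
    counts = Counter((r["road_segment"], r["hour"]) for r in records)
    return {key: n for key, n in counts.items() if n >= min_group_size}
```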
However, due to the key role played by anonymization in the context of the
PoliVisu project and the need to balance privacy and security with the policy
(end user) requirements of having usable datasets for e.g. traffic flows
measurement, detection of trends, or sentiment analysis, it is highly
recommended that the contents of this section be updated and integrated when
the next edition of this DMP is published, notably at month 12 of the work
plan.
# PoliVisu Data Management Plan
In this Section, the data usage scenarios presented in the Introduction are
used as a basis for discussing the key issues to be examined in relation to
each distinct paragraph of the PoliVisu DMP. As a reminder, the three
scenarios, which jointly compose the PoliVisu’s data management lifecycle,
are:
* Original data produced by the PoliVisu consortium and/or individual members of it (e.g. during a dissemination action or a pilot activity);
* Existing data already in possession of the PoliVisu consortium and/or individual members of it prior to the project’s initiation;
* Existing data sourced/procured by the PoliVisu consortium and/or individual members of it during the project’s timeline.
On the other hand, the datasets handled within the three above scenarios can
belong to either of these three categories:
* Confidential data (for business and/or privacy protection);
* Anonymised and Public data (as explained in the Introduction, these two aspects go hand in hand);
* Non anonymised data (the residual category).
## Data summary
The following table summarizes the typologies and contents of data collected
and produced. For each distinct category, a detailed list will be provided in
the next edition of the DMP, due by month 12.
**Table 3. Summary of relevant data for the PoliVisu research agenda**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Raw survey/interview/sensor data
Evidence from project pilots
Personal data of end users
New contacts established
</td>
<td>
Summaries of surveys/interviews
Data in reports of pilot activities
End user data on public display
Contact data within deliverables
</td>
<td>
Photos/videos shot during public events
Audio recordings (e.g.
Skype)
Data in internal repositories
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Data embedded in some of the Background solutions
(see par. 2.4.2 above) Contact databases
</td>
<td>
Data embedded in some of the Background solutions (see par.
2.4.2 above)
Website logs and similar metrics
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Raw data in possession of the Cities or of any third party involved in the
pilots
</td>
<td>
Free and open data (including from scientific and statistical publications)
</td>
<td>
N/A
</td> </tr> </table>
The main implications of the above table for the three usage scenarios are the
following, in **decreasing** **order of urgency** for the related action
lines as well as **increasing** **order of gravity** for the consequences of
any inadvertent behaviour by the members of the consortium:
* The organisation of Living Lab experimentations (as foreseen by the project’s work plan) implies that personal data handling of the end users acting as volunteers must be carefully considered, also for their ethical implications.
* For any photos/videos shot during public events, it is crucial to collect an **informed** **consent note** 8 from all the participants, with an explicit disclaimer in case of intended publication of those personal images in e.g. newspapers, internet sites, or social media groups. This brings the data into the Confidential category, where storing and/or processing it for legitimate purposes is permitted.
* For any audio recordings stored, e.g. in the project’s official repository (currently Google Drive) or in individual partners’ repositories, care must be taken of the risk of involuntary disclosure and/or the consequences of misuse for any unauthorized purpose. Same goes for the personal data of each partner in the consortium.
* Informed consent forms must be signed (also electronically) by all participants in surveys, interviews and/or pilot activities. As an alternative option, the partner in charge will commit to anonymisation and other related measures as a way to protect the identity of the respondents/pilot users.
* Informed consent forms are also required when using available contacts (be they preexisting to the project or created through it) to disseminate information via e.g. newsletters or dedicated emails. In this respect, the GDPR provisions are particularly binding and must be carefully considered, at least in any doubtful case.
* As a general rule, access conferred to Background knowledge on a royalty-free basis during project execution does not include the right to sublicense. Therefore, each PoliVisu partner must ensure that licensing conditions are respected at all times and by every member of the team.
* This also applies to any dataset sourced or procured from third parties during the PoliVisu project’s lifetime.
## Data collection
The following table summarizes the procedures for collecting project related
data. For each distinct case, some concrete examples will be provided in the
next edition of the DMP, due by month 12.
**Table 4. Summary of PoliVisu data collection procedures**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Surveys
Interviews
Pilot activities
F2F / distant interaction
</td>
<td>
Newsletters
Publications
Personal Emails
Open Access repositories
</td>
<td>
Events coverage - directly or via specialised agencies
A/V conferencing systems
Internal repositories
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Seamless access and use during project execution
</td>
<td>
Seamless access and use during project execution
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Licensed access and use during project execution
</td>
<td>
Free and open access and use during project execution
</td>
<td>
N/A
</td> </tr> </table>
An implication of the above table, which may not have been evident in the
previous one, is that **every** **partner is responsible for the behaviour of
all team members** , which may also include subcontracted organisations (e.g.
specialised press agencies) or even volunteers. Delegating a job to such
parties does not exempt the delegating partner from responsibility in case of
improper application of extant norms and rules.
All data will be collected in a digital form – therefore CSV, PDF, (Geo)JSON,
XML, Shape, spreadsheets and textual documents will be the prevalent formats.
In case of audio/video recordings and images, the most appropriate standards
will be chosen and adopted (such as .gif, .jpg, .png, .mp3, .mp4, .mov and
.flv). Ontologies will be created in the Protégé file formats (.pont and
.pins); .xml/.owl formats can also be used. Website pages can be created in
.html and/or .xml formats.
Individually, each research output will be of a manageable size that can
easily be transferred by email. However, it is important to note that email
transfer can constitute a violation of confidentiality under certain
circumstances.
## Data processing
The following table summarizes the procedures for processing PoliVisu related
data that can be envisaged at this project’s stage. As one can see, most of
them make reference to the contents of paragraph 2.6 above. In this sense,
more can probably be added to the cells of the table. For this purpose,
however, some exemplary case descriptions will be provided in the next edition
of the DMP, due by month 12.
**Table 5. Summary of PoliVisu data processing procedures**
<table>
<tr>
<th>
**Nature of datasets**
**Data usage scenarios**
</th>
<th>
**Confidential**
</th>
<th>
**Anonymised and Public**
</th>
<th>
**Non anonymised**
</th> </tr>
<tr>
<td>
**Original data produced by the**
**PoliVisu consortium**
</td>
<td>
Anonymisation
Visualisation
</td>
<td>
Statistical evaluation
Visualisation
</td>
<td>
Selection/destruction
Blurring of identities
</td> </tr>
<tr>
<td>
**Existing data already in possession of the PoliVisu consortium and/or
partners**
</td>
<td>
Anonymisation
Statistical evaluation
Metadata generation
</td>
<td>
Visualisation
Analytics
Publication as map services
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
**Existing data sourced/procured by the PoliVisu consortium and/or partners**
</td>
<td>
Anonymisation
Statistical evaluation
Metadata generation
</td>
<td>
Visualisation
Analytics
Publication as map services
</td>
<td>
N/A
</td> </tr> </table>
Apart from the specific software listed in paragraph 2.6 above, state of the
art productivity tools will be used to process/visualize the data used or
generated during the project. Typically, the partners are left free to adopt
their preferred suite (such as Microsoft Office™ for PC or Mac, Apple’s iWork™
and OpenOffice™ or equivalent). However, the following tools are the ones
mainly used by the consortium:
* Google’s shared productivity tools (so-called G-Suite™) are used for the co-creation of outputs by multiple, not co-located authors.
* Adobe Acrobat™ or equivalent software is used to visualise/create the PDF files.
* Protégé™ or equivalent software is used to generate the ontologies.
* Photoshop™ or equivalent software are used to manipulate images.
* State of the art browsers (such as Mozilla Firefox™, Google Chrome™, Apple Safari™ and Microsoft Internet Explorer™) are used to navigate and modify the Internet pages, including the management and maintenance of social media groups.
* Cisco Webex™ or Skype™ (depending on the number of participants) are the selected tools for audio/video conferencing, which may also serve to manage public webinars.
* Tools like Google Forms™, and optionally SurveyMonkey™ and LimeSurvey™, are used for the administration of online surveys with remotely located participants.
* Dedicated Vimeo™ or YouTube™ channels can help broadcast the video clips produced by the consortium to a wider international audience, in addition to the project website.
* Mailchimp™ or equivalent software is helpful to create, distribute and administer project newsletters and the underlying mailing lists.
## Data storage
The following table summarizes the procedures for storing project-related
data, during and after the PoliVisu lifetime, and the most frequently used
repositories. As in the previous paragraphs, we limit ourselves here to
listing the headlines and commit to adding more content to the cases in the
next edition of the DMP, due by month 12.
**Table 6. Summary of PoliVisu data storage procedures**
| **Nature of datasets / Data usage scenarios** | **Confidential** | **Anonymised and Public** | **Non anonymised** |
|---|---|---|---|
| **Original data produced by the PoliVisu consortium** | Individual partner repositories; Common project repository | Project website; Open access repository | Individual partner repositories; Common project repository |
| **Existing data already in possession of the PoliVisu consortium and/or partners** | Specific software repositories | Playbox components; Map services | N/A |
| **Existing data sourced/procured by the PoliVisu consortium and/or partners** | Individual partner repositories; Third party repositories; Cloud repositories | Playbox components; Map services; Cloud repositories | N/A |
Google Drive™ is the selected tool for PoliVisu's data and information
repository. This includes both the project deliverables (with the relevant
references used for their production or generated from them as project
publications, e.g. journal articles, conference papers, e-books, manuals,
guidelines, policy briefs, etc.) and any other related information, including
relevant datasets. This implies that the privacy and security measures of
Google Drive™ must be GDPR compliant; verifying this is the responsibility of
the coordinator.
Additionally, the coordinator will make sure that the official project
repository periodically generates back-up files of all data, in case anything
gets lost, corrupted or becomes unusable at a later stage (including after the
project's end). The same responsibility falls to each partner for the local
repositories they use (in some cases these are handled by large organisations
such as universities or municipalities; in others, by SMEs or even on personal
servers or laptops).
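As a minimal sketch of what such a periodic back-up could look like (assuming a locally synchronised copy of the repository; all paths are hypothetical), a partner might run something like:

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations of a locally synchronised project repository
# and of the directory where timestamped back-ups are kept.
REPO = Path("/data/polivisu/repository")
BACKUPS = Path("/data/polivisu/backups")

def snapshot() -> Path:
    """Write a timestamped zip archive of the whole repository."""
    BACKUPS.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(BACKUPS / f"repo-{stamp}"), "zip", REPO)
    return Path(archive)

if __name__ == "__main__":
    print(f"Back-up written to {snapshot()}")
```

Scheduling such a script (e.g. via cron) and periodically testing that the archives can actually be restored would complete the duty described above.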
Collectively, we expect the whole set of outputs to reach a size of 500-600 GB
over the project duration. This range will particularly depend on the number
and size of the received datasets to be used for the execution of the PoliVisu
pilots.
Whatever license the consortium establishes for final datasets, their
intermediate versions will be deemed **business confidential** and restricted
to circulation within the consortium.
Finally, as stipulated in the DoA, each digital object identified as an R&D
result, including its associated metadata, will be stored in a dedicated open
access repository managed by POLIMI, for the purpose of both preserving that
evidence and making it more visible and accessible to the scientific, academic
and corporate world.
The next edition of this DMP will provide additional details on this open
access repository.
In addition to the POLIMI open access server, other datasets may be stored in
the following repositories:
* Cordis, through the EU Sygma portal
* The PoliVisu website (with links on/to the social media groups)
* Individual partner websites and the social media groups they are part of
* The portals of the academic publishers where scientific publications will be accepted
* Other official sources such as OpenAIRE/Zenodo ( _https://www.zenodo.org/communities/ecfunded/?page=1&size=20_ ) and maybe EUDAT ( _https://eudat.eu/what-eudat_ )
* Consortium's and partners' press agencies and blogs
* PoliVisu official newsletters.
## Data sharing
Last but not least, the following table summarizes the procedures for sharing
PoliVisu-related data in a useful and legitimate manner. When sharing, it is
of utmost importance to keep in mind not only the prescriptions and
recommendations of extant rules and norms (including this DMP) as far as
confidentiality and personal data protection are concerned, but also the risk
of voluntary or involuntary transfer of data from inside to outside the
European Economic Area (EEA).
In fact, while the GDPR also applies to the management of EU citizens'
personal data (for business or research purposes) outside the EU, not all
countries worldwide are subject to bilateral agreements with the EU as far as
personal data protection is concerned. For instance, US-based organisations
are bound by the so-called EU-U.S. Privacy Shield Framework, which concerns
the collection, use and retention of personal information transferred from the
EEA to the US. This makes the transfer of data from the partners to any
US-based organisation relatively exempt from legal risks. The same may not
hold in other countries worldwide, however, and the risk in question is less
hypothetical than one may think, if we consider the case of personal sharing
of raw data with e.g. academic colleagues who are abroad to attend a
conference. It is also for this reason that the sharing of non-anonymised data
is discouraged under any circumstances, as shown in the table.
**Table 7. Summary of PoliVisu data sharing procedures**
| **Nature of datasets / Data usage scenarios** | **Confidential** | **Anonymised and Public** | **Non anonymised** |
|---|---|---|---|
| **Original data produced by the PoliVisu consortium** | Personal email communication; Shared repositories | Project website; Open access repository | N/A |
| **Existing data already in possession of the PoliVisu consortium and/or partners** | Personal email communication; Shared access to software repositories | Shared access to Playbox components; Map services | N/A |
| **Existing data sourced/procured by the PoliVisu consortium and/or partners** | Personal email communication; Shared repositories | Shared access to Playbox components; Map services | N/A |
As for the above-mentioned procedures, additional case descriptions will be
provided in the next edition of the DMP, due by month 12.
# Conclusions and Future Work
This document is the first of a series of four planned deliverables concerning
the PoliVisu Data Management Plan (DMP), in fulfilment of the requirements of
WP2 of the project's work plan. The main reason for planning four versions of
the DMP (at months 6, 12, 24 and 36), and particularly two of them during the
first project year, is the need to wait until the development and piloting
activities of PoliVisu gain further momentum, in order to:
* Secure the current, proposed structure of contents against any changes suggested by the gradual and incremental start-up of the core project activities, and
* Enrich the existing contents with important add-ons based on the learning process that the PoliVisu partners will go through over the project's lifetime, considering also that most of the project work will be oriented to operationalizing the connection between data handling (including analytics and visualization) and the policy making cycle outlined in deliverable D3.2 (also under POLIMI responsibility, like the present one).
This edition of the DMP has, in our opinion, fulfilled the immediate goals of
such a stepwise approach to data management, by:
* Presenting the legislative and regulatory framework shaping the external context of this DMP, which is relatively immutable, at least within the timeframe of the PoliVisu project;
* Identifying the fundamental principles of FAIR data handling according to the EC requirements, which the PoliVisu consortium and individual partners are bound to respect;
* Proposing a unitary description of the PoliVisu data management lifecycle, a precise requirement of the DoA that has been the leitmotif and conceptual backbone of the whole document;
* Summarizing the key aspects of data collection, processing, storage and sharing (the typical contents of a DMP) within the proposed lifecycle elements, and particularly highlighting - first and foremost for the attention of the partners - some key aspects of data management that go beyond the operational link with open access policy (the likely reason why this deliverable has been assigned to POLIMI) and interact with privacy and security policies (an ethical topic falling under the competence of WP1), as well as with the way background knowledge and tools will be developed, deployed and customised to serve the needs of the city pilots (a topic entirely covered by the WP4 team).
For now, it would be a great result if this first edition of the PoliVisu DMP
enabled all partners to understand the different action items that handling
data of different natures, origins and "sizes" implies for anyone wanting to
stay in a "safe harbour" while actively contributing to the successful
achievement of pilot and project outcomes.
Admittedly, this document is still lacking in a variety of respects, which
will be gradually addressed in its forthcoming editions. Some of the contents
left unattended or only partly covered by this edition of the DMP include:
1. A timeline of partner contributions. Until now, the contents have been provided mainly by the responsible author (POLIMI), with the other partners acting as external reviewers. In the future, and especially from now until month 12, a collaboration plan must be designed, covering most of the aspects flagged by the small "signposts" placed here and there throughout the preceding text.
2. A clearer connection with data handling in other deliverables. Due to the tight connection between project activities and data management, the reader interested in getting full information on how the PoliVisu project deals with data should also refer, in addition to this DMP, to the following already published deliverables: D1.1 (Ethical Requirement No. 4), D1.3 (Ethical Requirement No. 3), D2.2 (Project Management Plan), D2.3 (Quality and Risk Plan), D6.1 (Pilot Scenarios), D7.1 (Evaluation Plan) and D8.1 (Impact Enhancement Road Map). Additional deliverables will be released until month 12. It therefore makes sense to coordinate the contents of these better and more explicitly, in order not to miss precious information while at the same time avoiding duplications and inconsistencies in the framing and reporting of this crucial theme.
3. While commenting on the TOC of this document about one month ago, some partners proposed a more detailed consideration of the following topics: open standards, open data licensing, and consortium-level policies. The latter aspect has been partly dealt with by reconstructing ex post some provisions of the GA and CA that are already binding for all partners. However, it is certainly worthwhile to make a more explicit and (to some extent) forward-looking plan of e.g. what kinds of licenses should be part of all the output categories making up the project results. It is also in that context that the issues of open standards and open data licenses (other than those belonging to the open access scheme) may be dealt with more extensively.
4. Another missing indication is certainly that of the partners responsible for the various steps of data management. At the moment, the crucial question of who is in charge of collecting, processing and storing data for each partner, or of deciding to limit or allow full access to some datasets, is the subject of future decision making, and will also depend on the maturity level of the pilot partners involved and on strategic decisions when designing the PoliVisu platform. This question is not trivial (simply equating the members of each partner team, or the heads of the teams, with the "people in charge" is by no means acceptable, as it takes too many things for granted, including the absence of hierarchies and other sorts of complexity within each partner's organisation). In fact, some internal work is ongoing within the consortium towards creating a working group of the Data Protection Officers of each participant organisation. However, there is more in between, and it will be the task of the next DMP edition to dig into the issue, thus contributing to the specialisation and clarification of the use cases now presented very superficially, in table form, in the preceding Section 3.
5. A final, indispensable aspect to be covered by a DMP is obviously the post-project scenario. What is the consortium's and individual partners' foresight of the management of pilot-related datasets and, more generally, of all the datasets created during the project's lifetime that - for legitimate reasons, first and foremost exploitation-related - are not subject to immediate publicity and may nonetheless require considerable attention and care to be maintained and preserved? Arguably the PoliVisu work plan is at too early a stage to enable a firm definition of these aspects. However, with the progress of activities (and time), we expect that the operational links created at pilot level between (big) data handling, the behaviours of people involved in the Living Lab experimentations, and the three stages of the PoliVisu policy cycle will start generating insights and enable the collection of evidence in view of the broader dissemination and exploitation phases of the project.
# Introduction
As described in the H2020 guidelines, research funding organisations, as well
as organisations undertaking publicly funded research, have an obligation to
optimise the use of the funds they have been granted. Part of this
optimisation is that data sets resulting from publicly funded research must be
made available to other researchers, either to verify the original results
(an integral part of a proper scientific approach) or to build upon them.
In order to achieve this high-level objective, a data management policy has to
be implemented and thoroughly followed by the CPaaS.io consortium as a whole,
even if not all CPaaS.io partners will be involved in all aspects of those
policies/principles.
The Data Management Plan (DMP) is a living document (with two formal versions
of the same deliverable released in M6 and M30 respectively) that describes
the data management policy (i.e. the management principles) and collected and
generated data sets. It covers all aspects introduced in the “Guidelines on
Data Management in Horizon 2020”, which are:
1. Precise description of the collected and generated data (nature of data, related domain ontologies, standards and data formats used,…)
2. Detail about various aspects of the data management (how it is stored, by whom, under which responsibility, how it is secured, how it is sustained and backed up)
3. Sharing principles (licensing, access methods,...)
4. Detail about how the privacy is maintained
This first version of the Data Management Plan gives a preliminary description
of the data as collected and generated by both the CPaaS.io platform and
project partners through their legacy systems. At the time of editing, some
aspects of data management are still under discussion, mainly because they
strongly depend on technical decisions pertaining to the CPaaS.io platform
design and on how this architecture deals with partners' legacy systems as far
as storage, backup and data flows are concerned.
Aspects such as data backup, sustainability, and details about data sharing
and archiving will be thoroughly developed in an intermediate and far more
complete version of this deliverable.
In this current version we mainly provide details (as known at M6) about the
scenarios and collected data (see Section 2) and the roles of partners as far
as data management in CPaaS.io is concerned (see Section 3).
Due to differences between the EU and Japanese formal contracts, and
differences in data-related rules and constraints, we have focussed in this
initial version on scenarios and partners from the EU only. However, this
living document will aim at harmonizing the different views through a single,
consultable internal document.
# CPaaS.io Research Data
This section introduces the different EU-side use cases as described in the
CPaaS.io Description of Work document and the applications built upon them. It
also describes the collected data (meaning the semantically annotated raw data
with no extra added value) and the generated data (meaning the semantic
value-added information built from the annotated raw data using various
techniques like analytics or reasoning). Part of the information described in
this section can be found in a more complete form in CPaaS.io deliverable D2.1
[1].
The two scenarios considered in CPaaS.io for the EU-only side are:
* Managing Fun and Sport events
* Waterproof Amsterdam
And the two derived applications are:
* Enhanced User Experience
* Waterproof Amsterdam
## Data from Enhanced User Experience application
### Short description
The core idea of this application is to use IoT sensors and analytics to
enhance people's experience while visiting or participating in a fun or sports
event. Wearables and mobile phones are used as sensors in order to learn about
the activities of event participants. Event participants may include members
of the audience, but also performing artists or athletes. For instance, AGT
has previously equipped referees and cheerleaders in basketball matches with
wearable sensors and created content based on the analysed data, for
consumption on site and for distribution via TV broadcasting, social media and
other digital distribution channels. Furthermore, the application uses sensors
deployed at the venue to measure and analyse fan behaviour and engagement.
### Data collected for the Enhanced User Experience application (Color Run)
Table 1 summarizes the data from the Enhanced User Experience application as
described in D2.1. Please note that although the hosting field specifies that
most of the data is hosted externally to the CPaaS.io platform, we are
considering using the platform's storage capabilities in the next iterations.
Further to the data sets described in D2.1, we have added an additional mobile
camera data set.
**Table 1: Data collected for Managing Fun and Sport events scenario**
| **Biometric data** | |
|---|---|
| **Detailed Description** | We will collect a range of biometric measurements from wearables such as wristbands, chest straps and smart sportswear, including heart rate, breathing rate, galvanic skin response, burned calories and skin temperature. |
| **OGD or private data** | Private |
| **Personal Data** | Yes |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | JSON |
| **Update Frequency** | Up to every 200 ms |
| **Update Size** | ~1 KB |
| **Data Source** | Sensor |
| **Sensor** | Wristband, chest strap, smart shirts |
| **Number of Sensors per person** | ~6 |

| **GPS Traces** | |
|---|---|
| **Detailed Description** | GPS traces include positional data, including altitude information, as delivered by GPS devices. |
| **OGD or private data** | Private |
| **Personal Data** | Yes |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | Common GPS formats (GPX, KML, CSV, NMEA) |
| **Update Frequency** | Up to every 1 s |
| **Update Size** | < 1 KB |
| **Data Source** | Sensor |
| **Sensor** | GPS sensor in wristbands and mobile phones |
| **Number of Sensors per person** | 1-2 |

| **Motion Data** | |
|---|---|
| **Detailed Description** | Motion data that measures hand and body movements based on accelerometer and gyroscope sensors. |
| **OGD or private data** | Private |
| **Personal Data** | Yes |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | JSON |
| **Update Frequency** | Up to every 16 ms |
| **Update Size** | ~200 bytes per sensor reading |
| **Data Source** | Sensors |
| **Sensor** | Accelerometer and gyroscope sensors of mobile phones, wristbands and other wearables |
| **Number of Sensors per person** | 2-3 |

| **Step Counts** | |
|---|---|
| **Detailed Description** | This data set contains step counts. |
| **OGD or private data** | Private |
| **Personal Data** | Yes |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | JSON |
| **Update Frequency** | Up to 1 Hz |
| **Update Size** | ~200 bytes per sensor reading |
| **Data Source** | Sensors |
| **Sensor** | Step count measurement of wristband |
| **Number of Sensors per person** | 1-2 |

| **Environmental Data** | |
|---|---|
| **Detailed Description** | This data set contains environmental data such as light intensity and barometric pressure, primarily collected from wearable sensors. |
| **OGD or private data** | Private |
| **Personal Data** | Yes (tbc) |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | JSON |
| **Update Frequency** | Up to 1 Hz |
| **Update Size** | ~200 bytes per sensor reading |
| **Data Source** | Sensors |
| **Sensor** | Sensors in wristband |
| **Number of Sensors per person** | 1-2 |

| **Mobile Camera videos** | |
|---|---|
| **Detailed Description** | This data set contains videos recorded by mobile cameras worn by Color Run participants. |
| **OGD or private data** | Private |
| **Personal Data** | Yes |
| **Hosting** | External |
| **Data Provider** | AGT |
| **Format** | MP4 |
| **Update Frequency** | 30 fps |
| **Update Size** | ~45 kbps |
| **Data Source** | Mobile Camera |
| **Sensor** | GoPro Hero4 Camera |
| **Number of Sensors per person** | 1 |
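To make the JSON format and the ~1 KB update size concrete, a single biometric reading could be serialised roughly as sketched below; the field names are our own illustration, not the actual wire format of the wearables.

```python
import json
import time

# Hypothetical shape of one biometric update (~200 ms cadence);
# all field names are illustrative, not the project's real schema.
reading = {
    "participant_id": "pseudonym-7f3a",  # pseudonymised, never a real name
    "timestamp_ms": int(time.time() * 1000),
    "sensors": {
        "heart_rate_bpm": 128,
        "breathing_rate_rpm": 24,
        "galvanic_skin_response_us": 4.7,
        "skin_temperature_c": 33.1,
        "calories_kcal": 212.5,
    },
}
payload = json.dumps(reading)
assert len(payload) <= 1024  # stays within the ~1 KB update size stated above
```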
### Data generated by the Enhanced User Experience application (Color Run)
The Enhanced User Experience application generates three types of data:
1. User Activity
2. Dominant Colour
3. Clothing Analysis
User activity is derived mainly from motion data and is therefore private
information. A user activity is always linked to a user and thus constitutes
personal information. Re-use of the data is possible within the boundaries
defined in the consent forms used to collect it.
Dominant Colour provides information about the prevailing colour in a video
feed and is used for detecting colour stations in the Color Run. The output is
a colour value, duration and location. The generated data can be provided in
anonymised form, but further examination is required to determine to what
degree it can be opened.
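As an illustration of what such a dominant-colour computation might look like, the sketch below quantises the colours of a frame and returns the most frequent one; this is a simplified stand-in, not AGT's actual analytics pipeline.

```python
import numpy as np

def dominant_colour(frame: np.ndarray, bins: int = 8) -> tuple:
    """Return the most frequent coarsely quantised RGB colour of a frame.

    `frame` is an (H, W, 3) uint8 array; each channel is quantised into
    `bins` levels before counting, so near-identical shades are merged.
    """
    step = 256 // bins
    quantised = (frame // step).reshape(-1, 3)
    colours, counts = np.unique(quantised, axis=0, return_counts=True)
    r, g, b = colours[counts.argmax()] * step + step // 2
    return int(r), int(g), int(b)

# Example: a synthetic frame that is uniformly pink.
frame = np.full((120, 160, 3), (255, 105, 180), dtype=np.uint8)
print(dominant_colour(frame))  # -> (240, 112, 176), the centre of the pink bin
```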
Clothing Analysis uses deep learning techniques to determine metrics based on
clothing styles derived from images. By nature, these metrics are linked to a
user and therefore constitute private data that can only be reused within the
boundaries of the consent forms used to collect it.
**Table 2: Data generated for the Enhanced User Experience application**
| **Types of generated data** | **Based on…** | **Anonymised (Y/N)** | **Open (Y/N)** |
|---|---|---|---|
| User Activity | Motion Data | N | Reusable, but not open |
| Dominant Colour | Mobile Camera Videos | Y | Reusable, but not fully open |
| Clothing Analysis | Mobile Camera Videos, Public Images | N | Reusable, but not open |
## Data from Waterproof Amsterdam
### Short description
Extreme rainfall and periods of continued drought are occurring more and more
often in urban areas. Because of the rainfall, peak pressure on a
municipality's sewerage infrastructure needs to be load-balanced to prevent
flooding of streets and basements. With drought, smart water management is
required to allow for optimal availability of water, both underground and
above ground.
The Things Network develops the Waterproof Amsterdam application, a software
tool creating a network of smart, connected rain buffers (rain barrels,
retention rooftops or other buffers) that can be both monitored and controlled
centrally by the water management authority. Third-party hardware providers
will connect their buffers to this tool for uplink and downlink data
transmission.
External data such as weather data and sewerage capacity are added in order
to calculate the optimal filling degree of each buffer and so operate a pump
or valve in the device. Waternet, the local water management company, will be
the main user of the application.
### Data collected for the Waterproof Amsterdam application
The section below lists the data sets used for the Waternet application. They
consist of device data (rain buffer information), public weather data and
government data about physical infrastructure. Device data will be stored in
the application and could be stored in CPaaS.io, especially as it contains
private data like the name and address of the device owner. At this stage,
however, we cannot determine whether this private data will be shared by the
vendors of the devices, who are also the ones maintaining them. They are the
only actor with direct contact with the end user and/or owner of the device.
(Historical) weather data is publicly available on the web, so there is no
need to store this data; it will be provided by a subscription data feed from
the web. The third data set is already owned and stored by Waternet, so there
is also no need for storage capabilities.
**Table 3: Data collected for the Waterproof Amsterdam scenario**
| **Weather data** | |
|---|---|
| **Detailed Description** | Upcoming weather, displaying periods of heavy rain or drought. |
| **OGD or private data** | OGD |
| **Personal Data** | No |
| **Hosting** | Platform |
| **Data Provider** | KNMI – Dutch weather forecast agency |
| **Format** | HDF5/JSON |
| **Update Frequency** | Hourly |
| **Update Size** | 20 KB |
| **Data Source** | Sensors |
| **Sensor** | Water sensor |
| **Number of Sensors** | Unknown |

| **Rain buffer information** | |
|---|---|
| **Detailed Description** | Specific information about each rain buffer (rooftop, barrel, underground storage): buffer size and type; filling degree; temperature; location; battery status; pump/valve capacity; active pump/valve hours; owner name, address and contact information. |
| **OGD or private data** | Private |
| **Personal Data** | Yes – anonymised and not open |
| **Hosting** | Platform |
| **Data Provider** | Rain buffer hardware provider |
| **Format** | JSON |
| **Update Frequency** | Hourly |
| **Update Size** | 10 B |
| **Data Source** | Sensors |
| **Sensor** | Water sensor or infrared sensor |
| **Number of Sensors** | 1 per buffer |

| **Sewerage processing capacity** | |
|---|---|
| **Detailed Description** | Geographical data on water infrastructure, depicting the remaining capacity of the sewerage. |
| **OGD or private data** | Private |
| **Personal Data** | No |
| **Hosting** | External |
| **Data Provider** | Waternet |
| **Format** | XML |
| **Update Frequency** | Hourly |
| **Update Size** | 1 KB |
| **Data Source** | Sensors, maps |
| **Sensor** | Water sensor |
| **Number of Sensors** | Unknown |
### Data generated by the Waterproof Amsterdam application

The Waterproof Amsterdam application generates different types of data:
1. Open/close command per buffer. This is the most important data generated, as it determines when an actuator inside a buffer should be operated (valve open or pump on). Based on all available data sources, an algorithm will determine which conditions are required to perform a certain command. The commands can be open and close, or a value in between, as different water discharge mechanisms have different capacities (i.e. a percentage of full capacity); a simplified sketch of such a rule is shown after this list.
2. Aggregated remaining buffer capacity per area. Waternet, as the primary user of the application, needs to monitor the total remaining capacity to buffer rain water, to understand whether there will be sufficient capacity to capture rain water in moments of heavy rainfall.
3. Aggregated litres of rain water processed per area. This is a metric used to show the impact the micro buffer network has generated over time. These insights may be used for PR and marketing purposes, to stimulate individuals and companies to also buy and install such rain buffers.
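By way of illustration only, the open/close rule from item 1 could be sketched as below; the thresholds and inputs are invented, and the application's real algorithm will be considerably richer.

```python
def buffer_command(fill_degree: float, rain_forecast_mm: float,
                   sewer_headroom: float) -> float:
    """Return a discharge command in [0, 1] (0 = closed, 1 = full capacity).

    Illustrative rule only: pre-emptively empty buffers before heavy rain,
    retain water in dry periods, and never exceed the sewerage headroom.
    """
    if rain_forecast_mm > 20.0:        # heavy rain expected: make room
        wanted = fill_degree           # discharge proportionally to the filling
    elif rain_forecast_mm < 1.0:       # drought-like: retain the water
        wanted = 0.0
    else:                              # mild rain: only relieve near-full buffers
        wanted = max(0.0, fill_degree - 0.8)
    return min(wanted, sewer_headroom)

print(buffer_command(fill_degree=0.9, rain_forecast_mm=35.0, sewer_headroom=0.6))
# -> 0.6: the command is capped by the remaining sewerage capacity
```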
The open data in the table below can be reused to perform analytics on
historical data, and could be made available through a public (graphical or
application programming) interface for third parties to interact with.
**Table 4: Data generated for the Waterproof Amsterdam application**
| **Types of generated data** | **Based on…** | **Anonymised (Y/N)** | **Open (Y/N)** |
|---|---|---|---|
| Open/close command per buffer | All data sets | Y | N |
| Aggregated remaining buffer capacity (street, area, city level) | Individual rain buffers' filling degree and location, map | Y | Y |
| Aggregated litres processed by the buffers | Individual rain buffer pump hours run and pump capacity, map | Y | Y |
# CPaaS.io Research Data management plan
The CPaaS.io project follows the principle that research data will be handled
and managed by the organisations/institutions that either collect or generate
it. The CPaaS.io project comprises a number of partners that are directly
involved in either:
* Producing the actual data during the trials, or
* Developing tools and enablers (e.g. analytics, reasoners, etc.) that are needed as core elements in the CPaaS.io system architecture, or
* Elaborating upon the produced data (using the aforementioned enablers) in order to produce new value-added knowledge.
The individual roles and duties of such partners and the research data
management plans that are in place in the organisations taking part in
CPaaS.io are described in the following sub-sections.
## AGT International (AGT)
### Data collection (from sensors)
The data collected by AGT has been described in Section 2.1 and is used for
generating the data described in Table 2 and for developing the Enhanced User
Experience application. As described in D2.2, the collected data is enriched
with additional metadata.
### Data generation
The data generated by AGT has been described in Table 2 and is used in the
Enhanced User Experience application.
### Data Management
We have implemented appropriate technical and organizational measures to
ensure generated data is protected from unauthorized or unlawful processing,
accidental loss, destruction or damage. We review our information collection,
storage and processing practices regularly, including physical security
measures, to guard against unauthorized access to our systems. We restrict
access to generated data to only those employees, contractors and agents who
strictly need access to this information, and who are subject to strict
contractual confidentiality obligations.
## University of Surrey (UoS)
ICS at the University of Surrey is involved neither in the production of raw
data nor in the exploitation or generation of higher-level information out of
it. Instead, UoS is focussing on architecture work, where particular attention
is paid to ensuring that (1) all privacy-related requirements are thoroughly
taken into account and (2) an important part of the data is publicly
available, following the project's Open Data policy.
In this respect, UoS is aiming at providing a bridge between CPaaS.io and
another FIRE project called FIESTA-IoT, two projects in which UoS is actively
involved. UoS will in particular aim at involving CPaaS.io either in the 2nd
Open Call of FIESTA-IoT or as a fellow contributor to that project, via a
cooperation agreement to be discussed between the two projects after both POs
have been consulted on the matter. In both cases, CPaaS.io could play two
non-exclusive, distinct roles:
* Data provider: playing this role, the CPaaS.io project would inject its data or part of its data (either raw data or inferred data) into FIESTA-IoT so that so-called experimenters can make use of it using the FIESTA-IoT enablers; or
* Experimenter: playing this role, CPaaS.io could reuse additional data sets produced by the FIESTA-IoT collaborators for testing our own new algorithms (e.g. analytics) and techniques.
### Data collection (from sensors)

UoS does not participate in any data collection.

### Data generation

UoS does not generate any new data from the project data sets.

### Data Management

UoS does not manage any gathered or generated data.
## Bern University of Applied Sciences (BFH)
The BFH is not directly involved in the implementation of the envisaged use
cases. Its main research focus is in the data management concepts – in
particular the usage of Linked Data and Open Government Data as well as data
quality annotations, the application of MyData approaches, and in the
validation of the use cases. Hence it is not collecting, generating or storing
any data.
However, as part of its exploitation, validation and knowledge transfer
activities, BFH is planning to connect some sensors via the LoRa testbed
network that another institute (Institute for Energy and Mobility Research in
Biel) is currently setting up. What data will be collected and for what
purposes exactly will be defined at a later stage; a related data management
plan will be drawn up before any data collection starts.
### Data collection
BFH is not collecting any data for the main use cases of CPaaS.io. It may
collect and make available some sensor data through the LoRa network at BFH
for testing and validation purposes; details will be determined at a later
stage.
### Data generation
BFH is not generating any data for the main use cases of CPaaS.io. It may link
public data sources (e.g., from the Swiss Open Government Data portal at
_www.opendata.swiss_ ) with the sensor data collected through the LoRa
network at BFH for testing and validation purposes; potential use cases will
be determined at a later stage.
### Data Management
BFH is not managing any data for the main use cases of CPaaS.io. Data
collected and generated for testing and validation purposes through the LoRa
network at BFH will likely be made publicly available, in the spirit of open
data research, unless the data could allow information about individuals to be
inferred. Details are to be determined at a later stage.
## OdinS
OdinS, as the partner involved in the security and privacy aspects, will
support the project in verifying that data access and sharing activities are
implemented in compliance with the privacy and data collection rules and
regulations, as applied nationally and in the EU, as well as with the H2020
rules. The results of the project will become publicly available based on the
IPRs as described in the Consortium Agreement.
Due to the nature of the data involved, some of the results generated by each
project phase will be restricted to authorized users, while other results will
be publicly available. Data access and sharing activities will be rigorously
implemented in compliance with the aforementioned rules and regulations.
### Data collection (from sensors)
OdinS will not be involved in the generation of data from sensors, working
exclusively on the architectural aspects of data collection and its
consequences for the security and privacy components.
### Data generation
OdinS is not involved in the production of raw data, but as part of Task 4.1
(User Empowerment Component Definition) and the definition of access control
policies and the user consent solution, OdinS will generate information
associated with data for controlling access and sharing data between the
entities and components that will use the platform.
### Data Management
As the raw data included in the data sources will be gathered from sensor
nodes and information management systems, it could be seen as highly
sensitive. Therefore, access to raw data can only take place between the
specific end users, based on the associated policies, and the partners
involved in the analysis of the data. For the models to function correctly,
the data will have to be included in the CPaaS.io repository. The results of
the data analytics are to be anonymised and made available to the subsequent
layers of the framework, which will then allow external industry stakeholders
to use the results of the project for their own purposes.
## NEC
NEC is not directly involved in the production of raw data. NEC's focus is on
the architecture (system integration, including transferability and semantic
interoperability) and on cloud-edge processing of the data. FIWARE resources
such as the Generic Enablers and NEC's IoT Platform can support the storage
and exploitation of data from use cases for generating higher-level analytical
results. NEC pays particular attention to privacy-related requirements as well
as to the Open Data policy of CPaaS.io.

### Data collection

NEC is not planning to collect any raw data for the use cases of CPaaS.io.
### Data generation
NEC is not generating data for the main use cases, but it may exploit shared
data from use cases and generate higher-level data as a result. Potential use
cases will be determined at a later stage.
### Data management
While NEC is not directly involved with the use cases, it will take part in
data transferability and management via the provided IoT Platform. NEC has
implemented the necessary organizational and technical measures for the usage
of the data and its protection from unauthorized persons.
## The Things Network
### Data collection (from sensors)

The data collected by The Things Network has been described in Section 2.2 and
is used for generating the data described in Table 4 and for developing the
Waterproof Amsterdam application. As described in D2.4, the collected data is
enriched with additional metadata.
### Data generation

The data generated by The Things Network has been described in Table 4 and is
used in the Waterproof Amsterdam application. Private data from owners of a
rain buffer is anonymised. Based on an algorithm, data from various sources is
processed by the application to determine the optimal filling degree for each
individual rain buffer. The results may be used for automated control of
buffers, or for push notifications to trigger manual control.
### Data Management

Open data such as weather data will be streamed into the application and not
stored locally. Private data from external sources, such as device location,
will be stored in the application and only released in an anonymised and
aggregated manner. Personal details about a device, such as name, address and
contact details, will also be stored in the application on a secure account
server. These data may be transferred to CPaaS.io at some point, easing
security and privacy demands on the application end and transferring those to
CPaaS.io.
Parts of the personal data, such as buffer location, size and processed
litres, will be released in an aggregated, anonymised manner (e.g. on a heat
map) per area of a city, or for the city as a whole.
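A minimal sketch of such an aggregated, anonymised release is shown below; the records and field names are invented, and the application's real aggregation logic is of course more involved.

```python
from collections import defaultdict

# Hypothetical per-buffer records; personal fields such as owner name
# and address are deliberately absent from anything that is released.
buffers = [
    {"area": "Oost", "litres_processed": 140.0},
    {"area": "Oost", "litres_processed": 95.5},
    {"area": "West", "litres_processed": 230.0},
]

def litres_per_area(records):
    """Aggregate processed litres per city area, e.g. for a public heat map."""
    totals = defaultdict(float)
    for record in records:
        totals[record["area"]] += record["litres_processed"]
    return dict(totals)

print(litres_per_area(buffers))  # -> {'Oost': 235.5, 'West': 230.0}
```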
Readily available data from Waternet about sewerage capacity will abide by
Waternet's policies, which are not yet clear at the moment.
We restrict access to generated data to only those employees, contractors and
agents who strictly need access to this information, and who are subject to
strict contractual confidentiality obligations.
# Conclusions & Next Steps
In this deliverable we presented the CPaaS.io approach towards data management
as handled by the EU CPaaS.io consortium. However, at this early stage (M6),
we do not yet have very precise information about the data collected or
generated by the project. Some functional aspects are also still under
discussion, which prevents us from giving much detail about the type and
location of data storage, backup procedures, techniques used for generating
data, and architecture-related details in general.
However, being a living document, future iterations of this deliverable (even
if not official deliverables) will provide an increasing level of detail about
all data sets collected and generated by the project (including the Japanese
part, in order to provide a complete view). We hope to also be able to
describe soon the prerequisites for reusing the public data sets, and possibly
concrete examples of such reuse by third parties (some contacts have already
been made with the FIESTA-IoT FIRE H2020 project, for instance).
1 Introduction
1.1 History of the document
2 Applied methodology
2.1 Dataset reference and name
2.2 Dataset description
2.3 Standards and metadata
2.4 Data sharing
2.5 Archiving and preservation
3 Datasets in InVID
3.1 WP2 Datasets
3.2 WP3 Datasets
3.3 WP4 Datasets
3.4 WP5 Datasets
3.5 WP6 Datasets
3.6 WP7 Datasets
3.7 WP8 Datasets
4 Summary
# Introduction
This deliverable presents the Data Management Plan of the InVID project. In
particular, it describes in detail the adopted management policy for the
datasets that will be collected, processed or generated by the project. The
utilized approach: (a) ensures that any sensitive data are kept safe, (b)
identifies whether and how the data will be exploited or made publicly
accessible so as to maximize their reuse potential, and (c) indicates how
these data will be curated and preserved, in accordance with the activities
described in Task T1.3 Quality, data and knowledge management.
The European Commission (EC) has defined a number of guidelines/requirements
for maximizing the reuse potential of scientific data, by making them easily
discoverable, intelligible, usable beyond the original purpose for which they
were collected, and interoperable to specific quality standards. Using these
guidelines as a basis, we apply the methodology that is outlined in Section 2.
According to this approach, for each dataset we specify: (a) its name (based
on a standardized referencing approach), (b) its description, (c) the utilized
standards and metadata, (d) the applicable data sharing policy and (e) the
intended actions for its archiving and preservation. Further explanation
regarding the information that needs to be considered and reported for each of
these features is given in Sections 2.1 to 2.5. Subsequently, based on this
methodology, Section 3 lists and describes the datasets of the InVID project
on a per-work-package basis (Sections 3.1 to 3.7). The concluding Section 4
briefly summarizes the information reported in the deliverable.
The InVID Data Management Plan is a working document that evolves during the
lifespan of the project. For this reason an updated version of the Data
Management Plan, enhanced by exploiting the findings and the decisions made as
the project proceeds, will be produced and delivered as part of deliverable
D1.3 titled "Updated Data, quality and knowledge management plan", which will
be submitted to the EC in Month 21 of the project (September 2017).
## History of the document
**Table 1: History of the document**
| **Date** | **Version** | **Name** | **Comment** |
|---|---|---|---|
| 11/02/2016 | V0.1 | E. Apostolidis, V. Mezaris, CERTH | Skeleton of the deliverable |
| 17/02/2016 | V0.2 | S. Papadopoulos, CERTH | Addition of a first list of WP3 datasets |
| 25/02/2016 | V0.3 | R. Garcia, UdL | Addition of WP4 dataset |
| 10/03/2016 | V0.4 | G. Innerwinkler, G. Rudinger, APA-IT | Addition of WP7 datasets |
| 11/03/2016 | V0.5 | D. Teyssou, AFP | Addition of WP8 Market Study and WP3 TVLogos datasets |
| 11/03/2016 | V0.6 | L. Nixon, MODUL | Addition of two WP2 datasets |
| 18/03/2016 | V0.7 | J. Spangenberg, R. Bouwmeester, T. Koch, DW | Addition of WP6 dataset |
| 22/03/2016 | V0.8 | A. Scharl, WLT | Addition of WP5 dataset |
| 04/04/2016 | V0.9 | E. Apostolidis, V. Mezaris, CERTH | Complete draft version |
| 06/04/2016 | V0.10 | E. Apostolidis, V. Mezaris, CERTH | Complete version submitted for Quality Assurance |
| 13/04/2016 | V0.11 | E. Apostolidis, V. Mezaris, CERTH | After QA version of the deliverable; input from partners requested |
| 28/04/2016 | V1.0 | E. Apostolidis, S. Papadopoulos, V. Mezaris, CERTH | Final document after Quality Assurance, submitted to the EC |
# Applied methodology
The applied methodology for drafting this initial Data Management Plan of the
project was based on the guidelines of the EC and on the DMPonline tool, which
can be used for implementing such a plan in a structured manner via a series
of questions that need to be clarified for each dataset of the project.
According to these guidelines, the Data Management Plan of the InVID project
addresses the points below on a per-dataset basis, reflecting the current
status within the consortium regarding the data that will be produced:
* Dataset reference and name
* Dataset description
* Standards and metadata
* Data sharing
* Archiving and preservation (including storage and backup)
A more detailed description of the information that is considered and reported
for each one of these subjects, is provided in the following subsections.
## Dataset reference and name
For convenient reference to the data that will be collected and/or generated
in the project, we defined a naming pattern. A referencing approach that
contains information about the WP that owns/uses the dataset, the serial
number of the dataset and the title of the dataset is the following:
_InVID_Data_"WPNo."_"DatasetNo."_"DatasetTitle"_ . According to this pattern,
an example dataset reference name could be
_InVID_Data_WP1_1_UserGeneratedContent_ .
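A small sketch of how such references can be generated and checked programmatically is given below; this is our own illustration, not tooling prescribed by the plan.

```python
import re

# Pattern: InVID_Data_"WPNo."_"DatasetNo."_"DatasetTitle"
PATTERN = re.compile(r"^InVID_Data_WP(\d+)_(\d+)_([A-Za-z0-9]+)$")

def dataset_name(wp_no: int, dataset_no: int, title: str) -> str:
    """Build and validate a dataset reference following the naming pattern."""
    name = f"InVID_Data_WP{wp_no}_{dataset_no}_{title}"
    if not PATTERN.match(name):
        raise ValueError(f"invalid dataset reference: {name}")
    return name

print(dataset_name(1, 1, "UserGeneratedContent"))
# -> InVID_Data_WP1_1_UserGeneratedContent
```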
## Dataset description
The description of the dataset that will be collected and/or generated
includes information regarding the origin (in case of data collection), nature
and scale of the data, as well as details related to the potential users of
the data. In later editions of this document, this section will also clarify
whether these data have been used in InVID to support a scientific publication
(as a general rule, we expect most of the InVID datasets to indeed support one
or more scientific publications). Information on the existence of similar data
and the possibilities for integration and reuse, if any, is also provided.
Last but not least, concerning the nature of the data, potential negative
effects on persons dealing with these data, due to mentally traumatic and/or
frustrating content, will also be highlighted in this section (at present,
this does not apply to any of the datasets listed in this document).
## Standards and metadata
This section outlines how the data will be collected and/or generated and
which community data standards (if any) will be used at this stage. Moreover
it provides information on how the data will be organized during the project,
mentioning for example naming conventions, version control and folder
structures. For a detailed overview of the used standards the following
questions were considered:
* How will the data be created?
* What standards or methodologies will be used?
* Which structuring and naming approach will be applied for folders and files?
* How different versions of a dataset will be easily identifiable?
In addition, this section reports the types of metadata that will be created
to describe the data and aid their discovery. Information about how this
metadata will be created/captured and where it will be stored is also
reported. The aspects below were examined for determining the necessary ways
and types of generating and using metadata:
* How are these metadata going to be captured/created?
* Can any of this information be created automatically?
* What metadata standards will be used and why?
## Data sharing
This point describes how the collected and/or generated data will be shared.
For this, it reports on access procedures and embargo periods (if any), and
lists technical mechanisms and software/tools for dissemination and
exploitation/re-use of these data. Moreover it determines whether access will
be widely open or restricted to specific groups (e.g. due to participant
confidentiality, consent agreements or Intellectual Property Rights (IPR)),
while it outlines any expected difficulties in data sharing, along with causes
and possible measures to overcome these difficulties. In case a dataset cannot
be shared, the reasons for this are mentioned (e.g. ethical, rules of personal
data, intellectual property, commercial, privacyrelated, security-related).
Last but not least, identification of the repository where data will be
stored, indicating in particular the type of repository (institutional,
standard repository for the discipline, etc.) is also performed. The questions
bellow were studied for concluding to the most appropriate sharing policy for
each dataset of the project:
* How these data are going to be available to others?
* With whom will the data be shared, and under what conditions?
* Are any restrictions on data sharing required (e.g. limits on who can use the data, when and for what purpose)?
* What restrictions are needed and why?
* What actions will be taken to overcome or minimise restrictions?
* Where (i.e. in which repository) will the data be deposited?
## Archiving and preservation
The established data archiving and preservation policy defines the procedures
that will be put in place for long-term preservation of the data. In
particular, it indicates how long the data will be preserved and what their
approximate end volume is. It also outlines the plans for preparing and
documenting data for sharing and archiving. In case an established repository
is not used, the Data Management Plan demonstrates the resources and systems
that will be in place to enable the data to be curated effectively beyond the
lifetime of the grant.
A set of questions that were considered for defining the archiving and
preservation policy for the datasets of the project is given below:
* What is the long-term preservation plan for the dataset (e.g. deposit in a data repository)?
* Are any additional resources needed to deliver our plan?
* Is there sufficient storage and equipment, or might more be needed?
# Datasets in InVID
This section lists the datasets that will be created or collected for the
needs of the InVID project, grouping them on a per-work-package basis. Based
on the methodology presented in Section 2, each dataset is defined by: (a) its
name, (b) its description, (c) the used standards and accompanying metadata,
(d) the applied data sharing policy, and (e) the adopted mechanisms for its
archiving and preservation.
privacy issues will be closely monitored from the beginning of the project,
and the project’s Data Protection Officer (Mr. Max Göbel from WLT) as well as,
where necessary, the external Ethics Board with be consulted on this, to
ensure that the collection, use and sharing of the data will not raise ethical
concerns.
As a general statement about the adopted data collection and management policy
for the datasets reported in the following subsections, we would like to
declare that InVID is a scientific project. Therefore, any use of third-party
copyrighted material within its scope is meant to be made for
scientific purposes and under the exception set forth in article 5.3.a of the
Directive 2001/29/EC of the European Parliament and of the Council of 22 May
2001 on the harmonisation of certain aspects of copyright and related rights
in the information society. In order to establish the licensing needs of the
project, should it become a commercial one, as well as any personal data
issues that need to be addressed, each WP will consider any copyright,
personal data and/or contractual limitations that apply to the media,
software and/or data involved in its study. These limitations will be
studied in order to provide recommendations on any agreements with the main
services/platforms where User Generated Video (UGV) is found and/or with
owners of such content that may be deemed necessary for the InVID tools to be
able to treat such contents and deliver their verification and licensing
outputs to the media industry.
## WP2 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP2_1_TRECVID**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset is provided by NIST to the participants of the TRECVID SIN
and MED tasks. It will be used for developing technologies for video
annotation with visual concept and event labels. The dataset is divided into two
main parts.
The first part consists of approx. 18500 videos (354 GB, 1400 hours) under a
Creative Commons (CC) license, in MPEG-4/H.264 format, and it is
typically partitioned into a training set (approx. 11200 videos, 10 seconds to 6.4
minutes long; 210 GB, 800 hours total) and a testing set (approx. 7300 videos,
10 seconds to 4.1 minutes long; 144 GB, 600 hours total) for video concept
detection methods. The total number of concepts is 346, and the annotation of
each of these videos is based on a pair of XML and TXT files; the XML file
contains information about the shot segments of the video and the TXT file
includes the shot-level concept-based annotation of the video via a number of
positive and negative concept labels. Finally, a TXT file with metadata
describing relations between these concepts, in the form of "concept A
implies concept B" and "concept A excludes concept B", is also available (a
parsing sketch is given after this table).
The second part is a collection of approx. 63000 videos (736 GB, 2520 hours)
in MPEG-4/H.264 format, created by the Linguistic Data Consortium and
NIST. It is used for the development of video event detection techniques and
is divided in three subsets: (a) a training set with 3000 (50 GB, 80 hours)
positive or near-miss videos, and 5000 (51 GB, 200 hours) background (i.e.,
negative) videos, (b) a validation set of 23000 videos (272 GB, 960 hours),
and (c) an evaluation set of 32000 videos (363 GB, 1280 hours). The number of
considered events is 20, and the ground truth for this collection is stored in
CSV files. These files provide the event-based annotations of the videos by
defining the list of positive or near-miss videos for each visual event.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The videos of this static dataset are in MPEG-4/H.264 format, while their
annotations and metadata are in TXT, XML and CSV files. The generated results
after processing this dataset (extracted features, if any; automatic
annotation results) will be stored in XML, JSON and MPEG-7 formats. They will
be accompanied by a document (a word or pdf file) containing metadata with
sufficient information to: (a) link it to the research publications/outputs,
(b) identify the funder and research discipline, and (c) appropriate key words
to help users to locate the data.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
This is a dataset created and provided to us by NIST, under specific
conditions
that are linked with the TRECVID benchmarking activity. Sharing of the dataset
is regulated by NIST, and we will comply with their requirements. We are not
allowed to further share this dataset with third parties. We can, however, and
will share the results of our processing of the dataset (automatic annotation
results in XML, JSON and MPEG-7 formats) via the free-of-charge OpenAIRE
or Zenodo platforms, under the express conditions that the data is used
solely for the purposes of evaluating concept detection algorithms and may not
be copied and re-used for any other purpose.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The original dataset and the analysis results will be stored on the file
servers of CERTH (protected by applying the commonly used security measures
for preventing unauthorized access and ensuring that security software is
up-to-date with the latest released security patches) and backup provisions will
be made. Moreover, as stated above, a set of processing outcomes of this
dataset will be also made available on the free-of-charge OpenAIRE or Zenodo
platforms.
</td> </tr> </table>
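To illustrate how the concept-relations file described above could be consumed, the following is a minimal Python sketch. It assumes a relations file with one relation per line in the form `conceptA implies conceptB` or `conceptA excludes conceptB`; the exact TRECVID file layout may differ, so this format is an assumption for illustration only.

```python
# Minimal sketch: parse "implies"/"excludes" concept relations and check a
# set of positive shot-level labels for consistency. The line format is an
# assumption; adapt it to the actual TRECVID relations file.
from collections import defaultdict

def load_relations(path):
    """Parse implies/excludes relations into two adjacency maps."""
    implies = defaultdict(set)
    excludes = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip malformed or comment lines
            a, rel, b = parts
            if rel == "implies":
                implies[a].add(b)
            elif rel == "excludes":
                excludes[a].add(b)
    return implies, excludes

def consistent(labels, implies, excludes):
    """Check a set of positive labels against the relations."""
    for a in labels:
        if not implies[a] <= labels:   # every implied concept must be present
            return False
        if excludes[a] & labels:       # no excluded concept may be present
            return False
    return True

# Usage with hand-made relations:
implies = defaultdict(set, {"dog": {"animal"}})
excludes = defaultdict(set, {"indoor": {"outdoor"}})
print(consistent({"dog", "animal"}, implies, excludes))  # True
```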
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP2_2_ImageNet**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset contains images of the online ImageNet collection, which is
organized and managed by the Stanford and Princeton Universities. It will be
used for building and training Deep Convolutional Neural Networks (DCNNs) for
video concept detection. In particular, ImageNet is an image dataset organized
according to the WordNet hierarchy (currently only the nouns); for each
node of the hierarchy, related images (often several hundreds or thousands of
them) are provided. The current dataset is the one released in fall 2011 and
is an updated version of the initial collection. It contains approx. 15
million images in high resolution JPEG format, which are clustered in
categories that correspond to 22000 distinct concepts of the WordNet
structure.
Images of each concept are quality-controlled and human-annotated.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This static dataset is composed of images that are mainly in high resolution
JPEG format. The metadata created after analyzing these images can be: (a)
local features extracted from these images, which are stored in BIN or TXT
files, and (b) the output of the trained DCNNs (i.e., the classification
decision), which is stored in TXT files. These data will be accompanied by a
decision), which is stored in TXT files. These data will be accompanied by a
document (a word file) containing metadata with sufficient information to: (a)
link it to the research publications/outputs, (b) identify the funder and
discipline of the research, and (c) appropriate key words to help internal
users to locate the data.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The ImageNet dataset is freely available for non-commercial research and/or
educational use, by following the procedure and adopting the terms of use that
are described on the ImageNet website.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The original dataset and the results of processing it will be stored on the
file servers of CERTH (protected by applying the commonly used security
measures for preventing unauthorized access and ensuring that security
software is up-to-date with the latest released security patches) and backup
provisions will be made. The archiving and preservation of this dataset are
performed by the Stanford and Princeton Universities; InVID will have no
involvement in this process.
</td> </tr> </table>
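As a small illustration of how such a collection could be indexed for DCNN training, the sketch below assumes the common layout of one subdirectory per WordNet synset holding the JPEG images; the root path and layout are illustrative assumptions, not something prescribed by this plan.

```python
# Minimal sketch: index a folder-per-synset image collection and build
# (image_path, label) pairs for training a concept-detection DCNN.
from pathlib import Path

def index_imagenet(root):
    """Map each synset id (directory name) to its list of JPEG image paths."""
    index = {}
    for synset_dir in sorted(Path(root).iterdir()):
        if synset_dir.is_dir():
            index[synset_dir.name] = sorted(synset_dir.glob("*.JPEG"))
    return index

if __name__ == "__main__":
    index = index_imagenet("/data/imagenet_fall11")  # hypothetical path
    samples = [(img, synset) for synset, imgs in index.items() for img in imgs]
    print(f"{len(index)} concepts, {len(samples)} images")
```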
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP2_3_TopicDetection**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset is intended for the benchmark evaluation of the topic detection
results produced in the InVID project. For a baseline, we will have one set of
documents which contains 24 hours of collected news articles from English
international media, together with a ground truth annotation of topics which
emerge in this collection. For topic detection from Twitter streams we will
have another set of documents in the dataset: a collection of Twitter
content (from the Streaming API) over a 24-hour period.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This static dataset will be an index of JSON serialised documents, where each
document captures the textual content and metadata (e.g. date-time published)
for one news article or tweet, according to the webLyzard document model. The
ground truth will be stored in a file as a description of the newsworthy
topics which occur in the dataset.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
This dataset will be generated from the documents crawled in a 24-hour period by
the webLyzard platform. The resulting data will be made available to third
parties under the express conditions that the data is used solely for the
purposes of evaluating topic detection algorithms and may not be copied and
re-used for any other purpose.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset will be stored persistently (i.e. guaranteed until project's end
and planned to be kept also after the project for an undefined period of time)
on a MODUL University server (protected by applying the commonly used security
measures for preventing unauthorized access and ensuring that security
software is up-to-date with the latest released security patches), and on
request can be made available for download.
</td> </tr> </table>
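For illustration, the following Python snippet sketches what one JSON-serialised document of this kind could look like; the field names are our own illustrative assumptions and do not reproduce the actual webLyzard document model.

```python
# Minimal sketch of one JSON-serialised document (news article or tweet)
# with textual content, metadata and a ground-truth topic annotation.
import json
from datetime import datetime, timezone

doc = {
    "id": "tweet-123456789",  # hypothetical identifier
    "source": "twitter",      # or e.g. "news" for a crawled article
    "published": datetime(2016, 5, 1, 12, 30, tzinfo=timezone.utc).isoformat(),
    "title": "",              # empty for tweets
    "text": "Breaking: example newsworthy content ...",
    "topic_label": "example-topic",  # ground-truth annotation
}
print(json.dumps(doc, indent=2))
```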
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP2_4_SocialMediaRetrieval**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset is intended for the benchmark evaluation of the social media
retrieval produced in the InVID project. It will consist of a set of social
media postings collected from different social networks as a result of
different general queries on named entities that are in the news at that time,
e.g. the name of a celebrity, or a geographical location. A ground truth
annotation will tag which posts in the dataset are directly related to a news
story about the named entity.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This static dataset will be an index of JSON serialised documents, where each
document captures the textual content and metadata (e.g. date-time published)
for one social media posting, according to the webLyzard document model and
extended with the ground truth annotation with the news story the posting is
directly related to.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
This dataset will be generated from the documents queried in a 24-hour period
by the webLyzard platform. The resulting data will be made available to third
parties under the express conditions that the data is used solely for the
purposes of evaluating social media retrieval and may not be copied and reused
for any other purpose.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset will be stored persistently (i.e. guaranteed until project's end
and planned to be kept also after the project for an undefined period of time)
on a MODUL University server (protected by applying the commonly used security
measures for preventing unauthorized access and ensuring that security
software is up-to-date with the latest released security patches), and on
request can be made available for download.
</td> </tr> </table>
## WP3 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_1_WildWebTamperedImages**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset was collected by CERTH within the REVEAL project. It will be used
for testing the existing image forensics capabilities offered by TUNGSTEN. Its
description is available on: _http://mklab.iti.gr/project/wild-web-tampered-
image-dataset_
The dataset contains 80 cases of forgeries, all confirmed from multiple
reliable sources and with the help of the original photographs, where
available. For each forgery, the dataset contains all instances that we could
find on the Web using the Google and TinEye reverse image search services. The
downloaded files went through a hash comparison to filter out exact file
duplicates. After this step, the entire collection contains 13,577 unique
images. After further removing images that were considered inappropriate for
the task of evaluating image tampering detection algorithms, 10,870 images
remain. In addition, the dataset contains manually created masks
corresponding to the tampered area (ground truth).
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The root folder of this static dataset contains two subfolders: WildWeb and
UnsplicedSources. The former contains 90 subfolders, each containing one
subcase. The naming convention is, in all cases, the name of the case,
followed by a number, if multiple subcases exist. Within each such folder are
the images, plus two subdirectories. The first subdirectory, called Mask,
contains all the mask files for the subcase, in the form of PNG images, with
white (255) corresponding to the tampered region and black (0) to the rest of
the image pixels. The second subdirectory, called Crops – PostSplices,
contains all cropped and re-spliced versions of the subcase.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Due to copyright considerations, the dataset is not publicly available.
However, for research purposes, the dataset creator may share the dataset
following an electronic request by interested parties.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The original dataset and the results of processing it will be stored on the
file servers of CERTH (protected by applying the commonly used security
measures for preventing unauthorized access and ensuring that security
software is up-to-date with the latest released security patches) and backup
provisions will be made.
</td> </tr> </table>
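Given that the ground-truth masks are PNG images with white (255) marking the tampered region, a tampering-localization result could be scored as in the following Python sketch (using Pillow and NumPy); the file name used is hypothetical.

```python
# Minimal sketch: load a ground-truth mask and score a predicted binary
# mask with intersection-over-union (IoU). White (>=128) = tampered pixel.
import numpy as np
from PIL import Image

def load_mask(path):
    """Load a PNG mask as a boolean array (True = tampered pixel)."""
    return np.asarray(Image.open(path).convert("L")) >= 128

def iou(pred, truth):
    """Intersection-over-union between two boolean masks of equal shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = load_mask("WildWeb/case_01/Mask/mask_01.png")  # hypothetical file name
print("tampered fraction of image:", truth.mean())
```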
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_2_InVidFakeVideos**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will be collected for testing a number of verification
approaches. It will be composed of a set of videos that have been found to be
fake (or misleading). For each video the dataset will contain: the source
(link where the video was found), metadata about the video (both embedded in
the video file and available from the platform hosting the video), contextual
information (e.g. website(s) or social media posts where the video appeared).
In addition, we consider including in the dataset annotations that journalists
produce during the verification process.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
A simple and lightweight annotation scheme will be defined to accommodate the
needs of this corpus. The serialization format will most likely be JSON to
enable easy parsing, extensibility and ease of storage and retrieval. The
dataset will be versioned by the WP3 leader (CERTH).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Since the corpus will be collected by the InVID consortium, we will consider
making it publicly available. However, since part of the data will come from
third party platforms (e.g. YouTube, Twitter, etc.), we will first need to
investigate the legal constraints and issues that may arise from such an
action.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The original dataset and the results of processing it will be stored on the file
servers of CERTH (protected by applying the commonly used security measures
for preventing unauthorized access and ensuring that security software is up-
to-date with the latest released security patches) and backup provisions will
be made.
</td> </tr> </table>
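Since the annotation scheme is still to be defined and JSON is the likely serialization, the following Python sketch shows one possible record; every field name here is an illustrative assumption, not the scheme that the WP3 leader will eventually specify.

```python
# Minimal sketch of one possible record in the planned fake-video corpus.
import json

record = {
    "video_url": "https://www.youtube.com/watch?v=EXAMPLE",  # source link
    "verdict": "fake",  # e.g. fake | misleading
    "embedded_metadata": {"duration_s": 93, "codec": "h264"},
    "platform_metadata": {"upload_date": "2016-03-02", "uploader": "user123"},
    "context": ["https://example.org/post-sharing-the-video"],
    "journalist_notes": "Scene geolocated to a different city than claimed.",
}
print(json.dumps(record, indent=2))
```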
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_3_VisualGeometryGroupDatasets**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This refers to two datasets from the Visual Geometry Group, namely the Oxford
buildings ( _http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/_ ) and the
Paris dataset ( _http://www.robots.ox.ac.uk/~vgg/data/parisbuildings/_ ) .
These datasets have been extensively used to test similarity-based search
approaches and are hence considered among the standard benchmarks for
assessing the InVID near-duplicate search solution.
The Oxford Buildings Dataset consists of 5062 images collected from Flickr by
searching for particular Oxford landmarks. The collection has been manually
annotated to generate a comprehensive ground truth for 11 different landmarks,
each represented by 5 possible queries. This gives a set of 55 queries over
which an object retrieval system can be evaluated. The Paris Dataset consists
of 6412 images collected from Flickr by searching for particular Paris
landmarks.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Each of these two static datasets consists of a set of image files (from
Flickr) and ground truth in custom text format.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The datasets are available from the dedicated pages of the Visual Geometry
Group, and hence no further sharing is foreseen within InVID.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The datasets are stored and maintained by the Visual Geometry Group on a
dedicated dataset page: _http://www.robots.ox.ac.uk/~vgg/data/_
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_4_InriaDatasets**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This refers to two datasets available from INRIA, namely the Holidays and
Copydays datasets. These are expected to be useful for evaluating the
near-duplicate detection solution of InVID.
The Holidays dataset is a set of images which mainly contains some of the
creators’ personal holiday photos. The remaining ones were taken on purpose to
test the robustness to various attacks: rotations, viewpoint and illumination
changes, blurring, etc. The dataset includes a very large variety of scene
types (natural, man-made, water and fire effects, etc.) and images are in high
resolution. The dataset contains 500 image groups, each of which represents a
distinct scene or object. The first image of each group is the query image and
the correct retrieval results are the other images of the group.
The Copydays dataset is a set of images which is exclusively composed of the
creators’ personal holiday photos. Each image has suffered three kinds of
artificial attacks: JPEG compression, cropping and "strong" attacks. The
motivation is to evaluate the behavior of indexing algorithms for the most
common image copies.
More information is available on: _https://lear.inrialpes.fr/~jegou/data.php_
.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This static dataset contains: (a) the images themselves, (b) the set of
descriptors extracted from these images, (c) a set of descriptors produced,
with the same extractor and descriptor, for a distinct dataset (Flickr60K),
(d) two sets of clusters used to quantize the descriptors (again obtained from
Flickr60K), (e) some pre-processed feature files for one million images, that
were used by the dataset creators to perform the evaluation on a large scale.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The datasets are available from the dedicated page of INRIA and hence no
further sharing is foreseen within InVID.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The datasets are stored and maintained by INRIA on a dedicated dataset page:
_https://lear.inrialpes.fr/~jegou/data.php_ .
</td> </tr> </table>
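Evaluation on these datasets is commonly reported as mean average precision (mAP) over the queries; for Holidays, each group's first image queries for the other images of its group, as described above. The following Python sketch computes average precision for one query; mAP is then simply the mean of the per-query values.

```python
# Minimal sketch: average precision (AP) for one retrieval query.
def average_precision(ranked_ids, relevant_ids):
    """ranked_ids: retrieval order; relevant_ids: set of correct results."""
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranked_ids, start=1):
        if item in relevant_ids:
            hits += 1
            precision_sum += hits / rank   # precision at each hit
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

# Usage: relevant items "b" and "c" retrieved at ranks 1 and 3.
ap = average_precision(["b", "x", "c"], {"b", "c"})
print(round(ap, 3))  # (1/1 + 2/3) / 2 = 0.833
```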
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_5_CCWEBVIDEO**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
The dataset is called CC_WEB_VIDEO, named by the initials of City University
of Hong Kong and Carnegie Mellon University, and was collected from
the video sharing website YouTube and the video search engines Google Video
and Yahoo! Video. It will be used for evaluating the near-duplicate detection
solution of InVID.
This static dataset was collected by considering 24 queries designed to
retrieve the most viewed and top favorite videos from YouTube. Each text query
was issued to YouTube, Google Video, and Yahoo! Video respectively. The videos
were collected in November 2006. Videos with a duration over 10 minutes
were removed from the dataset. The final dataset consists of 12,790 videos.
More information is available on: _http://vireo.cs.cityu.edu.hk/webvideo/_ .
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Links to the videos, metadata and ground truth information are stored in
simple text files, which are further described in the dataset page.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The dataset is available from the dedicated page of City University Hong Kong,
and hence no further sharing is foreseen within InVID.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset is stored and maintained by City University Hong Kong on a
dedicated page: _http://vireo.cs.cityu.edu.hk/webvideo/_ .
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_6_MediaevalVerifyingMultimediaUse**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This is a dataset consisting of tweets spreading both fake and real images and
videos. It has been used as a benchmark in the Verifying Multimedia Use task
in Mediaeval 2015. It is expected to be of interest for testing contextual
verification approaches. The dataset was collected in a semi-automatic way, by
first manually collecting a set of known cases of images and videos and then
automatically collecting tweets that shared those images/videos. Data
cleaning has also been performed using manual inspection.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The dataset comprises a set of tweet ids associated with basic metadata and
ground truth information. All information is serialized in simple tab-
separated text files.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The dataset is available on:
_https://github.com/MKLab-ITI/image-verification-corpus_
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset will continue to be maintained on GitHub.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_7_YFCC100M**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This is a dataset consisting of 99 million CC-licensed Flickr images and one
million videos. It is currently the largest publicly available multimedia
dataset. We primarily foresee its usage for the purpose of evaluating location
detection approaches (relevant for T3.3), since a large percentage of the
images and videos are geo-located. In addition, the dataset has been
extensively used within the Placing Task of Mediaeval.
More details on the dataset are available on the following article from
Communications of the ACM:
_http://cacm.acm.org/magazines/2016/2/197425-yfcc100m/fulltext_
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This static dataset comprises the metadata of the images in tab-separated text
file format. Furthermore, some extensions of the dataset available from
_http://mmcommons.org_ include the original images, visual features extracted
from the images and audio features extracted from the videos.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The dataset is available through the Yahoo Research WebScope program, while
several extensions to the dataset are available at _http://mmcommons.org_ .
Hence, no further sharing is foreseen within InVID.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset is stored and maintained by Yahoo Research through their WebScope
program: _https://webscope.sandbox.yahoo.com/catalog.php?datatype=i&did=67_
Furthermore, the Lawrence Livermore National Laboratory hosts several
extensions of the dataset on:
_https://multimediacommons.wordpress.com/features/_
</td> </tr> </table>
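As a small usage illustration, the Python sketch below filters the tab-separated metadata down to geo-located entries; the column indices are hypothetical placeholders and must be checked against the dataset documentation.

```python
# Minimal sketch: keep only rows of the tab-separated metadata that carry
# both longitude and latitude values. Column positions are assumptions.
import csv

LON_COL, LAT_COL = 12, 13  # hypothetical column indices

def geolocated_rows(path):
    """Yield only rows that carry both longitude and latitude values."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) > LAT_COL and row[LON_COL] and row[LAT_COL]:
                yield row

count = sum(1 for _ in geolocated_rows("yfcc100m_dataset.tsv"))
print(f"{count} geo-located entries")
```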
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP3_8_TVChannelsLogos**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will be built for the needs of task T3.2, which is related to the
collection of logos of TVs and user-generated channels on video platforms,
along with the name and a description of the channel, a DBPedia URI if
available, and tags. We intend to use this dataset to assess the performance
of methods for automatically recognizing logos in videos.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The dataset will be stored in a schemaless database and exposed as a web
service to display relevant information on the channel’s logos in the InVID
verification platform. Moreover, a spreadsheet that will be versioned by AFP
will be used as an index of this dataset, storing for each logo its name, a short
description and (potentially) a number of indicative images.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
As part of the dissemination and exploitation strategy, we will consider
exposing publicly the dataset as an API and/or a web tool.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset will be stored and maintained on the file servers of CERTH
(protected by applying the commonly used security measures for preventing
unauthorized access and ensuring that security software is up-to-date with the
latest released security patches) and backup provisions will be made.
</td> </tr> </table>
## WP4 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP4_1_UGCRegisteredProviders**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will register User Generated Content (UGC) creators collected
from social networks and other UGC online sources (such as YouTube, Twitter or
Facebook). These creators will be registered, after obtaining their informed
consent, whenever one of their digital media items is selected because a
potential user is interested in reusing it. Consequently, only preselected
users will be gathered and no crawling of social networks or UGC sources will
be performed. The dataset will keep the username and the source social
network, plus all the reuse policies defined by the creator. If there are
agreements between the creator and the reusers, these will also be
stored in the database, associated with the creator and the licensed UGC.
Moreover, a set of security measures will be defined (which will be reported
in the corresponding project deliverable D4.2 "Framework and Workflows for UGC
Copyright Management") and applied in order to ensure that the aforementioned
data within the project is not used for improper or unauthorized purposes.
Finally, registered users will be offered the option to opt-out of the
service. In this case, additional personal data collected during registration
will be erased. However, links to the original users in social networks, content
and policies will be kept if they are required to contextualize existing
agreements made by the user opting out.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This dataset will be based on Resource Description Framework (RDF) metadata
and use different Web Ontologies to structure the data, including for example
FOAF, SIOC, Schema.org, Media Ontology and Copyright Ontology. It will be
stored in a database capable of storing semantic data based on RDF. Specific
RDF properties for time intervals and instants will be used to track the
evolution of the dataset, for instance keeping track of when a particular
agreement between a creator and a reuser was established.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
This dataset will be generated as a result of the InVID platform operation
when the Rights Module is involved and is specific to its operation. As stated
in its description, this dataset will basically contain UGV creators' reuse
policies and bilateral agreements between them and the reusers, which we
expect they will prefer not to expose fully in public. Consequently, this
dataset won't be shared outside InVID.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
This dataset will be preserved at the same location where the Rights
Management module is deployed, i.e. a server hosted at the premises of
Universitat de Lleida. It will be protected by preventing unauthorized access
to the server and ensuring that security software is up-to-date. Moreover,
backup provisions will be made.
</td> </tr> </table>
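To illustrate the RDF-based modelling with time-tracked agreements described above, here is a minimal Python sketch using rdflib; the `ReuseAgreement` class and the `grantedBy`/`validFrom` properties live in a hypothetical example namespace, since the actual modelling will draw on FOAF, SIOC, Schema.org, the Media Ontology and the Copyright Ontology.

```python
# Minimal sketch: store a creator and a dated reuse agreement as RDF.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF, XSD

EX = Namespace("http://example.org/invid/")  # hypothetical project namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

creator = URIRef(EX["creator/alice"])
agreement = URIRef(EX["agreement/42"])

g.add((creator, RDF.type, FOAF.Agent))
g.add((creator, FOAF.accountName, Literal("alice@youtube")))
g.add((agreement, RDF.type, EX.ReuseAgreement))  # illustrative class
g.add((agreement, EX.grantedBy, creator))
g.add((agreement, EX.validFrom,                  # time tracking, as described
       Literal("2016-06-01T00:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))  # returns a str in rdflib 6+
```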
## WP5 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP5_1_News-Media**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset is intended as a generic, domain-independent basis for building
the initial system prototype (T5.2), including the multimodal analytics
dashboard (T5.3), and to help assess the progress achieved on document
annotation and
topic detection. It will be continuously updated through WLT’s crawling
architecture, and by accessing RSS feeds embedded in the crawled Web content.
Specific InVID content feeds from social media will later complement the
dataset, to be analyzed individually or in combination.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The dataset will be a continuously updated index of JSON serialised documents,
where each document captures the textual content and metadata (e.g. date-time
published) for one news article or tweet, according to the webLyzard document
model.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The resulting data will be made available as part of the InVID dashboard under
the express conditions that the data is used solely for the purposes of
evaluating individual technical components as well as the overall system
(T5.4), and may not be copied and re-used for any other purpose.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The dataset will be stored persistently on a webLyzard server, during and
beyond the project, and will be downloadable (with certain restrictions) via
the multimodal analytics dashboard (T5.3).
</td> </tr> </table>
## WP6 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP6_1 _Industrial Requirements**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will contain all data on related UGC verification tools and
initiatives, particularly those focusing on video verification, as well
as the interviews that have been reported in the deliverable D6.1, entitled
"InVID Initial Industrial Requirements". It will also list all
requirements that have been derived from the market analysis and from the
interviews that have been conducted with key persons active in the field.
The dataset is meant to list all relevant activities in the research fields
InVID tackles in order to identify the advantages and shortcomings of already
existing solutions and to collect a complete list of what needs to be
developed in InVID to make it a commercially successful video verification
platform.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
This dataset is designed to support the analysis of the industrial
requirements. The latter will be collected in a shared spreadsheet and can be
stored in a repository or
database if required. The spreadsheet will be versioned by the WP6 leader
(CONDAT).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The dataset will be made available for project partners only. Nevertheless,
D6.1 and its updates are public deliverables that can be downloaded from the
project website.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The spreadsheet will be maintained by the WP6 leader (CONDAT). Updates of the
industrial requirements will be created in the course of the project.
</td> </tr> </table>
## WP7 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP7_1_UGVideo1**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will include UGV and the relevant metadata created by the mobile
applications used for capturing these videos (e.g. data about
the creator/registered user of the video, details about the used device,
geolocation data and so on). The owners of these videos will be requested to
sign up to the platform and agree to the usage terms, thus providing their
informed consent for the collection and processing of their data. The users
will also have an option to “opt-out” by notifying the local newspapers
representative. Moreover, a set of security measures will be defined (which
will be reported in the corresponding project deliverable D7.1 "Activities and
outcome of the Pilots, first report") and applied in order to ensure that the
aforementioned data is not used for improper or unauthorized purposes.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The videos will be stored in their native format, which is defined by the mobile
phone type. The metadata provided by the mobile application (user id, date and
time the video was taken, location if agreed by the user) are distributed
according to the capabilities of the respective device (either embedded in the
video file itself or in a sidecar XML file managed by the mobile application).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The videos of this dataset (a static dataset, as videos will not be updated)
that are selected by the editors will be shared via the websites
of local newspapers, mentioning also the credit (as provided by the user) and
usually the location of the video. Both fake and validated videos will be
shared (after being anonymised) within the project consortium in order to be
used for further tests and evaluations. Thus, no sensitive information will be
shared, something that will be clearly indicated upon signing up to the
platform and agreeing to the usage terms.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
UGV will be stored in the data-center of APA-IT on a high-availability object
store hosted in two data centers (protected by applying the commonly used
security measures for preventing unauthorized access and ensuring that
security software is up-to-date with the latest released security patches).
Videos will be deleted after a period to be agreed on with the newspapers.
Videos identified as fake, and validated videos, will be stored for a
longer period, something that has to be agreed on within the consortium.
</td> </tr> </table>
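As an illustration of the sidecar-file option mentioned above, the following Python sketch writes such an XML file; the element names are assumptions, not a format defined by the project.

```python
# Minimal sketch: write a sidecar XML file with the metadata fields
# mentioned above (user id, capture time, optional location).
import xml.etree.ElementTree as ET

meta = ET.Element("videoMetadata")
ET.SubElement(meta, "userId").text = "user-4711"
ET.SubElement(meta, "capturedAt").text = "2016-05-01T12:30:00Z"
loc = ET.SubElement(meta, "location")  # written only if the user agreed
loc.set("lat", "48.2082")
loc.set("lon", "16.3738")

ET.ElementTree(meta).write("video_0001.xml", encoding="utf-8",
                           xml_declaration=True)
```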
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP7_2_CommunityManagement**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will contain all data needed to manage the selected online-
usergroups of newspapers for the pilot tests. These data include email
addresses of the users, usernames, date and time of agreeing to the usage
terms, users' feedback, usage statistics and device-information, assignments
to groups (e.g. members of fire brigades, local sports clubs and similar). The
users will also have an option to “opt-out” by notifying the local newspapers
representative. The involved persons in these tests will be requested to sign
up to the platform and agree to the usage terms, thus providing their informed
consent for the collection and processing of their data. Moreover, a set of
security measures will be defined (which will be reported in the corresponding
project deliverable D7.1 "Activities and outcome of the Pilots, first report")
and applied in order to ensure that the aforementioned data is not used for
improper or unauthorized purposes.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
These data will be stored in an SQL database, and changes will be logged
accordingly without versioning.
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Data will be shared as aggregated data only within the consortium. This
dataset will show which user-groups were involved in the pilot tests, how
actively they participated and similar statistics. Details on specific users
are owned by the publishers who manage their user base and are of no
importance for the project's results, something that will be clearly
indicated upon signing up to the platform and agreeing to the usage terms.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The relational database will be run on servers in the data-center of APA-IT
(protected by applying the commonly used security measures for preventing
unauthorized access and ensuring that security software is up-to-date with the
latest released security patches), and backup provisions will be made.
</td> </tr> </table>
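For illustration, the Python sketch below sets up a minimal relational schema of the kind described above (shown with SQLite for self-containment; the production system would run on APA-IT's database servers). All table and column names are illustrative assumptions.

```python
# Minimal sketch: a relational schema for the community-management data.
import sqlite3

conn = sqlite3.connect("community.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS users (
    id           INTEGER PRIMARY KEY,
    username     TEXT NOT NULL,
    email        TEXT NOT NULL,
    consented_at TEXT NOT NULL,   -- date/time of accepting the usage terms
    device_info  TEXT
);
CREATE TABLE IF NOT EXISTS group_membership (
    user_id INTEGER REFERENCES users(id),
    grp     TEXT                  -- e.g. fire brigade, sports club
);
-- changes are logged without versioning, as stated above
CREATE TABLE IF NOT EXISTS change_log (
    changed_at TEXT,
    detail     TEXT
);
""")
conn.close()
```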
## WP8 Datasets
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_1_MarketStudy**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will include all data collected regarding the market of UGV
verification. It will include company names, UGV publishers (such as broadcast
TVs) and their websites, online video platforms, technology companies dealing
with forensic verification or contextual verification on social networks,
market figures, contact names and company information, which will be gathered
mainly from the web.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
As this dataset is designed to support the efforts for exploitation of the
InVID consortium, it will be initially collected as a shared spreadsheet and
later will be included in an SQL database if needed. The spreadsheet will be
versioned by the WP8 leader (AFP).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Being collected by InVID partners for exploitation purposes, this dataset will
be maintained internally, although some findings about new tools, companies, or
publishers will be shared on our website and social network accounts as part
of our dissemination policy.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The spreadsheet will be maintained by the WP8 leader (AFP). A backup procedure
will be set up for the preservation of the data.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_2_InVidDeliverables**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will be composed of the project deliverables that have to be
prepared and submitted to the EC during the project's lifespan, according to
the contractual obligations of the InVID consortium.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
These documents will be stored in PDF format. For each deliverable we will
provide: (a) the list of authors, (b) a brief description of its content (i.e.
its abstract), (c) the related WP of the project, and (d) the contractual date
for its submission to the EC. This dataset will be extended whenever new
deliverables are submitted to the EC. A simple log file of the performed
updates of the dataset will be maintained by CERTH in the project wiki (hosted
by a CERTH server).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The public project deliverables will be made publicly available after their
submission to the EC, via the project website.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
This dataset will be maintained on the project wiki and the relevant webpage
of the project website, both hosted by a CERTH server which is protected
by applying the commonly used security measures for preventing unauthorized
access and ensuring that security software is up-to-date with the latest
released security patches. This webpage will grant open access to the PDF file
of each listed public deliverable.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_3_InVidPublications**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will contain manuscripts reporting the scientific work conducted
in InVID, which have been accepted for publication in peer-reviewed journals
and conferences. All these publications will include a statement
acknowledging the InVID project, while their content may vary from the
description of specific analysis techniques, to established evaluation
datasets and individual components or parts of the InVID platform.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Most commonly, these documents will be stored in PDF format. Each
document will also be accompanied by: (a) details about the venue (e.g.
conference, workshop or benchmarking activity) or journal where it was
published, (b) a short description with the abstract of the publication, and
(c) the LaTeX-related BIB file with its citation. This dataset will be
extended whenever new submitted works are accepted for publication in
conferences or journals. A simple log file of the performed updates of the
dataset will be maintained by CERTH in the project wiki (hosted by a CERTH
server).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
This dataset will be publicly available, following the guidelines of the EC
for open access to scientific publications and research data in Horizon 2020.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Self-archiving (also known as "green" open access) will be applied for
ensuring open access to these publications. According to this archiving policy,
the author(s) of the publication will archive (deposit) the published article
or the final peer-reviewed manuscript in online repositories, such as personal
webpage(s), the project website and the free-of-charge OpenAIRE or
Zenodo repositories, after its publication. Nevertheless, the employed
archiving policy will also be fully aligned with restrictions concerning
embargo periods that may be defined by the publishers of these publications,
making the latter publicly available in certain repositories only after their
embargo period has elapsed.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_4_InVidPresentations**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will consist of presentations prepared for reporting InVID-
related scientific work or progress at a variety of different events,
such as conferences, workshops, meetings, exhibitions, interviews and so on.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
Most commonly, these presentations will be in PPT or PDF format. Information
related to: (a) the authors, (b) the presenter, (c) the venue and (d) the date
of the presentation will also be stored in plain text. This dataset will be
extended whenever new InVID presentations are prepared and publicly released.
A simple log file of the performed updates of the dataset will be maintained
by CERTH in the project wiki (hosted by a CERTH server).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The project presentations will be made publicly available after their
presentation at the venue/event they were prepared for.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
The project presentations will be publicly available for view and download via
the SlideShare channel of the project, while links to the presentations
of this channel will also be added on the relevant webpage of the project
website, which is hosted by a CERTH server that is protected by applying
the commonly used security measures for preventing unauthorized access and
ensuring that security software is up-to-date with the latest released
security patches.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_5_InVidSoftwareDemosAndTutorials**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will collect information regarding the developed and utilized
InVID technologies. Public video demonstrations, tutorials with instructions
for use and documentation, as well as links to publicly-released online
instances of these technologies, will also be included.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
A variety of different formats will be used for storing the necessary
information. In particular, video demonstrations can be (but are not limited to)
MP4, AVI or WEBM files, software tutorials and documentation can be written in
PDF format, online documentation of tools and services can be presented in
plain text, and presentations can be stored in PPT or PDF format. This dataset
will be extended whenever new content related to the developed InVID
technologies (e.g. video/web demos, tutorials, documentation) is prepared and
publicly released. A simple log file of the performed updates of the dataset
will
be maintained by CERTH in the project wiki (hosted by a CERTH server).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
Information related to the developed InVID technologies, including video
demonstrations, documentation, presentations and tutorials with instructions
for use, will be publicly available, supporting the dissemination of the
project's activities and the exploitation of the project's outcomes. However,
confidentiality control will be applied on each piece of information in order
to avoid the release of inappropriate information that could have a negative
impact on the project's progress and developments.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Data related to the developed InVID technologies, tools and applications will
be archived and made publicly available through the relevant webpage of the
project website, which is hosted by a CERTH server that is protected by
applying the commonly used security measures for preventing unauthorized
access and ensuring that security software is up-to-date with the latest
released security patches. Moreover, the created video demos and tutorials
will also be available for view via the YouTube channel of the InVID project.
</td> </tr> </table>
<table>
<tr>
<th>
Dataset name
</th>
<th>
**InVID_Data_WP8_6_InVidNewsletters**
</th> </tr>
<tr>
<td>
Dataset description
</td>
<td>
This dataset will comprise the released newsletters for disseminating the
activities and the progress made in the InVID project.
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The newsletters will be prepared and stored in PDF format, while information
regarding their release date will be provided. This dataset will be extended
whenever new project newsletters are publicly released. A simple log file of
the performed updates of the dataset will be maintained by CERTH in the
project wiki (hosted by a CERTH server).
</td> </tr>
<tr>
<td>
Data sharing
</td>
<td>
The newsletters of the project will be publicly available online right after
their official release.
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
An online archive with open access to the released newsletters of the project
will be maintained at the relevant webpage of the project website, which
is hosted by a CERTH server that is protected by applying the commonly used
security measures for preventing unauthorized access and ensuring that
security software is up-to-date with the latest released security patches.
</td> </tr> </table>
# Summary
This deliverable presented the initial Data Management Plan of the InVID
consortium. The plan covers every dataset that will be collected, processed
or generated during the lifespan of the project. Aligned with the guidelines
of the European Commission, the aim of the Data Management Plan is to ensure
the safety of the data, to enhance data accessibility, exploitability and
reuse potential, as well as to support their long-term preservation. The
applied methodology for defining the DMP of the InVID project was presented
in Section 2, while detailed explanations about what will be considered for
the reported datasets were provided in Sections 2.1 to 2.5. The entire list of
datasets was presented in Section 3, where each subsection (see Sections 3.1
to 3.7) groups the datasets of each workpackage of the project. An updated
version of the Data Management Plan, integrating newer findings of the project
in relation to datasets and their management, will be described in D1.3
"Updated Data, quality and knowledge management plan", which is due in Month
21 of the project (September 2017).
# Executive Summary
The purpose of this Data Management Plan (DMP) is to provide an analysis of
the main data foreseen to be generated in the course of the project and to
describe the data management policy that will be applied by CPSELabs. The
project consortium fully supports the endeavour to improve access to
scientific information and research data and will make information and data
generated within the project available on a voluntary basis, whenever
possible.
CPSELabs pursues the goal to contribute to establishing an open eco-system,
and the project plan has been conceived to broadly disseminate the project
findings and to contribute to the generation of broader knowledge in the
field. Therefore, the vast majority of the project deliverables are public,
containing information and data that can be used or re-used by various target
groups.
A variety of data and information will also be generated in CPSELabs
experiments, which involve ‘third parties’ through ‘cascading funding’.
CPSELabs perceives it as its role to accompany the third parties in aspects
of data management, supporting open access to the generated research results,
along with publications, so they can be easily discovered, identified and re-
used, whenever possible.
# Introduction and Context
As stated in the _‘Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020’_ , fuller and wider access to scientific
publications and data helps to:
* build on previous research results (improved quality of results)
* foster collaboration and avoid duplication of effort (greater efficiency)
* accelerate innovation (faster to market = faster growth)
* involve citizens and society (improved transparency of the scientific process)
The CPSELabs consortium fully supports the endeavour to improve access to
scientific information and research data in order to enhance the benefits of
public investment. Especially if the information and data have been derived
with the help of public funding, CPSELabs agrees that this should benefit
European companies and citizens to the full.
The Open Research Data Pilot aims to improve and maximize access to and re-
use of research data generated by projects. As defined in the guidelines,
openly accessible research data can typically be accessed, mined, exploited,
reproduced and disseminated free of charge for the user.
The CPSELabs project participates in the ‘Open Research Data Pilot’ and will
make its research data available on a voluntary basis, whenever possible.
The role of this Data Management Plan (DMP; D1.6) is to drive the policy
towards providing open access to the data generated in the scope of the
CPSELabs project, along with publications and other project results, so they
can be easily discovered, identified and re-used.
# Project Goals and Implications on Data Sharing
A variety of data and information will be generated in CPSELabs, ranging from
interview outcomes, guidelines and best practices to software artefacts and
(raw) sensor data.
Whereas a part of that data will be generated by the consortium itself, and
will be made freely available via the website and public deliverables, much of
the data that could be categorized as ‘digital research data’ will be
generated in conjunction with ‘third parties’, participating via cascading
funding. (A part of the CPSELabs project funding is used to involve project
external ‘third parties’ through open calls in ‘experiments’.)
A major goal of the project is to build an open eco-system supporting the
whole stakeholder community (from CPS developers, integrators, and suppliers
to users), to enhance technology transfer by providing existing open platforms
and tools for application experiments and to enable stakeholders to benefit
from use and re-use of experiences, data and information.
A second major aim of the project is to efficiently involve SMEs and mid-caps
(as third parties), to help them in the development and commercialization
efforts of CPS enabled/related technologies and products (through open call
experiments) and thereby increase European competitiveness.
While in the first case, opening information and data is well in-line with the
project goal, in the second case, sharing of data and information might
jeopardize the endeavour of exploiting and commercializing the results or
products developed in CPSELabs experiments with third parties.
In the course of the project, the CPSELabs consortium will have to carefully
consider and agree with the third parties on a case-by-case basis if, how and
to what extent data can be shared. Especially in the following cases, among
others, the collected/generated data will not be shared:
* if the results can be expected to be commercially or industrially exploited by the project partners or third parties (or if sharing would contradict intellectual property rights and commercial exploitation in any way);
* if sharing would jeopardize the efficient involvement of SMEs;
* if incompatible with the need for confidentiality in connection with privacy-related or security-related issues;
* if incompatible with existing rules concerning the protection of personal data;
* if incompatible with existing rules concerning ethical issues. (In the project’s ethical report, the consortium has clearly defined how sensitive / personal data will be treated.)
# Data Generation and Management in the Scope of CPSELabs
The CPSELabs partners expect that it will mostly be software
artefacts that are produced (mainly within the third parties'
experiments), rather than true research data. The latter includes simulated
data that can be used to test physical systems based on a simulation platform
that will be made available on an open source basis. Moreover, performance
analysis data, based on several configurations of the platform or systems
based on the Design Centre platform may provide a basis for decision making
for other third parties. More explicitly, the generated data within the third
parties experiments might include:
* open-source software, either as standalone tools, or as libraries/plugins extending other existing (not necessarily open source) tool sets;
* experimental artefacts like use case descriptions, exemplary analysis or design models, exemplary analysis results
* descriptions of domains, co-models, descriptions of metrics used, tool extension data
* experimental sensor data sets (anonymized, if required) to repeat executed experiments
* reports on executed experiments, public deliverables synthesizing the experiments; and scientific publications
* reports on best practices using the Design Centre platforms
In addition to this, the Design Centres will yield other data, such as results
of interviews carried out with stakeholders in the context of eco-system
analysis, professional training needs, measurement data on the performance of
the Design Centres (in terms of KPIs) and summary data related to literature
surveys (coded sources, categorizations, etc.). Moreover, results will include
information like contributions to the contacts database and information and
data on the Open Call content, process, outcome and response.
More explicitly, the information captured and data generated by the project
partners with the help of interviews, surveys and other investigations with
project internal and external participants will derive, among others:
* overview of existing innovation practices and opportunities in innovation eco-systems
* inventory of existing professional training, good practices and effects
* investigation of needs for professional training as perceived by relevant stakeholders
* overview of CPS areas of relevance for stimulating innovation by means of Market Places
(MP’s) including an overview of existing MP’s and best practices of MP’s
* stakeholder needs and considerations for the MP pilots within CPSELabs
* information on open call process, (including FAQs), outcome, statistics, feedback
A very important factor for sharing information and data within the CPSELabs
project is, as described above, by what means and with what aim the data is
generated. This can be sub-divided into two categories:
1. Data collected / generated by the project consortium, with the aim of broad dissemination
2. Data generated by / with the third parties in the scope of open call experiments
As these categories differ substantially, they will be described in two
separate sub-chapters.
## Data and Information generated and shared by the Consortium
As CPSELabs pursues the goal of contributing to the establishment of an open
eco-system supporting the whole community of stakeholders, the vast majority
of the project's deliverables are public, containing information and data that
can be used or re-used by different target groups. Besides being publicized
via the CPSELabs website, the documents will be spread via the CPSELabs
partners' networks. The following table gives an overview of a selection of
the (planned) public CPSELabs deliverables, conceived to broadly disseminate
the project's findings and contribute to the generation of broader knowledge
in the field.
The table below lists the documents per work package (WP) and deliverable
number (Del), and contains the publication month (M); M1 corresponds to
February 2015. Brief information is given on the type of data and the target
group. More detailed information on the information/data content can be
obtained from the deliverables themselves.
<table>
<tr>
<th>
**WP**
**Del**
</th>
<th>
**Document name, publication month**
</th>
<th>
**Data / type of information**
**Target group for (re-) use**
</th> </tr>
<tr>
<td>
**WP1 Project Management**
</td> </tr>
<tr>
<td>
D1.6
</td>
<td>
‘Collaboration plan with other Smart Anything Everywhere projects’ (M2)
</td>
<td>
The public deliverable provides a shared vision of the current SAE
coordinators/teams on collaboration within and future evolvement of the SAE
initiative.
(Targeted to SAE stakeholders, EC, policy makers)
</td> </tr>
<tr>
<td>
**WP2 Communication and Outreach**
</td> </tr>
<tr>
<td>
D2.1
D2.2
D2.4
</td>
<td>
‘Web portal’ (M2)
‘Communication Plan’ (M3, 12, 18, 24, 30, 36)
‘Public Materials’ (M3)
</td>
<td>
The data/information provides detailed information on the CPSELabs Design
Centres and their ‘open tools and platforms’, the CPSELabs Vision as well as
practical information and guidance for applicants of the open call process
(e.g. FAQs).
(Targeted to experiment proposers, stakeholders of the CPS ecosystem, broad
public)
</td> </tr>
<tr>
<td>
**WP3 Open Call Process for Experiments**
</td> </tr>
<tr>
<td>
D3.1
D3.3
</td>
<td>
‘Open Call Process
Documents’ (M3)
‘Call Texts’ (M3, 9, 15)
</td>
<td>
Information on the open calls content and the process: The data/information
provides detailed information and guidance on structuring and handling of the
open call process. Next to giving guidance to proposers and evaluators, these
documents provide ‘re-usable’ information on the call process and templates
for future open-call projects.
(Targeted to experiment proposers, evaluators, EC, other projects with cascade
funding)
</td> </tr>
<tr>
<td>
D3.2
</td>
<td>
‘Information events and coaching activities’ (M16)
</td>
<td>
The data/information includes experiences and best practices from ‘Information
events and coaching activities’, which can be valuable in terms of ‘lessons
learnt’ for future endeavours.
(Targeted to experiment proposers, other projects with cascade funding)
</td> </tr> </table>
<table>
<tr>
<th>
**WP4 Design Centres**
</th> </tr>
<tr>
<td>
D4.1
</td>
<td>
‘Centre handbook’ (M4)
</td>
<td>
The data includes information on centre management and the exchange of best
practices among Design Centres, promoting synergies among them and their
regional eco-systems by:
* establishing a learning network among the Design Centres to exchange best practices in creating innovation eco-systems;
* carrying out cross-centre opportunity scouting, in which the research, industrial and business profiles of centres and their regional eco-systems are examined to identify innovation and other collaboration opportunities.
It also includes templates and guidelines for basic processes.
(Targeted to Design Centres, regional eco-systems, educational institutions,
policy makers…)
</td> </tr>
<tr>
<td>
D4.2
</td>
<td>
‘Report on best practices and professional training’
(M12, M24, M36)
</td>
<td>
Information will include the results of the analysis of best practices and
professional training within partner eco-systems, and of the exchange of best
practices within the regional eco-systems of each centre by:
* establishing regional learning networks;
* identifying industrial needs for professional training of particular relevance for CPSELabs;
* matching these needs with existing competences and courses;
* implementing selected training.
(Targeted to Design Centres, regional eco-systems, educational institutions,
stakeholders of CPS eco-system, policy makers…)
</td> </tr>
<tr>
<td>
D4.3
</td>
<td>
Innovation management including ‘Annual report on innovation management
activities’ (M12, M24, M36).
</td>
<td>
Information will include the results of the innovation management activities
of CPSELabs:
* participating in reviews of experiments and marketplace efforts, including categorization, TRL assessments, and mapping and analysis of collaborative innovation activities using social network analysis;
* identifying business opportunities and improvements in practices for CPS innovation management;
* interview studies of firms having central roles in the innovation eco-systems based on cyber-physical systems, in order to identify existing best practices for managing networked and open innovation in this field;
* preparing an action plan for commercialization / standardization.
(Targeted to Design Centres, stakeholders of CPS eco-system, regional eco-
systems, EC, policy makers…)
</td> </tr> </table>
<table>
<tr>
<th>
D4.4
</th>
<th>
‘Strategic Innovation
Agenda for CPS’ (M8, 14)
</th>
<th>
The Strategic Innovation Agenda for CPSELabs sets out the overall direction
for experiments and other eco-system-promoting interactions, and provides
plans for the open calls for experiments. The CPS-SIA will also consider
existing agendas as far as relevant, including for example the ARTEMIS
strategic research agenda and the EIT ICT Labs strategic innovation agenda.
(Targeted to Design Centres, stakeholders of CPS eco-system, EC, policy
makers)
</th> </tr>
<tr>
<td>
D4.5
</td>
<td>
‘Market Place Report’
(D4.5)
</td>
<td>
The report will contain information about the creation of marketplaces for
selected CPS technology platforms, such as middleware platforms for CPS and
tool integration platforms. Information on suitable models for a marketplace
(e.g. in terms of IP rights, open source, governance, codex, best practices)
will be presented. A first marketplace pilot will address the sharing of
software assets and best practices to promote interoperability for CPS
engineering environments. An early survey will identify the willingness of
research and industrial organizations to contribute to and take-up assets from
the marketplace.
(Targeted to stakeholders of CPS eco-system, regional ecosystems, EC, policy
makers…)
</td> </tr>
<tr>
<td>
D4.6
</td>
<td>
‘Design Centres final report’ (M36)
</td>
<td>
The report will include the final evaluation and impact assessment. Additional
information on identified "take-aways" and further evolution of innovation
eco-systems in general, and for CPSELabs in particular; an overall evaluation
of the goals, methodology and achievements of CPSELabs will be included.
(Targeted to stakeholders of CPS eco-system, EC, policy makers, other projects
with cascade funding …)
</td> </tr>
<tr>
<td>
**WP5 Dissemination and Exploitation**
</td> </tr>
<tr>
<td>
D5.1
D5.2
</td>
<td>
‘Dissemination and
Exploitation Plan’ (M3)
‘Annual Report on
Dissemination Activities’
(M12, M24, M36)
</td>
<td>
WP5 will make the project's outcomes public and will build an ecosystem for
sharing information and exploiting the knowledge generated during the
project's lifetime.
Next to publishing direct results, the information will relate to: relevant
conference and workshop outcomes; influencing research programs, standards
bodies and educational institutions; raising awareness and setting up
communities; open access; new or improved products and services; incubation of
business ideas; and the creation of start-ups and spin-offs.
(Targeted to stakeholders of CPS eco-system, standardization bodies,
educational institutions, policy makers…)
</td> </tr>
<tr>
<td>
**WP6 Execution of Experiments**
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Public outcomes from experiments
</td>
<td>
Experiments will produce a publishable summary of their work and results (not
including any confidential information). Moreover, an average of one
scientific or market-oriented publication per experiment is expected.
Additionally, research data might be provided in an open database (to be
decided on a case-by-case basis).
(Targeted to stakeholders of CPS eco-system, academia, industry, EC, policy
makers, broad public)
</td> </tr>
<tr>
<td>
D6.3
</td>
<td>
‘Final Experiments Report’
(M36)
</td>
<td>
This report will contain the main publishable outcomes of the experiments,
including an assessment of outcomes and extraction of exploitable results.
(Targeted to stakeholders of CPS eco-system, academia, industry, EC, policy
makers, broad public)
</td> </tr> </table>
Table 1: Overview of data generated and shared by the consortium
Regarding additional (peer-reviewed) publications, which are foreseen to be
academic or market-related, the CPSELabs general policy is to require open
access for all publications. Self-archiving ("green" open access) is expected.
Partners will be required to ensure before submission that publications will
be eligible for archiving on institutional repositories of at least one of the
co-authors. It is recognized that, in a very few exceptional cases, "gold"
open access may be required. Data used in publications will be made available,
either on the web portal or by application to the CPSELabs Service Centre.
Besides the public deliverables and other publications, CPSELabs will create
an interactive open marketplace for sharing software assets related to
integrated CPS engineering tools and environments. CPSELabs aims to maximize
the input to this marketplace by establishing an enlarged forum of developers,
integrators, and users from global powerhouses as well as SMEs and mid-caps.
Moreover, CPSELabs aims at contributing to standardization: relationships with
standardization bodies and open platform groups are planned to make the
results available and acceptable to a wider audience. This includes
presentations and visits to specific groups such as 'The Open Group Open
Platform 3.0' (http://www.opengroup.org/), which is cross-domain, or AUTOSAR,
which is dedicated to automotive standards. The Open Group has committed to
supporting CPSELabs (unfunded) in identifying standardization opportunities,
and to participating in open call evaluations and in reviews of experiments
for identifying standardization opportunities.
## Data Generated in Conjunction with Third-Party Experiments
Third party experiments are carried out in close collaboration with the
partners of one of the CPSELabs Design Centres in South Germany (fortiss),
North Germany (Offis), France (ONERA and LAAS-CNRS), Sweden (KTH), the UK
(Newcastle Univ.) and Spain (Univ. Politécnica de Madrid and Indra Sistemas).
The Design Centres offer expertise and training in developing cyber-physical
systems, as well as development environments, tool chains, architectural
frameworks, and technology platforms that form the basis for the experiments,
including:
* 4DIAC framework for distributed industrial automation and control
* FMI-based virtual co-simulation
* eMIR, the open-source test platform for maritime systems (_www.emaritime.de_)
* Model-based safety assessment techniques (AltaRica, Hazop UML)
* GenoM and Mauve-OROCOS frameworks for robotics systems programming
* Open Services for Life-Cycle-Collaboration (OSLC) open standard
* Overture family of VDM-based technologies (Overture, Crescendo, Symphony)
* SOFIA2 interoperability platform for smart spaces
At the time of this deliverable (M6, July 2015), the first round of calls had
closed, but the process of experiment selection, invitation and confirmation
had not yet been fully concluded. The collection/generation and sharing of
data depends heavily on the experiments performed and the third parties
involved. Considering this, a detailed analysis of the data foreseen to be
collected, and possibly shared, can only be performed at a later point in time.
Nevertheless, the Design Centres, based on their calls and the platforms
available, have made some assumptions about the data that could be generated
and about its handling. The results of a first survey and of discussions
within the consortium are shown in the following table.
<table>
<tr>
<th>
**Centre/ Partner**
</th>
<th>
**Type of data expected to be generated in conjunction with third parties**
**Plan for sharing the data**
</th> </tr>
<tr>
<td>
Design Centre Germany South
</td> </tr>
<tr>
<td>
FOR
</td>
<td>
The data generated will most likely consist of software artefacts rather than
research data in the strict sense. "Data" in the stricter sense might come
from simulation runs executed within the virtual co-simulation experiments,
which will enable improvements to the key technologies of the Design Centre.
In cases where third parties generate the data (or participate in its
generation), consent to share the data will be required.
Developments on some of the core technologies provided by the Design Centre
will be provided as open-source software artefacts, or in the form of
publishable research reports. The example below illustrates how this will be
mapped to the data management plan.
**Data set reference and name**
4DIAC: Framework for Distributed Industrial Automation and Control
**Data set description**
4DIAC is an open-source software solution implementing IEC 61499. It consists
of a run-time and a GUI part. The run-time is called FORTE and is deployed to
the individual controllers as a basic execution framework allowing
applications to be executed on top of it in real time. The GUI is an IDE
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
realized in Eclipse. It supports developers in creating their applications and
deploying them to the controllers running an instance of FORTE.
As 4DIAC is licensed under the EPL (Eclipse Public License), all extensions
and adaptations to it have to be provided under the EPL again. This ensures
that all improvements made within the project are offered to all other users
of 4DIAC, allowing them to benefit from the modifications as well.
**Standards and metadata**
The 4DIAC GUI is implemented using Java in Eclipse and consists of a set of
individual Eclipse plug-ins distributed as RCP (Rich Client Platform) and
source code.
FORTE itself is implemented using C/C++ and is currently ported to a set of
different platforms, e.g. Raspberry Pi, Beaglebone Black, Wago PLCs, etc.
**Data sharing**
All the code of 4DIAC is publicly available for download at
http://www.fordiac.org. In the near future, the 4DIAC code base will be moved
into the Eclipse repository, where it will be even more visible to the public.
**Archiving and preservation (including storage and backup)**
Currently, the individual code versions are managed using Mercurial (Hg).
Later on, the individual development states will be maintained in a Git
repository.
</th> </tr>
<tr>
<td>
Design Centre Germany North
</td> </tr>
<tr>
<td>
OFF
</td>
<td>
**Type of Data**
OFF will focus on architecture development. Work will be based on eMIR, the
open-source test platform for maritime systems (www.emaritime.de).
The following types of data sets may be provided: simulated data that can be
used to test physical systems based on the simulation platform, which will be
made available on an open-source basis (examples of simulated data are
simulated traffic data or engine performance simulations), and performance
analysis data based on several configurations of the platform, which may
provide a basis for decision making for other third parties.
**Data sharing**
Experiments done by OFF without restrictions from third parties will be
published according to the DFG guidelines (Deutsche Forschungsgemeinschaft,
German Research Foundation) for scientific best practice, most likely in
conference papers or journal articles. However, OFF will also archive and
publish digital data, if available. In cases where third parties generate the
data (or participate in its generation), consent to share the data will be
required.
The collection/generation and sharing of data depends heavily on the
experiments performed, the data collected and the third parties involved. As
the first experiments at the Design Centre North Germany are only foreseen in
the second round of calls, further details can only be elaborated at a later
stage.
</td> </tr>
<tr>
<td>
Design Centre France
</td> </tr> </table>
<table>
<tr>
<th>
ONR
</th>
<th>
**Type of Data**
_Data foreseen to be generated by ONR include:_
Numerical models of the concept of operation of robots (AltaRica CONOPS
models). The concept of operation can be devised by the labs or in answer to
an actual business case of external companies.
Numerical models of the software and hardware architecture of robots (AltaRica
system models and MAUVE system architectures). The robots are owned either by
the CPSE-Labs or by external companies.
Software implementing robot functions for ONERA robots or for external
companies' robots, with a focus on the implementation of safety functions,
decision-making functions and real-time execution management.
Updates of the design tools that have been used to build or analyse the
numerical models or the embedded software: safety assessment tools owned by
ONERA (e.g. DALculator, EPOCH), decision-making libraries, and the
MAUVE-to-OROCOS translator.
Publishable materials are foreseen to include:
* AltaRica Libraries
* MAUVE Libraries
* Decision making libraries
* Update of ONERA design tools
* Simplified version of models / software developed for ONERA or other company use cases
_Data foreseen to be generated jointly by third parties and ONR include:_
Simplified models of the robots of the external companies, plus specifications
of the software embedded in ONERA robots to mimic the company use cases.
_Data foreseen to be generated by third parties include:_
Specializations of the publishable results for the robots owned by the company:
* detailed AltaRica / MAUVE models
* adaptations of the decision-making algorithms
* OROCOS modules derived from the detailed MAUVE models
**Data sharing**
The data are interesting for different focused communities of end users and
are considered for publication in, e.g., _http://www.orocos.org/_ or
_http://altarica.labri.fr/wp/_, as well as in general platforms that exist to
deliver open-source software (e.g. _https://www.polarsys.org/_).
Consent of the third parties participating in the data generation will be
required.
</th> </tr>
<tr>
<td>
LAAS- CNRS
</td>
<td>
**Type of Data**
Most of the experimental artefacts will be produced together with the third
parties.
The experiments are expected to generate (i) open-source software, either as
standalone tools or as libraries/plugins extending other existing (not
necessarily open-source) toolsets; (ii) experimental artefacts like use case
descriptions, exemplary analysis or design models, and exemplary analysis
results; (iii) public deliverables synthesizing the
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
experiments; (iv) scientific publications. Who generates what depends on the
experiments.
The experimental artefacts produced together with third parties include:
* GenoM: GenoM3 templates, exemplary verification results
* HAZOP UML: exemplary models and safety analysis results
* SMOF: exemplary SMOF models and monitoring strategies generated from the models
* MORSE: exemplary test experiments and robustness evaluation results
LAAS-specific data during the project will be:
* GenoM: updated open-source distribution, updated tutorial, scientific papers;
* HAZOP UML: scientific papers and tutorial;
* SMOF: updated version and open source distribution by the end of the project, scientific papers, tutorial;
* MORSE: distribution of exemplary test components, scientific papers and tutorial for MORSE-based testing;
**Data sharing**
The definition of call topics for experiments by the French CPSE-Labs Design
Centre will include explicit concerns for delivering publicly available
material, e.g. by focusing on extending an open-source framework called GenoM,
or by requiring that the applied techniques be illustrated on artefacts
derived from use cases that are representative enough but do not raise IP or
confidentiality issues, and that the results be summarized in a publishable
experiment description document.
Industrial experiment partners may be less used to sharing data. CPSELabs
perceives it as its role to accompany them in this opening process. CPSELabs
will help them in scoping their use cases and demonstration artefacts, in
order to extract information that is sufficient to exemplify the concepts and
problems, while not disclosing too much about their systems and know-how.
Not only the code of the tools/libraries but also tutorials can be made
available. For the most mature tools, there are also mailing lists gathering a
community of users. A repository of courseware material would be useful to the
community.
</th> </tr>
<tr>
<td>
Design Centre Sweden
</td> </tr>
<tr>
<td>
KTH
</td>
<td>
**Type of Data**
The collaboration with third parties will probably generate software
artefacts, rather than research data.
Moreover, interview transcripts and measurement data (KPI-related) will be the
most relevant research data generated. Possibly, there will also be summary
data related to literature surveys (coded sources, categorizations, etc.).
**Data sharing**
KTH will push for open source software artefacts related to CPS marketplaces.
Consent of the third parties participating in the data generation will be
required.
</td> </tr> </table>
<table>
<tr>
<th>
Design Centre UK
</th> </tr>
<tr>
<td>
UNEW
</td>
<td>
**Type of Data**
Descriptions of domains, co-models, descriptions of metrics used, tool
extension data.
**Data sharing**
Consent of the third parties participating in the data generation will be
required.
</td> </tr>
<tr>
<td>
Design Centre Spain
</td> </tr>
<tr>
<td>
IND
</td>
<td>
**Type of Data**
Data foreseen to be collected/generated in conjunction with third parties
include:
* Environmental data: data automatically taken from sensors
* Personal data: data taken from sensors that can identify an individual; information privacy must be considered for these data
* Generated data: data not taken from sensors, but inferred from the previous kinds of data using traditional or non-traditional processing applications
* Test data: as above, but generated under lab conditions
Indra's activities are foreseen to generate logs of automatically collected
data (such as one temperature reading per thermometer per minute during the
period), information that can be inferred from these (the results of a CEP
engine using the previous sensors as inputs), and the commands sent by human
agents in response (to provide non-automatic answers or to perform forensic
analysis).
The data in and by themselves are probably not suitable for publication as
such. The results will nevertheless most likely provide opportunities to
manually generate material of interest, such as documentation for new
functionalities, video tutorials, etc. The details of the data generation
activities of third parties (including SMEs and public institutions) will
depend on the specifics of their experiments. We do think their functional
interests will lead to more visible and user-friendly data.
Besides this, IND considers the generated data as potential input for
pattern-inference analysis, where possible. Moreover, it could provide input
to identify potential shortages or improvement points for our technology.
**Data sharing**
Not all the data will be offered to the public _as is_. Some data must be
protected due to legislation and/or ethical concerns. This includes, but is
not restricted to, personal data. Moreover, consent of the third parties
generating the data will be required.
The collection/generation and sharing of data depends heavily on the
experiments performed, the data collected and the third parties involved. As
the first experiments at the Design Centre Spain are only foreseen in the
second round of calls, further details can only be elaborated at a later stage.
</td> </tr>
<tr>
<td>
UPM
</td>
<td>
**Type of Data**
The following data are foreseen to be generated in conjunction with the
third-party experiments:
* Data from IED
* Data from humans-CPS interactions
* Data from Social networks in the context of CPSs
</td> </tr>
<tr>
<td>
</td>
<td>
* Data from assessing the work performed by 3rd parties
* Data from applying changes to the work performed by 3rd parties suggested by conclusions from assessment
* Code that could be shared
* Models
**Data sharing**
The implications of releasing data will be checked on a case-by-case basis.
Publishing the "raw data" will, in many cases, not be possible. Moreover,
consent of the third parties generating the data will be required. While third
parties might object to publishing raw data, their consent to publish the
research results and conclusions, based on processed data, might still be
obtained.
The collection/generation and sharing of data depends heavily on the
experiments performed, the data collected and the third parties involved. As
the first experiments at the Design Centre Spain are only foreseen in the
second round of calls, further details can only be elaborated at a later stage.
</td> </tr> </table>
Table 2: Overview of data generated in conjunction with third party
experiments
Data sets will be provided by the Design Centres whenever possible. Data
generated together with or by third parties will only be shared upon their
consent. The Design Centres will provide the data for the execution of open
experiments during the course of the project. The Design Centres will also
publish the data on open-access data platforms to ensure the availability of
the data after the end of the project. In order to manage these data,
partner-hosted repositories as well as external repositories will be used to
ensure maximum visibility, serve as backups, and ensure availability well
after the end of the project. The CPSELabs marketplace will also be considered
as a repository for some of the data, or as a place to host links to specific
forges. As the data need to be easily updated by their producers, and to avoid
fragmentation of open data platforms, some data are considered for publication
in the general marketplace or in focused places (e.g.
_http://www.orocos.org/_ or _http://altarica.labri.fr/wp/_). Moreover, general
platforms exist to deliver open-source software (e.g.
_https://www.polarsys.org/_). The usage of well-known platforms like OpenAIRE
(or even opendata.eu in the future) would be advantageous in some cases.
With respect to software developed during the course of the project, whenever
possible (i.e. when not violating IPR) it will also be provided under an
open-source license to allow for its re-use, adaptation and further
enhancement to match possibly different application contexts, and to serve as
a baseline for future business and research endeavours. The project will
consider using the Open Access Infrastructure for Research in Europe
(OpenAIRE), as well as exploiting the expected support on research data
management for projects funded under Horizon 2020.
# Conclusion
The CPSELabs consortium fully supports the endeavour to improve access to
scientific information and research data in order to enhance the benefits of
public investment. To fully exploit possibilities of data sharing, the project
participates in the ‘Open Research Data Pilot’ and will make its research data
available on a voluntary basis, whenever possible.
A variety of data and information will be generated in CPSELabs. One part will
be generated by the consortium itself and will be made freely available, e.g.
via the website and public deliverables, while another part will be generated
in conjunction with 'third parties' participating via cascade funding in
so-called 'experiments'. In the course of the project, the CPSELabs consortium
will have to carefully consider and agree with the third parties, on a
case-by-case basis, if, how and to what extent data can be shared. CPSELabs
perceives it as its role to accompany the mainly industrial third parties in
this opening process. CPSELabs will help them in scoping their use cases and
demonstration artefacts, in order to extract information that is sufficient to
exemplify the concepts and problems, while not disclosing too much about their
systems and know-how.
As the project is just concluding the first round of the 'open call' selection
and invitation process, only assumptions on the data generated through the
'third-party experiments', and on possible ways of sharing these data, could
be provided within this deliverable. Future versions of this document may
provide more refined policies to manage and share such data once the scope and
contents of the experiments can be assessed more clearly. In addition, the
deliverable also aims at giving a brief overview of other data and information
elaborated by the project consortium that could be useful to specific
stakeholders or to other projects pursuing similar aims in the future.
# Introduction
## Purpose and Scope
A Data Management Plan (DMP) is a formal document that specifies ways of
managing data throughout a project, as well as after the project is completed.
The purpose of DMP is to support the life cycle of data management, for all
data that is/will be collected, processed or generated by the project. A DMP
is not a fixed document, but evolves during the lifecycle of the project.
The OBEU project aims at providing a generic framework and concrete tools for
supporting financial transparency, to enhance the accountability of public
administrations and to reduce the possibility of corruption. The objectives of
OBEU are as follows:
1. publish and integrate financial data using Linked Open Data (LOD);
2. explore, compare, and (visually) demonstrate financial data;
3. interactively manage budgets, in the sense that stakeholders and citizens can participate through providing with opinions and comments;
4. develop a comprehensive platform to realise (1)-(3);
5. test the platform in three applications – journalism, anti-corruption initiatives, and private citizenship engagement;
6. establish OBEU as a Software-as-a-Service.
The major obstacle to these aims is the heterogeneous nature of the data
formats used by public administrations, which vary extensively. Examples of
the most popular formats used include CSV, EXL, XML, PDF, and RDB. By applying
the DCAT-AP standard for dataset descriptions and making them publicly
available, the OBEU DMP covers the five key aspects (dataset reference name;
dataset description; standards and metadata; access, sharing, and re-use;
archiving and preservation), following the guidelines on Data Management of
H2020 [1].
## Relation with Work Packages and Deliverables
This deliverable is related to D1.5 “Final release of data definitions for
public finance data” [2] and D1.6 “Survey of code lists for the data model’s
coded dimensions” [3] which presents existing financial code classifications.
## Structure of the Deliverable
The rest of this deliverable is structured as follows: Section 2 presents the
data life-cycle of OBEU, five kinds of stakeholders for the OBEU projects, and
13 best practices for data management. Section 3 describes basic information
required for datasets of OBEU project, and guidelines of DMP of OBEU. Section
4 presents DMP templates for data management. Each dataset has a unique
reference name. Each data source and each of the transformed form will be
described with meta-data, which includes technical descriptions about
procedures and tools used for the transformation, and common-sense
descriptions for external users to better understand the published data. The
Open Data Commons Open Database License (ODBL) is taken as the default data
access, sharing, and re-use policies of OBEU datasets. Physical location of
datasets shall be provided.
# Data Lifecycle
The OBEU platform is a Linked Data platform, whose data ingestion and
management follow the Linked Data Life Cycle (LDLC) [4]. The LDLC describes
the technical process required to create datasets and manage their quality. To
ease the process, best practices are described to guide dataset contributors
in the OBEU platform.
Formerly, data management was executed by a single person or a working-group,
who also took responsibility for data management. With the popularity of the
Web and the widely distributed data sources, data management has shifted to a
service of a large economic system that has many stakeholders.
## Stakeholders
For OBEU platform, stakeholders are those who have influence on data
management, in our case:
1. _Data Source Publisher/Owner_ refers to organisations that provide financial datasets to the OBEU platform. The communication between OBEU and the DSPO is limited to two cases: OBEU downloads financial data from the DSPO, and the DSPO uploads financial data to OBEU.
2. _Data End-User_ refers to persons and organisations who use the OBEU platform to view financial data, to comment on budget policy, and to monitor budget flows. Three end-user examples are entities in the journalism domain, anti-corruption initiatives, and private citizens. These end-users are the key drivers for the content of the OBEU platform.
3. _Data Wrangler_ refers to persons who integrate heterogeneous datasets into the OBEU platform. They are able to understand both the terminology used in financial datasets and the OBEU data model, and their role is to ensure that the data integration is semantically correct.
4. _Data Analyser_ refers to persons who provide query results to end-users of OBEU. They may need to use data mining software.
5. _System Administrator and Platform Developer_ refers to persons responsible for developing and maintaining the OBEU platform.
## The Generic OBEU Data Value Chain
Based on the Data Value Chain of IBM Big Data & Analytics [5], we structure
the generic OBEU data value chain as follows:
1. _Discover._ An end-user query can require data to be collected from many datasets located within different entities and potentially also distributed in different countries. Datasets hence need to be located and evaluated. For OBEU, the evaluation of datasets results in dataset metadata, which is one of the main best practices in the Linked Data community. DCAT-AP is used as the metadata vocabulary.
2. _Ingest and make the data machine processable._ In order to realise the value-creation stage (integrate, analyse, and enrich), datasets in different formats are transformed into a machine-processable format; in the case of OBEU, this is the RDF format. The conversion pipeline from heterogeneous datasets into an RDF dataset is fundamental (a minimal conversion sketch is shown after this list). A Data Wrangler is responsible for the conversion process. For CSV datasets, additional contextual information is required to make the semantics of the dataset explicit.
3. _Persist._ Persistence of datasets happens throughout the whole data management process. When a new dataset comes into the OBEU platform, the first data persistence step is to back up this dataset and the result of its ingestion. Later data persistence is largely determined by the data analysis process. Two strategies used in data persistence are (a) keeping a local copy, i.e. copying the dataset from the DSPO to the OBEU platform, and (b) caching, to enhance data locality and increase the efficiency of data management.
4. _Integrate, analyse, enrich._ One of the data management tasks is to combine a variety of datasets and derive new insights. Data integration needs both domain knowledge and technical know-how. This is achieved by using a Linked Data approach enriched with a shared ontology. The life cycle of the Linked Data ETL process starts from the **extraction** of RDF triples from heterogeneous datasets, and the storage of the extracted RDF data in a store that is available for SPARQL querying. The RDF store can be manually updated. Then, interlinking and data fusion are carried out, which use ontologies in several public Linked Data sources and create the Web of Data. In contrast to a relational data warehouse, the Web of Data is a distributed knowledge graph. Based on Linked Data technologies, new RDF triples can be derived, and new enrichment is possible. Evaluation is necessary to control the quality of the new knowledge, which further results in searching for more data sources and performing data **extraction** again.
5. _Expose._ The result of data analysis will be exposed to end-users in a clear, salient, and simple way. The OBEU platform is a Linked Data platform, whose outcomes include (a) meta-data description about the results; (b) a SPARQL endpoint for the meta-data; (c) a SPARQL endpoint for the resulting datasets; (d) a user-friendly interface for the above results.
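As a minimal illustration of the ingest step, the following Python sketch
turns a CSV file into RDF triples with the rdflib library. The namespace URI,
the class and property names, and the CSV columns are hypothetical
placeholders; the actual OBEU pipeline relies on its own data model and on
dedicated conversion tools.

```python
# Minimal sketch of the ingest step: CSV rows become RDF triples.
# Namespace, class/property names and CSV columns are hypothetical.
import csv

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

OBEU = Namespace("http://data.openbudgets.eu/resource/")  # hypothetical base URI
g = Graph()

with open("budget.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        item = OBEU[f"budget-item/{i}"]
        g.add((item, RDF.type, OBEU.BudgetItem))
        g.add((item, OBEU.category, Literal(row["category"])))
        g.add((item, OBEU.amount, Literal(row["amount"], datatype=XSD.decimal)))

# Serialise so the result can be loaded into an RDF store for SPARQL querying.
g.serialize(destination="budget.ttl", format="turtle")
```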
## Best Practices
The OBEU platform is a Linked Data platform. The best practices for publishing
Linked Data are described in [5]. Thirteen stages are recommended to publish a
standalone dataset; six of them are vital (marked as **must**). A minimal
illustration of some of these practices is given after the list.
1. _Provide descriptive metadata with locale parameters_
Metadata _**must**_ be provided for both human users and computer
applications. Metadata provides the DEU with information to better understand
the meaning of the data. Providing metadata is a fundamental requirement when
publishing data on the Web, because the DSPO and the DEU may be unknown to
each other. It is then essential to provide information that helps the DEU,
both human users and software systems, to understand the data, as well as
other aspects of the dataset.
Metadata should include the following overall features of a dataset: the
**title** and a **description** of the dataset; the **keywords** describing
the dataset; the **date of publication** of the dataset; the **entity
responsible (publisher)** for making the dataset available; the **contact
point** of the dataset; the **spatial coverage** of the dataset; the
**temporal period** that the dataset covers; and the **themes/categories**
covered by the dataset.
Locale parameters metadata should include the following information: the
language of the dataset; the formats used for numeric values, dates and time.
2. _Provide structural metadata_
Information about the internal structure of a distribution _**must**_ be
described as metadata, for this information is necessary for understanding the
meaning of the data and for querying the dataset.
3. _Provide data license information_
License information is essential for the DEU to assess data. Data re-use is
more likely to happen if the dataset has a clear open data license.
4. _Provide data provenance information_
Data provenance describes data origin and history. Provenance becomes
particularly important when data is shared between collaborators who might not
have direct contact with one another.
5. _Provide data quality information_
Data quality is commonly defined as “fitness for use” for a specific
application or use case. The machine readable version of the dataset quality
metadata may be provided according to the vocabulary that is being developed
by the DWBP working group, i.e., the Data Quality and Granularity vocabulary.
6. _Provide versioning information_
Version information makes a dataset uniquely identifiable. The uniqueness
enables data consumers to determine how data has changed over time and to
identify specifically which version of a dataset they are working with.
7. _Use persistent URIs as identifiers_
Datasets _**must** _ be identified by a persistent URI. Adopting a common
identification system enables basic data identification and comparison
processes by any stakeholder in a reliable way. They are an essential pre-
condition for proper data management and re-use.
8. _Use machine-readable standardised data formats_
Data **_must_ ** be available in a machine-readable standardised data format
that is adequate for its intended or potential use.
9. _Data Vocabulary_
Standardised terms _should_ be used to provide metadata. Vocabularies _should_
be clearly documented, shared in an open way, and include versioning
information. Existing reference vocabularies _should_ be re-used where
possible.
10. _Data Access_
Providing easy access to data on the Web enables both humans and machines to
take advantage of the benefits of sharing data using the Web infrastructure.
Data _should_ be available for bulk download. APIs for accessing data _should_
follow REST (REpresentational State Transfer) architectural approaches. When
data is produced in real time, it _should_ be available on the Web in real
time. Data _**must**_ be available in an up-to-date manner and the update
frequency made explicit. If data is made available through an API, the API
itself _should_ be versioned separately from the data. Old versions _should_
continue to be available.
11. _Data Preservation_
Data depositors willing to send a data dump for long-term preservation
_**must**_ use a well-established serialisation. Preserved datasets _should_
be linked with their "live" counterparts.
12. _Feedback_
Data publishers _should_ provide a means for consumers to offer feedback.
13. _Data Enrichment_
Data _should_ be enriched whenever possible, generating richer metadata to
represent and describe it.
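As a minimal illustration of practices (1), (3) and (7), the sketch below
builds a machine-readable dataset description with the DCAT and Dublin Core
vocabularies, using the Python rdflib library. The dataset URI, the publisher
URI and all literal values are hypothetical examples, not actual OBEU records.

```python
# Illustrative sketch: descriptive metadata (title, keywords, publication
# date, publisher, license) attached to a dataset identified by a
# persistent URI. All URIs and values below are hypothetical examples.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF, XSD

dataset = URIRef("http://data.openbudgets.eu/dataset/example-budget-2015")
g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title,
       Literal("Example municipal budget 2015", lang="en")))
g.add((dataset, DCTERMS.description,
       Literal("Planned expenditure by category.", lang="en")))
g.add((dataset, DCAT.keyword, Literal("budget")))
g.add((dataset, DCTERMS.issued, Literal("2015-06-01", datatype=XSD.date)))
g.add((dataset, DCTERMS.publisher, URIRef("http://example.org/city-council")))
g.add((dataset, DCTERMS.license,
       URIRef("http://opendatacommons.org/licenses/odbl/1.0/")))

print(g.serialize(format="turtle"))
```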
# Data Management Plan Guidelines
In this section, we describe guidelines of DMP of OBEU.
## Dataset Content, Provenance and Value
1. _What dataset will be collected or created?_
Financial data in any file format from EU members are used as input data to
the OBEU platform. They shall be transformed into RDF triple formats.
2. _What is its value for others?_
Using the OBEU platform, different stakeholders can easily scrutinise
financial data and express their comments on financial policies.
## Standards and Metadata
3. _Which data standards will the data conform to?_
Following the Linked Data approach, raw input datasets will be semantically
enriched to comply with the RDF standards. The OBEU project will re-use and
extend a number of tools of the LinDA project, such as RDF2Any and Any2RDF,
and other data transform tools that will be used/developed.
4. _What documentation and metadata will accompany the data?_
Following the best practices for data on the web, all _**must**_ information
described in Section 2.3 will be provided. W3C standards such as PROV-O for
provenance and DCAT for data catalogue description will be followed.
## Data Access and Sharing
5. _Which data is open, re-usable and what licenses are applicable?_
The OBEU project aims at reducing the possibility of corruption by increasing
financial transparency. It is envisaged that all financial datasets in the
OBEU project should be freely accessible. In particular, the Open Data Commons
Open Database License (ODbL) is adopted as the project's best practice for
open datasets. Since we only cater for financial datasets within the OBEU
project, we do not envisage having any data of a private or personal nature.
6. _How will open data be accessible and how will such access be maintained?_
Data _should_ be available for bulk download. APIs for accessing data _should_
follow REST architectural approaches. Real-time data _should_ be available on
the Web in real time. Data _**must**_ be available in an up-to-date manner,
with the update frequency made explicit. For data available through an API,
the API itself _should_ be versioned separately from the data. Old versions
_should_ continue to be available. See item 10 in Section 2.3 for details.
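For programmatic access, a public SPARQL endpoint is the natural complement to
bulk download. The Python sketch below queries such an endpoint over the
standard SPARQL HTTP protocol using the requests library; the endpoint URL is
a hypothetical placeholder.

```python
# Sketch of programmatic access via a SPARQL endpoint. The endpoint URL is
# a hypothetical placeholder; the query lists datasets and their titles.
import requests

ENDPOINT = "http://data.openbudgets.eu/sparql"  # hypothetical endpoint URL
QUERY = """
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX dcat: <http://www.w3.org/ns/dcat#>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title .
} LIMIT 10
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY},
                    headers={"Accept": "application/sparql-results+json"})
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["dataset"]["value"], "-", binding["title"]["value"])
```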
## Data Archiving, Maintenance and Preservation
7. _Where will each dataset be physically stored?_
Datasets will be initially stored in a repository hosted by OBEU server, or
one of participating consortium partners. Depending on its nature, a dataset
may be moved to an external repository, e.g. European Open Data Portal, or the
LOD2 project's PublicData.eu.
8. _Where will the data be processed?_
Datasets will be processed locally at the project partners. Later, datasets
will be processed on the OBEU server, using cloud services.
9. _What physical resources are required to carry out the plan?_
Hosting, persistence, and access will be managed by the OBEU project partners.
They will identify virtual machines, cloud services for long term maintenance
of the datasets and data processing clusters.
10. _What are the physical security protection features?_
For openly accessible financial datasets, security measures will be taken to
ensure that the datasets are protected from any unwanted tampering, to
guarantee their validity.
11. _How will each dataset be preserved to ensure long-term value?_
Since the OBEU datasets will follow Linked Data principles, the consortium
will follow the best practices for supporting the life cycle of Linked Data,
as defined in the EU-FP7 LOD2 project. This includes curation, reparation, and
evolution.
12. _Who is responsible for the delivery of the plan?_
Members of each WP should enrich this plan from their own perspective.
# Data Management Plan Template
The following template will be used to establish plans for each dataset
aggregated or produced during the project.
## Data Reference Name
A data reference name is an identifier for the data set to be produced [1].
<table>
<tr>
<th>
**Description**
</th>
<th>
A dataset should have a standard name within OBEU, which can reveal its
content, provenance, format, related stakeholders, etc.
</th> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Interpretation, guideline, and software tools shall be given, provided, or
indicated for generating, interpreting data reference names.
</td> </tr> </table>
**Table 1 - Template for Data Reference Name**
## Dataset Content, Provenance and Value
_When completing this section, please refer to questions and answers 1-2 in
Section 3.1_
<table>
<tr>
<th>
**Description**
</th>
<th>
A general description of the dataset, indicating whether it has been:
☑ aggregated from existing source(s)
☑ created from scratch
☑ transformed from existing data in other formats
☑ generated via (a series of) other operations on existing dataset
The description should include reasons leading to the dataset, information
about its nature and size and links to scientific reports or publications that
refer to the dataset.
</th> </tr>
<tr>
<td>
**Provenance**
</td>
<td>
Links and credits to original data sources
</td> </tr>
<tr>
<td>
**Operations performed**
</td>
<td>
If the dataset is a result of transformation or other operations (including
queries, inference, etc.) over existing datasets, this information will be
retained.
</td> </tr>
<tr>
<td>
**Value in Reuse**
</td>
<td>
Information about the perceived value and potential candidates for exploiting
and reusing the dataset. Including references to datasets that can be
integrated for added value.
</td> </tr> </table>
**Table 2 - Template for Dataset Content, Provenance and Value**
## Standards and Metadata
When completing this section, please refer to questions and answers 3-4 in
Section 3.2.
<table>
<tr>
<th>
**Format**
</th>
<th>
Identification of the format used and underlying standards. In case the DMP
refers to a collection of related datasets, indicate all of them.
</th> </tr>
<tr>
<td>
**Metadata**
</td>
<td>
Specify what metadata has been provided to enable machine-processable
descriptions of dataset. Include a link if a DCAT-AP representation for the
dataset has been published.
</td> </tr> </table>
**Table 3 - Template for Standards and Metadata**
## Data Access and Sharing
When completing this section, please refer to questions and answers 5-6 in
Section 3.3.
<table>
<tr>
<th>
**Data Access and Sharing Policy**
</th>
<th>
It is envisaged that all financial datasets in the OBEU project should be
freely accessible, in particular under the Open Data Commons Open Database
License (ODbL).
When access is restricted, justifications will be cited (ethical, personal
data, intellectual property, commercial, privacy-related, security-related).
</th> </tr>
<tr>
<td>
**Copyright and IPR**
</td>
<td>
Where relevant, specific information regarding copyrights and intellectual
property should be provided.
</td> </tr>
<tr>
<td>
**Access Procedures**
</td>
<td>
To specify how and in which manner can the data be accessed, retrieved,
queried, visualised, etc.
</td> </tr>
<tr>
<td>
**Dissemination and reuse Procedures**
</td>
<td>
To outline technical mechanisms for dissemination and reuse, including special
software, services, APIs, or other tools.
</td> </tr> </table>
**Table 4 - Template for Data Access and Sharing**
## Archiving, Maintenance and Preservation
When completing this section, please refer to questions and answers 7-12 in
Section 3.4.
<table>
<tr>
<th>
**Storage**
</th>
<th>
Physical repository where data will be stored and made available for access
(if relevant) and indication of type:
☑ OpenBudgets partner owned
☑ societal challenge domain repository
☑ open repository
☑ other
</th> </tr>
<tr>
<td>
**Preservation**
</td>
<td>
Procedures for guaranteed long-term data preservation and backup. Target
length of preservation.
</td> </tr>
<tr>
<td>
**Physical Resources**
</td>
<td>
Resources and infrastructures required to carry out the plan, especially
regarding long-term access and persistence. Information about access mechanism
including physical security features.
</td> </tr>
<tr>
<td>
**Expected Costs**
</td>
<td>
Approximate hosting, access, maintenance costs for the expected end volume,
and a strategy to cover them.
</td> </tr>
<tr>
<td>
**Responsibilities**
</td>
<td>
Individuals and/or entities responsible for ensuring that the DMP is adhered
to for this data resource.
</td> </tr> </table>
**Table 5 - Template for Archiving, Maintenance and Preservation**
# Conclusion
This deliverable outlines the guidelines and strategies for data management in
OBEU, which will be fine-tuned and extended throughout the course of the
project. Following the guidelines on Data Management in H2020 [1], we
described the purpose and scope of the datasets of OBEU, and specified the
dataset management for the OBEU project. Five kinds of stakeholders related to
OBEU are described: original data producer, data wrangler, data analyser,
system administrator/developer, and data end-user. The generic data flow chain
of OBEU is listed and explained: data discovery, data ingestion, data
persistence, data analysis, and data exposure. Following the best practices of
Linked Data publishing, we specified the 13 steps of best practices for OBEU
dataset management. Based on the above, we presented DMP guidelines for OBEU,
and DMP templates for the data management process during the lifetime of the
OBEU project.
# Introduction
The EarthServer-2 project is itself built around concepts of data management
and accessibility. Its aim is to implement enabling technologies to make large
datasets accessible to a varied community of users. The intention is not to
create new datasets but to make existing datasets (identified at the start of
the project) easier to access and manipulate, encouraging data sharing and
reuse. Additional datasets will be added during the life of the project as
they become available and the DMP will be updated as a “live” document to
reflect this. This version of the Data Management Plan is a snapshot taken on
May 31st, 2017, and contains additional datasets such as the NOAA
Hydro-Estimator and Global ECMWF Fire Forecasting model data.
# Data Organisation, Documentation and Metadata
Data will be accessible through the Open Geospatial Consortium (OGC) Web
Coverage Processing Service (WCPS) and Web Coverage Service (WCS) standards.
EarthServer-2 will establish data/metadata integration on a conceptual level
(by integrating array queries with known metadata search techniques such as
tabular search, full-text search, ontologies, etc.) and on a practical level
(by utilizing this integrated technology for concrete catalogue
implementations based on standards like ISO 19115, ISO 19119 and ISO 19139,
depending on the individual service partner needs).
# Data Access and Intellectual Property
Data access restrictions and intellectual property rights will remain as set
by the dataset owners (see Section 6). The datasets identified for the initial
release have no access restrictions.
# Data Sharing and Reuse
The aim of EarthServer-2 is to make data available for sharing and reuse
without requiring users to download the entire (huge) dataset. Data will be
available through the OGC WCPS and WCS standards, allowing users to filter and
process data at the source before transferring the results back to the client.
Five data services have been created (Marine, Climate, Earth Observation,
Planetary and Landsat), providing simple access via web portals with a
user-friendly interface to the filtering and analysis tools required by each
application domain.
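To make this access model concrete, the Python sketch below sends a WCPS query
to the Marine service endpoint listed in Section 6.1, following the rasdaman
"ProcessCoverages" request style, so that only a spatial subset is processed
on the server and transferred. The coverage name and the subset bounds are
hypothetical placeholders; real coverage names are advertised by each
service's GetCapabilities response.

```python
# Sketch of server-side filtering with a WCPS query. The coverage name
# (CHL_EXAMPLE) and the subset bounds are hypothetical placeholders.
import requests

ENDPOINT = "http://earthserver.pml.ac.uk/rasdaman/ows"
wcps_query = (
    'for c in (CHL_EXAMPLE) '
    'return encode(c[Lat(50.0:55.0), Long(-10.0:0.0)], "netcdf")'
)

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": wcps_query,
})
resp.raise_for_status()

# Only the requested subset is transferred back to the client.
with open("subset.nc", "wb") as f:
    f.write(resp.content)
```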
# Data Preservation and Archiving
EarthServer-2 will not generate new data; preservation and archiving will be
the responsibility of the upstream projects from which the original data was
obtained.
# Data Register
The data register will be maintained as a “live” document; a snapshot will be
created for each DMP release (see 6.1 and following sections).
The data register will be based upon information and restrictions supplied by
the upstream data provider matched to Horizon 2020 guidelines as below (in
_italics)_ :
* **Data set reference and name**
_Identifier for the data set to be produced._
* **Data set description**
_Descriptions of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse._
* **Standards and metadata**
_Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created._
* **Data sharing**
_Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling reuse, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related)._
* **Archiving and preservation (including storage and backup)**
_Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered._
Within EarthServer-2 currently, the original data are held by upstream
providers who have their own policies. In this case archiving and preservation
responsibility will remain with the upstream project.
## Marine Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
ESA OC-CCI
</th> </tr>
<tr>
<td>
**Organisation**
</td>
<td>
**ESA OC-CCI**
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
ESA Ocean Colour Climate Change Indicators. http://www.esa-
oceancolourcci.org/index.php?q=webfm_send/318
</td> </tr>
<tr>
<td>
**Standards**
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
**Spatial extent**
</td>
<td>
Global
</td> </tr>
<tr>
<td>
**Temporal extent**
</td>
<td>
1997-2016
</td> </tr>
<tr>
<td>
**Project Contact**
</td>
<td>
Peter Walker ([email protected])
</td> </tr>
<tr>
<td>
**Upstream Contact**
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
**Limitations**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Free
</td> </tr>
<tr>
<td>
**Constraints**
</td>
<td>
None
</td> </tr>
<tr>
<td>
**Data Format**
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
**Access URL**
</td>
<td>
http://earthserver.pml.ac.uk/rasdaman/ows?SERVICE=WCS&VERSION=2.0.1&REQUEST=GetCapabilities
</td> </tr>
<tr>
<td>
**Archiving and preservation**
**(including storage and backup)**
</td>
<td>
Data is part of the long-term ESA CCI project and the original copy is
maintained there.
</td> </tr> </table>
_Table 6-1: Data set description for the ESA Ocean Colour Climate Change
Indicators._
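For illustration, the access URL in Table 6-1 can be queried with any standard
OGC client or with plain HTTP. The following minimal sketch uses Python's
requests package and simply lists the coverages the service offers; it assumes
the endpoint is reachable:

```python
import requests

# Ask the Marine Science Data Service (Table 6-1) for its capabilities.
params = {
    "SERVICE": "WCS",
    "VERSION": "2.0.1",
    "REQUEST": "GetCapabilities",
}
resp = requests.get("http://earthserver.pml.ac.uk/rasdaman/ows", params=params)
resp.raise_for_status()
# The response is an XML document listing the coverages offered.
print(resp.text[:500])
```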
## Climate Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ECMWF ERA-Interim reanalysis**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
A selection of ERA-Interim reanalysis parameters is provided. ERA-Interim is a
global atmospheric reanalysis produced by ECMWF. It replaces ERA-40 and
extends back to 1 Jan 1979. Reanalysis data are global data sets describing
the recent history of the atmosphere, land surface, and oceans. Reanalysis
data are used for monitoring climate change, for research and education, and
for commercial applications. Currently, five surface parameters are available:
2m air temperature, precipitation, mean sea level pressure, sea surface
temperature, and soil moisture. Further, three parameters on three different
pressure levels (500, 850 and 1000 hPa) are provided: temperature,
geopotential and relative humidity. More information about the ERA-Interim
data is available at
http://onlinelibrary.wiley.com/doi/10.1002/qj.828/full
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -90 to 90); Spatial resolution: 0.5
x 0.5 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 Jan 1979 to 31 Dec 2015 (6-hourly resolution)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Dick Dee (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GRIB
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Stored in MARS archive - original data will be kept without time limit
</td> </tr> </table>
_Table 6-2: Data set description for the ERA-Interim reanalysis parameters._
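Beyond plain WCS downloads, the WCPS standard mentioned above allows filtering
and processing on the server before any data is transferred. The sketch below
follows rasdaman's KVP style for the WCS processing extension against the
ECMWF endpoint from Table 6-2; the coverage name temp2m and the axis labels
are assumptions and should be taken from the service's GetCapabilities
response:

```python
import requests

# Hypothetical WCPS query: subset the 2m temperature coverage over a small
# bounding box and return it as CSV. Coverage and axis names are assumed.
wcps = 'for c in (temp2m) return encode(c[Lat(48:54), Long(2:8)], "csv")'
resp = requests.get(
    "http://earthserver.ecmwf.int/rasdaman/ows",
    params={
        "SERVICE": "WCS",
        "VERSION": "2.0.1",
        "REQUEST": "ProcessCoverages",
        "query": wcps,
    },
)
print(resp.text[:200])
```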
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**GloFAS river discharge forecast data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF / JRC**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data is part of the Global Flood Awareness System (GloFAS)
(www.globalfloods.eu). The GloFAS system produces daily flood forecasts in a
pre-operational manner. More information about the data can be found at
http://www.hydrol-earth-syst-sci.net/17/1161/2013/hess-17-1161-2013.pdf
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -60 to 90); Spatial resolution: 0.1
x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 April 2008 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Florian Pappenberger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
TBD
</td> </tr> </table>
_Table 6-3: Data set description for the Global Flood Awareness System._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ERA river discharge data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF / JRC**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -90 to 90); Spatial resolution: 0.1
x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 January 1981 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Florian Pappenberger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-4: Data set description for the ERA river discharge data._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Global ECMWF Fire Forecasting model data, as part of the Copernicus
Emergency Management Service**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The European Forest Fire Information System (EFFIS) is currently being
developed in the framework of the Copernicus Emergency Management Services to
monitor and forecast fire danger in Europe. The system provides timely
information to civil protection authorities in 38 nations across Europe
(http://forest.jrc.ec.europa.eu/effis/abouteffis/effis-network/) and mostly
concentrates on flagging regions which might be at high danger of spontaneous
ignition due to persistent drought. GEFF is the modelling component of EFFIS
and implements the three most widely used fire danger rating systems: the US
NFDRS, the Canadian FWI and the Australian MARK-5. The dataset extends from
1980 to date and is updated once a month when new ERA-Interim fields become
available. The following indices are available via GEFF: (i) Fire Weather
Index (FWI), (ii) Fire Danger Index (FDI) and (iii) Burning Index (BI).
Further information is available at
http://journals.ametsoc.org/doi/full/10.1175/JAMC-D-15-0297.1
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Fire Weather Index data will be made available through the OGC WCS/WCPS
standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 179.297, Latitude: -89.4628 to 89.4628); Spatial
resolution: 0.703 x 0.703 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 January 1980 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Francesca Di Giuseppe (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
Available in beta version at the moment:
http://apps.ecmwf.int/datasets/data/geff-reanalysis/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Stored in MARS archive - original data will be kept without time limit
</td> </tr> </table>
_Table 6-5: Data set description for Global ECMWF Fire Forecasting model data,
as part of the Copernicus Emergency Management Service._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**CAMS Regional Air Quality - Reanalysis data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
CAMS is the Copernicus Atmosphere Monitoring Service and will deliver various
products (near-real-time, reanalysis, etc.) of European and global atmospheric
composition on an operational basis. CAMS produces daily air quality ensemble
reanalysis for the air quality parameters Particulate Matter 10 (PM10),
Particulate Matter 2.5 (PM2.5), Nitrogen Dioxide (NO2), and Ozone (O3).
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Europe (Longitude: -25.0 to 45.0, Latitude: 70.0 to 30.0); Spatial resolution:
0.1 x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2014 - 2016; hourly resolution
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Miha Razinger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://www.regional.atmosphere.copernicus.eu/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is available for download at the URL provided.
</td> </tr> </table>
_Table 6-6: Data set description for_ _CAMS Regional Air Quality - Reanalysis
data._
## Earth Observation Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**MOD 04 - Aerosol Product; MOD 05 - Total Precipitable Water; MOD 06 - Cloud
Product; MOD 07 - Atmospheric Profiles; MOD 08 - Gridded Atmospheric Product;
MOD 11 - Land Surface Temperature and Emissivity; MOD 35 - Cloud Mask**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NASA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
There are seven MODIS Level 3 Atmosphere Products, each covering a different
temporal scale: Daily, 8-Day, and Monthly. Each of these Level 3 products
contains statistics derived from over 100 science parameters from the Level 2
Atmosphere products: Aerosol, Precipitable Water, Cloud, and Atmospheric
Profiles. A range of statistical summaries (scalar statistics and 1- and
2-dimensional histograms) are computed, depending on the Level 2 science
parameter. Statistics are aggregated to a 1° x 1° equal-angle global grid. The
daily product contains ~700 statistical summary parameters. The 8-day and
monthly products contain ~900 statistical summary parameters.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (1° x 1° equal-angle grid)
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2000 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
http://modaps.nascom.nasa.gov/services/user/
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
The distribution of the MODAPS data sets is funded by NASA's Earth-Sun System
Division (ESSD). The data are not copyrighted; however, in the event that you
publish data or results using these data, we request that you include the
following acknowledgment:
"The data used in this study were acquired as part of the NASA's Earth-Sun
System Division and archived and distributed by the MODIS Adaptive Processing
System
(MODAPS)."
We would appreciate receiving a copy of your publication, which can be
forwarded to [email protected].
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from HDF)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 MODIS Atmosphere Products
</td> </tr> </table>
_Table 6-7: Data set description for the MODIS Level 3 Atmosphere Products._
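The table above notes that the served GeoTIFF is generated from the upstream
HDF products. A minimal sketch of such a conversion using the GDAL Python
bindings is shown below; the granule file name is a placeholder, and GDAL must
be built with HDF4 support:

```python
from osgeo import gdal

# Open a (hypothetical) MODIS Level 3 granule; each science parameter is
# exposed as a separate HDF subdataset.
src = gdal.Open("MOD08_D3.A2016001.006.hdf")
for name, description in src.GetSubDatasets():
    print(name, "->", description)

# Convert the first subdataset to GeoTIFF.
gdal.Translate("mod08_parameter.tif", src.GetSubDatasets()[0][0])
```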
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**SMOS Level 2 Soil Moisture**
**(SMOS.MIRAS.MIR_SMUDP2); SMOS Level 2 Ocean Salinity
(SMOS.MIRAS.MIR_OSUDP2)**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
ESA's Soil Moisture Ocean Salinity (SMOS) Earth Explorer mission is a radio
telescope in orbit, pointing back at Earth rather than into space. Its
Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) picks up faint
microwave emissions from Earth's surface to map levels of land soil moisture
and ocean salinity.
These are key geophysical parameters: soil moisture for hydrology studies and
salinity for enhanced understanding of ocean circulation, both vital for
climate change models.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
12-01-2010 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from measurements geo-located in an equal-area grid system
ISEA 4H9)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 SMOS Products
</td> </tr> </table>
_Table 6-8: Data set description for ESA's Soil Moisture Ocean Salinity
parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat8 L1T**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level 1T - Terrain Corrected
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
European
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2014 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
EO-Support (https://earth.esa.int/web/guest/contact-us)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Acceptance of ESA Terms and Conditions
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
ESA is an International Co-operator with USGS for the
Landsat-8 mission. Data is downlinked via the Kiruna and Matera (KIS and MTI)
stations whenever the satellite passes over Europe, starting from November
2013. Typically each station will receive 2 or 3 passes per day, and there
will be some new scenes for each path, in accordance with the overall mission
acquisition plan.
The Neustrelitz data is available on the portal from May 2013 to December
2013. Data will be processed to either L1T or L1Gt product format as soon as
it is downlinked. The target time is for scenes to be available for download
within 3 hours of reception.
https://landsat8portal.eo.esa.int/faq/
</td> </tr> </table>
_Table 6-9: Data set description for Landsat8 L1T parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel2**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-1C
Feature layers (NDVI, Cloudmask, RGB)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Q3 2015
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_Notice
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_Notice
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
JPG2000 for L1C;
GeoTIFF for feature layers generated from L1C
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-10: Data set description for Sentinel2 Level-1C parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel2 / Sentinel3**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-1C
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
last year
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_Notice
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_Notice
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
JPG2000 / netCDF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-11: Data set description for_ _Sentinel2 / Sentinel3 parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Hydro Estimator**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NOAA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The Hydro-Estimator (H-E) uses infrared (IR) data from
NOAA's Geostationary Operational Environmental Satellites (GOES) to estimate
rainfall rates. Estimates of rainfall from satellites can provide critical
rainfall information in regions where data from gauges or radar are
unavailable or unreliable, such as over oceans or sparsely populated regions.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
22 May 2006 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://www.star.nesdis.noaa.gov/star/productdisclaimer.php
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://www.star.nesdis.noaa.gov/star/productdisclaimer.php
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-12: Data set description for_ _Hydro Estimator._
## Planetary Science Data Service
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MGS MOLA GRIDDED DATA RECORDS**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
MARS ORBITER LASER ALTIMETER
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
GLOBAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
NOT APPLICABLE (Derived from multiple experimental data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-13: Data set description for Mars Orbiter LASER Altimeter data._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MRO-M-CRISM-3-RDR-TARGETED-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
TRDR - Targeted Reduced Data Records contain data calibrated to radiance or
I/F.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
LOCAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
VARIABLE
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-14: Data set description for MRO-M-CRISM Targeted Reduced Data
Records._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MRO-M-CRISM-5-RDR-MULTISPECTRAL-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
MRDR - Multispectral Reduced Data Records contain multispectral survey data
calibrated, mosaicked, and map projected.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
REGIONAL/GLOBAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Not applicable. Derived data from multiple acquisition times.
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS archives and the original copies are
maintained there
</td> </tr> </table>
_Table 6-15: Data set description for MRO-M-CRISM Multispectral Reduced Data
Records._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**LRO-L-LOLA-4-GDR-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
LRO LOLA Gridded Data Record
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
NOT APPLICABLE (Derived from multiple experimental data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS project and the original copies are
maintained there
</td> </tr> </table>
_Table 6-16: Data set description for LRO LOLA gridded data._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MEX-M-HRSC-5-REFDR-DTM-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Mars Express HRSC topography
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
LOCAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
VARIABLE
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term ESA PSA project and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-17: Data set description for Mars Express HRSC topography
parameters._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**CH1-ORB-L-M3-4-L2-REFLECTANCE-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Chandrayaan-1 Moon Mineralogy Mapper (M3)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
LOCAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
VARIABLE
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or similar)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://moon.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long term NASA PDS project and the original copies are
maintained there
</td> </tr> </table>
_Table 6-18: Data set description for Moon Mineralogy Mapper (M3) parameters._
## Landsat Data Cube Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ANU/NCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
http://geonetwork.nci.org.au/geonetwork/srv/eng/metadata.show?id=24&currTab=simple
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Longitude: 108 to 155, Latitude: -10 to -45; Universal Transverse Mercator
(UTM) and Geographic Lat-Lon
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015.
Creative Commons Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015.
Creative Commons Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF [NetCDF-CF conversion currently underway]
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://rasdaman.nci.org.au/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
This data collection is part of the Research Data Storage Infrastructure
program, which aims for long-term preservation.
</td> </tr> </table>
_Table 6-19: Data set description for Landsat data._
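As with the other data services, the access URL in Table 6-19 accepts standard
WCS requests. The following sketch performs a hypothetical spatial-subset
GetCoverage request in Python; the coverage identifier LS8_NBAR is an
assumption, and real identifiers should be read from the GetCapabilities
response:

```python
import requests

# Request a small spatial subset of a (hypothetical) Landsat coverage as TIFF.
resp = requests.get(
    "http://rasdaman.nci.org.au/rasdaman/ows",
    params={
        "SERVICE": "WCS",
        "VERSION": "2.0.1",
        "REQUEST": "GetCoverage",
        "COVERAGEID": "LS8_NBAR",
        # Repeated SUBSET parameters, one per axis.
        "SUBSET": ["Lat(-35.5,-35.0)", "Long(149.0,149.5)"],
        "FORMAT": "image/tiff",
    },
)
resp.raise_for_status()
with open("subset.tif", "wb") as out:
    out.write(resp.content)
```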
**1 Initial DMP**
# 1.1 Data set description
To reach our objective of designing, implementing and evaluating the PAL
system with stakeholders involved in each phase, we will apply different
research methods. The user and functional requirements are derived in a
continuous process during the project in which we apply co-design/co-creation
techniques during interactions with the stakeholders (formal and informal
caregivers and children). Structured interviews, focus groups, observational
studies and questionnaires will be used to derive user and functional
requirements. Furthermore, by doing fast prototyping, stakeholders can provide
their input on concrete aspects of the system to improve implementation.
Finally, during more formal evaluations, we will make use of questionnaires,
logging and observational data which is analyzed both statistically and more
ethnographically (e.g. grounded theory) to evaluate the system and provide
refined user requirements.
Below we provide a table (see Table 1) in which all data is structured: the
"what" column describes the kind of data collected, the "data storage" column
describes what type is stored, and the "form" column describes how it is
stored. Not all data will always be analyzed to result in all suggested data
forms. At this moment we have little information on which datasets we are
going to link with each other to provide other relevant metadata sets. This
will be further addressed in the next version of the DMP.
Data gathered within the FP7 project ALIZ-e will be used as starting point and
new information will be integrated.
<table>
<tr>
<th>
What
</th>
<th>
Data storage
</th>
<th>
Form
</th> </tr>
<tr>
<td>
Consent forms (Parents/custodian, children with and without T1DM, formal
caregivers, teachers)
</td>
<td>
paper, scanned
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
Participant data (gender, age, years T1DM, robot experience, ehealth
experience, type of diabetes therapy - insulin pump, multi-injective)
</td>
<td>
Paper/(excel) table
</td>
<td>
xlsx/csv
</td> </tr>
<tr>
<td>
Requirement elicitation with Structured interviews
</td>
<td>
Video/Audio Recordings
</td>
<td>
docx/pdf, mp4/mov/mpg/wmv, mp3/wav
</td> </tr>
<tr>
<td>
Requirement elicitation with Focus groups
</td>
<td>
Video/Audio Recordings, observation notes, output (e.g. drawings)
</td>
<td>
Mind maps, docx/pdf, mp4/mov/mpg/wmv, mp3/wav
</td> </tr>
<tr>
<td>
Requirement elicitation with Observational studies
</td>
<td>
(Video/Audio Recordings), observation notes
</td>
<td>
Observer forms with recurring aspects (docx, xlsx), mp4/mov/mpg/wmv, mp3/wav,
docx/pdf
</td> </tr>
<tr>
<td>
Questionnaires on user requirements
</td>
<td>
Paper/computer
</td>
<td>
docx/xlsx/csv
</td> </tr>
<tr>
<td>
Performance data
</td>
<td>
logging
</td>
<td>
xlsx/csv
</td> </tr>
<tr>
<td>
Adherence
</td>
<td>
logging, data input
</td>
<td>
xlsx/csv, docx/pdf
</td> </tr>
<tr>
<td>
Emotional state
</td>
<td>
data logging, questionnaires, photos (selfies), observation notes, dialogue
data
(speech/text), video
</td>
<td>
xlsx/csv, jpg/png, mp4/mov/mpg/wmv, mp3/wav, docx/pdf
</td> </tr>
<tr>
<td>
PAL experience (child, formal and informal caregivers)
</td>
<td>
questionnaires, observations notes
</td>
<td>
xlsx/csv, docx/pdf
</td> </tr>
<tr>
<td>
autonomy/relatedness/competence feelings
</td>
<td>
questionnaires, observations notes, dialogue data (speech/text)
</td>
<td>
docx/xlsx/csv,
mp4/mov/mpg/wmv, mp3/wav, docx/pdf
</td> </tr>
<tr>
<td>
Glucose values, nutritional and
lifestyle habits
</td>
<td>
logging, explicit input user
</td>
<td>
xlsx/csv, database
</td> </tr>
<tr>
<td>
parent/custodian questionnaires on parenting capacities that influence disease
management (pre/post, compared to parents of healthy children) - e.g parental
overprotection, perceived child vulnerability, parenting stress
</td>
<td>
questionnaires
</td>
<td>
docx/xlsx/csv
</td>
<td>
</td> </tr>
<tr>
<td>
parent/custodian and/or teacher questionnaires (atti-
tude/knowledge/trust/skills/shared responsibility)
</td>
<td>
questionnaires
</td>
<td>
docx/xlsx/csv
</td>
<td>
</td> </tr>
<tr>
<td>
Professional caregiver questionnaires
(Trust/acceptance/awareness/tailoring/PAL experience/effect on child)
</td>
<td>
questionnaires
</td>
<td>
docx/xlsx/csv
</td>
<td>
</td> </tr>
<tr>
<td>
User, functional and design requirements
</td>
<td>
Derived from all data
</td>
<td>
sCET database, word or excel, xml
</td> </tr> </table>
Table 1: Data structure
# 1.2 Standards and metadata
For transcription we will use Word (.docx) or special transcription software
(to be decided on). For mind maps we will use PowerPoint (.pptx). Data will be
represented in Excel (.xlsx or .csv); data analysis output will be in SPSS,
PRISM or R for quantitative analysis and in Atlas.ti or Word for more
qualitative observation analysis. Recordings are saved in mp3 (audio), mp4
(video) or another widespread format.
Use cases, requirements and claims are stored in the situated Cognitive
Engineering format (online) which will be exported to doc, html or xml. All
data will be supported by an ontology in RDF format.
All data will be collected in folders using the following naming format: YEAR
MM partnerAcronym location experiment (see the sketch below). The DMP is
further supported by an experimentation report template in docx format with
information about the experiment performed (main researcher, goal, lessons
learned, time of execution, partners involved, methodology summary, overview
of data outcomes with references to data storage, conclusions and references
to publications).
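As a minimal sketch, a helper that produces folder names following this
convention could look as follows (the helper and its arguments are
illustrative only):

```python
from datetime import date

def experiment_folder(partner: str, location: str, experiment: str,
                      when: date) -> str:
    # Folder naming convention: YEAR MM partnerAcronym location experiment.
    return f"{when.year} {when.month:02d} {partner} {location} {experiment}"

# Example (hypothetical values):
print(experiment_folder("TNO", "Utrecht", "focusgroup1", date(2016, 3, 1)))
# -> "2016 03 TNO Utrecht focusgroup1"
```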
# 1.3 Data sharing
Most data will be anonymous and therefore freely accessible for the scientific
community; recordings of voice and face are ethically a different issue, as
may be the case with medical data, some dialogues and connections between
data sets (e.g. glucose values and diary input). The data that is provided,
disclosed or otherwise made freely accessible shall not include personal data
as defined by Article 2, Section (a) of the Data Protection Directive
(95/46/EC).
Particular attention will be paid to video and some of the other data. In
this case we will take all necessary steps to ensure that the data and video
will be accessible only after the signature of a specific written agreement
that imposes that the same data and video will not be shared further. The
modalities and possibilities for sharing data and video will depend on the
written informed consent given by caretakers and children and on the ethical
committees of the partners involved. For sharing data and video between
partners we have a specific Material Transfer Agreement (MTA), which can also
be used if we want to share outside of the consortium. In this specific case
the MTA will be modified considering that the recipient parties are not
partners of the consortium.
As far as possible and useful for the community we will put the data on the
OpenAire supported Zenodo repository (https://zenodo.org/).
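For illustration, deposits on Zenodo can also be scripted against its public
REST API (https://developers.zenodo.org/). The sketch below is minimal and
assumes a personal access token; the token and file name are placeholders, and
the calls should be checked against the current API documentation:

```python
import requests

TOKEN = "<personal-access-token>"  # placeholder
BASE = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
dep = requests.post(BASE, params={"access_token": TOKEN}, json={}).json()

# 2. Attach an (anonymized) data file to it; publishing is a separate step.
with open("pal_questionnaires_anonymized.csv", "rb") as fh:  # placeholder file
    requests.post(
        f"{BASE}/{dep['id']}/files",
        params={"access_token": TOKEN},
        data={"name": "pal_questionnaires_anonymized.csv"},
        files={"file": fh},
    )
```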
# 1.4 Archiving and preservation (including storage and backup)
All data is preserved either in the Zenodo repository, in the project SVN at
TNO or, for more sensitive data, at the specific partners' locations (e.g. the
medical data connected to the user). In particular, data collected during the
field experiments with the stakeholders will be stored and preserved in each
of the leading country partners: the Netherlands and Italy.
This data will be preserved according to the rules for research data, i.e.,
for at least 5 years after the project.
# Background
The purpose of this Data Management Plan (DMP) is to provide an analysis of
the main elements of the data management policy that will be used by the
project with regard to all the datasets that will be generated by the project.
The DMP is not a fixed document, but will evolve during the lifespan of the
project.
The DMP will address the points below on a dataset by dataset basis and should
reflect the current status of reflection within the consortium about the data
that will be produced.
The approach to the DMP follows that outlined in the “_Guidelines on Data
Management in Horizon 2020_” (Version 2.1, 15 February 2016).
**Data set reference and name:** Identifier for the data set to be produced.
**Data set description:** Description of the data that will be generated or
collected, its origin (in case it is collected), nature and scale and to whom
it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse.
**Standards and metadata:** Reference to existing suitable standards of the
discipline. If these do not exist, an outline on how and what metadata will be
created.
**Data sharing:** Description of how data will be shared, including access
procedures, embargo periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use, and
definition of whether access will be widely open or restricted to specific
groups. Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
**Archiving and preservation (including storage and backup):** Description of
the procedures that will be put in place for long-term preservation of the
data. Indication of how long the data should be preserved, what is its
approximated end volume, what the associated costs are and how these are
planned to be covered.
# Admin Details
**Project Title:** Audio Commons: An Ecosystem for Creative Reuse of Audio
Content
**Project Number:** 688382
**Funder:** European Commission (Horizon 2020)
**Lead Institution:** Universitat Pompeu Fabra (UPF)
**Project Coordinator:** Prof Xavier Serra
**Project Data Contact:** Sonia Espi, [email protected]
**Project Description:** The democratisation of multimedia content creation
has changed the way in which multimedia content is created, shared and
(re)used all over the world, yielding significant amounts of user-generated
multimedia resources, a big part of which is shared under open licenses. At
the same time,
creative industries need to reduce production costs in order to remain
competitive. There is, therefore, an opportunity for creative industries to
incorporate such content in their productions, but there is a lack of
technologies for easily accessing and incorporating that type of content in
their
creative workflows. In the particular case of sound and music, a huge amount
of audio material like sound samples, soundscapes and music pieces, is
available and released under Creative Commons licenses, both coming from
amateur and professional content creators. We refer to this content as the
'Audio Commons'. However, there exist no practical ways in which Audio Commons
can be embedded in the production workflows of the creative industries, and
licensing issues are not easily handled across the production chain. As a
result, most of this content remains unused in professional environments. The
aim of this project is to create an ecosystem of content, technologies and
tools to bring the Audio Commons to the creative industries, enabling
creation, access, retrieval and reuse of Creative Commons audio content in
innovative ways that fit the requirements of the use cases considered (e.g.,
audiovisual, music and video games production). Furthermore, we tackle rights
management challenges derived from the content reuse enabled by the created
ecosystem and research emerging business models that can arise from it.
Our project will benefit creative industries by providing new and innovative
creativity-supporting tools and reducing production costs, and will benefit
content creators by offering a channel to expose their works to professional
environments and to allow them to (re)licence their content.
# Dataset Information
Individual Dataset Information
**Data set reference and name**
DS 2.1.1: Requirements interviews
## Data set description
Notes/transcripts from structured interviews with creative industry content
users in Task 2.1: Analysis of the requirements from creative industries.
WP: WP2 / Task: Task 2.1 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Text documents
## Data sharing
Anonymized form to be made available as appendix to Deliverable D2.1:
Requirements report and use cases.
## Archiving and preservation (including storage and backup)
Stored on project document server.
Estimated final size (Bytes): 100K
DS 2.2.1: Audio Commons Ontology
## Data set description
Definition of Audio Commons Ontology, the formal ontology for the Audio
Commons Ecosystem. Data form of D2.2: Draft ontology specification and D2.3:
Final ontology specification.
WP: WP2 / Task: Task 2.2 Responsible: QMUL
**Standards and metadata**
OWL Web Ontology Language
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Stored on project document server (& github) Estimated final size (Bytes): 10K
DS 2.3.1: ACE interconnection evaluation results
## Data set description
Results of evaluation of technological solutions for the
orchestration/interconnection of the different actors in the Audio Commons
ecosystem. Supporting data for deliverable D2.5: Service integration
technologies.
WP: WP2 / Task: Task 2.3 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Project document store.
Estimated final size (Bytes): 100K
DS 2.5.1: ACE Service evaluation results
## Data set description
Results of continuous assessment of ontologies, API specification and service
orchestration through the lifetime of the project, including API usage
statistics.
WP: WP2 / Task: Task 2.5 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Public
## Archiving and preservation (including storage and backup)
Project document store.
Estimated final size (Bytes): 1M
DS 2.6.1: ACE Service
## Data set description
Freesound and Jamendo content exposed in the Audio Commons Ecosystem. Not
strictly a “dataset”, rather a service providing access to data. WP: WP2 /
Task: Task 2.6 Responsible: MTG-UPF (& Jamendo)
**Standards and metadata**
Audio Commons Ontology
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
Dynamic service availability, no plans to provide a “snapshot”.
Estimated final size (Bytes): N/A
DS 4.2.1: Semantic annotations of musical samples
## Data set description
Results of semantically annotating musical properties such as the envelope,
the particular note being played in a recording, or the instrument that plays
that note. Supporting data for deliverables D4.4, D4.9, D4.10, D4.11
WP: WP4 / Task: Task 4.2 Responsible: MTG-UPF (& QMUL)
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Public: Access via Audio Commons API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 10kBytes per file x 500k files = 5
GBytes Estimated final size (Bytes): 5 GBytes
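To make the format choice concrete, a single annotation record of this kind
might be serialized as in the sketch below; the property names are
illustrative placeholders, not actual Audio Commons Ontology terms (those are
defined in D2.2/D2.3):

```python
import json

# Hypothetical annotation of one musical sample.
annotation = {
    "file_id": "freesound:123456",
    "note": "A4",
    "midi_note": 69,
    "instrument": "violin",
    "envelope": {"attack_ms": 12, "decay_ms": 140},
}
print(json.dumps(annotation, indent=2))
```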
DS 4.3.1: Semantic annotations of musical pieces
## Data set description
Results of music piece characterisations such as bpm, tonality or structure.
The specific selection of audio properties to include in the semantic
annotation will depend on the requirements of the Audio Commons Ontology.
Supporting data for deliverables D4.4, D4.9, D4.10, D4.11
WP: WP4 / Task: Task 4.3 Responsible: QMUL (& MTG-UPF)
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Public: Access via Audio Commons API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 300kBytes per file x 500k files = 150
GBytes Estimated final size (Bytes): 150 GBytes
DS 4.4.1: Evaluation results of annotations of musical samples
## Data set description
Results of evaluation of automatic methods for the semantic annotation of
music samples. Results may include human evaluations via listening tests, if
required. Supporting data for deliverables D4.4, D4.10
WP: WP4 / Task: Task 4.4 Responsible: MTG-UPF (& QMUL)
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
Statistical analysis: Public. Listening tests: Data collected and stored
according to ethics policy and approval; anonymized result data publicly
available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely on paper. Estimated final size (Bytes): 100K
DS 4.5.1: Evaluation results of annotations of musical pieces
## Data set description
Results of evaluation of automatic methods for the semantic annotation of
music pieces. Results may include human evaluations via listening tests, if
required. Supporting data for deliverables D4.5, D4.11
WP: WP4 / Task: Task 4.5 Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
Statistical analysis: Public. Listening tests: Data collected and stored
according to ethics policy and approval; anonymized result data publicly
available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated final
size (Bytes): 100K
DS 4.6.1: Evaluation results of musical annotation interface
## Data set description
Results of evaluation of interface for manually annotating musical content, in
terms of its usability and its expressive power for annotating music samples
and music pieces. The evaluation will be carried out with real users and in
combination with the evaluation of Task 5.4. Supporting data for deliverable
D4.9 WP: WP4 / Task: Task 4.6 Responsible: MTG-UPF
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated final
size (Bytes): 100K
DS 4.7.1: Outputs of integrated annotation technology: Musical content
## Data set description
Annotations of Freesound and Jamendo content. Success in Task 4.7 will result
in at least 70% of Freesound (musical content) and Jamendo content annotated
with Audio Commons metadata as defined in the Audio Commons Ontology. This
will incorporate datasets DS 4.2.1 and DS 4.3.1. WP: WP4 / Task: Task 4.7
Responsible: MTG-UPF & Jamendo
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
ACE Server
Estimated final size (Bytes): 150 GBytes
DS 5.1.1: Timbral metadata & ontology of timbral descriptors
## Data set description
Timbral metadata in existing content from Freesound (and potentially other
sources), supplemented with descriptors from verbal elicitation experiments.
Analysis will provide an ontology of timbral descriptors. Data will support
Deliverable D5.1.
WP: WP5 / Task: Task 5.1
Responsible: Surrey-IoSR (& MTG-UPF)
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition. Analysis will be stored in free text and tabular form
(e.g. CSV).
## Data sharing
Existing metadata: Public. Results of verbal elicitation: Data collected and
stored anonymously according to ethics policy and approval; result data
publicly available.
## Archiving and preservation (including storage and backup)
Project document server.
Estimated final size (Bytes): 1M
DS 5.2.1: Timbral listening tests
## Data set description
Results of listening experiments on timbre perception, carried out to inform
the specification of required enhancements to existing metrics, and of
modelling approaches for significant timbral attributes not covered by the
prototype system.
WP: WP5 / Task: Task 5.2 Responsible: Surrey-IoSR
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Data collected and stored anonymously according to ethics policy and approval:
publicly available.
## Archiving and preservation (including storage and backup)
Project document server.
Estimated final size (Bytes): 100k
Individual Dataset Information
**Data set reference and name**
DS 5.3.1: Evaluation results of automatic annotation of non-musical content
## Data set description
Results of evaluation of automatic methods for the semantic annotation of
non-musical content, including listening tests where appropriate. Annotations will
be evaluated against the timbral descriptor hierarchy defined in Task 5.1.
Supporting data for Deliverables D5.3, D5.7
WP: WP5 / Task: Task 5.3
Responsible: Surrey-CVSSP (& Surrey-IoSR)
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Data collected and stored anonymously according to ethics policy and approval:
publicly available.
## Archiving and preservation (including storage and backup)
Project document server.
Estimated final size (Bytes): 100k
DS 5.4.1: Evaluation results of non-musical annotation interface
## Data set description
Results of evaluation of the interface for manually annotating non-musical
content, in terms of its usability and its expressive power.
The evaluation will be carried out with real users and in combination with the
evaluation of Task 4.6. Supporting data for deliverable D5.5.
WP: WP5 / Task: Task 5.4 Responsible: MTG-UPF
**Standards and metadata**
Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated final
size (Bytes): 100K
DS 5.5.1: Outputs of integrated annotation technology: Non-musical content
## Data set description
Annotations of Freesound and Jamendo content. Success in Task 5.5 will result
in at least 70% of Freesound (non-musical) content annotated with Audio
Commons metadata as defined in the Audio Commons Ontology. This will
incorporate datasets DS 4.2.1 and DS 4.3.1.
WP: WP5 / Task: Task 5.5 Responsible: MTG-UPF
## Standards and metadata
Annotations will be stored using standard formats such as JSON and YAML, and
Semantic Web formats such as RDF/XML and N3, and following the Audio Commons
Ontology definition.
**Data sharing**
Available via ACE service API
## Archiving and preservation (including storage and backup)
ACE Server. Annotation size estimate: 100kBytes per file x 200k files = 20
GBytes Estimated final size (Bytes): 20 GBytes
DS 6.4.1: Evaluation results of ACE for Creativity Support
## Data set description
Results of holistic evaluation of the ACE in the context of Creativity
Support. This will include the results of novel methods to assess how the ACE
system and tools facilitate creative flow, discovery, innovation and other
relevant dimensions of creative work. Supporting data for Deliverables 6.8,
6.12. WP: WP6 / Task: Task 6.4
Responsible: QMUL (with Industrial Partners)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated
final size (Bytes): 100K
DS 6.5.1: Evaluation results of ACE in music production
## Data set description
Results of evaluation of ACE in music production, measuring the utility of ACE
in typical music production workflows. The results will include usability data
from beta testers available from Waves and students of Queen Mary’s Media and
Arts Technology (MAT) programme. Supporting data for
Deliverable 6.4.
WP: WP6 / Task: Task 6.5 Responsible: QMUL (with Waves)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated
final size (Bytes): 100K
DS 6.6.1: Evaluation results of search and retrieval interfaces for accessing
music pieces
## Data set description
Results of evaluation of search and retrieval interfaces for accessing Audio
Commons music pieces. The data will support assessment of how ACE supports
information seeking activities in creative music production using the web-
based interfaces created in Task 6.6. Supporting data for Deliverable D6.5.
WP: WP6 / Task: Task 6.6 Responsible: QMUL (with Jamendo)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated
final size (Bytes): 100K
DS 6.7.1: Evaluation results of ACE in sound design and AV production
## Data set description
Results of evaluation of ACE in sound design and audiovisual production. The
results will include usability data from beta testers available from
AudioGaming and students from Surrey’s Film and Video Production Engineering
BA (Hons). Supporting data for Deliverable D6.6. WP: WP6 / Task: Task 6.7
Responsible: QMUL (with AudioGaming)
**Standards and metadata**
Free text and Tabular (e.g. CSV)
## Data sharing
Usability data collected and stored according to ethics policy and approval;
anonymized result data publicly available.
## Archiving and preservation (including storage and backup)
Project document server. Personally identifiable data password-protected or
stored securely offline (e.g. paper in locked filing cabinet). Estimated
final size (Bytes): 100K
DS 7.1.1: Website statistics
## Data set description
Website visitor data and alignment with associated project events. Success in
Task 7.1 will yield 50 daily unique visitors to the AudioCommons web portal,
(excluding bots), increased by at least 50% during time periods influenced by
AudioCommons events.
WP: WP7 / Task: Task 7.1 Responsible: MTG-UPF
**Standards and metadata**
Tabular (e.g. CSV)
**Data sharing**
Public (following removal of any personally identifiable information)
## Archiving and preservation (including storage and backup)
Web server, backed up on project document server. Storage estimate: 1k / visit
x 100 visits/day x 300 days = 30MBytes
Estimated final size (Bytes): 30 MBytes
DS 7.5.1: List of Key Actors in the creative community
## Data set description
A list of Key Actors in the creative community will be built and maintained to
facilitate dissemination activities in Task 7.5. This includes personally
identifiable information such as contact details and interests, and will be
maintained according to data protection policies.
WP: WP7 / Task: Task 7.5 Responsible: MTG-UPF
## Standards and metadata
Text document
## Data sharing
Project partners only.
## Archiving and preservation (including storage and backup)
Stored on project document server, in compliance with data protection
policies. Estimated final size: 100 KB.
D6.5 Data Management Plan
Managing and sharing data during a research project has several clear
advantages. It allows the data to be found and understood quickly when needed;
it provides continuity if project staff leave or new researchers join; and it
avoids unnecessary duplication. Moreover, the data underlying publications are
maintained, allowing validation of results and favouring collaboration and
further research on the same issues. It also makes research more visible and
increases its impact.
The data management plan helps to save time and effort and makes the research
and data sharing process easier. By considering in advance what data will be
created and how, one can organize the necessary support, bearing in mind the
wider context and consequences of different options.
In this report the initial Data Management Plan (DMP) for the
TRANS‐URBAN‐EU‐CHINA project is presented. The report explains how research
data generated and used by the project will be handled during and after the
project duration. The DMP describes what data will be collected, processed or
generated with which methodologies and standards, whether and how this data
will be shared or made open, and how it will be curated and preserved.
This Data Management Plan provides a first overview on the diversity, scale
and amount of data which will be handled during the TRANS‐URBAN‐EU‐CHINA
project. The DMP provides information on the following points:
* Data Description
* Data Access and Ethical Aspects
* Standards and Metadata
* Short‐Term Storage
* Archiving and Long‐Term Preservation
The DMP is not a fixed document, but evolves during the lifespan of the
project.
# DATA SUMMARY
TRANS‐URBAN‐EU‐CHINA will generate and collect various data. Through
interviews, questionnaires, surveys, the re‐drawing of existing city plans and
literature review, urban data and the experiences of citizens, policy makers,
urban authorities, planners, administrations and other stakeholders will be
gathered.
participating city pairs and also bought from commercial data providers in
order to achieve a sound base for city indicators. In addition, contact
details of the target stakeholders for communication and dissemination
activities as well as of participants in project events will be stored.
Attachment 1 is a list of data that is expected to be generated during the
project. The list includes the types and formats of the data, size of data and
sharing options. This list reflects the current status of knowledge and
discussion within the consortium about the data to be produced within the
project. This list will evolve and develop over the lifetime of the project
and will be kept up to date on the document repository of the internal project
website.
The data generated in a project task will initially be stored by the partner
generating/acquiring the data. Datasets will be shared among project partners
through the internal project website. Any dataset containing personal data
will be anonymised before sharing. Which datasets will be provided open access
and stored in which long‐term repository will be decided later in the project.
The data collected and generated by the different consortium partners will
have multiple formats and vary in size from a few MB to several GB. The
formats range from interview transcripts, survey results, protocols, pictures
and reports to software prototypes and test data. So far, five types of
general data sets have been identified:
* text based data: interviews, surveys, publications, reports, contact details
* visual data: graphs, visual protocols, pictures, maps, diagrams
* models
* software data: test data, source code
* quantitative/qualitative data used as the basis for indicators.
Because data collection and creation is an ongoing process, questions such as
the detailed description of the nature of the data, its exact scale, to whom
the data may be useful, or whether the data underpin a scientific publication
will be answered in updated versions of the DMP. Moreover, the existence or
non‐existence of similar data and the possibilities for integration and reuse
have not yet been finally agreed between the consortium partners and will be
reported later.
The following is a summary and overview of the data planned to be generated in
the project with more details about individual datasets provided in Attachment
1.
Task 1.1 – Type: Mapping and analysing citizen perspectives to identify
opportunities and challenges of public engagement. Standards: Interviews and
surveys/questionnaires with stakeholders such as middle-class urban dwellers,
representatives of municipalities, policy makers, design agencies or
consultants. Exploitation/Sharing: Analyses and summaries of anonymised
interview and survey data will be published through the website. All
informants are informed about the research purpose, anonymity and
confidentiality and must consent before data is used.
Task 1.2 – Type: Experience of public and private institutions which provide
networks for citizens living in urban areas, in Europe and in China, with
specific attention to educative system. Standards: Direct interviews within
WP1 in cooperation with WP6 Data Management. Exploitation/Sharing: Responses
will be anonymised and summarised for comparative assessment. The summaries
will be shared with stakeholders for verification and discussion. Data will be
stored and curated by POLITO in cooperation with Tsinghua and according to WP6
Data Management.
Task 1.3 – Type: Experiences on active preservation in Europe and China of
cultural heritage, with attention to population pressure, development policies
of local economies, and financial support for heritage sites. Standards:
Interviews, good methodologies and approaches within WP1 and test results in
Living Labs within WP5 in cooperation with WP6 Data Management.
Exploitation/Sharing: Responses will be anonymised and summarised for
comparative assessment. The summaries will be shared with stakeholders for
verification and discussion. Data will be stored and curated by POLITO in
cooperation with Tsinghua and according to WP6 Data Management.
Task 1.4 – Type: Experiences in which place‐making is influenced by the design
quality of public spaces, including processes of negotiation of citizenship
rights and social agreement. The purpose is to have basic materials to study
and reinterpret during the KB phase; generation of new plans and the
definition of a series of keywords. Materials can be divided into “first‐hand
documentary sources”, such as data and plans and “second‐hand documentary
sources” such as bibliographic references. A new glossary of WORDS that make
explicit the concepts to be applied in future research programmes will be
generated, together with a new series of plans referring to specific PLACES
that represent examples of good practice. The data to be collected are
bibliographical references and series of maps of these PLACES. For the
European part, municipal databases will be used, which provide numerical data
and vector plans. Existing maps will be used as sources and re‐drawn following
a representation strategy that best communicates the concept of place‐making
and the design of public space.
Standards: Drawings, direct and indirect interviews and reports on good design
practices within WP1, in cooperation with WP6 Data Management.
Exploitation/Sharing: Responses will be anonymised and summarised for
comparative assessment. The summaries will be shared with stakeholders for
verification and discussion. Final critical results derived from a
reinterpretation and study of the collected data will be useful to trace
guidelines to be spread both to the scientific community and to stakeholders
and municipalities that could take advantage of our studies. Data will be
stored and curated by POLITO in cooperation with Tsinghua and according to WP6
Data Management.
Task 2.1 – Type: Case Studies on European and Chinese Cities on the
development process of their strategies for sustainable urbanisations.
Standards: Interviews with urban policy makers. Exploitation/Sharing:
Responses will be anonymised and summarised in case studies. The case studies
will be shared with stakeholders for verification. Data will be stored and
curated by AIT in cooperation with CAS and according to WP6 Data Management.
Task 2.2 – Type: Case Studies on European and Chinese Cities on the
implementation of integrated planning. Standards: Interviews with urban policy
makers. Exploitation/Sharing: Responses will be anonymised and summarised in
case studies. The case studies will be shared with stakeholders for
verification. Data will be stored and curated by AIT in cooperation with CCUD
and according to WP6 Data Management.
Task 2.3 – Type: List and description of mechanisms for implementation of
integrative planning. Standards: Interviews with urban policy makers,
literature review, EIP SCC document screening. Exploitation/Sharing: Responses
will be anonymised and summarised for the list and description of the
implementation mechanisms. The implementation mechanisms will be shared with
stakeholders for verification. Data will be stored and curated by ISINNOVA and
AIT in cooperation with CCUD and according to WP6 Data Management.
Task 3.1 – Type: Experiences on urban renewal, challenges, priorities,
opportunities, planning approaches, governance. Standards: Interviews within
WP3 in cooperation with WP6 Data Management. Exploitation/Sharing: Responses
will be anonymised and summarised for comparative assessment. The summaries
will be shared with stakeholders for verification and discussion. Data will be
stored and curated by IOER in cooperation with CAS and according to WP6 Data
Management.
Task 3.2 – Type: Experiences on urban expansion areas, challenges, priorities,
opportunities, planning approaches, governance. Standards: Interviews within
WP3 in cooperation with WP6 Data Management. Exploitation/Sharing: Responses
will be anonymised and summarised for comparative assessment. The summaries
will be shared with stakeholders for verification and discussion. Data will be
stored and curated by IOER in cooperation with CAS and according to WP6 Data
Management.
Task 3.3 – Type: Experiences on land banking and land administration,
challenges, priorities, opportunities, (technical and legal) approaches,
governance. Standards: Interviews within WP3 in cooperation with WP6 Data
Management. Exploitation/Sharing: Responses will be anonymised and summarised
for comparative assessment. The summaries will be shared with stakeholders for
verification and discussion. Data will be stored and curated by TUD in
cooperation with CAS and according to WP6 Data Management.
Task 4.1 – Type: Formulation of storylines illustrating integrated pathways
for urban transition. Standards: Information collected through predefined
templates from WP1‐5 in cooperation with WP6 Data Management.
Exploitation/Sharing: Responses will be summarised and shared with
stakeholders for verification and discussion. Data will be made public online
stored and curated by IOER in cooperation with CAS and according to WP6 Data
Management.
Task 4.2 – Type: Experiences on SCBA in urban policy and decision‐making,
challenges, priorities, opportunities, planning approaches, governance.
Standards: Interviews within WP4 in cooperation with WP6 Data Management.
Exploitation/Sharing: Responses will be anonymised and summarised for
comparative assessment. The summaries will be shared with stakeholders for
verification and discussion. Data will be stored and curated by IOER in
cooperation with CAS and according to WP6 Data Management.
Task 4.3 – Type: Numerical and map data on environment, image data, and
textual data. Standards: GIS data standards. Exploitation/Sharing: Textual
responses will be anonymised, summarised and shared with stakeholders for
verification and discussion. Open Data will be made public online at the CIUC
according to WP6 Data Management.
Task 5.1 ‐ Type: Meeting summaries for all Living Lab workshops and meetings.
Meeting details including numbers and lists of attendees and participants,
location specifics, duration, activities and outcomes. Documents and other
material presented and/or exchanged during the events. Standards: DOCX, PDF,
etc. Exploitation/ Sharing: The data generated in this task will be condensed
and edited to an appropriate format and publicly shared on the project’s
open access website. The full version of minutes, activities, and outcomes
will be made available to consortium partners and Advisory Board members as
well as Living Lab participants. Data published or otherwise released to the
public will include disclaimers and/or terms of use as per the policies of the
DES.
Task 5.2 – Type: Exchange of knowledge and good practices with a wider circle
of Reference Cities, identification of experts that work in urbanisation
topics. Standards: Collection of basic information about urban projects and
practices in cooperation with WP6 Data Management. No personal or sensitive
data will be asked for or collected. Collection of business contact details
for practitioners, industry, academia experts and policy makers that relate to
the urbanisation topics in Europe and China. Exploitation/Sharing: Contacts
and mailing lists as well as project information fiches will be shared with
project partners for analysis, verification and discussion. Data will be
stored and curated by IOER in cooperation with CAS and according to WP6 Data
Management.
Task 5.3 – Task 5.3 contributes data types, standards, and exploitation
results to the URBAN‐EU‐CHINA R&I Agenda and Evidence Base; it does not
generate data of its own.
Task 5.4 – Type: Promotion of project results to interested persons and
organisations. Standards: Collection of business contact details for
organisations, city practitioners, industry and academia experts and policy
makers that relate to the urbanisation topics in Europe and China.
Exploitation/Sharing: Contacts and mailing lists will be shared with project
partners for communication and dissemination purposes. The membership
information of EUROCITIES (the Membership Information hereinafter), including
but not limited to the contact details and identity information of persons
connected to the individual members, shall stay confidential to any party
under the Grant Agreement without prior consent of EUROCITIES in writing.
Access to and disclosure of the Membership Information, either in part or in
whole, shall be determined by EUROCITIES based on the relevance of the request
and interests of EUROCITIES’ members. Any request to deliver communication in
any form to any EUROCITIES member via EUROCITIES shall be decided by
EUROCITIES based on its assessment of the value and interests of any relevant
member or group of members. Data will be stored and curated by IOER in
cooperation with CAS and according to WP6 Data Management.
Task 6.3 – Task 6.3 uses the data generated in the project and processes it
for the website and other information material. It does not generate data of
its own.
# FAIR DATA MANAGEMENT
TRANS‐URBAN‐EU‐CHINA aims for 'FAIR' 1 research data, that is findable,
accessible, interoperable and re‐usable.
## Making data findable, including provisions for metadata
A first collection of datasets has been compiled in Attachment 1 at the end of
this document. A comprehensive pattern for naming the produced datasets of the
project to be published open access will be developed. As an example one
approach could be the following:
TUEC_Data_"WP.TaskNo."."DatasetNo."_"DatasetTitle"
e.g., TUEC_Data_WP1.1_InterviewCitizens. This also depends on the long‐term
data sharing platform to be chosen.
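To illustrate how such a naming pattern could be applied consistently, the sketch below generates names following the proposed convention. The function name and the CamelCase sanitisation of the title are illustrative assumptions, not part of the project's agreed tooling.

```python
import re

def tuec_dataset_name(wp_task, title, dataset_no=None):
    """Build a dataset name following the proposed TUEC pattern:
    TUEC_Data_"WP.TaskNo"."DatasetNo"_"DatasetTitle".
    Reducing the title to CamelCase alphanumerics is an illustrative
    choice for file-system safety, not a project rule."""
    safe_title = "".join(w.capitalize() for w in re.split(r"\W+", title) if w)
    suffix = f".{dataset_no}" if dataset_no is not None else ""
    return f"TUEC_Data_WP{wp_task}{suffix}_{safe_title}"

print(tuec_dataset_name("1.1", "interview citizens"))
# -> TUEC_Data_WP1.1_InterviewCitizens
```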
The internal project website is used to share and manage the collected and
generated data sets within the project. It provides a well‐organized structure
to make it easy for research teams to find, better understand and reuse the
various data by creating a consistent and well‐structured research data pool.
The TRANS‐URBAN‐EU‐CHINA project will create diverse data to document project
content and to enable other researchers to use and regenerate output data in a
systematic way. The documentation can take the form of publications, manuals
and reports. To enable a consistent description
of all datasets provided by the project, a template table is used to describe
metadata of each dataset including title, author, description, formats, etc.
(see Attachment 1).
## Making data openly accessible
When a specific task of the project is concluded, the scientific results will
be published in international peer‐reviewed journals. At this stage, data
analysis will allow the partners to identify and select the most important
data related to the specific publication. These data will be deposited in a
repository and preserved for at least 10 years. Moreover, at the end of the
project all relevant data, even if not published yet, will be preserved in the
same way.
Concerning access to the data, the consortium partners will take into account
the embargo periods or copyright details involved in the specific scientific
publications. Data will be made available as soon as possible considering
those limitations. Moreover, data that are relevant for future publications or
significant new results and discoveries worth protecting will be excluded from
data sharing and archiving.
In particular, data produced by the TRANS‐URBAN‐EU‐CHINA project may be kept
confidential and not made open access for the following reasons:
* legal properties (e.g. missing copyrights, participant confidentiality, consent agreements or intellectual property rights)
* scientific and/or business reasons (e.g. pending publications, exploitation aspects)
* technical issues (e.g. incomplete data sets).
Each consortium partner will be responsible for securing short‐term data
storage. All raw data and documents relevant to the project should be
preserved until the end of the project and then transferred to a long‐term
repository.
Intermediate documents and data generated by the project partners will be
shared through the internal part of the project website (
_http://transurbaneuchina.eu/_ ). The website can be easily accessed by all
partners. It includes all the publications, raw data, reviews, deliverable
reports, meeting minutes, slides of the presentations at the project meetings,
and further information material. No personal data will be shared by the
project partners without written consent. Personal data will be removed from
any data set, i.e. the data will be anonymised, before it is shared.
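As a purely illustrative sketch of this removal step, the snippet below copies a CSV export while dropping direct-identifier columns before the file is shared; the file names and column names are hypothetical.

```python
import csv

# Hypothetical names of columns that directly identify a person.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def anonymise_csv(src_path, dst_path):
    """Copy a CSV file, dropping all direct-identifier columns."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c.lower() not in DIRECT_IDENTIFIERS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: row[c] for c in kept})

anonymise_csv("survey_raw.csv", "survey_shared.csv")
```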
Selected data from TRANS‐URBAN‐EU‐CHINA will be shared publicly during or
after the lifetime of the project. All long‐term data collected or generated
will be deposited in a repository. The final repository has not been chosen
yet. The choice of repository will depend on:
* location of repository
* research domain
* costs
* open access options
* prospect of long‐term preservation.
One option considered for long‐term data archiving and publication is the
research data repository OpARA (Open Access Repository and Archive). OpARA is
the institutional repository of the TU Dresden and is operated in cooperation
with the TU BAF Freiberg. It allows the publication and referencing of
research data (by a DOI) with metadata for reuse (“Open Access”). For closed
data restricted access is possible as well as the setting of an embargo period
before publication. Interested parties can search for published data and
download them via a web browser. The data are archived for at least 10 years
according to the university’s ‘Guidelines for Safeguarding Good Scientific
Practice, Avoiding Scientific Misconduct and Dealing with Violations’.
Another repository considered is ZENODO ( _https://zenodo.org/_ ). This is an
online, free-of-charge storage service created through the European
Commission’s OpenAIREplus project and hosted at CERN, Switzerland. It
encourages open
access deposition of any data format, but also allows deposits of content
under restricted or embargoed access. Contents deposited under restricted
access are protected against unauthorized access at all levels. Access to
metadata and data files is provided over standard protocols such as HTTP and
OAI‐PMH. Data files are kept in multiple replicas in a distributed file
system, which is backed up to tape every night. Data files are replicated in
the online system of ZENODO. Data files have versions attached to them, whilst
records are not versioned. Derivatives of data files are generated, but the
original content is never modified. Records can be retracted from public view;
however, the data files and records are preserved. The uploaded data is
archived as a Submission Information Package in ZENODO. Files stored in ZENODO
have an MD5 checksum of the file content, which is checked against the stored
checksum to ensure that the file content remains intact. Items in ZENODO will
be retained for the lifetime of the repository, which is also the lifetime of
the host laboratory CERN; CERN currently has an experimental programme defined
for the next 20 years. Each dataset can be referenced by at least one unique
persistent identifier (DOI), in addition to other forms of identification
provided by ZENODO.
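The fixity check described above can also be reproduced locally before depositing and after downloading a file. The following is a minimal sketch using Python's standard library; the file name is hypothetical.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum at deposit time and compare it after download
# to confirm that the file content is still intact.
recorded = md5_of("dataset_v1.zip")
assert md5_of("dataset_v1.zip") == recorded, "file content has changed"
```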
## Making data interoperable
Since only common formats such as pdf, docx, xlsx, jpeg, gif, mpg, mp3, mp4
will be used in the project, interoperability will not be an issue.
## Increase data re‐use (through clarifying licences)
Licensing options will be considered and decided later in the project.
# ALLOCATION OF RESOURCES
The costs of short‐term data storage, and of preparing data and documentation
for long term storage will be borne by the project partners.
Long‐term preservation of datasets in the OpARA repository is free of charge
for TUD members.
# DATA SECURITY
The long term data have different levels of open accessibility:
* data with restricted access to the consortium partner creating this data set;
* data with restricted access to TRANS‐URBAN‐EU‐CHINA project partners;
* data that is to be published and shared openly with researchers only;
* data that is to be published and shared openly with everyone.
The decisions on data publication and the level of accessibility will be taken
per dataset and by the responsible consortium partner who created the dataset.
This will be documented in future versions of the data management plan. The
updated version of the DMP shall detail the information on data sharing,
including access procedures, embargo periods, and outlines of technical
mechanisms for dissemination of open accessible data sets.
The internal project website ( _http://transurbaneuchina.eu/login/_ ) is
hosted on a server at jweiland.net in Stuttgart, Germany and monitored by the
coordinator IOER in Dresden, Germany. Data is backed up by jweiland.net
automatically as well as by IOER once a week. This assures data recovery and
data protection.
# ETHICAL ASPECTS
All partners will ensure that project activities related to data, specifically
to personal data, will be in compliance with applicable EU and national law on
data protection, in particular the EU General Data Protection Regulation
(GDPR)2 and China’s Network Security Law.
The following principles will guide all data activities:
* to be as sensitive as possible about collecting, storing and using personal data;
* to keep personal data anonymised and not retractable;
* to use correct citations (‘credits’) to the data originator;
* based on legal conditions (right to use/edit/publish the data).
No sensitive personal data will be collected. Interviewees will be asked for
opinions about urban development and urban sustainability. In order to comply
with the regulations, the persons taking part in empirical parts of the
project will be asked for their informed consent before any data is collected.
Personal data will only be collected if absolutely necessary to fulfil the
objectives of the project, e.g. contact details of interested stakeholders for
dissemination and communication activities.
A principle will be that only the minimal amount of data necessary to reach
the research goal will be collected (data minimization). Data collection and
analysis will be done in an anonymous or at least pseudonymous way. This will
be especially the case in all kinds of (online) questionnaires, where
demographic data is only collected to the extent that makes re‐identification
of a single person very unlikely. Names and addresses of interviewees will be
separated from the information they provide, so that it will not be possible
to trace back the information to any identifiable individual. If data
collected for research purposes are not anonymised, explicit consent from the
data subject will be required.
Regarding data processing, the collected data will be immediately
pseudonymized and aggregated, and the original data will not be stored at all.
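A minimal sketch of the pseudonymisation and separation described above is shown below, with hypothetical field names: contact details are replaced by random pseudonyms, and the key linking pseudonyms to identities is written to a separate file that must remain under restricted access.

```python
import csv
import uuid

def pseudonymise(records, key_path):
    """Replace names with random pseudonyms and store the key separately.

    `records` is an iterable of dicts with hypothetical 'name' and
    'response' fields. The returned list can be processed further;
    the key file must stay under restricted access.
    """
    shared = []
    with open(key_path, "w", newline="", encoding="utf-8") as fh:
        key_writer = csv.writer(fh)
        key_writer.writerow(["pseudonym", "name"])
        for rec in records:
            pid = uuid.uuid4().hex[:8]
            key_writer.writerow([pid, rec["name"]])
            shared.append({"pseudonym": pid, "response": rec["response"]})
    return shared

interviews = [{"name": "Jane Doe", "response": "Urban renewal is ..."}]
print(pseudonymise(interviews, "pseudonym_key.csv"))
```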
In terms of data retention and destruction, data will be deleted or fully
anonymized as soon as the relevant scientific and innovative purpose as stated
in the DoA is fulfilled. For the collection, storage and analyses of personal
data only computers under the sole control of the project partners will be
used, e.g. no third party services offering online questionnaires etc. will be
used. Appropriate technical measures will be taken for secure data access and
user authentication.
All data collected will be stored and transmitted in an encrypted way together
with a sticky policy expressing for which purposes the data was collected and
who is allowed to access the data. Easily accessible and understandable
privacy policies will on the one hand bind the data controllers and on the
other hand allow the data subjects to understand and execute their rights at
any time. This privacy policy will be defined following the European Union’s
Data Protection Directive (Directive 95/46/EC on the protection of individuals
with regard to the processing of personal data and on the free movement of
such data).

2 https://ec.europa.eu/commission/priorities/justice‐and‐fundamental‐rights/data‐protection/2018‐reformeu‐data‐protection‐rules_en
Data which will be imported to/exported from EU will be listed in Attachment
1. Adequate authorisations, if required, will be provided by the relevant
consortium partner.
# ATTACHMENT 1
List of TRANS‐URBAN‐EU‐CHINA datasets
# INTRODUCTION
This deliverable describes the Data Management Plan (DMP) for the FutureTDM
project. The aim of the DMP is to provide an analysis of the main elements of
the data management policy that will be used throughout the project, with
regard to all the datasets that will be generated and used. The DMP follows
the guidelines suggested for Horizon 2020 calls 1 and will evolve during the
lifespan of the project. Moreover, an ethical approach will be adopted and
maintained throughout the fieldwork process, following the directives
described in deliverable 2.1.
# DATASET MANAGEMENT
This section describes the current status within the consortium regarding the
data that will be produced. It will be updated as the project progresses so
that it always reflects the current status. In particular, Table 1 outlines
the most relevant aspects that the FutureTDM project will take into account.
**Table 1: Relevant aspects in dataset management**
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
_Structured interviews_
</th> </tr>
<tr>
<td>
Data set description
</td>
<td>
General description
</td>
<td>
The data gathered concerns the answers given by TDM practitioners to questions
related to text and data mining practices and issues. In total, 30 interviews
will be recorded as mp3 files and transcribed into an Excel file. The data
will be made available in anonymized form to the project partners for use
within the FutureTDM project
<tr>
<td>
Provenance
</td>
<td>
Face to face interviews
</td> </tr>
<tr>
<td>
Nature
</td>
<td>
Mixed method research. The questionnaire contains questions on the
participants’ professional opinion and experiences with TDM. No additional or
sensitive personal data is being collected
</td> </tr>
<tr>
<td>
Scale
</td>
<td>
30 stakeholders
</td> </tr>
<tr>
<td>
Beneficiary
</td>
<td>
The data is collected for internal use of the FutureTDM project. However the
results will feed back into the FutureTDM project through visualizations and
reports, to derive insight and provide best practices which will be made
publicly available
</td> </tr>
<tr>
<td>
Standards and metadata
</td>
<td>
The interviews will be recorded (mp3), transcribed together with additional
notes in Excel and reported in the form of a document. In case the primary
interview data (the actual mp3 files) are to be persistently stored for future
use by other researchers, they will be appropriately described using a
compatible metadata schema (Dublin Core; see the sketch after this table)
</td> </tr>
<tr>
<td>
Data collection procedure
</td>
<td>
* Stakeholder collection/ KC participant list: names, contact details and organisations as part of the stakeholder identification will be collected and (after getting their consent) made available in the online stakeholder map
* Bibliographic references: references will be collected and stored (e.g. in bibTex format), but being public domain data, no ethical issue arises
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Structured Interviews (WPs 4 and 5): participants are chosen from the
stakeholder collection and, based on their voluntary informed consent, asked
to answer a set of pre‐determined open and closed question in person or
through videocall. With the participants consent (using the form provided in
the Annex), the interviews will be recorded and transcribed for internal use
only (adopting the standards above mentioned). The data will be anonymized for
further use for the purpose of FutureTDM
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Mainly public data will be used throughout the FutureTDM project. In case
industrial projects are used, the owner of the Intellectual Property Rights
will be approached and will have to approve that the data can be used for the
project. Furthermore, the data will be aggregated and anonymized to ensure
that personal and/or confidential data are not violated
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
Interviews (WPs 2 and 7): Video and photo images will be made available on the
project website under Creative Commons BY Attribution v. 4.0 License and may
be played at the larger multistakeholder workshops/symposium. The participants
will be asked to sign a “consent form”, where the participants confirm that
all portraits and images are made with the explicit authorization of the
participant. The participants also confirm that the FutureTDM project can use
the videos and images for the FutureTDM project
</td> </tr> </table>
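To illustrate the Dublin Core description mentioned in the table above, a minimal record for one interview recording could look as follows; all field values are hypothetical placeholders.

```python
# A minimal Dublin Core description for one interview recording,
# expressed as a plain mapping of DC element names to values.
# All values below are hypothetical placeholders.
interview_record = {
    "dc:title": "FutureTDM structured interview 01",
    "dc:creator": "FutureTDM consortium",
    "dc:date": "2016-06-01",
    "dc:type": "Sound",
    "dc:format": "audio/mpeg",  # mp3 recording
    "dc:language": "en",
    "dc:description": "Anonymized interview on TDM practices and issues",
    "dc:rights": "Internal use only; anonymized derivatives CC-BY",
}

for element, value in interview_record.items():
    print(f"{element}: {value}")
```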
# DATA SHARING
Table 2 outlines access procedures and rights in relation to the data gathered
throughout the FutureTDM project
**Table 2: Access procedure and access rights followed by FutureTDM**
<table>
<tr>
<th>
Access procedure
</th>
<th>
In accordance with Grant Agreement Article 25, data must be made available
upon request, or in the context of checks, reviews, audits or investigations.
If there are ongoing checks etc. the records must be retained until the end of
these procedures
</th> </tr>
<tr>
<td>
Access rights
</td>
<td>
Project partners:
* FutureTDM partners must give each other access — on a royalty‐free basis — to data needed to implement their own tasks under the action, where legally and practically possible
* FutureTDM partners must give each other access – under fair and reasonable conditions (Article 25.3) – to data needed for exploiting their own results, where legally and practically possible
* Unless otherwise agreed, requests for access may be made up to one year after the period set out in Article 3 (24 months)
Affiliated entities:
* Unless otherwise agreed, access must be given, under fair and reasonable conditions, and where legally and practically possible
* Requests for access may be made — unless agreed otherwise — up to one year after the period set out in Article 3 (24 months)
</td> </tr> </table>
Concerning the exploitation and the dissemination of results, each partner
must take measures to ensure the exploitation of its results, up to four years
after the period set out in Article 3 (24 months), and to guarantee the access
and visibility of the results (according to Article 29 of the Grant
Agreement). To this aim different dissemination channels are adopted, improved
and maintained also after the project lifecycle (for more detailed information
see D7.2, Communication and exploitation plan). They are shown in Table 3
along with a short description about their use and the policies adopted. The
content presented in the table will be coherently updated as the project
progresses.
**Table 3: Dissemination channels**
<table>
<tr>
<th>
**Dissemination channels**
</th>
<th>
**Usage**
</th>
<th>
**Policy**
</th> </tr>
<tr>
<td>
Project website
</td>
<td>
Reference point of project visibility until the Open Information Hub goes
online
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Newsletter
</td>
<td>
Provide regular updates on the project activities and redirect to the website,
where more information on the project is available
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Fact sheets
</td>
<td>
Support the work of the project and encourage feedback e.g. at events
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Knowledge Cafés (KC)
</td>
<td>
Informal opportunity for stakeholders to find out about TDM, the FutureTDM
project and its goals, and to provide the project with feedback
</td>
<td>
Chatham House Rule 2
</td> </tr>
<tr>
<td>
KC flyers
</td>
<td>
Explain knowledge cafés and asking for input
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Social media (e.g.
Twitter)
</td>
<td>
Publicise the project several times a day and support the diffusion of TDM
related news
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Publications
</td>
<td>
Project related articles in TDM field
</td>
<td>
Open Access
</td> </tr>
<tr>
<td>
Blog
</td>
<td>
A place where stakeholders can find latest updates on the project, useful info
and exchange comments on TDM related topics
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Templates
</td>
<td>
Ensure brand continuity
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Project reports
</td>
<td>
Describe the results of the work packages
</td>
<td>
CC‐BY
</td> </tr>
<tr>
<td>
Video
</td>
<td>
Gain an insight into FutureTDM and involve stakeholder to improve TDM uptake
in EU
</td>
<td>
Consent form + CC‐BY
</td> </tr>
<tr>
<td>
Survey (e.g. structured interviews)
</td>
<td>
Collect expert feedback and generate best practice case studies
</td>
<td>
Consent form +
CC‐BY for best practice
</td> </tr> </table>
# ARCHIVING AND PRESERVATION
Table 4 outlines the main management principles behind the archiving and
preservation of the data collected through the project.
**Table 4: Storage and preservation in FutureTDM**
<table>
<tr>
<th>
Inform and keep track
</th>
<th>
* Data gathered in WP2 (interviews and workshops) and WP4 (best practices) as well as their metadata, will be compiled and deposited in OpenAIREʼs Zenodo repository to ensure discoverability, accessibility, and intelligibility
* In case of changes to this regard, each partner must immediately inform the coordinator (who in turn must inform the Funding Agency and other partner countries)
* Records and documentation will be kept up to date in content and format so they remain easily accessible and usable
</th> </tr>
<tr>
<td>
Retention
</td>
<td>
A period of four years (after the end of the project)
</td> </tr>
<tr>
<td>
Type of documents retained
</td>
<td>
Project partners retain the original documents. Digital and digitalised
documents are considered originals if they are authorised by the applicable
national law
</td> </tr> </table>
# ETHICS
The project partners are to comply with the ethical principles as set out in
the Grant Agreement (Article 34), which states that all activities must be
carried out in compliance with:
* The ethical principles (including the highest standards of research integrity e.g. as set out in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and Commission recommendation (EC) No 251/2005 of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers (OJ L 75, 22.03.2005, p. 67), the European Code of Conduct for Research Integrity of ALLEA (All
European Academies) and ESF (European Science Foundation) of March 2011
* Applicable international, EU and national law.
Furthermore, activities raising ethical issues must comply with the “ethics
requirements” set out in
Annex 1 of the Grant Agreement.
**Confidentiality**
Unless otherwise stated, the FutureTDM partners must treat any data, documents
or other material as confidential (“confidential information”) during the
implementation of the project and for four years after the period set out in
Article 3 (24 months). Further details on confidentiality can be found in
Article 36 of the Grant Agreement.
**_a) What types of data will the project generate/collect?_ **
The project generates different forms of data, which can be separated into the
following categories:
# Generic research data (process description and communication data)
* **Sources and resources** : collection of external references, studies, papers and openaccess material from other research projects, individuals, as well as related contexts
* **Articles and commentary** on related topics, research and overview of the larger field of research, such as open source cinema, open hardware
# Specific research data (outcomes)
* **Code** (software) in various formats
* **Technical and functional plans, drawings & 3D-Models ** (CAD)
# Documentation
* **Texts and articles:** documentation of processes and products of the main research
* **Photography** of processes, dissemination events, status
* **Video Communication** : updates for the (scientific) community, other developers as well as the interested public (title: apertus° Team Talks).
# Demonstrations
* Demonstration footage from the camera prototypes
* Tutorials
# Research Papers and articles
* Academic papers and articles about research outcomes and other aspects (beyond purely technical benefits) of Open Hardware
<table>
<tr>
<th>
**Title**
</th>
<th>
**Types of data**
</th>
<th>
**Dissemination**
</th> </tr>
<tr>
<td>
Generic research data
</td>
<td>
various
</td>
<td>
Blog, Social Media, Wiki, Mailing List
</td> </tr>
<tr>
<td>
Specific research data
</td>
<td>
source code, technical plans and technical drawings
</td>
<td>
Phabricator, Wiki, Github
</td> </tr>
<tr>
<td>
Documentation
</td>
<td>
text, photo, video
</td>
<td>
Blog, Wiki, Social Media
</td> </tr>
<tr>
<td>
Demonstrations
</td>
<td>
video footage
</td>
<td>
Repository
</td> </tr>
<tr>
<td>
Research papers and articles
</td>
<td>
text and pdf
</td>
<td>
Blog, academic repositories
</td> </tr> </table>
_b) Which standards will be used?_
The project focuses mainly on open standards, since this is an integral part
of the nature of the project. All outcomes, including documentation and
research data, will be released under open licences. All AXIOM project
research data and results are published under free licences (GNU GPLv3, CERN
Open Hardware Licence 1.2, GNU Free Documentation License 1.3).
AXIOM will open everything from the beginning. All of the hardware (including
optical and mechanical parts) and software produced in the course of the
project (including all knowledge/ know-how generated during our research and
development stage) will be made public and available on the Internet. It will
be open for anyone to access without registration ('gold' open access).
Output data is separated into Open Formats and Public Formats: _Open Formats_
are used for internal (archival) storage, while _Public Formats_ are used for
dissemination and circulation. Public Formats are often not open (and mostly
not _Free Formats_ ), but are necessary to publish research data on popular
channels on the Internet (see the conversion sketch after the table below).
<table>
<tr>
<th>
**Data**
</th>
<th>
**Format/** **Container**
</th>
<th>
**Description**
</th>
<th>
**Approx.**
**size after finished EU project**
</th>
<th>
**Open Formats**
</th>
<th>
**Public formats**
</th> </tr>
<tr>
<td>
Photo documentation
</td>
<td>
CR, ARW, CR2,
DNG, NEF
</td>
<td>
Photographic images and documentation throughout the whole project.
</td>
<td>
400 GB
</td>
<td>
DNG
</td>
<td>
JPG,
PNG
</td> </tr>
<tr>
<td>
Video Footage
</td>
<td>
MOV, AVI, MXF, DNG
</td>
<td>
demo footage from AXIOM camera, documentation, guides, introduction videos,
communication, events
</td>
<td>
24 TB
</td>
<td>
DNG,
Cinema
DNG
</td>
<td>
MOV,
AVI,
MXF
</td> </tr>
<tr>
<td>
Technical Drawings
</td>
<td>
DWG, IPT, IAM, STP, STL, etc.
</td>
<td>
mechanical components and assemblies as 3D CAD models created in various
software tools
</td>
<td>
5GB
</td>
<td>
STP,
STL,
AMF
</td>
<td>
</td> </tr>
<tr>
<td>
Finished Videos
</td>
<td>
MP4, MOV
</td>
<td>
finished edited documentation and demonstration videos for publication/
distribution
</td>
<td>
5 TB
</td>
<td>
DNG,
Cinema
DNG
</td>
<td>
h.264/
h.265
</td> </tr>
<tr>
<td>
Illustrations,
Graphic
Designs
</td>
<td>
PSD, AI, EPS, PDF,
</td>
<td>
drawings, illustrations for website and publications
</td>
<td>
10GB
</td>
<td>
SVG
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
Animation Source Files
</td>
<td>
AEP
</td>
<td>
illustration animations and motion graphics
</td>
<td>
1TB
</td>
<td>
\-
</td>
<td>
\-
</td> </tr>
<tr>
<td>
Texts and
Source
Code
</td>
<td>
Various
</td>
<td>
documentation, articles, publications, software source code
</td>
<td>
1GB
</td>
<td>
ODF, ASCII
</td>
<td>
PDF
</td> </tr> </table>
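As a sketch of the Open Format/Public Format split for video listed in the table above, the snippet below transcodes archival material into an h.264 distribution copy by calling the ffmpeg command-line tool. The file names are hypothetical, ffmpeg must be installed, and the encoder settings are illustrative assumptions.

```python
import subprocess

def make_public_copy(archival_path, public_path):
    """Transcode an archival video into an h.264 MP4 for dissemination.

    The archival master is never modified; only a compressed public
    copy is produced. Requires the ffmpeg CLI on the PATH.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", archival_path,  # archival master (e.g. a MOV derived from CinemaDNG)
            "-c:v", "libx264",    # h.264 for broad playback support
            "-crf", "20",         # quality-based rate control (illustrative value)
            "-c:a", "aac",        # widely supported audio codec
            public_path,
        ],
        check=True,
    )

make_public_copy("teamtalk_master.mov", "teamtalk_public.mp4")
```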
# Open Standards
The project AXIOM focuses on the use of _Open Standards_, defined as technical
(file) formats which are extensively documented and standardized. As the
general term is used very differently across disciplines and contexts, the
project AXIOM follows these definitions of _Open Standards_ :
* Definition of the Free Software Foundation Europe
* The definition by OpenStand (joint IEEE, ISOC, W3C, IETF and IAB definition)
While a lot of these formats are under different licenses, the main aspects
describing _Open Standards_ are:
* freely available specification
* format based on open specifications and/or standards
* freely available source-code
* documented to have no known intellectual property encumbrances or license requirements
Data in the project will be stored in at least one archival format (Open
Format) as well as in at least one _Public Format_ (accessible and popular,
compressed format) for dissemination.
_c) How will the data management be implemented?_
The data management in the project AXIOM consists of a 6-tier system:
<table>
<tr>
<th>
#
</th>
<th>
**Tier**
</th>
<th>
**Implementation**
</th>
<th>
**Application**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Pre-Ingest
</td>
<td>
gathered data from various sources
</td>
<td>
data gathering, data validation and selection
</td> </tr>
<tr>
<td>
2
</td>
<td>
Intranet Raid
</td>
<td>
ingest to backup: metadata and
classification of data, immediate archival of high-priority data
</td>
<td>
Classification and description of data, adding of metadata, duplication on
local storage.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Archival
</td>
<td>
Long-term archival on LTO-6 tapes (incremental as well as immediate)
</td>
<td>
Separation of data according to priority: immediate archival of highly
important data (data which is not yet redundantly stored).
</td> </tr>
<tr>
<td>
4
</td>
<td>
Storage Cluster
</td>
<td>
offsite server/storage cluster: preparation of
data for internal project use
</td>
<td>
Transfer of relevant data to offsite storage cluster for availability to the
whole
consortium. Collection of external sources at the storage cluster. Backup of
storage cluster through regular scheduled backups.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Cloud collaboration
</td>
<td>
Project management and collaboration systems
</td>
<td>
Internal data stored in Google Drive, consortium documents and EU-relevant
information in Phabricator, information for general public and larger team in
apertus wiki.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Dissemination
</td>
<td>
publication of data and outcomes
</td>
<td>
external repositories (github, video hosting, social media, external photo
storage, apertus blog and wiki). See details in section b.
</td> </tr> </table>
# Used hardware for the implementation of the DMP
* Intranet RAID (local RAID-6 with 24TB storage)
* Archival computer with LTO-6 tape drive (long-term archival)
* Storage Cluster with fileserver (research data sharing throughout the team, public availability of research data)
# Targeting Data Degradation (data decay/ data rot)
As “data degradation” is a key topic in the current discourse on long-term
archival, we decided to use up-to-date filesystems to secure our data. To
guard against file-level corruption, we work with the ZFS and Btrfs file
systems, which implement integrity-checking and self-repair algorithms to
prevent data rot.
_d) How will this data be exploited and/or shared/made accessible for
verification and re- use?_
According to the 6-Tier system, data will be made accessible according to the
scope of the tier:
<table>
<tr>
<th>
1
</th>
<th>
Pre-Ingest
</th>
<th>
availability only to responsible team members
</th> </tr>
<tr>
<td>
2
</td>
<td>
Intranet Raid
</td>
<td>
local availability at University of Applied Arts
</td> </tr>
<tr>
<td>
3
</td>
<td>
Archival
</td>
<td>
local availability at University of Applied Arts
</td> </tr>
<tr>
<td>
4
</td>
<td>
Storage Cluster
</td>
<td>
Availability to the whole consortium through fileserver (ftp/http). Selected
parts will be made accessible to a larger audience via public repository
software.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Cloud collaboration
</td>
<td>
Internal data stored in Google Drive, consortium documents and EU-relevant
information in Phabricator, information for the general public and larger team
in the apertus wiki.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Dissemination
</td>
<td>
general availability according to the external context.
</td> </tr> </table>
**Backup strategy for external sources**
<table>
<tr>
<th>
**External source**
</th>
<th>
**Max. size**
</th>
<th>
**Backup to**
</th> </tr>
<tr>
<td>
Github
</td>
<td>
30GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Additional
redundancy through the nature of git-system; Additional daily pull on Storage
Cluster.
</td> </tr>
<tr>
<td>
Developer FTP
</td>
<td>
10GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Mirror on Storage
Cluster.
</td> </tr>
<tr>
<td>
Google Drive
</td>
<td>
20GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Mirror on Storage
Cluster.
</td> </tr>
<tr>
<td>
Apertus° Wiki
</td>
<td>
15GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Mirror on Storage
Cluster.
</td> </tr>
<tr>
<td>
Apertus° Blog
</td>
<td>
2GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Mirror on Storage
Cluster.
</td> </tr>
<tr>
<td>
Phabricator
</td>
<td>
2GB
</td>
<td>
Incremental pull-backup from archival computer to LTO tape. Mirror on Storage
Cluster.
</td> </tr> </table>
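A minimal sketch of the incremental pull-backup for the Github source is shown below, assuming the git command-line tool is available; the repository URL and directory paths are hypothetical examples.

```python
import os
import subprocess

def pull_backup(repo_url, mirror_dir):
    """Maintain a local mirror of an external git repository.

    The first run clones a bare mirror; subsequent runs fetch only the
    increments. The mirror directory can then be written to LTO tape
    as part of the regular backup schedule.
    """
    if os.path.isdir(mirror_dir):
        subprocess.run(["git", "--git-dir", mirror_dir, "remote", "update"],
                       check=True)
    else:
        subprocess.run(["git", "clone", "--mirror", repo_url, mirror_dir],
                       check=True)

pull_backup("https://github.com/apertus-open-source-cinema/example.git",
            "/backup/mirrors/example.git")
```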
_Public Repository for research data_
A public and self-hosted repository for all the research data will be made
available during the project timeframe and is guaranteed to be hosted after
the Horizon 2020 project by the Artistic Bokeh Initiative. Details of the
software used and implementation specifics will be outlined in Version 2 of
the DMP.
_e) How will this data be curated and preserved?_
The described policy reflects the current state of consortium agreements
regarding data management and is consistent with those referring to
exploitation and protection of results. Data is curated by team members at the
University of Applied Arts and will be preserved using the current state of
technology in data backup and data availability. Mission-critical data is
stored with at least dual redundancy at all times; together with the long-term
archival procedures, data is available with triple redundancy.
<table>
<tr>
<th>
**#**
</th>
<th>
**Tier**
</th>
<th>
**Curating**
</th>
<th>
**Preservation strategy**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Pre-ingest
</td>
<td>
Immediate duplication on Intranet Raid. Selection by WP8 responsible team
members.
</td>
<td>
Dual redundancy on Intranet Raid and source media.
</td> </tr>
<tr>
<td>
2
</td>
<td>
Intranet Raid
</td>
<td>
Definition of initial folder structure and selection by WP8 responsible team
members. Selection of missioncritical data for immediate archival by WP8
responsible members.
</td>
<td>
Additional redundancy via direct archival (to LTO) for missioncritical data.
Weekly incremental backup to LTO tape.
</td> </tr>
<tr>
<td>
3
</td>
<td>
Archival
</td>
<td>
Archival and long-term storage: LTO-6 Tape drive with redundant storage.
</td>
<td>
Weekly incremental backups, redundant archival of all research data to LTO-6
(storage of physical tapes on two different sites). MDisc for additional
failover.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Storage Cluster
</td>
<td>
Selection by WP8 responsible team members.
</td>
<td>
RAID-6 with ZFS file-system (triple redundancy). Backups to LTO tape drive.
Additional backup to cloud.
</td> </tr>
<tr>
<td>
5
</td>
<td>
Cloud collaboration
</td>
<td>
Selection by whole consortium, responsibility at the project lead.
</td>
<td>
Pull-backups to Storage cluster and to incremental backup to LTO tape drive.
</td> </tr>
<tr>
<td>
6
</td>
<td>
Dissemination
</td>
<td>
External, outsourced storage
</td>
<td>
Pull-backups to Storage cluster and to incremental backup to LTO tape drive.
Storage of
dissemination results at Google Drive (Cloud Collaboration) as well as
publishing to Phaidra system (University long-term database).
</td> </tr> </table>
<table>
<tr>
<th>
LTO-6 Drive
</th>
<th>
Tape Backup Drive
</th> </tr>
<tr>
<td>
M-DISC Drive
</td>
<td>
M-Disc (Long term storage BluRay discs)
</td> </tr>
<tr>
<td>
Cloud Storage
</td>
<td>
SpiderOak Cloud Services
</td> </tr>
<tr>
<td>
Local RAID
</td>
<td>
Synology RAID-System incl. 4x4TB Hard Disks
</td> </tr>
<tr>
<td>
Virtual Server
</td>
<td>
Virtualized Server (Linux) for dissemination and storage
</td> </tr> </table>
_f) Dissemination of data and project outcomes_
<table>
<tr>
<th>
Apertus° Wiki
</th>
<th>
Internal Hardware/Software documentation
</th>
<th>
_http://wiki.apertus.org_
</th> </tr>
<tr>
<td>
Apertus° Website
</td>
<td>
Primary dissemination outlet via project website
</td>
<td>
_http://apertus.org_
</td> </tr>
<tr>
<td>
External video hosting service
</td>
<td>
Finished edited documentation and demonstration videos for publication/
distribution.
</td>
<td>
_http://youtube.com_ _http://vimeo.com_
</td> </tr>
<tr>
<td>
External photo hosting service
</td>
<td>
Edited photos hosted for maximum availability
</td>
<td>
_http://flickr.com_ _https://_
_commons.wikimedia.org_
</td> </tr>
<tr>
<td>
Social Media
</td>
<td>
Generic information and texts for wider audience.
</td>
<td>
_http://facebook.com_ _http://twitter.com_ _http://plus.google.com_
</td> </tr>
<tr>
<td>
Academia.edu
</td>
<td>
Research papers and articles.
</td>
<td>
_http://academia.edu_
</td> </tr>
<tr>
<td>
Phaidra
</td>
<td>
Selected documentation, texts and research content for long-term archival in
Phaidra (Permanent Hosting, Archiving and Indexing of
Digital Resources and Assets)
</td>
<td>
_https://_
_phaidra.bibliothek.uniak.ac.at/_
</td> </tr>
<tr>
<td>
Cern Open
Hardware
Repository
</td>
<td>
A place on the web for electronics designers at experimental physics
facilities to collaborate on open hardware designs, much in the philosophy of
the free software movement.
</td>
<td>
_http://www.ohwr.org_
</td> </tr>
<tr>
<td>
Github
</td>
<td>
Source code, source files for collaborative development
</td>
<td>
_http://github.com_
</td> </tr> </table>
_g) Responsibilities and resources_
Data will be collected (Pre-ingest, Intranet Raid) by the lead partner, and
will be selected for archival and storage by the team responsible for WP8
(P1). Data-backup and archival tasks will be undertaken in the facilities of
the lead partner (University of Applied Arts) with support of an external 3rd
party to supervise and consult regarding the technical requirements.
_resources used_
# **Literature list / sources**

e-Infrastructures Austria. (Version 2.0, May 2015). Data Management Plan: Eine
Anleitung zur Erstellung von Data Management Plänen [A guide to creating data
management plans]. Retrieved from
_https://fedora.phaidra.univie.ac.at/fedora/get/o:367863/bdef:Content/get_

European Commission. (2016). Guidelines on Data Management in Horizon 2020
(Version 2.1). Retrieved from
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_

Open Stand. (2015). The Modern Paradigm for Standards. Retrieved from
_https://open-stand.org/about-us/principles/_

The GNU General Public License v3.0 - GNU Project - Free Software Foundation.
(2014, November 8). Retrieved February 19, 2016, from
_http://www.gnu.org/licenses/gpl-3.0.en.html_

GNU Free Documentation License v1.3 - GNU Project - Free Software Foundation.
(2014, April 12). Retrieved February 19, 2016, from
_http://www.gnu.org/licenses/fdl-1.3.en.html_

Johnson, D. (2010, May 14). Is PDF an Open Standard? Retrieved February 14,
2016, from _http://talkingpdf.org/is-pdf-an-open-standard/_

CERN Open Hardware Licence v1.2. (2013, September 6). Retrieved February 19,
2016, from _http://www.ohwr.org/attachments/2388/cern_ohl_v_1_2.txt_
# Initial Data Management Plan
This is the first version of the Data Management Plan following the guidelines
for the Open Research Data Pilot.
# Data Summary
## Purpose of the data collection
Data is collected by two end-user company partners (DSM and Biosyntia) in
order to provide real-life data to validate the underlying data
analysis/interpretation methods developed in this project, and to provide test
cases for the full DeCaF platform, including data visualization.
## Relation to the objectives of the project
End-user validation and feedback is crucial for the purpose of developing a
broadly usable cell factory and community design platform. End-user engagement
is best achieved by applying the platform and underlying methods to data
generated by the end-user companies in their own research and development
projects.
## Types and formats of data collected
The primary large-scale experimental data types collected are:
Genomics: Data collection is achieved by short-read sequencing on Illumina
sequencer platforms, creating raw reads in fastq format. Reads are aligned to
a reference genome in order to allow identification of genetic variants in
production strains.
Transcriptomics: Data collection is achieved through RNA-seq on Illumina
sequencer platforms, creating raw reads in fastq format. These raw reads are
processed into derived tabular data formats that consist of unique transcript
identifiers and the corresponding absolute RNA expression level in a
particular condition.
Proteomics: Data collection is achieved through standard mass-spectrometry
proteomics platforms (the vendor depends on the partner), each of which
produces its own proprietary file type. These files will be converted to the
standard mzML format, which allows deposition to public databases. The
processed data is in a tabular data format that consists of unique protein and
peptide identifiers and the corresponding relative protein expression level in
a particular condition.
Metabolomics: Data collection is achieved through mass spectrometry (LC/MS)
and HPLC platforms. The raw data is not of primary interest as the file
formats are proprietary to each instrument vendor. Instrument vendor provided
software will be used to obtain absolute metabolite concentration data in a
tabular format consisting of unique metabolite identifiers and the
corresponding absolute metabolite concentration in a particular condition.
Fluxomics: Data collection is achieved through mass spectrometry (GC/MS) and
HPLC platforms. As above, the raw data is not of interest, but from the raw
data a series of derived data types will be generated, including isotopomer
distributions and final metabolic flux estimates. All the derived data can be
represented in tabular formats consisting of unique identifiers (e.g. the
metabolic reaction specified by the reactant and product metabolite IDs and
stoichiometry) and the corresponding measured/estimated data.
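All of the processed omics data types above share the same tabular shape: a unique identifier paired with a measured value per condition. A minimal sketch of reading such a table, assuming hypothetical tab-separated columns named `identifier` and `value`:

```python
import csv

def load_measurements(path):
    """Load a processed omics table (unique identifier -> measured value)."""
    data = {}
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            data[row["identifier"]] = float(row["value"])
    return data

# e.g. expression = load_measurements("transcriptomics_strain01_glucose_r1.tsv")
```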
In addition to experimental data, this project will include creating,
improving and extending genome-scale models of cellular processes. These
models will be stored in the SBML (Systems Biology Markup Language) format,
which will allow importing the models to public domain model repositories
(BioModels at EBI). We will follow MIRIAM (Minimum Information Required in the
Annotation of Models) standards to ensure that identifiers used in the models
are consistent with identifiers used with data (e.g. gene/protein/metabolite
identifiers). Model versioning during the project will be done by maintaining
the models on GitHub. Upon publication, the final versions of the models will
be deposited in BioModels.
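For illustration only, a short sketch of how an SBML model could be inspected for MIRIAM-style annotations, assuming the python-libsbml package is available; this is not project code, just an example of the interoperability the format enables:

```python
import libsbml  # assumes the python-libsbml package is installed

def summarize_model(path):
    """Read an SBML file and report how many species carry MIRIAM CV terms."""
    doc = libsbml.readSBML(path)
    if doc.getNumErrors() > 0:
        raise ValueError(f"{doc.getNumErrors()} error(s) while reading {path}")
    model = doc.getModel()
    annotated = sum(
        1 for i in range(model.getNumSpecies())
        if model.getSpecies(i).getNumCVTerms() > 0
    )
    print(f"{model.getId()}: {model.getNumSpecies()} species, "
          f"{model.getNumReactions()} reactions, "
          f"{annotated} species with MIRIAM annotations")
```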
## Existing data being re-used
Existing reference genomics, transcriptomics, proteomics, metabolomics and
fluxomics data may be reused for wild type (i.e. non-engineered) strains in
order to validate the quality of the data produced within the project.
However, new data for wild-type strains will also be produced during the
project in order to ensure that a consistent dataset is overall generated for
all the strains studied.
Existing public domain metagenomics data will be used for mining of novel
enzyme functions and for functionally annotating metagenomic datasets by
partners EMBL and biobyte. These data are also available in public metagenome
repositories, but with minimal annotation. There are no restrictions for the
use of the public domain metagenomics data and the derived data generated from
this data can also be freely shared. The DD-DeCaF project will not generate
any additional metagenomics data.
Existing public domain genome-scale models will be used as basis for
improvement or extension within the project. The majority of these models will
be available free of restrictions, but a few of the models will come with
restrictions for commercial use. These restrictions will be propagated to the
derived models as required by the licenses attached to the original models.
## Origin of the data
The data will originate from two end user partners who will provide the raw
data in standardized formats (for transcriptomics and proteomics) and the
processed data in tabular format (for all omics data types).
The models will originate from four of the academic partners (DTU, EMBL,
Chalmers and EPFL). DTU partner will be responsible for ensuring that the
models are in formats that allow reuse and integration of experimental data
with the models.
## Expected size of the data
The raw data for each genomics data set (one strain) is approximately 0.25 Gb.
The raw data for each transcriptomics data set (one strain, condition,
replicate) is approximately 0.5 Gb.
The raw data for each proteomics data set (one strain, condition, replicate)
is approximately 2 Gb.
Each of the processed tabular data sets (all omics data types) is of the order
of 0.5 Mb.
The total number of individual data sets is approximately 20
strains/conditions x 3 replicates = 60 samples total for all other omics data
types except genomics and 20 samples for genomics (no replicates needed). This
gives a total experimental data volume of 160 Gb primarily consisting of
proteomics and transcriptomics raw data.
The genome-scale models will each take few tens of Mb of space and don't
represent a major data management challenge.
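A quick back-of-the-envelope check of the quoted total, using only the per-sample approximations given above:

```python
# Sizes in GB, per the approximations above.
samples = 20 * 3                 # 20 strains/conditions x 3 replicates
genomics = 20 * 0.25             # one dataset per strain, no replicates
transcriptomics = samples * 0.5
proteomics = samples * 2.0
raw_total = genomics + transcriptomics + proteomics
print(raw_total)                 # 155.0 -> consistent with the ~160 GB quoted
```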
## Data utility
The experimental data will be useful within the project for testing and
validating methods and for testing features of the overall data analysis and
visualization platform. Outside of the project, the data will be useful as
reference data for cell factory design in yeast and E. coli and as part of
compendia of omics datasets for these organisms.
The genome-scale models generated within the project will be useful for cell
factory design as well as a number of other applications related to metabolic
physiology both within and outside of the project.
# FAIR data
## Making data findable, including provisions for metadata
There are two types of metadata that need to accompany the primary omics data
collected within the project: 1) metadata describing the general experimental
setup (e.g. strain genotype, cultivation conditions, sampling time points) and
2) metadata describing the process of going from a particular microbial
culture to the raw data (e.g. sampling, sample processing, relevant instrument
settings). The metadata within this project will be collected in the ISA-Tab
format, which allows metadata submission together with raw/processed data
submission to relevant databases (e.g. those maintained by the European
Bioinformatics Institute, EBI). We will also utilize the ISA-Tab format as a
metadata exchange format with the data repositories built within the project.
Unique dataset identifiers will be generated in conjunction with submission of
the genomics, proteomics and transcriptomics data to the public domain
databases (ENA, PRIDE and ArrayExpress at EBI, respectively). There are
currently no comparable metabolomics or fluxomics databases suitable for the
type of data generated within this project; for metabolomics and fluxomics,
the data will be submitted to the general-purpose research material sharing
platform Zenodo. These platforms create the necessary dataset IDs and DOIs for
all the materials. In addition to public domain databases, we will also build
a database within this project that will contain the specific omics data and
metadata generated within this project, in order to demonstrate the use of the
data within the platform and allow easy use of the data by academic and
industrial partners working on data analysis and visualization methods.
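ISA-Tab specifies investigation, study and assay files with defined column headings; as a rough illustration of the study-sample part only (the sample values below are invented, and this is not a complete ISA-Tab writer), such a table could be produced like this:

```python
import csv

# Illustrative ISA-Tab-style study sample table; the rows are invented.
samples = [
    {"Sample Name": "ecoli_glc_r1",
     "Characteristics[strain]": "E. coli K-12",
     "Factor Value[condition]": "glucose batch",
     "Factor Value[replicate]": "1"},
]

with open("s_study_samples.txt", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=samples[0].keys(), delimiter="\t")
    writer.writeheader()
    writer.writerows(samples)
```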
## Making data openly accessible
All data generated within the project will be made openly accessible after an
embargo period of maximum of two years from the generation of the dataset
during which the data will only be accessible within the consortium to the
partners that require data access for method or tool development. During the
embargo period the data will be made available to the partners through the
DeCaF platform developed in this project. After the embargo period, the data
will be made available publicly both through the platform developed during the
project and through public data repositories as described above. The embargo
period will end upon public disclosure of the data in the form of a preprint
or conference/journal article if this happens in less than 2 years from the
generation of the data. The project will develop the software tools and APIs
that allow accessing the data within the DeCaF platform. Public data
repositories already provide such tools.
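The embargo rule above amounts to "two years from generation, or earlier upon public disclosure"; a small sketch of that logic (the function and the example dates are illustrative):

```python
from datetime import date, timedelta

def embargo_end(generated, disclosed=None):
    """Embargo ends two years after data generation, or earlier upon
    public disclosure (preprint or conference/journal article)."""
    cap = generated + timedelta(days=2 * 365)
    return min(cap, disclosed) if disclosed else cap

# embargo_end(date(2017, 3, 1), disclosed=date(2018, 6, 15)) -> 2018-06-15
```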
The genome-scale models developed during the project will be subject to the
usual embargo period, during which they will be available only to consortium
partners until publication (either preprint or conference/journal article).
After publication, models will be publicly available both through the DeCaF
platform and through the BioModels database. Software tools developed by
academic partners during the DD-DeCaF project will be made available open
source through GitHub. SME partners in the project may release some of their
code open source, but also reserve the right to maintain proprietary code
bases where this is deemed necessary for commercial reasons.
## Making data interoperable
The DD-DeCaF project will create genomics, transcriptomics, proteomics,
metabolomics and fluxomics data for two organisms: Escherichia coli and
Saccharomyces cerevisiae. We will use standard gene, transcript and protein
identifiers that are specified by the reference databases for these organisms
(EcoCyc and SGD, respectively). For metabolites we will use the universal
unique chemical identifier (InChI), which allows conversion to other types of
commonly used identifiers such as SMILES, ChEBI, PubChem and CAS. Within the
DeCaF platform we will also provide genome-scale models that utilize these
same standard (MIRIAM-compliant) identifiers so that the data can be directly
mapped to the models and used with the methods developed within the DD-DeCaF
project. The models will be made available in the standard SBML format,
facilitating use of the models with different types of modeling and
visualization software. We will use standard raw and processed data formats
where possible (genomics, transcriptomics and proteomics), as outlined earlier
in the document.
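As a toy example of the identifier mapping this enables (a real pipeline would pull cross-references from ChEBI/PubChem rather than hard-coding them, and the truncated InChI key below is a placeholder):

```python
# Hard-coded, truncated placeholder entries for illustration only.
METABOLITE_XREFS = {
    "InChI=1S/C6H12O6/...": {"chebi": "CHEBI:17634", "pubchem": "5793"},
}

def to_chebi(inchi):
    """Map a metabolite InChI string to a ChEBI identifier, if known."""
    entry = METABOLITE_XREFS.get(inchi)
    return entry["chebi"] if entry else None
```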
## Increase data re-use
All omics data and associated metadata will after the embargo period
(described above, maximum of two years from data generation or upon
publication) be freely usable without restrictions. The embargo period will be
to allow seeking patents or to prepare publications. No data will be generated
for commercially sensitive strains or processes within this project. Data will
be quality controlled during initial data analysis within the project, and
poor quality data (with quality standards dependent on the type of data) will
be discarded and new data will be generated.
The modified and extended genome-scale models will be made publicly available
upon scientific publication at the latest. Models will be made available with
the same licensing restrictions that apply to the underlying models used as a
starting point. All models will be free to use and modify for non-commercial
use, but some may require commercial use licenses. Within the DeCaF platform
these licensing restrictions will be made explicit. Models will be quality
controlled by verifying their predictive performance against standard publicly
available benchmark datasets for each organism, where these are available.
# Allocation of resources
## Estimation of the costs for making data FAIR
Since this project is focused on building a data analysis and processing
platform with the goal of designing new cell factories and communities, the
costs of making data and models FAIR are already included in the proposed work
and no additional costs will be involved in this process. The project includes
components where data deposition and sharing tools are developed.
## Responsibilities for data management
Data management of data generated within the project is primarily handled by
the coordinating partner DTU. DTU will develop the data management, analysis
and visualization platform that is used within the project. Partners EMBL and
biobyte will handle development of the metagenomic data mining platform, but
this platform only uses existing public metagenomics data.
## Costs and potential value of long term preservation
Long term preservation of the primary data and metadata is ensured by
deposition of the data to public domain repositories (ENA, ArrayExpress,
PRIDE, BioModels and Zenodo). The DeCaF platform developed within the project
will also be maintained long term using internal resources available within
the Novo Nordisk Foundation Center for Biosustainability at DTU.
# Data security
During the embargo period the data and models will be stored in the data and
model repositories developed within the DD-DeCaF project as part of the DeCaF
platform. These repositories will include as a feature the possibility of
restricting data and model sharing to specific partners or making them
entirely public. Access control is handled by the REMS system developed by
CSC. The DeCaF platform will include backup features for all the data as well
as long-term archiving (10 years). The same applies to the genome-scale
models. Deposition to public domain databases will only be done after the
embargo period is complete. These databases handle backups and archiving
internally and are expected to provide very long-term data storage.
# Ethical aspects
The data generated in this project is for microbial cell factories and
involves no human subjects.
# Other
DTU has drafted a research data management policy that is aligned with the EC
Horizon 2020 data management policy requirements. DD-DeCaF will follow the DTU
policy once it is made official within the following months.
0714_EDCMET_825762.md
# EXECUTIVE SUMMARY
Metabolic effects of Endocrine Disrupting Chemicals: novel testing METhods and
adverse outcome pathways (EDCMET) is a research project supported by the
European Commission’s Horizon 2020 research framework (825762). The project
responds to call H2020-SC1-2018-Single-Stage-RTD, programme H2020-EU.3.1.1 –
“Understanding health, wellbeing and disease”, topic SC1-BHC-27-2018 “New
testing and screening methods to identify endocrine disrupting chemicals” and
is active from 1.1.2019 to 31.12.2023.
EDCMET brings together experts in various research fields, including systems
toxicologists, experimental biologists with a thorough understanding of the
molecular mechanisms of metabolic disease and comprehensive _in silico_ , _in
vitro_ and _in vivo_ methodological skills and ultimately, epidemiologists
linking environmental exposure to adverse metabolic outcomes. The project
focuses on developing novel test methods and models to assess the metabolic
effects of EDCs. Combined _in silico_ , _in vitro_ and _in vivo_ methods are
developed with an emphasis on liver and adipose tissue and endocrine pathways
related to their metabolism. In addition, epidemiological and field monitoring
data is used to gain information regarding the exposure to chemicals and EDC-
related metabolic effects. The interdisciplinary approach and complementary
expertise of the participants will identify novel mechanisms of action, and in
collaboration with the European Commission (EC) Joint Research Centre (JRC)
providing an interface between the programme and European regulatory agencies,
novel validated test methods for regulatory purposes will be generated.
Efficient and secure data management, including collection, integration and
sharing of data, is the very essence of large-scale multidisciplinary research
projects such as EDCMET that collects and integrates data from different
sources including various high-throughput technologies. This data management
plan (DMP) is produced as part of the Open Research Data Pilot (ORDP). It sets
the framework for the handling of data produced in the EDCMET project, from
acquisition through curation to dissemination, and shall thereby assure the
implementation of best practical procedures for the management and
accessibility of EDCMET data during and beyond the lifetime of the project.
This deliverable (D6.1, month 3) provides the first issue of the DMP and the
initial approach to the handling of research data during and after the end of
the project, data collection, generation and processing, data standards and
methodologies, open access as well as data curation and preservation. As a
living document, the
DMP will further evolve throughout the lifespan of the project.
# PURPOSE OF THE DMP AND RESPONSIBILITIES OF PARTNERS
The purpose of this DMP is to provide the main elements of the data management
policy used by the EDCMET consortium regarding project data. The DMP covers
the complete project and research data cycle. It describes the types of data
that will be generated, collected, processed and re-used during the project,
the standards that will be used, how the data will be preserved, and which
parts of the data will be shared for verification or reuse. It also reflects
the current state of consortium agreements on data management and must be
consistent with exploitation and IPR requirements.
EDCMET includes four scientific work packages (WP1-4) and four work packages
related to management, dissemination and ethics (WP5-8). WP6 (Data management,
UEF) has overall responsibility for data management and for ensuring that
shared data are easily available, that proprietary data are secured and that
regular backups are performed. In the frames of project data
management, each EDCMET partner must respect the policies and responsibilities
set out in this DMP and follow best practices for data generation, storage and
sharing. Further, each partner shall follow their national and institutional
procedures for data management, when applicable and/or required.
Datasets must be created, managed and stored appropriately in line with
applicable legislation and the DMP. Quality control of the data, according to
the EDCMET quality assurance protocols and GLP principles, is the
responsibility of each partner and ultimately of the WP leaders and project
management team (UEF). Further, each WP leader will ensure dataset integrity
and compatibility for its use during the validation of tools by different
partners. Registration of datasets to the common data repository and
accompanying metadata, according to unified naming conventions and ontologies,
is the responsibility of the partner that generates the data. If datasets are
updated, the partner that possesses the original and updated data has the
responsibility to manage the different versions and to make sure that the
latest version is available within EDCMET and EURION networks as well as for
sharing through open access repositories, depending on the data and phase of
research. All researchers and partners involved in data gathering, generating
and processing shall become familiar with the EDCMET data management policies
and guidelines of open access issues. Data is owned by the partner that
generates them. A partner may transfer ownership of the data as agreed in the
Consortium Agreement (CA). Each partner must continuously identify and protect
valuable intellectual property rights and identify opportunities to exploit
the data. Also, prior notice of any publications or public presentations must
be given according to the CA.
The document will evolve during the project and will be updated and ultimately
completed accordingly, as research data is collected or when significant
changes in consortium policies or composition arise. At minimum, it will be
updated in _**M6** _ according to H2020 guidelines and as part of the mid-term
and final project reports.
# DATA SUMMARIES
As the first version of the DMP is due in month 3, precise and more detailed
information related to the data collected and generated by EDCMET, applied
data standards, harmonization and accessibility of research data as well as
confidentiality levels of each dataset will be given in later versions of the
DMP.
## Purpose of data collection/generation and data utility
The main objective of the EDCMET project is to develop validated _in silico_ ,
_in vitro_ and _in vivo_ methods assessing the metabolic effects of EDs in
line with the OECD work on endocrine disruption. Further, the aim is to follow
the traditional adverse outcome pathway (AOP) paradigm to identify molecular
initiating events and predict the emergent adverse biological phenotypes. A
prerequisite to achieving these aims is to coordinate the data collected in
individual EDCMET WPs by different research organizations and groups and to
make them openly available. To this end, the EDCMET data management aims to
integrate datasets to enable and simplify the access and utilization of data
for different stakeholders within various scientific and operational
communities.
Data will be made available for the members of the project as well as to the
broader research community through open access repositories or other channels
as described in the DMP. As the EDCMET members are from eight different
countries in Europe with extensive scientific and other stakeholder networks,
each partner is expected to increase awareness and support co-operation within
the field. EDCMET WP5 as well as the EURION cluster will continuously search
and evaluate potentially relevant stakeholders to maximally enhance the impact
of the project findings for both scientific and regulatory purposes. Tailored
information will be provided to different stakeholders, based on the developed
dissemination and communication strategy and tools, while respecting all
ethico-legal frameworks. The data collected and generated in the project are
likely to be useful at least to the following general categories of
stakeholders (living list):
* EDCMET consortium and EURION cluster members
* Regulatory agencies and authorities
* Policy makers and funders
* Academic researchers and other scientific experts
* Industry
* Members of the public
## Origin of the data
EDCMET uses a variety of methodologies for novel and improved approaches to
assess the metabolic effects of endocrine disruptors (EDs), ranging from _in
silico_ and omics to _in vitro_ and _in vivo_ , and ultimately,
epidemiological data to associate exposure levels to ED-related metabolic
effects. The data produced during the project are based on the Description of
Action (DoA) and the results/deliverables from individual WPs. The data
generated by EDCMET strongly depends on the individual tasks, tools and
research methods used within WPs. In WPs 1-3 the data will be mainly collected
and produced by various _in silico_ , _in vitro_ and _in vivo_ methodologies
during the project. Literature and public databases will be used to generate a
list of omics datasets of relevant experiments. EDCMET research involving
human data (WP4) includes secondary use of information obtained from existing
cohorts within EU.
In a scientific context, research data refers to information, facts or numbers
collected and generated within the frames of the project (raw data), which
will be further analysed and processed (processed data). The focus of this DMP
is on research data that is available in digital form. However, short
descriptions for management and dissemination document formats and storage are
also included in the DMP.
## Data types and formats
A list of main categories and types of data generated, collected and re-used
during the project are listed in _**Table 1** _ according to work packages
(WP1-8). This list will be adapted and extended with additional datasets as
well as more detailed description of data types and file formats in further
versions of the DMP, based on project developments.
**Table 1. Main categories and types of data in EDCMET**
<table>
<tr>
<th>
**WP**
</th>
<th>
**Main types of data**
</th> </tr>
<tr>
<td>
**1**
</td>
<td>
Several datasets derived from _in silico_ and omics analyses.
</td> </tr>
<tr>
<td>
**2**
</td>
<td>
Data from _in vitro_ laboratory measurements.
</td> </tr>
<tr>
<td>
**3**
</td>
<td>
Data from _in vivo_ animal experiments.
</td> </tr>
<tr>
<td>
**4**
</td>
<td>
Data from epidemiological studies (cohorts).
</td> </tr>
<tr>
<td>
**5**
</td>
<td>
Data related to dissemination activities, such as publications, presentations,
posters, seminars, brochures, newsletters, templates and logos
</td> </tr>
<tr>
<td>
**6-8**
</td>
<td>
Management files, project rules and follow-up data including Grant and
Consortium Agreements, Gantt chart and action plans, administrative and
financial data
</td> </tr> </table>
The multidisciplinary approach of the project will generate large amounts of
new data from different experimental approaches. Such data include for example
graphics, microscopic graphs and imaging data, selected numerical data from
laboratory measurements, analyses and high-throughput technologies,
statistical data _etc_ . Data from the scientific WPs (WP1-4) will vary in
terms of raw and processed data formats, depending on the instrumentation and
software used to measure and analyse/process the data. Detailed identification
and descriptions of used instrumentation, software and resulting data formats
as well as confidentiality levels of datasets, will be included in the
accompanying metadata ( _**Section 2.4** _ ) in the later versions of the DMP,
when appropriate. To ascertain the data quality, performance of experimental
instruments and systems will be checked regularly with standardized controls
and procedures, as per the rules and instructions of the relevant institutes
and/or EDCMET quality assurance procedures. Final data will be saved in an
open file format ( _e.g._ .csv, Office documents, .pdf/a) whenever possible to
enable further data sharing. Possible data conversions shall be managed by the
researcher who has produced the data, to ensure data integrity. Where
applicable, data formats may be migrated when new technologies become
available and are proven robust enough to ensure digital continuity and
continued availability of data or when the appropriate software is freely
available or included.
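As a sketch of such a conversion to an open format (assuming the export is in a pandas-readable format such as Excel; the file names are hypothetical):

```python
import pandas as pd

def to_open_format(source_xlsx, target_csv):
    """Convert a proprietary tabular export to CSV for sharing, leaving
    the original file untouched to preserve data integrity."""
    frame = pd.read_excel(source_xlsx)
    frame.to_csv(target_csv, index=False)

# to_open_format("instrument_export.xlsx", "instrument_export.csv")
```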
Generally, all data falling under management category (WPs 6-8) as well as
majority of data produced by dissemination activities (WP5) will be available
in .pdf/a format. Further data types from WP5 may include, for example, graphic
images, such as the project logo, or presentations and posters. EDCMET has
agreed that each member uses templates from their own institutes for
presentations and posters, while including relevant project and EU logos and
descriptions. Reports by each partner will be produced using a unified
template and format generated for the project. Scientific publications will
follow the format required by the conferences or journals in which said
publications will appear.
## Data standards and metadata
At this stage of the project ( _**M3** _ ), a detailed data harmonization
process is being constructed. This will be achieved through collecting
information on the data produced within EDCMET by all partners as well as
coordination of the activities of WP1 to draft standard formats, naming
conventions, keywords, ontologies and metadata standards or alternative
approaches. Existing metadata standards can be used, if applicable. Further,
ongoing standardisation efforts conducted _e.g._ in eTOX and ECETOX include
data dictionary standard loaders for MeSH terms, UniProt proteins, Gene
Ontologies, the AOP-Wiki _etc._ Considering the strongly interdisciplinary nature
of EDCMET as well as various data types produced within the project by
significantly different methodologies and _e.g._ frequently changing software
versions, a unified approach may include a set of common elements,
complemented with more detailed dataset or methodology -specific elements.
Consistency between similar data sets will be sought.
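Once naming conventions are fixed, conformance can be checked automatically. The pattern below is purely hypothetical, since the EDCMET convention is still being drafted at M3; it only illustrates the idea:

```python
import re

# Hypothetical convention: EDCMET_WP<n>_<label>_<YYYYMMDD>_v<major.minor>.<ext>
PATTERN = re.compile(r"^EDCMET_WP[1-8]_[A-Za-z0-9-]+_\d{8}_v\d+\.\d+\.[a-z0-9]+$")

def follows_convention(filename):
    """Return True if a dataset filename matches the naming convention."""
    return bool(PATTERN.match(filename))

# follows_convention("EDCMET_WP2_liver-assay_20190315_v1.0.csv") -> True
```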
## Expected size of the data
At this stage of the project ( _**M3** _ ), the precise total volume cannot be
determined, but it is reasonable to assume that it will reach several tens of
terabytes (TB, midterm) due to extensive use of _in silico_ and omics
methodologies.
# DATA STORAGE AND ARCHIVING
## Internal (midterm) data storage and sharing
The overall data produced and/or collected by each EDCMET partner must be
carefully stored. In a preliminary stage of data collection or generation,
local/institutional secured data repositories can be used. During the project,
all non-sensitive data and protocols shall be carefully stored by respective
authors or organizations in the common (midterm) data repositories, dedicated
to the EDCMET project and managed by the coordinating institute (UEF), at
secure CSC servers ( _**Table 2** _ ). Templates for storing the data,
including clear versioning, as well as detailed structures and properties of
the repositories are under construction. They will be disseminated to all
partners and included in the later versions of the DMP. This will facilitate
document evaluation and review and ensure standardization as well as required
compatibility with data and repositories from the EURION cluster. Further,
special agreements will be established to ensure data and protocol transfers,
as required. In general, data shall be preferably shared within EDCMET via
indication of its placement in the repository.
Access to EDCMET midterm data repositories will be granted per request and by
the Scientific Manager. The contents of the midterm repositories can only be
accessed by authenticated members of the EDCMET project and by default, access
to all datasets will be granted to all partners, unless a limited access is
requested and justified by the owner of the dataset. Cases where access to
specific background is subject to legal restrictions or limits are specified
in the CA and will be handled case-by-case. All project members have equal
rights to add, remove, and operate on project data stored in the IDA
repository but care must be taken, that no changes to data owner by other
partners are to be made. Each repository user must verify that the data and
documents uploaded to the repository follow the standards set for EDCMET data
( _**Section 2.4** _ ). The repository users are required to follow
announcements and provided schedules on CSC server maintenance periods. The
users will be promptly informed on any unplanned interruption of services.
**Table 2. Midterm data storage and sharing**
<table>
<tr>
<th>
CSC-IDA
ida.fairdata.fi
</th>
<th>
Continuous service for safe data storage organized by the Finnish Ministry of
Education and Culture. The midterm storage area is for new data for collecting
and organizing data during the project. This area is not visible to other
services or users than the EDCMET/EURION participants with granted access by
the Scientific Manager. The frozen area (read-only) of the EDCMET database is
meant for final data, which are given unique identifiers and metadata stored
in metadata repositories and linked to other Fairdata services. The current
data repository can host 10 TB of data but can be extended depending on the
project needs. IDA is not optimized for data under heavy usage, for which the
Object Storage is a better option.
</th> </tr>
<tr>
<td>
CSC-Object Storage
(cPouta)
</td>
<td>
The Object Storage functionality is a cloud storage service provided on the
public cPouta IaaS cloud computing platform. Object storage can be used
for large datasets or intermediary (unstructured) results that a lot of nodes
need to access, to share data and results within the project or with other
projects over https. The Object Storage is especially suitable for pushing big
data temporarily from different groups within EDCMET for later processing and
computing.
</td> </tr> </table>
Public and base protection level data, such as management and dissemination
documents will be stored in the EDCMET collaboration platform in Office365
environment at UEF and shared, as applicable, on the EDCMET website. The
EDCMET midterm repositories are not suitable for sensitive personal data or
biometric identifiers (for these, see _**Section 5.2** _ ). Animal data will
be acquired in accordance with national and EU regulations for animal
experiments. Use and storage of these data do not have similar ethical
restrictions as human data and can be stored in the EDCMET repositories.
## Public data sharing and reuse
When a dataset is ready to be published in an open repository or other public
space, _e.g._ after publication or after an assigned embargo period, the final
version of the dataset, following the established data and metadata standards
set up for EDCMET ( _**Section 2.4** _ ), shall be uploaded to the Frozen area
of the EDCMET IDA repository ( _**Section 3.1** _ ). The frozen datasets and
files are assigned PIDs and their metadata is stored in a centralized
repository. Checksums for frozen files are generated automatically. Further
Fairdata services, such as the metadata tool (QVAIN), research finder service
(ETSIN) and digital preservation services (FairdataPAS) will be used, when
available and applicable, to enable re-use and citations. Depending on the
dataset and ownership, the data can be further shared in other open access
platforms, such as OpenAIRE Zenodo. Machine-readable electronic copies of
published or final peer-reviewed manuscripts as well as suitable datasets or
metadata will also be deposited in institutional or national electronic
repositories, such as UEF eRepository ( _https://erepo.uef.fi_ ) . Attention
will also be given to ownership and intellectual property by establishing
rigorous rules and procedures for utilization and dissemination.
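The repository generates checksums for frozen files automatically; partners can compute comparable fixity information locally before freezing a dataset. A minimal sketch follows (the choice of SHA-256 here is illustrative, not a statement of which algorithm IDA uses, and the example path is hypothetical):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Compute a SHA-256 checksum for verifying a file's integrity."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# print(sha256sum("dataset/liver_assay_v1.0.csv"))
```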
The tools and models produced in EDCMET will further be made available to the
broad scientific community via high-impact peer-reviewed publications,
presentations at scientific meetings as well as the existing networks of the
project partners. Significant scientific data will be published in journals
which are wholly open access or offer various open access modes ( _i.e._
gold or green routes or analogous modes). Open access publication will enhance
the transfer of the results to other scientists and regulatory authorities.
EDCMET will also publish any public results through the project website (
_www.uef.fi/edcmet_ ) and diffuse or publish appropriate data via EDCMET
social media channels, such as Twitter (@edcmet_eu). The project partners are
encouraged to use social media and researcher networks (LinkedIn,
ResearchGate) to disseminate public data and results. The regulatory
implementation of the results will be achieved by consulting with national and
EU-level authorities and later, _e.g._ , by incorporating the test systems or
characterized AOP pathways into the AOP-Wiki or Effectopedia (OECD). The EDCMET
consortium will also arrange conferences and workshops that are important
platforms to disseminate results and form scientific networks around endocrine
disrupters and health.
## Archiving and long-term preservation
At the time of this first version of the DMP ( _**M3** _ ), data archiving
plans are preliminary and will be finalized when more accurate information on
the end volume of the collected and produced data allows definitions of the
long-term preservation procedures. By the end of the project, all final
datasets at CSC IDA repository will be frozen (unless frozen previously)
ensuring sustainable archiving of the final research data. The EDCMET data
repository will remain operational for 5 years after the project ends.
# FAIR DATA
## Making data findable
The EDCMET data management practices will follow the FAIR principles dictating
how the data will be Findable, Accessible, Interoperable and Re-usable.
Information on data documentation ( _**Section 2.4** _ ), such as used
ontologies, naming conventions, keywords and produced metadata will ensure
that the collected and generated data will be easily discoverable,
identifiable and locatable. Open data repositories providing PIDs and open
access publications will be used ( _**Sections 3.2 and 3.3** _ ).
## Making data openly accessible
EDCMET will generate, collect and reuse a variety of datasets. During the
project, the data will be deposited in EDCMET midterm storages ( _**Section
3.1** _ ) and will be primarily shared among the project members. All
published research data will be open and available for shared use, if
agreements concerning ownership, rights to use, IPR and non-disclosure as well
as legislation and ethical principles allow it. The principles for sharing and
opening of research data in EDCMET is described in _**Section 3.2.** _
It is expected that all data related to public deliverables, social media,
courses or stakeholder events and open access publications will be made openly
available by default. As with GWAS data, novel epigenetic data will be
released, after an embargo (to be determined), into public databases. In
certain cases,
such as with human epidemiological data, only metadata will be shared.
However, for example coded data can be pooled and analysed and may be
presented at scientific congresses and published in medical journals. Any
further information which must be kept confidential per request of any EDCMET
partner shall be clearly marked and justification as well as possible embargo
period for the dataset shall be provided. An embargo period may be requested
_e.g._ due to a planned publication, for allowing a PhD student to finalize
their thesis or to support IP protection or exploitation. In such cases, a
timeline for the release of the data shall be provided. Prior notice to
project members of any planned publication or opening of a dataset shall be
given according to the CA to avoid IP conflicts within EDCMET as well as
violation of rules of good scientific practice and protection of personal
data.
## Making data interoperable
To allow data exchange and re-use between researchers, institutions,
organisations, countries _etc._ , EDCMET will assure the use of interoperable
formats ( _**Section 1.4** _ ). Standard vocabularies for all data types will
be used to allow interdisciplinary interoperability. A common vocabulary for
harmonising the descriptions of data and metadata is under definition and
will be available in the later versions of the DMP. In the case where less
common ontologies or vocabularies cannot be avoided or are specific to the
project or dataset itself, EDCMET will aim to provide mappings to more
commonly used ontologies. Certain datasets (omics, _in silico_ ) may be
accessible only using specific software, but this will be avoided as far as
possible. In such cases, the software in question as well as options for
access will be described or included, if possible. All such cases will also be
explained and justified in later versions of the DMP.
## Increasing data re-use
Open data availability will be established as soon as possible while
respecting partner publication targets and requested embargo periods. As
sharing and promoting the re-use of data from EDCMET is believed to also
contribute to the dissemination of the developed methods and tools, and to
have a significant impact both in scientific as well as regulatory context,
EDCMET will always promote data re-use in a timely manner. As no research data
has been produced to date, re-usability by third parties and the usability
period are not fully developed in the current version (v1.0) of the DMP. A
general
overview of potential stakeholders is presented in _**Section 2.1** _ and will
be updated in the later versions of the DMP as the list of relevant
stakeholders (D5.2) and the EDCMET dissemination and communication plan (D5.3)
have been established.
Licences, such as Creative Commons CC BY 4.0 for open data and CC0 for
metadata, or other relevant licences, will be used. The owner of the dataset
will determine the type of licence used when data is added to EDCMET
repositories. A general archiving and long-term preservation plan for EDCMET
is presented in _**Section 3.3** _ . The quality control of each dataset is
ultimately a responsibility of all EDCMET partners, as described in _**Section
1** _ .
## Allocation of resources
The coordinating partner, UEF, is responsible for the general data management
of EDCMET, as well as for the set-up and maintenance of the common data
repositories ( _**Section 1** _ ). UEF has allocated a partial salary of a
senior post-doctoral researcher (Scientific Manager) for data management
activities. The general responsibilities of EDCMET partners have been
described in _**Section 2** _ . EDCMET will use the free-of-charge CSC
services for data storage and (partial) processing. WP1 partners have
allocated additional funds for data processing and calculations for the _in
silico_ and omics approaches. Scientific publications, where the analyses of
the research data will be presented, will be published primarily in open
access journals and the costs related to open access will be claimed as part
of the H2020 grant as allocated in the budget. No data storage and
preservation nor data sharing arrangements at the EURION cluster level have
been made to date. When established and applicable, these will be included in
the later versions of the DMP. The EURION cluster collaboration on data
management will further benefit the data management processes of EDCMET.
# DATA SECURITY AND ETHICAL ASPECTS
## Data security
According to the general data protection regulation, each EDCMET partner is
responsible for data security of the midterm data they gather within their
organization and guarantees to meet the European data protection standards (
_e.g._ the GDPR, EU 2016/679) within their organization.
All local repositories are to be secured using the latest security protocols.
Each partner or organization is further expected to adopt a backup strategy
allowing for full recovery of the data in case of an event in which the
responsible person or location of the data storage is somehow compromised. By
means of example, the coordinating organization University of Eastern Finland
(UEF) complies with national and international information security laws and
regulations and implements the following approach: UEF runs its own
infrastructure with enterprise level disk storage and file servers located in
physically secure data centers with appropriate fire suppression equipment.
All data storages and networks are located behind UEF institutional firewall
to protect against external attacks. The IT staff of UEF is responsible for
data security and protection, while implementing the following security
measures:
* In-house servers controlled exclusively by UEF IT staff
* Password policy
* Regular updates of system and application software
* Timely installation of security patches
* Maintaining a level of preparedness for disturbances and exceptional situations based on _e.g._ risk surveys and audits
EDCMET will use CSC’s services for secure storage, backup and preservation
as well as transfer mechanisms. CSC information security management systems,
including cloud services, have an ISO/IEC27001 certificate. The data stored in
the EDCMET repositories are protected against unauthorized access by means of
Haka authentication or CSC accounts. Access to the repositories is restricted
to selected EDCMET members only and per request from the Scientific Manager
after agreement to terms and conditions set out in the CA and GA. Access to
EDCMET data by EURION members will be evaluated case-by-case basis to expedite
useful cluster collaboration while protecting the rights of EDCMET partners.
Further access restrictions can be put in place for confidential data per
request of the data owner. A detailed access policy as well as a tested backup
strategy allowing full recovery of data in case of a catastrophic event are
under construction and will be included in a next version of the DMP ( _**M6**
_ ).
## Ethical aspects
In EDCMET, ethical standards and guidelines of H2020 will be rigorously
applied, regardless of the country where the research is carried out. All
EDCMET partners are required to comply with the ethics and research integrity
principles described in the GA, as well as with national and international
legislation
related to data collection, generation and re-use. Activities raising ethical
issues must comply with the Ethics requirements set out in the GA. WP8 will
follow-up the ethical issues applicable to EDCMET project implementation. As a
coordinating partner, UEF will ensure that all experimental work carried out
in EDCMET will comply with relevant guidelines and legislation and that all
data collection has been approved by local Ethical Committees. The Ethical
Committee approvals and personal training licenses will be stored on the
EDCMET data repository with access restricted to the management team or local
secure discs at UEF.
National Animal Experiment Boards of the corresponding countries will approve
protocols involving the use of rodents that are applied in the project.
EDCMET will adhere to the EU GDPR (EU2016/679) on personal data protection as
well as to all relevant legislation and directives pertinent to the management
of human data. All data from previous cohorts, further processed during the
project, are treated as confidential and all data have been made pseudonymous
at the site of collection of the material, with only the cohort owner having
access to the key code list. The data is currently stored at secured
institutional servers by the respective cohort owners within the EU. The data
can be further hosted on a separate database (to be established as required)
meeting appropriate security standards and only coded data with variables at
the lowest possible resolution for analysis and with appropriate ethics
approvals may be shared with EDCMET partners solely for the purposes of this
project. A review of ethical issues related to the collected or generated data
will be carried out at month 6 of the project (D6.2) and all relevant
documentation will be made available to the European Commission by month 12 of
the project as agreed in WP8.
0715_EnvMetaGen_668981.md
# INTRODUCTION
This document represents the **Data Management Plan** for the **EnvMetaGen**
project. The plan details what data the project will generate, how it will be
exploited and made accessible for verification and re-use, and how it will be
curated and preserved. The underlying principles informing this plan are that
the data should be managed so that it is findable, accessible, interoperable
and reusable (FAIR) – _Guidelines on FAIR Data Management in Horizon 2020_ .
# TYPES OF DATA
The EnvMetaGen project will generate research products on Environmental
Metagenomics and its application to ecological problems. There are five main
data types that this project will produce:
1. Nucleic acid sequence data;
2. Taxonomic information and associated metadata;
3. Molecular biological and field collection methods;
4. Data analysis methods;
5. Ecological information based on the results of experimental work.
Each data type will be archived and disseminated in the ways that are
appropriate to its specific qualities.
# DATA ARCHIVING
## Nucleic acid sequence data
Nucleic acid sequence data can be archived in many ways depending upon the
degree of analysis and annotation of features that has been undertaken on it.
Raw reads will be archived either in the NCBI Sequence Read Archive; in
similar appropriate archives of un-analysed sequence data; or in a generic
data archive such as DRYAD (http://datadryad.org/). Sequences that have been
carefully inspected and associated with a clearly defined taxon verified by a
recognized taxonomic expert will be deposited in GenBank (
_https://www.ncbi.nlm.nih.gov/genbank/_ ); the Barcoding of Life Database (
_http://www.barcodeoflife.org/)_ ; or DRYAD. These approaches are the
international standard for the field ensuring data is openly accessible. This
is also the main way that nucleic acid sequence data is re-used by other
researchers.
## Taxonomic data
Taxonomic information relating to DNA sequences that are generated in this
project will generally be archived in custom databases with links to the
sequences and organisms associated with them. Where appropriate, sub-sets of
this data will also be deposited in BOLD or GenBank. The custom databases will
be hosted by the EnvMetaGen project website ( _http://inbio-envmetagen.pt/_ )
. In addition, where appropriate the data on species occurrences will be added
to the database of the Global Biodiversity Information Facility (
_http://www.gbif.org_ ) . Associated metadata such as collection locations
for type specimens or photographs of them will be archived according to the
requirements of the data repositories.
## Molecular biological or field methods
Molecular biological or field collection procedures for biological materials
will be archived either through peer-reviewed publication in Open Access
journals, or by making a transcript or video of the methods and making this
available on the EnvMetaGen website. This follows the H2020 principle of Open
Research Data (ORD) publication.
## Data analysis methods
Data analysis methods such as scripts for Bioinformatic procedures or R
scripts or similar for statistical analysis and graphical display of data will
be archived either in supplementary material for published papers in Open
Access journals, or by placing the script or a description of Bioinformatic
procedures on the EnvMetaGen website.
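As a toy example of the kind of small, self-contained analysis script that would be archived this way (the column names `taxon` and `reads` are invented for illustration):

```python
import csv
from collections import Counter

def relative_abundance(path):
    """Summarise per-taxon read counts from a CSV table into
    relative abundances (columns 'taxon' and 'reads' assumed)."""
    counts = Counter()
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            counts[row["taxon"]] += int(row["reads"])
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}
```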
## Ecological results dissemination
Ecological information that is generated through the various projects that may
be funded and run under the capacities of the EnvMetaGen project will be
archived in peer reviewed articles for Open Access journals. Where Open Access
is not an option because of the cost of it, summaries will be archived in the
EnvMetaGen website. This follows the H2020 principle of Open Research Data
(ORD) publication.
# DATA DISSEMINATION
EnvMetaGen will follow the H2020 principles of making its data findable,
accessible, interoperable and reusable. The EnvMetaGen website will provide
the primary passive mechanism for data dissemination and achieving FAIR
outcomes. This site will have sections for each project that is conducted
under the overall EnvMetaGen umbrella project. It will also have sections for
protocols and data archiving for anything that cannot be handled by DRYAD,
BOLD or GenBank.
Active data dissemination to specialists will be done primarily through
published papers and verbal and poster presentations at scientific
conferences. Data dissemination to non-specialists will be handled by the
“Knowledge Transfer and Dissemination Officer” (KTDO) employed by the
EnvMetaGen project. This will involve the KTDO liaising with schools,
organisations such as museums and potential commercial partners. The KTDO will
develop specific strategies appropriate to each of these groups to raise
community awareness of the work done by EnvMetaGen and to explore potential
commercial collaborations.
# CONCLUDING REMARKS
EnvMetaGen will follow standard “best practice” for the field of environmental
metagenomics. It will do this in a way that conforms to the H2020 programme’s
principles of FAIR data dissemination and that also follows an ORD approach.
This is a rapidly changing field and the exact data types that will be
generated are not yet determined as this is a capacity building project
without specific scientific objectives. The EnvMetaGen management team will
follow changes in the field and adjust the data management plan as necessary.
This might be important if data archiving practices change within the wider
environmental metagenomics community.
Data security is ensured by redundancy in that we will archive all data in
established public repositories as well as in our own EnvMetaGen site and
databases. Ethical considerations for our data are all covered in the
EnvMetaGen Deliverables 8.1 - 8.9 that were already uploaded on the EC project
portal.
0716_PLUSH_674285.md
# User Account Management
## Roles
### Access Owner/Administrator
The person in the Platform.sh organisation who bears final responsibility for
determining and implementing security access control, authorisation, and for
granting security access to critical systems. The _Access
Owner/Administrators_ table evidences Platform.sh personnel and their
corresponding backups.
## User Access Request & Approval
The foundation of access management is the process defined for approving and
granting access and privileges to systems. There are numerous Platform.sh and
third party systems that require user access approval and provisioning. These
systems include, but are not limited to, applications, operating systems,
databases, internal network infrastructure, and mobile devices.
For new hires and active Platform.sh team members, certain systems are pre-
approved based upon the team they have been assigned to work in. If a role is
not pre-approved, the Access Owner/Administrator must approve access additions
to the system via the ticketing system prior to granting access to the system
to help ensure that only authorized personnel have access to the system. The
access ticket must include which systems access, privileges, and roles are
requested to be provisioned.
For new Platform.sh members, user credentials must be distributed only after
an employee has started working at Platform.sh. If credentials are
forgotten, Platform.sh team members must request new credentials via automated
or manual mechanisms, and receive those credentials in a secure manner.
All user access request tickets must be retained for at least 1 year after the
access addition ticket is marked complete (for Platform.sh team members) or
after the business relationship has ended (for contractors, employee
contractors, interns, partners, or vendors).
Third party vendor remote access must only be activated based upon business
need, and revoked immediately after use is complete.
### Changes to User Access
If Platform.sh Personnel change job functions, their access to systems must be
reviewed and adjusted to reflect the level of access required for their new
job function. The user, departing manager, or hiring manager must submit an
access change request in the ticketing system.
### Revoking User Access
When Platform.sh Personnel are terminated from Platform.sh, it is critical
that access is revoked within 24 hours to protect Platform.sh property,
systems, and data. As terminations are often involuntary, both the timeliness
of termination procedures and confidentially of termination information are
very important.
The Hiring Manager creates a termination ticket and notifies the Access
Owner/Administrators of employee terminations. The Access Owner/Administrator
must revoke the terminated employee’s access to the system within 24 hours of
receiving the termination notification. If the employee was aware of passwords
to system, service and/or default accounts, those passwords must be changed.
Platform.sh assets must be collected.
### User Access Review
A review of user and administrator access listings must be performed by the
Access Owner/Administrator and Information Security Team to help ensure that
only authorized personnel have access to the systems and data. The access
review includes both whether the user or administrator is authorized to access
the system and whether they are authorized to have their current level of
access privileges.
### Assignment of Passwords
A user account must be assigned an initial password. The initial password
assigned must be changed upon first use of the user account. Systems must
enforce this change upon first use when systematically possible. If systematic
enforcement for the password change is not possible, the user must change
their password upon first login if the system permits the user to change their
password. When it is not systematically possible to enforce password change
upon initial login, employees must work with the access administrator to
change their password.
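A sketch of how a system could enforce the first-use change (the field and function names are hypothetical, not part of Platform.sh systems):

```python
from dataclasses import dataclass

@dataclass
class UserAccount:
    username: str
    must_change_password: bool = True  # set when the initial password is assigned

def on_login(account):
    """Block access until the initial password has been changed."""
    if account.must_change_password:
        # redirect the user into the password-change flow instead
        raise PermissionError("Password change required before first use")
```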
The user account and password must be sent to the user separately via email or
chat. This is also true for administrative passwords, or when a user account
and password needs to be re-issued.
### User Account and Password Use
A user account, in conjunction with a password, enables a user to authenticate
to a system. At Platform.sh, there are user accounts for employees, employee
contractors, contractors, interns, vendors, and partners.
All users are responsible for the security of data, accounts, and systems
under their control. Passwords must be kept secure. Users shall not share
account or password information with anyone, including other employees,
family, or friends. Similarly, users are forbidden from performing any
activity with user accounts belonging to other users.
Additionally:
* If a user suspects that somebody else may know his or her password, the password must be changed immediately.
* Users must not ask for customer passwords.
* Users must not store fixed passwords in any computer files, such as login scripts or computer programs, unless the passwords have been encrypted.
### Password Requirements
Password requirements help ensure that unauthorized personnel do not gain
access to Platform.sh systems and data. Password requirements must be
enforced systematically when possible, and manually otherwise. For systems
where multi-factor authentication is in place, the manual password
requirements are superseded by this stronger authentication method.
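Since the concrete requirement values are defined in the Platform.sh _Password Policy_ rather than here, the following sketch uses illustrative thresholds only, to show what systematic enforcement of such requirements can look like:

```python
import string

def meets_requirements(password: str) -> bool:
    """Illustrative checks only; the binding values live in the
    Platform.sh Password Policy, not in this sketch."""
    return (
        len(password) >= 12                                 # assumed minimum length
        and any(c.islower() for c in password)              # lowercase letter
        and any(c.isupper() for c in password)              # uppercase letter
        and any(c.isdigit() for c in password)              # digit
        and any(c in string.punctuation for c in password)  # special character
    )
```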
### Password Storage
Platform.sh personnel must store their individual user account and passwords
in an encrypted password management vault. Passwords should never be stored
electronically in a document. Passwords must not be written down.
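As a sketch of the encrypted-at-rest principle (the policy does not prescribe a specific vault product; this example uses the third-party `cryptography` library's Fernet construction):

```python
from cryptography.fernet import Fernet

# In practice the key itself must be protected (e.g. by the vault's
# master passphrase); generating it inline is for illustration only.
key = Fernet.generate_key()
vault = Fernet(key)

entry = vault.encrypt(b"service-account: s3cr3t")  # ciphertext stored at rest
print(vault.decrypt(entry))                        # plaintext recovered on unlock
```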
### Default Passwords
All default passwords must be changed upon system install. Knowledge of the
account password needs to be restricted to authorized personnel based upon
their job responsibilities.
_**Hardcoded Passwords** _
Passwords must not be hardcoded in an unencrypted manner into an application.
_**Masking** _
Passwords must be in a masked format when entered into a Platform.sh system.
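The hardcoded-password and masking rules above can be illustrated with standard-library facilities: secrets are read from the environment rather than embedded in source code, and interactive entry is not echoed. The `DB_PASSWORD` variable name is hypothetical:

```python
import os
from getpass import getpass

# Read a secret from the environment instead of embedding it in code;
# DB_PASSWORD is a hypothetical variable name.
db_password = os.environ.get("DB_PASSWORD")
if db_password is None:
    # getpass prompts without echoing the input, so the password
    # never appears on screen.
    db_password = getpass("Database password: ")
```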
### Administrator, Internal, and External Password Policy
The Platform.sh _Password Policy_ defines the requirements for Platform.sh
systems and third party systems where Platform.sh administers the password
policy.
### Customer Responsibilities
* Customers are responsible for requesting multi-factor authentication for additional security to access the Platform.sh web user interfaces.
* Customers are responsible for ensuring the confidentiality of any user accounts and passwords assigned to them for use with Platform.sh’s system.
* Customers are responsible for ensuring their application meets industry best practices for strong authentication requirements.
#### Segregation of Duties
Access privileges adhere to the principles of separation of functions.
Administrative users must have an additional end user account for logging in
as an end user to perform his or her job function.
#### Role Based Security
Role based security must be utilized for systems access. The role(s) assigned
should only include the least access privileges necessary for a user to
perform his or her job function. Access privileges typically include
privileges such as read, write, and delete. In certain systems, access
privileges may allow or not allow access to specific screens or fields or
functions within the system.
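A deny-by-default role-to-privilege mapping is one common way to implement this; the roles and privilege sets below are illustrative, not the actual Platform.sh role model:

```python
# Roles map to the least set of privileges needed for the job function.
ROLE_PRIVILEGES = {
    "support":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "delete"},
}

def is_permitted(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PRIVILEGES.get(role, set())

assert is_permitted("engineer", "write")
assert not is_permitted("support", "delete")
```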
#### Administrator Access
Administrator access permits powerful systems access that could include the
ability to add and remove users, change or delete data, and implement code.
Administrator access to systems must be strictly restricted to only those
personnel that require this level of access to perform their job functions,
including, but not limited to, the following teams:
<table>
<tr>
<th>
System
</th>
<th>
Teams with Administrator Access
</th> </tr>
<tr>
<td>
AWS Console
</td>
<td>
Support, Engineering, Operations
</td> </tr>
<tr>
<td>
Server Operating System (O/S)
</td>
<td>
Support, Engineering, Operations
</td> </tr>
<tr>
<td>
Application
</td>
<td>
Authorized Access Owners/Administrators
</td> </tr> </table>
#### Computer Screen Lockout
If there has been no activity on a computer terminal, workstation or personal
computer (PC) for 15 minutes, a password protected screen saver must
automatically lock the screen. Reestablishment of the session will take place
only after the user has provided the proper user account and password for
authentication. Employees are required to lock their screen if they leave
their computer unattended.
#### Public Content and Permitted Actions without Authentication
Unauthenticated users are only permitted to view certain publicly accessible
content via Platform.sh external user interfaces. The content must be
classified as ‘public’. Role based security is utilized to prevent the
disclosure of restricted or internal use only data.
Platform.sh team members are responsible for reviewing and posting content to
public facing mediums.
# Networks & Services
Users shall have direct access only to the networks and services that they
have been specifically authorized to use. Platform.sh employees operate with
six varieties of credentials:
* LDAP credentials
* Google Apps credentials
* Amazon IAM credentials
* Platform SSH public key credentials
* Atlassian (JIRA/HipChat) credentials
* Accounts/Zendesk credentials
## LDAP Credentials
An LDAP account allows access to the internal Platform.sh web-based
infrastructure management and monitoring applications hosted on
admin.platform.sh.
## Google Apps
Platform.sh utilizes Google applications for company email and contacts,
documents, and conferencing. Google Apps credentials require two-factor
authentication.
## Amazon IAM
Platform.sh utilizes Amazon IAM (Identity and Access Management) credentials
in order to operate all Amazon Web Service resources for Platform.sh Cloud.
IAM credentials require two-factor authentication.
## SSH public key
Platform.sh utilizes SSH public key authentication to control access to
virtual machine (VM) and container resources running Platform.sh services and
customer applications.
Privileged access to Platform.sh VMs and containers is restricted to users
accessing via our 'jump' box (admin server, aka bastion host).
## Atlassian (JIRA/HipChat)
Platform.sh utilizes JIRA and HipChat, both of which are software as a service
products maintained by Atlassian. JIRA is used to track internal issues
affecting the business and product. HipChat is used for real-time
communication between employees and customers.
## Accounts/Zendesk
Zendesk is the Platform.sh customer facing issue and support tracking system.
This is a software as a service maintained by the company of the same name,
Zendesk.
Platform.sh utilizes a self hosted Drupal instance (called Accounts) to manage
customer and employee access to Zendesk and the Platform.sh Standard UI.
Account credentials require two-factor authentication.
# Operating Systems
## Cloud Infrastructure
Platform.sh has standardized on using Debian Linux for virtual machine (VM)
and container resources running Platform.sh services and customer
applications.
Hosts are accessible using only Secure Shell (SSH) with public key
authentication.
## Employee Workstations
Platform.sh does not dictate the type of operating system employees use on
workstations. Any operating system capable of securely accessing the Internet
via a Web Browser and SSH console is suitable.
Platform.sh employees are responsible for self-managing access to personal
workstations in order to remain compliant with Platform.sh policies.
# Applications
Platform.sh employees and customers utilize many different types of
applications to access and manage cloud infrastructure and services. Some
common use cases include:
## Web Browser
May be used to access any web based services such as Google Apps, Zendesk,
JIRA, HipChat, Accounts, and Platform UI.
This includes well-known desktop web browsers such as Mozilla Firefox, Google
Chrome, Opera, Safari, and Internet Explorer, mobile devices, or any HTTPS
client application capable of rendering modern HTML/CSS/Javascript.
## Secure Shell Client
Used to connect to and manage hosts and container resources remotely via the
command line. Most Platform.sh employees and customers are utilizing the
widely known OpenSSH client included in UNIX based operating systems. Users of
the Microsoft Windows Operating system have several commercial and open source
alternative solutions available, for example PuTTY.
## AWS Command Line Interface Tools
Platform.sh personnel who require access to AWS instance management and
configuration via a command line console use the official Python based AWS
Command Line Interface Tools provided by Amazon. Complete details about access
control with the AWS CLI are found in the official documentation
_http://aws.amazon.com/documentation/cli/_ .
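The same IAM-controlled API that the CLI wraps is also reachable from Python via boto3, Amazon's official SDK. A minimal sketch (credentials are resolved from the standard AWS credential chain, never hardcoded; the region is illustrative):

```python
import boto3

# Credentials come from the environment, shared config files, or an
# instance profile, per the standard AWS credential chain.
ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is illustrative

response = ec2.describe_instances()
for reservation in response.get("Reservations", []):
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```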
# Mobile computing and teleworking
Platform.sh is a virtual office environment. Most employees work remotely over
the Internet using the aforementioned applications, networks, and services
with company provided hardware.
The employee is responsible for ensuring his/her workstation and personal
network are properly secured from unauthorized access. At a minimum, hard disk
drive encryption must be enabled.
# Platform.sh Information Security Policy
Version 0.1.1
Updated: 2016-03-30
## Purpose
Platform.sh is a customer centric organization. We strive each day to provide
the best products and services to our customers. This in turn enables our
customers to be successful in their business. Platform.sh has a creed that
employees live by each day to ensure we meet our customers' expectations:
1. Do the right thing
2. Jump in and own it
3. Committed to awesome
4. Give back more
5. Inspire a little crazy
Platform.sh has an Information Security and Compliance Program that provides
the security, control, and transparency that our customers expect. The purpose
of this policy is to ensure the highest level of integrity, confidentiality
and availability of Platform.sh and customer systems and data. Platform.sh
understands that information security is extremely important to our customers
and us.
## Scope
The Information Security Policy is one component of a comprehensive
Information Security and Compliance Program at Platform.sh and must be
followed by all Platform.sh employees, employee contractors, interns,
contractors, vendors, customers, and partners utilizing Platform.sh system
resources. The policy includes key security and operational requirements in
areas that include, but are not limited to, the following:
* Authentication
* Access Management
* Logging and Monitoring
* Application Development
* Backup
* Configuration Management
* Patch Management
* Vulnerability Management
* Anti-virus Protection
* Perimeter Defenses
* Asset Management
* Incident Management
* Physical Security
* Environmental Security
* Third Party Vendor Management
The scope of this policy includes information security controls and safeguards
that protect Platform.sh, customer systems, and data. Security in the cloud
is a shared responsibility between different service providers and the
customer. Within the roles and responsibilities section of this policy
Platform.sh describes the shared responsibility model and references customers
and Amazon Web Services (AWS) responsibilities.
## Management Commitment and Applicability
The Platform.sh Executive Team fully commits to and supports these policies
and procedures. The Platform.sh Data Management Officer will ensure all
organizational entities coordinate to effectively implement and disseminate
these policies and procedures.
## Definitions
<table>
<tr>
<th>
**Term**
</th>
<th>
**Definition**
</th> </tr>
<tr>
<td>
Security Logging
</td>
<td>
Security logging is the recording in a log file of security events that
occurred on the system.
</td> </tr>
<tr>
<td>
Service Account
</td>
<td>
A service account is an account that is utilized by the system for an
automated process.
</td> </tr>
<tr>
<td>
Default Account
</td>
<td>
A default account is an account that is part of the installation of the
particular system. For example, the ‘root’ administrator account for a Linux
system.
</td> </tr>
<tr>
<td>
Third Party Vendor
</td>
<td>
A third party vendor provides goods or services to Platform.sh in exchange for
payment from Platform.sh.
</td> </tr>
<tr>
<td>
Complementary
Control
</td>
<td>
A secondary control that serves the same purpose in reducing a given risk as
the primary control. This control is meant to ‘complement’ the primary control
in reducing the risk.
</td> </tr>
<tr>
<td>
PCI-DSS
</td>
<td>
Payment Card Industry Data Security Standard (PCI-DSS) – A set of standard
security requirements established by the payment brands (AMEX, VISA,
MasterCard, etc.) to ensure security for the storage, processing or
transmission of cardholder data.
</td> </tr> </table>
## Roles and Responsibilities
<table>
<tr>
<th>
**Role**
</th>
<th>
**Responsibility**
</th> </tr>
<tr>
<td>
Product Owner
</td>
<td>
The Product Owner is responsible for approving the release for implementation
into the production environment via the ticketing system.
</td> </tr>
<tr>
<td>
Director of Operations
</td>
<td>
Director of Operations is responsible for approving and scheduling
maintenance, operational incident response, and overall management and
oversight of Support and Operations Personnel.
</td> </tr>
<tr>
<td>
Hiring Manager
</td>
<td>
The Hiring Manager/Manager is responsible for requesting access for new hires,
or access modifications for existing employees, to Platform.sh systems, and
for employment checks.
</td> </tr>
<tr>
<td>
Human Resources
Personnel
</td>
<td>
Human Resources (HR) personnel are responsible for processing tasks for
current, new, and terminated employees.
</td> </tr>
<tr>
<td>
Access Owner / Administrator
</td>
<td>
Responsible for certain access administration tasks, facilitating the access
addition / modification, revocation processing, and performing quarterly
access reviews
</td> </tr>
<tr>
<td>
Platform.sh
Contact for the
Third Party Vendor
</td>
<td>
The Platform.sh Contact for the Third Party Vendor is responsible for being
the point of contact for the third party vendor, which may include, but is not
limited to, service and maintenance requests, feature requests, assisting
finance with billing or payment questions, requesting compliance audit
documentation.
</td> </tr>
<tr>
<td>
Platform.sh Personnel
</td>
<td>
Platform.sh personnel include employees, employee contractors, contractors,
and interns and are responsible for complying with the requirements outlined
in the Information Security Policy.
</td> </tr>
<tr>
<td>
Executive Team / Leadership
</td>
<td>
A collection of senior leaders and executives at Platform.sh responsible for
making key strategic and operational decisions for Platform.sh.
</td> </tr>
<tr>
<td>
Product Manager
</td>
<td>
The Product Manager is responsible for documenting user stories, grooming and
prioritizing the change and feature requests, and leading sprint review
meetings.
</td> </tr>
<tr>
<td>
Risk Owner
</td>
<td>
Responsible for making decisions regarding Platform.sh risks and assigning
remediation activities to their teams. Ultimately responsible for the risks
within their business unit.
</td> </tr>
<tr>
<td>
Operations Personnel
</td>
<td>
Responsible for many different security and operations activities that
include, but are not limited to, systems administration, backup, configuration
management, patch management, and anti-virus.
</td> </tr>
<tr>
<td>
ISMS Committee / InfoSec Team
</td>
<td>
Responsible for performance and oversight of different information security
activities at Platform.sh
</td> </tr> </table>
## Shared Responsibility Model
### Platform.sh Responsibilities
Platform.sh is responsible for the security and availability of the
Platform.sh PaaS Platform and Internal Platform.sh Systems. This includes the
operating system and database layers of the architecture. This includes, but
is not limited to, server level patching, vulnerability management,
penetration testing, security event logging & monitoring, incident management,
operational monitoring, 24/7 support, and ensuring customer site availability
in accordance with SLAs. In addition, Platform.sh is responsible for managing
server firewall configurations (IPTables) and perimeter firewall
configurations (security groups). If a
customer has the Platform.sh CLI tool, security updates to core and
contributed modules will be made available for testing via an automated
process. Deployment to production by Platform.sh requires customer testing and
approval.
### Customer Responsibilities
The customer is primarily responsible for the security of their application
hosted on the Platform.sh Platform. This would include ensuring a secure
configuration and coding of the website application, and related security
monitoring activities including penetration testing and vulnerability scans of
the customer site on a periodic basis. Platform.sh offers professional
services and works with third party partners to assist customers with building
their website application and to assume some of these responsibilities. In
addition, customers are also responsible for the security of their users and
for the granting of privileged access to their configuration (Platform UI) and
application (hosted web application).
### AWS Responsibilities
AWS is responsible for security of the network including routing, switching,
and perimeter network security via firewall systems and intrusion detection
systems (IDS). AWS is responsible for physical security of the data centers
hosting the Platform.sh PaaS, and for environmental security to ensure proper
power, cooling, and mechanical controls are in place. AWS is responsible for
the bare metal infrastructure that is running the Platform.sh PaaS.
Platform.sh PaaS is built within Amazon's AWS data centers and uses Amazon's
Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3) and Elastic
Block Store (EBS) services.
## Policy
### Information Classification
Information is a critical resource at Platform.sh that must be appropriately
classified and handled to help ensure the following:
* Platform.sh meets customer, industry, regulatory and privacy standards.
* Protection of customer data.
* Reduce the risk that internal use only or restricted information is released to unauthorized personnel.
#### Information Classifications
Platform.sh has established three different information classifications to
help ensure the protection of Platform.sh and customer information:
<table>
<tr>
<th>
**Public**
</th>
<th>
Information that can be viewed by the general public.
</th> </tr>
<tr>
<td>
**Internal Use Only**
</td>
<td>
Information that must be kept internal to Platform.sh personnel. Platform.sh
internal use only information includes, but is not limited to:
* Intellectual Property: Source code and system diagrams
* Sales Data: Prospective customer company, name, phone number, email, address
* Platform.sh Human Resources Data: Platform.sh Personnel names, addresses, salary information
</td> </tr>
<tr>
<td>
**Restricted**
</td>
<td>
Restricted information must not be viewed by Platform.sh personnel unless
explicit permission is provided by the customer. Platform.sh restricted
information includes, but is not limited to:
* Customer Data: Data stored on the customer database.
* Cardholder Data: Full magnetic stripe or personal account number (PAN) plus any of the following: Cardholder name, expiration date, service code.
* Classified Data: Information that has been determined pursuant to Executive Order 13526 or any predecessor order to require protection against unauthorized disclosure and is marked to indicate its classified status when in documentary form. (Reference: CNSSI 4009, EO 13526)
* Personally Identifiable Information (PII) Definitions: Reference: State Data Privacy Laws
* Personal Health Information (PHI): Protected health information (PHI) is any information in the medical record or designated record set that can be used to identify an individual and that was created, used, or disclosed in the course of providing a health care service such as diagnosis or treatment.
</td> </tr> </table>
### Data Handling
Platform.sh has established different areas of responsibility to reduce
opportunities for unauthorized or unintentional modification or misuse of
data.
<table>
<tr>
<th>
**Public**
</th>
<th>
Information that can be viewed by the general public.
</th> </tr>
<tr>
<td>
**Internal Use Only**
</td>
<td>
Information that must be kept internal to Platform.sh personnel. Platform.sh
internal use only data handling includes, but is not limited to:
* Intellectual Property: Must not be posted to public forums, or shared over unencrypted communications mechanisms (e.g. chat or email).
* Sales Data: Must not be posted to public forums.
* Platform.sh Human Resources Data: Access should be restricted to only authorized human resources personnel.
</td> </tr>
<tr>
<td>
**Restricted**
</td>
<td>
Restricted information must not be viewed by Platform.sh personnel unless
explicit permission is provided by the customer. Platform.sh restricted data
handling includes, but is not limited to:
* Customer Data: Must not be viewed by Platform.sh personnel unless explicit permission is provided by the customer.
* Cardholder Data: Must be stored within the Platform.sh PCI-DSS compliant virtual private cloud (VPC) offering and include data encryption and a full multi-tier setup.
* Classified Data: Platform.sh is not authorized to handle or store classified information.
* Personally Identifiable Information (PII): Should be stored with the Platform.sh PII offering and include data encryption.
* Personal Health Information (PHI): Must be stored within the HIPAA VPC offering and include data encryption. In addition, the customer must sign a Business Associate Agreement (BAA).
</td> </tr> </table>
### Platform.sh Responsibilities
* The data on retired Platform.sh technology assets must be deleted and/or degaussed prior to disposal of the asset.
* Employees must lock computer workstation screens when unattended. In addition, systemic controls must be configured to lock computer workstations after 15 minutes of inactivity.
* Locked shredding bins must be located in the main office facility to help ensure that sensitive data is securely disposed. Documents that contain internal use only or restricted information must be disposed of in the shredding bins. A third party must collect the shred bin materials on at least a quarterly basis and securely shred the contents onsite. A shredding certificate must be required from the third party provider.
* A cross-cut shredder or secure shredding service must be located at each of the remote Platform.sh office facilities. Documents that contain Platform.sh internal use only or restricted information must be disposed of utilizing the cross-cut shredder.
* Access to the HR, legal, and security file shares and hard copy file folders must be restricted to authorized personnel based on role to help ensure that documentation is only available to personnel that require access to perform their job function.
* Hardcopy documentation containing HR sensitive data including but not limited to, PII, compensation information, and health information must be securely locked via lock and key and accessible only by authorized HR personnel.
* System documentation stored in the Cloud must be appropriately protected with access controls.
* Documentation containing restricted information that is transmitted electronically across public networks or wirelessly must use access control methods and strong encryption protocol of 128 bit strength or greater. Internal use only information should not be stored, accessed or transported outside of Platform.sh office facilities unless it is securely transmitted or located at a third party service provider approved through the third party management process.
### AWS Responsibilities
* AWS is responsible for installation, maintenance, disposal of physical server infrastructure supporting Platform.sh operations.
### Customer Responsibilities
* Customers are responsible for informing Platform.sh of any restricted information that will be stored, processed or transmitted on the Platform.sh PaaS platform.
* Customers are responsible for selecting the services required for the storage, processing or transmission of restricted information to ensure restricted information is being handled according to Platform.sh policy.
* Customers with cardholder data on the platform are responsible for ensuring legal, regulatory, and business requirements are met regarding the retention of cardholder data. In addition, customers are responsible for setting retention, deletion, and review processes for cardholder data.
## Risk Management
_ISO 27001: 6.1.2 – Information security risk assessment_
_ISO 27001: 6.1.3 – Information security risk treatment_
The Information Security Program at Platform.sh is a risk-based program.
Platform.sh values the necessary balance between risk and control, and
understands that the intent of the Information Security Program at Platform.sh
is to reduce risk to an acceptable level. Security control can never eliminate
risk entirely.
To facilitate the risk decision process at Platform.sh, various Risk Owners
have been identified. These Risk Owners are Platform.sh Team members that are
responsible for the risks identified within their respective business units.
For a given risk, the Risk Owner must evaluate the likelihood and impact on
confidentiality, integrity and availability and must make one of the following
decisions regarding the risk:
* **Remediate** \- Address the risk and fix the issue based on risk rating.
* **Monitor** \- Monitor the risk until such time a decision can be made for the risk.
* **Transfer** \- Transfer the risk to a third party or place reliance on a complementary control.
* **Accept** \- Accept the risk and do nothing to address the risk.
The quantification of risk helps enable the Risk Owner to make a decision
regarding the risk. Critical and High risks are required to be remediated.
The risk assessment results must be documented and distributed to key
stakeholders within the Platform.sh organization.
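A common way to quantify risk is a likelihood-by-impact matrix. The sketch below assumes illustrative 1-5 scales and thresholds; the policy itself fixes only the rule that Critical and High risks must be remediated:

```python
# Illustrative 1-5 scales for likelihood and impact; the thresholds
# below are assumptions, not values taken from the policy.
def risk_rating(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

def required_decision(rating: str) -> str:
    if rating in ("Critical", "High"):
        return "Remediate"                    # mandated by the policy
    return "Monitor, Transfer, or Accept"     # Risk Owner's choice

print(required_decision(risk_rating(4, 5)))   # -> Remediate
```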
## Corporate Governance
A corporate governance framework is in place at Platform.sh to help ensure
continuity and monitor quality of the Information Security Program. The
Platform.sh Executive Team is committed to supporting and evangelizing the
importance of the Information Security Program. Platform.sh has the following
groups established to facilitate corporate governance:
* **Board of Directors** – A board of directors is in place and meets on at least a quarterly basis to help ensure oversight for management strategy and operations.
* **Audit Committee** – An audit committee is in place and meets on at least a quarterly basis to help ensure that an independent body can provide sound corporate governance in corporate matters.
* **Governance, Risk and Compliance (GRC) Council** – A GRC Council is established with members of the Platform.sh Executive team to help ensure that organizational risks are prioritized and addressed, accepted or transferred. The GRC council meets on at least a quarterly basis.
**Information Security Program Monitoring** : Platform.sh personnel must
perform a risk assessment on an annual basis, or whenever there is a
significant change in the environment, to help monitor that the Platform.sh
Information Security Program is operating effectively.
**Information Security Architecture** : Information Security must be designed
into the Platform.sh Product as an inherent component of the system. The
architecture must include security components that help to ensure the
confidentiality, integrity and availability of customer systems and data.
Security architecture diagrams must be reviewed at least annually or whenever
there is a significant security architecture change.
**Policy, Plan & Procedure Review ** : The policy, plan and procedure owners
must perform a review and update of their policy, plans and procedures on at
least an annual basis to help ensure that policy, plan, and procedure
documentation is up to date.
**External Third Party Audits** : Information security controls must be
evaluated on an annual basis by an independent third party audit firm to
ensure controls are designed and operating effectively.
## Security Awareness & Training
Platform.sh personnel must attend security awareness training as part of the
new hire onboarding process and on an annual basis thereafter to help ensure
that security best practices are owned and followed by Platform.sh personnel.
In addition, Platform.sh personnel must attend department specific training to
help ensure they know how to properly perform their job function. Security
awareness training content is to be reviewed on an annual basis, or when there
has been a significant change, to help ensure that content is relevant and
current. The Data Management Officer will ensure that personnel have completed
security awareness training. When required, the Data Management Officer will
send out emails regarding security issues.
Operations, Support and Engineering personnel must attend platform specific
security training upon assuming a Platform.sh role, upon significant changes
to the platform that would necessitate a retraining, and on an annual basis.
Training completion documentation must be logged and tracked for at least one
year.
## Human Resources
The Data Management Officer at Platform.sh is responsible for the new and
terminated employee processes including, but not limited to, recruitment,
new hire training, termination notification, termination asset lifecycle
management, and facilitating the completion of required new hire
documentation.
_**Code of Conduct and Ethics** _
_**Preventing Harassment** _
### Acceptable Use
New Platform.sh personnel must read the Platform.sh Acceptable Use policy and
confirm their receipt and understanding of this policy with their sign-off.
### Non-disclosure agreements
New Platform.sh personnel must read and sign the Non-Disclosure Agreement
(NDA) as part of the offer letter process and confirm their receipt and
understanding of this agreement with their signature. The NDA includes clauses
for non-solicitation and intellectual property.
### Information Security Policy
New Platform.sh personnel must read the Platform.sh Information Security
Policy and confirm their receipt and understanding of this policy with their
sign-off.
_**Security Awareness Training Completion** _
The Data Management Officer must confirm that each new employee has completed
training.
_**Employment Checks** _
### Job Descriptions
The Hiring Managers are responsible for ensuring documented positions
descriptions are maintained that outline roles and responsibilities for
Platform.sh Personnel and prospective candidates. In addition, the Hiring
Manager is responsible for periodically reviewing and updating the position
descriptions.
### Exit Interview
The Data Management Officer must discuss information security topics,
including the return of information technology assets and the revocation of
the employee's system access, as a component of the termination process.
### Termination Processing
The Data Management Officer must collect Platform.sh technology assets as a
component of the termination process. A termination checklist within the
ticketing system must be utilized to help ensure that employee termination
procedures are completed.
_**Transfer Processing** _
### Customer Responsibility
Customers are responsible for reviewing, understanding, and abiding by the
external user acceptable use policy listed on the Platform.sh webpage.
## Access Management
Further detail on access management is evidenced within the User Account
Management section of the Platform.sh _System Access Control_ policy.
## Security Audit Logs
Security audit logs must be available for both real time security event
monitoring and forensic security event investigations. A comprehensive
illustration on logging events and elements is evidenced within the
_Platform.sh Logging Event Table_ .
Security Audit Log Monitoring is critically important to effectively use the
data provided by security audit logs. Without a review of security audit logs,
there is an increased risk that a security event could go undetected. Further
detail on log monitoring is available in the _Logging and Monitoring
Procedure_ document.
If the review of a security event indicates that there is an intrusion or
breach, then the Platform.sh Incident Response Plan must be activated.
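A minimal sketch of a dedicated security audit logger (the field names are illustrative; the authoritative list of logged events and elements is the _Platform.sh Logging Event Table_):

```python
import logging

# A dedicated logger keeps security events separate from application logs.
security_log = logging.getLogger("security")
handler = logging.FileHandler("security_audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
security_log.addHandler(handler)
security_log.setLevel(logging.INFO)

def record_event(event_type: str, user: str, detail: str) -> None:
    # Field names are illustrative, not the Platform.sh log schema.
    security_log.info("event=%s user=%s detail=%s", event_type, user, detail)

record_event("login_failure", "jdoe", "3 consecutive failed attempts")
```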
## System Development Life Cycle (Change Management)
Platform.sh product and engineering teams perform system development
activities. These teams each participate in the Agile system development
lifecycle (SDLC). To build new product capabilities, Platform.sh follows an
Agile process. The Platform.sh application development lifecycle includes a
number of controls to help ensure development efforts are coded well and
securely.
### Ideation
The purpose of the Ideation phase is to assess and get buy-in for new product
investment opportunities with Product, Engineering, Sales, Finance, and Legal
leadership before any product development begins.
### Pre-Development planning EPICs
EPICs must be documented to define the engineering requirements that will be
broken out into ‘User Stories’.
### User Stories
User stories document the detailed use cases related to the EPIC. There is
often a one-to-many relationship between an 'EPIC' and its 'User Stories'.
User stories must be documented and prioritized in the ticketing system by the
Product Managers. The user stories include the development 'tasks' that are
required for the release.
### Bugs / Tickets
Bug / Ticket requests relate to a specific issue that requires the engineering
team to investigate and/or intervene. They are generated by engineering
personnel and must be documented in the ticketing system and include, but are
not limited to, the following information:
* Project
* Description
* Issue Type
* Priority
* Status
* Assignee
### Sprint Review Meetings
Engineering and product teams have several different review meetings as part
of the Agile software development process including:
* **Daily ‘scrum’ meetings** : Daily meetings to discuss progress.
* **Additional Meetings** : Architecture meetings, release review meetings are held as needed.
#### 1\. Executive Summary
The involvement of end users in the experiments created on top of the project
federation of testbeds, platforms and living labs is one of the project's main
challenges. It serves several complementary objectives: it is the guarantee of
realistic experimentation conditions, ensures the accurate and useful
collection of feedback, provides a better understanding of the potential
impacts of technology and its perception at the societal level, and offers
opportunities for harnessing co-creation creativity and future exploitation of
the project results.
The project consortium is committed to conducting responsible research and
innovation and, as such, carried out an ethics self-assessment of potential
risks and ethical impacts at the proposal stage.
Two main points of concern arise in the FESTIVAL project: the involvement of
end users in the experiments run on the testbeds, and the potential collection
and handling of personal data.
In addition, the project's focus on the Internet of Things requires us to look
into the current privacy and ethical concerns identified in this technology
domain, and to liaise with the existing work carried out in the ecosystem.
This deliverable thus defines guidelines for end user involvement and the
handling of privacy, data protection and ethics issues in experiments on the
federated testbeds. These guidelines will help experimenters from both the
consortium and external stakeholders to handle end user involvement
accordingly.
It includes a review of the context and motivation for end user involvement,
and presents the current state of the experiments defined for the project in
terms of expected end user involvement. This list of experiments is directly
related to the activities of WP3 and will evolve in the future in line with
the project activities and the involvement of external experimenters.
Based on this analysis of the context and state of the art, a project strategy
has been defined to ensure responsible involvement of external participants.
It focuses on the following points:
* **A technological and legal watch** activity to ensure that the project stays up to date with the state of the art in research experiment setup, involvement of external participants, protection of data and privacy and the accompanying legal framework.
* **Raising Awareness:** The creation of training materials for both experimenters and participants to present and explain in a rapid and accessible way some of the important challenges that can be raised by the participation to the project experiments.
* **Informed Consent Process:** The creation of the processes and documents for ensuring the legal participation of the end users in the experiments: processes for the collection of informed consent, set up of a complaint procedure through a neutral third party, technical mechanisms ensuring the protection of experiment data (in joint work with WP2 and WP3).
* **Assessment of Personal Data Management:** The set-up of a basis for a “privacy impact assessment” process that can be used by the project experiments (and external experimenters) to assess the potential risks linked with their technology deployments in order to identify the ones that would require specific measures and oversight.
* **The organization of training sessions** and support for the experimenters to accompany the creation of the experiments and the actual involvement of the experimenters.
The deliverable also provides a report on the first year of activity and
presentation of the initial results:
* **The project Factsheets** on responsible user engagement, aimed at raising awareness of the issues. A Factsheet is a one-page visual document that can be widely distributed and focuses on a single issue or a specific process set up by the project. Factsheets provide a general overview of the topic they address without entering into the details, serving rather as an invitation to consider a specific point. They usually target experimenters, but can also be very useful for experiment participants, as they inform them of the project practices.
* **The informed consent, complaint and data withdrawal procedures.** Informed consent is an ethical requirement for most research and must be considered and implemented throughout the research lifecycle, from planning to publication. Gaining consent must include making provision for sharing data and take into account any immediate or future uses of data.
* **A first assessment of the Privacy and Security impacts of the project experiments.** The FESTIVAL PIA process consists of a fifteen-question questionnaire covering the entire information flow of an experiment, describing how the data is handled in each phase and what associated security measures are provided. A first evaluation of the project experiments is provided.
Finally, the deliverable also looks into the differences of perspective
between the European and Japanese sides, which can be considered limited, as
privacy protection and ethical research are strong concerns on both sides of
the project.
Over the first year, we have not only set up an operational environment for
the involvement of external participants in the project, but also increased
our knowledge on these issues and disseminated this knowledge within the
consortium and even outside it (thanks to the first distributions of
factsheets). The project effort on this task will continue over the following
period, to pursue the effort already engaged and finalise the project
infrastructure, but also to gather feedback and improve our framework. As the
project experiments move into an operational phase and as the project opens up
to external experimenters, we foresee that this task will progressively evolve
into an operational support task that provides guidance to experimenters.
#### 2\. Introduction
The involvement of end users in the experiments created on top of the project
federation of testbeds, platforms and living labs is one of the project's main
challenges. It serves several complementary objectives: it is the guarantee of
realistic experimentation conditions, ensures the accurate and useful
collection of feedback, provides a better understanding of the potential
impacts of technology and its perception at the societal level, and offers
opportunities for harnessing co-creation creativity and future exploitation of
the project results.
The objective of this deliverable is to define guidelines for end user
involvement and the handling of privacy, data protection and ethics issues in
experiments on the federated testbeds. These guidelines will help
experimenters from both the consortium and external stakeholders to handle end
user involvement accordingly.
The following plan has been followed:
* **Section 3: Context and Motivation** recalls the project's motivation for contact with and feedback from end users and its importance for the project. We present the context of experiments and trials in the project that will involve end users, with a first description of the planned interactions with end users for each of the experiments envisioned by the project (experiments presented in more detail in Deliverable 3.1). We also present the main ethical issues raised by the project activities and an overview of the ethical and privacy discussion in the Internet of Things domain.
* **Section 4: Strategy for Responsible User Involvement** presents the project's overall strategy for responsible user involvement. It looks into the main challenges identified, based on the context and motivation included in Section 3, and provides a general strategy and list of planned activities. To provide an operational vision of the project effort, a summary of the actions performed in the first year and a plan for future activities are also provided.
* **Section 5: Raising Awareness: Factsheets.** In this section, the project activities for raising end users' and experimenters' awareness of ethical issues and responsible research and innovation are presented, including the concept of factsheets proposed by the project and the current and foreseen factsheets.
* **Section 6: Informed Consent Process.** The process proposed for the project's informed consent procedure and the tools set up are included: the general procedure principles, and the current set of documents for informing the user about the project and experiments, gathering their consent, and providing a process for complaints and withdrawal of their data. Additionally, future plans for an electronic version of the informed consent process, to be included in the project portal for Experimentation as a Service, are presented.
* **Section 7: Assessment of Personal Data Management.** The project activities to assess the risks associated with the management of data in the project testbeds and experiments, and the associated measures to safeguard privacy and data confidentiality, are shown. This includes a general overview of the principles of the project and the Privacy Impact Assessment process set up by the project. It also provides, for each of the experiments currently envisioned by the project, a first version of the Privacy Impact Assessment, identifying the way data is collected, stored, used, shared, destroyed and managed. This section also provides updates on the current status of relations with the Data Protection Authorities of each of the project experiment locations.
* **Section 8: Europe – Japan Differences** presents the main differences identified in this task between the European and Japanese approaches to involving end users in experiments and safeguarding privacy.
* Finally, a conclusion is provided (Section 9) and additional documents and references are presented in the Annex of this deliverable.
This deliverable provides a first complete overview of the project approach,
activities and results regarding responsible end user involvement in the
project experiments. As the project develops with more detailed experiment
definitions (and actual implementation), and progressively opens to external
experimenters, task 4.1 will concentrate on making sure that the guidelines
defined in this deliverable are applied and kept up to date.
#### 3\. Context and motivation
_In this section we recall the motivation behind contact with and feedback
from end users and its importance for the project. We present the context of
experiments and trials in the project that will involve end users. We also
present the main ethical issues raised by the project activities._
##### 3.1. Motivations for end user involvement
The involvement of end users in the experiments created on top of the project
federation of testbeds, platforms and living labs is one of the project's main
challenges, as emphasised in the project description of work:
_“**Challenge 5 - User involvement, privacy:** The development of an open
federation of testbeds enabling “Experimentations as a Service” can only make
sense and have a real impact by the number and quality of the experimentations
that are run on the testbed. The infrastructure federated in the project will
enable both small and large scale trials over various application domains. As
the technologies of the Future Internet move ever closer to the market, in an
ever shorter innovation cycle, the need to validate the experimentation in
“real life” trials with end user is a strong requirement of the project. Given
the number and complexity of the privacy and ethics concerns in the deployment
of future connected applications (user informed consent, continuity and
availability of services, contextualization of risk, profiling, ownership,
management and captivity of data, applicable legislation and enforcement…), a
strong focus has to be put on the protection of the end users participating in
the trials.” _
The involvement of end users in the project’s experiments serves several
complementary objectives:
* First, user involvement is **the guarantee of realistic experimentation conditions** . Indeed, the experiments that can be built on top of the project federation are not solely technical applications but rather socio-technical systems. The potential interactions between the technological system set up and the human users and bystanders can have strong impacts on the viability of the proposed experiments. Therefore, the presence and involvement of human participants in the experiment is a requirement to validate the proposed applications in conditions as close as possible to those of a final deployment of the technology.
* The involvement of end users, especially when they are external to the project, is also a guarantee of **accurate and useful feedback collection** . They can provide an alternate view on the ongoing experiment and complement the technical evaluation of the system from a different perspective. It is also a key element to validate some of the non-functional requirements of the applications (system usability, training time, acceptability…)
* The engagement of users can also help research to **better understand the potential impacts of technology and its perception at the societal level** . The actual involvement of end users in experiments is superior to a simple presentation of potential application scenarios in terms of collecting impressions, reactions, beliefs, and opinions on future technologies and innovations. The project evaluation activities (especially task 4.3) will thus strongly benefit from end user involvement and require adequate methodologies to involve end users and gather feedback.
* Some of the experiments proposed by the project, as well as those that will be proposed by external experimenters, have **direct exploitation opportunities** . These exploitation opportunities, however, require more than tests in the lab to validate their potential commercial viability. Thus, the involvement of external users in experiments can also help the project exploitation strategy (developed in WP5) to identify the most promising exploitations.
* Finally, the involvement of end users also opens **the opportunity for co-creation mechanisms** . The engagement of end users can provide external ideas that directly influence the set-up of the experiment by enlarging the number and broadening the scope of the inputs on which the future experimentations, applications and services are built.
As the experiments proposed by the project cover many technologies and
application domains and require the involvement of various stakeholders, the
term "end users" can cover various categories of stakeholders in FESTIVAL, and
it is important to grasp this diversity of stakeholders to be involved in the
experiments. This includes:
* External decision makers that would (in a real deployment set up) be the customer of the potential applications covered by the experiment.
* External technical stakeholders that would be involved in (or impacted by) the technology deployment (system operators, IT infrastructure).
* Actual end users of the applications proposed by the experiments.
* Citizens at large that could be impacted by the deployment of the application.
##### 3.2. FESTIVAL Field Trials and experiments involving end users
The following section presents the current state of the experiments defined
for the project in terms of expected end user involvement. This list of
experiments is directly related to the activities of WP3 and will evolve in
the future in line with the project activities and the involvement of external
experimenters.
###### 3.2.1. Smart Energy
3.2.1.1. PTL - Energy Management sensiNact
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Autonomous Smart energy application and the real user perception
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
CEA (PTL)
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart building Smart energy
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M25
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
1 user
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
10 users
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
The users chosen to interact with the system should be able to express their
impressions of a given software interface, meaning that they should have a
minimum vocabulary to describe the aspects of the system that are not pleasing
from the user's point of view.
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
In order to save energy, the system might take actions that the user is not
necessarily aware of or may not understand completely (e.g. the system may
close the shutters when the user asks to decrease the temperature of a room).
</td> </tr>
<tr>
<td>
**Expected outcome**
</td> </tr>
<tr>
<td>
This experiment will show how comfortable (or not) the user would be with
autonomous systems taking decisions that he/she is not completely aware of.
This will allow us to know how intrusive the smart energy application can be.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
</td>
<td>
The user should be driven to perform actions in the environment (equipped with
a smart energy application) that would produce indirect actions.
</td> </tr>
<tr>
<td>
</td>
<td>
Autonomous indirect actions should not be explained by the researcher
responsible for the experiment.
</td> </tr> </table>
3.2.1.2. ATR DC – xEMS control
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
xEMS (Energy Management System)
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
OSK
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Energy (Data Center)
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
4 as the data center users
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
100 as the data center users
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
In the first stage, first 20 months, the algorithm for the energy control will
be established and the AIDCIM (AI-Data Center Infrastructure Management
System) will be built for the ATR Data Center. After 20 months, the AI-DCIM
will be extended to other data centers.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
The end users demonstrate the energy management of the data center from
outside via the network, including ASP.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
The end user demonstrates the optimum energy management for the data centers
by using the AIDCIM software provided as OSS. Also, by using the secure
communication protocol IEEE 1888, which has been developed and standardized,
management will be achieved from outside the data centers.
</td> </tr> </table>
3.2.1.3. Knowledge Capital – SNS-like EMS
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
xEMS (Energy Management System)
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
OSK
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Energy
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
1
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
1000
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
End users that have fundamental knowledge of using a smartphone are selected.
For the floor pressure sensor, pedestrians that walk on the sensor are
(automatically) selected as test users.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
End users can input their feelings and requests to the system by using
smartphones. The system changes its configuration based on users' inputs. In
addition, end users provide their movement data through the floor pressure
sensors.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
The energy management systems would gather end users’ inputs and control
various actuators based on the users’ inputs
</td> </tr> </table>
###### 3.2.2. Smart Building
3.2.2.1. PTL - People counting using a single / multiple camera(s)
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
People counting using a Single/Multiple camera(s)
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
CEA (PTL)
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Building and Smart Shopping
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
10
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
100
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
The experiment will be open to participants. Generated data will be of
restricted access. The end users will be involved when they enter the filmed
area.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
No specific interaction is needed from the end users, thus the experiment is
transparent for end users. They will only be informed that an experiment is
being conducted.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
A smart imaging system is currently working. This system sends an anonymized
description of the scene to a centralized computing system. For instance,
those image features can help to provide statistics about the number of
persons that are inside the room. The entire system enables the computation of
relevant information regarding people's trajectories while keeping a high
level of privacy protection. The two main goals of this experiment are, first,
to evaluate the accuracy of such a system and, secondly, to identify the
potential usages of those statistics for smart building/smart advertising
applications.
</td> </tr> </table>
3.2.2.2. PTL - Using actuator based on interpreting the scene using a smart
camera
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Using actuator based on interpreting the scene using a smart camera
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
CEA (PTL)
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Building and Smart Shopping
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M24
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
3
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
10
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
The experiment will be open to participants. Generated data will be of
restricted access. The end users will be involved when they enter the filmed
area.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
The end users will interact with their body and gestures while being filmed.
The participants will act on various actuators (not yet defined) using their
own behaviours and gestures.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
A smart imaging system collects information, processing and saving only
anonymized image features without storing any personal data (i.e., images or
image features that could be linked to an identity). A “computer
interpretation” of the scene provides signals to control media such as sound
and video. Multimodal interactions are also investigated to act on the mood of
the room (e.g., by modifying the ambient air temperature, the humidity level,
or synthetic lighting).
</td> </tr> </table>
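As a rough illustration of how a scene interpretation could be wired to actuators, the following sketch maps gesture labels to actuator commands; the labels and the `GESTURE_ACTIONS` table are illustrative assumptions only, since the actuators are not yet defined:

```python
# Hypothetical mapping from gesture labels (as produced by the smart
# camera's scene interpretation) to (actuator, command) pairs.
GESTURE_ACTIONS = {
    "raise_hand": ("media", "next_track"),
    "wave": ("lights", "toggle"),
    "arms_crossed": ("hvac", "setpoint+1"),
}

def interpret(gesture: str):
    """Return the (actuator, command) pair for a recognized gesture, if any."""
    return GESTURE_ACTIONS.get(gesture)

for g in ["wave", "unknown_pose"]:
    print(g, "->", interpret(g))   # unknown gestures map to None
```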
3.2.2.3. ATR DC – Cold storage geo-replication
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Geo-replication
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
OSK
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Data Center
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
4
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
100
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
In the first stage (before M20), the algorithm for geo-replication of IoT data
will be designed. After M20, geo-replication will be deployed between two
locations.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
The end users demonstrate geo-replication of IoT data between at least two
locations.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
This experiment demonstrates optimal geo-replication of IoT data between at
least two locations.
</td> </tr> </table>
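The deliverable does not specify the replication algorithm, but the following minimal Python sketch illustrates one common approach it could resemble: shipping an append-only change log between two sites and applying last-write-wins per sensor. All names, and the use of local timestamps, are illustrative assumptions; a real deployment would need synchronized or logical clocks.

```python
import time

class Site:
    """A storage site holding the latest value per sensor, plus a change log."""
    def __init__(self, name):
        self.name = name
        self.store = {}   # sensor_id -> (timestamp, value)
        self.log = []     # pending changes to ship to the peer site

    def write(self, sensor_id, value):
        entry = (sensor_id, time.time(), value)
        self.store[sensor_id] = entry[1:]
        self.log.append(entry)

    def replicate_to(self, peer):
        """Ship the pending log; the peer applies last-write-wins per sensor."""
        for sensor_id, ts, value in self.log:
            current = peer.store.get(sensor_id)
            if current is None or ts > current[0]:
                peer.store[sensor_id] = (ts, value)
        self.log.clear()

tokyo, osaka = Site("tokyo"), Site("osaka")
tokyo.write("temp-01", 21.5)
tokyo.replicate_to(osaka)
print(osaka.store["temp-01"])  # same (timestamp, 21.5) as at the origin site
```

Asynchronous log shipping of this kind trades a replication lag for tolerance of inter-site link outages, which is a typical requirement for cold-storage replication.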
3.2.2.4. iHouse – Smart House
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Smart House
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
OSK
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart House
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M20
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
1
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
10
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
End users with basic knowledge of controlling smart home appliances are
selected, since the experiment monitors and gathers various sensing data from
the sensors in iHouse for energy-efficient control of the appliances and
actuators in the smart house.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
The end users emulate daily life in the smart house with smart home
appliances. For example, they may monitor the power consumption of the house
and of each appliance, and also control the appliances.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
Various sensing data are gathered while the end users emulate living in the
smart house. Real-time monitoring and control of home appliances is conducted
based on the end users' inputs.
</td> </tr> </table>
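As an illustration of the monitoring side, here is a minimal sketch (the sample format is hypothetical; iHouse's actual data model is not described in this document) that averages per-appliance power draw and flags heavy consumers for the user's real-time view:

```python
from collections import defaultdict

# (appliance, watts) samples as they might arrive from smart power taps.
samples = [("aircon", 900), ("fridge", 150), ("aircon", 950), ("tv", 120)]

def average_draw(samples):
    """Average power draw per appliance from raw samples."""
    totals, counts = defaultdict(float), defaultdict(int)
    for name, watts in samples:
        totals[name] += watts
        counts[name] += 1
    return {name: totals[name] / counts[name] for name in totals}

draw = average_draw(samples)
print(draw)                                           # per-appliance averages
print([name for name, w in draw.items() if w > 500])  # ['aircon']
```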
3.2.2.5. Smart Station at Maya
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Smart Station at Maya
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
JCOMM
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Building
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M18
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
85 (assuming 1% of the station's ridership)
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
850 (assuming 10% of the station's ridership)
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
This experiment will be held at the Maya Station in Kobe city. Maya Station is
a new station on the JR Kobe Line that will open in March 2016. All users of
the Maya Station can get useful information about the station by watching the
digital signage and the web site.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
1. An agreement with station users is not required because no personal information is processed.
2. Useful information about Maya Station will be provided. By watching the digital signage in front of the train gates, users can get information about temperature, weather, solar power generation, bus access information, and so on. Libelium sensors and Wi-Fi packet sensors are used to gather these data, and the acquired information is expected to be processed using the JOSE testbed.
3. Anonymous user feedback is collected by questionnaire after the experiment.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
This experiment will be held at the Maya Station in Kobe city. Maya Station is
a new station on the JR Kobe Line that will open in March 2016. All users of
the Maya Station can get useful information about the station by watching the
digital signage and the web site.
</td> </tr> </table>
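As a small illustration of the data flow, the following sketch (with hypothetical field names; the actual signage format is not specified in this document) assembles the latest non-personal sensor readings into a payload for the signage display:

```python
import json

# Hypothetical latest readings from Libelium and Wi-Fi packet sensors.
readings = {
    "temperature_c": 18.2,
    "weather": "cloudy",
    "solar_generation_kw": 3.4,
    "next_bus_min": 7,
}

def signage_payload(readings):
    """Assemble the aggregated, non-personal payload shown on the signage."""
    return json.dumps({"station": "Maya", **readings}, ensure_ascii=False)

print(signage_payload(readings))
```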
###### 3.2.3. Smart Shopping
3.2.3.1. Knowledge Capital – Smart Shopping system and recommendation analysis
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Smart Exhibition at the LAB
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
OSK
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Shopping
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M14
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M14
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
350
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
1400
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
This experiment will be held at the Lab. in Grand Front Osaka. Any visitors
can participate in this experiment as long as they accept the agreement on
personal data usage. We assume that the number of exhibitions in the Lab. is
around 20 and that the number of users participating simultaneously in this
experiment is 20 at most.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
1. We make an agreement on personal data usage with end users before their participation.
2. We provide users with a Beacon device during the experiment, which emits Beacon signals for detecting user location and staying duration. The system then recommends other exhibitions and controls the user environment according to user behaviours.
3. User feedback by questionnaire after the participation.
This experiment will be held at the LAB in Grand Front Osaka during November
2015. Its key feature is to recommend exhibitions and to control the user
environment through actuators (candidates include playing music, changing the
light colour, and changing the smell of the air) according to user behaviours,
so that the time users stay in the Lab. can be maximized.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
In this experiment, we ask people to walk around the Lab. and view the
exhibitions while carrying Beacon emitters. Our system analyses how long each
user spends at the different exhibitions, builds a behaviour model from all
users' behaviours, and then recommends other exhibitions at which a user is
likely to stay longer, based on estimates of how long the user stayed at
previous exhibitions. The idea of this experiment is similar to the
recommendation system used by amazon.com, but extended to users in the real
world. Moreover, the system also adapts the users' environment using aroma
diffusers according to their attributes (gender, age, and other profile
information), so that users may stay longer in a personalized environment.
</td> </tr> </table>
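The document does not detail the recommendation algorithm, but the following minimal sketch illustrates the general idea under stated assumptions: dwell times per exhibition are derived from Beacon sightings, and unseen exhibitions are recommended based on the dwell times of visitors who share the target's favourite booth. All data, booth names and function names are hypothetical.

```python
from collections import defaultdict

# Hypothetical dwell times (seconds) per visitor per exhibition, as would be
# derived from Beacon sightings (first seen / last seen at each booth).
dwell = {
    "visitor-1": {"robotics": 300, "biotech": 60},
    "visitor-2": {"robotics": 280, "art": 240},
    "visitor-3": {"biotech": 90, "art": 200},
}

def recommend(target, dwell, top_n=1):
    """Recommend unseen exhibitions, ranked by the dwell times of visitors
    who dwelt long at the same exhibition as the target's favourite."""
    seen = dwell[target]
    liked = max(seen, key=seen.get)                  # target's favourite booth
    scores = defaultdict(list)
    for visitor, booths in dwell.items():
        if visitor != target and liked in booths:    # a similar visitor
            for booth, secs in booths.items():
                if booth not in seen:
                    scores[booth].append(secs)
    ranked = sorted(scores, key=lambda b: -sum(scores[b]) / len(scores[b]))
    return ranked[:top_n]

print(recommend("visitor-1", dwell))  # ['art']
```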
3.2.3.2. Santander – Connected Shops
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Connected Shop in Santander (SmartSantander)
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
SAN
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Shopping
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M07
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M34
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
At the moment, it is planned to have at least five external experimenters
using the generated data throughout the project lifespan.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
There is no maximum number of experimenters that may use the available data.
The number of citizens involved in the experiment is about one hundred per day
during the data-gathering period.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
Experimenters who want to make use of the available positioning and
environmental data will need to request specific permission from UC and the
Santander Municipality. Additionally, the gathered data will be made available
through the FESTIVAL EaaS; data access will therefore follow the agreed access
policies.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
Experimenters will access positioning data to perform and test their own
positioning and customer behaviour algorithms.
Citizens will not interact directly with the experiment, but the anonymized
data sent automatically from their smartphones will be collected to get the
different measurements.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
The experiment aims at providing a trusted source of SNR data from the two
major radio technologies used in smartphones, tablets and laptops: Bluetooth
and WiFi. The data will be sent along with metadata regarding the area where
the devices are installed, the environmental parameters, and the exact
location of the devices in the market. These data will give experimenters the
possibility of testing their own algorithms in order to verify and improve
their behaviour.
Furthermore, several algorithms run on the gathered data will provide useful
parameters such as the number of users in the market area and their locations.
These data will be made available to the shop owners to better understand
citizen behaviour.
</td> </tr> </table>
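As an illustration of what an anonymized observation could look like, here is a minimal sketch; the field names, the salting scheme, and the footfall estimate are all illustrative assumptions, not the experiment's actual data model:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SnrObservation:
    """One anonymized Wi-Fi/Bluetooth sighting plus deployment metadata."""
    device_hash: str    # salted hash of the MAC address, never the MAC itself
    technology: str     # "wifi" or "bluetooth"
    snr_db: float
    sensor_id: str      # which fixed sensor in the market saw the frame
    temperature_c: float

SALT = b"rotated-daily"   # hypothetical salt, rotated to limit tracking

def anonymize(mac: str) -> str:
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

obs = [
    SnrObservation(anonymize("aa:bb:cc:00:11:22"), "wifi", 17.5, "east-door", 19.0),
    SnrObservation(anonymize("aa:bb:cc:00:11:22"), "wifi", 12.0, "west-door", 19.0),
    SnrObservation(anonymize("dd:ee:ff:33:44:55"), "bluetooth", 9.5, "east-door", 19.0),
]

# A rough footfall estimate: distinct device hashes seen in the window.
print(len({o.device_hash for o in obs}))   # 2
```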
3.2.3.3. Santander - Advertised premium discounts
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Advertised premium discounts in Santander (SmartSantander)
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
SAN
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Shopping
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M07
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M34
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
At least one hundred citizens will be end users and receive or use discounts
during the project lifespan. Additionally, ten shop owners will access the
platform to provide specific discount data.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
There is no limit on the number of end users.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
There will be no restriction on taking part in the experimentation and
accessing the specific discounts based on location and other parameters. The
only requirements are downloading the application from the iOS or Android
marketplace to access the offers, and activating GPS, Wi-Fi and/or Bluetooth
on the smartphone.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
Citizens will use the smartphone application to get different parameters from
the shopping area, such as temperature, humidity, position, etc. Citizens will
get different discounts based on their location as well as on other
parameters. These discounts will be generated by the shop owners in order to
engage citizens.
Additionally, citizens will be able to provide feedback on the offers
generated as well as on the different parameters in the shop (temperature,
humidity, etc.).
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
This experiment aims at bringing shops and customers closer together by
providing tools for them to communicate with each other. These tools are
characterized as follows:
* Customers will receive premium offers depending on several parameters, including their location.
* Customers will be able to access environmental data in the shops, including parameters such as temperature or humidity.
* Customers will have access to communication tools to send feedback about their experience in the shop (e.g. the temperature is low) and about the offers received.
</td> </tr> </table>
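As a rough illustration of the location-based matching such an application could perform, the following sketch checks a citizen's reported position against circular geofences around shops; all coordinates, names and radii are made up, and the real application logic is not described in this document:

```python
import math

# Hypothetical offers created by shop owners: (shop, lat, lon, radius_m, text).
offers = [
    ("Mercado del Este", 43.4614, -3.8041, 100, "10% off fresh fish"),
    ("Old town bakery", 43.4620, -3.8100, 50, "2-for-1 pastries"),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Small-distance approximation: metres between two lat/lon points."""
    dlat = (lat2 - lat1) * 111_320
    dlon = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def offers_near(lat, lon):
    """Offers whose geofence contains the citizen's reported position."""
    return [text for shop, olat, olon, r, text in offers
            if distance_m(lat, lon, olat, olon) <= r]

print(offers_near(43.4615, -3.8042))   # ['10% off fresh fish']
```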
###### 3.2.4. Multi-domain
3.2.4.1. JOSE/JOSE (Japan-wide Orchestrated Smart/Sensor Environment)
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Constructing and providing IoT testbed on JOSE as IaaS testbed
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
ACUTUS
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Federation Experiment
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M7
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
No end users are involved in the experiment itself.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
No end users are involved in the experiment itself.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
This experiment is not open to end users, but it is open to other
experimenters and to the federation experiments in the FESTIVAL project.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
Experimenters in this experiment will interact with the end users of each
individual experiment.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
N/A
</td> </tr> </table>
3.2.4.2. Engineering FIWARE-Lab
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
FIWARE GE experiment over a federated domain
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
ENG
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Federation Experiment
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M14
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
The number of users depends on the specific experiments that will use the
FIWARE-Lab.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
The number of users depends on the specific experiments that will use the
FIWARE-Lab.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
In the first phase of the project, access to the FIWARE-Lab resources will be
restricted to a set of users.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
The users will be able to use the FIWARE-Lab resources (e.g. VMs for GEs)
through both the OpenStack interface and the FESTIVAL experiment portal.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
The FIWARE testbed will support different types of experiments, providing
experimenters with IT resources (e.g. virtual machines or virtual networks) on
which to instantiate FIWARE Generic Enablers (GEs).
</td> </tr> </table>
The GEs can be used directly in the experiments, offering their different
functionalities (e.g. data processing, networking, security, etc.).
3.2.4.3. IoT based experiment over a federated domain
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
IoT-based experiment over a federated domain
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
UC
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Federation Experiment
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M34
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
At the moment, it is planned that at least five external experimenters will be
able to access the federated IoT-based experiment.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
For the time being, no maximum number of users is set. However, this will
depend on the number of free resources available.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
The final authorisation for accessing the resources will depend on the parties
responsible for the involved testbeds. However, the experimentation will be
performed on top of the FESTIVAL EaaS; data access will therefore follow the
agreed access policies.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
Experimenters will interact with the EaaS platform to reserve available
resources and to link them automatically.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
The idea of this experiment is to make it possible to automatically create
links between available resources within FESTIVAL. The experimenter will be
able to reserve virtual machines and receive the data from sensors in these
machines. This gives experimenters an easy way to access the sensor data in
the reserved virtual machines, enabling fast deployment of applications and
experiments.
The main benefit for experimenters will be the possibility of testing their
own applications and experiments with real-time data without having a physical
infrastructure available, as it will be provided by the FESTIVAL federation
EaaS.
</td> </tr> </table>
3.2.4.4. Messaging/Storage/Visualization platform federation example
<table>
<tr>
<th>
**Experiment Name:**
</th>
<th>
Messaging/Storage/Visualization platform federation use case
</th> </tr>
<tr>
<td>
**Responsible Partner:**
</td>
<td>
KSU
</td> </tr>
<tr>
<td>
**Topic:**
</td>
<td>
Smart Building
</td> </tr>
<tr>
<td>
**Start date:**
</td>
<td>
M12
</td> </tr>
<tr>
<td>
**End date:**
</td>
<td>
M36
</td> </tr>
<tr>
<td>
**Min number of end users:**
</td>
<td>
No end user is directly involved in the experiment. In the current plan, end
users are involved via the experiment of Smart Station at Maya.
</td> </tr>
<tr>
<td>
**Max number of end users:**
</td>
<td>
No end user is directly involved in the experiment. In the current plan, end
users are involved via the experiment of Smart Station at Maya.
</td> </tr>
<tr>
<td>
**Openness of the experiment/Selection of end users:**
</td> </tr>
<tr>
<td>
This experiment is not open to end users, but it is open to other
experimenters and to the federation experiments in the FESTIVAL project.
</td> </tr>
<tr>
<td>
**Nature of interactions with end users:**
</td> </tr>
<tr>
<td>
Experimenters on this federation experiment will interact with end users of
each experiment.
</td> </tr>
<tr>
<td>
**Description of the Experiment for end users:**
</td> </tr>
<tr>
<td>
N/A
</td> </tr> </table>
##### 3.3. FESTIVAL potential Ethical issues
The project consortium is committed to conducting responsible research and
innovation and, as such as realized at proposal stage, an ethic self-
assessment of potential risks and ethical impacts.
Two main points can be concerned in the FESTIVAL project:
The involvements of end users in the experiments run on the test-beds. The
potential collection and handling of personal data.
In addition, the project focused on the Internet of Things requires us to look
into the current privacy and ethical concerns identified on this technology
domain, and to liaise with the existing work carried out in the ecosystem.
###### 3.3.1. Involvements of end users in the experiments run on the test-
beds
The involvement of end users in research experiments requires specific
methodologies to ensure the safety of the participants (and of the experiment
as a whole) and the correct understanding and acceptance of the participants
to the experiment.
Regarding the safety of the experiments, the risk can be considered very
limited or non-existent in the case of the FESTIVAL experiments, as the
planned experiments provide additional services, mostly informative in nature,
rather than disrupting existing processes or handling devices that could harm
humans. Additionally, only the most mature experiments, already tested in the
lab, will be deployed in populated areas.
Ensuring that participants understand experiments that use new technologies
unfamiliar to the general public, and that propose applications in domains and
scopes that could prove disruptive, is however a significant challenge. The
project addresses this challenge through Task 4.1, and this deliverable
documents these activities. The objectives set by the project regarding this
challenge are:
* Engaging with end users only in an informed way: making sure they are aware of the presence of experiments and that relevant documentation, in an understandable format (plain language, avoiding technical jargon), is available.
* Gathering end user consent as a prerequisite for interaction and any data collection.
* Providing a complaint procedure with a neutral third party.
* Ensuring that end users are free to refuse the experiment at any moment, including after it has started, without any prejudice or disadvantage.
###### 3.3.2. The potential collection and handling of personal data
Although not a key part of the project, it is possible that some experiments,
for specific reasons, may need to collect data that is directly personal, or
that could become personal through secondary use (profiling), even if no such
secondary use is planned within the project. This is true of most ICT-related
projects that involve end users in experiments in one way or another; it is
not a specific focus of FESTIVAL, but it should nevertheless be taken into
account.
The data collected will be treated as confidential, and security processes and
techniques will be applied to ensure their confidentiality. Overall, the
following general principles will apply to any data collection by the project
experiments:
* Transparency of usage of the data: the user – the data subject in European Union (EU) parlance – shall give explicit consent to the usage of the data.
* Collected data shall be adequate, relevant and not excessive: data shall be collected on a “need to know” basis. This principle, also known as “Data Minimization”, also helps to set up the user contract, to fulfil the data storage regulations and to reinforce the “Trust” paradigm.
* The collector shall use data for an explicit purpose: data shall be collected for legitimate reasons and shall be deleted (or anonymized) as soon as the data is no longer relevant (a minimal sketch illustrating data minimization and purpose-bound retention is given after this list).
* The collector shall protect data at the communication level: the integrity of the information is important because modification of received information could have serious consequences for overall system availability. The user has accepted to disclose information to a specific system, not to all systems. The required level of protection depends on the data to be protected, weighing the cost of the protection against the consequences of data disclosure to unauthorized systems.
* The collector shall protect collected data in data storage: the user has accepted to disclose information to a specific system, not to all systems. It may also be mandatory to obtain infrastructure certification. The required level of protection depends on the data to be protected, weighing the cost of the protection against the consequences of data disclosure to unauthorized systems. For example, user financial information can be used to perform automatic billing; such data shall be carefully protected. Security keys on the device side and the server side are very exposed and shall be properly protected against hardware attacks.
* The collector shall allow the user to access / remove personal data: personal data may be considered the property of the user. The user shall be able to verify the correctness of the data and, if necessary, ask for corrections. Dynamic personal data – for instance home electricity consumption – shall also be available to the user for consultation. For static user identity, this principle is simply the application of the current European regulations on access to user profiles.
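As announced in the list above, here is a minimal sketch of data minimization and purpose-bound retention: identifiers are pseudonymized with a salted hash, only the fields the service needs are kept, and records are purged after a hypothetical retention period. This is an illustrative sketch, not the project's actual implementation; all names, field choices and the retention period are assumptions.

```python
import hashlib
import time

RETENTION_S = 30 * 24 * 3600    # hypothetical 30-day retention period

def pseudonymize(user_id: str, salt: bytes) -> str:
    """Store only a salted hash; the raw identifier never hits the database."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

records = []   # (pseudonym, collected_at, minimal_payload)

def collect(user_id, payload, salt=b"project-secret"):
    # Data minimization: keep only the fields the service actually needs.
    minimal = {k: payload[k] for k in ("room", "temperature") if k in payload}
    records.append((pseudonymize(user_id, salt), time.time(), minimal))

def purge(now=None):
    """Delete records past their purpose-bound retention period."""
    now = now or time.time()
    records[:] = [r for r in records if now - r[1] < RETENTION_S]

collect("alice@example.org", {"room": "lab-2", "temperature": 21.0, "name": "Alice"})
purge()
print(records)   # a pseudonymous record without the superfluous 'name' field
```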
###### 3.3.3. Internet of Things Ethical and Privacy concerns
We have in previous work [1] studied in depth the potential ethical and
privacy implications of the Internet of Things. This existing knowledge is
taken into account in the set-up of the project experiments and the project
will be continuously involved in the IoT ecosystem activities regarding ethics
and privacy protection.
The following rapidly presents the main identified concerns related to ethics
and privacy in the IoT domain:
3.3.3.1. IoT Potential Ethical Implications
As presented in detail in the Ethics Factsheet summarizing the findings of the
ethics subgroup of the IoT Expert Group of DG Connect [2], the main identified
issues regarding ethics in IoT are:
* **The risk of social divides** : although many societal benefits are envisioned for IoT, their deployment and spreading may not be uniform across the population, creating a risk of an increased digital divide (between those who can afford and use the new applications and services and those who cannot). This risk is reinforced and may even be amplified in a “knowledge divide”, between those who know and understand the technologies behind an IoT world and those who don’t and who are therefore unable both to take full profit of it and to avoid potential dangers.
* The key issue of **trust and reliance on IoT**, which is mostly linked, but clearly not limited, to the respect of privacy and data security. The massive deployment of IoT-enabled technologies and services will pose the question of their reliability and of how, when, and why the user can, or has to, rely on these new services in a trustful relationship. This need for a trustful relationship, and the associated risk, are even stronger in the case of “smart”, context-aware applications that advise the end user on decisions. This pleads for openness and for reputation/ranking systems as strong enablers of this trust.
* The risk of a **blurring of context** in the society perception of what is private and public, what is virtual and what is real. This evolution of society values and perception is not necessarily an issue in itself, but it has to be understood, monitored and reflected upon to make sure that it doesn’t result in additional issues or increase existing risks (such as the risk of social divides, especially between different age groups).
* The **non-neutrality of IoT metaphors and vocabulary**. Many terms and metaphors (such as the “smart” things) used to describe IoT technologies, products and services assume that IoT will ease people's lives; they convey this meaning and raise expectations. This non-neutrality and the associated expectations are important to understand, not only for the stakeholders defining the IoT but also for the targeted market.
* The necessity of **a social contract between objects and people**. This necessity arises from societies' ever stronger reliance on the technologies envisioned in the IoT vision. As IoT objects become more and more autonomous, connected and involved in our lives, this may result in a loss of control for users (as objects take decisions for them) and in a blurring of responsibilities among stakeholders (who, in the end, is really responsible for the decision?). This pleads for a strong reflection on how IoT objects should behave and interact with people and with each other, a need that is further reinforced in the case of context awareness by the ability of objects to create profiles of users and stakeholders based on the data gathered.
* The **issue of informed consent and the obfuscation of functionalities**, which here again overlaps with the privacy and data protection issue (without being limited to it). The actual understanding of what is happening in IoT scenarios, which is necessary for a truly “informed” consent by the user, is complicated by the strong tendency of IoT deployments to be nearly invisible, as communicating objects are miniaturized, hidden, and their true features obfuscated. This pleads for the ability to make IoT deployments visible for inspection, education and explanation purposes.
3.3.3.2. IoT potential implications on privacy, data protection and security
Based on the findings of the privacy and security subgroup of the IoT Expert
Group of DG Connect [2], and their analysis in the BUTLER project [1], the
main identified privacy and data protection issues in IoT are:
* **Continuity and availability of services:** As the deployment of IoT spreads and more and more systems and persons rely on these new products, applications and services, the issue of continuity and availability of the services rises. The strong integration of IoT devices in our day to day lives, and especially in critical services (such as health, security, and energy) increase the impact of a potential loss of service.
* **Sensitivity of user data and contextualization of risks:** As smart services gather more and more information on the user (willingly or even without notice), the question of the sensitivity of these data arises. The Internet of Things complicates this issue, as it gathers more and more information that, despite a harmless appearance, can turn out to be sensitive when analysed on a large scale. For example, the collection of household power consumption may seem to raise no important privacy issues; however, when statistically analysed, these data can reveal much about the contents of the user's home and his/her day-to-day habits. The actual sensitivity of the gathered information is therefore not always known at the time when data gathering is decided and/or accepted by the user. In an IoT world, the risks related to privacy and data security are dependent on the context and purpose in which data is gathered and used, and this context can evolve, which supports the need for context-aware management of security and data protection.
* **Security of user data**: User data must be protected against unauthorized access, and this security should be ensured at each level of communication. The multiplication of communicating devices characteristic of the Internet of Things increases the difficulty of this protection, as the number of links to be protected increases. The potential impact of security breaches is also on the rise, as the stored data have more and more applications, and thus reveal more and more information about the user and give access to more and more critical parts of our lives, increasing the risks linked to identity theft and electronic identification.
* **Management of data:** Even when the security of the user data can be guaranteed against unauthorized access, the question of the actual management and storage of the information by the service provider remains. Questions such as: “How much data is collected to provide the service?”, “Is this strictly necessary?”, “Who is responsible to handle these data?”, “Who has access, how and when to the data?”; can be expected from the user.
* **Ownership, repurposing and communication of data:** The question of the ownership of the collected data is also central to the IoT ethics issue: obtaining ownership of, or access to, user data and reselling these data can be a significant source of revenue. The monetization of user data raises several questions: how is the additional revenue shared between the service provider and the user? How aware is the user of this use of his/her data? How much control does he/she have over it? Which third parties get access to the data, and for what purpose?
* **Captivity of data:** Even as a service becomes more and more used and accepted by the user, the ethics question remains: what happens to the user data if the user leaves the service? And how feasible is it for a user or consumer to change service provider once he or she has been engaged with one for a significant time? These questions are important to avoid consumer captivity through data, which would result in an unfair advantage, destroying competition with all the eventual consequences (suppression of consumer choice, degradation of user service and reduction of innovation).
* **Applicable legislation and enforcement:** Given the global nature of IoT and the number of stakeholders necessarily involved in an IoT deployment, the question of responsibility and applicable legislation arises. This is reinforced by the fact that, in a truly “Internet” of Things vision, the different actors will be spread across different countries and regions, increasing the number of potentially applicable legislations. This issue impacts not only the users, who may be confused about which legislation the service they are using follows, but also the policy makers and the whole IoT value chain, as developing IoT applications and deployments without a clearly identified chain of responsibilities and applicable law represents a strong business risk.
* **Availability of information:** Finally, in a world where technical and legal complexity increases, the quality of the information available to the user is key to the management of the ethical issues: the service provider must ensure not only that the information is available, but that it is presented in a way that ensures it is correctly understood by the user.
#### 4\. Strategy for responsible user involvement
_In this section we present the project's overall strategy for responsible
user involvement, a summary of the actions performed in the first year, and a
plan for future activities._
##### 4.1. General strategy for responsible user involvement
As presented above in section 3, the involvement of end users in the
experiments created over the FESTIVAL federation of platforms and testbeds
responds to several objectives and requires specific attention to ensure a
responsible research and innovation practice. It is based on this analysis
that the project has defined a strategy for preparing and supporting the user
involvement.
###### 4.1.1. Main challenges identified
The strategy proposed has to face several operational challenges:
* First, the nature of the project means that the experiments that will be set up on the federation are research experiments that will take place in **an evolving framework**. The scientific knowledge of the field, the technical set-up, and the legal and societal framework in which the experiments will be conducted are all progressing in parallel, and these evolutions create a requirement for adaptability and evolution capabilities in the experiments themselves. This implies a need for openness to potential evolutions in the way the user involvement is carried out.
* A direct requirement of user involvement in research experiments is, of course, to ensure that the experiment and the interactions with the participants are conducted within a valid legal framework that protects both the participants and the project. An additional point of attention here is that the FESTIVAL project involves partners and experiments in various countries in Europe (with different legislations) and in Japan.
* The nature of the experiments, which look into new ways to collect, use and/or store data, can also imply challenges to personal data protection (as presented in section 3.3). As aforementioned, although personal data collection and processing is not a goal of the project, a clear assessment of the risks and the set-up of safeguards is clearly necessary to minimise hazards.
* Finally, we consider that a foremost challenge of user involvement in experiments lies in raising the awareness and knowledge of both participants and experimenters of the importance of the ethical issues at stake, of responsible involvement methodologies, and of the potential societal impacts of novel ICT innovation. Significant effort therefore has to be dedicated to training and to producing educational material for both participants and experimenters.
###### 4.1.2. Project strategy and planned activities
The project strategy and planned activities derive from the challenges
presented above.
The strategy is articulated around two phases:
* A first phase (M1 – M12) focuses on the preparation of the user involvement before the start of any project experiments. In this first phase, all the preparatory work and documents must be made ready for the beginning of the experiments.
* A second phase (M13 – M36), in which the activity focuses more on direct support to the experimenters in the set-up of their experiments and in the actual involvement of participants. In this second phase, however, the project will have to continue to produce additional training material and to ensure that the material created in the first phase stays up to date and adapted to the experiments.
The strategy will be implemented through a set of complementary activities:
* **A technological and legal watch** activity to ensure that the project stays up to date with the state of the art in research experiment setup, involvement of external participants, protection of data and privacy and the accompanying legal framework.
* **Raising Awareness:** The creation of training materials for both experimenters and participants, presenting and explaining in a rapid and accessible way some of the important challenges that can be raised by participation in the project experiments.
* **Informed Consent Process:** The creation of the processes and documents for ensuring the legal participation of end users in the experiments: processes for the collection of informed consent, the set-up of a complaint procedure through a neutral third party, and technical mechanisms ensuring the protection of experiment data (in joint work with WP2 and WP3).
* **Assessment of Personal Data Management:** The set-up of a basis for a “privacy impact assessment” process that can be used by the project experiments (and external experimenters) to assess the potential risks linked with their technology deployments, in order to identify those that would require specific measures and oversight.
* **The organization of training sessions** and support for the experimenters to accompany the creation of the experiments and the actual involvement of the experimenters.
##### 4.2. First year activity report
###### 4.2.1. Overview of the task activities
Based on this strategy, we set up the following activity schedule for the
project's first year:
**Figure 1 - First Year Activity Schedule**
Within the first year, this schedule decomposes into four overlapping periods
of activity:
* During Months 1 to 4, the task conducted a state-of-the-art analysis through a literature review and a confrontation of the various bodies of existing knowledge within the consortium. It defined the project strategy and activity schedule.
* During Months 3 to 6, the task concentrated on the creation of initial drafts of the planned outputs of the task: the informed consent process, first examples of factsheets, and an initial draft of the Privacy Impact Assessment process. The objective was to have these initial drafts ready for discussion and initial validation at the consortium level for the second project plenary meeting (April 25th in Osaka).
* During Months 5 to 12, the initial draft processes and documents were refined based on the comments received and the specific needs of the envisioned experiments.
* During Months 10 to 12, the task focused on documenting the activities in this deliverable.
To coordinate and conduct these activities, the project consortium organised
specific meetings and participated in the project meetings. The following
table presents the main meetings in which the task activities were discussed.
The organisation of task-specific meetings was especially necessary in the
first phases of the activity, as the task had to work in a closed group on
setting up the processes. Participation in the project plenary meetings made
it possible to communicate the task's proposals and results to the whole
consortium and enabled more complete discussions.
<table>
<tr>
<th>
**Meeting Date**
</th>
<th>
**Type of Meeting**
</th> </tr>
<tr>
<td>
**November 27th 2014**
</td>
<td>
Plenary Meeting (Santander)
</td> </tr>
<tr>
<td>
**January 27th 2015**
</td>
<td>
Plenary conference call
</td> </tr>
<tr>
<td>
**February 13th 2015**
</td>
<td>
Task Specific conference call
</td> </tr>
<tr>
<td>
**February 24th 2015**
</td>
<td>
Plenary conference call
</td> </tr>
<tr>
<td>
**March 24th 2015**
</td>
<td>
Task Specific conference call
</td> </tr>
<tr>
<td>
**March 31st 2015**
</td>
<td>
Plenary conference call
</td> </tr>
<tr>
<td>
**April 25th 2015**
</td>
<td>
Plenary Meeting (Osaka)
</td> </tr>
<tr>
<td>
**May 19th 2015**
</td>
<td>
Task Specific conference call
</td> </tr>
<tr>
<td>
**June 6th 2015**
</td>
<td>
Plenary conference call
</td> </tr>
<tr>
<td>
**July 24th 2015**
</td>
<td>
Plenary conference call
</td> </tr>
<tr>
<td>
**September 18th 2015**
</td>
<td>
Plenary Meeting (Grenoble)
</td> </tr> </table>
###### 4.2.2. Specific local activities
In addition to these high-level coordinated activities, the set-up of some of
the project experiments already required specific work toward user
involvement. These specific efforts are documented here:
4.2.2.1. Santander
In order to evaluate the potential impact of Festival initiative, the first
step was to arrange a meeting with the Responsible of the Municipal office of
Market Support to present the current situation of Smart Shopping in
Santander, and how it could be improved thanks to FESTIVAL project. He liked
the idea of using new technologies to promote shopping activity within the
city and he proposed several places where to install new devices, including
indoor and outdoor scenarios. One of the indoor scenarios is a Municipal
Market (Mercado del Este) whereas the proposed outdoor scenario consists of a
couple streets full of shops located at the old town (Cádiz St., Lealtad St,).
After this first meeting, we decided to arrange a second one to show him real
devices, analyse proposed locations and start contacting with other
stakeholders.
During the second meeting, the head of the Municipal Office of Market Support
proposed other streets of the city centre where FESTIVAL devices could be
installed, taking into account not only the number of shops but also the
degree of involvement of the shopkeepers: Arrabal St. and Medio St., both
located in the old town, include the shops with the most participative
shopkeepers in the city, who have organized different initiatives, in
collaboration with bars and restaurants, to foster shopping activity.
Regarding the indoor scenario, and in order to get feedback and impressions
from the shop owners of the Mercado del Este, we held a meeting with the
manager of the owners' association of the Mercado del Este, together with the
Municipal Office of Market Support. Although he liked the idea, he raised a
potential problem: shop owners could request exclusivity of the offers sent if
FESTIVAL devices were installed on the Mercado del Este premises.
Additionally, it was necessary to analyse the technical and economic viability
of the installation at the Mercado del Este: there is no assigned budget for
equipment, so it is essential to minimise installation costs. Therefore, a
meeting with the municipal head of the computing department, in charge of new
installations, was also arranged to find the most suitable locations for the
FESTIVAL devices, taking into account the available Internet access points,
the available power supply, and the best positions for people counting and
localization. Finally, these devices may be installed in the central corridor
of the market, the busiest part of the building.
During the last meeting with the manager of the owners' association of the
Mercado del Este and the Municipal Office of Market Support, the former
informed us that there was no exclusivity requirement from the shop owners; it
was therefore possible to install the new devices there.
At this point, the idea is to install FESTIVAL devices at the Mercado del
Este, which will serve to develop and test in real scenarios the
people-counting and localization functionalities already being tested on the
UC premises.
Once these functionalities are validated at the Mercado del Este, the
offer-sending functionality will be developed. Regarding this new
functionality, several meetings will be arranged with the main actors of Smart
Shopping:
* Shop owners, informing them about this innovative initiative, which will allow them to generate and deliver offers and special discounts on their products for free. Getting their involvement is one of the main goals, because attractive offer generation is essential in order to get citizen involvement.
* Citizens, as final users, who will receive new offers and discounts on their mobile devices. We will try to reach as many citizens as possible, so we will use different communication channels, such as meetings with neighbourhood associations and providing information at the Smart City Demonstration Centre (Enclave Pronillo).
Offers will be generated by shop owners through the CreateAnOffer app, which
will provide statistics such as the number of shops, the number of offers
generated per shop, and the duration of each offer. This will be valuable
information for evaluating shop owner engagement.
Additionally, it will be useful to arrange follow-up meetings to analyse the
obtained results, get feedback from shop owners and citizens, and take it into
account to improve the process.
##### 4.3. Future plans
As presented in this report, this first year of the project has enabled the
creation of the baseline infrastructure for the responsible involvement of end
users in the project experiments. Over the following period, the focus of Task
4.1 will progressively shift to a supporting role, ensuring that the
infrastructure set up is used efficiently and kept up to date.
The task priorities are:
* Continuing the existing efforts in the set-up of the baseline infrastructure.
* Providing support to the project partners and external experimenters in the set-up and usage of the tools provided by the project.
* The extension and improvement of the existing infrastructure and processes to complete the existing offer.
Thus, the main activities envisioned to develop the responsible user
engagement framework of the project are:
* **The continuation of the creation of project factsheets** raising awareness on specific topics of user engagement in experiments. This includes finalising the translation work on all factsheets and creating new factsheets on new topics. Section 5.3 presents a list of the envisioned future factsheets.
* **Support to the project experiment description:** The current deliverable presents in section 3.2 a first description of the experiments envisioned by the project from a user engagement perspective, and in section 7.2 a first assessment of the security and privacy mechanisms set up by each experiment. This first analysis will evolve as the project experiments become better defined and start to be implemented and deployed over the federation. Task 4.1 will ensure that the current descriptions are kept up to date as the experiments evolve and that they reach a higher level of maturity and consistency. Future descriptions of the experiments and privacy impact assessments will be integrated in deliverables 3.2 (month 22) and 3.4 (month 34).
* **The Privacy Impact Assessment process may evolve:** as presented in section 7.1, the current process is an introduction to a full Privacy Impact Assessment procedure; depending on the direction taken by the project experiments and their potential collection and use of personal data, it may be necessary to go further in the definition of the Privacy Impact Assessment process.
* **An electronic version of the informed consent process** will be developed and integrated in the FESTIVAL federation portal, as presented in section 6.3.
* **The feedback collection process** will be defined in cooperation with tasks 4.2 and 4.3. The deployment and further development of existing tools (such as the BUTLER User Feedback Tool [3]) will be considered. The integration of such a tool into the federation portal would be a strong asset to improve the experience of the external experimenters: the portal would enable them to get in contact with experiment participants, not only to gather their informed consent but also to gather feedback on the experiment.
* Finally, the project will continue to watch the evolution of the state of the art and to interact with other research projects involving end users in experiments, in order to gather (and reproduce) good practices and, if necessary, adapt the project's strategy toward responsible end user engagement.
#### 5\. Raising Awareness: Factsheets
_In this section we present the project activity toward raising end users' and
experimenters' awareness of ethical issues and responsible research and
innovation. We present the factsheet concept proposed by the project, as well
as the current and foreseen factsheets._
##### 5.1. Rationale and concept for factsheets
As presented above in section 4, raising the awareness of both experiment
participants and experimenters regarding the process of responsible research
and the potential ethical impacts of the experiments is a significant
challenge for the project. The challenge is complex both because of the issues
it addresses (the protection of privacy and the potential ethical impacts of
the IoT being research topics in themselves) and because we address audiences
(experimenters and participants) that are usually not aware of the potential
issues. This is especially the case when the trade-off is between immediate
rewards (conducting the experiment / discovering new technologies) and
potential long-term risks (the long-term societal impacts of applications that
would be based on the project work).
The challenge is of high importance for the project as an increased awareness
and understanding of the potential issues is important both for experimenters
(to ensure they comply with the project processes) and for participants (to
ensure a real “informed” consent to the experiments).
The state of the art on the ethical implications of IoT, on the protection of
privacy and on data security is already substantial and does not represent a
core research domain for the FESTIVAL project. We therefore decided to focus
our work on creating training material that increases the awareness of
experimenters and participants of the potential issues and links them to
existing solutions. With this in mind we came up with the **Factsheet
concept.**
A Factsheet is a visual, single-page document that can be widely distributed
and that focuses on a single issue or on a specific process set up by the
project. Factsheets provide a general overview of the topic they address, not
entering into details but rather serving as an invitation to consider a
specific point. They usually target the experimenters, but they can also be
very useful for experiment participants, whom they inform of the project's
practices.
The Factsheets serve several complementary objectives:
* They can raise awareness of a specific potential problem, prompting experimenters and participants to ask themselves the right questions.
* They provide a rapidly accessible general overview of complex issues, without definitive answers, as invitations to look into a subject or seek additional advice.
* They can also provide directly usable high-level guidelines on the processes and activities of the project.
* They are also useful in disseminating the project vision and/or best practices to the community, with the double objective of promoting the work of the project and participating in the community discussions on these important issues.
Concretely, the factsheets will be part of the documents disseminated by the
project in coordination with Work Package 5. They will be published on the
project website in a dedicated section (Experimentation Documents:
_http://www.festival-project.eu/en/?page_id=424_ ). They will also be
distributed at the events in which the project participates and in relevant
communities (such as the RRI-ICT Forum: _http://www.rri-ict-forum.eu/_ ).
Finally, and most importantly, the factsheets will be made available,
translated into the local languages, at the project experimentation locations
and living labs, so that they are directly available to local experimenters
and experiment participants.
##### 5.2. Year 1 factsheets
###### 5.2.1. Overall vision
The first-year factsheets aim to enable experimenters to grasp the primary
issues of responsible research and the ethical impacts of the experiments.
Once the content has been agreed on (in an English version), the factsheets
are translated into the languages of the various experiment platforms and
living labs of the project, to make them easily accessible for experimenters
and participants.
The following table sums up the factsheets created over the first year, and
the languages in which they are available.
<table>
<tr>
<th>
**#**
</th>
<th>
**Topic**
</th>
<th>
**Main Partner**
</th>
<th>
**Date**
</th>
<th>
**En**
</th>
<th>
**Jp**
</th>
<th>
**Fr**
</th>
<th>
**Sp**
</th> </tr>
<tr>
<td>
#1
</td>
<td>
Personal Data Protection
</td>
<td>
inno
</td>
<td>
March 2015
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
#2
</td>
<td>
Informed Consent Process
</td>
<td>
inno
</td>
<td>
April 2015
</td>
<td>
X
</td>
<td>
X
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
#3
</td>
<td>
Camera use in trial
</td>
<td>
CEA
</td>
<td>
August 2015
</td>
<td>
X
</td>
<td>
</td>
<td>
X
</td>
<td>
X
</td> </tr>
<tr>
<td>
#4
</td>
<td>
Usage of Open Data
</td>
<td>
Santander & KSU
</td>
<td>
August 2015
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
X
</td> </tr>
<tr>
<td>
#5
</td>
<td>
Privacy Impact Assessment
</td>
<td>
inno
</td>
<td>
August 2015
</td>
<td>
X
</td>
<td>
</td>
<td>
</td>
<td>
X
</td> </tr> </table>
###### 5.2.2. Factsheet #1 Personal Data Protection
The first factsheet created is a general information factsheet on personal
data protection.
The objective of this factsheet is to inform the audience about what personal
data are, and to state the general principles regarding personal data
protection followed by the FESTIVAL project.
Although the collection of personal data is not a focus of the project
experiments, the broad definition of personal data makes it possible that some
of the project experiments will indeed deal with personal data at some point.
Therefore, this factsheet was created to inform experimenters of the specific
care that should be taken in these cases.
The targeted audience for this factsheet is therefore both the participants in
the experiments (so that they understand the policy of the project regarding
personal data) and the experimenters (so that they take specific care if they
ever have to deal with personal data in their experiments). In addition, we
think this factsheet can also be useful to raise the general public's
awareness of what personal data is, why they should care, and how responsible
ICT experiments and applications should handle such data.
The factsheet was presented at the RRI-ICT event 2015 (Brussels, July 8–9) and
received positive feedback.
###### 5.2.3. Factsheet #2 Informed Consent
The second factsheet deals with the process of obtaining the informed consent
of participants in an experiment.
The objective of this factsheet is to provide all the information needed to
gather the informed consent of an experiment participant, in line with the
ethical requirements of research.
As the FESTIVAL project aims to carry out several experiments and to involve a
number of external participants, it was essential to prepare a step-by-step
guideline on how to ethically involve end users in the experimentations. The
audience targeted by the factsheet is thus the experimenters who wish to
conduct experiments in line with the ethics requirements of research, but also
the participants who wish to be informed of the benefits and risks of the
experiment and of the ways to complain about it or withdraw from it.
The factsheet itself presents the steps for obtaining informed consent and the
documents necessary to do so; in particular, it provides links to the informed
consent templates.
###### 5.2.4. Factsheet #3 Camera use in Trials
This factsheet focuses on the use of video data in project experiments.
Dealing with video data entails particular precautionary requirements related
to privacy. It also implies specific agreements, either from an independent
local authority or from the end users (i.e., the experiment participants).
This factsheet aims at informing potential experimenters about the specific
requirements that apply when dealing with video data. More generally, the
target audience is any person involved in the project who is related to a
video experiment. It also provides particularly useful information for end
users and participants if they are involved in a video experiment.
As the FESTIVAL project aims to carry out several video experiments involving
external experimenters and external participants, it was essential to clarify
how the protagonists may deal with such data, which are of a specific nature.
It was necessary to identify the main measures to take with regard to the
ethics requirements and to comply with applicable local laws.
###### 5.2.5. Factsheet #4 Usage of Open Data
The fourth factsheet deals with the process of using Open Data for an
experiment.
The objective of this factsheet is to provide the information needed to open up
data, including useful recommendations and the steps to follow in this
process, without forgetting the promotion of new datasets and/or catalogs.
The factsheet first defines what Open Data is, and then provides a step-by-step
guideline to follow when dealing with this type of information.
The audience targeted by the factsheet includes the experimenters that will
generate new datasets and catalogs based on developed experiments.
The factsheet consists of three sections, as can be seen in the following
figure:
* definition of Open Data,
* how to Open up Data, including some recommendations and steps,
* how to promote new datasets and/or catalogs, including catalogue federation by a simple API.
###### 5.2.6. Factsheet #5 Privacy Impact Assessment
This factsheet focuses on the process set up in the project to evaluate the
risks associated with data protection in the project experiments.
The objective of this factsheet is to inform the audience about the process
followed by the project, which should be applied to every project experiment
conducted both by the consortium and by external experimenters.
The process set up in the project (presented here in section 7) aims to
rapidly evaluate the way data is collected, stored, used, shared and destroyed
by the experiment, in order to identify potential risks early on.
The target audience of this factsheet is mainly the experimenters, as it
summarizes the idea behind the Privacy Impact Assessment process of the
project. It can also be useful for experiment participants as an explanation of
how the experimenters had to address these important questions. Along with the
results of each experiment's Privacy Impact Assessment (which have to be
communicated to the participants), it can help build trust among the experiment
participants.
##### 5.3. Future factsheets
The project's factsheet approach to raising stakeholder awareness of
responsible end-user involvement topics and of the processes set up by the
project has so far received positive feedback. Although the factsheets are
relatively new and have been presented on relatively few occasions, they seem
to respond to a demand for rapid overviews of and introductions to important
topics. We will of course continue to monitor feedback on the existing
factsheets over the following months, to be able to fully judge the success of
our approach; this feedback will strongly influence the future factsheet
roadmap.
However, we can already present a provisional list of the topics that we
consider could make useful future factsheets, shown in what we consider could
be a chronological order:
* **Project thematic experiments and user involvement:** We consider the creation of four specific factsheets on the topics of the experiments of the project: Smart Energy, Smart Building, Smart Shopping and Federation Experiment:
  * **Smart Energy Experiments and User Involvement.**
  * **Smart Building Experiments and User Involvement.**
  * **Smart Shopping Experiments and User Involvement.**
  * **Federation Experiments and User Involvement.**
Each of these factsheets would present an overview of the project use cases
and planned experiments on the specific topic, present the challenges and
importance of the planned experiments, the planned end-user involvement and
its relevance to the topic, and the foreseen impacts. These factsheets would
help experiment participants understand the experiment they participate in
within a broader context, and help external experimenters identify the work
already carried out in the project that they can relate to. These factsheets
would, of course, have to be created in close cooperation with Work Package 3.
* **Responsible End User Involvement in Experiments:** This factsheet would present the general approach and strategy of the project toward responsible end-user involvement. It would present some of the content documented in the current deliverable: section three on motivation and context, and section four on the overall strategy. The factsheet would help external experimenters and experiment participants understand the approach of the project globally, as a complement to the other factsheets' focus on specific topics. The factsheet could also be useful as a dissemination tool, ensuring that the approach developed in the project can reach other research projects and be replicated.
* **Evaluating Experiments:** This factsheet would provide an introduction to the evaluation framework created by Tasks 4.2 and 4.3 and presented in deliverable 4.2. It would present the motivation for setting up an evaluation framework, the approach followed by the project, how it can be used, and the role of end users in the collection of feedback. This factsheet would help external experimenters and experiment participants understand one of the key motivations for end-user involvement (the collection of useful feedback) and thus put the responsible involvement of end users in experiments in context. This factsheet would be created in close cooperation with Tasks 4.2 and 4.3.
* **Using FESTIVAL in your Experiments:** This factsheet would provide an introduction on how external experimenters can use the FESTIVAL federation to create and conduct their experiments. It would present a rapid overview of the type of resources available and the process of using the federation portal. This factsheet would be useful for external experimenters as a first introduction to the project EaaS offer. The factsheet would be created in close cooperation with Task 3.4.
* **Collecting Feedback in Experiments:** This factsheet would present the project's process and tools for collecting end-user feedback in experiments. It would be useful for external experimenters as an introduction to the project's feedback collection tools. The factsheet would be created in close cooperation with Tasks 4.2 and 4.3.
* **Experimentation as a Service Model:** This factsheet would present the EaaS model set up by the FESTIVAL federation. It would present the principles of the business model and the setup envisioned beyond the project's end as a common exploitation opportunity. This factsheet would target external experimenters, providing them with information on the sustainability of the project approach and on the future business model. It would also be useful as a dissemination tool for other projects working on EaaS models. The factsheet would be created in close cooperation with Task 5.1.
* **FESTIVAL Socio-economic Impacts:** This factsheet could present the results of the socio-economic evaluation carried out by Task 4.3 and present, in a general way, the envisioned long-term socio-economic impact perspective of the experiments carried out on the FESTIVAL federation and of the federation itself. It would be useful for experiment participants and external experimenters as a vision of the context in which the experiments take place, and could also serve as a dissemination tool for the project. The factsheet would be created in close cooperation with Task 4.3.
Additionally, the opportunity of creating similar factsheets on topics less
related to this specific task (responsible end-user involvement) and more
general to the project has been discussed. If the factsheet model gathers good
feedback and is considered useful, it could be extended to dissemination
material on other subjects, especially technical subjects such as a general
introduction to the project architecture or a presentation of the APIs of the
federations.
As already mentioned, the subjects presented above are our current vision of
what could be useful as future factsheets. The list will evolve based on the
feedback we receive on the current factsheets and on the general development
of the project.
#### 6\. Informed Consent process
_In this section we present the process proposed for the project for the
informed consent procedure and the tools set up._
##### 6.1. General Procedure Principles
Informed consent is one of the key notions of personal data protection. Indeed,
several general principles must be taken into account when dealing with
personal data:
* The right to access and to rectify collected data.
* The protection of the rights of individuals.
* The control and protection of these data by an independent national authority.
* **_The informed consent of the concerned persons._ **
Informed consent is a term which originates in the medical research community
and describes the fact that a person has been fully informed about the
benefits and risks of a medical procedure and has agreed to that procedure
being performed on them. **Informed consent** is an ethical
requirement for most research and must be considered and implemented
throughout the research lifecycle, from planning to publication. Gaining
consent must include making provision for sharing data and must take into
account any immediate or future uses of the data.
The provisions of European law, national laws and the guidelines of many
professional research organizations recommend that the following principles be
followed to ensure that consent is informed:
* Consent must be freely given with sufficient detail to indicate what participating in the study will involve.
* There must be active communication between the parties about what is expected from participants and why their participation is required.
* Documentation outlining **consent has to differentiate between consent to participate and consent to allow data to be published and shared.**
* Consent cannot be inferred from a non-response to a communication such as a letter or invitation to participate.
The general procedure followed by the project was therefore to produce
experiment documentation that follows the four principles cited above. To this
end, several documents were produced.
##### 6.2. Informed Consent documents
###### 6.2.1. Information sheet
Before collecting the consent, the participants must understand the nature of
the research and the risks and benefits involved if they are to make an
informed decision about their participation. To this end, the document given
out to describe the study must be simple and understandable by any subject and
must contain the following elements:
* Purpose of the research.
* What is involved in participating.
* Benefits and risks.
* Terms for withdrawal:
* Participants have a right to withdraw at any time without prejudice and without providing a reason.
* Thought should be given to what will happen to existing, already provided, data in the event of withdrawal.
* Usage of the data:
* During research.
* Dissemination.
* Storage, archiving, sharing and re-use of data.
* Strategies for assuring ethical use of the data:
* Procedures for maintaining confidentiality.
* Anonymization data where necessary, especially in relation to data archiving.
* Details of the research:
* Funding source/ sponsoring institution/ name of project/ contact details for researchers/ how to file a complaint.
The FESTIVAL project has prepared an information sheet template (see Annex A)
that provides a generic project description in simple language understandable
by all, as well as a second section to be filled out with the elements
described above according to the specificities of each experiment.
###### 6.2.2. Letter of consent
Once the participants have been made aware of the nature of the study and the
risks and benefits involved, formal consent to participate in the experiment
is required. A specific form has been produced to ensure that the participant
has fully understood the specificities of the experiment and agrees to take
part in the study. This form also provides information on the withdrawal
procedure and the data collected, and encourages the participants to ask the
researcher further questions.
###### 6.2.3. Withdrawal Procedure
As detailed in the informed consent forms, the participant has the right to
decide at any time during the research that he/she no longer wishes to
participate in the study; he/she can notify the researchers involved and
withdraw immediately without giving any reason. A specific form is provided
for this purpose.
###### 6.2.4. Complaint Procedure Documents
In addition to the feedback collection that will take place during the
experiment, participants will be given the possibility to file a formal
complaint about their participation in the study. A complaint procedure
information document and a complaint form will be provided to the participants.
##### 6.3. Electronic consent form
The existing consent procedure focuses on the use of paper forms that
experiment participants have to fill in upon their arrival at an
experimentation facility. To make the procedure more convenient and enable
larger participation in experiments, the project will create an electronic
version of the forms on a webpage. The following diagram presents the
high-level functional requirements for the portal:
Integrated into the FESTIVAL federation portal, the web form for gathering
consent would be the main entry point for experiment participants into the
participant section of the portal.
An initial screen would enable the participant to select the experiment in
which he/she is involved (screen 1).
Based on the experiment selected and the experiment location, the mother
tongue of the experiment participant can be inferred. Of course, the
possibility to change language will still be offered to the participant in
case he/she prefers to access the forms in a language other than the local
language of the experiment.
A second screen (screen 2) will present the general project description, as
well as a commonly understandable description of the experiment and a brief
explanation of the informed consent process. This screen will also give access
to more information on the experiment, such as the results of the experiment's
Privacy Impact Assessment.
The participant will then be presented with the consent form (screen 3) using
the same language and format as the “paper” version of the form presented in
this deliverable.
If the participant accepts the conditions of the experiment, he/she will
receive by email a confirmation of the conditions of the experiment as well as
the complaint and withdrawal procedures.
In case the experiment participant feels uncomfortable about sharing his/her
email for the experiment, the webpage will remind him/her of the existence of
a paper version of the forms, which does not require providing a contact email.
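To make the foreseen flow more concrete, the sketch below (in Python, using Flask) shows how the three screens and the email confirmation step could be chained together. It is an illustration only: the route names, the experiment registry and the send_confirmation helper are hypothetical and do not describe the actual FESTIVAL portal implementation.

```python
# A minimal sketch of the electronic consent flow (hypothetical routes
# and helpers, not the actual FESTIVAL portal): screen 1 selects the
# experiment, screen 2 shows the information, screen 3 gathers consent.
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Hypothetical experiment registry: id -> (description, local language).
EXPERIMENTS = {
    "smart-energy-ptl": ("Energy Management sensiNact", "fr"),
    "smart-shopping-kc": ("Smart Shopping recommendation", "ja"),
}

@app.route("/")  # Screen 1: experiment selection
def select_experiment():
    opts = "".join(f'<option value="{eid}">{desc}</option>'
                   for eid, (desc, _) in EXPERIMENTS.items())
    return render_template_string(
        '<form action="/info"><select name="exp">{{ o|safe }}</select>'
        "<button>Continue</button></form>", o=opts)

@app.route("/info")  # Screen 2: project/experiment description, PIA access
def show_information():
    exp = request.args["exp"]
    desc, local_lang = EXPERIMENTS[exp]
    # The likely mother tongue is inferred from the experiment location,
    # but the participant may override it with ?lang=...
    lang = request.args.get("lang", local_lang)
    return render_template_string(
        "<h1>{{ d }}</h1><p>Informed consent information ({{ l }}).</p>"
        '<a href="/consent?exp={{ e }}">Continue to the consent form</a>',
        d=desc, l=lang, e=exp)

@app.route("/consent", methods=["GET", "POST"])  # Screen 3: consent form
def consent_form():
    if request.method == "POST":
        email = request.form.get("email", "")
        if email:  # the email is optional: paper forms remain available
            send_confirmation(email, request.form["exp"])
        return "Consent recorded. Thank you."
    return render_template_string(
        '<form method="post">'
        '<input type="hidden" name="exp" value="{{ e }}">'
        '<input name="email" placeholder="email (optional)">'
        '<label><input type="checkbox" required> I consent</label>'
        "<button>Submit</button></form>", e=request.args["exp"])

def send_confirmation(email: str, exp: str) -> None:
    # Placeholder: would email the conditions of the experiment plus
    # the complaint and withdrawal procedures to the participant.
    print(f"Would send confirmation for {exp} to {email}")

if __name__ == "__main__":
    app.run()
```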
#### 7\. Assessment of Personal Data Management
_In this section we present the project activities to assess the risks
associated with the management of data in the project testbeds and experiments
and the associated measures to safeguard privacy and data confidentiality._
##### 7.1. Privacy Impact Assessment Procedure
###### 7.1.1. Requirements and State of the Art
As presented in section four, one of the challenges of involving participants
in project experiments is to correctly understand and assess the types of data
that are collected by the experiments and how they are treated, stored, used,
communicated and eventually destroyed. This is important in order to understand
and rapidly detect any potential privacy impact in case personal data were to
be collected, and any other potential security and ethical issues related to
data protection. This is challenging because of the innovative and evolving
nature of the experiments carried out in the project. Additionally, it is an
opportunity to better explain to participants the nature of the experiment and
what is done with the collected data, reinforcing the “informed” nature of the
consent gathered.
To correctly carry out the assessment, the project looked into the state of
the art of Privacy Impact Assessment. There is no standard definition of
Privacy Impact Assessment (or Privacy and Data Protection Assessment, as
sometimes found in the literature), nor any recommended standard process that
could be directly adopted. Evaluations of various existing Privacy Impact
Assessment initiatives and processes exist, such as the PIAW@tch website [4]
or the evaluation of existing initiatives in various EU member states done by
the PIAF project [5]. To build our own framework we looked into the existing
standards for RFID applications at European level [6] (to see how they could
be adapted to the project technologies). We also looked more specifically into
recommendations and processes established at national level by public
authorities: the UK ICO Privacy Impact Assessment process [7], the French CNIL
recommendations regarding Privacy Impact Assessments [8], and, beyond Europe,
the Privacy Impact Assessment Guidance [9] from the US Homeland Security
department.
The process we have set up in the first phase of the project responds to
several requirements:
* The need to have a process that allows for relatively rapid evaluation of risks, within the project budget, and that can be applied both by the consortium and by future external experimenters.
* The need to have a process that enables the identification of experiments and applications that would require more advanced and in-depth evaluation and additional procedures.
* The need to have a process that looks in sufficient detail into the technical complexity of an experiment. The innovative nature of the potential experiments requires a real look at the experiment's information flows to be able to assess the potential risks of an application with certainty.
* The need to have a process that provides results that can be communicated to the experiment participants.
###### 7.1.2. FESTIVAL PIA process at phase 1
Based on these requirements, we created a process that aims to identify the
need for a more advanced Privacy Impact Assessment and that accurately
describes the information flows of the experiment.
The FESTIVAL Privacy Impact Assessment is a process designed to evaluate the
potential privacy impacts and data security risks of an experiment. The PIA
should be conducted throughout the lifespan of an experiment, from its early
design phase to its deployment; it should help identify potential impacts on
the fundamental rights of individuals, and be publicly available to the
experiment's participants.
The FESTIVAL PIA process consists of a fifteen-question questionnaire covering
the entire information flow of an experiment, describing how the data is
handled in each phase and what associated security measures are provided.
The questions are not limited to the way personal data are handled but concern
any type of data collected or used by the experiment. This allows the
identification not only of direct privacy risks but also of other potential
security issues. It is also a guarantee against the mischaracterization of
data as not being concerned by the personal data guidelines.
The questionnaires of the experiments are first shared and reviewed within the
consortium to evaluate the soundness of the answers and to look into any
unidentified potential issues.
The results of the Privacy Impact Assessment questionnaire have to be
communicated to the participants in the experiments, along with the factsheet
explaining the principles of the process, as part of the informed consent.
The PIA is presented in Annex B.
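As an illustration of how such a questionnaire can be handled programmatically (for example, to flag incomplete answers before the consortium review), the sketch below models the six information-flow phases and their fifteen questions as a simple Python structure. The phase names and question texts follow the tables in section 7.2; the class and helper names are our own illustrative choices, not project tooling.

```python
# Illustrative model of the PIA questionnaire (a sketch, not the
# project's actual tooling); phases and questions follow the tables
# in section 7.2 below.
from dataclasses import dataclass, field

PIA_QUESTIONS = {
    "Data Collection": [
        "What data is collected for the experiment?",
        "How is the data collected?",
        "Describe security measures used in the data collection phase?",
    ],
    "Data Storage": [
        "How and where are the data stored?",
        "Describe security measures used in the data storage?",
    ],
    "Data Usage": [
        "What are the data used for?",
        "Are you using profiling techniques?",
        "Are you verifying the data?",
        "Are you considering secondary/future use?",
    ],
    "Data Sharing": [
        "Are you sending/sharing the collecting data with a third party "
        "or publishing the data?",
        "How is data exchange with third party protected?",
    ],
    "Data Destruction": [
        "How long is data stored?",
    ],
    "Data Management": [
        "What regulation / legislation is followed by the experiment "
        "to protect data and user privacy?",
        "Who has access to the data for management purpose?",
        "Describe security measures used in the data management?",
    ],
}

@dataclass
class PIAAssessment:
    experiment: str
    answers: dict = field(default_factory=dict)  # question -> answer text

    def unanswered(self) -> dict:
        """Questions still missing an answer, grouped by phase."""
        return {phase: [q for q in qs if q not in self.answers]
                for phase, qs in PIA_QUESTIONS.items()
                if any(q not in self.answers for q in qs)}

# A consortium review pass could flag incomplete questionnaires
# before the results are shared with experiment participants.
pia = PIAAssessment("PTL - Energy Management sensiNact")
pia.answers["How long is data stored?"] = (
    "Until the report containing the relevant information is produced.")
print(pia.unanswered())
```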
###### 7.1.3. Foreseen evolutions
We are confident that the current process can help us look into any potential
issues related to the project experiments and remove any doubts about their
ethical impact. However, if we identify experiments that require a more
advanced evaluation, we plan to complete the current process with the
following steps:
* A process for the identification of privacy and security risks based on the description of the information flows available in the current form. This assessment will describe the identified risks and, for each, the probability of occurrence and the potential impacts.
* A process for the evaluation of security solutions. For each risk identified, a description of the prevention measures (describing solutions set up to limit the occurrence of risks) and a description of mitigation measures (solutions set up to limit the impact of the risk) accompanied by an evaluation to determine whether the final impact on individuals after implementing each solution is a justified, compliant and proportionate response to the aims of the project.
* The involvement of external experts to analyze the results of the whole Privacy Impact Assessment and provide recommendations.
##### 7.2. PIA Results
The following section presents the results of the initial evaluation of the
experiments planned by the project.
###### 7.2.1. Smart Energy
7.2.1.1. PTL – Energy Management sensiNact
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The user's comfort in dealing with the autonomous system.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
* The data from the user will be collected in the form of an interview.
* Sensor information will be collected on a hard drive in the form of a _log_.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
No network data transmission will be used; the log data will be collected in
place.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data collected will be stored on a secured server behind a firewall, with
a network security agent responsible for setting up the server.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
The data will be password protected and the computer network access will be
restricted.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The data will be used to detect which autonomous system action became an issue
for the user (or at what point it did).
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No profiling technique is adopted.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
The researcher assigned to perform the experiment will be in charge of
verifying the quality of the data and the coherence of the subject (the user)
before submitting the data for analysis.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
No; a report will be produced from the experiment, after which the data can be
disposed of.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Not applicable.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data will be available until the report containing the relevant
information is produced.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Not applicable.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Only the researcher assigned for the experiment will have access to the data
collected.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
No data will be transmitted through the network, and no data storage will be
done without the consent of the subject. The data collected will be stored on
a secure server managed by a specialist assigned during the experiment; this
specialist may be a collaborator from one of the project partners.
</td> </tr> </table>
7.2.1.2. ATR DC – xEMS control
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The following types of data are collected:
* Energy consumption of equipment in the data center: servers, air conditioners, electric power sources.
* Task (workload) assignment.
* Data center operator (including ASP).
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
_What process do you use for the data collection?_
Environment sensors including temperature, humidity and pressure.
_Are you requesting the data directly from the user?_
Yes.
_Are you using external data sources?_
No.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
For data transmission via wide area networks, secure communication protocols
such as ssh, HTTPS and IEEE1888 are utilized.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data are stored both inside and outside of the data centers. In
particular, following the ASP model, the data are handled for energy
management from outside (a management office).
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Secure communication protocols are used for the communications, and the data
is thus protected.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The data are used only for energy management of the data centers.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
The data are only sensing data for energy management. By analyzing the data
with machine learning, data center operation is demonstrated from the outside.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
The data are only environment data. Data quality is maintained by using a
heuristic developed for this purpose.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
The data obtained in the data center are not used for other cases. The
management system, with the heuristics and algorithms developed for this data
center, is expanded to other data centers.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
The data obtained in the data center are not used for other cases. The
management system, with the heuristics and algorithms developed for this data
center, is expanded to other data centers. The system is widely available as OSS.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
The data obtained in the data center are not used for other cases. The
management system, with the heuristics and algorithms developed for this data
center, is expanded to other data centers. The system is widely available as OSS.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data are overwritten every year.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
The data are only sensing data for energy management. By analyzing the data
with machine learning, data center operation is demonstrated from the outside.
CPU task (workload) assignment information is normally not made public.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The data center manager accesses the data for energy management purposes.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
The data are only sensing data for energy management. By analyzing the data
with machine learning, data center operation is demonstrated from the outside.
Only the CPU task (workload) assignment information is normally not made
public; therefore, the workload assignment information for the real use case
is securely controlled.
</td> </tr> </table>
7.2.1.3. Knowledge Capital – SNS like EMS
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
For smart energy management, the following data is collected:
* End users’ inputs through smartphones and web browsers on PCs.
* Various sensor data such as temperature, humidity, GPS, …
* People movement data through floor pressure sensors.
…
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
For end users’ input, web browsers and smartphone applications are utilized.
Sensor data is collected through designated sensor devices. For collecting
data via wide area networks, secure communication protocols such as HTTPS and
IEEE1888 are utilized. For local transmission, protocols such as ECHONET Lite
and Bluetooth are used.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
For data transmission via wide area networks, secure communication protocols
such as ssh, HTTPS and IEEE1888 are utilized.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Collected data is stored on the storage servers at NICT JOSE platform.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
JOSE virtual servers are located behind a strong firewall, so only restricted
access is permitted, with high security.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Collected data is utilized for smart energy management for building controls.
It is also utilized for research aimed at finding more efficient energy
management.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Various statistical methods are exploited to find efficient energy management
strategies; in that process, the data is verified.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
No
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
No
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Not applicable.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Collected data is stored until the FESTIVAL project ends.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Not Applicable
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Only researchers and operators that have appropriate authorization can access
and manage the collected data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Data is accessed and managed only through secure protocols such as HTTPS, ssh,
and IEEE1888.
</td> </tr> </table>
###### 7.2.2. Smart Building
7.2.2.1. PTL – People counting using a single / multiple camera(s)
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The collected data are mainly derived from video data. Video frames are
neither stored nor transmitted.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
The image sensors being involved can be of different natures. The only
information that is stored or transmitted corresponds to anonymized image
features.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
No image frame is either stored or transmitted. Background images or image
features are transmitted via secured protocols. Depending on privacy issues,
transmitted image features can be shared using an encryption technique.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
To Be Determined
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
To Be Determined
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The data are used for statistics and monitoring.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No profiling technique is planned, as none is needed by the target applications.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
For the moment, no procedure is defined to verify the collected data.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
No future use of collected data is planned.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
The data will not be made available to third parties.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
**\--**
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data will not be stored after the end of the experiment.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
National legislation and rules about video experiments will be followed. In
France, where the experiment will be conducted, the guidelines published by
the French authority, the CNIL (Commission Nationale de l'Informatique et des
Libertés), will be taken into account.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Only the experimenters will have access to stored data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
A login-and-password administration system will be used to access the data.
</td> </tr> </table>
7.2.2.2. PTL – Using actuator based on interpreting the scene using a smart
camera
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The collected data are mainly derived from video data. Video frames are
neither stored nor transmitted.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
The image sensors being involved can be of different natures. The only
information that is stored or transmitted corresponds to anonymized image
features.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
No image frame is either stored or transmitted. Background images or image
features are transmitted via secured protocols. Depending on privacy issues,
transmitted image features can be shared using an encryption technique.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
To Be Determined
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
To Be Determined
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The data are used for statistics and monitoring.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No profiling technique is planned, as none is needed by the target applications.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
For the moment, no procedure is defined to verify the collected data.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
No future use of collected data is planned.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
The data will not be made available to third parties.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
**\--**
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data will not be stored after the end of the experiment.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
National legislation and rules about video experiments will be followed. In
France, where the experiment will be conducted, the guidelines published by
the French authority, the CNIL (Commission Nationale de l'Informatique et des
Libertés), will be taken into account.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Only the experimenters will have access to stored data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
A login-and-password administration system will be used to access the data.
</td> </tr> </table>
7.2.2.3. ATR DC – Cold storage geo replication
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The following types of data are collected:
* IoT data (log information, SNS, etc.).
* Sensitive personal data are not included.
* Location information is essential for geo-replication.
* Users with at least two data centers share the data.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
IoT data are replicated across at least two data centers.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
IoT data, including log information, are stored in cold storage (tape,
Blu-ray, etc.) and securely managed.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data are stored in at least two data centers as cold data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Encryption is applied.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Archiving cold data for several users.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
Profiling of IoT data is required.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
No; verification is not essential for cold-storage replication.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
The algorithm developed is opened to the public as OSS.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
The data are not made public. The cold data replication algorithm developed is
released to the public as OSS.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
The data are not made public. The cold data replication algorithm developed is
released to the public as OSS.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Cold data is stored for at least 10 years.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
No regulation.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The cold data are not made public.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
The cold data are not made public.
</td> </tr> </table>
7.2.2.4. iHouse – Smart House
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
Data from sensors in iHouse is collected, such as temperature, humidity, door
open/close information, illuminance, power consumption of each device, power
generation by solar panels, power data of battery, wind, rain, and so on.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
For each type of data, designated sensors are utilized. The data is collected
over wireless/wired networks using communication protocols such as ECHONET
Lite and IEEE1888, with openHAB protocol bindings.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Data transmission from iHouse to outside networks is conducted via the
IEEE1888 protocol, which generates an encrypted path with sufficient security.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data is stored on an SQL server on a NICT JOSE VM, and on local storage
servers at OSK for data analysis.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Both storage servers are behind a strong firewall, so only restricted access
is permitted.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The data is used for energy monitoring and control of home appliances in
iHouse. Also, the collected data is analyzed for finding various relationships
among appliances for further smart house controls.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Various statistical methods are exploited for finding relationships among each
data. On that process, the data is verified.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
No.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Making the collected data open data is under discussion with NICT, which is
the original data holder. For journal papers and conference presentations,
only statistically summarized data is utilized.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
JOSE CKAN platform will be used for publishing data as open data.
</td> </tr> </table>
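For context, CKAN catalogues expose a JSON "action" API for registering datasets. The sketch below shows how a statistically summarized iHouse dataset could be pushed to a CKAN instance through the standard package_create action; the URL, API key and metadata fields are placeholders, since the actual JOSE CKAN configuration is not specified in this deliverable.

```python
# Sketch: publishing a summarized dataset to a CKAN catalogue through
# CKAN's action API. The URL, API key and metadata are placeholders;
# the actual JOSE CKAN instance and dataset schema may differ.
import requests

CKAN_URL = "https://jose-ckan.example.org"   # placeholder address
API_KEY = "replace-with-a-real-api-key"      # placeholder credential

def publish_dataset(name: str, title: str, notes: str) -> dict:
    """Create a dataset entry via CKAN's package_create action."""
    resp = requests.post(
        f"{CKAN_URL}/api/3/action/package_create",
        json={"name": name, "title": title, "notes": notes},
        headers={"Authorization": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    # Only aggregated, non-personal statistics would be published,
    # consistent with the data sharing answer above.
    publish_dataset(
        name="ihouse-energy-summary",
        title="iHouse energy data (statistical summary)",
        notes="Aggregated, non-personal sensor statistics from iHouse.",
    )
```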
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data will be stored until the end of the FESTIVAL project (i.e. M36).
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Nothing to be noted.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Only researchers of the FESTIVAL project can access data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Storage servers are behind a strong firewall, so only restricted access via
secure protocols such as IEEE1888 and ssh is permitted.
</td> </tr> </table>
7.2.2.5. Smart Station at Maya
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
* Amount of solar power generation.
* Amount of CO2 reduced.
* Current temperature.
* Weather of the region.
* Amount of pollen.
* Bus access information at Maya Station.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
_What sensors are you using?_
Iberium sensor, Wi-Fi packet sensor.
_Are you requesting the data directly from the user?_
No, we are not requesting the data directly from the user.
_Are you using external data sources?_
Under review.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
The data will be password protected and the computer network access will be
restricted.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Under review.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Under review.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
We provide useful information about Maya Station. By watching the digital
signage in front of the train gates, users can get information about
temperature, weather, solar power generation, bus access and so on.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No profiling technique used.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
We plan to verify the data, such as the search information used by the digital
signage at Maya Station.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
We do not consider it at the moment.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
We do not consider sending/sharing the collected data with others at the
moment, because we are not yet confident that we can collect accurate data at
Maya Station, and we do not yet recognize the importance and usefulness of the
information collected there. If the Maya Station experiment is successful, we
will start to consider sharing the collected data with others.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Undecided.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Undecided.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Since personal information is not handled in the Maya Station experiment, no
measures for the protection of personal information are carried out. Jcomm has
acquired the Privacy Mark defined by JIPDEC.
If we need to handle personal information in the future, we will do so in
accordance with the Privacy Mark.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The personal information handling manager of Jcomm, and people permitted by
that manager to handle personal information.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Undecided.
</td> </tr> </table>
###### 7.2.3. Smart Shopping
7.2.3.1. Knowledge Capital – Smart Shopping system and recommendation analysis
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
We will collect direct personal data (name, phone number, email address) and
location information of users inside the Lab in Knowledge Capital.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
Direct personal data: we collect the direct personal data directly from the
users.
Location information of users: we collect the location information of the
users by using Beacon signals.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Direct personal data: We keep the direct personal data only on paper, not
stored in computer systems, so that the data can never be copied or used for
other purposes.
Location information of users: We anonymize the location information of users
so that the data does not contain any direct personal data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Direct personal data: We keep the direct personal data only on paper.
Location information of users: We store the location information of users in a
distributed file system.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Direct personal data: Only the managers of the experiments can access the
direct personal data kept on paper.
Location information of users **:** Authorization is required for data access.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Direct personal data: we need the contact information of the users (direct
personal data) because we provide the users with an iPad mini during the
experiments and have to be able to contact them in case an iPad mini is
stolen.
Location information of users: The location information of the users is used
for the recommendation service.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Direct personal data: We validate the data manually.
Location information of users: We will install the Beacon devices properly in
the Lab so that they provide us with good-quality user locations.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
Basically, we will not consider secondary use of the raw data. However, we
compute statistics of the users' location information (no longer personal
data) and make them available as open research data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
We will share the statistics of the users' location information as open
research data. We make the data public so that others can perform analyses on
it.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
The statistics are open to everyone (no protection).
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
The data is kept for the lifetime of the FESTIVAL project.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
We have to follow the Japanese privacy data management law for the direct
personal data and the location information of users.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The managers of the experiments.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
The direct personal data is not stored in computer systems. Authentication is
used for the location information of users.
</td> </tr> </table>
7.2.3.2. Santander – Connected Shops
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
This experiment will collect environmental data such as temperature or
humidity.
Furthermore, the experiment will also collect SNR data from the signals sent
by the devices within the deployment area, which makes the positioning of the
devices possible.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
_What sensors are you using?_
The SNR of WiFi and BT signals is collected using both radio interfaces
located at several points in the deployment. Additionally, temperature and
humidity sensors will also be used.
_Are you requesting the data directly from the user?_
No.
_Are you using external data sources?_
SmartSantander environmental data will also be available to be accessed.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Data will be collected and anonymized locally using secure hash algorithms
before being sent to the SmartSantander platform. The devices will be
connected to the internet through the municipality network (which is behind a
firewall accessible only to specific machines). Stored data will only be
accessible through the EaaS federation platform or SmartSantander, using an
X509 certificate for authentication.
</td> </tr> </table>
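As an illustration of the kind of local anonymization described above, the following sketch salts and hashes a device identifier (e.g. a MAC address) with a keyed hash before it leaves the collection device, so the platform only ever sees a stable pseudonym. The salt policy and function names are illustrative assumptions, not the deployment's actual code.

```python
# Sketch of local MAC-address anonymization with a secure hash
# (illustrative; the deployment's actual algorithm and salt policy
# are not specified in this deliverable).
import hashlib
import hmac

# A secret salt kept only on the collection device; with a keyed hash
# (HMAC), the pseudonym cannot be reversed or brute-forced from the
# platform side without this key.
DEVICE_SALT = b"replace-with-a-per-deployment-secret"

def anonymize_mac(mac: str) -> str:
    """Return a stable pseudonymous ID for a MAC address."""
    normalized = mac.strip().lower().replace("-", ":").encode()
    return hmac.new(DEVICE_SALT, normalized, hashlib.sha256).hexdigest()

# The same device always maps to the same pseudonym, which allows
# recognizing returning visitors without identifying them.
print(anonymize_mac("00:1A:2B:3C:4D:5E"))
```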
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data is stored using the IoT API (RESTful API) in SmartSantander. The
storage facility is located on the UC premises and is based on a Mongo
database engine.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Data access will only be possible through the IoT API (RESTful API) in
SmartSantander. Authentication is based on X509 certificates. The machines
storing the data are only accessible from specific IPs.
</td> </tr> </table>
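To make the certificate-based access concrete, the sketch below calls a RESTful IoT API using an X509 client certificate presented during the TLS handshake. The endpoint URL, resource path and certificate file names are placeholders rather than SmartSantander's actual configuration.

```python
# Sketch: calling a RESTful IoT API with X509 client-certificate
# authentication over HTTPS. The URL and certificate paths are
# placeholders; the real SmartSantander endpoints may differ.
import requests

API_BASE = "https://iot-api.example.org"                # placeholder endpoint
CLIENT_CERT = ("experimenter.crt", "experimenter.key")  # cert + key files

def fetch_observations(resource_id: str) -> list:
    """Retrieve stored observations for one resource."""
    resp = requests.get(
        f"{API_BASE}/resources/{resource_id}/observations",
        cert=CLIENT_CERT,   # client certificate presented during TLS
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_observations("temperature-sensor-042"))
```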
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The collected data will be used to provide the shop owners with useful
information about customers' behavior. Additionally, environmental data will
also be delivered to the customers.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
Users can never be identified, but some profiling techniques can be applied to
recognize when the same user has accessed the shopping area.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Data quality cannot be verified by common means. Only strange behaviors (e.g.
SNR surpassing certain limits) can be detected in order to discard wrong
measurements.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
At the moment no future use is being considered.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Data will be available through the FESTIVAL EaaS federation. External
experimenters will be able to access the data, but authorization must be
granted by UC/the Santander municipality. Future results from experiments made
on top of that data, such as the number of people in the market and people's
preferences, will also be shared with the shop owners.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
In SmartSantander, authentication is performed through X509 certificates,
which will also be used to authorize the experimenters to use the
corresponding resources. Data will also be delivered using HTTPS (SSL/TLS).
In EaaS, the information will be secured using the platform's methods.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
There is no fixed time limit for destroying data. In principle, data is
expected to be stored for two years.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
We have asked the Spanish Data Protection Office whether the anonymized MAC
addresses used in the Smart Shopping use case are considered private/personal
information. Other experiences in Spain show that anonymized MAC addresses are
not considered personal data.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
UC and Santander municipality will have access for management purposes.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Authentication and authorization are performed using the same methods as if
they were external experimenters, but with management permissions.
</td> </tr> </table>
7.2.3.3. Santander – Advertised Premium Discounts
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
During the experiment, shop offers will be collected, all of which are
publicly available. Additionally, customers will provide feedback about the
offers and the shop environment.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
Data is gathered through a web application and smartphones (shop offers), and
through smartphones (customer feedback).
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Data will be collected and sent using SSL/TLS encryption from the smartphones.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
For the time being, shop offers are stored depending on their source, some in
the municipality facilities and others on UC premises. Customer feedback will
be stored at UC.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Data access will only be possible through the IoT API (RESTful API) in
SmartSantander. Authentication is based on X.509 certificates. The machines
storing the data are accessible only from specific IPs.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The goal of the application is to provide premium offers to the customers
based on their location. Additionally, the data will give shop owners a way to
improve shop conditions.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No, but application users will have access to their own feedback.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Only registered shops can send offers to the platform. Users can be banned if
the system detects pernicious content.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
Not at this moment.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Shop offers are stored at least until the offer expires. Customer feedback
will also remain accessible, with no restriction at the moment.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Shop offers are publicly available. Customers' feedback is shared with the
shop owners.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Open data can be freely accessed.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
No personal data is stored, so the experiment fulfils the Spanish regulation
on personal data protection.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
UC and Santander municipality will have access for management purposes.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Authentication and authorization are performed using the same methods as if
they were external experimenters, but with management permissions.
</td> </tr> </table>
###### 7.2.4. Multi-domain
7.2.4.1. JOSE (Japan-wide Orchestrated Smart/Sensor Environment)
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
This experiment itself will collect only logs from components of the system.
Experimenter users on this experiment system will collect their own data.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
Each component of the system will output logs to the system storage.
Experimenter users of the system will collect data by their own means.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Logs will be collected on securely protected machines (authorization needed to
log into the machines) and transferred only by encrypted (by SSL/TLS/SSH)
connection. Experimenter users on this experiment system should use secure
measures in the data collection phase.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Data (logs and user data) will be stored on storage servers provided by the
JOSE testbed. Each server is located in one of NICT's data centers in Japan.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Only authorized users can access the machines providing storage.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Logs will be used for maintenance and tuning of the experiment system.
Experimenter users may have their own goals.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No, but experimenter users on the system may use profiling techniques on their
own data.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
No for the logs, but experimenter users may have their own verification
process of their data.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
Logs may also be used for collecting statistical information of the system
usage.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Logs will not be shared with third parties and will not be published.
Statistical information may be published as part of future Deliverables.
Experimenter users on the system may have their own sharing policies for their
data.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Logs will not be exchanged with third parties. Experimenter users on the
system may have their own data exchange policies for their data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Undecided. Logs may be saved as long as the system is operating. Experimenter
users on the system will have their own storage periods for their data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
No personal data will be stored on this experiment system, so we will not be
subject to the Japanese Act on the Protection of Personal Information.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The operators of the experiment system from ACUTUS have access to the logs.
Experimenter users will have their own policy for management of their data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Authentication and authorization are needed for managing data on the system.
</td> </tr> </table>
7.2.4.2. Engineering FIWARE-lab
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
The FIWARE-lab will support multi-domain experiments in which specific
information will be collected. From the point of view of FIWARE-lab usage,
system logs related to performance, security or statistics will be collected.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
The data are collected internally by the FIWARE-lab system components.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Logs will be collected on securely protected machines (authorization needed to
log into the machines) and transferred only by encrypted (by SSL/TLS/SSH)
connection.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
The data will be collected by log components and stored in a specific database
located in the same infrastructure as the FIWARE-lab.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
The data will be accessible only for authorized users.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
The log data provided by FIWARE-lab will be used to monitor system
performance, for maintenance reasons (e.g., identification of bugs/errors) and
for security reasons (monitoring access and authorization to the
functionalities).
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No, but experimenter users on the system may use profiling techniques on their
own data.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
No for the logs, but experimenter users may have their own verification
process of their data.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
The data will also be used to calculate specific KPIs.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Data will not be shared with third parties and will not be published.
Statistical information may be published as part of future Deliverables.
Experimenter users on the system may have their own sharing policies for their
data.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
N/A
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Undecided. Logs may be saved as long as the system is operating. Experimenter
users on the system will have their own storage periods for their data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Italian and European regulations will be followed.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The FIWARE-lab system administrator will have the access to the information.
Part of the information can be also accessible by the experimenters.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Authentication and authorization are needed for managing data on the system.
</td> </tr> </table>
7.2.4.3. IoT based experiment over a federated domain
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
No new data are collected for the experiments; the federation experiment is
made with legacy data from the testbeds.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
It depends on the testbed involved. SmartSantander data are collected from
deployed sensors.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Data is collected using the SmartSantander network, through which most
gateways access the Internet.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Data storage depends on the facility. In SmartSantander, data is stored in a
MongoDB database located on UC premises.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
It depends on the testbed, but in the near future the security access layer
will be provided by the EaaS.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Data will be used by the experimenters who access the EaaS.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
Data is gathered from sensors, but verification depends on the testbed.
Reasonable limits can be set to determine whether a sensor is broken.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
Automated links for data and VMs will be used under the EaaS in the future.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Data from testbeds will be shared with the experimenters who have access.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
It depends on the testbed. In SmartSantander, authentication is performed with
X.509 certificates, which are also used to authorize the experimenters to use
the corresponding resources. Data is also delivered using HTTPS (SSL/TLS).
In EaaS, the information will be secured using the platform's methods.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
There is no limit at the moment for data storage.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
The experiment will follow the regulation of the EaaS.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
Each testbed will have its own managers. UC will manage SmartSantander data.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
It depends on the testbed. Authentication and authorization are performed
using the same methods as if they were external experimenters, but with
management permissions.
</td> </tr> </table>
7.2.4.4. Messaging/Storage/Visualization platform federation use case
<table>
<tr>
<th>
**Data Collection**
</th> </tr>
<tr>
<td>
_**What data is collected for the experiment?** _
</td> </tr>
<tr>
<td>
This experiment itself will collect only logs from components of the federated
platform. Experimenters using the platform of this federation experiment will
collect their own data.
</td> </tr>
<tr>
<td>
_**How is the data collected?** _
</td> </tr>
<tr>
<td>
Each components of the federated platform will output logs to the system
storage.
\- Experimenters using platform of this federation experiment will collect
data by their own measures
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data collection phase?** _
</td> </tr>
<tr>
<td>
Logs will be collected on securely protected machines (authorization needed to
log into the machines) and transferred only by encrypted (by SSL/TLS/SSH)
connection. Experimenters using the platform of this federation experiment
should use secure measures in the data collection phase.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Storage**
</th> </tr>
<tr>
<td>
_**How and where are the data stored?** _
</td> </tr>
<tr>
<td>
Data (logs and user data) will be stored on storage servers provided by the
JOSE testbed and the CKP Umekita Network testbed. Each server is located in
one of NICT's data centers or the CKP Dojima data center in Japan.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data storage?** _
</td> </tr>
<tr>
<td>
Only authorized users can access the machines providing storage.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Usage**
</th> </tr>
<tr>
<td>
_**What are the data used for?** _
</td> </tr>
<tr>
<td>
Logs will be used for maintenance and tuning of the experiment system.
Experimenters may have their own goals.
</td> </tr>
<tr>
<td>
_**Are you using profiling techniques?** _
</td> </tr>
<tr>
<td>
No, but experimenters using the federated platform may use profiling
techniques on their own data.
</td> </tr>
<tr>
<td>
_**Are you verifying the data?** _
</td> </tr>
<tr>
<td>
No for the logs, but experimenters may have their own verification process of
their data.
</td> </tr>
<tr>
<td>
_**Are you considering secondary/future use?** _
</td> </tr>
<tr>
<td>
Logs may also be used for collecting statistical information of the system
usage.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Sharing**
</th> </tr>
<tr>
<td>
_**Are you sending/sharing the collecting data with a third party or
publishing the data?** _
</td> </tr>
<tr>
<td>
Logs will not be shared with third parties and will not be published.
Statistical information may be published as part of future Deliverables.
Experimenters using the federated platform may have their own sharing policies
for their data.
</td> </tr>
<tr>
<td>
_**How is data exchange with third party protected?** _
</td> </tr>
<tr>
<td>
Logs will not be exchanged with third parties. Experimenters using the
federated platform may have their own data exchange policies for their data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Destruction**
</th> </tr>
<tr>
<td>
_**How long is data stored?** _
</td> </tr>
<tr>
<td>
Undecided. Logs may be saved as long as the system is operating. Experimenters
using the federated platform will have their own storage periods for their
data.
</td> </tr> </table>
<table>
<tr>
<th>
**Data Management**
</th> </tr>
<tr>
<td>
_**What regulation / legislation is followed by the experiment to protect data
and user privacy?** _
</td> </tr>
<tr>
<td>
Currently, we have no plan to store personal data in the federated platform,
so we will not be subject to the Japanese Act on the Protection of Personal
Information.
</td> </tr>
<tr>
<td>
_**Who has access to the data for management purpose?** _
</td> </tr>
<tr>
<td>
The KSU/ACUTUS operators of the federated platforms have access to the logs
generated by the shared platforms. Experimenters will have their own policies
for managing their data and the logs of their own platforms.
</td> </tr>
<tr>
<td>
_**Describe security measures used in the data management?** _
</td> </tr>
<tr>
<td>
Authentication and authorization are needed for managing data on the system.
</td> </tr> </table>
###### 7.3. Contacts with Data Protection authorities
In this section we present the current state of the discussions with the
responsible Data Protection Authorities for the different project locations
that will be in contact with external participants to the project experiments.
7.3.1. PTL
Storage and manipulation of personal digital information are regulated by an
independent French organization known as the _Commission Nationale de
l'informatique et des libertés_ (CNIL - http://www.cnil.fr/), whose current
president is _Isabelle Falque-Pierrotin_.
Thus, to ensure that the experiments are fully compliant with constraints
ruled by CNIL, regarding personal data collection, a legal entity with
expertise in this field has been contacted to provide the legal support on the
experimentation.
The administrative procedure to obtain a specific authorization might not be
required for the targeted applications at first. It is nevertheless planned to
apply for an accreditation, although this accreditation might not prove
necessary; the specific requirements set by the competent authority will be
reported in the near future in another deliverable.
The lead time for the authorization document from CNIL is estimated to be
three months. Thus, three months of preparation are required before the
experiments take place.
7.3.2. TUBA
TUBA also depends on the requirements of the French authority CNIL.
As for now, the GrandLyon Data platform contains only anonymized data and
therefore is in compliance with the CNIL guidelines.
Since the Data platform of GrandLyon is about Open Data, the data contained in
this platform should be kept anonymized and must not contain personal data.
Otherwise, a private dataset would have to be created and a request/simplified
submission sent to the CNIL.
The delays are:
* Special request: 3 months
* Simplified submission (if the request fits one established CNIL use case): immediate
7.3.3. Santander
In order to determine what kind of treatment we should apply to the data
involved in the Smart Shopping use cases, we have sent a query to the Spanish
Data Protection Office regarding the use of the anonymized MAC addresses
required by the different functionalities to be developed in this area. As
mentioned, location, tracking and delivery of offers require capturing the MAC
addresses of citizens' devices; each address is anonymized and the original
erased, and from that moment only the anonymized MAC is used. We have not
received any answer yet.
In parallel, and in order to know whether there is any previous case similar
to ours, we have found a resolution from the Spanish Data Protection Office
related to the use of anonymized MAC addresses. The Zaragoza Municipality
launched a project to detect traffic congestion and estimate the time spent by
citizens driving in the city, through the use of Bluetooth devices which
detect the MAC addresses of smartphones. In this case, the MAC address is also
anonymized. The resolution states that this data is not considered private
data, so the Data Protection Law does not need to be applied.
7.3.4. iHouse, ATR DC, and the Lab
For the iHouse sensing data, the agreement on data exploitation for the
FESTIVAL project has been signed. In addition, the discussion with NICT on
publishing it as open data is ongoing.
#### 8\. Europe – Japan differences
_In this section we present the main differences identified in this task
between European and Japanese approach to involvement of end users in
experiments and safeguard of privacy._
##### 8.1. Personal data and consent
Personal data in Japan has been protected by the Act on the Protection of
Personal Information (Act No. 57 of 2003) (APPI). The term “personal
information” is defined in Article 2 (1) of the Act as “information about a
living individual which can identify the specific individual by name, date of
birth or other description contained in such information (including such
information as will allow easy reference to other information and will thereby
enable the identification of the specific individual).”
English translation of Act on the Protection of Personal Information (Act No.
57 of 2003), _http://www.cas.go.jp/jp/seisaku/hourei/data/APPI.pdf_
The amendment to APPI was approved in the House of Representatives in Japan on
September 3, 2015. One motivation for the amendment is that the range of
personal information was not clearly defined in APPI, which was inconvenient
especially for the commercial use and application of personal data by
companies. The definition of “personal information” is therefore extended by
the amendment to include additional types of information related to the
physical characteristics of individuals, such as fingerprint data and face
recognition data, along with the numeric codes allocated to individuals, such
as passport numbers and driver’s license numbers.
Another motivation is that incidents of personal information leakage have
increased public concern. Therefore, the amendment enhances the protection of
personal data by maintaining the traceability of the records when personal
data is transferred to third parties, as well as by introducing a new criminal
penalty to deal with the misuse of personal data.
One of the notable features of the amendment is the establishment in January
2016 of a new government authority named the “Personal Information Protection
Committee” to ensure the protection of personal data, as well as to cope with
the cross-border transfer of personal data with countries that have a legal
system equivalent to the Japanese personal data protection system, such as EU
member countries.
Regarding the data subject’s prior consent, Article 16 (Restriction by the
Purpose of Utilization) states that “a business operator handling personal
information shall not handle personal information about a person, without
obtaining the prior consent of the person” and Article 23 (Restriction of
Provision to A Third Party) also states that “A business operator handling
personal information shall not, except in the following cases, provide
personal data to a third party without obtaining the prior consent of the
person.”
However, how to obtain the prior consent is not specifically expressed in
APPI. In some business fields, the ministries and government offices
responsible for those fields prepare guidelines on dealing with personal
information and on the way of obtaining the prior consent. For instance, the
Financial Services Agency (FSA) defines “Guidelines for Personal Information
Protection in the Financial Field”, whose Article 4, Regarding the Format of
Consent (relevant to Articles 16 and 23 of the Law), states that “When
acquiring the consent of the person prescribed in Article 16 and 23 of the
Law, entities handling personal information in the financial field shall, in
principle, do so by document (including a record made by an electronic method,
a magnetic method, or any other method not recognizable to human senses.
Hereinafter this applies).” So, the “document” is not necessarily a paper
document but also includes any digital alternative, including web-based
systems obtaining the user’s consent.
Guidelines for Personal Information Protection in the Financial Field,
_http://www.fsa.go.jp/frtc/kenkyu/event/20070424_02.pdf_
Introduction of significant amendments to Japan’s Privacy Law,
_http://globalcompliancenews.com/introduction-of-significant-amendments-to-
japans-privacy-lawpublished-20150904/_
##### 8.2. Video Experiment
Cameras are usually not used in Japanese experiments, as Japanese people tend
to object to having their bodies extensively captured by cameras in public
spaces.
NICT and Osaka Station City had planned experiments using cameras to measure
the flow of people at JR Osaka station (in April 2014). However, the
experiment was cancelled due to the opposition of station users and scholars.
The reasons for the opposition were that:
* The users of JR Osaka Station and station building could not refuse to participate in the experiment without changing their commuting route.
* There are no unified rules on personal information protection, etc. concerning experiments in public spaces.
#### 9\. Conclusions
_This section concludes the deliverable with lesson learned and plans for
future activities._
Over the first year the project has focused on creating a responsible
environment for involving end users and external experimenters in the project.
The context of the project experiments and the associated risks to ethics and
privacy have been studied carefully. Although the project will make very
limited use of personal data and overall represents a very limited ethical
hazard in its research, a strong policy on the issue has been adopted, as a
testimony to the importance of this issue for the consortium. The state
of the art in responsible research and innovation has been studied and the
project has defined its strategy to integrate some of its principles in the
project experiments.
The project has set up the infrastructure for its external participant
involvement, from factsheets aimed at raising the awareness on important
issues, to guidelines and processes for user consent, complaints and data
withdrawal, as well as a first scheme for Privacy and Security Impact
Assessment.
The project experiments, although they are still in the early process of their
own definition, have all participated in this effort and provided a first
evaluation of their potential contacts and interactions with end users and of
the way they deal with data.
Through this process we have not only set up an operational environment for
the involvement of external participants in the project, but also increased
our knowledge of these issues and disseminated this knowledge within the
consortium and even outside it (thanks to the first distributions of
factsheets).
The project effort on this task will continue over the following period, to
pursue the effort already engaged and finalise the project infrastructure, but
also to gather feedback and improve our framework. As the project experiments
move into an operational phase and as the project opens up to external
experimenters, we foresee that this task will progressively evolve into an
operational support task that provides guidance to experimenters.
# 1 Introduction
The EarthServer-2 project is itself built around concepts of data management
and accessibility. Its aim is to implement enabling technologies to make large
datasets accessible to a varied community of users. The intention is not to
create new datasets but to make existing datasets (identified at the start of
the project) easier to access and manipulate, encouraging data sharing and
reuse. Additional datasets have been added during the life of the project as
they became available and the DMP was updated as a “live” document to reflect
this. This final version of the Data Management Plan is a snapshot taken on
February 1st, 2018.
# 2 Data Organisation, Documentation and Metadata
Data is accessible through the Open Geospatial Consortium (OGC) Web Coverage
Processing Service 1 (WCPS) and Web Coverage Service 2 (WCS) standards.
EarthServer-2 has established data/metadata integration on a conceptual level
(by integrating array queries with known metadata search techniques such as
tabular search, full text search, ontologies etc.) and on a practical level
(by utilizing this integrated technology for concrete catalogue
implementations based on standards like ISO 19115, ISO 19119 and ISO 19139
depending on the individual service partner needs).
# 3 Data Access and Intellectual Property
Data access restrictions and intellectual property rights will remain as set
by the dataset owners (see Section 6). All data used in the EarthServer-2
project is freely available, although in some cases users are asked to
acknowledge data when presenting results.
# 4 Data Sharing and Reuse
The aim of EarthServer-2 is to make data available for sharing and reuse
without requiring that users download the entire (potentially huge) dataset.
Data is available through the OGC WCPS and WCS standards, allowing users to
filter and process data at source before transferring them back to the client.
Five data services have been created (Marine, Climate, Earth Observation,
Planetary and Landsat), providing simple access via web portals with a user-
friendly interface to filtering and analysis tools as required by the
application domain.
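To illustrate this filter-and-process-at-source model, the sketch below first
issues a standard WCS GetCapabilities request against the PML endpoint listed
in the data register (Section 6) and then submits a WCPS query. The coverage
name and axis labels in the query are hypothetical placeholders, and the exact
request parameters for executing WCPS queries may vary between server
deployments.

```python
import requests

# Endpoint as listed in the data register (Section 6).
ENDPOINT = "http://earthserver.pml.ac.uk/rasdaman/ows"

# 1) Discover the available coverages with a standard WCS request.
caps = requests.get(
    ENDPOINT,
    params={"SERVICE": "WCS", "VERSION": "2.0.1", "REQUEST": "GetCapabilities"},
    timeout=60,
)
caps.raise_for_status()

# 2) Ask the server to compute a spatial average with a WCPS query, so only
#    the result (not the whole dataset) crosses the network.
#    "CHL_MONTHLY", "Lat" and "Long" are placeholder names.
wcps_query = "for $c in (CHL_MONTHLY) return avg($c[Lat(45:50), Long(-10:0)])"
result = requests.get(
    ENDPOINT,
    params={"SERVICE": "WCS", "VERSION": "2.0.1",
            "REQUEST": "ProcessCoverages", "query": wcps_query},
    timeout=60,
)
print(result.text)
```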
# 5 Data Preservation and Archiving
EarthServer-2 will not generate new data; preservation and archiving is the
responsibility of the upstream projects from which the original data was
obtained.
1. : http://www.opengeospatial.org/standards/wcps
2. : http://www.opengeospatial.org/standards/wcs
# 6 Data Register
The data register has been maintained as a “live” document; a snapshot was
created for each DMP release (see 1.1 and following sections).
The data register is based upon information and restrictions supplied by the
upstream data provider matched to Horizon 2020 guidelines as below (in
_italics)_ :
* **Data set reference and name**
_Identifier for the data set to be produced._
* **Data set description**
_Descriptions of the data that will be generated or collected, its origin (in
case it is collected), nature and scale and to whom it could be useful, and
whether it underpins a scientific publication. Information on the existence
(or not) of similar data and the possibilities for integration and reuse._
* _Standards and metadata_
_Reference to existing suitable standards of the discipline. If these do not
exist, an outline on how and what metadata will be created._
* _Data sharing_
_Description of how data will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling reuse, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related)._
* **Archiving and preservation (including storage and backup)** _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what is its approximated end volume, what the associated costs are and how these are planned to be covered._
Within EarthServer-2 currently, the original data are held by upstream
providers who have their own policies. In this case archiving and preservation
responsibility will remain with the upstream project.
## 1.1 Marine Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ESA OC-CCI**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA OC-CCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
ESA Ocean Colour Climate Change Indicators. http://www.esaoceancolour-
cci.org/index.php?q=webfm_send/318
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-2016
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Peter Walker ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is part of long term ESA CCI project and the original copy is maintained
there.
</td> </tr> </table>
_Table 6-1: Data set description for the ESA Ocean Colour Climate Change
Indicators._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ESA OC-CCI, version 2**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA OC-CCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The ESA Ocean Colour Climate Change Initiative provides a multi sensor long
timeseries of ocean colour parameters. These include Rrs at varying
frequencies and derived products such as Chlorophyll. These variables are
vital to understanding the health of the oceans and can be used as a
monitoring tool. As new processing systems come online and historical data go
through phased reprocessing by the data creators a new version of OCCCI is
processed.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-2016 available as daily, weekly and monthly composites
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstrea m
Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitati ons
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constrai nts
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archivi ng and preserva tion (includi ng storage and backup)
</td>
<td>
Data is part of long term ESA CCI project and the original copy is maintained
there.
</td> </tr> </table>
_Table 6-2: Data set description for the ESA Ocean Colour Climate Change,
version 2._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ESA OC-CCI, version 3**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA OC-CCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The ESA Ocean Colour Climate Change Initiative (OCCCI) provides a multi sensor
long timeseries of ocean colour parameters. These include Rrs at varying
frequencies and derived products such as Chlorophyll. These
</td> </tr>
<tr>
<td>
</td>
<td>
variables are vital to understanding the health of the oceans and can be used
as a monitoring tool. As new processing systems come online and historical
data go through phased reprocessing by the data creators a new version of
OCCCI is processed.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-2016 available as daily, weekly and monthly composites
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is part of long term ESA CCI project and the original copy is maintained
there.
</td> </tr> </table>
_Table 6-3: Data set description for the ESA Ocean Colour Climate Change,
version 3._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ESA OC-CCI, version 3.1**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA OC-CCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The ESA Ocean Colour Climate Change Initiative provides a multi sensor long
timeseries of ocean colour parameters. These include Rrs at varying
frequencies and derived products such as Chlorophyll. These variables are
vital to understanding the health of the oceans and can be used as a
monitoring tool. As new processing systems come online and historical data go
through phased reprocessing by the data creators a new version of OCCCI is
processed.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-2016 available as daily, weekly and monthly composites
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is part of long term ESA CCI project and the original copy is maintained
there.
</td> </tr> </table>
_Table 6-4: Data set description for the ESA Ocean Colour Climate Change,
version 3.1._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**OLCI - Sentinel 3 - Global**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
SENTINEL-3 Ocean and Land Colour Instrument (OLCI) sensor provides light
reflectance data and derived Chlorophyll. Data are available as single
chlorophyll coverages and aggregated coverages including all available Rrs
Bands
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2017-ongoing available as individual scenes
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is maintained in its original form by CMEMS.
</td> </tr> </table>
_Table 6-5: Data set description for the ESA Global S-3A OLCI._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**OLCI - Sentinel 3 - UK**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
SENTINEL-3 Ocean and Land Colour Instrument (OLCI) sensor provides light
reflectance data and derived Chlorophyll. Data are available as single
chlorophyll coverages and aggregated coverages including all available Rrs
Bands
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Lat(47:67) Lon(-15:13)
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2017-ongoing available as individual scenes
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is maintained in its original form by CMEMS.
</td> </tr> </table>
_Table 6-6: Data set description for the ESA S-3A UK._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**OLCI - Sentinel 3 - North Atlantic**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
SENTINEL-3 Ocean and Land Colour Instrument (OLCI) sensor provides light
reflectance data and derived Chlorophyll. Data are available as single
chlorophyll coverages and aggregated coverages including all available Rrs
Bands
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Lat(20:66) Lon(-46:13)
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2017-ongoing available as individual scenes
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Olly Clements ([email protected])
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access
URL
</td>
<td>
_http://earthserver.pml.ac.uk/rasdaman/ows? &SERVICE=WCS&VERSION _ _=2.0.1
&REQUEST=GetCapabilities _
</td> </tr>
<tr>
<td>
Archiving and preservation (including storage and backup)
</td>
<td>
Data is maintained in its original form by CMEMS.
</td> </tr> </table>
_Table 6-7: Data set description for the ESA North Atlantic S-3A OLCI._
## 1.2 Climate Science Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ECMWF ERA-interim reanalysis**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
A selection of ERA-Interim reanalysis parameters is provided. ERA-interim is a
global atmospheric reanalysis produced by ECMWF. It is the replacement of
ERA-40 and extends back to 1 Jan 1979. Reanalysis data are global data sets
describing the recent history of the atmosphere, land surface, and oceans.
Reanalysis data are used for monitoring climate change, for research and
education, and for commercial applications. Currently, five surface parameters
are available: 2m air temperature, precipitation, mean sea level pressure, sea
surface temperature, soil moisture. Further, three parameters on three
different pressure levels (500, 850 and 1000 hPa) are provided: temperature,
geopotential and relative humidity. More information to ERA-interim data is
available under http://onlinelibrary.wiley.com/doi/10.1002/qj.828/full. In
addition to these parameters, a large portion of the ERA-interim database is
also available on an "on-demand" basis through the MARS-Rasdaman connection.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -90 to 90); Spatial resolution: 0.5
x 0.5 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 Jan 1979 to 31 Dec 2015 (6-hourly resolution)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Dick Dee (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GRIB
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Stored in MARS archive - original data will be kept without time limit
</td> </tr> </table>
_Table 6-8: Data set description for the ERA-Interim reanalysis parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**GloFAS river discharge forecast data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF / JRC**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Data is part of the Global Flood Awareness System (GloFAS)
(www.globalfloods.eu). The GloFAS system produces daily
flood forecasts in a pre-operational manner. More information about the data
can be found under
http://www.hydrol-earth-syst-sci.net/17/1161/2013/hess-171161-2013.pdf
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -60 to 90); Spatial resolution: 0.1
x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 April 2008 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Florian Pappenberger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
TBD
</td> </tr> </table>
_Table 6-9: Data set description for the Global Flood Awareness System._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**ERA river discharge data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF / JRC**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 180, Latitude: -90 to 90); Spatial resolution: 0.1
x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 January 1981 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Florian Pappenberger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free, but no redistribution
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://earthserver.ecmwf.int/rasdaman/ows
</td> </tr> </table>
_Table 6-10: Data set description for the ERA river discharge data._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Global ECMWF Fire Forecasting model data, as part of the Copernicus
Emergency Management Service**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The European Forest Fire Information System (EFFIS) is currently being
developed in the framework of the
Copernicus Emergency Management Services to monitor and forecast fire danger
in Europe. The system provides timely information to civil protection
authorities in 38 nations across Europe
(http://forest.jrc.ec.europa.eu/effis/about-effis/effisnetwork/) and mostly
concentrates on flagging regions which might be at high danger of spontaneous
ignition due to persistent drought. GEFF is the modelling component of EFFIS
and implements the three most used fire danger rating systems; the US NFDRS,
the Canadian FWI and the Australian MARK-5. The dataset extends from 1980 to
date and is updated once a month when new ERA-Interim fields become available.
The following indices are available via GEFF: (i) Fire Weather Index (FWI),
(ii) Fire Danger Index (FDI) and (iii) Burning Index (BI). Further information
is available at
http://journals.ametsoc.org/doi/full/10.1175/JAMC-D-15-0297.1
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Fire Weather Index data will be made available through the OGC WCS/WCPS
standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global (Longitude: -180 to 179.297, Latitude: 89.4628 to -
89.4628); Spatial resolution: 0.703 x 0.703 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1 January 1980 up to now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Francesca Di Giuseppe (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
Available in beta version at the moment:
http://apps.ecmwf.int/datasets/data/geff-reanalysis/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Stored in MARS archive - original data will be kept without time limit
</td> </tr> </table>
_Table 6-11: Data set description for Global ECMWF Fire Forecasting model
data, as part of the Copernicus Emergency Management Service._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**CAMS Regional Air Quality - Reanalysis data**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ECMWF**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
CAMS is the Copernicus Atmosphere Monitoring Service and will deliver various
products (near-real-time, reanalysis, etc.) of European and global atmospheric
composition on an operational basis. CAMS produces daily air quality ensemble
reanalysis for the air quality parameters Particulate Matter 10 (PM10),
Particulate Matter 2.5 (PM25), Nitrogen Dioxide (NO2), and Ozone (O3).
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Europe (Longitude: -25.0 to 45.0, Latitude: 70.0 to 30.0); Spatial resolution:
0.1 x 0.1 deg
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2014 - 2016; hourly resolution
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
Stephan Siemen (ECMWF)
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
Miha Razinger (ECMWF)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
NetCDF-CF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://www.regional.atmosphere.copernicus.eu/
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is available for download at the URL provided.
</td> </tr> </table>
_Table 6-12: Data set description for CAMS Regional Air Quality - Reanalysis
data._
## 1.3 Earth Observation Data Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**MOD 04 - Aerosol Product; MOD 05 - Total Precipitable**
**Water; MOD 06 - Cloud Product; MOD 07 - Atmospheric**
**Profiles; MOD 08 - Gridded Atmospheric Product; MOD**
**11 - Land Surface Temperature and Emissivity; MOD 35 - Cloud Mask;**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NASA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
There are seven MODIS Level 3 Atmosphere Products, each covering a different
temporal scale: Daily, 8-Day, and Monthly. Each of these Level 3 products
contains statistics derived from over 100 science parameters from the Level 2
Atmosphere products: Aerosol, Precipitable Water, Cloud, and Atmospheric
Profiles. A range of statistical summaries (scalar statistics and 1- and
2-dimensional histograms) are computed, depending on the Level 2 science
parameter. Statistics are aggregated to a 1° x 1° equal-angle global grid. The
daily product contains ~700 statistical summary parameters. The 8-day and
monthly products contain ~900 statistical summary parameters.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2000 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
http://modaps.nascom.nasa.gov/services/user/
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
The distribution of the MODAPS data sets is funded by NASA's Earth-Sun System
Division (ESSD). The data are not copyrighted; however, in the event that you
publish data or results using these data, we request that you include the
following acknowledgment:
"The data used in this study were acquired as part of the NASA's Earth-Sun
System Division and archived and distributed by the MODIS Adaptive Processing
System
(MODAPS)."
We would appreciate receiving a copy of your publication, which can be
forwarded to [email protected].
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from HDF)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 MODIS Atmosphere Products
</td> </tr> </table>
_Table 6-13: Data set description for the MODIS Level 3 Atmosphere Products._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**SMOS Level 2 Soil Moisture (SMOS.MIRAS.MIR_SMUDP2); SMOS Level 2 Ocean
Salinity (SMOS.MIRAS.MIR_OSUDP2)**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
ESA's Soil Moisture Ocean Salinity (SMOS) Earth Explorer mission is a radio
telescope in orbit, but pointing back to Earth rather than space. Its Microwave
Imaging Radiometer using Aperture Synthesis (MIRAS) radiometer picks up faint
microwave emissions from Earth's surface to map levels of land soil moisture
and ocean salinity.
These are the key geophysical parameters, soil moisture for hydrology studies
and salinity for enhanced understanding of ocean circulation, both vital for
climate change models.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
12-01-2010 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://earth.esa.int/web/guest/-/revised-esa-earthobservation-data-
policy-7098
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF (generated from measurements geo-located in an equal-area grid system
ISEA 4H9)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of Level-2 SMOS Products
</td> </tr> </table>
_Table 6-14: Data set description for ESA's Soil Moisture Ocean Salinity
parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat8 L1T**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level 1 T- Terrain Corrected
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
European
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2014 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
EO-Support (https://earth.esa.int/web/guest/contact-us)
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
</td> </tr>
<tr>
<td>
License
</td>
<td>
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Acceptance of ESA Terms and Conditions 3
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
ESA is an International Co-operator with USGS for the
Landsat-8 Mission. Data is downlinked via the Kiruna and Matera (KIS and MTI)
stations whenever the satellite passes over Europe, starting from November
2013. Typically the stations will each receive 2 or 3 passes per day, and
there will be some new scenes for each path, in accordance with the overall
mission acquisition plan.
The Neustrelitz data available on the portal cover May 2013 to December 2013.
Data is processed to either the L1T or L1Gt product format as soon as it is
downlinked; the target is for scenes to be available for download within
3 hours of reception.
https://landsat8portal.eo.esa.int/faq/
</td> </tr> </table>
_Table 6-15: Data set description for Landsat8 L1T parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel2**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-1C
Feature layers (NDVI, Cloudmask, RGB)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2015 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
JPG2000 for L1C
GeoTIFF for feature layers generated from L1C
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-16: Data set description for Sentinel2 Level-1C parameters._

3: https://earth.esa.int/web/guest/terms-conditions
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel2**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-1C
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2015 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
JPG2000 / netCDF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-17: Data set description for Sentinel2 / Sentinel3 parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Sentinel3**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ESA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Level-2
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
2018 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://sentinel.esa.int/documents/247904/690755/Sentinel_Data_Legal_ Notice
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
JPG2000 / netCDF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-18: Data set description for Sentinel3 parameters._
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Hydro Estimator**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**NOAA**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
The Hydro-Estimator (H-E) uses infrared (IR) data from
NOAA's Geostationary Operational Environmental Satellites (GOES) to estimate
rainfall rates. Estimates of rainfall from satellites can provide critical
rainfall information in regions where data from gauges or radar are
unavailable or unreliable, such as over oceans or sparsely populated regions.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available through the OGC WCS/WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
22 May 2006 - today
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
https://www.star.nesdis.noaa.gov/star/productdisclaimer.php
</td> </tr>
<tr>
<td>
License
</td>
<td>
https://www.star.nesdis.noaa.gov/star/productdisclaimer.php
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
_eodataservice.org_
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
</td> </tr> </table>
_Table 6-19: Data set description for Hydro Estimator._
## 1.4 Planetary Science Data Service
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MGS MOLA GRIDDED DATA RECORDS**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
MARS ORBITER LASER ALTIMETER
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
GLOBAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
NOT APPLICABLE (Derived from multiple experimental data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-20: Data set description for Mars Orbiter LASER Altimeter data._
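Since these planetary data sets are exposed through OGC WCS/WCPS at the rasdaman endpoint above, coverages can be discovered and queried over HTTP. A minimal sketch assuming rasdaman's standard WCS 2.0 KVP binding; the coverage name in the WCPS expression is hypothetical and would be taken from the GetCapabilities response:

```python
# Minimal sketch: discover and query coverages on the PlanetServer rasdaman
# endpoint listed above. The coverage name "mola_dem" is hypothetical.
import requests

endpoint = "http://access.planetserver.eu:8080/rasdaman/ows"

# 1) Standard WCS 2.0 GetCapabilities lists the available coverages
caps = requests.get(endpoint, params={
    "service": "WCS", "version": "2.0.1", "request": "GetCapabilities"})
print(caps.status_code, caps.headers.get("Content-Type"))

# 2) Evaluate a WCPS expression (WCS processing extension): here the mean
#    value of a coverage, which returns a single number
wcps = "for c in (mola_dem) return avg(c)"
resp = requests.get(endpoint, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": wcps})
print(resp.status_code, resp.text)
```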
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MRO-M-CRISM-3-RDR-TARGETED-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
TRDR - Targeted Reduced Data Records contain data calibrated to radiance or
I/F.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Local
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Variable
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of the long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-21: Data set description for MRO-M-CRISM Targeted Reduced Data
Records._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MRO-M-CRISM-5-RDR-MULTISPECTRAL-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
MRDR - Multispectral Reduced Data Records contain multispectral survey data
calibrated, mosaicked, and map projected.
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
REGIONAL/GLOBAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
Not applicable. Derived data from multiple acquisition times.
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of the long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-22: Data set description for MRO-M-CRISM Multispectral Reduced Data
Records._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**LRO-L-LOLA-4-GDR-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
LRO LOLA Gridded Data Record
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Global
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
NOT APPLICABLE (Derived from multiple experimental data records)
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of the long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-23: Data set description for LRO LOLA gridded data._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**MEX-M-HRSC-5-REFDR-DTM-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Mars Express HRSC topography
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
LOCAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
VARIABLE
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://access.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of the long-term ESA PSA archive and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-24: Data set description for Mars Express HRSC topography
parameters._
<table>
<tr>
<th>
Data set reference and name
</th>
<th>
**CH1-ORB-L-M3-4-L2-REFLECTANCE-V1.0**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**JACOBSUNI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
Chandrayaan-1 Moon Mineralogy Mapper (M3)
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data will be made available through the OGC WCPS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
LOCAL
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
VARIABLE
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Free
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
None
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
PDS standard (GDAL-compatible .IMG or alike)
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://moon.planetserver.eu:8080/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
Data is part of the long-term NASA PDS archives and the original copies are
maintained there.
</td> </tr> </table>
_Table 6-25: Data set description for Moon Mineralogy Mapper (M3) parameters._
## 1.5 Landsat Data Cube Service
<table>
<tr>
<th>
**Data set reference and name**
</th>
<th>
**Landsat**
</th> </tr>
<tr>
<td>
Organisation
</td>
<td>
**ANU/NCI**
</td> </tr>
<tr>
<td>
Data set description
</td>
<td>
_http://geonetwork.nci.org.au/geonetwork/srv/eng/metadata.show?id=24&currTab=simple_
</td> </tr>
<tr>
<td>
Standards
</td>
<td>
Data is available at OGC WCS standard.
</td> </tr>
<tr>
<td>
Spatial extent
</td>
<td>
Longitude: 108 to 155, Latitude: -10 to -45; Universal Transverse Mercator
(UTM) and Geographic Lat-Lon
</td> </tr>
<tr>
<td>
Temporal extent
</td>
<td>
1997-now
</td> </tr>
<tr>
<td>
Project Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Upstream Contact
</td>
<td>
[email protected]_
</td> </tr>
<tr>
<td>
Limitations
</td>
<td>
None
</td> </tr>
<tr>
<td>
License
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015. Creative Commons
Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Constraints
</td>
<td>
Commonwealth of Australia (Geoscience Australia) 2015. Creative Commons
Attribution 4.0 International Australia License.
https://creativecommons.org/licenses/by/4.0/
</td> </tr>
<tr>
<td>
Data Format
</td>
<td>
GeoTIFF [NetCDF-CF conversion currently underway]
</td> </tr>
<tr>
<td>
Access URL
</td>
<td>
http://rasdaman.nci.org.au/rasdaman/ows
</td> </tr>
<tr>
<td>
Archiving and preservation
(including storage and backup)
</td>
<td>
This data collection is part of the Research Data Storage Infrastructure
program, which aims for long-term preservation.
</td> </tr> </table>
_Table 6-26: Data set description for Landsat data._
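Because the Landsat data cube is exposed via OGC WCS at the endpoint in the table, spatial subsets can be requested with a standard WCS 2.0 GetCoverage call. A minimal sketch; the coverage identifier and axis labels are hypothetical and must be taken from the GetCapabilities and DescribeCoverage responses:

```python
# Minimal sketch: request a spatial subset of the Landsat data cube from the
# rasdaman endpoint above. Coverage ID and axis labels are hypothetical.
import requests

endpoint = "http://rasdaman.nci.org.au/rasdaman/ows"

resp = requests.get(endpoint, params={
    "service": "WCS", "version": "2.0.1", "request": "GetCoverage",
    "coverageId": "LandsatSurfaceReflectance",        # hypothetical
    "subset": ["Lat(-35.5,-35.0)", "Long(149.0,149.5)"],
    "format": "image/tiff",
})
print(resp.status_code, resp.headers.get("Content-Type"))
with open("landsat_subset.tif", "wb") as f:
    f.write(resp.content)
```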
# IMPLEMENTATION
The creation of the Data Management Plan started with a discussion during the
Kick-off Meeting of the TEQ project (February 2018) with the members of the
TEQ Steering Committee present at the meeting. This discussion focused on:
* The specific data to be saved
* Where they should be saved
* Whether the partner institutions have specific regulations about data management
At the Kick-off Meeting, it was decided that the DMP would be drafted by the
Chair, based on what is written in the GA and on further discussions with other
TEQ members, and would be sent to the SC for approval before month 6.
Between month 2 and month 6, the DMP was the object of discussion among the
members and was finalized in a draft sent by the Chair to the TEQ Consortium
members for approval on June 20, 2018. The DMP was approved unanimously in an
eVote by the TEQ Steering Committee members on June 27, 2018.
As described in the Data Management Plan, Consortium members have created
online repositories to store their data and metadata. Below are some examples
of repositories (home pages) of TEQ member institutions: University College
London (Figure 1), Technische Universiteit Delft (Figure 2), University of
Southampton (Figure 3).
**Figure 1** : The online repository of the University College London
**Figure 2** : The online repository of the Technische Universiteit Delft
**Figure 3:** The online repository of the University of Southampton
As specified in the DMP attached, project data will be collected and
catalogued, with specific information given about: data-set reference and
name, description of data, standards, and associated metadata. Below is an
example of a dataset in the repository of Queen's University Belfast.
**Figure 4**: Screenshot of an example of one of the data-sets in the QUB's
repository.
As mentioned in the DMP, TEQ-credited publications will be made available and
accessible through the TEQ website in the section _Publications_ , as shown in
Figure 5. In the members-only part of the TEQ website, a detailed list of all
the publications will be made available (Figure 6). Moreover, a similar table
will be provided for all the preprints, as shown in Figure 7. All the above-
mentioned information is downloadable from the TEQ Website (for members only).
**Figure 5**: The Publications section on the TEQ website.
**Figure 6** : Part of the table reporting the publications accessible from
the Members Area on the TEQ Website.
**Figure 7** : Part of the table reporting the preprints of the TEQ
publications.
# TIMETABLE
The DMP will be updated, whenever requested by one of the TEQ partners (with
written request to the PI), upon approval of the SC.
# ISSUES MET AND SOLUTIONS
No issues were encountered in the achievement of this deliverable.
# CONCLUSION
Open-source software and components will be available when produced, as well
as experimental data for replication of experiments. Research publications
will be openly accessible. All project partners have created on-line
repositories for their sharable data for reproduction, access, mining,
exploitation.
# Introduction to ACTRIS Data Centre
ACTRIS-2 (Aerosols, Clouds, and Trace gases Research InfraStructure)
Integrating Activity (IA) addresses the scope of integrating state-of-the-art
European ground-based stations for long-term observations of aerosols, clouds
and short lived gases. ACTRIS-2 is a unique research infrastructure improving
the quality of atmospheric observations, developing new methods and protocols,
and harmonizing existing observations of the atmospheric variables listed in
Appendix I.
The overall goal of the ACTRIS Data Centre is to provide scientists and other
user groups with free and open access to all ACTRIS infrastructure data,
complemented with access to innovative and mature data products, together with
tools for quality assurance (QA), data analysis and research.
The numerous measurement methodologies applied in ACTRIS result in a
considerable diversity of the data collected. In accordance with these
requirements, the ACTRIS Data Centre consists of three topical data
repositories archiving the measurement data, which are all linked through the
ACTRIS data portal to provide a single access point to all data. Hence, the
ACTRIS Data Centre is founded on 3 topical data repositories:
* In situ aerosol and trace gas data are reported to EBAS: _http://ebas.nilu.no/_
* Aerosol remote sensing data are reported to the EARLINET Data base: _http://access.earlinet.org/EARLINET/_
* Cloud remote sensing data are reported to the Cloudnet data base: _http://cloudnet.fmi.fi/_
In addition, AERIS contributes with the production and provision of satellite
data that complement the ACTRIS ground-based data:
_http://www.icare.univ-lille1.fr/catalogue_ .
Generally, the ACTRIS Data Centre and data management activity aim to work in
accordance with the ENVRI Reference Model, hosted at _www.envri.eu/rm_ .
# ACTRIS data set descriptions and ACTRIS data levels
ACTRIS data sets are atmospheric variables listed in Appendix I, measured with
the corresponding recommended methodology. **ACTRIS data** comprises ACTRIS
variables resulting from measurements that fully comply with the standard
operating procedures (SOP), measurement recommendations, and quality
guidelines established within ACTRIS. Furthermore, the data are qualified as
ACTRIS data sets only if they comply with the additional requirements
specified in sections 2.1-2.3. There are four levels of ACTRIS data:

* **ACTRIS level 0 data:** Raw sensor output, either in mV or physical units. Native resolution, with the metadata necessary for the next level.
* **ACTRIS level 1 data:** Calibrated and quality-assured data with a minimum level of quality control.
* **ACTRIS level 2 data:** Approved and fully quality-controlled ACTRIS data product or geophysical variable.
* **ACTRIS level 3 data:** Elaborated ACTRIS data products derived by post-processing of ACTRIS level 0-1-2 data, and data from other sources. The data can be gridded or not.
* **ACTRIS synthesis product:** Data product from e.g. research activities, not under direct ACTRIS responsibility, but for which ACTRIS offers a repository and access.
The list of variables is expected to grow during the course of ACTRIS,
particularly for level 3 data products. During ACTRIS-2, for example, the
aerosol and cloud databases will be augmented with new classification products
developed through the combination of existing sensors with additional
instrumentation, and with products providing information about aerosol
layering and typing, together with advanced products derived from long-term
series or special case analyses. In addition, new parameters utilising these
products will also be prepared, and standardized pre-processed lidar data and
NRT optical property profiles will be made available.
## Aerosol and trace gas in situ data sets
Aerosol and trace gas in situ data are qualified as ACTRIS data only if
* The atmospheric variables are included in the list in Appendix I
* The applied procedures comply with the standard operating procedures (SOP), and measurement recommendations and guidelines provided by the ACTRIS in situ community, available from _http://actris.nilu.no/Content/SOP_ . See section 4.1 of this document for more details.
* The measurement data are submitted to the topic data base EBAS by using the reporting templates and procedures recommended by the ACTRIS in situ community, and available at _http://ebas-submit.nilu.no_
Datasets fulfilling the requirements above qualify for the “ACTRIS” in situ
data set label. The types of variables are expected to expand during ACTRIS-2.
The data can in addition be associated with other programs and frameworks such
as GAW, EMEP, and national EPA etc. The data originator determines other
project associations.
Standard collection and reporting procedure for aerosol and trace gas in situ
measurement data:
* Deadline for reporting data is 31 May of the year following the reported measurements
* Data are submitted using the EBAS submit tool ( _http://ebas-submit.nilu.no/Submit-Data/DataReporting/tools_ ). This is a web-based tool to check file formats and metadata.
* An auto-generated e-mail is sent to the data submitter to confirm that the data is received
* After submission, the data undergo an automatic format, NASA-Ames 1001, and metadata check, followed by manual inspection.
* If the data file is accepted, data are imported to EBAS, and feedback is given to the data originator. If there are suspicious data (e.g. suspicious data points/outliers) or format errors (in e.g. metadata, formats, etc.) the data originator is contacted and asked to assess, correct, and re-submit data.
* Data originators are asked about their project affiliation with collaborating networks and frameworks (EMEP, GAW-WDCA etc.)
* Trace gas data is made available to GAW-WDCRG; aerosol data are made available to GAW-WDCA.
* Near-real-time (NRT) data collection is set up and the raw data are auto-processed to hourly averages
## Aerosol remote sensing data sets
Aerosol profile data are qualified as ACTRIS data only if
* The atmospheric profile variables are included in the list in Appendix I.
* The applied procedures comply with the recommendations and procedures provided by the ACTRIS profile community available from here _http://actris.nilu.no/Content/SOP_ , harmonised with EARLINET. See section 4.2 of this document for more details.
* The data are reported to the EARLINET DB in accordance with the reporting procedures (available at _http://www.earlinet.org_ / ).
Standard collection and reporting procedure for aerosol profile data:
* Data originators have the possibility to use, in addition to their own quality-assured method, the common standardized automatic analysis software developed within EARLINET, namely the Single Calculus Chain (SCC), for analysing their own lidar data to obtain optical properties from raw data, passing through pre-processed data.
* New data shall be uploaded to the EARLINET DB within 3 months after measurement by data originator as preliminary data. These data are automatically available to all internal ACTRIS/EARLINET users.
* Automatic quality control procedures are applied to data during the submission from the data originator to the ACTRIS/EARLINET database. Only data compliant to the QC are uploaded on the database, while a message reporting the incurred problems is provided to the data originator.
* Every 3 months further quality control procedures are run on the data. Data compliant also to these QCs are included in a list of files with the highest QC score (QC 2.0), while the ones not passing these QCs are QC 1.0 files.
* Data are made publicly available when the data originator enables this for the files, typically within 1 year from the data submission, but not before all the QC procedures have been run on the files.
All documentation related to the QC procedures currently applied to the ACTRIS
remote sensing profiles, and to the history of these procedures, is available
at _https://www.earlinet.org/index.php?id=125_ .
The lists of files compliant with the different levels of QC are reported at
the same address.
At the beginning of the ACTRIS-2 project, the aerosol vertical profile
database contained aerosol optical property profiles. By the end of the
ACTRIS-2 project, it will be augmented with more products, also providing
information about layering and typing. In addition, standardized pre-processed
lidar data and NRT optical property profiles will be available. In the process
of reaching this goal, some of the products are already available even if not
directly on the topical data centre: with Data Originator consensus, level 1
data are made available to the AERIS/ICARE service for combined data product
retrieval, and level 1 and level 1.5 data to modelling groups involved in
JRA3. Finally, quicklook images are currently available within the
ACTRIS/EARLINET community for testing this facility before opening it to the
public.
## Cloud remote sensing data sets
Cloud profile data are qualified as ACTRIS data only if
* The atmospheric profile variables are included in the list in Appendix 1
* The processing applied complies with the procedures and recommendations provided by the ACTRIS community harmonised with Cloudnet.
* The data are reported to the Cloudnet DB in accordance with the reporting procedures
Standard collection and reporting procedure for cloud profile data
* Utilise the Cloudnet processing scheme.
* Preliminary data is accessible immediately to the community and public on insertion into the Cloudnet DB, together with a statement of their appropriateness and validity for use.
* All data undergo an approval process for final publishing, with full periodic calibration assessment and approval by an expert panel.
* Selected variables are provided in NRT for the purposes of assimilation and NRT evaluation of NWP model data.
## ACTRIS level 3 data products and digital data tools
ACTRIS level 3 data are elaborated ACTRIS data products derived by
post-processing of ACTRIS level 0-1-2 data, as described in sections 2.1-2.3,
and data from other sources. ACTRIS level 3 data and project data tools can
also include codes, algorithms and software used to generate ACTRIS data,
level 0 to level 3. Whereas level 0-1-2 datasets are regularly updated, mainly
due to the collection of new measurements and extension of the time series,
level 3 datasets are not updated regularly. Level 3 data are usually the
result of targeted analyses, special studies, case studies, or processing for
model experiments, including work performed under ACTRIS Joint Research
Activities and Transnational Access. The next section gives some examples.
### Advanced products based on aerosol and trace gas in situ data sets
Advanced products based on aerosol and trace gas in situ data sets will be
developed in collaboration with joint research activities and in accordance
with other scientific requests during the project. Standard advanced products
can include typically aggregated data such as daily, monthly or annual means
of selected variables. Furthermore, the potential of long-term high quality
ACTRIS-2 data for understanding of trends in atmospheric composition is
further developed. A methodology will be put in place to analyse and produce
site-specific and regional trends. Suitable in situ variables are particle
size, and particle optical properties. Additionally, online QA tools and
products are offered for checking the consistency of the data sets in terms of
ratios between specific trace gases, and closure tests between aerosol
variables from different instruments.
### Advanced products based on aerosol remote sensing data sets
Advanced data products will be designed case by case following specific needs,
as they are the results of specific studies. Advanced data are stored and made
freely available in the EARLINET database as advanced products. These are the
results of dedicated (typically published) studies. Standard advanced products
include climatological products from long-term observations. Further advanced
products can be the results of JRA as microphysical aerosol products based on
inversion of multi-channel lidar data, and microphysical aerosol products from
combined lidar and sun-photometer observations. In particular, ICARE will
automatically process raw lidar data from the EARLINET DB, combined with
coincident AERONET data, using the GARRLiC (Generalized Aerosol Retrieval from
Radiometer and Lidar Combined data) algorithm to retrieve vertical profiles of
aerosol properties.
Currently, two advanced product datasets are available for the aerosol remote
sensing component: a dataset on aerosol masking and typing for the case of the
2010 Eyjafjallajökull volcanic cloud, and the EARLINET 72-hour operativity
exercise dataset.
### Advanced products based on cloud profile data sets
Advanced data products prepared automatically by the Cloudnet processing
scheme include model evaluation datasets and diurnal/seasonal composites. In
addition, advanced classification and products will be available from certain
sites, and from campaigns, where additional instruments and products are
combined.
### Data sets resulting from combined activities with external data providers
The ICARE data centre routinely collects and produces various satellite data
sets and model analyses that are used either in support of ground-based data
analysis or in combination with ground-based data to generate advanced derived
products. These data sets will be channelled to the ACTRIS portal using
co-location and extraction/subsetting tools.
## The ACTRIS user community
The ACTRIS user community can be classified as primary users (direct users of
ACTRIS data, data products and services) and secondary users (using results
from primary users, e.g. from international data centres). These are both
internal and external users. In general, the user community can be summarized
into five groups:
1. **Atmospheric science research community.** Together with atmospheric chemistry and physics, this also includes climate change research and meteorology, as well as multidisciplinary research combining these aspects (such as air quality, and climate interactions with links between aerosols, clouds and weather).
2. **Research communities in neighbouring fields of research.** These are environmental and ecosystem science, marine science, geosciences/geophysics, space physics, biodiversity, health and energy research. These communities will benefit from ACTRIS through the long-term provision of high-quality data products and through the enhanced capacity to perform interdisciplinary research.
3. **Operational observation and data management.** This community includes international data centres and international programmes to which ACTRIS contributes via the provision of long-term and consistent high-quality data products. Many research programmes and operational services (such as the Copernicus Atmosphere Monitoring and Climate Services) use ACTRIS to produce reliable data.
4. **Industry and private sector users** . These benefit from the services and high quality standards of the ACTRIS Calibration Centres, and from the free and open access to data products.
5. **Legislative / policy making community** . This include the user groups within climate, air quality and environmental issues including actors from local organisations, through national governments, to international conventions and treaties (including IPCC and UNFCCC, and UNECE-CLRTAP via the link to EMEP). This user community uses ACTRIS research results to define, update and enhance knowledge for decision making, policy topic preparation and drafting response and mitigation policies.
# ACTRIS data set references and names
ACTRIS works towards establishing traceability for all applicable variables.
In collaboration with partners in the ENVRI plus project, ACTRIS is working
towards use of digital object identifiers (DOIs), in order to assure proper
attribution is given to data originators adequately reflecting their
contributions.
Generally, ACTRIS data set names aim to be compliant with CF (Climate and
Forecast) conventions. In the case where no standard CF names are defined, an
application will be sent to establish these.
## Aerosol and trace gas in situ data set references and names
The in situ data set names are listed in Appendix I. For most in situ
variables, ACTRIS data are traceable from the final data product back to the
time of measurement. Traceability is implemented by a series of data levels
leading from curated, instrument specific raw data to the final, automatically
and manually quality assured data product. Processing steps between data
levels are documented by SOPs.
All submissions of in situ data passing quality assurance are uniquely
identified in the EBAS database with unique dataset identity numbers
(ID-numbers). In case of updates, a new ID-number is generated, and previous
data versions are kept available upon request while the latest version is
served through the database web-interface. Defined requests from the data
holdings are identified in the web-interface by unique URLs that allow
external links to the data.
## Aerosol remote sensing data set references and names
The aerosol profile data set names are listed in Appendix I. The use of SCC
allows the full traceability of the data: SSC converts individual instrument
raw signals into standardized and quality-assured preprocessed lidar data. The
SCC tool will be used to develop a harmonised network-wide, open and freely
accessible quicklook database (high-resolution images of time-height cross
sections). The standardized pre-processed data will also serve as input for
any further processing of lidar data, within the SCC as well as in other
processing algorithms (e.g., combined retrievals with sun photometer, combined
retrievals with Cloudnet).
All aerosol profiles pass through manual and/or automatic quality check
inspections, leading to biannual final publication of a quality-checked data
collection with DOI assignment. The DOI is assigned through publication on
the CERA database. In case of updates, only the latest version of data is
available at _http://access.earlinet.org_ and a new collection of data (with
new DOI) is published. Previous data versions are kept available. The
versioning of the EARLINET database is currently in a new design phase:
different versions of data, resulting from different processing and different
QC procedures, will be available. The new design will allow different versions
of data to be available simultaneously and all quality control procedures to
be tracked.
## Cloud profiles
The cloud profile data set names are listed in Appendix I. The common use of
the Cloudnet processing scheme ensures full traceability of the data from raw
individual instrument measurements through to a combined standardised and
quality-assured processed data set. The Cloudnet processing scheme ensures
harmonisation of products across a relatively heterogeneous network. All
quicklooks are open and freely accessible at
_http://cloudnet.fmi.fi/quicklooks/_
It is envisaged that publication of curated datasets with DOI assignment will
commence as soon as possible. Currently, only the latest data version is
available through _http://cloudnet.fmi.fi/_ due to the large data volume
requirements.
# ACTRIS Standards and metadata
ACTRIS standards and metadata systems are well-developed, with variable
standardization already existing in most cases. If this is not the case,
ACTRIS, as a leading community in this field of atmospheric science, will work
in collaboration with WMO-GAW, EMEP and other EU-funded projects (such as
ENVRI plus ) in order to set the standards and foster interoperability
between both the large variety of data products developed with ACTRIS itself,
and with respect to external data centres.
## Standard operating procedures, recommendations and metadata for aerosol and trace gas in situ data
All aerosol and trace gas in situ data sets are archived and provided in the
NASA-Ames 1001 format.
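As a brief illustration of the NASA-Ames 1001 layout: the first header line gives the number of header lines (NLHEAD) and the format index (FFI, here 1001), and the data records follow the header. A minimal reader sketch; the file path is hypothetical, and a production reader would also parse variable names, scale factors and missing-value codes from the header:

```python
# Minimal sketch of reading a NASA-Ames 1001 file as used by EBAS.
# The path is hypothetical.
def read_nasa_ames_1001(path):
    with open(path) as f:
        lines = f.readlines()
    nlhead, ffi = (int(x) for x in lines[0].split()[:2])
    assert ffi == 1001, "not a NASA-Ames 1001 file"
    header, records = lines[:nlhead], lines[nlhead:]
    # each data record: the independent variable followed by measured values
    data = [[float(x) for x in ln.split()] for ln in records if ln.strip()]
    return header, data

header, data = read_nasa_ames_1001("ebas_station_year.nas")
print(header[1].strip())       # second header line: data originator (ONAME)
print(len(data), "data records")
```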
### Regular quality-assured data
Standards, SOPs and recommendations for each in situ variable measured within
ACTRIS are listed here for aerosols:
_http://actris.nilu.no/Content/?pageid=13d5615569b04814a6483f13bea96986_ and
here for trace gases
_http://actris.nilu.no/Content/?pageid=68159644c2c04d648ce41536297f5b93_ and
made publicly available to all.
_**Metadata:** _ A comprehensive metadata system and description of each
ACTRIS in situ variable is implemented in the topic data base EBAS. All ACTRIS
in situ variables are reported to EBAS by using the reporting templates
recommended by the ACTRIS in situ community, harmonized with
GAWrecommendations. The templates ensure that the measurements are reported in
accordance with the procedures for the employed instrument, and include all
the necessary metadata required to precisely describe the measurements,
including uncertainty/percentiles. In this way, all ACTRIS in situ data are
accompanied by a sufficient documentation of the measurements to have in-depth
information on the quality of the data. Information about the reporting
procedure and metadata items are open accessible and available through
_http://ebas-submit.nilu.no_ . Metadata are interconnected with GAWSIS and
the ACTRIS data centre handling of metadata is INSPIRE and WIS-ready.
### Near-real-time (NRT) data
Near-real-time (NRT) data flow is offered to the data originators as a daily
quality check for selected variables, with the possibility for an alert system
for outliers, instrumental failures and inconsistencies. NRT data collection
and dissemination is available for the in situ ACTRIS observables as
identified in Appendix I.
Participating stations submit their data as annotated raw data in hourly
submissions starting and ending at the turn of an hour. As an exception,
3-hourly submissions are accepted if indicated by limited connectivity with
the station. The raw data are auto-processed to hourly averages, while periods
with obvious instrument malfunctions are disregarded. Special sampling
conditions or transport episodes are not flagged. The processed NRT data are
available through the EBAS web-interface or through autoupdated custom FTP
extracts.
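As an illustration of the hourly aggregation described above, a minimal pandas sketch: periods flagged as instrument malfunction are discarded and the remaining raw values are averaged to hourly means. The column names and flagging scheme are hypothetical simplifications of the EBAS processing:

```python
# Minimal sketch: aggregate flagged raw NRT measurements to hourly averages.
import pandas as pd

raw = pd.DataFrame({
    "time": pd.date_range("2017-06-01 00:00", periods=360, freq="min"),
    "value": 1.0,           # placeholder measurement values
    "malfunction": False,   # placeholder instrument-malfunction flag
}).set_index("time")

valid = raw[~raw["malfunction"]]        # disregard flagged periods
hourly = valid["value"].resample("1h").mean()
print(hourly.head())
```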
## Standards and metadata for aerosol profiles
Aerosol profile data are archived and provided in netCDF format. All published
EARLINET data are in a CF (Climate and Forecast) 1.5 compliant format. A
migration of all the data to this convention is planned.
Standards, SOPs and recommendations for aerosol profile data measured within
ACTRIS are listed here:
_http://actris.nilu.no/Content/?pageid=37df0131f7384f70a668e48f4e593278_
_**Metadata:** _ All aerosol profile data are accompanied by respective
metadata reporting information about the station, the system, and the timing
of the measurements. Aerosol profile data sets reported to the ACTRIS data
centre can be the results of regular operation of the EARLINET network, but
also related to specific campaigns and joint research activities. Homogeneous
and well-established quality of data originating from different systems is
assured through a rigorous quality assurance program addressing both
instrument performance and evaluation of the algorithms. Information about the
QA program is summarized in Pappalardo et al., AMT, 2014, and is open and
freely available at
_http://www.atmos-meas-tech.net/7/2389/2014/amt-7-2389-2014.html_ . The
ACTRIS-2 improvement of the SCC is a step forward towards complete
harmonization of the aerosol profile data quality.
First quality control procedures have been developed in NA2 in collaboration
with the data centre for checking technical consistency with database rules
and format, and for checking the consistency of the optical properties, also
through comparison with climatological data. All QC tools are available to all
potential contributors of ACTRIS database, both internal and external. The SCC
is currently available to all ACTRIS aerosol remote sensing data originators.
Some collaborations with external users already exist and the SCC will be
opened to the external users.
### Near-real-time (NRT) data
A standardized and harmonized quicklook interface has been developed for an
open and freely accessible quicklook database under WP2 and has been made
internally available in May 2017. After a testing period, some visualization
issues are being fixed. Aerosol remote sensing quicklooks will be made
operational and available through the ACTRIS data portal by April 2018.
Apart from the quicklook images, numerical near-real-time data will soon be
made available to specific users thanks to a Data Distribution Consensus form
set up by the data centre and filled in by station PIs.
## Standards and metadata for cloud profiles
### Quality-assured data
Cloud profiles are archived and provided in netCDF format, with CF–compliant
metadata.
The baseline SOPs and recommendations for Cloudnet variables are given in
Illingworth et al. (2007), with updates given in ACTRIS-FP7 Deliverable D5.10:
<table>
<tr>
<th>
**Variable**
</th>
<th>
**Reference SOP and recommendations**
</th> </tr>
<tr>
<td>
Cloud and aerosol target classification
</td>
<td>
Illingworth et al., BAMS, 2007
</td> </tr>
<tr>
<td>
Drizzle products
</td>
<td>
ACTRIS-FP7 Deliverable D5.7, see also O’Connor et al., JTECH, 2005
</td> </tr>
<tr>
<td>
Ice water content
</td>
<td>
Hogan et al., JAMC, 2006
</td> </tr>
<tr>
<td>
Liquid water content
</td>
<td>
Illingworth et al., BAMS, 2007
</td> </tr>
<tr>
<td>
Liquid water path
</td>
<td>
MWRNET, _http://cetemps.aquila.infn.it/mwrnet/_ see also Gaussiat et al.,
JTECH, 2007
</td> </tr>
<tr>
<td>
Higher-level metrics
</td>
<td>
ACTRIS-FP7 Deliverable D5.10
</td> </tr> </table>
_**Metadata:** _ Cloud profile data are accompanied by metadata describing the
station, instrument combination and supporting ancillary measurements, and
processing software version. Metadata describing instrument calibration
history will be implemented within ACTRIS-2. Harmonization and rigorous
quality control for data originating from different instruments and instrument
combinations is achieved through the common use of the Cloudnet processing
software, summarised in Illingworth et al. (2007). All metadata is propagated
through to every cloud product derived from the measurements; this requirement
will be mandated for all new products derived during ACTRIS-2. The Cloudnet
processing scheme, and the interface description for generating new products,
is freely available for all potential users of ACTRIS data, whether internal
or external.
### Near-real-time (NRT) data
All cloud NRT data is processed in the same manner as for quality-assured
data, together with all accompanying metadata. However, subsequent instrument
calibration may require reprocessing to generate a revised product, which uses
the updated calibration values.
# Sharing of ACTRIS data sets and data products
## Access to ACTRIS data sets and data products
The ACTRIS Data Centre compiles, archives and provides access to all ACTRIS
data, and the ACTRIS data portal ( _http://actris.nilu.no_ ) gives free and
open access to data resulting from the activities of the ACTRIS
infrastructure, including advanced data products resulting from ACTRIS
research activities. Every dataset created within ACTRIS is owned by the
ACTRIS partner(s) who created this dataset. The _ACTRIS Data Policy_
( _http://actris.nilu.no/Content/Documents/DataPolicy.pdf_ ) regulates the
sharing and use of ACTRIS data; see section 5.3.
The ACTRIS data portal ( _http://actris.nilu.no_ ) provides access to ACTRIS
data sets. This is a virtual research environment with access to all data from
ACTRIS platforms and higher level data products resulting from scientific
activities. The portal is structured as a metadata catalogue, searching the
topical data bases, enabling data download from the primary archive and
combination of data across the primary data repositories. The metadata
catalogue is updated every night, providing access to all recent ACTRIS data.
All data are archived in the topical data repositories, to 1) maintain access
to the last version of data, 2) avoid duplication and 3) keep full
traceability of the data sets.
### Aerosol and trace gas in situ data repository
The ACTRIS data repository for all aerosol and trace gas in situ data is EBAS
( _http://ebas.nilu.no_ ). The web portal runs on a dedicated Linux server and
is implemented in Python. EBAS is an atmospheric database infrastructure where
open access to research data has been developed over almost 45 years; the data
infrastructure is developed, operated, and maintained by NILU - Norwegian
Institute for Air Research. The main objective of EBAS is to
handle, store and disseminate atmospheric composition data generated by
international and national frameworks to various types of user communities.
Currently, EBAS is a data repository for ACTRIS, and also hosts the World Data
Centre for Aerosols under the WMO Global Atmosphere Watch (GAW) and data from
European Monitoring and Evaluation Programme (EMEP) under the UN Convention on
Long-Range Transboundary Air Pollution (CLRTAP), among other frameworks and
programmes.
No embargo times apply to these data; all data is reported to EBAS as early as
possible, and no later than 31 July of the year following the measurement. The
data sets are made available to all users as soon as possible after quality
control and quality assurance.
### Aerosol profile data repository
The ACTRIS data repository for all aerosol profile data is
_http://access.earlinet.org_ . The aerosol profile database is hosted,
maintained and operated by CNR-IMAA (National Research Council-Institute of
Methodologies for Environmental Analysis) where the Single Calculus Chain for
the automatic processing of lidar data for aerosol optical properties
retrieval was designed, optimized and operated for the whole network. CNR-IMAA
hosts different advanced products developed by EARLINET in the past for
providing access to external users (volcanic eruption products, satellite
validation datasets and NRT EARLINET subsets).
Aerosol profile data are regularly published in the CERA database, following
the first publications of the EARLINET database. This assures the
discoverability of the data through the association of a DOI with the data and
the archiving on CERA, a recognized official repository. A different data
granularity is under investigation for the future, to allow better recognition
of the individual stations.
The ACTRIS/EARLINET database is also accessible through THREDDS (Thematic
Real-time Environmental Distributed Data Services).
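Since THREDDS servers typically expose an OPeNDAP service, a profile dataset could be opened remotely along the following lines; the dataset URL is hypothetical, and real paths would be discovered via the THREDDS catalogue:

```python
# Minimal sketch: open an EARLINET netCDF profile through THREDDS/OPeNDAP.
# The dataset URL is hypothetical.
import xarray as xr

url = "http://access.earlinet.org/thredds/dodsC/earlinet/example_profile.nc"
ds = xr.open_dataset(url)           # lazy, network-backed access
print(ds.attrs.get("Conventions"))  # EARLINET files follow CF conventions
```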
### Cloud profile data repository
The ACTRIS data repository for all cloud profile data is
_http://cloudnet.fmi.fi_ . The cloud profile database is currently hosted,
maintained and operated by FMI (Finnish Meteorological Institute). The
database provides the capability for both in-house processing of instrument
data, and collection of on-site processed data through distributed use of the
Cloudnet processing scheme. Both NRT access (e.g. model evaluation) and full
quality-assured archived data access is available for internal and external
users.
No embargo is applied to data quicklooks, available in NRT when possible. An
embargo is generally only applied to data when a site is in testing mode (new
instrumentation or re-calibration of existing instrumentation). Otherwise all
data sets are immediately available in NRT-mode (no QA) or as soon as quality
control/assurance has been applied. During the course of ACTRIS-2 quality-
assured archived datasets will be published in a recognized official
repository with an associated DOI.
## Access to level 3 data and combined data products
ACTRIS level 3 data sets are stored in a dedicated catalogue in the ACTRIS Data
Centre or specified in the ACTRIS topical databases to provide long-term
access for all users. Access to these data sets and products is made available
through the ACTRIS data portal: _http://actris.nilu.no_ .
The ICARE Data and Services Centre is hosted by the University of Lille in
partnership with CNRS and CNES. ICARE routinely collects various data sets
from third party data providers (e.g., space agencies, meteorological
agencies, ground-based observation stations) and generates a large number of
derived products. All data sets are available for download at
_http://www.icare.univ-lille1.fr/catalogue_ through direct FTP access or web-
based services, upon receipt or upon production, some of them in NRT. In
addition, ICARE provides visualisation and analysis tools (e.g.
_http://www.icare.univ-lille1.fr/browse_ ), and tools to co-locate and
subset data sets in the vicinity of ground-based observation networks
( _http://www.icare.univ-lille1.fr/extract_ ). Existing tools will be fine-
tuned to meet specific ACTRIS requirements. Access to selected data and
services will be facilitated through the ACTRIS portal.
No embargo is applied to data quicklooks. Most data sets are freely available
for download upon registration. Some restrictions in data access or data use
may be inherited from original data providers or algorithm PIs for
experimental products generated at ICARE.
## The ACTRIS Data Policy
The ACTRIS Data Policy regulates the sharing of ACTRIS data and includes
information on dissemination, sharing and access procedures for various types
of data and various user groups. The ACTRIS Data Policy is publicly
available from the ACTRIS web site, from the ACTRIS Data Centre, and here:
_http://actris.nilu.no/Content/Documents/DataPolicy.pdf_
The 1st version of the ACTRIS Data Policy was established under ACTRIS-FP7 in
June 2012. The 2nd version was approved by the ACTRIS-2 SSC in September 2015.
# Archiving and preservation of ACTRIS data sets
The main structure and installations of the ACTRIS Data Centre is located at
_NILU - Norwegian Institute for Air Research_ , Kjeller, Norway. NILU hosts
EBAS archiving all in situ data sets, in addition to the ACTRIS Data Portal.
The other installations are the EARLINET DB at the _National Research Council -
Institute of Methodologies for Environmental Analysis_ (CNR-IMAA), Tito Scalo,
Potenza, Italy, the
satellite data components at the _University of Lille_ , Villeneuve d'Ascq,
France, and the cloud profile data in the Cloudnet DB at the _Finnish
Meteorological Institute_ in Helsinki, Finland.
## Aerosol and trace gas in situ data
EBAS is a relational database (Sybase) developed in the mid-1990s. Data from
primary projects and programmes, such as ACTRIS, GAW-WDCA, EMEP, AMAP, are
physically stored in EBAS. All data in EBAS are, in addition, stored at a
dedicated disk in the file tree at NILU. This includes levels 0-1-2 of the
data.
The complete data system is backed up regularly. This includes incremental
backups of the database 6 times per week, and one weekly backup of the full
database to a server in a neighbouring building, to ensure as complete as
possible storage of all data for future use in case of e.g. fire or other
damage to the physical construction. File submission is conducted by FTP. A
separate FTP area is allocated to incoming files, all activities herein are
logged in a separate log file, and the area is backed up every two hours. An
alert system is implemented to issue warning messages if there are problems
during file transfer from the data originators to the data centre.
Ca. 455 separate new comprehensive files, including metadata, with annual time
series of medium to high time resolution (seconds to weeks) are expected per
year. A significant growth in this number is not expected on an annual scale.
In total this will sum up to ca. 10 GB/year from ca. 150,000 single-column
files, including both raw data and auxiliary parameters.
EBAS is based on more than 40 years of data management. For the last 10 years
there has been European project-type cooperation from FP5 to Horizon 2020,
with the EMEP and GAW programmes, running since the 1970s, as the foundation.
Sharing visions and goals with the supporting long-term policy-driven
frameworks has ensured long-term funding for the core database infrastructure.
A long-term strategy for providing access to all ACTRIS data and other related
services is in progress through the establishment of ACTRIS as an RI. ACTRIS
is on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap
for Research Infrastructures, and a preparatory phase project is ongoing.
## Aerosol profiles
The storage infrastructure is composed of three servers and two different SANs
(Storage Area Networks). One server hosts the EARLINET PostgreSQL database,
and the other is used to interface both end users and data submitters to the
EARLINET database. This second server is connected to the operational SAN, on
which the data submitted by users are safely stored. A daily backup of the
EARLINET database is made automatically and stored on the second, backup SAN.
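The automated daily backup can be sketched with PostgreSQL's standard pg_dump utility. This is an illustrative sketch only; the database name, backup mount point, and file naming are assumptions, not the CNR configuration.

```python
import datetime
import subprocess
from pathlib import Path

# Hypothetical names; the real database and SAN mount point differ.
DB_NAME = "earlinet"
BACKUP_DIR = Path("/mnt/backup_san/earlinet")

def nightly_dump():
    """Dump the database to the backup SAN, one compressed file per day."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    target = BACKUP_DIR / f"{DB_NAME}_{stamp}.dump"
    # pg_dump's custom format (-Fc) is compressed and restorable via pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", str(target), DB_NAME], check=True)
    return target

if __name__ == "__main__":
    print("wrote", nightly_dump())
```

Scheduled once per day (e.g. from cron), this yields the dated dump files the text describes.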
The whole EARLINET database is also accessible through THREDDS (Thematic Real-
time Environmental Distributed Data Services), which is installed on a third
server. On the same server, a CAS (Central Authentication Service) is
configured to authenticate all EARLINET users centrally.
The current size of the PostgreSQL EARLINET database is about 1 GB. The total
amount of data submitted (NetCDF EARLINET files) is about 1.3 GB. At the
current rate, the database is estimated to grow by 100-200 MB/year. However, a
significant growth in the number of files to be collected is expected because
of: the use of the SCC (Single Calculus Chain) for data submission; the
inclusion of new products (preprocessed data, NRT optical properties,
profiles, aerosol layer properties, and multi-wavelength datasets); an
increase in the number of EARLINET stations; and an increase in the number of
EARLINET h24 (24-hour operation) stations. We estimate that by the end of the
ACTRIS-2 project, the ACTRIS aerosol profile database could grow at a rate of
about 12-15 GB per year.
The SCC is part of the EARLINET data centre and is the standard EARLINET tool
for the automatic analysis of lidar data. Three additional servers are needed
to provide this service: a calculus server on which all the SCC calculus
modules are installed and run, a MySQL database where all the analysis
metadata are stored in a fully traceable way, and finally a web interface
allowing users to access the SCC.
The EARLINET database and the SCC are maintained by the National Research
Council of Italy with a long-term commitment to archiving and preservation.
Archiving in the CERA database is a further measure to assure the availability
of the data through redundancy of the archive.
## Cloud profiles
The Cloudnet database is a file-based database, due to the nature of the
typical use case and data volume. The infrastructure comprises an FTP server
for incoming data streams, an rsync server for outgoing data streams, a
processing server, and a webserver, with data storage distributed across a
series of virtual file systems including incremental backups. Due to the data
volume, most sites also hold a copy of their own processed data, effectively
acting as a second, distributed database and additional backup.
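A sketch of how a site might mirror processed products from the rsync server, and thereby act as the distributed second copy described above; the remote module name and local path are invented placeholders, not the actual Cloudnet endpoints.

```python
import subprocess

# Hypothetical endpoint and target; the real Cloudnet rsync paths differ.
REMOTE = "rsync://cloudnet.example.org/products/"
LOCAL = "/data/cloudnet_mirror/"

def mirror():
    """Pull new and changed files; -a preserves attributes, -z compresses,
    --partial keeps partial transfers so large files can resume."""
    subprocess.run(["rsync", "-az", "--partial", REMOTE, LOCAL], check=True)

if __name__ == "__main__":
    mirror()
```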
The current size of the database is about 10 TB and the volume is expected to
grow by close to 0.5 TB per year with the current set of stations and the
standard products. However, there will be a significant increase in volume
when the planned move to multi-peak and spectral products is undertaken; this
is in addition to a slight increase arising through the creation of new
products. The Cloudnet database is maintained by FMI with long-term commitment
for archiving and preservation. Publication of QA datasets will aid dataset
preservation.
# ACTRIS Data Centre - Organisation and personnel resources
The ACTRIS Data Centre involves personnel with broad and complementary
backgrounds and competences. In total, more than 25 persons are involved in
the data management, on a full- or part-time basis.
A crucial feature of the ACTRIS data centre is the use of topical data
service centres involving scientists with expertise in the relevant field.
This ensures not only proper curation of the data but also a close connection
to the data provider and user communities. A topical data centre run by
scientists with data curation expertise serves as an identifying element built
jointly with the data provider community, and as a connecting element between
data providers and users. The fundamental structure of the data centre is
based on the efficient use of complementary competences. This includes the
involvement of senior scientists, young scientists, engineers, programmers,
and database developers. A data centre serving several related communities,
e.g. scientific and regulatory ones, facilitates exchange and collaboration
between them. Additionally, the involvement of senior scientists working
actively within various scientific communities is a further prerequisite for
ensuring links to the various scientific user groups, for the distribution of
data products, and for user-oriented development of the data centre.
The ACTRIS data portal acts as an umbrella for the topical data centres,
allowing search, download, and common visualisation of the data archived at
the topical data centres. Perhaps even more importantly, it will also connect
ACTRIS with other European and international research data centres by offering
the same services for the data stored there, making use of the latest
interoperability specifications. Also at the administrative level, the ACTRIS
portal represents the infrastructure in the relevant bodies working on
unifying data management, and relays new developments to the whole
infrastructure.
# Appendix I: List of ACTRIS variables and recommended methodology
**ACTRIS aerosol particle variables**

| Variable name | Recommended methodology | Validated data | NRT | Typical time res. | Higher time res. available |
| --- | --- | --- | --- | --- | --- |
| **In situ aerosol particle variables** | | | | | |
| Particle light scattering coefficient | Integrating Nephelometer | X | X | 1 h | X |
| Particle light backscattering coefficient | Integrating Nephelometer | X | X | 1 h | X |
| Particle number size distribution | Mobility particle size spectrometer (e.g. differential mobility particle sizer, scanning mobility particle sizer), optical particle size spectrometer (e.g. optical particle counter, optical particle sizer), or aerodynamic particle size spectrometer (e.g. aerodynamic particle sizer) | X | X | 1 h | X |
| Particle light absorption coefficient | Filter absorption photometer (e.g. Particle Soot/Absorption Photometer, Multi-Angle Absorption Photometry, Aethalometer) | X | X | 1 h | X |
| Particle number concentration | Condensation Particle Counter | X | X | 1 h | X |
| Cloud condensation nuclei number concentration | Cloud Condensation Nuclei Counter | X | X (later) | 1 h | X |
| Hygroscopic growth factor | Hygroscopicity Tandem Differential Mobility Analyzer | X | | 1 h | X |
| Particulate organic and elemental carbon mass concentrations (OC/EC) | Filter sampling + evolved gas analysis with optical correction for charring (thermal-optical analysis) | X | | 1 d - 1 week | |
| Particulate size-resolved chemical composition (organic & inorganic size-resolved mass speciation) | Aerosol Mass Spectrometer, Aerosol Chemical Speciation Monitor | X | | 1 h | X |
| Particulate levoglucosan mass concentration | Filter sampling + offline methodology | X | | 1 d - 1 week | |
**ACTRIS in situ trace gas variables**

| Variable | Recommended methodology | Validated data | NRT | Approx. time resolution |
| --- | --- | --- | --- | --- |
| NMHCs (C2-C9 hydrocarbons) *(see detailed list at the end of the document)* | on-line: GC-FID, GC-MS, GS-FID/MS, GC-Medusa, PTR-MS; off-line traps: ads-tubes; off-line: steel canisters + glass flasks, combined with the on-line instruments in laboratories | X | | 1 h - 2/week |
| OVOCs (oxidised volatile organic compounds such as aldehydes, ketones and alcohols) *(see detailed list at the end of the document)* | on-line: GC-FID, GC-MS, GS-FID/MS, GC-Medusa, PTR-MS; off-line traps: ads-tubes, DNPH-cartridge-HPLC | X | | 1 h - 2/week |
| Terpenes (biogenic hydrocarbons with a terpene structure) *(see detailed list at the end of the document)* | on-line (GC-FID, GC-MS, GS-FID/MS, GC-Medusa) and off-line traps (ads-tubes) | X | | 1 h - 2/week |
| NO | NO-O3 chemiluminescence | X | X | 1 min - 1 h |
| NO2 | indirect: NO-O3 chemiluminescence coupled to a photolytic converter (Xenon lamp (PLC) or diode (BLC)); direct: cavity ring-down spectroscopy (CRDS), laser-induced fluorescence (LIF), Cavity Attenuated Phase Shift Spectroscopy (CAPS) | X | X | 1 min - 1 h |
| NOy (sum of oxidized nitrogen species with an oxidation number >1, both organic and inorganic: NO, NO2, NO3, N2O5, HNO2, HNO3, PAN, organic nitrates and aerosol nitrates) | indirect: NO-O3 chemiluminescence coupled to a gold converter | X | X | 1 min - 1 h |
**ACTRIS aerosol particle variables: aerosol remote sensing**

| Variable name | Recommended methodology | Validated data | NRT / Approx. time resolution |
| --- | --- | --- | --- |
| Aerosol backscatter coefficient profile | Backscatter lidar / Raman lidar / high spectral resolution lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Aerosol extinction coefficient profile | Raman lidar / high spectral resolution lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Lidar ratio profile* | Raman lidar / high spectral resolution lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Ångström exponent profile* | Multiwavelength Raman lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Backscatter-related Ångström exponent profile* | Multiwavelength backscatter lidar / Raman lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Particle depolarization ratio profile | Depolarization backscatter lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Particle layer geometrical properties (height and thickness)* | Backscatter lidar / Raman lidar / high spectral resolution lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Particle layer optical properties (extinction, backscatter, lidar ratio, Ångström exponent, depolarization ratio, optical depth)* | Multiwavelength Raman lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
| Aerosol optical depth (column)* | Sun/sky photometer | x | x |
| Planetary boundary layer height | Backscatter lidar / Raman lidar / high spectral resolution lidar | X | 0.5 h; 2+1 measurements per week + special events + CALIPSO overpasses (2.5 h) |
_* these data will be available in the new data products when released_
**ACTRIS cloud variables**

| Variable | Recommended methodology | Validated data | NRT | Approx. time / height resolution |
| --- | --- | --- | --- | --- |
| **Cloud remote sensing variables** | | | | |
| cloud/aerosol target classification | cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave radiometer) | X | X | 30 seconds / 60 metres |
| drizzle drop size distribution | Doppler cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave radiometer) | X | X | 30 seconds / 60 metres |
| drizzle water content | Doppler cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave radiometer) | X | X | 30 seconds / 60 metres |
| drizzle water flux | cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave radiometer) | X | X | 30 seconds / 60 metres |
| ice water content | cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave radiometer) | X | X | 30 seconds / 60 metres |
| liquid water content | cloud radar, lidar/ceilometer, microwave radiometer | X | X | 30 seconds / 60 metres |
| liquid water path | dual- or multi-frequency microwave radiometers (ceilometer useful for identifying clear sky) | X | X | 30 seconds |
| rain rate | drop-counting rain gauge or disdrometer, preferable to tipping-bucket rain gauges | X | X | 30 seconds |
| **Cloud in situ variables** | | | | |
| Liquid Water Content | in-situ cloud-microphysical sensors | X | | 5 min |
**Detailed list of trace gases included in ACTRIS - Alkanes, Alkenes, Alkynes**

* **Alkanes:** ethane, propane, 2-methylpropane, n-butane, 2-2-dimethylpropane, 2-methylbutane, n-pentane, cyclopentane, methyl-cyclopentane, 2-2-dimethylbutane, 2-3-dimethylbutane, 2-methylpentane, 3-methylpentane, cyclohexane, n-hexane, methyl-cyclohexane, 2-2-3-trimethylbutane, 2-3-dimethylpentane, 2-2-dimethylpentane, 2-4-dimethylpentane, 3-3-dimethylpentane, 3-methylhexane, 2-methylhexane, n-heptane, 2-2-4-trimethylpentane, 3-methylheptane, n-octane, n-nonane, n-decane, n-undecane, n-dodecane, n-tridecane, n-tetradecane, n-pentadecane, n-hexadecane
* **Alkenes:** ethene, propene, trans-2-butene, 1-butene, 2-methylpropene, cis-2-butene, 1-3-butadiene, 3-methyl-1-butene, 2-methyl-2-butene, trans-2-pentene, cyclopentene, 1-pentene, cis-2-pentene, 1-hexene, isoprene
* **Alkynes:** ethyne, propyne, 1-butyne
**Detailed list of trace gases included in ACTRIS - OVOCs, Terpenes, Aromatics**

* **OVOCs:** methanol, ethanol, isopropanol, n-propanol, n-butanol, methyl-butanol, formaldehyde, acetaldehyde, n-propanal, n-butanal, pentanal, hexanal, heptanal, octanal, decanal, undecanal, benzaldehyde, acrolein, acetone, methylethylketone, methacrolein, methylvinylketone, glyoxal, methylglyoxal, butylacetate, acetonitrile
* **Terpenes:** alpha-thujene, tricyclene, alpha-pinene, camphene, sabinene, myrcene, beta-pinene, alpha-phellandrene, 3-carene, alpha-terpinene, m-cymene, cis-ocimene, p-cymene, limonene, beta-phellandrene, eucalyptol, gamma-terpinene, terpinolene, camphor
* **Aromatics:** benzene, toluene, ethylbenzene, m-p-xylene, o-xylene, 1-3-5-trimethylbenzene, 1-2-4-trimethylbenzene, 1-2-3-trimethylbenzene
## INTRODUCTION
The laboratory mouse has emerged as the major mammalian model for studying
human genetic and multi-factorial diseases. Numerous mouse mutants have been
produced and, more recently, technological improvements have allowed mouse
mutants for virtually any gene to be produced by gene-specific approaches
(knock-outs, knock-ins and conditional mutagenesis). Random approaches such as
large scale, genome-wide ENU mutagenesis and gene trapping have also expanded
the current repertoire of available mutants. Using these mouse mutants,
researchers are able to decipher molecular disease and potentially develop new
diagnostic, prognostic and therapeutic approaches.
The International Knockout Mouse Consortium [IKMC
(http://www.knockoutmouse.org); (1,2)] is made up of four major projects
(EUCOMM (http://www.eucomm.org) in Europe, KOMP
(http://www.nih.gov/science/models/mouse/knockout/) and TIGM
(http://www.tigm.org) in the USA, and NorCOMM (http://www.norcomm.org) in
Canada), and is in the process of producing mutations in ES cells for all
known protein-coding genes. A number of mouse mutant lines have already been
produced from these resources. In particular, some 650 mouse lines are being
produced and phenotyped in high-throughput screens as part of the EUCOMM and
EUMODIC projects (http://www.eumodic.org), the results of which will be
presented in the Europhenome resource (3). To take this process to the next
level, the International Mouse Phenotyping Consortium (IMPC) has recently been
formed with a remit to raise the funding for and to coordinate the production
of mouse mutants for each of the IKMC mutations, along with high-throughput
phenotyping of these mice, resulting in the first complete catalogue of
mammalian gene function (see Appendix 6 of the PRIME final report:
http://www.prime-eu.org/PRIME final report.pdf).
Archiving and distribution of the products of these various projects is a
vital activity, alongside the capture of data describing in detail the
genotype and phenotype characteristics of the mutants. The cost for a typical
academic researcher to regenerate one of these knock-out (KO) lines from
scratch has been estimated at €25-30k, and doing so would take at least 9
months. Regenerating mouse lines is an obvious waste of public research funds
and, from an animal welfare perspective, of laboratory mice.
Since no single archiving facility can retain all of these mutant mouse
strains it is essential that all mutants that have been created are held in
centrally organised repositories, from which mutant mice can readily be made
available to interested investigators (4,5). The European Mouse Mutant Archive
[(EMMA); (6)] is a leading international network infrastructure for archiving
and provision of mouse mutant strains for the whole of Europe and worldwide.
To provide the best possible service to the international scientific community
there is a requirement for coordination of archiving and distribution of the
valuable genetically defined mice and ES cells in line with global research
demand. The Federation of International Mouse Resources [(FIMRe); (7)], of
which EMMA is a founding member and the European component, was initiated in
response to this need for coordination.
As well as coordination of archiving, there is a requirement for a common
portal that allows searching of all publicly available mice, including those
not from FIMRe partners, followed by redirection to individual repositories
for more detailed information and the possibility to order material. The
International Mouse Strain Resource [IMSR (http://www.findmice.org); (8)] has
been developed to fulfill this need and over the last few years, EMMA has
become one of the largest mouse network repositories worldwide and a major
contributor to IMSR.
EMMA also has a special role in the archiving and distribution of mouse
mutants as it is one of four repositories handling the mouse resources
produced by the IKMC initiative (EMMA archiving and distributing the mutant
mice arising out of the EUCOMM project, the KOMP repository
(http://www.komp.org) handling KOMP products, the Canadian Mouse Mutant
Repository [CMMR (http://www.cmmr.ca); (9)] handling the NorCOMM resources,
and TIGM handling its own products). Eventually, these four resources will
provide access to data and material covering the complete, functionally
characterised proteome of the mouse, providing an unprecedented resource for
bench scientists studying all aspects of the mammalian genome, including human
disease.
The EMMA resource database described in this paper provides up to date
information about the archiving status of mice and describes the genetic and
phenotypic properties of all the mutant strains that EMMA stocks. The EMMA
database has two main benefits to the research community: (i) scientists with
a particular gene or genes of interest can discover whether any mouse lines
exist with mutations in these gene(s) and what the observed phenotype changes were,
which may provide clues to the gene’s role, and (ii) it allows scientists to
order existing mouse mutants for further research and generation of data of
interest to other researchers. As well as providing user-friendly searching
and browsing of the database, the EMMA website is the link to the scientific
community and facilitates the submission of mice to the EMMA and requests of
mice from EMMA, as well as expressing interest in strains currently undergoing
archiving. The data recorded for each strain is a combination of data entered
by the original submitting scientist as well as subsequent curation to correct
and add extra value to the database. Although the full record is only
available through the EMMA database, summary data is exchanged with our
partners in IKMC and the IMSR to ensure that researchers using the portals
available at their sites see descriptions of EMMA lines, along with links back
to the original record in EMMA and the option to order biological material. In
addition, EMMA utilises the BioMart data management system (10,11) and the
Distributed Annotation System [DAS; (12)] to allow distributed, integrated
querying with other resources such as the Ensembl genome browser (13).
## DATA COLLECTION AND CURATION
The EMMA website is used to advertise the goals of the project and encourage
interested parties to submit mouse mutant lines of widespread use to the
scientific research community as a disease model or other research tool. The
submission process is handled automatically by the website and collects
extensive data through a web form and stores this directly in the EMMA
database. Data collected at this stage includes:
* Contact details for the strain producer.
* Strain name, affected gene(s) and mutant allele(s).
* Genetic background of the original mutation and current background.
* Genetic and phenotype descriptions of the line.
* Bibliographic data on the line.
* Whether the mouse models a human disease, and an OMIM ID if appropriate.
* Whether the strain is immunocompromised.
* Whether homozygous mice are viable and fertile, and whether homozygous matings are required.

Additional optional data collected includes:

* Affected chromosome, dominance pattern and ES cell line(s) used for targeted mutants.
* Name and description for chromosome anomaly lines.
* Mutagen used for induced mutant lines.
* Promoter, founder line number and plasmid/construct name(s) for transgenic lines.
* Breeding history of the line.
* Current health status of the line and specific information for animal husbandry, such as the diet used.
* How to characterise the line by genotyping, phenotyping or other methods, e.g. coat colour.
* Research areas the mouse is useful for, and whether it is a research tool such as a Cre-recombinase expressing line.
Extensive curation takes place to correct and augment the initial submission
data. To facilitate the input of correct data by submitting users, specific
tools have been incorporated into the submission form for searching and
selecting approved gene, allele, and background names, symbols and identifiers
(from the Mouse Genome Database (MGD) developed by the Mouse Genome
Informatics group (MGI; http://www.informatics.jax.org) (14)). Similar tools
for searching and selecting PubMed bibliographic references and identifiers
have also been implemented. However, there is still a requirement for manual
correction of submitted data using our curation interfaces.
The curation is based on the application of international rules and standards
for the initial assignment and periodic review and update of the strain and
mutation nomenclature, as defined by the International Committee on
Standardized Genetic Nomenclature for Mice
(http://www.informatics.jax.org/mgihome/nomen). These approved definitions
make use of control vocabularies for gene, allele, background names and
symbols. Specific automated routines and associated manual curation procedures
have been defined and implemented, in particular, for:
* Assigning to each submitted strain record a unique EMMA identification (ID) as the primary attribute for internal strain identification, retrieval, and cross-reference with connected databases such as IMSR (a minimal sketch of this step follows the list).
* Checking that the submitted records of mutant genes or expressed transgenes (and corresponding alleles) carried by the deposited strains have been assigned the correct names, symbols, identifiers, and mutation classification (as defined by MGI), according to the associated bibliographic references.
* Proposing new mutant gene and allele names, symbols and identifiers for publication in the MGD database, according to the associated bibliographic references or personal communication with submitting scientists.
* Checking that the submitted backgrounds of deposited strains have approved names and symbols assigned.
* Inserting a preliminary strain designation for each newly submitted strain, including the assigned strain background name and the MGI allele symbol, and associating it with the corresponding EMMA strain ID.
* Reviewing and approving the preliminary strain designations, in collaboration with the curation group at IMSR.
* Periodically reviewing and updating current strain designations, according to changes in MGI gene and allele names and symbols.
* Automated correction and population of bibliographic data using the submitted PubMed IDs and the CiteXplore web service (http://www.ebi.ac.uk/citexplore/).
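To make the first routine above concrete, here is a minimal illustrative sketch of ID assignment in the EM:NNNNN format used throughout EMMA. The counter source and the designation rule are hypothetical simplifications, not the production EMMA code, which assigns IDs from the database.

```python
import itertools

# Hypothetical in-memory counter; production EMMA assigns IDs transactionally
# from the database. Start after an existing ID such as EM:01820.
_counter = itertools.count(start=1821)

def assign_emma_id():
    """Return the next strain identifier in the EM:NNNNN format."""
    return f"EM:{next(_counter):05d}"

def preliminary_designation(background, allele_symbol):
    """Compose a preliminary strain designation from approved MGI symbols."""
    return f"{background}-{allele_symbol}"

print(assign_emma_id())                                   # e.g. EM:01821
print(preliminary_designation("B6.129S2", "Otog<tm1Prs>"))
```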
Archiving of submitted mice is handled by one of the EMMA mouse archiving
partners (the CNR Istituto di Biologia Cellulare in Monterotondo, Italy; the
CNRS Centre de Distribution de Typage et d'Archivage Animale in Orleans,
France; the MRC Mammalian Genetics Unit in Harwell, UK; the Karolinska
Institute in Stockholm, Sweden; the Helmholtz Zentrum München in Munich,
Germany; the Wellcome Trust Sanger Institute in Hinxton; the Institut Clinique
de la Souris in Strasbourg; and the CNB-CSIC, Centro Nacional de
Biotecnologia, in Madrid).
The archiving process involves genotype and/or phenotype verification of the
mouse, followed by test freezing of either sperm or embryos and then checking
the stock can be reconstituted from this frozen stock. Several strains are in
particularly high demand as they represent extremely interesting disease
models or valuable Cre-expressing lines and these are kept as live stocks
facilitating a fast delivery to the customers. The EMMA lines are supplied to
the research community for research purposes only and there is no charge for
the cryopreservation service. Archiving of mice produced by the EUCOMM mouse
production centres follows the same procedure except the initial import of
data describing these lines is automated from the EUCOMM database. The EMMA
database is used internally by the EMMA partners to track each mutant strain
through the archiving process. For example, the status of the strain in the
archiving pipeline, which centre is archiving the strain, the funding source
for this archiving, which material is currently in stock and available to
order is all stored in the database. EMMA archiving centres record this data
using internal interfaces implemented using Java Spring and Hibernate
technologies.
Requests for EMMA mice are also submitted via the EMMA website and recorded in
the EMMA database. The archiving centres again track the whole process of
distributing the requested mice using the database and the same internal Java
interfaces.
EMMA now contains over 1700 submitted strains from 19 countries including
around 50 lines from the USA, Canada and Australia. In the coming 5 years, it
is predicted that there will be a tripling of the mouse lines held, largely as
a result of the IKMC initiative. To date EMMA has sent out 1245 lines to
requesting scientists worldwide. Although nearly 58% of the requests for
mutant mouse lines were from European scientists, about one-third come from
the USA and Canada and requests from Asia are steadily increasing. So far,
EMMA has shipped mice to scientists from more than 500 different institutions
located in 39 countries. Considering the estimated cost of generating these
lines from scratch the existence of the EMMA resource has saved the worldwide
community E37M and 934 years of laboratory effort.
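These figures follow directly from the per-line estimates given earlier (up to €30k and at least 9 months of work per line):

$$
1245 \times \text{€}30\text{k} \approx \text{€}37.4\text{M}, \qquad
1245 \times 9\ \text{months} = 11205\ \text{months} \approx 934\ \text{years}
$$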
## QUERYING THE EMMA DATABASE
The EMMA database can be searched using a user-friendly query interface
(Figure 1). This search takes full/partial case-insensitive terms and searches against
the official MGI gene symbols e.g. Otog, the official IMSR designated strain
name e.g. B6.129S2-Otog tm1Prs / Orl, the common strain name e.g.
OtogC57BL/6J, the phenotype description e.g. auditory functions or EMMA IDs
e.g. EM:01820. EMMA lines are also browsable by the affected gene, mutant type
(e.g. Targeted Knock-out, Targeted Knock-in), particular research tools (e.g.
Cre-expressing lines) or mice produced by large projects (e.g. EUCOMM lines).
Results of searches or browsing are presented in a table, sortable by any of
the columns, which lists the EMMA ID, gene affected (with hyperlinks back to
MGI pages describing the particular gene and mutant alleles in detail), common
strain name, approved international name and a link to either order the line
or express interest in ordering lines that are in the process of being
archived. The latter option triggers an automated process in which the
particular archiving centre is informed that there is a priority for this
line; when the line becomes available, further automated emails inform the
requesting scientist that they can go ahead and complete the ordering process.
Clicking on any of the strain names pops up a strain description (Figure 2)
including the mutation type, genetic background it is currently maintained on,
genetic and phenotype descriptions if known, the original producer, literature
references, the genotyping or phenotyping protocol needed to confirm the
mutation, what material is available along with delivery times and costs and a
link for downloading associated Material Transfer Agreement (MTA)
documentation, if applicable.
## INTEGRATION WITH OTHER RESOURCES
As described earlier, a subset of data on each of the EMMA curated lines is
sent weekly to the IMSR, allowing users searching this common catalogue of
mutant lines to be redirected to our site for more detailed data and the
ability to order the line. The MGD database provides extensive descriptions of
known mutant alleles, and EMMA links to the MGD pages wherever possible as the
definitive source for this data.

Figure 1. Browsing and searching for mouse lines in EMMA. Relevant strains can
be identified by either (i) typing case-insensitive, full/partial terms in the
top text field, which searches against the affected gene symbols and name,
approved international designated and common strain names, phenotype
description and EMMA ID, or (ii) browsing through a complete list of lines or
partial lists categorised by the affected gene(s), mutant type (targeted, gene
trap, transgenic, induced, chromosomal anomalies or spontaneous), research
tool [Cre recombinase expressing strains, lines for tetracycline
(Tet)-regulated gene expression systems], strains provided by the Wellcome
Trust Knockout Mouse Resource, and finally strains produced out of the EUCOMM
programme. Results are presented as a table of the EMMA ID, affected gene,
common and approved international strain names, alongside links to order or
register interest in ordering a line when it becomes available. Clicking on
any of the common strain names pops up a description of the strain.

Figure 2. EMMA strain descriptions. Data presented includes the mutation type,
the genetic background the strain is currently maintained on, brief genetic
and phenotype descriptions if known, the original producer, literature
references, the genotyping protocol needed to confirm the mutation, the
material available along with delivery times and costs, and a link for
downloading associated MTA documentation, if applicable.
As well as our simple search box, we also provide an advanced BioMart query
interface, which is linked from the main search page (Figure 3). The BioMart
interface queries a denormalised snapshot of the EMMA database that is updated
nightly. Queries can involve complex combinations of query terms including the
affected gene symbols and MGI IDs, common and official strain names, EMMA IDs,
mutant type, original and maintained genetic backgrounds and the type of
material available (frozen embryos, sperm or ovaries, live mice on shelf or
mice rederived from frozen stock). The results are fully configurable,
allowing any combination of the fields presented in the standard EMMA search
results and strain descriptions to be displayed, as well as extra data such as
whether the mutant is viable and fertile when homozygous and whether it is
required to keep it homozygous, whether the line is immunocompromised, if it
represents a human model, the breeding history and for targeted mutants known
dominance and ES cell line used, and for transgenics the promoter and plasmid
construct used. The results can be previewed and exported in a number of
formats such as HTML, tab/comma-separated text or Excel. However, the real
benefit of BioMart comes from the ability to perform integrated querying with
BioMarts deployed on other resources which share a common identifier, such as
MGI or Ensembl IDs. For example, in Figure 3 a BioMart query has identified
all lines held in EMMA that have an affected gene annotated by Ensembl as
being located in the first 100 Mbp of chromosome 1 and having a transmembrane
protein domain.

Figure 3. The EMMA BioMart interface. This interface allows advanced querying
of the EMMA database as well as distributed and integrated querying with the
Ensembl resource. In this example, EMMA targeted knock-out lines are
identified that have affected genes annotated by Ensembl as being located
within the first 100 Mbp of chromosome 1 and containing a transmembrane domain
in their protein products. The results table is fully configurable from within
the interface and here shows the strain name, EMMA ID (hyperlinked back to the
strain description at EMMA), gene symbol and phenotype description from the
EMMA BioMart, and the Ensembl Gene ID, chromosome, start and end from the
Ensembl BioMart located at the Wellcome Trust Sanger Institute, Hinxton, UK.
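Queries like the one in Figure 3 can also be issued programmatically through BioMart's XML-over-HTTP interface. The sketch below is hedged: the server URL, dataset, filter, and attribute names are hypothetical placeholders chosen for illustration, not the actual EMMA BioMart configuration.

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint and names; consult the EMMA BioMart for the real ones.
MART_URL = "http://www.emmanet.org/biomart/martservice"
QUERY = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="1" count="">
  <Dataset name="emma_strains" interface="default">
    <Filter name="mutant_type" value="Targeted Knock-out"/>
    <Attribute name="emma_id"/>
    <Attribute name="gene_symbol"/>
    <Attribute name="international_strain_name"/>
  </Dataset>
</Query>"""

def run_query():
    """POST the XML query and return the tab-separated result as text."""
    data = urllib.parse.urlencode({"query": QUERY}).encode()
    with urllib.request.urlopen(MART_URL, data=data) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(run_query())
```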
A new portal is currently being developed for the IKMC initiative by the
International-Data Coordination Center (I-DCC; http://www.i-dcc.org). This
will be released late 2009 and will display the status of all genes in the
mutagenesis pipeline along with available products and data for the mutant ES
cells and mouse lines. The portal will utilise a number of BioMarts developed
for the IKMC component mutagenesis pipelines and repositories, as well as for
other useful resources such as the GXD (15) and Eurexpress
(http://www.eurexpress.org) gene expression databases, and the Europhenome
phenotyping resource. The EMMA BioMart will form an integral component of this
IKMC portal and in addition allow a wider variety of integrated queries from
our EMMA BioMart server.
Another type of data integration is provided by our Distributed Annotation
System (DAS) server (www.emmanet.org/das). This serves up summary level data
for each EMMA line, allowing the display of EMMA strains on DAS clients such
as the Ensembl genome browser. For example by browsing to
http://www.ensembl.org/Mus_musculus/Gene/External
Data/EMMA?g=ENSMUSG00000055694 and clicking on the ‘Configure this page’
option and selecting EMMA it is possible to view any EMMA lines that exist for
this gene (Gdf1). The EMMA ID, affected gene symbol, name and link to curated
data at MGI is given along with the mutation type, phenotype summary and a
link to the strain description at EMMA.
## CONCLUSION AND FUTURE DIRECTION
The number of mutant mouse lines submitted to EMMA as well as the number of
requests for these mutants is likely to increase significantly in the near
future. This is due to the large scale and systematic efforts of the IKMC to
perform saturation mutagenesis of the mouse genome using gene targeting and
gene trapping approaches. As well as continuing to expand the number of lines
curated and distributed by the EMMA resource, collaboration with international
efforts to present all available mutants worldwide is going to become ever
critical as the IKMC and eventually the IMPC initiatives continue to produce
and characterise mutants. Data exchange with IMSR will continue to provide a
common access site and EMMA will collaborate extensively with the I-DCC to
provide a central portal to the data and products produced by the IKMC. There
will be a particular focus on utilising the phenotyping data arising out of
these programmes to allow searching for mouse models using precise phenotype
queries structured using the Mammalian Phenotype (MP) ontology (16).
The EMMA project is currently funded until 2013, but obviously long term,
stable funding for the data storage and mouse archiving that EMMA performs
will be critical to capture and maintain the products emerging from the IKMC
and IMPC programmes. This is a recognised issue and the European Commission is
currently funding a number of projects under the ESFRI Roadmap with the goal
of identifying sources of long term funding for key scientific activities.
Infrafrontier (http://www.infrafrontier.eu) is one of these projects and is
tasked with securing such funding for archiving and phenotyping of mouse
mutants. Infrafrontier has already decided that the archiving aspect will be
taken care of by a major upgrade to the EMMA project. Hence, it is highly
likely that EMMA will continue providing this valuable service to the
worldwide scientific community for many years to come.
## ACKNOWLEDGEMENTS
The authors would like to thank the members of the EMMA Technical Working
Group, Board of Participating Directors and EMMA archive centres who
coordinate and carry out the hard task of archiving all the mouse lines.
## FUNDING
European Commission FP6 Infrastructure Programme [grant no. 506455]. Funding
for open access charge: European Commission FP7.
Conflict of interest statement. None declared.
# Storage and Accessibility of Data
The intention of the consortium is to grant full and open access to all data
being collected, processed and generated within the ExaHyPE project, not only
during the grant period but also beyond that. For this reason, we implemented
the following technical infrastructure which is subject to further extension.
We set up a project website, _http://exahype.eu/_ , for dissemination of all
project activity.
Furthermore, we set up a YouTube-channel for [email protected]_ for the
dissemination of project videos. It is accessible under
_https://www.youtube.com/channel/UCKRM7I8tB6MxidxCuvn3FCA_
The source code of the ExaHyPE engine is under version control in the GIT
repository [email protected]:gi26det/ExaHyPE.git_ , which is accessible only
to the developers and members of the ExaHyPE project. Stable releases
including the full source code are published in a second repository located at
_https://github.com/exahype/exahype_ , which is open-access. This two-level
publication of source code gives us the possibility to keep the main-
development in a core team and to build up a user community based on stable
production versions of the code. Nevertheless, any code developed within the
project will be made available in the public repository at GitHub.com.
The whole simulation pipeline of the project is monitored with the continuous
integration tool Jenkins ( _https://jenkins-ci.org/_ ), which runs benchmark
simulations and test cases on a nightly basis. Using this pipeline, we produce
and publish the documentation of the source code, profiles of the runtime
behaviour, and a guidebook (i.e. user documentation).
For very large datasets, such as simulation results of grand challenge
simulations, it is not feasible to host them in a repository or provide the
download via a project website. For such datasets we are developing an access
and archiving structure together with our associated partner, the Leibniz
Supercomputing Centre (LRZ) _https://www.lrz.de/_ . We will elaborate more
on this topic in later versions of this document, as soon as we have such
large datasets at hand and more progress has been made in the development of
the technical infrastructure. Those versions will also address the potential
additional costs for long-term storage.
The consortium plans for a backup period of 10 years. As technical
infrastructure is hosted at LRZ, this is ensured and gives full accessibility
of all data, including archiving for long term availability of the data.
Details can be found at the respective pages of the LRZ under
_https://www.lrz.de/_ .
In Chapters 4 to 6, we give a description for every dataset including the used
standards. However, for established tools or already published data, we
refrain from replicating these. In particular, we will not collect data of the
following kind:
Tools that are not developed by ExaHyPE: tools, libraries, operating systems,
compilers or visualisation tools, as we consider this neither feasible nor
useful.
Data that is already publicly accessible: geo-information databases or
material descriptions, etc.; this data will be explicitly referenced.
# Dataset Description – ExaHyPE Engine
**Data set reference and name:**
ExaHyPE Engine
**Origin of the dataset:**
_Generated_
The code of the ExaHyPE engine is written by the project members from scratch.
It uses and extends functionality of existing research codes and in this sense
it is based on previous code development of the partners. Namely these are the
_Peano_ software developed by Weinzierl et al. and the PDE solver _pdesol_ by
Dumbser et al.
**Type of the dataset:**
_Source code_
The ExaHyPE engine consists of the source code itself and a set of developed
pre- and post-processors which support the creation of user-specific
applications and the evaluation of the simulated results.
**Level of open access:**
The access to the ExaHyPE engine is partially open and partially confined as
explained in the dataset description.
**Ethical considerations for this dataset:**
No ethical considerations have to be taken for the ExaHyPE engine.
**Dataset description:**
The ExaHyPE Engine is the core software project developed by the consortium
members. The software is accessible to the team of developers through an
access-protected GitLab repository and to the open public through an open-
access GitHub repository. This distinction is only a difference in time and is
to streamline the development process. Any development of the ExaHyPE engine
will be released to the public-accessible repository.
**Standards:**
The main programming languages used for the ExaHyPE Engine are C++, FORTRAN,
JAVA and Python. Parallel programming models include standard MPI and OpenMP,
as well as Intel Threading Building Blocks. Standard version control (git) is
applied to the source code.
Input and configuration files are tailored text formats readable by users. As
the files are text files, standard version control (git) is applied.
Output data will be stored in VTK, HDF5, and Tecplot standard.
**Data sharing:**
The ExaHyPE engine is provided under the modified BSD license for free use, as
specified in the grant agreement. The reference is given by the following
header in every source file:
//
// This file is part of the ExaHyPE project.
//
// (C) http://exahype.eu
//
// The project has received funding from the European Union's Horizon
// 2020 research and innovation programme under grant agreement
// No 671698. For copyrights and licensing, please consult the webpage.
//
**Archiving and preservation:**
All data are stored in the repository and therefore follow the archiving and
preservation procedures outlined in Chapter 3.
# Dataset Description – Applications from Geophysics
**Data set reference and name:**
Geophysical seismic wave simulations
**Origin of the dataset:**
_Collected_ and _processed_ data – see data set description for details.
**Type of the dataset:**
Geophysical _input data_ , _configurations_ and _simulation results_ – see
data set description for details.
**Level of open access:**
All data is available without restrictions to the members of the project,
unless it stems from sources that do not allow redistribution. Scientific
results (in form of post-processed simulation output) and simulation
configurations (e.g. including boundary conditions) will be presented in open-
access journals and made openly accessible. Input data, e.g. in form of
detailed subsurface material properties and geometries, have varying levels of
open access from publicly available to restricted, depending on their origin.
**Ethical considerations for this dataset:**
No ethical considerations have to be taken for this dataset.
**Dataset description:**
Input:
* Subsurface material properties determining elastic and non-elastic seismic wave propagation (e.g.
wave speeds): Available mostly in public geophysical community repositories or
from restricted scientific publications or scientific collaboration partners.
* Subsurface geometry properties describing material interfaces, fault planes and geological structures: Available partly from geophysical community repositories (publicly) or will be generated by LMU Munich based on scientific publications and collaborations (restricted).
* Surface topography and bathymetry data giving high-resolution elevation of earth or planetary surface: Available from geophysical community repositories (publicly).
* Location and observed ground shaking of seismic stations during real earthquakes: Available from geophysical community repositories (publicly).
Configuration:
* Boundary conditions: Will be made publicly available upon publication.
* Frictional descriptions - analytic, empirical relationships describing frictional failure on a fault: Will be made publicly available.
* Initial parameterization of stress and strength state of the modelling domain: Will be made publicly available upon publication.
Output:
* Wave-field output: Large scale spatial-temporal output of all elastic and inelastic quantities which are solved for during the simulation of seismic wave propagation. Will be made publicly available upon publication in post-processed form.
* Fault output: Large scale spatial-temporal output of all frictional quantities which are solved for on earthquake fault during simulations incorporating dynamic rupture. Will be made publicly available upon publication in post-processed form.
* Synthetic seismograms at chosen locations: Post-processed ground-shaking time series allowing for comparison and analysis using observational seismological methods. Will be made fully publicly available upon publication.
All output data will be generated via software produced within the project or
via software that is proprietary but of which we have copyright access. In
addition, publicly available dedicated software may be employed for the
analysis of the dataset from simulations.
We do not plan on purchasing any kind of data.
**Standards:**
Input, configuration and output data will be processed and published in
formats according to existing standards in the geophysical community. We will
try to define and advocate for suitable future research standards in case they
are not available.
We aim on making our simulations fully reproducible by providing computational
metadata (compiler type, compilation flags, source-code tree structure,
information on supercomputers infrastructure employed for producing the data,
information on the software employed in the analysis of the data).
The data will be stored in the most compact form possible using well-known
protocols such as hdf5 or VTK.
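As an illustration of how such reproducibility metadata can travel with the data itself, here is a minimal sketch using the widely used h5py library; the attribute names and values are invented examples under the stated assumptions, not a project standard.

```python
import numpy as np
import h5py

# Placeholder output array standing in for a real simulation result.
wavefield = np.zeros((128, 128), dtype=np.float64)

with h5py.File("seismic_output.h5", "w") as f:
    dset = f.create_dataset("wavefield", data=wavefield, compression="gzip")
    # Computational metadata, as listed in the text (names are our own choice).
    f.attrs["compiler"] = "example-compiler 1.0"
    f.attrs["compile_flags"] = "-O3"
    f.attrs["source_tree"] = "ExaHyPE@<commit-hash>"
    f.attrs["machine"] = "example-supercomputer"
    dset.attrs["units"] = "m/s"
```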
**Data sharing:**
The sharing of the Geophysical dataset will follow the data-sharing policy of
the ExaHyPE project.
Input, configuration and output datasets, which are not already available to
the public from other sources, will be made publicly available upon scientific
publication of geophysical simulations.
The geophysical parts of the ExaHyPE engine (source code) employed to produce
the scientific data will be made publicly available upon scientific
publication of the regarding simulation and not later than one year after the
end of the project.
**Archiving and preservation:**
The archiving and preservation policy of the geophysical dataset will follow
that of the ExaHyPE project.
We will not archive data which is publicly available from other sources.
# Dataset Description – Applications from Astrophysics
**Data set reference and name:**
Astrophysical simulations of merging binary neutron stars
**Origin of the dataset:**
_Generated, collected_ and _processed_ data – see data set description for
details.
**Type of the dataset:**
Astrophysical _input data_ , _configurations_ and _simulation results_ – see
data set description for details.
**Level of open access:**
All the data is available without restrictions to the members of the project,
while the scientific results will be presented in open-access journals or on
publicly open preprint archives.
**Ethical considerations for this dataset:**
No ethical considerations have to be taken for this dataset.
**Dataset description:**
The data that will be generated and collected will refer to the evolution of
primitive hydrodynamical and
MHD quantities, either in the form of scalar quantities (density, energy,
etc.) or in the form of vectorial quantities (electromagnetic fields, etc.),
or in the form of tensor quantities (metric tensor, extrinsic curvature,
etc.).
A subset of this data will be employed to produce figures in scientific
publications and will be stored in specific folders dedicated to the various
publications.
All of the data will be generated via software produced within the project or
via software that is proprietary but of which we have copyright access. In
addition, publicly available dedicated software may be employed for the
analysis of the dataset from simulations.
**Standards:**
The data will be stored in the most compact form possible using well-known
protocols such as hdf5 or VTK.
Different datasets will be stored with precise timestamps and with all the
metadata information that is needed to reproduce the results of the
simulations. Such metadata includes: compiler type, compilation flags, source-
code tree structure, information on supercomputers infrastructure employed for
producing the data, information on the software employed in the analysis of
the data.
**Data sharing:**
The sharing of the Astro dataset will follow the data-sharing philosophy of
the ExaHyPE project.
More specifically:
* all the useful data produced and collected in the simulations will be made available publicly in its most compact and yet useful form.
* the source code employed to produce the scientific data will be made publicly available as soon as its release will not endanger the academic prospects of the personnel employed in the ExaHyPE project (in particular student and postdocs) and after a proper scientific exploitation of the code has been made. At any rate, all of the produced software will be made publicly available no longer that one year after the end of the project.
**Archiving and preservation (including storage and backup):**
The archiving and preservation policy of the Astro dataset will follow that of
the ExaHyPE project.
# Degree of Progress
All activities regarding Data Management are currently proceeding as planned
and no major issues have been identified.
# Introduction to ACTRIS Data Centre
ACTRIS-2 (Aerosols, Clouds, and Trace gases Research InfraStructure)
Integrating Activity (IA) addresses the scope of integrating state-of-the-art
European ground-based stations for long-term observations of aerosols, clouds
and short lived gases. ACTRIS-2 is a unique research infrastructure improving
the quality of atmospheric observations, developing new methods and protocols,
and harmonizing existing observations of the atmospheric variables listed in
Appendix I.
The overall goal of the ACTRIS Data Centre is to provide scientists and other
user groups with free and open access to all ACTRIS infrastructure data,
complemented with access to innovative and mature data products, together with
tools for quality assurance (QA), data analysis and research.
The numerous measurement methodologies applied in ACTRIS result in a
considerable diversity of the data collected. To accommodate this diversity,
the ACTRIS Data Centre consists of three topical data repositories archiving
the measurement data, all linked through the ACTRIS data portal to provide a
single access point to all data. Hence, the ACTRIS Data Centre is founded on
three topical data repositories:
* Near-surface aerosol and trace gas data are reported to EBAS : _http://ebas.nilu.no/_ • Aerosol profile data are reported to the EARLINET Data base:
_http://access.earlinet.org/EARLINET/_
* Cloud profile data are reported to the Cloudnet data base : _http://www.cloud-net.org/data/_
In addition, ICARE contributes with the production and provision of satellite
data that complements the ACTRIS ground-based data:
_http://www.icare.univ-lille1.fr/catalogue_.
Generally, the ACTRIS Data Centre and data management activity aim to work in
accordance with the ENVRI Reference Model, hosted at _www.envri.eu/rm_.
# ACTRIS data set descriptions
ACTRIS data sets are atmospheric variables listed in Appendix I, measured with
the corresponding recommended methodology. Furthermore, the data are qualified
as ACTRIS data sets only if they comply with the additional requirements
specified in sections 2.1-2.3. The list of variables is expected to grow
during the course of ACTRIS, particularly for secondary data products. During
ACTRIS-2, for example, the aerosol and cloud databases will be augmented with
new classification products developed through the combination of existing
sensors with additional instrumentation, and with products providing
information about aerosol layering and typing, together with advanced products
derived from long-term series or special case analyses. In addition, new
parameters utilising these products will also be prepared, and standardized
pre-processed lidar data and NRT optical property profiles will be made
available.
## Aerosol and trace gas near-surface data sets
Aerosol and trace gas near-surface data are qualified as ACTRIS data only if
* The atmospheric variables are included in the list in Appendix I
* The applied procedures comply with the standard operating procedures (SOP), and measurement recommendations and guidelines provided by the ACTRIS near-surface community. See section 4.1 of this document for more details.
* The measurement data are submitted to the topic data base EBAS by using the reporting templates and procedures recommended by the ACTRIS near-surface community, and available at _http://ebas-submit.nilu.no_
Datasets fulfilling the requirements above qualify for the “ACTRIS” near-
surface data set label. The types of variables are expected to expand during
ACTRIS-2. The data can in addition be associated with other programmes and
frameworks such as GAW, EMEP, and national EPAs. The data originator
determines other project associations.
Standard collection and reporting procedure for aerosol and trace gas near-
surface measurement data:
* The deadline for reporting data is 31 July of the year following the reported measurements
* Data are submitted to a dedicated ftp-server at the data centre
* An auto-generated e-mail is sent to the data submitter to confirm that the data is received
* After submission, the data undergo an automatic format (NASA-Ames 1001) and metadata check, followed by manual inspection; a minimal sketch of such a format check follows this list.
* If the data file is accepted, data are imported into EBAS, and feedback is given to the data originator. If there are suspicious data (e.g. suspicious data points/outliers) or format errors (e.g. in the metadata), the data originator is contacted and asked to assess, correct, and re-submit the data.
* Data originators are asked about their project affiliation with collaborating networks and frameworks (EMEP, GAW-WDCA etc.)
* Trace gas data are made available to GAW-WDCGG; aerosol data are made available to GAW-WDCA.
* Near-real-time (NRT) data collection is set up and the raw data are auto-processed to hourly averages
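As referenced in the list above, a minimal sketch of a NASA-Ames 1001 self-consistency check is shown below; it only verifies the declared header length and format index, whereas the actual EBAS checks are far more extensive and also validate metadata content.

```python
# Minimal sketch of an automatic NASA-Ames 1001 format check.
# The real EBAS submission checks are considerably more thorough.
def check_nasa_ames_1001(path: str) -> list[str]:
    """Return a list of format errors (empty if the basic checks pass)."""
    errors = []
    with open(path) as f:
        lines = f.readlines()
    try:
        # First record of a NASA-Ames file: NLHEAD (header lines) and FFI.
        nlhead, ffi = (int(x) for x in lines[0].split()[:2])
    except (ValueError, IndexError):
        return ["first record must contain NLHEAD and FFI as integers"]
    if ffi != 1001:
        errors.append(f"expected FFI 1001, found {ffi}")
    if len(lines) < nlhead:
        errors.append(f"file shorter than declared header ({nlhead} lines)")
    return errors
```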
## Aerosol profile data sets
Aerosol profile data are qualified as ACTRIS data only if
* The atmospheric profile variables are included in the list in Appendix I.
* The applied procedures comply with the recommendations and procedures provided by the ACTRIS profile community, harmonised with EARLINET. See section 4.2 of this document for more details.
* The data are reported to the EARLINET DB in accordance with the reporting procedures (available at _http://www.earlinet.org_ / ).
Standard collection and reporting procedure for aerosol profile data:
* Data originators have the possibility to use, in addition to their own quality-assured method, the common standardized automatic analysis software developed within EARLINET, namely the Single Calculus Chain (SCC), for analysing their own lidar data, obtaining optical properties from raw data via pre-processed data.
* New data shall be uploaded to the EARLINET DB within 3 months after measurement by data originator as preliminary data.
* Preliminary data shall be made accessible to the public as soon as possible, and automatically by the database 1 year after the measurement.
* All data will pass an approval process within 2 years of being measured. The approval is undertaken by an internal group of experts. During the ACTRIS-2 project, automatic QC procedures building on this experience will be implemented and applied.
At the beginning of the ACTRIS-2 project, the aerosol vertical profile
database contains aerosol optical property profiles. During the ACTRIS-2
project, it will be augmented with more products, providing also information
about layering and typing. In addition, standardized pre-processed lidar data
and NRT optical property profiles will be made available.
## Cloud profile data sets
Cloud profile data are qualified as ACTRIS data only if
* The atmospheric profile variables are included in the list in Appendix 1
* The processing applied complies with the procedures and recommendations provided by the ACTRIS community harmonised with Cloudnet.
* The data are reported to the Cloudnet DB in accordance with the reporting procedures
Standard collection and reporting procedure for cloud profile data:
* Utilise the Cloudnet processing scheme.
* Preliminary data are accessible immediately to the community and the public on insertion into the Cloudnet DB, together with a statement on their appropriateness and validity for use.
* All data undergo an approval process before final publication, consistent with full periodic calibration assessment and approval by an expert panel.
* Selected variables are provided in NRT for the purposes of assimilation and NRT evaluation of NWP model data.
## ACTRIS Secondary data products, combined data and project data tools
ACTRIS secondary data are derived from the primary ACTRIS data described in
sections 2.1-2.3, e.g. by averaging, filtering of events, interpolation of
data, etc. ACTRIS secondary data sets and project data tools can also include
codes, algorithms and software used to generate ACTRIS primary or secondary
data. Whereas primary datasets are regularly updated, mainly due to the
collection of new measurements and extension of the time series, secondary
datasets are normally not updated. Secondary datasets are usually the result
of targeted analysis, special studies, case studies, or processing for model
experiments, including work performed under ACTRIS Joint Research Activities
and Transnational Access. They are “single purpose”, i.e. made for one
specific purpose, as opposed to primary data, which are documented so as to
serve as many purposes as possible.
### Advanced products based on aerosol and trace gas near-surface data sets
Advanced products based on aerosol and trace gas near-surface data sets will
be developed in collaboration with joint research activities and in accordance
with other scientific requests during the project. Standard advanced products
typically include aggregated data such as daily, monthly or annual means of
selected variables. Furthermore, the potential of long-term, high-quality
ACTRIS-2 data for understanding trends in atmospheric composition shall be
further developed. A methodology will be put in place to regularly analyse and
produce site-specific and regional trends. Suitable near-surface variables
are particle size and particle optical properties. Additionally, online QA
tools and products will be offered for checking the consistency of the data
sets in terms of ratios between specific trace gases, and closure tests
between aerosol variables from different instruments.
### Advanced products based on aerosol profile data sets
Advanced data products will be designed case by case, following specific
needs, as they result from specific studies. Advanced data are stored and
made freely available in the EARLINET database as advanced products. These are
the results of dedicated (typically published) studies. Standard advanced products
include climatological products from long-term observations. Further advanced
products can be the results of JRAs, such as microphysical aerosol products based on
inversion of multi-channel lidar data, and microphysical aerosol products from
combined lidar and sun-photometer observations. In particular, ICARE will
automatically process raw lidar data from the EARLINET DB, combined with
coincident AERONET data, using the GARRLiC (Generalized Aerosol Retrieval from
Radiometer and Lidar Combined data) algorithm to retrieve vertical profiles of
aerosol properties.
### Advanced products based on cloud profile data sets
Advanced data products prepared automatically by the Cloudnet processing
scheme include model evaluation datasets and diurnal/seasonal composites. In
addition, advanced classification and products will be available from certain
sites, and from campaigns, where additional instruments and products are
combined.
### Data sets resulting from combined activities with external data providers
The ICARE data centre routinely collects and produces various satellite data
sets and model analyses that are used either in support of ground-based data
analysis or in combination with ground-based data to generate advanced derived
products. These data sets will be channelled to the ACTRIS portal using
colocation and extraction/subsetting tools.
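As a rough illustration of what such a colocation/subsetting step involves, the sketch below selects data points within a fixed radius of a ground station using the haversine distance; the function name and the 50 km radius are assumptions for illustration, not the actual ICARE tool.

```python
# Illustrative colocation sketch: mask points near a ground station.
import numpy as np

def subset_near_station(lats, lons, station_lat, station_lon, radius_km=50.0):
    """Return a boolean mask of (lat, lon) points within radius_km of a station."""
    r_earth = 6371.0  # mean Earth radius in km
    lat1, lon1 = np.radians(lats), np.radians(lons)
    lat2, lon2 = np.radians(station_lat), np.radians(station_lon)
    # Haversine great-circle distance
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    dist = 2 * r_earth * np.arcsin(np.sqrt(a))
    return dist <= radius_km
```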
## The ACTRIS user community
The ACTRIS user community can be classified as primary users (direct users of
ACTRIS data, data products and services) and secondary users (using results
from primary users, e.g. from international data centres). These are both
internal and external users. In general, the user community can be summarized
into five groups:
1. **Atmospheric science research community.** Together with atmospheric chemistry and physics, this also includes climate change research and meteorology, as well as multidisciplinary research combining these aspects (such as air quality, and climate interactions with links between aerosols, clouds and weather).
2. **Research communities in neighbouring fields of research.** These are environmental and ecosystem science, marine science, geosciences/geophysics, space physics, biodiversity, health and energy research. These communities will benefit from ACTRIS through the long-term provision of high-quality data products and through the enhanced capacity to perform interdisciplinary research.
3. **Operational observation and data management.** This community includes international data centres and international programmes to which ACTRIS contributes via the provision of long-term and consistent high-quality data products. Many research programmes and operational services (such as the Copernicus Atmosphere Monitoring and Climate Services) use ACTRIS to produce reliable data.
4. **Industry and private sector users**. These benefit from the services and high-quality standards of the ACTRIS Calibration Centres, and from the free and open access to data products.
5. **Legislative / policy making community**. This includes the user groups within climate, air quality and environmental issues, including actors from local organisations, through national governments, to international conventions and treaties (including the IPCC and UNFCCC, and UNECE-CLRTAP via the link to EMEP). This user community uses ACTRIS research results to define, update and enhance knowledge for decision making, policy topic preparation, and drafting response and mitigation policies.
# ACTRIS data set references and names
ACTRIS works towards establishing traceability for all applicable variables.
In collaboration with partners in the ENVRIplus project, ACTRIS is working
towards use of digital object identifiers (DOIs), in order to assure proper
attribution is given to data originators adequately reflecting their
contributions.
Generally, ACTRIS data set names aim to be compliant with CF (Climate and
Forecast) conventions. In the case where no standard CF names are defined, an
application will be sent to establish these.
## Aerosol and trace gas near-surface data set references and names
The near-surface data set names are listed in Appendix I. For most near-
surface variables, ACTRIS data are traceable from the final data product back
to the time of measurement. Traceability is implemented by a series of data
levels leading from curated, instrument specific raw data to the final,
automatically and manually quality assured data product. Processing steps
between data levels are documented by SOPs.
All submissions of near-surface data passing quality assurance are uniquely
identified in the EBAS database with unique dataset identity numbers
(ID-numbers). In case of updates, a new ID-number is generated, and previous
data versions are kept available upon request, while the latest version is
served through the database web interface. Defined requests from the data
holdings are identified in the web interface by unique URLs that allow
external links to the data.
## Aerosol profiles
The aerosol profile data set names are listed in Appendix I. The EARLINET
database is a version-controlled database. The use of the SCC allows full
traceability of the data: the SCC converts individual instrument raw signals
into standardized and quality-assured pre-processed lidar data. The SCC tool
will be used to develop a harmonised, network-wide, open and freely accessible
quicklook database (high-resolution images of time-height cross sections). The
standardized pre-processed data will also serve as input for any further
processing of lidar data, within the SCC as well as in other processing
algorithms (e.g., combined retrievals with sun photometer, combined retrievals
with Cloudnet).
All aerosol profiles pass through manual and/or automatic quality-check
inspections, leading to the biannual publication of a quality-checked data
collection with DOI assignment. The DOI is assigned through publication on
the CERA database. In case of updates, only the latest version of the data is
available at _http://access.earlinet.org_ and a new collection of data (with a
new DOI) is published. Previous data versions are kept available.
## Cloud profiles
The cloud profile data set names are listed in Appendix I. The common use of
the Cloudnet processing scheme ensures full traceability of the data from raw
individual instrument measurements through to a combined standardised and
quality-assured processed data set. The Cloudnet processing scheme ensures
harmonisation of products across a relatively heterogeneous network. All
quicklooks are open and freely accessible at
_http://www.cloud-net.org/quicklooks/_.
It is envisaged that publication of curated datasets with DOI assignment will
commence as soon as possible. Currently, only the latest data version is
available through _http://www.cloud-net.org/data/_ due to the large data
volume requirements.
# ACTRIS Standards and metadata
ACTRIS standards and metadata systems are well-developed, with
parameter/variable standardization already existing in most cases. If this is
not the case, ACTRIS, as a leading community in this field of atmospheric
science, will work in collaboration with WMO-GAW, EMEP and other EU-funded
projects (such as ENVRIplus) in order to set the standards and foster
interoperability between both the large variety of data products developed
with ACTRIS itself, and with respect to external data centres.
## Standards and metadata for aerosol and trace gas near-surface data
All aerosol and trace gas near-surface data sets are archived and provided in
the NASA-Ames 1001 format.
### Regular quality-assured data
Standards, SOPs and recommendations for each near-surface variable measured
within ACTRIS are listed in the table below.
<table>
<tr>
<th>
**Variable**
</th>
<th>
**Reference SOP and recommendations**
</th> </tr>
<tr>
<td>
Particle light scattering coefficient
</td>
<td>
GAW report #200
</td> </tr>
<tr>
<td>
Particle light absorption coefficient
</td>
<td>
GAW report #200
</td> </tr>
<tr>
<td>
Particle number concentration
</td>
<td>
Wiedensohler et al., Atmos. Meas. Tech., 5, 657-685, 2012,
doi:10.5194/amt-5-657-2012
</td> </tr>
<tr>
<td>
Particle number size distributions (fine fraction)
</td>
<td>
Wiedensohler et al., Atmos. Meas. Tech., 5, 657-685, 2012,
doi:10.5194/amt-5-657-2012
</td> </tr>
<tr>
<td>
Particle number size distributions (coarse fraction)
</td>
<td>
ACTRIS protocol in preparation
</td> </tr>
<tr>
<td>
Cloud condensation nuclei number concentration
</td>
<td>
ACTRIS protocol in preparation
</td> </tr>
<tr>
<td>
Liquid Water Content
</td>
<td>
ACTRIS protocol in preparation, see also Guyot et al., Atmos. Meas. Tech.
Discuss., 8, 5511-5563, doi:10.5194/amtd-8-55112015, 2015.
</td> </tr>
<tr>
<td>
Particulate organic and elemental carbon mass concentrations (OC/EC)
</td>
<td>
EMEP/CCC (2014) Manual for sampling and chemical analysis.
Chapter 4.22 (Last rev. February 2014). URL:
_http://www.nilu.no/projects/ccc/manual/index.html_ . See also Cavalli et
al., Atmos. Meas. Tech., 3, 79-89, 2010, doi:10.5194/amt-3-79-2010
</td> </tr>
<tr>
<td>
Particulate size-resolved chemical composition (organic & inorganic
size-resolved mass speciation)
</td>
<td>
ACTRIS protocol in preparation. See also Ng et al., Aerosol Science and
Technology, 45:770-784, 2011,
doi:10.1080/02786826.2011.560211, and Fröhlich et al., Atmos.
Meas. Tech., 6:3225-3241, 2013, doi:10.5194/amt-6-3225-2013.
</td> </tr>
<tr>
<td>
**Variable**
</td>
<td>
**Reference SOP and recommendations**
</td> </tr>
<tr>
<td>
Particulate levoglucosan mass concentration
</td>
<td>
Yttri et al., Atmos. Meas. Tech., 8, 125–147, 2015. Further ACTRIS
recommendations in preparation.
</td> </tr>
<tr>
<td>
Volatile Organic Compounds (VOCs)
</td>
<td>
ACTRIS-FP7 Deliverable D4.9: Final SOPs for VOCs measurements
_http://www.actris.net/Portals/97/Publications/quality%20standar_
_ds/WP4_D4.9_M42_30092014.pdf_
</td> </tr>
<tr>
<td>
NOxy
</td>
<td>
ACTRIS-FP7 Deliverable D4.10: Standardized operating procedures
(SOPs) for NOxy measurements
_http://www.actris.net/Portals/97/Publications/quality%20standar_
_ds/WP4_D4.10_M42_140919.pdf_
</td> </tr> </table>
_**Metadata:** _ A comprehensive metadata system and description of each
ACTRIS near-surface variable is implemented in the topic data base EBAS. All
ACTRIS near-surface variables are reported to EBAS by using the reporting
templates recommended by the ACTRIS near-surface community, harmonized with
GAW-recommendations. The templates ensure that the measurements are reported
in accordance with the procedures for the employed instrument, and include all
the necessary metadata required to precisely describe the measurements,
including uncertainty/percentiles. In this way, all ACTRIS near-surface data
are accompanied by sufficient documentation of the measurements to give
in-depth information on the quality of the data. Information about the
reporting procedure and metadata items is openly accessible and available
through _http://ebas-submit.nilu.no_. Metadata are interconnected with GAWSIS,
and the ACTRIS data centre handling of metadata is INSPIRE- and WIS-ready.
### Near-real-time (NRT) data
Near-real-time (NRT) data flow is offered to the data originators as a daily
quality check for selected variables, with the possibility of an alert system
for outliers, instrumental failures and inconsistencies. NRT data collection
and dissemination is available for the near-surface ACTRIS observables
identified in Appendix I.
Participating stations submit their data as annotated raw data in hourly
submissions starting and ending at the turn of an hour. As an exception,
3-hourly submissions are accepted if indicated by limited connectivity with
the station. The raw data are auto-processed to hourly averages, while periods
with obvious instrument malfunctions are disregarded. Special sampling
conditions or transport episodes are not flagged. The processed NRT data are
available through the EBAS web interface or through auto-updated custom FTP
extracts.
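A minimal sketch of this auto-processing step is given below, assuming a pandas time series with a boolean malfunction flag; the column names and flagging logic are illustrative only, not the EBAS implementation.

```python
# Sketch: aggregate flagged raw NRT data to hourly means, dropping
# periods marked as instrument malfunction. Illustrative only.
import pandas as pd

def to_hourly_means(raw: pd.DataFrame) -> pd.DataFrame:
    """raw: DatetimeIndex, a 'value' column, and a boolean 'malfunction' flag."""
    clean = raw.loc[~raw["malfunction"], ["value"]]
    return clean.resample("1h").mean()
```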
## Standards and metadata for aerosol profiles
Aerosol profile data are archived and provided in netCDF format. All
published EARLINET data are in a CF (Climate and Forecast) 1.5 compliant
format; a migration of all historical data to this convention is planned.
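For illustration, the sketch below writes a minimal CF-style aerosol profile to netCDF with xarray; the variable, attribute values and file name are assumptions, not EARLINET's actual schema.

```python
# Sketch: write a CF-1.5-style profile variable to netCDF (xarray).
import numpy as np
import xarray as xr

altitude = np.arange(0.0, 10000.0, 60.0)    # placeholder grid, metres
backscatter = np.full_like(altitude, 1e-6)  # placeholder profile

ds = xr.Dataset(
    {"backscatter": (("altitude",), backscatter,
                     {"long_name": "aerosol backscatter coefficient",
                      "units": "m-1 sr-1"})},
    coords={"altitude": (("altitude",), altitude,
                         {"standard_name": "altitude", "units": "m"})},
    attrs={"Conventions": "CF-1.5"},
)
ds.to_netcdf("profile_example.nc")
```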
<table>
<tr>
<th>
**Variable**
</th>
<th>
**Reference SOP and recommendations**
</th> </tr>
<tr>
<td>
Aerosol backscatter coefficient profile
</td>
<td>
Bockmann et al., Appl. Opt. 2004
</td> </tr>
<tr>
<td>
Aerosol extinction coefficient profile
</td>
<td>
Pappalardo et al., Appl. Opt. 2004
</td> </tr>
<tr>
<td>
Lidar ratio profile
</td>
<td>
Pappalardo et al., Appl. Opt. 2004
</td> </tr>
<tr>
<td>
Ångström exponent profile
</td>
<td>
Pappalardo et al., Appl. Opt. 2004
</td> </tr>
<tr>
<td>
Backscatter-related Ångström exponent profile
</td>
<td>
Bockmann et al., Appl. Opt. 2004
</td> </tr>
<tr>
<td>
Particle depolarization ratio profile
</td>
<td>
ACTRIS-FP7 Deliverable D2.7, see also Freudenthaler et al., Tellus, 2008
</td> </tr>
<tr>
<td>
Planetary boundary Layer
</td>
<td>
Matthias et al., JGR 2004
</td> </tr> </table>
_**Metadata:** _ All aerosol profile data are accompanied by respective
metadata reporting information about the station, the system, and the timing
of the measurements. Aerosol profile data sets reported to the ACTRIS data
centre can be the results of regular operation of the EARLINET network, but
also related to specific campaigns and joint research activities. Homogeneous
and well-established quality of data originating from different systems is
assured through a rigorous quality assurance program addressing both
instrument performance and evaluation of the algorithms. Information about the
QA programme is summarized in Pappalardo et al., AMT, 2014, openly and freely
available at _http://www.atmos-meas-tech.net/7/2389/2014/amt-7-2389-2014.html_.
The ACTRIS-2 improvement of the SCC is a step towards complete harmonization
of the aerosol profile data quality.
During ACTRIS-2, protocols and quality-check procedures will be further
optimized in NA2, in particular for new products, and data QC tools will be
developed in NA2 in collaboration with the data centre, checking the
consistency of the optical properties in the data and comparing them with
climatological data. The SCC and all QC tools will be available to all
potential users of ACTRIS data, both internal and external.
## Standards and metadata for cloud profiles
### Quality-assured data
Cloud profiles are archived and provided in netCDF format, with CF–compliant
metadata.
The baseline SOPs and recommendations for Cloudnet variables are given in
Illingworth et al. (2007), with updates given in ACTRIS-FP7 Deliverable D5.10.
<table>
<tr>
<th>
**Variable**
</th>
<th>
**Reference SOP and recommendations**
</th> </tr>
<tr>
<td>
Cloud and aerosol target classification
</td>
<td>
Illingworth et al., BAMS, 2007
</td> </tr>
<tr>
<td>
Drizzle products
</td>
<td>
ACTRIS-FP7 Deliverable D5.7, see also O’Connor et al., JTECH, 2005
</td> </tr>
<tr>
<td>
Ice water content
</td>
<td>
Hogan et al., JAMC, 2006
</td> </tr>
<tr>
<td>
Liquid water content
</td>
<td>
Illingworth et al., BAMS, 2007
</td> </tr>
<tr>
<td>
Liquid water path
</td>
<td>
MWRNET, _http://cetemps.aquila.infn.it/mwrnet/_ see also Gaussiat et al.,
JTECH, 2007
</td> </tr>
<tr>
<td>
Higher-level metrics
</td>
<td>
ACTRIS-FP7 Deliverable D5.10
</td> </tr> </table>
_**Metadata:** _ Cloud profile data are accompanied by metadata describing the
station, instrument combination and supporting ancillary measurements, and
processing software version. Metadata describing instrument calibration
history will be implemented within ACTRIS-2. Harmonization and rigorous
quality control for data originating from different instruments and instrument
combinations is achieved through the common use of the Cloudnet processing
software, summarised in Illingworth et al. (2007). All metadata is propagated
through to every cloud product derived from the measurements; this requirement
will be mandated for all new products derived during ACTRIS-2. The Cloudnet
processing scheme, and the interface description for generating new products,
is freely available for all potential users of ACTRIS data, whether internal
or external.
### Near-real-time (NRT) data
All cloud NRT data is processed in the same manner as for quality-assured
data, together with all accompanying metadata. However, subsequent instrument
calibration may require reprocessing to generate a revised product which uses
the updated calibration values.
# Sharing of ACTRIS data sets and data products
## Access to ACTRIS data sets and data products
The ACTRIS Data Centre compiles, archives and provides access to all ACTRIS
data, and the ACTRIS data portal (_http://actris.nilu.no_) gives free and
open access to all data resulting from the activities of the ACTRIS
infrastructure, including advanced data products resulting from ACTRIS
research activities. Every dataset created within ACTRIS is owned by the
ACTRIS partner(s) who created it. The ACTRIS Data Policy
(_http://actris.nilu.no/Content/Documents/DataPolicy.pdf_) regulates the
sharing and use of ACTRIS data; see section 5.3.
The ACTRIS data portal (_http://actris.nilu.no_) provides access to ACTRIS
data sets. This is a virtual research environment with access to all data from
ACTRIS platforms and higher-level data products resulting from scientific
activities. The portal is structured as a metadata catalogue that searches the
topical data bases, enabling data download from the primary archive and
combination of data across the primary data repositories. The metadata
catalogue is updated every night, providing access to all recent ACTRIS data.
All data are archived in the topical data repositories, to 1) maintain
access to the latest version of the data, 2) avoid duplication and 3) keep full
traceability of the data sets.
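Conceptually, the nightly catalogue update can be pictured as harvesting metadata records from each topical repository and rebuilding a combined catalogue, as in the sketch below; the endpoints and record format are hypothetical, not the portal's actual interfaces.

```python
# Conceptual sketch of a nightly metadata harvest across topical repositories.
# Endpoints and record format are hypothetical.
import json
import urllib.request

TOPICAL_ENDPOINTS = {
    "near-surface": "http://ebas.nilu.no/metadata.json",             # hypothetical
    "aerosol-profiles": "http://access.earlinet.org/metadata.json",  # hypothetical
    "cloud-profiles": "http://www.cloud-net.org/metadata.json",      # hypothetical
}

def harvest_catalogue() -> list[dict]:
    catalogue = []
    for topic, url in TOPICAL_ENDPOINTS.items():
        with urllib.request.urlopen(url) as resp:
            for record in json.load(resp):
                record["topic"] = topic  # tag origin so searches can filter
                catalogue.append(record)
    return catalogue
```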
The cooperation of ACTRIS with EUDAT has already started and will proceed
through ENVRIplus, providing a further instrument for discovering the ACTRIS
data sets.
### Aerosol and trace gas near-surface data repository
The ACTRIS data repository for all aerosol and trace gas near-surface data is
EBAS: _http://ebas.nilu.no_. The web portal is set up on a dedicated Linux
server and is implemented in Python. EBAS is an atmospheric database
infrastructure in which open access to research data has been developed over
almost 45 years; the data infrastructure is developed, operated, and
maintained by NILU - Norwegian Institute for Air Research. The main objective
of EBAS is to handle, store and disseminate atmospheric composition data
generated by international and national frameworks for various types of user
communities.
Currently, EBAS is a data repository for ACTRIS, and also hosts the World Data
Centre of aerosols under WMO Global Atmosphere Watch (GAW) and data from
European Monitoring and Evaluation Programme (EMEP) under the UN Convention
for Long-Range Transport of Air Pollution (CLRTAP), among other frameworks and
programmes.
No embargo times apply to these data; all data are reported to EBAS as early
as possible, and no later than 31 July of the year following the measurement.
The data sets are made available to all users as soon as possible after
quality control and quality assurance.
### Aerosol profile data repository
The ACTRIS data repository for all aerosol profile data is
_http://access.earlinet.org_ . The aerosol profile database is hosted,
maintained and operated by CNR-IMAA (National Research Council-Institute of
Methodologies for Environmental Analysis) where the Single Calculus Chain for
the automatic processing of lidar data for aerosol optical properties
retrieval was designed, optimized and operated for the whole network. CNR-IMAA
hosts different advanced products developed by EARLINET in the past for
providing access to external users (volcanic eruption products, satellite
validation datasets and NRT EARLINET subsets).
Aerosol profile data are regularly published (every 2 years) on the CERA
database, following the first publications of the EARLINET database. This
assures the discoverability of the data through the association of a DOI with
the data and the archiving on CERA, a recognized official repository.
### Cloud profile data repository
The ACTRIS data repository for all cloud profile data is _http://www.cloud-
net.org_ . The cloud profile database is currently hosted, maintained and
operated by the University of Reading, but is in transition to FMI (Finnish
Meteorological Institute). The database provides the capability for both in-
house processing of instrument data, and collection of on-site processed data
through distributed use of the Cloudnet processing scheme. Both NRT access
(e.g. model evaluation) and full quality-assured archived data access is
available for internal and external users.
No embargo is applied to data quicklooks, available in NRT when possible. An
embargo is generally only applied to data when a site is in testing mode (new
instrumentation or re-calibration of existing instrumentation). Otherwise all
data sets are immediately available in NRT-mode (no QA) or as soon as quality
control/assurance has been applied. During the course of ACTRIS-2 quality-
assured archived datasets will be published in a recognized official
repository with an associated DOI.
## Access to secondary data and combined data products
ACTRIS secondary data sets are stored in a dedicated catalogue in the ACTRIS
Data Centre or specified in the ACTRIS topical databases to provide long-term
access for all users. Access to these data sets and products is made available
through the ACTRIS data portal: _http://actris.nilu.no_.
The ICARE Data and Services Centre is hosted by the University of Lille in
partnership with CNRS and CNES. ICARE routinely collects various data sets
from third party data providers (e.g., space agencies, meteorological
agencies, ground-based observation stations) and generates a large number of
derived products. All data sets are available for download at
_http://www.icare.univ-lille1.fr/catalogue_ through direct FTP access or
web-based services, upon receipt or upon production, some of them in NRT. In
addition, ICARE provides visualisation and analysis tools (e.g.,
_http://www.icare.univ-lille1.fr/browse_), and tools to co-locate and subset
data sets in the vicinity of ground-based observation networks
(_http://www.icare.univ-lille1.fr/extract_). Existing tools will be fine-tuned
to meet specific ACTRIS requirements. Access to selected data and services
will be facilitated through the ACTRIS portal.
No embargo is applied to data quicklooks. Most data sets are freely available
for download upon registration. Some restrictions in data access or data use
may be inherited from original data providers or algorithm PIs for
experimental products generated at ICARE.
## The ACTRIS Data Policy
The ACTRIS Data Policy regulates the sharing of ACTRIS data and includes
information on dissemination, sharing and access procedures for various types
of data and various user groups. The ACTRIS Data Policy is publicly
available from the ACTRIS web site, from the ACTRIS Data Centre, and here:
_http://actris.nilu.no/Content/Documents/DataPolicy.pdf_
The 1st version of the ACTRIS Data Policy was established under ACTRIS-FP7 in
June 2012. The 2nd version was approved by the ACTRIS-2 SSC in September 2015.
# Archiving and preservation of ACTRIS data sets
The main structure and installations of the ACTRIS Data Centre is located at
_NILU - Norwegian Institute for Air Research_ , Kjeller, Norway. NILU hosts
EBAS archiving all near-surface data sets, in addition to the ACTRIS Data
Portal. The other installations are the EARLINET DB at _National Research
Council - Institute of Environmental Analysis_ (CNR), Tito Scalo, Potenza,
Italy, the satellite data components at _University of Lille_ , Villeneuve
d'Ascq, France, and the cloud profile data at _Reading University_ , Reading,
UK. There will be a transfer of the installation from Reading University to
FMI (Finnish Meteorological Institute) by May 2016.
## Aerosol and trace gas near-surface data
EBAS is a relational database (Sybase) developed in the mid-1990s. Data from
primary projects and programmes, such as ACTRIS, GAW-WDCA, EMEP, AMAP, are
physically stored in EBAS. All data in EBAS are, in addition, stored at a
dedicated disk in the file tree at NILU. This includes all 3 levels (0-1-2) of
data.
The complete data system is backed up regularly. This includes incremental
backups of the database 6 times per week, and one weekly backup of the full
database to a server in a neighbouring building, to ensure as complete as
possible storage of all data for future use in case of e.g. fire or other
damage to the physical building. File submission is conducted by ftp. A
separate ftp area is allocated to incoming files, all activities herein are
logged in a separate log file, and backups are taken every 2 hours. An alert
system is implemented to ensure warning messages if there are problems during
file transfer from the data originators to the data centre.
Approximately 455 separate new comprehensive files, including metadata, with
annual time series of medium to high time resolution (seconds to weeks) are
expected per year. A significant growth in this number is not expected on an
annual scale. In total, this will amount to ca 10 GB/year from ca 150,000
single-column files, including both raw data and auxiliary parameters.
EBAS is based on more than 40 years of data management. For the last 10 years,
there has been European project-type cooperation from FP5 to Horizon 2020,
with the EMEP and GAW programmes since the 1970s as the foundation. Sharing
visions and goals with the supporting long-term, policy-driven frameworks has
ensured long-term funding for the core database infrastructure. Currently, a
long-term strategy for providing access to all ACTRIS data and other related
services is being explored through the establishment of ACTRIS as a RI. For
this reason, ACTRIS is applying for a position on the next ESFRI (European
Strategy Forum on Research Infrastructures) roadmap for Research
Infrastructures.
## Aerosol profiles
The storage infrastructure is composed of two servers and a SAN (Storage Area
Network). One server hosts the EARLINET PostgreSQL database and the other is
used to interface both end-users and data submitters to the EARLINET
database. This second server is connected to an internal SAN on which the data
submitted by the users are safely stored. A daily backup of the EARLINET
database is made automatically and stored on the SAN.
The current size of the PostgreSQL EARLINET database is about 1 GB. The total
amount of data submitted (NetCDF EARLINET files) is about 900 MB. The
estimated growth rate of the database is currently 100-200 MB/year. However, a
significant growth in the number of files to be collected is expected because
of: the use of the Single Calculus Chain for data submission; the inclusion
into the ACTRIS aerosol profile database of new products (pre-processed data,
NRT optical property profiles, aerosol layer properties and multi-wavelength
datasets); an increase in the number of EARLINET stations; and an increase in
the number of EARLINET h24 stations. We estimate that, by the end of the
ACTRIS-2 project, the ACTRIS aerosol profile database could grow at a rate of
about 12-15 GB per year.
The EARLINET database is maintained by the National Research Council of Italy
with a long-term commitment to archiving and preservation. The archiving on
the CERA database is a further measure for assuring the availability of the
data through redundancy of the archive. Further specific services will be
developed in ACTRIS-2, such as aerosol profile quality-check tools and
processing through the SCC. A long-term strategy for providing access to data
and other related services is being explored through the establishment of
ACTRIS as a RI; for this reason, ACTRIS is applying for a position on the next
ESFRI (European Strategy Forum on Research Infrastructures) roadmap for
Research Infrastructures.
## Cloud profiles
The Cloudnet database is a file-based database, due to the nature of the
typical use-case and the data volume. The infrastructure comprises an FTP
server for incoming data streams, an rsync server for outgoing data streams, a
processing server, and a webserver, with data storage distributed across a
series of virtual filesystems including incremental backups. Due to the data
volume, most sites also hold a copy of their own processed data, effectively
acting as a second, distributed database and an additional backup.
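For illustration only, an outgoing rsync step of this kind could be wrapped as below; the paths and destination host are hypothetical and do not reflect the actual Cloudnet configuration.

```python
# Sketch: push processed products to a mirror via rsync (hypothetical paths).
import subprocess

def push_products(src="/data/cloudnet/products/",
                  dest="mirror.example.org:/cloudnet/products/"):
    # -a preserves attributes, -v is verbose, -z compresses in transit
    subprocess.run(["rsync", "-avz", src, dest], check=True)
```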
The current size of the database is about 10 TB and the volume is expected to
grow by close to 0.5 TB per year with the current set of stations and the
standard products. However, there will be a significant increase in volume
when the planned move to multi-peak and spectral products is undertaken; this
is in addition to a slight increase arising through the creation of new
products. The transfer of the database to FMI will ensure the long-term
commitment for archiving and preservation. Publication of QA datasets will aid
dataset preservation.
# ACTRIS Data Centre Organisation and personal resources
The ACTRIS Data Centre involves personnel with broad and complementary
backgrounds and competences. In total, more than 25 persons are involved in
data management, full or part time.
A crucial feature of the ACTRIS data centre is the use of topical data
centres run by scientists with expertise in the relevant field. This ensures
not only proper curation of the data, which makes the data usable by both
experts and non-experts, but also a close connection to the data provider and
user communities. A topical data centre run by scientists with data curation
expertise serves as an identifying element built jointly with the data
provider community, and as a connecting element between data providers and
users. The fundamental structure of the data centre is based on efficient use
of complementary competences. This includes the involvement of senior
scientists, young scientists, engineers, programmers, and database developers.
A data centre serving several related communities, e.g. scientific and
regulatory ones, facilitates exchange and collaboration between them.
Additionally, the involvement of senior scientists working actively within
various scientific communities is another prerequisite, to ensure the links to
various scientific user groups, the distribution of data products, and
user-oriented development of the data centre.
The ACTRIS data portal acts as an umbrella for the topical data centres,
allowing search, download, and common visualisation of the data archived at
the topical data centres. Perhaps even more importantly, it will also connect
ACTRIS with other European and international research data centres by offering
the same services for the data stored there, making use of the latest
interoperability specifications. Also at the administrative level, the ACTRIS
portal represents the infrastructure in the relevant bodies working on
unifying data management, and relays new developments to the whole
infrastructure.
# Appendix I: List of ACTRIS variables and recommended methodology
<table>
<tr>
<th>
**ACTRIS Aerosol particle variables**
**Variable name**
</th>
<th>
**_Recommended methodology_ **
</th>
<th>
**Validated _data_ **
</th>
<th>
**_NRT_ **
</th>
<th>
**Typical time res.**
</th>
<th>
**Higher time res. available**
</th> </tr>
<tr>
<td>
**Near-surface aerosol particle variables**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Particle light scattering coefficient
</td>
<td>
Integrating Nephelometer
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particle light backscattering coefficient
</td>
<td>
Integrating Nephelometer
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particle number size distribution
</td>
<td>
Mobility particle size spectrometer (e.g. differential mobility particle size,
scanning mobility particle sizer) or Optical particle size spectrometer (e.g.
optical particle counter, optical particle sizer) or Aerodynamic particle size
spectrometer (e.g. aerodynamic particle sizer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particle light absorption coefficient
</td>
<td>
Filter Absorption Photometer (e.g. Particle Soot/Absorption
Photometer, Multi-Angle Absorption Photometry, Aethalometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particle number concentration
</td>
<td>
Condensation Particle Counter
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Cloud condensation nuclei number concentration
</td>
<td>
Condensation Cloud Nuclei Counter
</td>
<td>
X
</td>
<td>
X (later)
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Hygroscopic growth factor
</td>
<td>
Hygroscopicity Tandem Differential Mobility Analyzer
</td>
<td>
X
</td>
<td>
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particulate organic and elemental carbon mass concentrations (OC/EC)
</td>
<td>
Filter sampling + evolved gas analysis with optical correction for charring
(thermal-optical analysis)
</td>
<td>
X
</td>
<td>
</td>
<td>
1d-1week
</td>
<td>
</td> </tr>
<tr>
<td>
Particulate size-resolved chemical composition
(organic & inorganic size-resolved mass speciation)
</td>
<td>
Aerosol Mass Spectrometer, Aerosol Chemical Speciation Monitor
</td>
<td>
X
</td>
<td>
</td>
<td>
1h
</td>
<td>
X
</td> </tr>
<tr>
<td>
Particulate levoglucosan mass concentration
</td>
<td>
Filter sampling + offline methodology
</td>
<td>
X
</td>
<td>
</td>
<td>
1d-1week
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**ACTRIS near-surface trace gas variables**
**Variable**
</th>
<th>
**Recommended methodology**
</th>
<th>
**Validated data**
</th>
<th>
**NRT**
</th>
<th>
**Approx. time resolution**
</th> </tr>
<tr>
<td>
NMHCs (C2-C9 hydrocarbons) _*See detailed list_
</td>
<td>
on-line: GC-FID, GC-MS, GS-FID/MS, GC-Medusa, PTR-MS off-line traps: ads-tubes
off-line: steel canisters + glass flasks, combined with the on-line
instruments in laboratories
</td>
<td>
X
</td>
<td>
</td>
<td>
1 h-2/week
</td> </tr>
<tr>
<td>
OVOCs (oxidised volatile organic compounds such as aldehydes, ketones,
alcohols) _See detailed list of the compounds at the end of the document_
</td>
<td>
on-line: GC-FID, GC-MS, GS-FID/MS, GC-Medusa, PTR-MS off-line traps: ads-
tubes, DNPH-cartridge-HPLC
</td>
<td>
X
</td>
<td>
</td>
<td>
1 h-2/week
</td> </tr>
<tr>
<td>
Terpenes (biogenic hydrocarbons with a terpene structure) _*See detailed list
at the end of the document_
</td>
<td>
on-line (GC-FID, GC-MS, GS-FID/MS, GC-Medusa) and off-line traps (adstubes)
</td>
<td>
X
</td>
<td>
</td>
<td>
1 h-2/week
</td> </tr>
<tr>
<td>
NO
</td>
<td>
NO-O3 chemiluminescence
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1 min - 1 h
</td> </tr>
<tr>
<td>
NO2
</td>
<td>
indirect: NO-O3 chemiluminescence coupled to photolytic converter
(Xenon lamp (PLC) or diode (BLC)),
direct: cavity ring down spectroscopy (CRDS), laser induced fluorescence
(LIF), Cavity Attenuated Phase Shift Spectroscopy (CAPS)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1 min - 1 h
</td> </tr>
<tr>
<td>
NOy (NO, NO2, NO3, N2O5, HNO2, HNO3, PAN, organic nitrates and aerosol
nitrates; the sum of oxidized nitrogen species with an oxidation number >1,
both organic and inorganic)
</td>
<td>
indirect: NO-O3 chemiluminescence coupled to gold converter
</td>
<td>
X
</td>
<td>
X
</td>
<td>
1 min - 1 h
</td> </tr> </table>
<table>
<tr>
<th>
**ACTRIS Aerosol particle variables**
**Variable name**
</th>
<th>
**Recommended methodology**
</th>
<th>
**Validated data**
</th>
<th>
**NRT / Approx. time resolution**
</th> </tr>
<tr>
<td>
**Column and profile aerosol particle variables (remote particle observations
from ground)**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Aerosol backscatter coefficient profile
</td>
<td>
Backscatter lidar / Raman lidar/High spectral resolution lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Aerosol extinction coefficient profile
</td>
<td>
Raman lidar / High spectral resolution lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Lidar ratio profile
</td>
<td>
Raman lidar / High spectral resolution lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Ångström exponent profile
</td>
<td>
Multiwavelength Raman lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Backscatter-related Ångström exponent profile
</td>
<td>
Multiwavelength backscatter lidar / Raman lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Particle depolarization ratio profile
</td>
<td>
Depolarization backscatter lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Particle layer geometrical properties (height and thickness)
</td>
<td>
Backscatter lidar / Raman lidar/ High spectral resolution lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Particle layer optical properties (extinction, backscatter, lidar ratio,
Ångström exponent, depolarization ratio, optical depth)
</td>
<td>
Multiwavelength Raman lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr>
<tr>
<td>
Aerosol optical depth (column)
</td>
<td>
Sun/sky photometer
</td>
<td>
x
</td>
<td>
x
</td> </tr>
<tr>
<td>
Planetary boundary layer height
</td>
<td>
Backscatter lidar / Raman lidar/ High spectral resolution lidar
</td>
<td>
X
</td>
<td>
0.5 h, 2+1 measur. per week + special events + CALIPSO overpasses (2.5 h)
</td> </tr> </table>
<table>
<tr>
<th>
**ACTRIS cloud variables**
**Variable**
</th>
<th>
**Recommended methodology**
</th>
<th>
**Validated data**
</th>
<th>
**NRT**
</th>
<th>
**Approx. time / height resolution**
</th> </tr>
<tr>
<td>
**Column and profile cloud variables (remote observations from ground)**
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
cloud/aerosol target classification
</td>
<td>
cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave
radiometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
drizzle drop size distribution
</td>
<td>
doppler cloud radar, lidar/ceilometer, NWP model or radiosonde (optional:
microwave radiometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
drizzle water content
</td>
<td>
doppler cloud radar, lidar/ceilometer, NWP model or radiosonde (optional:
microwave radiometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
drizzle water flux
</td>
<td>
cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave
radiometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
ice water content
</td>
<td>
cloud radar, lidar/ceilometer, NWP model or radiosonde (optional: microwave
radiometer)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
liquid water content
</td>
<td>
cloud radar, lidar/ceilometer, microwave radiometer
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds / 60 metres
</td> </tr>
<tr>
<td>
liquid water path
</td>
<td>
dual- or multi-frequency microwave radiometers (ceilometer useful for
identifying clear-sky)
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds
</td> </tr>
<tr>
<td>
rainrate
</td>
<td>
drop-counting raingauge or disdrometer preferable to tipping bucket raingauges
</td>
<td>
X
</td>
<td>
X
</td>
<td>
30 seconds
</td> </tr>
<tr>
<td>
**Near-surface cloud variables**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Liquid Water Content
</td>
<td>
In-situ cloud-microphysical sensors
</td>
<td>
X
</td>
<td>
</td>
<td>
5 min
</td> </tr> </table>
<table>
<tr>
<th>
**Detailed list of trace gases included in ACTRIS -** _Alkanes, Alkenes,
Alkynes_
</th>
<th>
</th> </tr>
<tr>
<td>
**Alkanes**
</td>
<td>
ethane propane
2-methylpropane n-butane
</td>
<td>
2-methylhexane n-heptane 2-2-4trimethylpentane 3-methylheptane
</td>
<td>
**Alkenes**
</td>
<td>
ethene
</td>
<td>
**Alkynes**
</td>
<td>
ethyne
</td> </tr>
<tr>
<td>
propene
</td>
<td>
propyne
</td> </tr>
<tr>
<td>
trans-2-butene
</td>
<td>
1-butyne
</td> </tr>
<tr>
<td>
1-butene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2-2-dimethylpropane 2-methylbutane n-pentane cyclopentane methyl-cyclopentane
</td>
<td>
n-octane n-nonane n-decane methyl-cyclohexane n-undecane
</td>
<td>
2-methylpropene
</td> </tr>
<tr>
<td>
cis-2-butene
</td> </tr>
<tr>
<td>
1-3-butadiene
</td> </tr>
<tr>
<td>
3-methyl-1-butene
</td> </tr>
<tr>
<td>
2-methyl-2-butene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2-2-dimethylbutane
2-3-dimethylbutane
2-methylpentane 3-methylpentane cyclohexane n-hexane methyl-cyclohexane
</td>
<td>
n-dodecane n-tridecane n-tetradecane n-pentadecane n-hexadecane
</td>
<td>
trans-2-pentene
</td>
<td>
</td> </tr>
<tr>
<td>
cyclopentene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
1-pentene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
cis-2-pentene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
1-hexene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
isoprene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
2-2-3-trimethylbutane
2-3-dimethylpentane
2-2-dimethylpentane
2-4-dimethylpentane
3-3-dimethylpentane
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
3-methylhexane
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
**Detailed list of trace gases included in ACTRIS** _\- OVOCs, Terpenes,
Aromatics_
</th>
<th>
</th> </tr>
<tr>
<td>
**OVOCs**
</td>
<td>
methanol methylethylketone
</td>
<td>
**Terpenes**
</td>
<td>
alpha-thujene
</td>
<td>
**Aromatics**
</td>
<td>
benzene
</td> </tr>
<tr>
<td>
ethanol methacrolein
</td>
<td>
tricyclene
</td>
<td>
toluene
</td> </tr>
<tr>
<td>
isopropanol methylvinylketone
</td>
<td>
alpha-pinene
</td>
<td>
ethylbenzene
</td> </tr>
<tr>
<td>
n-propanol glyoxal
</td>
<td>
camphene
</td>
<td>
m-p-xylene
</td> </tr>
<tr>
<td>
n-butanol methylglyoxal
</td>
<td>
sabinene
</td>
<td>
o-xylene
</td> </tr>
<tr>
<td>
methyl-butanol butylacetate
</td>
<td>
myrcene
</td>
<td>
1-3-5-trimethylbenzene
</td> </tr>
<tr>
<td>
formaldehyde acetonitrile
</td>
<td>
beta-pinene
</td>
<td>
1-2-4-trimethylbenzene
</td> </tr>
<tr>
<td>
acetaldehyde
</td>
<td>
</td>
<td>
alpha-phellandrene
</td>
<td>
1-2-3-trimethylbenzene
</td> </tr>
<tr>
<td>
n-propanal
</td>
<td>
</td>
<td>
3-carene
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
n-butanal
</td>
<td>
</td>
<td>
alpha-terpinene
</td> </tr>
<tr>
<td>
pentanal
</td>
<td>
m-cymene
</td> </tr>
<tr>
<td>
hexanal
</td>
<td>
cis-ocimene
</td> </tr>
<tr>
<td>
heptanal
</td>
<td>
p-cymene
</td> </tr>
<tr>
<td>
octanal
</td>
<td>
limonene
</td> </tr>
<tr>
<td>
decanal
</td>
<td>
beta-phellandrene
</td> </tr>
<tr>
<td>
undecanal
</td>
<td>
eucalyptol
</td> </tr>
<tr>
<td>
benzaldehyde
</td>
<td>
gamma-terpinene
</td> </tr>
<tr>
<td>
acrolein
</td>
<td>
terpinolene
</td> </tr>
<tr>
<td>
acetone
</td>
<td>
camphor
</td> </tr> </table>
# Executive Summary
Work packages 1 and 6 in the Climateurope project (original name ECOMS2) are
responsible for data management in the project. Work package 1 (WP1) has
produced a very simple Data Management Plan (DMP) in line with the 'Guidelines
on Data Management in Horizon 2020', and along with WP6 is responsible for
monitoring adherence to this plan. The DMP is aligned with the Dissemination
and Exploitation Plan for the project (Deliverable D6.2).
# Detailed Report
## Introduction
Data creation and management is not a key or central component of
Climateurope. For example, there will be no generation of data products or
software output. Therefore, for the purposes of this report, the DMP template
issued by the European Commission will not be used here. However, the project
will provide aggregated information. Therefore, Climateurope will take part in
the Horizon 2020 Open Research Data Pilot (ORDP).
The following types of data, information and materials are anticipated by the
project, so must be considered within the context of data management:
* Network – communities and individuals that Climateurope will engage with.
* Reports on current and recommended products, services and activities.
* Public website.
* Festivals.
* Science-stakeholder communication platform (internet communication platform).
## Network
One of the major outcomes of Climateurope will be the creation of the managed
network whose composition is detailed in the Climateurope Description of
Action.
_Treatment of personal data:_
In order to create and manage the network, plus gather information regarding
Earth system modelling (ESM) and climate services (CS), a certain amount of
personal data will be gathered. For instance, surveys and interviews will be
conducted with various network members. Milestone 10 details how any personal
data will be protected (including collection, sharing and storage), and these
details will not be replicated here.
_Data/information obtained from the network:_
All of the information gathered will be made publicly available, and this
will be made clear to the network members at the start of communications with
them. The 'raw' information gathered will be stored in the Internet
Communication Platform (see Section 2.6), which is a platform with limited
access.
## Reports
All formal reports from ECOMS2 (also declared as deliverable reports) will be
made openly and publicly available. These include:
* Three reports on the state of Earth system modeling (ESM) and climate service (CS) provision in Europe (WP3);
* Four reports on the new challenges and emerging needs, plus future recommendations on research and innovation priorities, for ESM and CS (WP4);
* 'State of the European Earth system modelling and climate services' publication series (WP6).
The details of the format of these reports and their methods for dissemination
will be agreed by the ECOMS2 General Assembly.
The data and information to input and form these reports will be gathered from
members of the network (see Section 2.2).
## Website
The website ( _www.climateurope.eu_ ) will act as the _public_ interface to
the project. It will:
* Provide background information on the project itself;
* Provide information on events;
* Provide overview and analysis by linking to reports, websites, portals, services etc.
* Provide a platform for interaction with users.
## Festivals
There will be three festivals held during Climateurope, which will showcase
the work of the project. The associated literature and presentations from the
festivals will be made publicly available on the project website. Any
personal data gathered as part of the organisation and running of the
festivals will be treated in accordance with Section 2.2 and Milestone 10.
## Internet Communication Platform (ICP)
The ICP will serve as a working tool for both internal communication among the
project members and communication with/among stakeholders and experts in the
expert groups (members of the network). The ICP will provide a Wiki - a space
for sharing documents, document version control system, discussion platform,
etc. Stakeholders/experts from outside the project can get access to a limited
part of the ICP.
The Wiki will be realized in such a way that it allows separate working
environments for the project members (internal communication) and for
expert/stakeholder groups, where external persons and project members can
discuss, exchange documents, etc. The Wiki of any single group will give full
freedom to the group members: each member of the group will be able to
add/modify content in the Wiki.
**1\. INTRODUCTION**
The present Data Management Plan (hereinafter DMP) details what data the
project will generate, whether and how it will be exploited or made accessible
for verification and re-use, and how it will be curated and preserved. This
document should be considered in combination with:
* Articles 9.1, 9.2, 9.3 and attachment 1 of the Consortium Agreement;
* Section 3 (Articles 23, 24, 25, 26, 27, 28, 29, 30 and 31) of the Grant Agreement No. 636078.
The DMP is organised per Work Package (WP) in order to concretely describe the
contribution of each WP to the final outcome as well as the spin-off potential
of each activity.
In order to understand the data that the project will generate, a brief
overview of the project is given below:
IT2Rail is a first step towards achieving the objectives of the long-term
Shift2Rail Programme, more specifically the 4th Innovation Programme (IP4)
focusing on “IT Solutions for Attractive Railway Services”. The overall aim is
to provide a new seamless travel experience giving access to a complete
multimodal travel offer which connects the first and last mile to long
distance journeys by:
  * Transforming global travel interactions into a fully integrated and customised experience;
  * Providing a door-to-door (D2D) multimodal travel experience, through services distributed by multiple providers;
  * Helping operators to adapt their level of service, better to satisfy customer expectations and optimise their own operations.
Even though the scope of IT2Rail is reduced in comparison to IP4, the work is
organised around the six Technology Demonstrators (TDs) that can be found in
IP4 and are essentially equivalent to the Work Packages 1-6 shown in Figure 1.
**Figure 1: Project Organisation**
  * WP1 will provide IT2Rail functional applications with a ‘web of transportation data’ abstraction of the distributed resources they need to operate. The abstraction is constructed by using semantic web technology open standards.
  * WP2 will:
    * Establish the architecture for managing and aggregating distributed travel shopping data and distributed journey planning expertise;
    * Create the basis for a one-stop shop for co-modally marketed transport products and services whose combinations can answer to door-to-door mobility queries;
    * Allow for the presentation of transport service attributes and facilities answering to Customer preferences in connection with carbon footprint and ‘reduced mobility’ needs;
    * Interface with WP1 to overcome interoperability obstacles, so protecting the Customer from the fragmentation of messaging and codification standards which make travel shopping so difficult and risky in today’s fragmented travel marketplace.
  * WP3 will extend the interoperability between modes, operators and systems by providing travellers with the possibility to book and pay in a ‘one-click’ action, complete multimodal door-to-door travel journeys and to validate their travel entitlements across heterogeneous transport systems. It also includes ticketing activities.
  * WP4 will monitor irregularities in transport and respond to such anomalies in on-line mode, including suggestions of alternative solutions.
  * WP5 will develop the key concepts of unique Traveller identifier, smart device and virtualised data store to bolster the attractiveness of the Rail transport ecosystem. An all-encompassing user front end offering access to a wealth of multimodal services and products will promote a new door-to-door travelling experience.
  * WP6 will focus on leveraging social, mobile, structured and unstructured data to obtain valuable, actionable insights that allow rail operators, product/service providers and Traveller/Transport Enterprises to make better decisions in order to increase quality of service and revenues, to better adapt their level of service to the passenger demand, and to optimise their operations in order to bring and retain more people on train and urban mobility.
**2\. DATA MANAGEMENT AT PROJECT LEVEL**
**2.1 DATA COLLECTION & DEFINITION**
The responsibility to define and describe all non-generic data sets specific
to an individual work package shall be with the WP leader.
The WP leaders shall formally review and update the data sets related to their
WP.
All modifications/additions to the data sets shall be provided to the IT2Rail
Coordinator (UNIFE) for inclusion in the DMP.
**2.2 DATA ARCHIVING & PRESERVATION**
At the formal project closure, all the data material that has been collated or
generated within the project and registered on the Cooperation Tool (CT) shall
be copied and transferred to a digital archive.
This archive shall reside in the UNIFE premises located in Brussels, Belgium.
UNIFE provides an archive facility with structured systems for document query,
retrieval and long-term preservation.
### 2.2.1 Data Security & Integrity
The IT2Rail project will be subject to the same levels of data security as
applied to normal operations within UNIFE.
UNIFE relies upon its information and the systems that manage it to carry out
its business operations; hence protecting information is paramount in
supporting UNIFE activities in meeting both its objectives and regulatory
obligations.
Maintaining the security of information manages the risks more effectively,
helping to prevent interruption of operational activities.
Without the correct protection measures, there is a risk of vulnerability to
those who are intent on harming or who wish to control or steal assets.
No data uploaded to the CT shall be encrypted, irrespective of whether the
data items have been identified for future archiving or not.
### 2.2.2 Document Archiving
The document structure and type definition will be preserved as defined in the
document breakdown structure and work package groupings specified for the CT.
At the time of document creation (uploading to CT) the document will be
“flagged” as a candidate data set for future archiving.
The process of archiving will be based on a data extract performed within 12
weeks of the formal closure of the IT2Rail project.
### 2.2.3 Data Transfer
The data transfer mechanism between the CT and the data archive repository
shall be performed as a single transaction.
The physical means of data transfer shall be jointly reviewed between the
Project Coordinator (UNIFE) and the CT system provider.
**2.3 FILE NAMING CONVENTIONS**
All files, irrespective of the data type, shall be named in accordance with
the following document code structure:
The identification code contains the following six sections: **[Project] -
[Domain] - [Type] - [Owner] - [Number] - [Version]**
Where:
* [Project] is ITR for all IT2Rail documents;
* [Domain] is the relevant domain in the Cooperation Tool (WP, Task or project body);
* [Type] is one letter defining the document category;
* [Owner] is the trigram of the deliverable leader organisation;
* [Number] is an order number allocated by the Cooperation Tool when the document is first created;
* [Version] is the incremental version number, automatically incremented at each upload.
Example shown below:
<table>
<tr>
<th>
**Project**
**Code**
</th>
<th>
**Domain**
**(3-5 char.)**
</th>
<th>
**Type**
**(1 letter)**
</th>
<th>
**Owner (3 letters)**
</th>
<th>
**Number**
**(3 digits)**
</th>
<th>
**Version**
</th> </tr>
<tr>
<td>
ITR
</td>
<td>
WP2
</td>
<td>
D
</td>
<td>
UNI
</td>
<td>
001
</td>
<td>
01
</td> </tr> </table>
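As an illustration only (not part of the IT2Rail tooling described here), the convention above lends itself to automatic checking, e.g. before a file is uploaded. The following minimal Python sketch validates codes of this form; the exact character classes per section are assumptions inferred from the table.

```python
import re

# Pattern inferred from the convention above:
# [Project]-[Domain]-[Type]-[Owner]-[Number]-[Version]
CODE_PATTERN = re.compile(
    r"^ITR-"           # [Project]: fixed to ITR for all IT2Rail documents
    r"[A-Z0-9]{3,5}-"  # [Domain]: WP, Task or project body (3-5 chars, assumed alphanumeric)
    r"[A-Z]-"          # [Type]: one letter defining the document category
    r"[A-Z]{3}-"       # [Owner]: trigram of the deliverable leader organisation
    r"\d{3}-"          # [Number]: order number allocated by the Cooperation Tool
    r"\d{2}$"          # [Version]: incremental version number (two digits, per the example)
)

def is_valid_code(code: str) -> bool:
    """Return True if a document code matches the IT2Rail naming convention."""
    return CODE_PATTERN.match(code) is not None

assert is_valid_code("ITR-WP2-D-UNI-001-01")      # the example from the table above
assert not is_valid_code("ITR-WP2-DOC-UNI-1-1")   # wrong section lengths
```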
**2.4 IT2RAIL ARCHIVED DATA & SHIFT2RAIL**
The specific IT2Rail deliverables and all other related generated data are
fundamentally linked to the future planned Shift2Rail project activity.
The data requirements of this DMP have been developed with the objective of
providing data structures that are uniform, intelligible and not subject to
possible future ambiguous interpretation.
It is anticipated that the synergetic parallel working between the two
projects will be further enhanced by having data, in a format defined in
accordance with this DMP, available prior to the conclusion of the IT2Rail
project.
Data shall be specifically selected for archiving based on the criterion that
it is likely to be useful for future Shift2Rail activities.
During the life of IT2Rail, data extraction from the CT will be supported.
**3\. DMP OF WP1: INTEROPERABILITY FRAMEWORK**
**3.1 DATA SETS**
Existing data used in this WP include the following data sets:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Provided by**
</th> </tr>
<tr>
<td>
ITR-1.1
</td>
<td>
Wikidata knowledge graph
</td>
<td>
RDF,
accessed via SPARQL endpoint
</td>
<td>
unlimited
</td>
<td>
Wikidata (online)
</td> </tr>
<tr>
<td>
ITR-1.2
</td>
<td>
DBpedia knowledge graph
</td>
<td>
RDF,
accessed via SPARQL endpoint
</td>
<td>
unlimited
</td>
<td>
DBpedia (online)
</td> </tr>
<tr>
<td>
ITR-1.3
</td>
<td>
SNCF Rail Stations
</td>
<td>
CSV
</td>
<td>
422 KB
</td>
<td>
SNCF
</td> </tr>
<tr>
<td>
ITR-1.4
</td>
<td>
SNCF Routes
</td>
<td>
XML
</td>
<td>
29 KB
</td>
<td>
SNCF
</td> </tr>
<tr>
<td>
ITR-1.5
</td>
<td>
AMS Stations
</td>
<td>
XML
</td>
<td>
30 KB
</td>
<td>
Oltis Group
</td> </tr>
<tr>
<td>
ITR-1.6
</td>
<td>
AMS Connections
</td>
<td>
XML
</td>
<td>
3.2 MB
</td>
<td>
Oltis Group
</td> </tr>
<tr>
<td>
ITR-1.7
</td>
<td>
VBB Stops, Routes, Services
</td>
<td>
GTFS
</td>
<td>
54.5 MB
(compressed)
</td>
<td>
HaCon
</td> </tr>
<tr>
<td>
ITR-1.8
</td>
<td>
TMB (Madrid) Stops, Routes, Services
</td>
<td>
GTFS
</td>
<td>
23.3 MB
(compressed)
</td>
<td>
INDRA
</td> </tr>
<tr>
<td>
ITR-1.9
</td>
<td>
TMB (Barcelona) Stops, Routes, Services
</td>
<td>
GTFS
</td>
<td>
5.5 MB
(compressed)
</td>
<td>
INDRA
</td> </tr>
<tr>
<td>
ITR-1.10
</td>
<td>
VAO Stops, Routes, Services
</td>
<td>
GTFS
</td>
<td>
67.3 MB
(compressed)
</td>
<td>
HaCon
</td> </tr> </table>
**Table 1: Existing Data used in WP1**
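Several of the feeds above (ITR-1.7 to ITR-1.10) use GTFS, which packages plain CSV text files in a ZIP archive. As a hedged illustration (not project code), the following Python sketch lists the stops contained in such a feed; the file name is a placeholder, and the `stops.txt` file with its `stop_id` and `stop_name` columns comes from the public GTFS reference rather than from this DMP.

```python
import csv
import io
import zipfile

def read_gtfs_stops(feed_path: str) -> list[dict]:
    """Read stops.txt from a GTFS feed (a ZIP archive of CSV files)."""
    with zipfile.ZipFile(feed_path) as feed:
        with feed.open("stops.txt") as raw:
            # utf-8-sig tolerates the byte-order mark some feeds include
            text = io.TextIOWrapper(raw, encoding="utf-8-sig")
            return list(csv.DictReader(text))

# Placeholder file name; any of the GTFS data sets above would be read the same way.
for stop in read_gtfs_stops("vbb-gtfs.zip")[:5]:
    print(stop["stop_id"], stop["stop_name"])
```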
Data generated by this WP include the following data sets:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital**
**Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-1.11
</td>
<td>
Barcelona Network Statistics
</td>
<td>
XML
</td>
<td>
273 KB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.12
</td>
<td>
Madrid Cercanias Network
Statistics
</td>
<td>
XML
</td>
<td>
436 KB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.13
</td>
<td>
Madrid Bus Network Statistics
</td>
<td>
XML
</td>
<td>
8.2 MB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.14
</td>
<td>
Madrid Metro Ligero Network
Statistics
</td>
<td>
XML
</td>
<td>
97 KB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.15
</td>
<td>
Madrid Metro Network Statistics
</td>
<td>
XML
</td>
<td>
507 KB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.16
</td>
<td>
Berlin Network Statistics
</td>
<td>
XML
</td>
<td>
921 KB
</td>
<td>
HaCon
</td> </tr>
<tr>
<td>
ITR-1.17
</td>
<td>
AMS Network Statistics
</td>
<td>
XML
</td>
<td>
19 KB
</td>
<td>
Oltis Group
</td> </tr>
<tr>
<td>
ITR-1.18
</td>
<td>
IndraRail Network Statistics
</td>
<td>
XML
</td>
<td>
9 KB
</td>
<td>
Indra
</td> </tr>
<tr>
<td>
ITR-1.19
</td>
<td>
VAO (Wien) Network Statistics
</td>
<td>
XML
</td>
<td>
3.8 KB
</td>
<td>
HaCon
</td> </tr>
<tr>
<td>
ITR-1.20
</td>
<td>
SNCF Network Statistics
</td>
<td>
XML
</td>
<td>
7.4 KB
</td>
<td>
SNCF
</td> </tr>
<tr>
<td>
ITR-1.21
</td>
<td>
Trenitalia Network Statistics
</td>
<td>
XML
</td>
<td>
6 KB
</td>
<td>
Trenitalia
</td> </tr>
<tr>
<td>
ITR-1.22
</td>
<td>
It2Rail semantic graph
</td>
<td>
RDF
</td>
<td>
1.6 M triples
</td>
<td>
It2Rail (online)
</td> </tr>
<tr>
<td>
ITR-1.23
</td>
<td>
It2Rail ontology
</td>
<td>
OWL
</td>
<td>
11 K statements
</td>
<td>
It2Rail (online)
</td> </tr> </table>
**Table 2: Data Generated in WP1**
## 3.2 STANDARDS, METADATA AND QUALITY ISSUES
The data will be organised in databases and documented in a standardised way
that will be decipherable by all the participants of WP1.
## 3.3 DATA SHARING
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data sharing**
</th> </tr>
<tr>
<td>
ITR-1.11 to
ITR-1.21
</td>
<td>
Network statistics data sets on SVN repository at
https://svn.ws.dei.polimi.it/IT2Rail-deib/XSDschemas/NetworkStatistics
</td> </tr>
<tr>
<td>
ITR-1.22
</td>
<td>
It2Rail semantic graph accessible at the SPARQL access point
http://accessmanagementdemo.cloud:70/graphdb-workbench-free/sparql
</td> </tr>
<tr>
<td>
ITR-1.23
</td>
<td>
It2Rail ontology accessible at https://it2rail.ivi.fraunhofer.de/webprotege/
</td> </tr> </table>
### Table 3: Data Sharing in WP1
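ITR-1.1, ITR-1.2 and ITR-1.22 are exposed through SPARQL endpoints, so they can be queried over HTTP with the standard SPARQL protocol. The sketch below runs a deliberately generic example query against the public Wikidata endpoint; it is an illustration of the access pattern, not code from the project.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"  # public endpoint for ITR-1.1
QUERY = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"  # generic example query

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# The SPARQL JSON results format nests each variable binding under its name.
for binding in response.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```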
## 3.4 ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
ITR-1.11 to
ITR-1.21
</td>
<td>
Network statistics data sets on SVN repository at
https://svn.ws.dei.polimi.it/IT2Rail-deib/XSDschemas/NetworkStatistics
</td> </tr>
<tr>
<td>
ITR-1.22
</td>
<td>
It2Rail semantic graph accessible at the SPARQL access point
http://accessmanagementdemo.cloud:70/graphdb-workbench-free/sparql
</td> </tr>
<tr>
<td>
ITR-1.23
</td>
<td>
It2Rail ontology accessible at https://it2rail.ivi.fraunhofer.de/webprotege/
</td> </tr> </table>
**Table 4: Archiving and preservation of the data in WP1**
**4\. DMP OF WP2: TRAVEL SHOPPING**
**4.1 DATA TYPES**
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-2.1
</td>
<td>
Feeds for Indra’s Urban TSP with the planning data of the
urban transit in Madrid
(CRTM)
</td>
<td>
GTFS
</td>
<td>
NR
</td>
<td>
CRTM (Consorcio
Regional de
Transportes de
Madrid)
</td> </tr>
<tr>
<td>
ITR-2.2
</td>
<td>
Feeds for Indra’s Urban TSP with the planning data of the urban transit in
Barcelona
(TMB)
</td>
<td>
GTFS
</td>
<td>
NR
</td>
<td>
TMB (Transports
Metropolitans de
Barcelona)
</td> </tr> </table>
### Table 5: Existing Data used in WP2
Data generated in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-2.3
</td>
<td>
Rail Itineraries between an origin and a destination. Output fields: origin,
destination, date, duration, numTransfers, price, departureTime, arrivalTime,
travelEpisodes (date, duration, travelEpisodeId, trainCode, departureStation,
destinationStation, departureTime, arrivalTime).
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-2.4
</td>
<td>
Train availability to find an available **Seat** in a **Train** from an
**Origin Station** to a **destination Station** at a specific **date**.
Output fields: ResponseCode, ResponseDescription, TrainCode, ClassCode, Price,
CoachCode, SeatCode, DepartureTime, ArrivalTime, ContractId.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-2.5
</td>
<td>
Lock Inventory info to lock/book a seat in a Train from an Origin Station to a
destination Station at a specific date. Output fields: ResponseCode,
ResponseDescription, SeatId, PurchaseCode, BookingCode.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-2.6
</td>
<td>
GetRoutes information for Madrid travel episodes: Itineraries, Legs, Steps.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-2.7
</td>
<td>
Network Reference Resources
</td>
<td>
Amadeus ad hoc format
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-2.8
</td>
<td>
Itinerary offers
</td>
<td>
XML
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr> </table>
**Table 6: Data Generated in WP2**
**4.2 STANDARDS, METADATA AND QUALITY ISSUES**
The data will be organised in databases and documented in a standardised way
that will be decipherable by all the participants of WP2.
**4.3 DATA SHARING**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data sharing**
</th> </tr>
<tr>
<td>
ITR-2.1
</td>
<td>
Data has been obtained from the open data portal of the CRTM (
_http://datacrtm.opendata.arcgis.com/_ ) containing the GTFS files for Metro,
Buses, Coach,
Tram and Train in Madrid, and this information is imported into Indra’s
Urban TSP.
</td> </tr>
<tr>
<td>
ITR-2.2
</td>
<td>
Data has been obtained from TMB containing the GTFS files for Metro, Buses,
Coach, Tram and Train in Barcelona, and this information is imported into
Indra’s Urban TSP. Indra has received authorization from TMB to use them
specifically for project purposes.
</td> </tr>
<tr>
<td>
ITR-2.3
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.4
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.5
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.6
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.7
</td>
<td>
Data retrieved from the Networkgraph manager (WP1) through an XML endpoint
</td> </tr>
<tr>
<td>
ITR-2.8
</td>
<td>
Data retrieved dynamically from the Shopping Broker (WP1) through an XML
endpoint
</td> </tr> </table>
### Table 7: Data Sharing in WP2
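Most WP2 data sets are served as JSON through REST endpoints on Indra's server, but the endpoint URLs themselves are not published in this DMP. The sketch below therefore uses a hypothetical placeholder URL and assumes the endpoint returns a JSON list of itinerary objects; the field names are taken from the ITR-2.3 description in Table 6.

```python
import requests

# Hypothetical placeholder; the real Indra endpoint URLs are not listed in this DMP.
ITINERARY_ENDPOINT = "https://example.invalid/it2rail/itineraries"

params = {"origin": "Madrid", "destination": "Barcelona", "date": "2017-06-01"}
response = requests.get(ITINERARY_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# Field names as listed for ITR-2.3 in Table 6.
for itinerary in response.json():
    print(itinerary["departureTime"], itinerary["arrivalTime"],
          itinerary["numTransfers"], itinerary["price"])
```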
**4.4 ARCHIVING AND PRESERVATION**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
ITR-2.1
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.2
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.3
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.4
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.5
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.6
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-2.7
</td>
<td>
Network Reference Resources are stored on the Amadeus server. This data is
stored until the associated validity date is reached or until the Network
Reference Resources are refreshed with new data.
</td> </tr>
<tr>
<td>
ITR-2.8
</td>
<td>
Itinerary offer details are stored on the Amadeus server. This data is used
by the booking process (WP3) and its storage is temporary (1 week maximum).
</td> </tr> </table>
### Table 8: Archiving and preservation of the data in WP2
## 4.5 DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
ITR-2.1
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.2
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.3
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.4
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.5
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.6
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-2.7
</td>
<td>
Amadeus
</td>
<td>
Amadeus maintains Network Reference Resources data up-to-date
</td> </tr>
<tr>
<td>
ITR-2.8
</td>
<td>
Amadeus
</td>
<td>
Amadeus is in charge of the storage of the itinerary offers details
</td> </tr> </table>
**Table 9: Data Management Responsibilities in WP2**
**5\. DMP OF WP3: BOOKING & TICKETING**
**5.1 DATA TYPES**
No existing data types were used in this WP. Data generated in this WP include
the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-3.1
</td>
<td>
GetBooking with the booking information of a Seat on a Train from A to B on a
Date. Output fields: bookingCode, status, trainCode, origin, destination,
date, departureTime, arrivalTime, duration, price, coachCode, seatCode,
classCode, passengerName, passengerSurname, passengerId, numStops,
serviceList (name, code, description, price).
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-3.2
</td>
<td>
IssueToken with the Payload information of the token. Output fields:
ResponseCode, ResponseDescription, Payload.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-3.3
</td>
<td>
Booking data
</td>
<td>
XML
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-3.4
</td>
<td>
Confirmed booking data
</td>
<td>
XML
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr> </table>
### Table 10: Data Generated in WP3
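To make the ITR-3.1 field list easier to read, the sketch below models the GetBooking output as a Python data structure. Only the field names come from Table 10; all types are assumptions, since the DMP does not specify them.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One serviceList entry of a GetBooking response (field names from Table 10)."""
    name: str
    code: str
    description: str
    price: float  # type assumed

@dataclass
class Booking:
    """GetBooking output (ITR-3.1); names from Table 10, all types assumed."""
    bookingCode: str
    status: str
    trainCode: str
    origin: str
    destination: str
    date: str
    departureTime: str
    arrivalTime: str
    duration: str
    price: float
    coachCode: str
    seatCode: str
    classCode: str
    passengerName: str
    passengerSurname: str
    passengerId: str
    numStops: int
    serviceList: list[Service] = field(default_factory=list)
```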
**5.2 STANDARDS, METADATA AND QUALITY ISSUES**
The data will be organised in databases and documented in a standardised way
that will be decipherable by all the participants of WP3.
**5.3 DATA SHARING**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data sharing**
</th> </tr>
<tr>
<td>
ITR-3.1
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-3.2
</td>
<td>
Data retrieved through a REST endpoint on Indra’s server
</td> </tr>
<tr>
<td>
ITR-3.3
</td>
<td>
Data retrieved dynamically from the Booking Broker (WP1) through an XML
endpoint
</td> </tr>
<tr>
<td>
ITR-3.4
</td>
<td>
Data retrieved dynamically from the Issuance Broker (WP1) through an XML
endpoint
</td> </tr> </table>
### Table 11: Data Sharing in WP3
**5.4 ARCHIVING AND PRESERVATION**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
ITR-3.1
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-3.2
</td>
<td>
Data stored on Indra’s server
</td> </tr>
<tr>
<td>
ITR-3.3
</td>
<td>
Booking and confirmed booking details are stored on the Amadeus server. This
data is used by the booking and issuance orchestration and its storage is
temporary (1 week maximum).
</td> </tr> </table>
### Table 12: Archiving and preservation of the data in WP3
**5.5 DATA MANAGEMENT RESPONSIBILITIES**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
ITR-3.1
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-3.2
</td>
<td>
Leyre Merle
Javier Saralegui
Verónica González Pérez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-3.3
</td>
<td>
Amadeus
</td>
<td>
Amadeus is in charge of the collection of this data
</td> </tr>
<tr>
<td>
ITR-3.4
</td>
<td>
Amadeus
</td>
<td>
Amadeus is in charge of the collection of this data
</td> </tr> </table>
**Table 13: Data Management Responsibilities in WP3**
**6\. DMP OF WP5: TRAVEL COMPANION**
**6.1 DATA TYPES**
There is no pre-existing data at WP5 level; all data is either generated by
the user (account creation, preferences, credit cards…) or received from other
modules of the IT2Rail project (booked offers, etc.).
There is no database at WP5 level; all data are stored on Indra’s server and
received/sent through APIs.
Data generated or transiting through the TC Personal Application and TC Cloud
in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
User Identity data: Login, Password, UserIdtoken.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
User Preferences data: Preferred means of transportation, Preferred carrier,
Loyalty/Reduction/Payment card, PRM type, Class Seat, Trip Tracker Behavior.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
Entitlement data (allows the access of users to the Travel Companion
database): UserIdToken, User name, Media Type, Issue Date, Departure Time,
Departure, Arrival, Token Id, Payload State, Trip Units.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
Token data (allows the access of users to the Travel Companion database):
Result Code, Result Description, Token Id, Payload State, Trip Units.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
Booking data (allows accessing the information of the Booking Offer Item in
the Travel Companion Cloud Wallet): Context (retailer, Travel Shopper, Device
Info); Passenger (Functional Id, Code, Personal Info, Preference); Stop Place
(location); Travel Episode Endpoint (location); Travel Solution (departure,
arrival); Travel Episode (Departure, Arrival, Mileage); Transportation
Service (departure, arrival, Service Provider, Accessibility, Emission, Route
Link, Reference, Equipment, Customer FeedBack, Operating Partner Info,
Validating Partner Info); Booking (Booking Status, Booking Provider, Booking
element (itinerary Offer Item)); Confirmed Booking (Booking Status, Booking
Provider, Booking element (itinerary Offer Item)); Entitlement (tokens).
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
Payment data (contains data related to payment means, and access to payment
means): User Id Token, Credit card Id, Card Display Name, Card Number, Card
Validity End Month, Card Validity End Year, Card CVV, Credit Card Type Id.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr>
<tr>
<td>
ITR-5.1
</td>
<td>
Alert and Information messages (allow to receive and display different types
of messages to the user): Booked Offer Ids, Message Id, Message Title,
Message Types, Message Short Text, Message Full Text, Message Object, Message
Ask For an Alternative, Message Time.
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail
</td> </tr> </table>
### Table 14: Data Generated or transiting in WP5
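As an illustration of the JSON payloads listed in Table 14, a user-preferences record might look as follows. All key names and values below are invented examples; only the field groups come from the table.

```python
# Hypothetical ITR-5.1 User Preferences payload; keys and values are invented
# examples matching the field groups listed in Table 14.
user_preferences = {
    "preferredMeansOfTransportation": ["rail", "metro"],
    "preferredCarrier": "SNCF",
    "loyaltyReductionPaymentCards": ["card-0001"],
    "prmType": "none",                     # Passenger with Reduced Mobility type
    "classSeat": "second",
    "tripTrackerBehavior": "notify-on-delay",
}
```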
**7\. DMP OF WP6: BUSINESS ANALYTICS**
**7.1 DATA TYPES**
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-6.1
</td>
<td>
Current Weather Data
</td>
<td>
MongoDB
</td>
<td>
Size:
47.911.381 Byte
Record Count: 91.396 Record Size:
0,5 Kb
</td>
<td>
OpenWeatherMap
</td> </tr>
<tr>
<td>
ITR-6.2
</td>
<td>
Forecast Weather Data
</td>
<td>
MongoDB
</td>
<td>
Size:
20.446.103 Byte
Record Count: 35.848 Record Size:
0,6 Kb
</td>
<td>
OpenWeatherMap
</td> </tr>
<tr>
<td>
ITR-6.3
</td>
<td>
Itinerary Offers retrieved from the Mobility Request Manager
</td>
<td>
MongoDB
</td>
<td>
Size:
93.723.084 Byte
Record Count:
780
Record Size:
117,3 Kb
</td>
<td>
IT2Rail – WP2
</td> </tr>
<tr>
<td>
ITR-6.4
</td>
<td>
TC User Feedbacks regarding Travel Questionnaire
</td>
<td>
MySQL
</td>
<td>
Size:
824 Byte
Record Count:
249
Record Size:
96 Kb
</td>
<td>
IT2Rail – WP5
</td> </tr>
<tr>
<td>
ITR-6.5
</td>
<td>
ArrivalDelayEvent
</td>
<td>
MySQL
</td>
<td>
Size:
256 Byte
Record Count:
64
Record Size:
16 Kb
</td>
<td>
IT2Rail – WP4
</td> </tr>
<tr>
<td>
ITR-6.6
</td>
<td>
DepartureDelayEvent
</td>
<td>
MySQL
</td>
<td>
Size:
260 Byte
Record Count:
63
Record Size:
16 Kb
</td>
<td>
IT2Rail – WP4
</td> </tr>
<tr>
<td>
ITR-6.7
</td>
<td>
ArrivalRulesActivationRequest
</td>
<td>
MySQL
</td>
<td>
Size:
98 Byte
Record Count:
166
Record Size:
16 Kb
</td>
<td>
IT2Rail – WP4
</td> </tr>
<tr>
<td>
ITR-6.8
</td>
<td>
DepartureRulesActivationRequest
</td>
<td>
MySQL
</td>
<td>
Size:
92 Byte
Record Count:
177
Record Size:
16 Kb
</td>
<td>
IT2Rail – WP4
</td> </tr>
<tr>
<td>
ITR-6.9
</td>
<td>
RuleDeactivationRequest
</td>
<td>
MySQL
</td>
<td>
Size:
84 Byte
Record Count:
193
Record Size:
16 Kb
</td>
<td>
IT2Rail – WP4
</td> </tr>
<tr>
<td>
ITR-6.10
</td>
<td>
User feedbacks for sentiment analysis
</td>
<td>
File
</td>
<td>
5 MB
</td>
<td>
LDO-provided data
</td> </tr>
<tr>
<td>
ITR-6.11
</td>
<td>
Social Network Messages
</td>
<td>
MongoDB/Sparksee
</td>
<td>
NR
</td>
<td>
UPC (filtered Twitter feed)
</td> </tr>
<tr>
<td>
ITR-6.12
</td>
<td>
Train Station Air Quality Data
</td>
<td>
MongoDB
</td>
<td>
Size : 704 Kb
Record
Count : 9397
Record
Size:78 b
(avg)
</td>
<td>
CEA-provided data
</td> </tr>
<tr>
<td>
ITR-6.13
</td>
<td>
Travel Data Messages
</td>
<td>
MongoDB
</td>
<td>
Size : 25370 Kb
Record
Count : 67233
Record
Size :386 b
(avg)
</td>
<td>
CEA-provided data
</td> </tr>
<tr>
<td>
ITR-6.14
</td>
<td>
Data mining information
</td>
<td>
PostgreSQL
</td>
<td>
NR
</td>
<td>
Polimi-provided data
</td> </tr>
<tr>
<td>
ITR-6.15
</td>
<td>
Accesses by BA users to the IT2Rail BA web platform
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail – WP6
</td> </tr>
<tr>
<td>
ITR-6.16
</td>
<td>
Searches by BA users to the IT2Rail BA web platform
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail – WP6
</td> </tr> </table>
**Table 15: Existing Data used in WP6**
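ITR-6.1 and ITR-6.2 are OpenWeatherMap records stored in MongoDB. A hedged sketch of such a pipeline follows, using the public OpenWeatherMap current-weather endpoint together with the `requests` and `pymongo` packages; the API key, database name and collection name are placeholders rather than values from this DMP.

```python
import requests
from pymongo import MongoClient

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder; a real key is required
CURRENT_WEATHER_URL = "https://api.openweathermap.org/data/2.5/weather"

# Fetch one current-weather record through the OpenWeatherMap REST endpoint.
record = requests.get(
    CURRENT_WEATHER_URL,
    params={"q": "Madrid,ES", "appid": API_KEY},
    timeout=30,
).json()

# Store it in MongoDB; the database and collection names are invented for this sketch.
client = MongoClient("mongodb://localhost:27017")
client["it2rail_wp6"]["current_weather"].insert_one(record)
```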
Simulated data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-6.17
</td>
<td>
Happenings
</td>
<td>
MySQL
</td>
<td>
Size: 48.941
Byte
Record
Count : 133
Record
Size : 0,4
</td>
<td>
IT2Rail – WP6
</td> </tr>
<tr>
<td>
ITR-6.18
</td>
<td>
KPIs concerning Transport Systems
</td>
<td>
Pentaho
</td>
<td>
A 50Kbyte
file of
simulated data
</td>
<td>
IT2Rail – WP6
</td> </tr>
<tr>
<td>
ITR-6.19
</td>
<td>
KPIs concerning Booking & Ticketing of Travel Routes
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail – WP6
</td> </tr>
<tr>
<td>
ITR-6.20
</td>
<td>
KPIs concerning Travellers’
Preferences for Transport
Systems
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail – WP6
</td> </tr>
<tr>
<td>
ITR-6.21
</td>
<td>
KPIs concerning
Preferences of Travellers with Reduced Mobility
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail – WP6
</td> </tr> </table>
**Table 16: Simulated Data used in WP6**
Data generated in this WP include the following types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital**
**Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-6.22
</td>
<td>
KPIs based on User
Preferences from the TC
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.23
</td>
<td>
KPIs based on Trip Tracking Alternative Routes
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.24
</td>
<td>
KPIs based on Trip Tracking
Complex Event Processing
Messages
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.25
</td>
<td>
KPIs based on Social Network Messages
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.26
</td>
<td>
Calculation of parameters of Train Station Air Quality Data based on
Meteorological Data
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.27
</td>
<td>
Calculation of most Informative
Term from Travel data messages
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.28
</td>
<td>
Calculation of number of co-occurring terms in Travel data
messages
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.29
</td>
<td>
Calculation of a list of timelines of terms of interest from travel data
messages, given a metro line and a time window
</td>
<td>
JSON
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.30
</td>
<td>
Calculation of properties of preferred television programs in different
contexts
</td>
<td>
PostgreSQL
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.31
</td>
<td>
KPIs concerning accesses by BA users to the IT2Rail BA web platform
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr>
<tr>
<td>
ITR-6.32
</td>
<td>
KPIs concerning searches by BA users to the IT2Rail BA web platform
</td>
<td>
MongoDB embedded
within Sofia2 platform
</td>
<td>
NR
</td>
<td>
IT2Rail-WP6
</td> </tr> </table>
**Table 17: Data generated in WP6**
**7.2 STANDARDS, METADATA AND QUALITY ISSUES**
The data will be organised in databases and documented in a standardised way
that will be decipherable by all the participants of WP6.
**7.3 DATA SHARING**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description**
</th>
<th>
**Mode of Data Sharing**
</th> </tr>
<tr>
<td>
ITR-6.1
</td>
<td>
Current Weather data
</td>
<td>
OpenWeatherMap data retrieved through a REST endpoint and published to the
IT2Rail web
application or the mobile Travel
Companion
</td> </tr>
<tr>
<td>
ITR-6.2
</td>
<td>
Weather forecast data
</td>
<td>
OpenWeatherMap data retrieved through a REST endpoint and published to the
public IT2Rail web
application
</td> </tr>
<tr>
<td>
ITR-6.3
</td>
<td>
Itinerary Offers retrieved from the Mobility Request Manager
</td>
<td>
WP2 data retrieved dynamically from the Mobility Request Manager through a
REST endpoint and
saved on a WP6 MongoDB
database
</td> </tr>
<tr>
<td>
ITR-6.4
</td>
<td>
TC User Feedbacks regarding Travel Questionnaire
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.5
</td>
<td>
ArrivalDelayEvent
</td>
<td>
WP4 simulated train event data created and saved on a WP6
MySQL database for WP4-WP6 integration testing purposes
</td> </tr>
<tr>
<td>
ITR-6.6
</td>
<td>
DepartureDelayEvent
</td>
<td>
WP4 simulated train event data created and saved on a WP6
MySQL database for WP4-WP6 integration testing purposes
</td> </tr>
<tr>
<td>
ITR-6.7
</td>
<td>
ArrivalRulesActivationRequest
</td>
<td>
WP4 Travel Companion Trip
Tracking User Preferences retrieved from the WP4 database
and stored on a WP6 MySQL database for calculation of TT KPIs
</td> </tr>
<tr>
<td>
ITR-6.8
</td>
<td>
DepartureRulesActivationRequest
</td>
<td>
WP4 Travel Companion Trip
Tracking User Preferences retrieved from the WP4 database and stored on a WP6
MySQL database for calculation of TT KPIs
</td> </tr>
<tr>
<td>
ITR-6.9
</td>
<td>
RuleDeactivationRequest
</td>
<td>
WP4 Travel Companion Trip
Tracking User Preferences retrieved from the WP4 database and stored on a WP6
MySQL database for calculation of TT KPIs
</td> </tr>
<tr>
<td>
ITR-6.10
</td>
<td>
User feedbacks for sentiment analysis
</td>
<td>
No sharing; demo data used for testing purposes
</td> </tr>
<tr>
<td>
ITR-6.11
</td>
<td>
Social Network Messages
</td>
<td>
Data retrieved dynamically through the Twitter API, pre-processed to
conform to GDPR regulations (not
retrieving any personal information,
all the information is anonymised) and saved on a WP6 MongoDB
database for further calculation of
WP6 KPIs
</td> </tr>
<tr>
<td>
ITR-6.12
</td>
<td>
Train Station Air Quality Data
</td>
<td>
No sharing; demo data used for testing purposes
</td> </tr>
<tr>
<td>
ITR-6.13
</td>
<td>
Travel Data Messages
</td>
<td>
No sharing; demo data used for testing purposes
</td> </tr>
<tr>
<td>
ITR-6.14
</td>
<td>
Data mining information
</td>
<td>
No sharing; demo data used for testing purposes
</td> </tr>
<tr>
<td>
ITR-6.15
</td>
<td>
Accesses by BA users to the IT2Rail BA web platform
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.16
</td>
<td>
Searches by BA users to the IT2Rail BA web platform
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.17
</td>
<td>
Happenings data
</td>
<td>
Simulated happenings data stored on a WP6 MySQL database and
retrieved through a REST endpoint by the Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.18
</td>
<td>
KPIs for Transport Systems
</td>
<td>
Simulated transport systems data stored on a WP6 MySQL database and retrieved
through a REST endpoint by the Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.19
</td>
<td>
KPIs for Booking & Ticketing
</td>
<td>
WP3 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.20
</td>
<td>
KPIs for Preferences of Travellers with Reduced Mobility
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.21
</td>
<td>
KPIs on user feedbacks concerning Travel Questionnaire
</td>
<td>
KPIs calculated on Travel Companion Travel Questionnaire user feedbacks stored
on a WP6
MySQL database and retrieved through a REST endpoint by the
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.22
</td>
<td>
KPIs for Travellers’ Preferences
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.23
</td>
<td>
KPIs based on Trip Tracking Alternative Routes
</td>
<td>
WP4 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.24
</td>
<td>
KPIs based on Trip Tracking
Complex Event Processing
Messages
</td>
<td>
WP4 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.25
</td>
<td>
KPIs based on Social Network Messages
</td>
<td>
Information provided through a RESTful API, viewed on a local IT2Rail/CEA web
application.
</td> </tr>
<tr>
<td>
ITR-6.26
</td>
<td>
Calculation of parameters of Train
Station Air Quality Data based on
Meteorological Data
</td>
<td>
No sharing; viewed through a REST API on a local IT2Rail/CEA web application.
</td> </tr>
<tr>
<td>
ITR-6.27
</td>
<td>
Calculation of most Informative Term from Travel data messages
</td>
<td>
No sharing; viewed through a REST API on a local IT2Rail/CEA web application.
</td> </tr>
<tr>
<td>
ITR-6.28
</td>
<td>
Calculation of number of co-occurring terms in Travel data
messages
</td>
<td>
No sharing; viewed through a REST API on a local IT2Rail/CEA web application.
</td> </tr>
<tr>
<td>
ITR-6.29
</td>
<td>
Calculation of a list of timelines of terms of interest from travel data
messages, given a metro line and a time window
</td>
<td>
No sharing; viewed through a REST API on a local IT2Rail/CEA web application.
</td> </tr>
<tr>
<td>
ITR-6.30
</td>
<td>
Calculation of properties of preferred television programs in different
contexts
</td>
<td>
WP6 data retrieved through a Java application provided by Polimi. No sharing;
the mined rules are the data generated by the application but are invisible to
the end user who can only check and acknowledge that application behaviour has
changed due to the generated rules.
</td> </tr>
<tr>
<td>
ITR-6.31
</td>
<td>
KPIs concerning accesses by BA users to the IT2Rail BA web platform
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr>
<tr>
<td>
ITR-6.32
</td>
<td>
KPIs concerning searches by BA users to the IT2Rail BA web platform
</td>
<td>
WP5 data retrieved through a
REST endpoint and published to the IT2Rail web application or the mobile
Travel Companion
</td> </tr> </table>
### Table 18: Sharing of the data in WP6
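For ITR-6.11, Table 18 states that Twitter data is pre-processed so that no personal information is retained. The sketch below shows one generic way to pseudonymise such messages with a salted hash before storage; it is an illustration of the idea, not the project's documented method, and the message fields are assumed.

```python
import hashlib

def anonymise_message(message: dict, salt: str) -> dict:
    """Keep only non-personal fields; replace the author by a salted hash."""
    author_hash = hashlib.sha256((salt + message["user_id"]).encode()).hexdigest()
    return {
        "author": author_hash,         # pseudonymous; not reversible without the salt
        "text": message["text"],       # message content kept for KPI calculation
        "timestamp": message["timestamp"],
    }

msg = {"user_id": "12345", "text": "Train delayed at Sants",
       "timestamp": "2017-06-01T08:00:00Z"}
print(anonymise_message(msg, salt="keep-this-secret"))
```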
**7.4 ARCHIVING AND PRESERVATION**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th>
<th>
</th> </tr>
<tr>
<td>
ITR-6.1
</td>
<td>
Current Weather data
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.2
</td>
<td>
Weather forecast data
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.3
</td>
<td>
Itinerary Offers retrieved from the Mobility Request Manager
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.4
</td>
<td>
TC User Feedbacks regarding Travel Questionnaire
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.5
</td>
<td>
ArrivalDelayEvent
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.6
</td>
<td>
DepartureDelayEvent
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.7
</td>
<td>
ArrivalRulesActivationRequest
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.8
</td>
<td>
DepartureRulesActivationRequest
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-6.9
</td>
<td>
RuleDeactivationRequest
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.10
</td>
<td>
User feedbacks for sentiment analysis
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.11
</td>
<td>
Social Network Messages
</td>
<td>
Data stored temporarily (1 day) within the UPC server.
</td> </tr>
<tr>
<td>
ITR-
6.12
</td>
<td>
Train Station Air Quality Data
</td>
<td>
Data stored within IT2RAIL/CEA server.
</td> </tr>
<tr>
<td>
ITR-
6.13
</td>
<td>
Travel Data Messages
</td>
<td>
Data stored within IT2RAIL/CEA server.
</td> </tr>
<tr>
<td>
ITR-
6.14
</td>
<td>
Data mining information
</td>
<td>
Data stored within POLIMI server
</td> </tr>
<tr>
<td>
ITR-
6.15
</td>
<td>
Accesses by BA users to the IT2Rail BA web platform
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.16
</td>
<td>
Searches by BA users to the IT2Rail BA web platform
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.17
</td>
<td>
Happenings data
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.18
</td>
<td>
KPIs for Transport Systems
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.19
</td>
<td>
KPIs for Booking & Ticketing
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.20
</td>
<td>
KPIs for Preferences of Travellers with Reduced Mobility
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.21
</td>
<td>
KPIs on user feedbacks concerning Travel Questionnaire
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.22
</td>
<td>
KPIs for Travellers’ Preferences
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.23
</td>
<td>
KPIs based on Trip Tracking Alternative Routes
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.24
</td>
<td>
KPIs based on Trip Tracking Complex Event Processing Messages
</td>
<td>
Data stored within LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.25
</td>
<td>
KPIs based on Social Network Messages
</td>
<td>
Data stored anonymised within UPC Server
</td> </tr>
<tr>
<td>
ITR-
6.26
</td>
<td>
Calculation of parameters of Train Station Air Quality Data based on
Meteorological Data
</td>
<td>
No preservation. Computed on demand.
</td> </tr>
<tr>
<td>
ITR-
6.27
</td>
<td>
Calculation of most Informative Term from Travel data messages
</td>
<td>
No preservation. Computed on demand.
</td> </tr>
<tr>
<td>
ITR-
6.28
</td>
<td>
Calculation of number of co-occurring terms in Travel data messages
</td>
<td>
No preservation. Computed on demand.
</td> </tr>
<tr>
<td>
ITR-
6.29
</td>
<td>
Calculation of a list of timelines of terms of interest from travel data
messages, given a metro line and a time window
</td>
<td>
No preservation. Computed on demand.
</td> </tr>
<tr>
<td>
ITR-
6.30
</td>
<td>
Calculation of properties of preferred television programs in different
contexts
</td>
<td>
Data stored within POLIMI server
</td> </tr>
<tr>
<td>
ITR-
6.31
</td>
<td>
KPIs concerning accesses by BA users to the IT2Rail BA web platform
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr>
<tr>
<td>
ITR-
6.32
</td>
<td>
KPIs concerning searches by BA users to the IT2Rail BA web platform
</td>
<td>
Data stored within
Sofia2 Platform and
LEONARDO server
</td> </tr> </table>
**Table 19: Archiving and preservation of the data in WP6**
**7.5 DATA MANAGEMENT RESPONSIBILITIES**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data Description**
</th>
<th>
**Name of Data Manager**
</th>
<th>
**Description of Responsibilities**
</th> </tr>
<tr>
<td>
ITR-6.1
</td>
<td>
Current Weather data
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.2
</td>
<td>
Weather forecast data
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.3
</td>
<td>
Itinerary Offers retrieved from the Mobility Request Manager
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.4
</td>
<td>
TC User Feedbacks regarding Travel Questionnaire
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.5
</td>
<td>
ArrivalDelayEvent
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.6
</td>
<td>
DepartureDelayEvent
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.7
</td>
<td>
ArrivalRulesActivationRequest
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.8
</td>
<td>
DepartureRulesActivationRequest
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.9
</td>
<td>
RuleDeactivationRequest
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.10
</td>
<td>
User feedbacks for sentiment analysis
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.11
</td>
<td>
Social Network Messages
</td>
<td>
Jordi Urmeneta
Carlos Balufo
Josep Lluís Larriba
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.12
</td>
<td>
Train Station Air Quality Data
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.13
</td>
<td>
Travel Data Messages
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.14
</td>
<td>
Data mining information
</td>
<td>
Matteo Rossi
Elisa Quintarelli
Letizia Tanca
</td>
<td>
The data managers verify the availability of repositories storing data and
preferences.
</td> </tr>
<tr>
<td>
ITR-6.15
</td>
<td>
Accesses by BA users to the IT2Rail BA web platform
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.16
</td>
<td>
Searches by BA users to the IT2Rail BA web platform
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.17
</td>
<td>
Happenings data
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.18
</td>
<td>
KPIs for Transport Systems
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.19
</td>
<td>
KPIs for Booking & Ticketing
</td>
<td>
Habib Deriu
Javier Saralegui Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.20
</td>
<td>
KPIs for Preferences of Travellers with Reduced Mobility
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.21
</td>
<td>
KPIs on user feedbacks concerning Travel Questionnaire
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.22
</td>
<td>
KPIs for Travellers’ Preferences
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.23
</td>
<td>
KPIs based on Trip Tracking Alternative Routes
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.24
</td>
<td>
KPIs based on Trip Tracking
Complex Event Processing
Messages
</td>
<td>
Guido Mariotta
Catherine Minciotti
Massimo Fratini
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.25
</td>
<td>
KPIs based on Social Network Messages
</td>
<td>
Jordi Urmeneta
Carlos Balufo
Josep Lluís Larriba
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.26
</td>
<td>
Calculation of parameters of Train
Station Air Quality Data based on
Meteorological Data
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
No computations saved.
</td> </tr>
<tr>
<td>
ITR-6.27
</td>
<td>
Calculation of most Informative Term from Travel data messages
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
No computations saved.
</td> </tr>
<tr>
<td>
ITR-6.28
</td>
<td>
Calculation of number of co-occurring terms in Travel data
messages
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
No computations saved.
</td> </tr>
<td>
ITR-6.29
</td>
<td>
Calculation of a list of timelines of terms of interest from travel data
messages, given a metro line and a time window
</td>
<td>
Lorene Allano
Jacques-Henri Sublemontier
Fred Ngole Mboula
</td>
<td>
No computations saved.
</td> </tr>
<tr>
<td>
ITR-6.30
</td>
<td>
Calculation of properties of preferred television programs in different
contexts
</td>
<td>
Matteo Rossi
Elisa Quintarelli
Letizia Tanca
</td>
<td>
The data managers verify the availability of repositories storing data and
contextual preferences.
</td> </tr>
<tr>
<td>
ITR-6.31
</td>
<td>
KPIs concerning accesses by BA users to the IT2Rail BA web platform
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr>
<tr>
<td>
ITR-6.32
</td>
<td>
KPIs concerning searches by BA users to the IT2Rail BA web platform
</td>
<td>
Habib Deriu
Javier Saralegui
Sánchez
</td>
<td>
The data managers verify the availability of the repositories storing data.
</td> </tr> </table>
**Table 20: Data Management Responsibilities in WP6**
**8\. DMP OF WP8: DISSEMINATION**
**8.1 DATA TYPES**
Existing data used in this WP include the following data types:
<table>
<tr>
<th>
**Code**
</th>
<th>
**Description of Dataset/Digital Output**
</th>
<th>
**Units and Format**
</th>
<th>
**Size**
</th>
<th>
**Ownership**
</th> </tr>
<tr>
<td>
ITR-8.1
</td>
<td>
Images: Images and logos from partners participating in the project.
</td>
<td>
.eps, .ai,
.png, .jpeg
</td>
<td>
Variable
</td>
<td>
The owner gives permission to UNIFE to use images for dissemination
purposes of
IT2Rail.
</td> </tr>
<tr>
<td>
ITR-8.2
</td>
<td>
Database of Advisory Board: This database contains data such as name, e-mail,
company, telephone and field of expertise of the partners participating in the
Advisory Board.
</td>
<td>
.xls, .doc
</td>
<td>
≈ 14 people in the contact
list
</td>
<td>
The data will be kept in the UNIFE and UITP servers and is also included in
deliverable D8.9.
</td> </tr>
<tr>
<td>
ITR-8.3
</td>
<td>
Database of End Users Expert Group: This database contains data such as name,
e-mail, company, telephone and field of expertise of the partners
participating in the Expert Group.
</td>
<td>
.xls, .doc
</td>
<td>
≈ 15 people in the contact
list
</td>
<td>
The data will be kept in the UNIFE and UITP servers and is also included in
deliverable D8.8.
</td> </tr>
<tr>
<td>
ITR-8.4
</td>
<td>
Database of Ethical Privacy and Security Expert Group: This database contains
data such as name, e-mail, company, telephone and field of expertise of the
partners participating in the Expert Group.
</td>
<td>
.xls, .doc
</td>
<td>
≈ 5 people in the contact
list
</td>
<td>
The data will be kept in the UNIFE and UITP servers and is also included in
deliverable D8.8.
</td> </tr> </table>
### Table 21: Existing Data used in WP8
Please consult the UITP’s Privacy Policy (http://www.uitp.org/privacy-policy)
to find out more about how UITP handles personal data.
**8.2 STANDARDS, METADATA AND QUALITY ISSUES**
The pictures and logos are stored in common formats: vector image formats and
picture compression standards.
**8.3 DATA SHARING**
<table>
<tr>
<th>
**Code**
</th>
<th>
**Data sharing**
</th> </tr>
<tr>
<td>
ITR-8.1
</td>
<td>
The data will not be shared but some of the image database will be used for
dissemination purposes and therefore will become public.
</td> </tr>
<tr>
<td>
ITR-8.2
</td>
<td>
This data is confidential and only the consortium partners will have access to
it.
</td> </tr>
<tr>
<td>
ITR-8.3
</td>
<td>
This data is confidential and only the consortium partners will have access to
it.
</td> </tr>
<tr>
<td>
ITR-8.4
</td>
<td>
This data is confidential and only the consortium partners will have access to
it.
</td> </tr> </table>
**Table 22: Data Sharing in WP8**
## ARCHIVING AND PRESERVATION
<table>
<tr>
<th>
**Code**
</th>
<th>
**Archiving and preservation**
</th> </tr>
<tr>
<td>
ITR-8.1
</td>
<td>
Data will be stored on the UNIFE server which is regularly backed up.
</td> </tr>
<tr>
<td>
ITR-8.2, 8.3 and 8.4
</td>
<td>
Data will be stored on the UITP server which is regularly backed up.
</td> </tr> </table>
**Table 23: Archiving and preservation of the data in WP8**
## DATA MANAGEMENT RESPONSIBILITIES
<table>
<tr>
<th>
**Code**
</th>
<th>
**Name of Responsible**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
ITR-8.1
</td>
<td>
Stefanos Gogos (UNIFE)
</td>
<td>
Update and maintenance of the data
</td> </tr>
<tr>
<td>
ITR-8.2, 8.3 and
8.4
</td>
<td>
Cristina Hernandez (UITP,
Project manager)
</td>
<td>
Update and maintenance of the data
related to the project
</td> </tr> </table>
**Table 24: Data Management Responsibilities in WP8**
# CONCLUSIONS
The purpose of the Data Management Plan is to support the data management life
cycle for all data that will be collected, processed or generated by the
IT2Rail project. The DMP is expected to be updated up to the final review, to
fine-tune it to the data actually generated and to the uses identified by the
consortium, since not all data or potential uses can be anticipated at the outset.
# 1\. Summary
This document provides the PEGASUS data management plan, version 1. The data
management plan outlines how the research data collected or generated will be
handled during the PEGASUS project, describes which standards and methodology
will be followed for data collection and generation, and whether and how data
will be shared. This document aims to provide PEGASUS partners with a
consolidated plan for the data management policy, following the template
provided by the European Commission in the Participant Portal 1 . This document
is the first version of the data management plan, delivered 6 months after the
beginning of the PEGASUS project. It will be updated during the lifecycle of
the project.
## 2\. PEGASUS project
The PEGASUS project embodies plasma-driven, controllable design of matter at
the atomic scale. To this end, PEGASUS' ultimate goal is to create a highly
efficient, catalyst- and harmful-substance-free novel plasma method, along with
a proof-of-concept PEGASUS device, for large-scale direct synthesis of
N-graphene (NG), as well as N-graphene/metal-oxide nanocomposites and unique
vertical N-graphene arrays grown on metal substrates, via breakthrough research
on plasma-enabled singular assembly pathways. By doing so, a disruptive and
highly competitive alternative to conventional lengthy, multi-step routes will
emerge, based on the mastering of plasma-exclusive mechanisms to control the
amount and localization of energy and matter at atomic scales, spurring a new
European manufacturing/processing platform.

The PEGASUS framework is uniquely positioned in the strategic domain of 2D
materials via the promotion of plasma methods as a key enabling technology for
highly controllable and "green" assembly of atom-thick hybrid nanostructures,
and by replacing long-established materials with new cost-effective,
higher-performance ones. The synergy between plasma physics and mechanical,
electrochemical and hi-tech engineering expertise will be the driving force
behind the innovative approach pursued by this project, spanning from
fundamental knowledge to application prospects. This interdisciplinary project
is developed under the coordination of Dr. Elena Tatarova and her team of the
Plasma Engineering Laboratory at IPFN, joining a consortium that involves
IST-ID (Portugal), Centre National de la Recherche Scientifique (France),
Institut Jozef Stefan (Slovenia), Kiel University (Germany), Sofia University
(Bulgaria) and Charge2C-Newcap Lda (Portugal).
PEGASUS' ambitious purpose is to translate the unique properties of plasmas
into extraordinary material characteristics and to create novel forms of
matter by using a multitude of specific plasma mechanisms to control the
energy and matter transfer processes at nanoscales. The targeted outstanding
electrochemical performance of the nano-architectures considered will allow
their use as base electrode elements in a proof-of-concept supercapacitor
device. An overview of the PEGASUS research method is given in Figure 1. The
project is divided into four work packages (WPs):
* WP1 - Plasma-enabled novel method for single-step, large-scale assembly of freestanding NG and NG/MO composites;
* WP2 - Plasma-enabled assembly of networks of vertically aligned N-graphene sheets and its hybrids affixed on metal surfaces;
* WP3 - Design of electrochemical capacitors based on different electrode materials and proof-of-concept prototypes;
* WP4 - Management.
Figure 1. Overview of the PEGASUS research method
# 3\. Data Summary
The data management plan presented here describes the types of data that will
be generated or gathered during the project, the standards that will be used,
the ways in which the data will be exploited and shared for verification or
re-use, and how the data will be preserved.
Several types of data will be collected and analysed during the research in
the project. Data created during the project concerns plasma-produced
nanostructures, as well as the characterization of such structures and of the
plasma reactors, as obtained from several diagnostic techniques that include
experimental and modelling tools. A description of the kind of data generated
in each WP is given in the following subsections.
## 3.1. Implementation of WP1
The data generated in this WP concerns self-standing N-graphene and hybrid
NG/MnO2/Fe2O3/SnO2 nanosheets synthesized at large scale with prescribed
structural qualities and properties, via the development and use of effective
plasma means to control the energy and particle transfer mechanisms. An
overview of how data will be obtained and transmitted between production and
analysis of the nanostructures is given in Figure 2. This work will result in
the elaboration of protocols for large-scale fabrication of the targeted
nanostructures. Essentially, the following types of data are collected:
* Data on the design of plasma reactors and plasma environment to synthesize the targeted nanostructures. Includes optimization of the plasma reactors through simulations and experiments, with feedback from the structural analysis of the nanostructures;
* Data on the structural qualities and properties of the synthesized nanostructures. Includes physical and chemical analysis, using techniques such as SEM (EDS), FTIR, Raman spectroscopy, XRD, XPS, NEXAFS, TEM/HRTEM;
* Protocols for large-scale fabrication of NG and NG/MO composites;
* Data on unique hybrid nanostructures, wrapping the synthesized nanostructures with conductive polymers.
Figure 2. Overview of the data flow, from the synthesis of targeted
nanostructures to their analysis and elaboration of protocols for large-scale
fabrication and hybrid nanostructures synthesis.
## 3.2. Implementation of WP2
The data generated in this WP concerns vertically aligned N-graphene
nanosheets, or their hybrids, standing on wafer/metal substrates, including Ni
foams. Selective synthesis of such nanostructures is achieved through
controllable plasma-based assembly. An overview of the flow of material from
the assembly stage to the characterization of the obtained nanostructures is
given in Figure 3. Essentially, the following types of data are collected:
* Data on the design of plasma reactors and plasma environment to synthesize the vertical N-graphene nanostructures. Includes optimization of the plasma reactors through simulations and experiments, with feedback from the structural analysis of the nanostructures;
* Data on the characterization of N-graphene structures on metal foam, decorated with MOs nanoparticles and wrapped with conductive polymers. This includes data generated from diagnostic techniques such as SEM, TEM, XRD, XPS, NEXAFS.
Figure 3. Overview of the data flow, from the plasma-based synthesis of the
targeted nanostructures to their characterization.
## 3.3. Implementation of WP3
This WP focuses on the design of electrochemical capacitors based on different
electrode materials and proof-of-concept prototypes. C2C will assess the
potential of an array of nanomaterials for use as active materials for
electrodes of electrochemical capacitors, and will build proof-of-concept
devices with boosted performances (this includes analysis of VA curves,
specific capacity, chemical stability, charge-discharge profiles, etc.). The
materials targeted are: (i) NG sheets, (ii) NG sheets decorated with different
metal oxides (MnO2, Fe2O3, SnO2) including GMOP, and (iii) vertical NG and
NG/MOs including GMOP grown on Ni foams. IST-ID will provide samples with
distinct material properties to C2C for that purpose. A material transfer
agreement between IST-ID and C2C has been signed. Delivery protocols to
exchange data, as illustrated in Figure 4, will be provided.
Figure 4. Data transfer between IST-ID and C2C to assess the potential of
arrays of nano-materials for use as active materials for electrodes in
electrochemical capacitors, and to build real-scale proof-of-concept devices
with boosted performances.
## 3.4. Implementation of WP4
This WP refers to the management of the project and is carried out along the
whole lifecycle of PEGASUS. Data generated in this WP comprises all the
documentation necessary for management. This includes supporting documents for
review meetings, progress reports, website creation and management, delivery
protocols, dissemination and exploitation plans, the data management plan, and
technical and scientific reports regarding achieved results.
# 4\. FAIR - findable, accessible, interoperable and reusable data
## 4.1. Making data openly accessible
Each beneficiary must ensure open access (free of charge online access for any
user) to all peer-reviewed scientific publications relating to its results. In
particular, it must:
1. as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications. Moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.
2. ensure open access to the deposited publication — via the repository — at the latest: (i) on publication, if an electronic version is available for free via the publisher, or (ii) within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.
3. ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication.
The bibliographic metadata must be in a standard format and must include all
of the following (a minimal example is sketched after the list):
* the terms “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable, and
* a persistent identifier.
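For illustration, a minimal sketch of such a bibliographic metadata record, expressed here as a Python dictionary; the field names, the date and the DOI are illustrative assumptions, not a schema mandated by the Grant Agreement:

```python
# Sketch of a bibliographic metadata record satisfying the requirements
# listed above. Field names, date and DOI are illustrative placeholders.
publication_metadata = {
    "funding": ["European Union (EU)", "Horizon 2020"],
    "action": {
        "name": "<full action name as in the Grant Agreement>",
        "acronym": "PEGASUS",
        "grant_number": "766894",
    },
    "publication_date": "2019-01-15",  # example date
    "embargo_months": 0,               # length of embargo period, if applicable
    "persistent_identifier": "https://doi.org/10.xxxx/example",  # e.g. a DOI
}
```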
Regarding the generated digital research data, the beneficiaries must:
1. deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:
1. the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines of the GA.
2. provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
As an exception, the beneficiaries do not have to ensure open access to
specific parts of their research data if the achievement of the action's main
objective would be jeopardized by making those specific parts of the research
data openly accessible.
## 4.2. Dissemination and exploitation of results
At least 8 patent registrations concerning the main advances on NG and
NG/MnO2/Fe2O3/SnO2 hybrid nanostructures and their 3D networks are expected
during the project's course. Moreover, a patent portfolio associated with the
PEGASUS device for large-scale production of NG, addressing the process, the
microwave plasma reactor, customization of the process, etc., will be created.
Likewise, a patent portfolio for the targeted supercapacitor proof-of-concept
device will be formed (at least 2 patents). The management of intellectual
property and access rights to results will strictly follow the rules in the
Consortium Agreement (CA), which addresses the liability and confidentiality
arrangements between partners, background identification, and foreground and
exploitation. Any patent produced will be filed by the parties involved
according to the rules defined in the CA, and all publications will follow the
rules set by the same agreement. Part of the results, related to the
fundamental issues considered, will be published or presented in prestigious
international journals (at least 30 articles) and conferences. Open access
will be provided to the resulting articles.
## 4.3. Communication activities
The activities to promote the project include publicity via local media and
newspapers; presentations at conferences, workshops, etc.; and the creation of
movies/cartoons/posters to be distributed among participating institutes and
related social/scientific communities. A project webpage has been created
where the ongoing progress and related results are disseminated while
preserving patent-associated restrictions, thus increasing the visibility of
the project.
## 4.4. Making data interoperable
Data produced in the project will be exchanged and re-used between
beneficiaries. Data and metadata will follow standard vocabularies for each
dataset, allowing inter-disciplinary interoperability between all the
institutions involved in the project. Figure 5 illustrates how synthesized
materials and information on their properties will be exchanged between
partners. Delivery protocols and materials transfer agreements will be defined
between the beneficiaries in accordance with the CA.
Figure 5. Overview of the flow of material between partners.
## 4.5. Increase data re-use (through clarifying licences)
The ownership of intellectual property rights is defined by the Consortium
Agreement and Grant Agreement of the project. Access to results and materials
will be provided subject to acceptance of the applicable terms and conditions
of use. Materials generated under the project will be disseminated in
accordance with the Consortium Agreement.
# 5\. Allocation of resources
Each partner must appoint a person responsible for data management, who will
take responsibility for the correct storage, management, sharing and security
of the datasets. The data will be managed and handled by collaborators of the
project. The knowledge generated by the project among partners is managed in
two ways, depending on the data source:
1. The non-sensitive data will be organized into a repository that will contain all the knowledge produced by the project partners. Restricted access is foreseen for the knowledge that will be used for exploitation purposes; open access for all the other knowledge.
2. To manage and store the sensitive data obtained, all partners of PEGASUS must comply with relevant European and national regulations, as well as with the standards defined in the Consortium Agreement and Grant Agreement.
# 6\. Data security
Each beneficiary must examine the possibility of protecting its results and
must adequately protect them — for an appropriate period and with appropriate
territorial coverage — if:
1. the results can reasonably be expected to be commercially or industrially exploited and
2. protecting them is possible, reasonable and justified (given the circumstances).
When deciding on protection, the beneficiary must consider its own legitimate
interests and the legitimate interests (especially commercial) of the other
beneficiaries.
If a beneficiary intends not to protect its results, to stop protecting them
or not seek an extension of protection, the Agency may — under certain
conditions (see Article 26.4 of the Grant Agreement) — assume ownership to
ensure their (continued) protection.
Applications for protection of results (including patent applications) filed
by or on behalf of a beneficiary must — unless the Agency requests or agrees
otherwise or unless it is impossible — include the following:
“The project leading to this application has received funding from the
European Union’s Horizon 2020 research and innovation programme under grant
agreement No 766894”.
# 7\. Ethics and security of nanomaterials collection and storage
Requirements in force for handling nanomaterials:

1. Limit access to areas where the processes are being carried out. Only trained personnel may be allowed to work in these areas while nanomaterials are being used.
2. Training procedures and operational procedures should be implemented before beginning work on nanomaterials.
3. The nanoparticles will be stored in specific packaging, labelled and kept in their own dedicated place.
4. Regular cleaning of countertops, floors and other surfaces will be implemented and the cleaning schedule documented. The cleaning agents will be compatible with the liquid in which the nanoparticles are suspended and with the nanoparticles themselves.
5. Eating and drinking in the laboratory and controlled areas is prohibited.

Reception of nanomaterials, rules in force:

1. There is an appropriate place for the reception of nanomaterials;
2. Ensure that the packaging is not damaged (torn, punctured, contaminated, etc.);
3. Use collective and individual protection equipment appropriate to the type of nanomaterial and the work environment;
4. Have procedures in place and technical/operational staff trained to deal with the risks of each type of nanomaterial handled.

Collection of nanoparticle samples:

1. Use containers that are easy to handle;
2. Use appropriate collective and personal protective equipment.

Storage of nanomaterials, rules in force:

1. Have adequate facilities and packaging systems compatible with the type of nanomaterial used and collected (controlled humidity and oxygen content and/or inert atmosphere, thermal control, insulation from sources of excessive heat, sparks or flames);
2. Use appropriate packaging in order to minimize electrostatic charges;
3. Use earthed electrical systems;
4. Use utensils/tools that do not produce sparks;
5. Use appropriate collective and personal protective equipment (including clothing) compatible with the physical-chemical nature of the nanomaterials handled and their forms (dispersions in liquids or solid media).

Adequacy of personal protective equipment:

Depending on the specificity and efficiency of each type of PPE (personal
protective equipment) or collective protection equipment (CPE), the following
summarized information is only generic, serving as a general guide, and should
not be extrapolated to specific cases. Clothing and masks with filters:

* Lab coat
* Appropriate footwear
* Disposable cap
* Safety glasses with side shields
* Mask, of the type manufactured by 3M.

In case of nanomaterial release or exposure, specific safety measures will be
applied, namely:

1. In case of release, the use of personal protective equipment is always mandatory, including: laboratory coat; proper footwear; safety glasses with side shields; P2/P3 masks according to EN 149:2001 (manufactured by 3M). The released nanomaterial, typically only a few milligrams, must not be blown off; it must be cleaned up with a wet cloth and sealed in a container for disposal.
2. In case of exposure or accident, affected persons must be moved out of the dangerous area. A physician must be consulted, and the relevant safety data sheet must be shown to the doctor. The emergency number is 112.
Graphene- and other nanomaterial-containing waste will be treated in
accordance with an environmentally friendly waste management hierarchy: first,
the amount of nanomaterial waste produced will be reduced to the minimum
necessary; second, we will reuse as much nanomaterial as possible, recovering
and recycling; finally, all non-recyclable nanomaterials will be disposed of
and treated as "hazardous waste" and delivered to a licensed disposal company.
Our institute has a contract for the "Collection and disposal of hazardous
waste" with the company "EGEO - Tecnologia e Ambiente SA", licensed by the
Portuguese "Agência Portuguesa do Ambiente". Furthermore, our institute has a
"Hygiene and Health Safety Center" with a long list of hazardous and
non-hazardous waste management procedures.
# 8\. History of changes
This document is the first version of the data management plan, delivered 6
months after the beginning of the PEGASUS project. Therefore, it will be
updated during the lifecycle of the project and the changes will be described
in the following table:
<table>
<tr>
<th colspan="3">
History of changes
</th> </tr>
<tr>
<th>
Version
</th>
<th>
Publication date
</th>
<th>
Change
</th> </tr>
<tr>
<td>
1.0
</td>
<td>
30.04.2018
</td>
<td>
Initial version
</td> </tr> </table>
# Introduction and scope
This report, the Data Management Plan (DMP) version 2, describes the data
management life cycle for all data sets that have been or will be collected,
processed or generated by the MixedEmotions project. It outlines how research
data will be handled during the project, and after it is completed, describing
what data is collected, processed or generated and what methodology and
standards are followed, whether and how this data will be shared and/or made
available, and how it will be curated and preserved.
As the DMP is not a fixed document, it evolves and gains more precision and
substance during the lifespan of the project; it is therefore necessarily
incomplete at this stage. A final Data Management Report will be available by
the end of the project.
# Dataset identification and listing
To allow for more context and a better understanding of the purposes of the
different data collection activities, the datasets are listed according to the
consortium partner that collects the data.
## Paradigma Tecnologico datasets
### DW content (text)
**Data set reference and name** : DW texts and videos
**Data set description:** Texts and videos obtained from Deutsche Welle API
regarding selected brands
**Standards and metadata:** Text, video, brand, date, language
**Data sharing:** Restricted availability through DW
**Archiving and preservation (including storage and backup):** Preserved in a
“sources” index in the platform elasticSearch.
**Contact:** [email protected]
### Twitter tweets (text)
**Data set reference and name:** Twitter tweets
**Data set description:** Tweets extracted from Twitter regarding selected brands
**Standards and metadata:** Text, brand, date, language, account.
**Data sharing:** None. There are legal issues with sharing this data.
**Archiving and preservation (including storage and backup):** Preserved in a
“sources” index in the platform elasticSearch.
**Contact:** [email protected]
### Processed Results
**Data set reference and name:** Processed results
**Data set description:** Once input data is processed (e.g., split into
sentences, with emotion, polarity and terms added), the results are saved to
serve as the basis for the analytics.
**Standards and metadata:** Sentence, brand, date, language, account,
original_text, emotions, polarity, concepts, topics, source, media.
**Data sharing:** No sharing, for commercial reasons.
**Archiving and preservation (including storage and backup):** Preserved in a
“results” index in the platform elasticSearch.
**Contact:** [email protected]
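As a concrete illustration of this storage scheme, the following minimal sketch indexes one processed result into the "results" index. It assumes the official `elasticsearch` Python client (v8-style API); the host, the document shape (derived from the metadata fields listed above) and all values are illustrative, not the project's actual ingestion code.

```python
from elasticsearch import Elasticsearch

# Minimal sketch: index one processed result into the "results" index.
# Host, field names and values are illustrative placeholders based on
# the metadata fields listed above.
es = Elasticsearch("http://localhost:9200")

doc = {
    "sentence": "The new phone camera is excellent.",
    "brand": "ExampleBrand",
    "date": "2016-05-01T12:00:00",
    "language": "en",
    "account": "some_user",
    "original_text": "The new phone camera is excellent. Battery is OK.",
    "emotions": ["joy"],
    "polarity": "positive",
    "concepts": ["camera"],
    "topics": ["consumer electronics"],
    "source": "twitter",
    "media": [],
}

es.index(index="results", document=doc)
```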
## NUIG datasets
### Review Suggestion Dataset
**Data set reference and name:** Review Suggestion Dataset
**Data set description:** Manually labeled sentences from hotel and
electronics reviews, which were in turn obtained from existing academic
datasets. Each sentence is labeled as ‘suggestion’ or ‘non-suggestion’,
depending on whether the sentence conveys a suggestion. Data labelling is
performed using paid crowdsourcing platforms.
**Standards and metadata:** sentiment polarity, review id, sentence id, tripadvisor hotel id
**Data sharing:** Publicly available.
**Archiving and preservation (including storage and backup):** TBD
**Link:** _http://server1.nlp.insight-centre.org/sapnadatasets/EMNLP2015/_
**Contact:** [email protected]
### Tweet Suggestion Dataset
**Data set reference and name:** Tweet Suggestion Dataset
**Data set description:** Manually labeled tweets, downloaded using the
Twitter API. Each tweet is labeled as ‘suggestion’ or ‘non-suggestion’,
depending on whether it conveys a suggestion. Data labelling is performed
using paid crowdsourcing platforms. Due to the restrictions imposed by
Twitter, only the tweet id and the manual label are available in the
downloadable version of the dataset; re-users must re-fetch the tweet texts
themselves (see the sketch after this entry).
**Standards and metadata:** tweet id
**Data sharing:** Publicly available.
**Archiving and preservation (including storage and backup):** TBD
**Link:** _http://server1.nlp.insight-
centre.org/sapnadatasets/starsem2016/tweets/_
**Contact:** [email protected]
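A minimal rehydration sketch for such an ID-only distribution, assuming a Twitter API v2 bearer token and the documented tweet-lookup endpoint (availability depends on Twitter's current terms; the token and IDs below are placeholders):

```python
import requests

# Minimal rehydration sketch: fetch tweet texts for a list of tweet IDs
# via the Twitter API v2 tweet lookup endpoint. Token and IDs are
# placeholders; deleted or protected tweets will simply be missing.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
tweet_ids = ["1234567890123456789", "9876543210987654321"]

resp = requests.get(
    "https://api.twitter.com/2/tweets",
    params={"ids": ",".join(tweet_ids)},
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"])
```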
### Forum Suggestion Dataset
**Data set reference and name:** Forum Suggestion Dataset
**Data set description:** Manually labeled sentences of posts from a
suggestion forum, scraped from the website _www.uservoice.com_. Each sentence
is labeled as ‘suggestion’ or ‘non-suggestion’, depending on whether it
conveys a suggestion. Data labelling is performed by the project members.
**Standards and metadata:** Post id, sentence id, software name.
**Data sharing:** Publicly available.
**Archiving and preservation (including storage and backup):** TBD
**Link:** _http://server1.nlp.insight-
centre.org/sapnadatasets/starsem2016/SuggForum/_
**Contact:** [email protected]
### VAPUI Annotated Tweets (crowd sourced)
**Data set reference and name:** VAPUI Annotated Tweets
**Data set description:** Planned data set containing manually labeled tweet
comparisons. Tweets will be compared along up to 5 emotional dimensions:
Valence (Pleasure / Positivity), Arousal (Activation), Potency (Dominance /
Power), Unpredictability (Expectation / Novelty / Surprise) and emotional
Intensity. Each annotation is a comparison between two tweets along one of the
emotion dimensions. Annotators will be drawn from the CrowdFlower platform.
Data on the time taken to perform the annotations will also be collected.
The data is expected to contain 10000 tweet comparisons over 2000 tweets.
**Standards and metadata:** tweet ids, data collection methodology
**Data sharing:** Publicly available only for academic research.
**Archiving and preservation (including storage and backup):** TBD
**Contact:** [email protected]
### VAPUI Annotated Tweets (pilot study)
**Data set reference and name:** VAPUI Annotated Tweets (pilot study data)
**Data set description:** Manually labeled tweet comparisons. Tweets were
compared along each of 5 emotional dimensions: Valence (Pleasure /
Positivity), Arousal (Activation), Potency (Dominance / Power),
Unpredictability (Expectation / Novelty / Surprise) and emotional Intensity.
Annotations were collected for each of two annotation schemes: comparing pairs
of tweets and choosing the best/worst tweets from 4. Annotators were drawn
from MixedEmotions collaborators and their contacts. Data on the time taken to
perform the annotations was also collected. The data contains 30 annotated
tweet pairs and 18 annotated tweet quads.
**Standards and metadata:** tweet ids, data collection methodology
**Data sharing:** Publicly available only for academic research.
**Archiving and preservation (including storage and backup):** TBD
**Contact:** [email protected]
### Ekman Annotated Emoji Tweets
**Data set reference and name:** Ekman Annotated Emoji Tweets
**Data set description:** Tweets containing emotive emoji labelled with
Ekman’s six basic emotions (Joy, Surprise, Sadness, Anger, Disgust, Fear).
Emoji were removed from the tweets before annotation. Annotators were drawn
from MixedEmotions collaborators and their contacts. Data on the time taken to
perform the annotations was also collected. The data contains 366 annotated
tweets.
**Standards and metadata:** tweet ids, selected emotive emoji, data collection methodology
**Data sharing:** Publicly available only for academic research.
**Archiving and preservation (including storage and backup):** TBD
**Contact:** [email protected]
## UPM datasets
### Twitter relations
**Data set reference and name:** Twitter relations
**Data set description:** Relationships between Twitter accounts, i.e., the
followers and followings of accounts that tweeted about our selected brands.
**Standards and metadata:** RDF.
**Data sharing:** No sharing. There are legal issues with sharing this data.
**Archiving and preservation (including storage and backup):** In a graph database, which could be Elasticsearch with the Siren plugin.
**Contact:** [email protected]
## ExpertSystem datasets
### ES Dataset based on the enrichment of DW English Dataset
**Data set reference and name:** ES Dataset based on the enrichment of DW
Dataset
**Data set description:** All articles published by Deutsche Welle in recent
years in English. Metadata describing audio, video and image material
published by Deutsche Welle in recent years in all DW languages. This dataset
is semantically enriched by ES modules, so the final result is a dataset with
all the previous information plus, for each article or A/V item, a set of
metadata (topic, main lemmas, people, and places).
**Standards and metadata:** IPTC topic, main lemmas, people, places
**Data sharing:** The data is available in the platform elasticSearch, access
to which was described to the consortium in a separate document. The data is
only to be used by consortium members but can be used for scientific
publications with DW’s permission. The reason is that the rights associated
with DW’s material vary from item to item, depending on the material’s origin.
**Archiving and preservation (including storage and backup):** The data remains available on the ME Platform elasticSearch after the end of the project.
**Contact:** [email protected]
### Twitter trend related to DW A/V
**Data set reference and name:** Twitter trend related to DW A/V
**Data set description:** Tweets extracted from Twitter selected through
keywords related to DW A/V
**Standards and metadata:** IPTC topic, main lemmas, people, places, sentiment and emotions
**Data sharing:** The data is available in the platform elasticSearch, access to which was described to the consortium in a separate document.
**Archiving and preservation (including storage and backup):** Preserved in an
index in the platform elasticSearch.
**Contact:** [email protected]
### Twitter trend related to DW English’s RSS feed
**Data set reference and name:** Twitter trend related to DW English’s RSS
feed
**Data set description:** Tweets extracted from Twitter selected through
keywords related to DW
English’s RSS feed
**Standards and metadata:** IPTC topic, main lemmas, people, places, sentiment and emotions
**Data sharing:** The data is available in the platform elasticSearch, access to which was described to the consortium in a separate document.
**Archiving and preservation (including storage and backup):** Preserved in an
index in the platform elasticSearch.
**Contact:** [email protected]
## Phonexia datasets
### CallCenter1
**Data set reference and name:** CallCenter1
**Data set description:** Czech telephone speech (PCM 16b linear, 8kHz wav)
from a call center in an outbound campaign. Agent and client are recorded in
separate channels. Importantly, only the client’s channel is available. Speech
is manually annotated with emotions at the segment level: arousal and valence
values of -1, 0 or 1 were assigned to every speech segment. These labels can
be mapped to the emotions ‘anger’, ‘joy’, ‘sadness’ or ‘neutral’ (see the
sketch after this entry). For more details see the table below. This data is
used for training the emotion recognition system in Pilot 3.
**Standards and metadata:** call_id, segment_start, segment_end, emotion, arousal, valence
**Data sharing:** The NDA does not allow us to share this data or to name the call center.
**Archiving and preservation:** Phonexia servers.
**Contact:** [email protected]
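The exact arousal/valence-to-emotion mapping is not spelled out above; the following is a minimal sketch under the usual circumplex reading (aroused/negative = anger, positive valence = joy, calm/negative = sadness, neutral otherwise), which is an assumption for illustration, not Phonexia's documented scheme:

```python
# Hypothetical mapping of (arousal, valence) labels in {-1, 0, 1} to the
# four emotions named above; the actual mapping used for the annotations
# may differ.
def segment_emotion(arousal: int, valence: int) -> str:
    if arousal == 1 and valence == -1:
        return "anger"
    if valence == 1:
        return "joy"
    if arousal == -1 and valence == -1:
        return "sadness"
    return "neutral"

assert segment_emotion(1, -1) == "anger"
assert segment_emotion(0, 1) == "joy"
assert segment_emotion(0, 0) == "neutral"
```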
### CallCenter2
**Data set reference and name:** CallCenter2
**Data set description:** Czech telephone speech (PCM 8b linear, 8kHz wav)
from a call center in an outbound campaign. Both agent and client are recorded
in a single channel. We manually tagged regions where the operator and client
speak. Emotion annotation for the client’s segments was done in the same way
as for CallCenter1. For more details see the table below. This data is used
for training the emotion recognition system in Pilot 3.
**Standards and metadata:** call_id, speaker_id, segment_start, segment_end,
emotion, arousal, valence
**Data sharing:** NDA does not allow us to share this data or name the call
center.
**Archiving and preservation:** Phonexia servers.
**Contact:** [email protected]
<table>
<tr>
<th rowspan="2">
name
</th>
<th rowspan="2">
duration [h:mm:ss]
</th>
<th colspan="3">
arousal
</th>
<th colspan="3">
valence
</th> </tr>
<tr>
<th>
-1
</th>
<th>
0
</th>
<th>
1
</th>
<th>
-1
</th>
<th>
0
</th>
<th>
1
</th> </tr>
<tr>
<td>
Call Center1
</td>
<td>
2:09:16
</td>
<td>
0:05:42
</td>
<td>
1:18:42
</td>
<td>
0:44:53
</td>
<td>
0:25:49
</td>
<td>
1:18:42
</td>
<td>
0:24:45
</td> </tr>
<tr>
<td>
Call Center2
</td>
<td>
1:21:41
</td>
<td>
0:07:10
</td>
<td>
0:39:33
</td>
<td>
0:34:58
</td>
<td>
0:33:13
</td>
<td>
0:39:33
</td>
<td>
0:08:55
</td> </tr>
<tr>
<td>
All
</td>
<td>
3:30:57
</td>
<td>
0:12:51
</td>
<td>
1:58:15
</td>
<td>
1:19:51
</td>
<td>
0:59:02
</td>
<td>
1:58:15
</td>
<td>
0:33:40
</td> </tr> </table>
Table 1. _Distribution of arousal and valence values in the Czech call center data used._
## DW datasets
### DW Article Data and AV Metadata
**Data set reference and name:** DW Article Data and AV Metadata
**Data set description:** All articles published by Deutsche Welle over recent
years in all DW languages. Metadata describing audio, video and image material
published by Deutsche Welle in recent years in all DW languages. This data is
mainly used for the recommendation engine and editorial dashboard developed in
Pilot 1.
**Standards and metadata:** JSON format defined by Deutsche Welle.
**Data sharing:** The data is available via an API, access to which was
described to the consortium in a separate document. The data is only to be
used by consortium members but can be used for scientific publications with
DW’s permission. The reason is that the rights associated with DW’s material
vary from item to item, depending on the material’s origin.
**Archiving and preservation (including storage and backup):** The data remains available through the API after the end of the project.
**Contact:** [email protected]
## BUT datasets
### Brno Deceit dataset
**Data set reference and name:** Brno Deceit dataset
**Data set description:** The dataset will consist of recordings of interview-
style sessions in which the interviewees provide true and deceitful statements
based on preceding instructions. Part of the dataset is being recorded in a
lab with a Kinect V2 RGB-D camera. Larger number of recordings will be
recorded via a web application in unconstrained environments and with
unconstrained equipment. Upper body video and audio is recorded in both
instances. The Kinect V2 provides Full HD video, depth images and audio. The
quality of the web application recordings varies due to the equipment used.
**Standards and metadata:** Truth/deceit labels for individual statements.
**Data sharing:** The dataset will be publicly available via http download for research purposes.
**Archiving and preservation (including storage and backup):** The data will remain stored and downloadable from BUT servers after the end of the project.
**Contact:** [email protected]
## UP datasets
### AV+EC dataset
**Data set reference and name:** AVEC (or AV+EC)
**Data set description:** The dataset consists of continuous emotion
annotations from 27 participants, each with 5 minutes of recorded data. The
recorded modalities are audio (speech), video, and physiological signals, and
the data is useful for multimodal continuous emotion recognition. The
annotations are in terms of arousal and valence. This database was used for
the Audio Visual Emotion Challenge (AVEC) in 2015 and 2016. For more
information please refer to _http://arxiv.org/abs/1605.01600_ .
**Standards and metadata:** ARFF
**Data sharing:** As part of the challenge, participants can download the
data, but not the annotations of the test partition.
**Archiving and preservation (including storage and backup):** Data is stored
on a server at the University of Passau and will stay there for the AVEC
challenges of the coming years.
**Contact person:** Fabien Ringeval (Fabien.Ringeval(at)univ-grenoble-
alpes.fr)
**Challenge URL:** _http://sspnet.eu/avec2016/_
**Contact:** [email protected]
# Conclusions
We provided a summary of the data sets collected, generated and/or enriched
across modalities: DW news text and A/V data, call center audio data, Twitter
social media data, video data for deceit analysis, and multimedia data
collected and curated in the context of the AVEC challenge.
curated through automatic enrichment and manual annotation and will be made
available publicly where possible and appropriate, as indicated in each
section above.
**Executive summary**
This deliverable presents the Data Management Plan of the Next-Lab project.
Data in Next-Lab can be broadly divided in four categories: (1) platform
content data, (2) platform usage data, (3) activity data and student output
data, and (4) feedback data.
**Platform content data** mainly consists of data created by users on the
Next-Lab sharing platform (Golabz) and the Next-Lab authoring platform
(Graasp), such as Spaces, documents, links, discussions, etc. This data is
essential for the Graasp and Golabz services to work. This data is mostly
linked to login credentials (name, email, password) and possibly user
profiles. User names are accessible to anyone on the platform, but emails and
passwords are kept private.
**Platform usage data** consists of general Google Analytics traces on Graasp
and Golabz. This data is used to provide the European Commission with evidence
of impact. This data is not linked to identifiers such as email or names.
**Activity data and student output data** consists of activity traces of
teachers and students, as well as of student productions (e.g., a concept map,
a pdf report) inside an Inquiry Learning Space (ILS). Activity traces are used
to provide feedback through teacher dashboards (e.g., Kibana, teacher
Analytics apps) and student Learning Analytics apps. Student output data can
be linked to a Nickname or be anonymous, depending on the settings of the ILS.
Activity traces in an ILS are only recorded if the AngeLA learning analytics
angel is present as a member of the ILS. If AngeLA is removed, no activity
traces are recorded in the ILS.
**Feedback data** consists of data generated by the Go-Lab Community and Event
interactions on Graasp, participatory design (PD) activities, help desk
support activities and impact evaluation activities. The interaction of
teachers and project partners through the Go-Lab Community and Event spaces is
similar to other content data on Graasp with the exception of registration
data that community and event members fill in. This form can contain their
emails which - if so specified by users - are visible to space owners.
Helpdesk support data and PD data deal in general with issues raised by users.
This data is mainly used to better understand the needs of users and to fine
tune the Next-Lab services to fit these needs. Furthermore, the data is also
used to provide evidence to the European Commission on the performance of the
project.
In order to ensure data preservation, the Next-Lab ecosystem runs on cutting
edge infrastructure with full backup strategies. Data access is closely
monitored (details are provided in this deliverable) to mitigate data security
risks. Selected anonymized data and analytics will be extracted from our
database for reporting to the European Commission or for publication in
scientific venues. In the spirit of the open science movement, such data will
be shared under Creative Commons CC-BY-NC.
# Introduction
This deliverable describes the data management plan for Next-Lab and the
issues related to the collection, the exploitation, and the storage of data.
The objectives of data management are threefold:
1. Ensuring access to research data (open science) that can be used in studies conducted by Next-Lab partners in the framework of investigations related to the Next-Lab Innovation Action.
2. Assessing the qualitative and quantitative impact of Next-Lab for the European Commission.
3. Enforcing the European data protection rules 1 (to be implemented by May 2018), which are bringing additional restrictions on collecting, storing, exploiting, and disclosing data to ensure data protection and security. In a nutshell, these rules are mainly about increasing user awareness about what data is tracked, who has access to it, and what is done with it; requesting informed consent to data collection and exploitation; as well as accessing and controlling their data, being able to correct inaccuracies and delete them.
These objectives are complementary but also contradictory at times. For
instance, on the one hand, Objectives 1 and 2 argue for the tracking and
storage of as much data as possible including content data, activity data,
user opinions and feedback. These objectives, but especially Objective 1, also
argue to make the data freely and publicly available. Objective 3, on the
other hand, constrains collection and usage to predefined purposes and limits
the dissemination to predefined stakeholders.
To better understand these dimensions, a typical scenario illustrating the
usage of the Go-Lab 2 ecosystem, as promoted in the Next-Lab Innovation
Action, is detailed below.
## Usage scenario
The typical usage scenario illustrating the data management elements of Next-
Lab is the following. A teacher (from any country) discovers an interesting
educational resource (e.g., an online lab) on the golabz.eu sharing platform
(henceforth Golabz) which can be pursued as follows:
1. The teacher can freely use this online resource alone with his or her students without providing any identification on the Golabz sharing platform.
2. If the teacher wants to personalize (configure, embed) this resource, (s)he needs to create an account on the Graasp authoring platform (graasp.eu). A full name, an email address, and an (encrypted) password are requested. The email and password are kept as credentials for further access.
3. With the Graasp account, the teacher can personalize the resource and share it as a single standalone Web page with selected students (typically, the students of one of her or his classes) using a secret URL. External Web applications, resources or services can be freely integrated by the teacher in the open educational resource (OER), which is referred to as an online inquiry learning space (ILS).
4. Two dimensions can be configured by the teacher before sharing the ILS with the selected students:
1. activity tracking can be enabled or disabled by inviting or not a virtual learning analytics agent (AngeLA) explicitly represented as a member of the space;
2. access for students can be set as anonymous, nickname only, or nickname and password.
5. Activity traces and student outputs are kept in the space where they are created under the full control of the teacher.
6. Learning analytics visualization can be freely enabled by the teacher for self- awareness and reflection.
## Roadmap
Broadly the data collected in Next-Lab can be divided into four categories,
which guide the structure of this deliverable. Section 2 presents _platform
content data_ , which consists mainly of data created by teachers on Graasp
and Golabz. Section 3 discusses _platform usage data_ , which consists of
Google Analytics traces on Graasp and Golabz used to provide evidence of
impact to the European Commission. Section 4 presents _activity data and
student output data_ , which consists of teachers’ activity traces and
students’ activity traces, as well as content produced by students in ILSs.
Section 5 discusses _feedback data_ , which consists mainly of surveys,
participatory design data and data from the interactions on the Next-Lab
helpdesk. Finally, Section 6 then discusses the data preservation issues
before Section 7 wraps up with a conclusion.
# Platform Content Data
Platform content data includes all user data stored by the software components
used in Next-Lab. These include the Sharing Platform (golabz.eu) and the
Authoring Platform (graasp.eu).
**Graasp** content data contains user data and data generated and uploaded by
users. When signing up, users (typically teachers) provide email, full name,
and password (as shown in Figure 1); the password is saved in an encrypted
format in the database. Data generated by users can contain anything from text
to binary files. The content data in Graasp is organised in spaces, which can
be described as online folders with permissions. A space has a list of members
(owners, editors, viewers) and can contain a subspace hierarchy, links,
documents, discussions and apps. Each one of these items also contains a
description and all contain associated metadata (e.g., timestamps, creator id,
file size). Users can also populate their profile in Graasp, which contains
their usernames, possibly a picture, and a description. The database also
stores the nicknames and sometimes passwords of users (typically students) who
logged in through the standalone (student) view. Teachers are informed that to
preserve anonymity, students should not use their real names as nicknames and
they should change nicknames frequently (but not within one ILS or they will
lose their data). Space owners can delete any content from the space. Once
deleted, no copy of the data is kept on the server.
**Figure 1. Graasp Sign Up dialogue.**
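The space structure described above can be pictured with the following minimal sketch; it is a conceptual model for illustration only, not Graasp's actual schema or API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Conceptual sketch of a Graasp space: members with roles, contained
# items with associated metadata, and a subspace hierarchy. All names
# and values are illustrative.
@dataclass
class Item:
    kind: str             # "link", "document", "discussion" or "app"
    name: str
    creator_id: str
    created_at: datetime
    size_bytes: int = 0   # associated metadata, e.g. file size

@dataclass
class Space:
    name: str
    members: Dict[str, str] = field(default_factory=dict)  # user_id -> "owner" | "editor" | "viewer"
    items: List[Item] = field(default_factory=list)
    subspaces: List["Space"] = field(default_factory=list)

ils = Space(name="Physics ILS", members={"teacher-1": "owner"})
ils.items.append(Item("document", "lab-instructions.pdf", "teacher-1",
                      datetime(2017, 6, 1), size_bytes=24_000))
```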
**Golabz** receives user data from Graasp when a user (typically a teacher)
logs in to Graasp from Golabz or when a user publishes an ILS from Graasp to
Golabz. These data include: username, email, and Graasp user-ID. Golabz does
not receive any data of the students. For the consortium members and external
online lab and app providers, the accounts are created by the system
administrator from the project consortium (Golabz administrator). These
accounts contain email, username, and password (which is changed by the user
when logged in for the first time).
## Platform Content Data Consent
At the platform content data level, for all those users obliged to enter
personal data (i.e., typically teachers), an online consent form is used to
provide information before users sign up to the platform. The users have at
their disposal the description of terms and conditions in Graasp, which is the
central place where users sign up (see Figure 2).
**Figure 2. Consent form informing users when they sign up on Graasp.**
## Platform Content Data Storage
**Graasp** related data is stored in a secured data center on the EPFL campus
in Lausanne, Switzerland. This data is backed up every day on a NAS provided
by the data center.
**Golabz** data (incl. all content saved in Golabz and its metadata provided
by online lab and app owners, like name and description of the software,
screenshots, etc.) is stored at a HostEurope server (
_www.hosteurope.de/en/Server/Virtual-Server/_ ; Enterprise tariff). The
virtual server is hosted in datadock in Strasbourg, which fully complies with
all quality and safety standards of Germany. HostEurope makes an automatic
daily backup of the data; it is also possible to create snapshots to determine
dates of backups and restorations. The data is also regularly saved locally at
IMC AG, Saarbrücken, Germany. HostEurope assures an average availability of
its servers of 99.9%. Using monitoring features, it is possible to supervise
the running of the services and ports.
## Platform Content Data Access
**Graasp** . The data in Graasp, like many cloud services, can either be
accessed through regular usage by platform users or through database query by
platform administrators.
* Regular usage access
* _Private space data:_ data uploaded to a space can be accessed by any members of that space with the adequate access rights (owner, contributor, viewer).
* _Public space data:_ data located in spaces set to _public_ are accessible to anyone online.
* _User profile data:_ User profiles are public and accessible to anyone, but user emails are not accessible 3 .
* _Student data:_ Data uploaded by students through the Standalone View of an ILS are accessible by space members (typically teachers).
* Database query access
* The Graasp database can only be accessed by the Graasp platform managers (as of June 2017, Alex Wild, André Nogueira, Andrii Vozniuk, and Juan Carlos Farah), WP2 leader (Maria Jesus Rodriguez Triana) and the deputy coordinator (Denis Gillet). All these people are under EPFL contract and have to comply with the EPFL data management policy guaranteeing confidentiality. They lose their access if they leave EPFL.
**Golabz** data can be accessed by the Golabz platform managers (Evgenij
Myasnikov, Diana Dikke). All these people are under IMC contract and have to
comply with the IMC data management policy guaranteeing confidentiality. They
lose their access if they leave IMC.
## Platform Content Data Usage
The platform content data is stored primarily to allow the platform to
function (i.e., user profile, content and activity traces are stored in order
to allow users to exploit their personal and shared spaces, and to provide
them with analytics and recommendations). It is also used by WP1 to provide
analytics to partners, ambassadors and the European Commission about the
project impact. The current script extracting analytics from the Graasp.eu
database lists the following information, organized by tabs:
* _Users per day_ : Date, number of standalone users (students), number of users until this date, min, max, average, mean.
* _Users per country_ : Country, number of registered users (teachers), number of creators (having created inquiry learning spaces), and number of potential 4 implementers (having created inquiry learning spaces used by a certain number of students).
* _Long tail_ : Number of inquiry learning spaces versus their number of standalone users.
* _Evolution per month_ : Number of registered users, number of standalone users, number of inquiry learning spaces (existing, created, co-authored, implemented with more than 5 or 10 students).
* _Co-authoring_ : number of created, implemented and published ILSs that were co-authored by teachers, or by teachers and Next-Lab members.
* _Implemented inquiry learning spaces_ : Space ID, creation date, author category (project or external), space type, space language, published or not on the public repository (golabz.eu), number of copies, number of owners, number of editors, number of viewers.
* _User list_ : Anonymized user ID (different from the internal user ID stored in the Graasp.eu database; see the anonymization sketch below), country, registration date, account used (Facebook, Google+ or Graasp), language, number of ILSs created, number of standalone users.
* _Apps and labs_ : Number of times each app/lab was embedded in an ILS, created and implemented ILSs where the app/lab was embedded, users who embedded the app/lab, in general, in their ILSs and, in particular, in the potentially implemented ones.
These anonymized analytics are only accessible by the project partners (as an
excel file) for the duration of the project. The raw data (see Section 2.3)
exploited to produce these analytics are not shared with project partners or
anyone else.
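A minimal sketch of the kind of anonymized extraction described above (this is not the actual Graasp script): internal user IDs are replaced by salted hashes so the shared file cannot be joined back to the database, and the export follows the platform-plus-date naming convention.

```python
import csv
import hashlib
from datetime import date

SALT = "project-secret-salt"  # kept private, never included in the export

def anonymize(internal_id: str) -> str:
    """Replace an internal user ID with a salted, truncated hash."""
    return hashlib.sha256((SALT + internal_id).encode()).hexdigest()[:12]

# Illustrative rows standing in for a database query result.
users = [
    {"id": "u-001", "country": "CH", "registered": "2017-03-02", "ils_created": 4},
    {"id": "u-002", "country": "EE", "registered": "2017-04-18", "ils_created": 1},
]

filename = f"Graasp.eu_{date.today().strftime('%d_%m_%Y')}.csv"
with open(filename, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["anon_id", "country", "registered", "ils_created"])
    for u in users:
        writer.writerow([anonymize(u["id"]), u["country"],
                         u["registered"], u["ils_created"]])
```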
# Platform Usage Data
Platform usage data is _anonymous_ interaction data collected through
mainstream tracking services installed on Graasp and Golabz i.e., Google
Analytics. The data is anonymous in the sense that it is not linked to
specific user identifiers.
**Google Analytics:** usage data on Google Analytics contains anonymized
website traffic and navigation on Golabz and Graasp. Figure 3 shows the type
of live information shown with Google analytics, whereas Figure 4 shows
longitudinal data (here from January 1st to May 28th 2017). The Google
Analytics data is stored by Google and complies with Google's own terms
& conditions. 5 No explicit consent is given by people who do not sign up.
However, the terms and conditions inform users that platform usage data is
collected. Furthermore, users can block Google Analytics through browser
plug-ins, such as Ghostery. 6
**Figure 3. Live Google Analytics data on platform usage**
**Figure 4. Google Analytics data on platform usage over time**
The platform usage data is stored in order to provide usage statistics to the
European Commission, partners and ambassadors by WP1. More concretely, Google
Analytics helps us monitor project metrics such as the number of visits per
platform, the number and length of sessions and the bounce rate, as well as
the number of users per country and city in a given period of time. Live data
is also used in order to avoid making changes on the server when users are
online. The Google Analytics accounts are only accessible by the platform
managers (EPFL for graasp.eu and IMC for golabz.eu) and by the Next-Lab
Coordinator for the duration of the project. No one else can request and get
access to these accounts. However, synthetic graphs are shared
with the project partners and with the European Commission to show them the
overall impact of the project.
# Activity Data and Student Output Data
Activity data is interaction data linked to specific user identifiers in the
platform and used for a twofold purpose: first, to provide awareness and
reflection services back to users through learning analytics apps and activity
dashboards; and second, to keep track of the current status of the students'
work so that, when they open a new session, they can continue working on their
ILSs (provided they use the same nickname). This activity data is also linked
to platform content; more specifically, learning analytics apps can be
linked to student outputs.
**Graasp user activity** (mainly teachers) contains actions performed by users
inside a space, such as accessing an item, creating an item, deleting or
modifying an item.
**ILS user activity** (mainly students) contains activity traces of standalone
users. This activity relates to actions in the different inquiry learning apps
and labs that support user tracking. The apps and labs can be both producers
and consumers of activity data (e.g., to show which students are online).
Note that a central feature
of the Go-Lab ecosystem is that it allows users (teachers) to aggregate third
party apps and labs into their learning spaces. How these apps and labs handle
their data is not the responsibility of the Next-Lab consortium. Nevertheless,
apps added to a space can only access data from other items in that space if
the AngeLA activity tracker is enabled (teachers can disable it).
**AngeLA.** AngeLA, the learning analytics angel (agent), is a visual
representation of the learning analytics tracking mechanism as a member of an
ILS. If AngeLA is present in a space, then student activity will be tracked
and made available to LA apps. If AngeLA is not present, student activity is
not tracked. 6 This implies that some apps will not work up to their full
potential. Note that currently AngeLA sends activity traces to both the Vault
7 and a Learning Analytics backend located at the University of Duisburg-Essen
in Germany. This architecture is a leftover from the Go-Lab project, in which
Duisburg-Essen was a partner. We are currently in the process of moving this
backend onto the Graasp infrastructure. Finally, adding AngeLA to a space is
opt-out for now, but we will change this to opt-in in 2018 to comply with the
new EU privacy regulations.
**ILS user output** (mainly students) contains student productions, such as
reports that they might have uploaded, or concept maps or other artefacts that
they might have created within apps and labs. Again, apps and labs can both be
producers and consumers of ILS user outputs.
## Activity Data and Student Output Data Consent
Activity data is encompassed by the terms and conditions Graasp users
(teachers) agree to when signing up. ILS users (students) do not sign up and
thus do not formally provide their consent. However, as with other learning
artefacts, the teachers are in charge of making choices for their students.
## Activity Data and Student Output Data Storage
Activity data is stored in Graasp. Student output data is stored in the Vault
in the ILS (also in Graasp). Traces are digital log data stored in the
Graasp.eu database in the form of a timestamped and contextualized (i.e.,
associated with a dedicated inquiry learning space) triplet of actor, verb and
object, which does not need calibration. The vocabulary (standard
ActivityStreams and xAPI verbs) is embedded in the platform, so there is no
risk of vocabulary misuse.
An example of the raw data is: on “Date” (timestamp), “Anonymized_Actor_ID”
(actor) “downloaded” (verb) “Object_ID” (object) in “Space_ID” (context).
An example of associated analytics would be: “Space_ID” has been accessed by
“Access_Count” users of type “User_Type” from “Country_ID” in the period
“Period_Descriptor”. Both build on open Web standards typically exploited to
provide learning analytics.
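To make the structure concrete, such a trace could be represented as in the
following sketch; the field names are illustrative, not the actual Graasp
storage schema.

```python
from datetime import datetime, timezone

# A minimal, illustrative ActivityStreams/xAPI-style trace as described above.
trace = {
    "timestamp": datetime.now(timezone.utc).isoformat(),  # "Date"
    "actor": "Anonymized_Actor_ID",  # nickname-based, never a real identity
    "verb": "downloaded",            # from the embedded standard vocabulary
    "object": "Object_ID",           # the item acted upon
    "context": "Space_ID",           # the dedicated inquiry learning space
}
print(trace)
```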
Data related to students and their activities are stored by design in an
anonymous form in the Graasp.eu database: the actual identity of the students
is never requested, and they are identified only by nicknames, which they can
change at their convenience and which can differ in each inquiry learning
space created to support a different supervised classroom activity or learning
session.
In future work, we plan to allow users to select their own learning record
repository to store activity traces and learning outcomes (outputs).
Additionally, we aim to provide the functionality to validate these records
using blockchain or other cryptographic technologies. In addition to
guaranteeing privacy, this will ensure that users cannot tamper with the
contents of their learning repositories or falsify their educational records.
## Activity Data and Student Output Data Access
**Graasp user activity for a specific space** can be accessed and visualized
through the Kibana dashboard by space owners (teachers) as shown in Figure 5.
**Figure 5. Kibana dashboard showing activity in a space.**
Teachers, as owners of an ILS, have access to **student data** located in the
Vault.
The **Graasp database,** where all activities are currently stored, can be
accessed by the Graasp platform managers (see Section 2.3).
## Activity Data and Student Output Data Usage
The trace data extracted weekly or monthly from the Graasp database, either
for assessing the impact of the project (as requested by the European
Commission) or for scientific investigations, will be anonymized during the
extraction process and delivered as an Excel or CSV file (around 3 MB per
month). File names include the platform name and the date, i.e.,
Graasp.eu_Day_Month_Year.xlsx (or .csv). The current script extracting data
from the Graasp.eu database lists the following information, organized by tabs
(see the sketch after this list):
_Graasp space activity_ : Space ID, creation date, number of actions for each
Activity Streams or xAPI verb.
_Labs and apps usage_ : Number of times an app or a lab on Golabz has been
added to inquiry learning spaces.
_Published ILS_ : Inquiry learning spaces which have been created in Graasp and
published by their owner(s) on the golabz.eu repository.
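As an illustration of the extraction and anonymization step described above,
the following sketch applies randomized identifiers and the naming convention
to a handful of traces; the function and column names are illustrative
assumptions, not the actual Next-Lab script.

```python
import csv
import uuid
from datetime import date

# Illustrative in-memory traces; the real script reads the Graasp.eu database.
traces = [
    {"actor": "teacher-1", "verb": "created", "object": "obj-9", "space": "sp-7"},
    {"actor": "student-4", "verb": "accessed", "object": "obj-9", "space": "sp-7"},
]

# Randomized identifiers: each actor is mapped to a fresh random ID and the
# mapping is discarded after extraction, so identities cannot be recovered.
mapping = {}
def anonymize(actor_id):
    return mapping.setdefault(actor_id, uuid.uuid4().hex)

# Naming convention: platform name plus the extraction date.
out_name = date.today().strftime("Graasp.eu_%d_%m_%Y.csv")
with open(out_name, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["actor", "verb", "object", "space"])
    writer.writeheader()
    for t in traces:
        writer.writerow({**t, "actor": anonymize(t["actor"])})
```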
# Feedback Data (pen paper, event community)
Under the umbrella of _feedback data_ we consider not only feedback provided
by the users (e.g., data collected by WP1 and WP2 partners, by ambassadors at
the events they organized, by the PD team, or by other partners for targeted
research investigations, etc.) but also problems and questions raised by them
(e.g., via the helpdesk). This data is used mainly to reinforce the co-design
and to measure the impact of the functionalities and services offered in the
project. Thus, ambassadors, partners and the European Commission will have
access to the outcomes of the data analyses.
Feedback and participatory design (usability) data are gathered on:
* Graasp through the **Go-Lab Community space** 8 . As described in D2.1, the community space is used to support events and peer interaction among community members (i.e., teachers and project partners). Teachers are typically invited to join the community in general or for a particular event (which automatically adds them to the community). When invited to join the community, they fill in a registration form 9 shown in Figure 6. This form has a threefold purpose: collect the user profile (essential to measure the impact in T1.4); get the informed consent to use anonymous data regarding the activities carried out in the project and the platforms for research and improvement purposes; and register under which conditions the users are joining the community. Apart from helping us to keep the community updated (e.g., sending information regarding training events and platform updates to those who subscribe), the conditions for joining the community allow us to detect users willing to provide user feedback (up to 2 questionnaires per year for assessing features and services).
**Figure 6. Go-Lab Community registration form.**
Figure 7 shows that the event registration form is just an extension of the
Community registration form, where we also ask for consent to take pictures or
record videos during the sessions.
**Figure 7. Event registration form.**
* **Intercom** **Helpdesk.** A direct line of support with the teachers in Next-Lab is provided through the Intercom Helpdesk. Intercom streamlines the creation and management of support tickets, allowing project partners to collaboratively answer questions and resolve issues raised by current and potential users. Interactions that occur on Intercom are stored on Intercom’s infrastructure. Intercom stores browser information such as language and location (as illustrated by Figure 8, which shows the Helpdesk users over the last 3 months). Furthermore, the names of Graasp and Go-Lab users are shared with Intercom; however, the associated Graasp userID is not shared directly but only in hashed form (see the sketch after this list). This mechanism allows Graasp managers, but no one else, to make the link between Intercom users and Graasp users. A user can always sign up separately to Intercom to share his/her information, though this is not a requirement to access the helpdesk feature.
**Figure 8. Intercom Helpdesk user location from March to June 2017.**
* **Participatory design data.** Data will be gathered by means of interviews, observations, questionnaires, etc. either in face-to-face PD (participatory design) events or through online mechanisms (e.g., online questionnaires, PDotCapturer 10 , etc.). PD data will be gathered anonymously, meaning it will not be linked with personal information of the participants providing it. For some inferential statistics and to get background information on the participants, general demographic data on the person (e.g., age) and their teaching/learning background (e.g., primary or secondary school) might be gathered and taken into consideration for the data analysis.
* **Ambassador** **Outreach Data.** We collect feedback data on the outreach activities of Go-Lab ambassadors through online surveys. This data contains information about presentations (see Figure 9) 11 and social media dissemination (see Figure 10) 12 performed by the Ambassadors. These surveys also include personal data about the Ambassadors, such as: name, surname, email address, city and country where they teach, school name, school postal address, and subjects they teach. For the events/presentations/trainings they carry out as part of their outreach, the following information is collected: type of activity, dates, country, city, language, name of the event, link to website (if available), type of participants, number of participants.
**Figure 9. Ambassador presentation dissemination report surveys.**
**Figure 10. Ambassador social media dissemination reporting surveys.**
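The hashing of the Graasp userID mentioned in the Intercom Helpdesk item above
could work as in the following sketch; the key handling and function name are
illustrative assumptions, not the actual Graasp implementation.

```python
import hashlib
import hmac

# A secret key held server-side by Graasp; with it, Graasp managers can
# recompute the digest for any userID and thus link Intercom users back to
# Graasp users, while Intercom itself never sees the raw userID.
SECRET_KEY = b"graasp-managers-only"  # illustrative placeholder

def intercom_user_hash(graasp_user_id: str) -> str:
    """Return the hashed identifier shared with Intercom instead of the userID."""
    return hmac.new(SECRET_KEY, graasp_user_id.encode(), hashlib.sha256).hexdigest()

print(intercom_user_hash("user-12345"))
```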
## Feedback Data Consent
* **Graasp Community and event spaces** . When joining the Go-Lab Community in Graasp, teachers:
* must agree to let Go-Lab & Next-Lab use anonymous data regarding their activities in the project and the platforms for research and improvement purposes
* can agree to let Go-Lab & Next-Lab send them questionnaires (max. twice per year) for assessing current and new features and services offered by golabz.eu and graasp.eu
* can agree to let Go-Lab & Next-Lab send them information regarding training events and platform updates
When joining an event, they can also decide whether or not to appear in
pictures or video recordings taken during the event for dissemination and
research purposes.
* **Participatory design** . Teachers will be approached by the Next-Lab consortium on the basis of their experience with the Go-Lab ecosystem, local contacts with schools using Go-Lab, longer-standing cooperation, and/or specific user characteristics. Students (minors) will never be approached directly but always through their teachers or schools. All participants, irrespective of age, are required to sign a consent form to protect their rights of participation in empirical studies. Of particular importance is that they agree to the data so produced being published anonymously for research purposes and that they have full rights to withdraw from any study without needing to give any reason. In the case of interviews, questionnaires and the online feedback mechanisms, participants give consent by participating in the data collection. In the case of observations, consent will be gathered in advance. All participants participate on a voluntary basis. Participants (or, when appropriate, their legal representatives) will be informed about the data gathering and the way the data are used (which will always be done anonymously). For participatory design and feedback data where minors are involved (mainly concerning the inquiry learning spaces), we use (passive) informed consent forms as they are in use at UT and ULEIC. ULEIC and UT data gathering is subject to prior approval by the ethics committees of these two institutions. As part of the consent procedure, teachers and students (and their legal representatives) will be informed about the goal of the study and the way the data will be processed and published. The information given will ensure that participants or their legal representatives have sufficient information to decide on their consent, and it will explain participants’ rights in a clear way.
* **Intercom Helpdesk.** Use of the Intercom helpdesk implies acceptance of Intercom’s privacy policy 13 . If and when users sign up individually to Intercom, they will be prompted to accept this privacy policy, along with Intercom’s terms of service. However, users do not provide explicit consent when they simply use the service without signing in.
* **Ambassador Outreach Data.** The organizing of events is part of the Ambassadors’ tasks, which they agreed to fulfil as part of the MOU they signed (see Figure 11). In the Open Call for the Ambassadors, teachers had to reply yes/no to some statements, including this one regarding their contact details: "Whether I am selected or not, European Schoolnet may contact me for other projects / events".
**Figure 11. Ambassador MOU.**
## Feedback Data Storage
Participatory design and feedback data involving minors on Next-Lab learning
spaces will be gathered by ULEIC and UT (and potentially additional partners),
while participatory design and feedback data involving adults will be
collected by most partners under the leadership of EUN.
* **Graasp Community and event spaces** . The same storage applies as for other Graasp data (see Section 2.3).
* **Participatory Design data** collected by ULEIC will be stored on ULEIC servers (in case of ULEIC online tools used to collect data) or servers of questionnaire service providers (e.g., Google Forms) for digital data collection. For paper-based data collection the feedback data will be stored in a locked office in the Informatics Department of the University of Leicester (for data collected by ULEIC).
* **Intercom Helpdesk.** As stated in their privacy policy, Intercom “complies with the EU-U.S. Privacy Shield Framework and the Swiss-US Privacy Shield Framework as set forth by the U.S. Department of Commerce regarding the collection, use, and retention of personal information from European Union member countries.”
* **Ambassador Outreach Data.** The data is stored on the SurveyMonkey server under EUN’s professional account.
## Feedback Data Access
* **Graasp Community and event spaces** . The same access applies as for other Graasp data (see Section 2.3), with the exception of user registration data, which does not exist in regular Graasp spaces. Such registration information can be accessed by community and event owners.
* **Participatory design** . Only the partner that performed the PD activity (mostly ULEIC) will have access to the raw data collected. On rare occasions they might share the data either with a partner to analyse the data (e.g., if a partner other than ULEIC conducts the event but the data analysis task lies with ULEIC) or with the partner developing the artefact of interest in the PD activity (e.g., if there are benefits to accessing the anonymized raw data over receiving a report).
* **Intercom Helpdesk.** Besides the platform administrators at Intercom, 52 people involved in providing help to users have access to Intercom data. These include Ambassadors and project partners. Among the 52 people, 15 have full access (i.e., partners from EPFL, IMC, Nuclio, or EA); the others have restricted access (they cannot access the Intercom app settings, Intercom members or Intercom billing).
* **Ambassador Outreach Data.** The SurveyMonkey data is accessible to the EUN team (Evita Tasiopoulou, Enrique Martin) and the person responsible for the impact assessment (Task 1.4), i.e., María Jesús Rodríguez Triana.
## Feedback Data Usage
* **Graasp Community and event spaces** . This data is used to report about the training events and provide partners, Ambassadors and the Commission with evidence on the project impact.
* **Participatory design** . In most cases, the results and outcomes of PD activities will be analysed by the partner conducting the activity (or ULEIC) and shared with the respective partners in the form of anonymized and aggregated reports. These reports can be enhanced with quotes from the raw datasets where appropriate.
* **Intercom Helpdesk.** The Intercom helpdesk data is used to provide help to users who request it. In the future, we will use the data to better understand the recurring issues and provide FAQ type support to users. Finally, we use data to provide feedback to the European Commission on the workload and the performance of the Helpdesk.
* **Ambassador Outreach Data.** Ambassadors’ personal information, contact and school details are collected in order to facilitate our communication with them, to provide us with demographics such as the types of areas we cover and the possible audiences we can attract, and to indicate needs that might arise in the future (e.g., travel limitations). Events data is collected mainly for reporting purposes and to provide as detailed as possible an overview of the outreach activities that our Ambassadors are carrying out and their possible impact (when we combine this information with the metrics, for example, we might get some interesting insights). We also use this information to evaluate the Ambassadors’ performance, and these reports will partly determine our future collaboration with them (we hold the right to replace them if they do not perform as agreed).
# Data Preservation
Backups of the Graasp.eu server are kept at least for the full duration of the
Next-Lab project (January 2017 to December 2019) plus one year. After the end
of the project we will fall back on our usual backup scheme of keeping backups
for at least one year (longer if available human and IT resources allow it).
Backups of the extracted data files are also kept for the full duration of the
project plus one year. The public data set will be kept according to the
policy of the public scientific repository, which will be selected in
agreement with the project partners.
Thanks to these backups, the full database of the Graasp.eu platform can be
regenerated at any time and analytics can be extracted.
The Graasp.eu server and one backup storage unit are currently located in the
data center of the SV building at EPFL. A second backup is kept in the EE
building, also at EPFL. In July 2017, the graasp.eu server will be moved to
another EPFL data center located in the MA building.
Backups of Golabz are kept for the full duration of the Next-Lab project plus
one year after the project end. After that, the database will be archived
locally at IMC and can be restored at any moment, if needed.
## Formal information/data security standards
The Graasp.eu server follows the EPFL standard for open-access platforms and
is audited yearly for possible intrusion risks. The computers of the Data
Management Leader and of the T1.4 Leader are not open to the Web, and OS
security patches are applied when available.
The HostEurope virtual server (where Golabz is hosted) is located in the
datadock data centre in Strasbourg, which fully complies with all German
quality and safety standards. The datadock is one of the safest and greenest
data centres in Europe and was awarded the highest possible rating of five
stars in the recent eco Datacentre Star Audit. The computer of the Golabz
system administrator is not open to the Web, and security updates are applied
regularly.
## Main risks to data security
The only personal data stored in the Graasp database are the emails and names
or nicknames of the registered users. These emails are accessible only to the
EPFL server managers and the Data Management Leader (see 2.3), all belonging
to the EPFL React Group developing the Graasp platform and all having a
regular EPFL contract. Only these authorized managers, who hold the password
to access the server, can see this information. The administrator password is
changed regularly, including every time there is a change in the personnel
administrating the server. So, the main risks are an intrusion or a breach in
the server (against which it is well protected) or a manager sharing the data
intentionally or unintentionally (which could trigger a legal procedure).
As mentioned before, all data extracted from the server database for the Task
T1.4 Leader (responsible for assessing the impact of the project) are
anonymized during the extraction process (randomized identifiers). So, user
identities are never revealed and are not available outside the server
database (which needs them to provide its services).
Golabz stores the following personal data: users’ emails, usernames, and
passwords. These data are accessible to the Golabz system administrator and
main developer (Evgenij Myasnikov, IMC) and to the Golabz product manager
(Diana Dikke, IMC), both of whom have permanent contracts at IMC. The risk of
an intrusion or a breach in the server is low, as the HostEurope servers are
well protected against such attacks.
## Open Science and Data sharing
All activity traces of the spaces in the Graasp.eu platform will be
automatically recorded, except the data of the spaces in which tracking has
been disabled by the users themselves (opting in or out of data tracking is
available per ILS in Graasp through the AngeLA mechanism 14 ). However, only
selected analytics extracted from the digital logs will be shared for
reporting to the European Commission or in associated scientific publications.
A description of the data will be provided in the repository, which will be
selected in association with related publications. See an example _here_ . The
data will be curated and formatted according to relevant guidelines 15 and
shared under a Creative Commons CC-BY-NC licence.
No restrictions apply to the anonymized data, as there are no commercial
dimensions in the Next-Lab project, which is fully dedicated to promoting and
exploiting open-access platforms and open educational resources.
The Next-Lab Data Management Leader and the other authorized people who have
access to the data specified above must use the data accessible to them only
for their contractual duties. They have no right to share their data-access
credentials with others and are responsible for keeping them in a safe place.
# Conclusion
This deliverable presented the Data Management Plan for the Next-Lab project.
It highlighted how the Next-Lab consortium tackles the tension between storing
and sharing the greatest amount of data in the spirit of open science and
restricting data collection to the minimum to respect user privacy and ensure
informed consent and data control.
This Data Management Plan has been evaluated and accepted by the EPFL Data
Management Team and by the EPFL Ethics Committee working closely together.
They focused on the authoring and exploitation infrastructure (graasp.eu and
the associated storages of spaces, traces and learning outputs) which are
under the responsibility of EPFL, while also considering the interplay with
the other platforms, services, and data management activities. This process
was helpful to understand the challenges and to develop best practices for an
academic institution like EPFL in offering cloud services with worldwide open
access.
As a matter of fact, privacy and ethics related to educational services and
data are part of the research investigations carried out in the framework of
the Next-Lab Innovation Action, and advances on these dimensions for open
access digital education will be regularly reported in scientific publications
acknowledging the co-funding of Next-Lab and in upcoming related Next-Lab
deliverables.
Thanks to this document, the Next-Lab beneficiaries should have a clear
understanding of their duties as service providers, data managers, or data
consumers. We focused on having people individually listed with fully defined
responsibilities and duties. Only people requiring access to servers or data
for the operation of the infrastructures or the exploitation of the data are
granted such access. No complementary access is granted to beneficiaries not
requiring it for the completion of their tasks or to any third parties.
# 1 INTRODUCTION
The Horizon 2020 FAIR Data Management Plan (DMP) template is used in this
report. ‘FAIR’ stands for: findable, accessible, interoperable and re-usable.
FAIR guidance:
_http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oadata-mgt_en.pdf_
The basis of the FAIR principles can be found here (Nature publication):
_https://www.nature.com/articles/sdata201618_
Three versions of the data management plan will be submitted ( **Table 1** ),
but the third will be incorporated into deliverable D6.2 “Final Report with
Legacy plan for updating and maintaining Barrier Atlas and other AMBER digital
outputs”, which was specifically created to ensure post-project data
management.
**Table 1.** Updates on the AMBER Data Management Plan.
<table>
<tr>
<th>
Deliverable
</th>
<th>
Title
</th>
<th>
Submission Date (Month)
</th>
<th>
Content
</th> </tr>
<tr>
<td>
D6.3
</td>
<td>
Data Management Plan v1, v2.
</td>
<td>
30 November
2016
(M6)
</td>
<td>
Initial data
management plan outlining the intended approach for the
project
</td> </tr>
<tr>
<td>
D6.3
Updated
</td>
<td>
Data Management Plan v3
</td>
<td>
31 May 2018 (M24)
</td>
<td>
Update of the data management plan with modifications indicated by the EC and
other changes generated by the project
</td> </tr>
<tr>
<td>
Incorporated into D6.2
</td>
<td>
Final Report with Legacy plan for updating and maintaining Barrier Atlas and
other AMBER digital outputs
</td>
<td>
31 May 2020 (M48)
</td>
<td>
Final data
management plan, also covering post-AMBER
legacy plan
</td> </tr> </table>
# 2 DATA SUMMARY
## 2.1 Purpose of the data
In terms of data management, AMBER’s main data outputs include:
1. A pan-European Atlas of river barriers
2. A decision support tool for planning, removal and mitigation of barriers (dams, culverts, weirs) in European Rivers
Data collation (gathering pre-existing data) and data collection (new data
obtained through the actions of the AMBER project) are important in producing
databases for both of these objectives. The Barrier Atlas will also be used as
a basis for creating other important pan-European data resources (maps) within
AMBER, such as predicted fish community maps. The decision tool comprises
individual tools that each contribute to the barrier
planning/removal/mitigation and monitoring processes and have associated data
sets from tool development. Case Study data sets will result from testing of
the tools, and finally there will be useful data resulting from dissemination
activities and metadata associated with the project.
**Table 2.** Relationship between data collated/collected and objectives of
the project
<table>
<tr>
<th>
**A: Barrier Atlas data and associated maps:**
1. Barrier Atlas
1. Collation of currently available data held by regional and national authorities on barriers to produce the barrier base map
2. Data collected by the AMBER consortium to validate the Barrier Atlas
3. Citizen Science (public) data collected on barriers using a smart phone app to supplement the available barrier data
2. Fish community map
3. Atlantic salmon status map
4. Barrier impacts on river ecology map
**B: Specific tools used to comprise the overall decision tool:**
**Monitoring:**
1. eDNA tool for ecological monitoring
2. Rapid habitat assessment tool using drones
**Barrier Passability:**
3. Barrier Passability Tool: Fish and other aquatic organisms responses to barriers and hydrodynamics
4. Model of organism passability vs. hydropower generation
5. Model of fish movement through river networks
**Conflict resolution:**
6. Cost-benefit analysis of river infrastructure tool
7. Barrier management scenario tool (habitat stress days)
8. Ecosystem services evaluation tool
9. Social attitudes tool for conflict resolution
**C: Case study data** ; outputs of testing the tools and mitigation
techniques **D: Dissemination data and project metadata**
</th> </tr> </table>
The Barrier Atlas itself is the first pan-European barrier map and will have
applications in scientific research, barrier planning, and policy making. The
tools, both individually and the final decision tool, will have use within
industry (hydropower), policy decisions, catchment management, regional
planning, national planning, and also within scientific research.
Being ‘adaptive’ management, it is important that future researchers also have
access to the data used to initially create the tools so that they can be
iteratively improved. Consequently, the overall decision tool can be improved
as scientific understanding progresses.
Additional data associated with dissemination activities and metadata relating
to these data sets will also be created.
The following data descriptions list whether the data being used pre-exist
AMBER, i.e., fully (yes), partially (some), or are completely collected within
the scope of the AMBER project (no). They also detail the origin of the data
being used by AMBER, the type of data output from the study (variables), the
format of the output data, how the output can be further utilised, and which
organisational bodies are likely to use that output.
Within AMBER there are Case Studies (WP4) which are used to test the various
barrier management tools which AMBER produces. Case Study sites (WP4) are
chosen to assess the tools in general, while a separate ‘test catchment’ in
Germany (River Neckar) has been selected for in-depth studies on specific
socio-economic aspects (i.e., ecosystem services and barrier cost evaluation).
What follows is a data summary, but more detailed data outputs associated with
the specific tasks are listed in Appendix 1 ( **Table 6** and **Table 7** ).
## 2.2 Summary
**A: Barrier Atlas data and associated maps** (T1.2.1; D1.2; T1.2.2; D1.3)
**BARRIER ATLAS**
**A1a.Collated barrier data**
<table>
<tr>
<th>
**Data contact** POLIMI (SB) **Existing data?** Yes
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Collated barrier data from regional and national authorities throughout Europe
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
All available data, but focusing on: Source ID of barrier; url of data source;
country; latitude; longitude; river name; basin name; barrier height; barrier
type; year constructed; fishpass (y/n)
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Original spreadsheet data, processed into databases and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
10 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Along with the Citizen Science and validation data, will create the pan-
European Barrier Atlas.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by the public; hydropower companies; educational establishments;
scientists; municipalities; water authorities; NGOs and policy makers.
</td> </tr> </table>
The Atlas data will comprise stream barrier locations (latitude; longitude)
and all other available information on barriers held by regional and national
authorities within all 31 European Economic Area (EEA) countries, as well as
some Balkan countries (Albania, Bosnia and Herzegovina, Macedonia, Montenegro,
Serbia). It will also include islands within these countries, e.g., the
Azores. This is data collation, i.e., gathering pre-existing data. Deliverable
D1.2 (country-specific reports containing the metadata) provides more details
on the data being collected.
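As a sketch of how collated spreadsheet data could be turned into the GIS
themes listed above (a .csv into a .shp with its companion .dbf), assuming
column names taken from the data-type list; this is illustrative, not the
actual AMBER processing pipeline.

```python
import pandas as pd
import geopandas as gpd

# Hypothetical collated file with columns from the "Data type" row above.
barriers = pd.read_csv("collated_barriers.csv")  # assumed file name

# Build point geometries from the coordinate columns, tagged as WGS84.
gdf = gpd.GeoDataFrame(
    barriers,
    geometry=gpd.points_from_xy(barriers["longitude"], barriers["latitude"]),
    crs="EPSG:4326",
)

# Write the GIS theme; the .dbf attribute table is created alongside the .shp.
gdf.to_file("barrier_atlas.shp")
```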
### A1b. Atlas validation data
<table>
<tr>
<th>
**Data contact** POLIMI (SB) **Existing data?** No
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Field surveys of selected locations by AMBER consortium members (see
description below)
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
ID of barrier; photo; latitude; longitude; date recorded; barrier type;
barrier height; extends across entire watercourse (y/n); in use (y/n);
altitude; slope; river type; sinuosity; local land use
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Original spreadsheet data, processed into databases and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
5 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Along with the collated barrier data and the Citizen Science data, will create
the pan-European Barrier Atlas.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by the public; hydropower companies; educational establishments;
scientists; municipalities; water authorities; NGOs and policy makers.
</td> </tr> </table>
Collated regional and national authority data will vary in the types of
barriers surveyed by different authorities, the minimum height surveyed, and
the survey methods. To allow comparability between Member States and to
estimate the numbers of barriers of types or heights not monitored, AMBER
consortium members will carry out a validation exercise. This will be a field
exercise whereby selected locations are surveyed for all types and heights of
barriers. Comparison with the collated data set for each region will allow
upscaling of the data to provide better estimates of total barrier numbers,
and estimated barrier numbers of each type, for Member States and across
Europe. It will also allow fair comparisons between regions which have been
surveyed by authorities using different survey methods.
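A minimal sketch of the kind of upscaling this comparison enables, assuming a
simple ratio-based correction factor and made-up counts (the actual AMBER
estimation method may differ):

```python
# Counts for one sample region (illustrative values only).
field_count = 240      # barriers of all types and heights found in the field
collated_count = 150   # barriers listed for the same region by the authority

# Ratio-based correction: real barriers per recorded barrier in this region.
correction = field_count / collated_count  # 1.6

# Upscale the authority's national total to an estimated true total.
national_collated_total = 12_000  # illustrative
estimated_true_total = national_collated_total * correction
print(f"Estimated total barriers: {estimated_true_total:.0f}")
```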
### A1c. Citizen Science Data
<table>
<tr>
<th>
**Data contact** WFMF (JD) **Existing data?** No
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Citizen Science: the European public will record barrier data using ‘barrier
tracker’ app.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
ID of barrier; photo; latitude; longitude; date recorded; barrier type;
barrier height; extends across entire watercourse (y/n); in use (y/n)
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Original spreadsheet data, processed into databases and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
10 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Along with the collated barrier data and the validation data, will create the
pan-European Barrier Atlas.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by the public; hydropower companies; educational establishments;
scientists; municipalities; water authorities; NGOs and policy makers.
</td> </tr> </table>
Citizen Science (CS) data will come from the ‘barrier tracker’ app developed
for AMBER. The app offers tier 1 and tier 2 expertise levels; the majority of
users will use the simple tier 1 app. The additional data fields to be used in
tier 2, by expert users, are still being decided. A third-party contractor,
‘Natural Apptitude’, developed the app and will collect the data on their
servers before sending it to beneficiary 19-JRC. Validation of the data
(checking images, checking against other records) will be done before the data
are utilised within the Barrier Atlas.
### A2. Fish Community Map (T2.2.1)
<table>
<tr>
<th>
**Data contact** SSIFI (PP)
**Existing data?** Yes (plus model outputs)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Pan-European fisheries and habitat data will be collated.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Fish species; abundance; age category; river type; channel sediment type;
channel width; channel depth; discharge/flow rate
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Original spreadsheet data, processed into databases and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
10 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Assessing the effect of barriers on ecological habitats and fish communities;
input into predictive planning.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by the hydropower companies; scientists; municipalities; NGOs
</td> </tr> </table>
A map of fish communities in different water bodies will be created. It will
be based on determining the ecological fish habitats in water bodies, taking
into account barriers and the hydrologic regimes. Habitat models already
developed by SSIFI and ERCE will be used to delineate these fish habitats. The
fish communities will also be compared with expected reference conditions, and
Restoration Alternatives Analysis will be used to examine the change in
habitat structure and the change in ‘habitat stress days’. These data will
allow assessments of the available and optimal options for stream restoration.
**A3. Atlantic salmon status map** (T4.2.1)
<table>
<tr>
<th>
**Data contact** SOTON (PK)
**Existing data?** Some (plus model outputs)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
The AMBER Barrier Atlas; Barrier impacts on river ecology; national juvenile
salmon stock assessments (from regional authorities)
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Spreadsheet of predicted salmon stocks; map of Atlantic salmon status
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Targeting specific barriers in Europe which could be removed to improve salmon
stocks and socio-economics; rehabilitation schemes; strategic planning
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by: educational establishments; scientists; municipalities; water
authorities; NGOs and policy makers.
</td> </tr> </table>
Utilising the ‘Barrier impacts on river ecology’ output and the Barrier
Atlas, a pan-European, river-by-river assessment of the status of Atlantic
salmon will be carried out, examining the effects of barriers on salmon
communities. The model will be validated with data from national juvenile
salmon stock assessments. This will include an assessment of Atlantic salmon
river habitats lost, at different spatial scales. These data will also be used
to select barriers whose removal would most benefit salmon populations and
socio-economic return.
#### A4. Barrier impacts on river ecology map (T2.1)
<table>
<tr>
<th>
**Data contact** SU (LB)
**Existing data?** Yes (plus model outputs)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Water Framework Directive (WFD) fish/invertebrate/plant/phytoplankton data;
Barrier Atlas
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
species and abundance data: invertebrates; fish; macrophytes; phytoplankton
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Original spreadsheet data, processed into databases and GIS themes:
.xls .csv .mxd .shp .dbf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
50 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Determining how connectivity is affecting ecological status as defined within
the WFD; targeting river restoration schemes i.e. seeing if connectivity
likely to be a problem
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by scientists; municipalities; water authorities; NGOs and policy
makers
</td> </tr> </table>
Existing raw stream survey data of ecological assemblages (aquatic plants,
benthic macroinvertebrates and fishes) will be collated from national WFD
databases, into a single database. A novel modelling approach ‘PREDICTS’ will
be used to examine the effects of barriers on ecological assemblages at a pan-
European scale.
**B. Specific tools used to comprise the overall decision tool**
### MONITORING
#### B1. eDNA tool
<table>
<tr>
<th>
**Data contact** UNIOVI (EGV) **Existing data?** Some
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Available primers; lab testing of primers; field testing of eDNA methods for
detecting species
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
specific primers configurations; methodologies
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet; publications .xls .csv .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will enable users to monitor species using eDNA and to make assessments on
barrier effects based on eDNA differences up/down stream.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by: hydropower companies; scientists; water authorities; NGOs
</td> </tr> </table>
Environmental DNA (eDNA) is increasingly used for rapid detection of the
presence of a suite of different species. A water sample provides DNA
sequences from multiple species, which can be analysed for presence/absence
simultaneously. The method is still in development, but testing above and
below barriers is an excellent way to refine the technique. The tool will be
useful for monitoring the effects of barriers on species passability.
#### B2. Rapid habitat assessment tool
<table>
<tr>
<th>
**Data contact** DU (PC) **Existing data?** No
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Photo and video footage from drone flights along river corridors done by AMBER
consortium members. Development of a rapid habitat assessment methodology.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
video; photo; report
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
video; photo; report
.mov .avi .mp4 .jpg .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
20 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Users will be able to assess river habitats rapidly using drone technology.
This will have particular application for assessing hydromorphological change
due to barriers.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by: hydropower companies; scientists; water authorities; NGOs
</td> </tr>
</table>
### BARRIER PASSABILITY
#### B3. Barrier Passability Tool
<table>
<tr>
<th>
**Data contact** SOTON (PK) **Existing data?** Some
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Published data on ability of aquatic organisms to pass different barriers
types based on barrier heights, barrier structure and hydrodynamic conditions;
experimental data in flumes on ability of weak swimmers to navigate different
hydrodynamic conditions.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
passability values for species based on variables such as water depth required
to jump; swim velocity; jump height; behavioural responses etc. (TBC)
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data; report
.xls .csv .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will enable barrier design and mitigation techniques to be optimized for
different species; can be used to predict ecological effects of barrier
construction; can be used for modelling ecological effects of barriers at a
strategic (national, pan-European) scale.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by the public; hydropower companies; educational establishments;
scientists; municipalities; water authorities; NGOs and policy makers.
</td> </tr> </table>
#### B4. Model of organism passability vs. hydropower generation
<table>
<tr>
<th>
**Data contact** SOTON (PK) **Existing data?** Some
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Data from the passability tool and from information relating to hydropower
generation and flow (and seasonal migration patterns). Existing data comes
from technical details relating to hydropower production and published
information used in developing barrier passability tool
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Passability values under different flow velocities and barrier heights;
hydropower commitments (licensing) and flow-power generation relationships;
technical hydraulic data relating to barrier types and mitigation types;
temporal migration/movement patterns of different organisms (TBC).
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data
.xls .csv
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will assist in optimizing the management or mitigation strategies of
individual barriers as well as feeding into strategic regional decision
making.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by hydropower companies; scientists; municipalities; water
authorities; NGOs
</td> </tr> </table>
Data from the passability tool and from information relating to hydropower
generation and flow (and seasonal migration patterns) will be incorporated
into a tool which can balance decisions on hydropower generation against the
ability of different organisms to navigate different barriers under different
flow/seasonal regimes. This will be validated in a specific field test
catchment in Germany (Rivers Nehe or Neckar) where beneficiary 16-IBK has
significant knowledge (T3.1.1).
#### B5. Model of fish movement through river networks (T3.2.3)
<table>
<tr>
<th>
**Data contact** SOTON (JK) **Existing data?** Some
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Data from the passability tool and from information relating to hydropower
generation and flow (and seasonal migration patterns)
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Passability values under different flow velocities and barrier heights;
hydropower commitments (licensing) and flow-power generation relationships;
technical hydraulic data relating to barrier types and mitigation types;
temporal migration/movement patterns of different organisms (TBC).
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data
.xls .csv
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will assist in optimizing the management or mitigation strategies of
individual barriers as well as feeding into strategic regional decision
making.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
Will be used by hydropower companies; scientists; municipalities; water
authorities; NGOs
</td> </tr> </table>
The behavioural response of organisms to barriers and flow velocities will be
modelled using an Agent-Based Model (ABM); a minimal illustration of this
approach is sketched below. Data on swimming behaviour will also be obtained
from experimental lab work done by AMBER in Swansea and Southampton.
Information from other data sources created in AMBER will be used (Barrier
Atlas; Barrier Passability Tool). Existing data has been used for producing
both the Barrier Atlas and the Barrier Passability datasets.
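The following minimal sketch moves fish agents upstream through a chain of
barriers, each with a passability probability; the numbers, retry rule and
structure are illustrative assumptions, not the AMBER model.

```python
import random

# Passability probability of each successive barrier in a reach (illustrative).
barrier_passability = [0.9, 0.6, 0.3]

def migrate(n_fish, attempts_per_barrier=3):
    """Count how many agents reach each point along the reach."""
    reached = [0] * (len(barrier_passability) + 1)
    for _ in range(n_fish):
        reached[0] += 1
        position = 0
        for p in barrier_passability:
            # Each agent retries a barrier a few times before giving up.
            if any(random.random() < p for _ in range(attempts_per_barrier)):
                position += 1
                reached[position] += 1
            else:
                break
    return reached

random.seed(1)
print(migrate(1000))  # counts of fish passing 0, 1, 2 and 3 barriers
```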
### CONFLICT RESOLUTION
#### B6. Cost-benefit analysis of river infrastructure tool (T3.3.1)
<table>
<tr>
<th>
**Data Contact** DU (ML)
**Existing data?** Some (within “Model of organism passability vs. hydropower
generation” data)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
AMBER field studies in tests catchment and “Model of organism passability vs.
hydropower generation” data
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Hydrological variables: head difference; stream geometry; flow rate.
Costings for different constructions of different dam and barrier types.
</td> </tr>
<tr>
<td>
</td>
<td>
Costing estimates for ecosystem services and economic benefits of barriers
(e.g. of power production).
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data
.xls .csv
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
2 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will feed into decision tool and assist in strategic planning of barrier
feasibility and location and provide objective information as a basis for
stakeholder conflict resolution.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
hydropower companies; local government; municipalities; water authorities;
NGOs
</td> </tr> </table>
The field test catchment in Germany (see “Model of organism passability vs.
hydropower generation” data) will also undergo a comprehensive economic
valuation of the effects of stream barriers on riverine goods and services.
#### B7. Barrier management scenario tool (D2.6, T2.2.1. T2.3)
<table>
<tr>
<th>
**Data contact** SSIFI (PP)
**Existing data?** Some (WFD databases, fisheries and hydrological data)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
‘Fish Community Map’ (fisheries and hydrological data) from above; WFD
databases; EC stream flow and climate records
_https://www.eea.europa.eu/data-and-maps/indicators/river-flow-3_
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
Loss of habitat; change in habitat structure; change in number of habitat
stress days; RAA diagrams
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet data and diagrams (in reports) .xls .csv .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
2 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will feed into decision tool and assist in strategic planning of barrier
feasibility and location and provide objective information as a basis for
stakeholder conflict resolution.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
hydropower companies; scientists; water authorities; policy makers; NGOs
</td> </tr> </table>
Using the earlier AMBER ‘Fish Community Map’ data (based on fisheries and
hydrological data), the fish guilds and habitats will be assessed for
deviation from expected reference conditions (WFD databases) within
representative rivers of the EU. Restoration Alternatives Analysis (RAA),
based on the MesoHABSIM model, will be used to assess the loss of habitat, the
change of habitat structure and the increase in the number of habitat stress
days for different barrier management scenarios (planning, removal and various
forms of mitigation).
Habitat deficit, change of habitat structure and habitat stress days will also
be calculated for barriers under different climate change scenarios, using a
model based on EC stream flow and climate data. RAA diagrams will also be
produced.
#### B8. Ecosystem Services Evaluation Tool (T2.6)
<table>
<tr>
<th>
**Data contact** ERCE (KK)
**Existing data?** No (except some use of barrier atlas)
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Field studies in German test catchment; barrier atlas data; ‘Cost-benefit
analysis of river infrastructure tool’; additional cost-benefit valuations
relating to ESS
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
from test catchment: categorization of ESS; cost-benefit valuations; diagrams
and spreadsheet models of links between stakeholders
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheets and report
.xls .csv .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
</td> </tr> </table>
The Ecosystem Services (ESS) that rivers provide and the users of these
services will be identified and defined, along with the effects of barriers on
ESS delivery. The ESS delivery rate will be determined in selected Case
Studies (WP4). Interactions between stakeholders will be examined, as well as
how the construction and removal of barriers, and the resulting change in
river status, redistribute economic gains and losses (utilising data from the
German test catchment). Testing will be done in WP4. Consequences of
management decisions under different temperature/flow conditions (due to
climate change) will also be considered.
#### B9. Social Attitudes Tool
<table>
<tr>
<th>
**Data contact** UNIOVI (EDR) **Existing data?** no
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
Questionnaire for public
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
database of public preferences and value placed on dams and services provided
by dams and rivers (categorised by predictor variables; see description
below).
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
Spreadsheet
.xls .csv
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
1 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
Will feed into decision tool and assist in strategic planning of barrier
feasibility and location and provide objective information as a basis for
stakeholder conflict resolution.
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
hydropower companies; scientists; water authorities; policy makers; NGOs
</td> </tr> </table>
Questionnaires will be used to collect data on public attitudes to dams and
reservoirs and the financial value the public place on them. These will be
administered intensively in all Case Studies and, additionally, in AMBER
beneficiary countries not represented in the Case Studies. The data will be
used to construct a model of the acceptability of dams given basic dam
predictors (barrier height, type and age, as well as respondent education,
gender, age and country), as sketched below.
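One plausible form for such an acceptability model is a logistic regression on
the listed predictors; the column names and survey coding below are
illustrative assumptions, not the actual AMBER analysis.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical questionnaire extract; all column names are illustrative.
df = pd.read_csv("questionnaires.csv")
predictors = ["barrier_height", "barrier_type", "barrier_age",
              "education", "gender", "respondent_age", "country"]

# One-hot encode the categorical predictors.
X = pd.get_dummies(df[predictors],
                   columns=["barrier_type", "education", "gender", "country"])
y = df["accepts_dam"]  # 1 = respondent finds the dam acceptable, 0 = not

model = LogisticRegression(max_iter=1000).fit(X, y)
# Each coefficient shifts the log-odds of a dam being judged acceptable.
print(dict(zip(X.columns, model.coef_[0].round(2))))
```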
**C. Case Study data**
**Data contact** DTU (KA) **Existing data?** No
**Data origin** Field and public surveys within Case Study catchments
**Data type** (see data types for all tools above)
**Data format** (see data formats for all tools above)
**Expected size** 28 GB
**Data utility** (see data utility for all tools above)
**Data Users** (see data users for all tools above)
The tools being developed within AMBER require testing and validation within a
diverse range of catchments and situations, e.g., barrier planning, removal
and mitigation. Case Study sites were chosen to be representative of this
diversity. The data collected will inevitably be integrated into improving the
functioning and accuracy of the tools, and are most appropriately stored with
the tool for which they were collected. However, during field studies the
collection and storage of the data will be organized centrally.
**D. Dissemination data and project metadata**
<table>
<tr>
<th>
**Data contact** WFMF (JD) & SU (ID) **Existing data?** no
</th> </tr>
<tr>
<td>
**Data origin**
</td>
<td>
databases used in assisting dissemination of the project e.g. stakeholders,
Barrier Tracker app users, users of AMBER outputs, educational material,
publications. Metadata of AMBER project.
</td> </tr>
<tr>
<td>
**Data type**
</td>
<td>
contacts
</td> </tr>
<tr>
<td>
**Data format**
</td>
<td>
databases and documents
</td> </tr>
<tr>
<td>
</td>
<td>
.xls .csv .doc .pdf
</td> </tr>
<tr>
<td>
**Expected size**
</td>
<td>
6 GB
</td> </tr>
<tr>
<td>
**Data utility**
</td>
<td>
contacting stakeholders, project organisation, promoting AMBER
</td> </tr>
<tr>
<td>
**Data Users**
</td>
<td>
AMBER consortium (internal), and public; scientists
</td> </tr> </table>
Databases of stakeholders, Barrier Tracker app users and users of AMBER
outputs are maintained throughout the project. Promotional material relating
to the project, such as educational material and newsletters, will also be
maintained. Scientific publications produced by the consortium are also
referenced through OpenAIRE and stored at institutional or journal level
(depending on Open Access copyright conditions).
## 2.3 Data not originating directly from AMBER beneficiaries
Data originates from various sources:
1. Some members of AMBER (IFI, SSIFI, WFMF) brought data to the project prior to commencement and have associated IPR; specifically with an agreement that such data can only be used within AMBER. This was dealt with in the Consortium Agreement which was signed prior to AMBER commencement. Attachment 1 from the Consortium Agreement is included in this Data Management report (as Annex 1) in its full version, for reference.
2. Barrier data from regional and national authorities within Europe is generally open to the public and free to use. However, some data are not, and usage agreements have to be drawn up or agreed. Additionally, there is some commercial data, e.g., from hydropower companies, which requires usage agreements; where such data cannot be used within the Atlas (publicly) or for research purposes, it may not be worth collecting.
3. Citizen Science will be used to collect additional barrier data for the Atlas. This data will be open for use for research and by the public. A statement has been included in the app agreement which users have to actively tick to agree to before continuing. This is worded as such (subject to change prior to app launch):
_Who will have access to the data and for what purpose_
The AMBER project team and Natural Apptitude will have access to the data
submitted. Data will be verified by staff at the World Fish Migration
Foundation, before it is made available to JRC. Findings will be presented in
a range of outputs, potentially including the Barrier Atlas, academic
journals, magazines, project summaries, blog posts, infographics, leaflets,
policy briefs and email newsletters. This will help improve scientific
understanding of the impact of barriers across Europe. Members of the general
public will also have access to records via the AMBER website, although record
data will be summarized and will not include your personal information.
4. WP4 Case Studies will contain data from specific studies within catchments; this includes data collected by AMBER members and funded by the EC, which will thus be freely available for use. Questionnaire data collected within the Case Studies is gathered with a signed agreement for the use of the data (see ethics deliverable D7.2).
5. AMBER members will also collect data for validation, which will be freely available.
Throughout the project, additional data sources are likely to become
available. It is important that signed agreements on Intellectual Property
Rights are obtained, and that each dataset is flagged as to whether or not
there are limitations on its use/reuse. AMBER members must also be aware of
data protection law regarding the storage and use of personal data. This is
covered in detail in section 5.1 of this report.
### 2.4 Data Size
NaturalApptitude, the company creating the CS app, has a server to collect the
app data initially, which after pre-processing, will be transferred to JRC.
The estimate of the barrier data collected for the barrier inventory and Atlas
is **16GB** . This will be held and maintained by JRC (Ispra). At the end of
AMBER (May 2020) the contract with NaturalApptitude ends and any data
collected will go directly to JRC. The details of this change over will be in
the final data management report, within D6.2 (month 48).
Other data is being held by the 20 individual participants in AMBER. However, a central Swansea Server has been made available with a current size of **4.4 TB**, expandable to 11 TB if required. The Swansea Server has several roles: (i) to document and allow data sharing between AMBER participants (guaranteeing the latest version); (ii) to store reference documentation for the running of AMBER, e.g. contact details and meeting minutes; (iii) to act as a backup for data collected by beneficiaries.
**Tables 2, 3 and 4** summarise the estimated sizes for different components
of the data collected.
Publication datasets are linked to the publications themselves and stored at
the Coordinator’s (Swansea University) repository as well as the repositories
of the Beneficiaries who generated them (see Section 3.1 below).
Deliverable D5.6 ‘Plan of Exploitation and Dissemination of Results’ contains
details of how the outputs from AMBER will be disseminated and the target
audience. Table 2 in this document specifically relates to the data output
from different tasks, some of which feed into or are combined to produce
outputs from AMBER. A summary table ( **Table 3** ) has been created to show
the relationship between the dissemination of data outputs in D5.6 and the
different sets of data created within each task. This shows the data that come
out of each task, the output created from the data, the method of
dissemination and the target audience, thereby combining the information
provided in **Table 10** of deliverable D5.6 and **Table 2** of this
deliverable.
# 3 ‘FAIR’ (FINDABLE, ACCESSIBLE, INTEROPERABLE AND RE-USABLE) DATA
The ‘FAIR’ principles have the objective of making available data easy to find and access using modern computing methods and the internet. It is recommended that members of the AMBER consortium involved with creating open access databases read the following documents:
_https://www.force11.org/group/fairgroup/fairprinciples_
_http://www.nature.com/articles/sdata201618_
## 3.1 Findable
AMBER will self-archive both publications and data in open access repositories
commonly used by scientists, allowing easily findable and searchable access to
this information on the servers where they are held e.g. CRONFA (Swansea
University). Repositories can be searched for via: _https://www.openaire.eu/_
_http://www.opendoar.org/_
A website front-end for the Atlas data is also being developed, principally as
a public access interface for the barrier data. However, the barrier data will
be accessible visually through the website, or in spreadsheet/csv form through
JRC. Data which is not spatially associated with the Atlas will be held at
Swansea University where possible, although some institutions will hold models
and model output data (Open Access). The website front-end for the Atlas will
also provide links to all the other data and publications associated with
AMBER. Figure 1 shows how the repository lists (and website) are linked to the
different data repositories and the types of data being held there. Digital Object Identifiers (DOIs) will be produced for final Open Access data sets (see _https://www.doi.org/_).
Some of the datasets being used are already collated in other repositories
e.g. Water Framework Directive biological data. Such data sets will not be
duplicated but links will be provided to these repositories. There is
potential for data to be held in non-institutional repositories such as the
Global Biodiversity Information Facility (GBIF) _http://www.gbif.org/_ or the
free repository, Zenodo _https://www.zenodo.org/_ , however it is easier to
control the standards of data and data maintenance in institutional
repositories if they are well established. For example, Swansea University has
a specialized Institutional repository (‘data hub’) and this will be used for
long-term storage of the AMBER datasets.
[Figure: diagram linking repository services (OpenAIRE deposit service; OpenDOAR) to repositories (Swansea University; Joint Research Centre, Ispra; WFMF; the AMBER Barrier Website interface; other approved institutional repositories; Open Access journals) and the data stored in each (all Barrier Atlas data; models and model outputs; all other data; publications from AMBER).]
**Figure 1.** Relationship between repository lists, repositories and data
stored
### 3.1.1 Open access to scientific publications generated by AMBER
Open Access is where the public, without subscription, can access
publications. OpenAIRE is an Open Access project that can be used to find and
link to Open Access publications.
AMBER will use two forms of Open Access publications:
* **Gold Open Access** : The final publication is freely available directly from the publisher
* **Green Open Access** : An author's version of the publication is made available through an institutional repository, a practice commonly referred to as "self-archiving". There is often an embargo period before the publication can be made available elsewhere.
Researchers within AMBER will ensure that all publications are either Gold or
Green Open Access and that they include the terms: "European Union (EU)" and
"Horizon 2020"; name of the action, acronym, grant number and duration i.e.
**“European Union (EU) Horizon 2020 Programme. Adaptive Management of Barriers
in European Rivers (AMBER) Grant#689682 (from 2016 to2020)”**
Each partner will self-archive via open access repositories in order to adhere
to Article 29.2 of the GA. Institutional Repositories used by the consortium
include:
* CRONFA – Swansea University
* RUO – Repository University of Oviedo
* DRO – Durham Research Online
* Orbit – Technical University of Denmark
In addition, all AMBER publications and associated data sets are stored on the Swansea Server.
### 3.1.2 Costs associated with Open Access
As the AMBER project budget has been devolved, beneficiaries are responsible
for forecasting and meeting publication costs, including any costs associated
with Open Access.
## 3.2 Accessible
### 3.2.1 Data
Regarding data sets which will be collected as part of the AMBER project, there is no specific data set which cannot be shared, i.e.:
* questionnaire data will not include identifiers of the individuals;
* much of the barrier data is publicly available and generated by national or regional agencies; case study and validation data collection is funded by the EC and will be publicly available.
However, there are some barrier and fish data used by AMBER that were collected by hydropower companies and Member States and that are not all publicly available. Efforts are being made to make as much of this data open access as possible (detailed below). Where the data cannot be made open access, additional information regarding the restrictions will be kept in a database (not necessarily the same as the database containing scientific data) and documents of written agreements will be compiled and filed in a structured manner. Currently we have identified some potential data sets that will not be open access, but the data itself has not yet been collated.
The questionnaire examining the social aspect of barriers and asking opinions
on barriers has some relevance under data protection law. Respondents are
asked for informed consent for data sharing and long term preservation of the
data within the survey, and are provided the details of how the data will be
used. The data from the questionnaire will not be stored with personal details (name, address, etc.) that could identify respondents. Data protection law was reformed in April 2016; more information is provided here:
_http://ec.europa.eu/justice/data-protection/reform/index_en.htm_
Conditions of use for some data collected prior to the Consortium Agreement have been agreed, but within the AMBER project no beneficiaries have yet requested that the output of data collected by AMBER be restricted, except with regard to enabling time for AMBER researchers to publish articles based on this data before release.
### 3.2.2 Software for accessing the data
Through its citizen science portal (the ‘AMBER Atlas website’), AMBER will permit users to access the data collated in the Atlas and, later, also the Case Studies. This will also be linked to the JRC barrier inventory at the end of the project. In addition to tools for visualising the data (principally a barrier map) there will also be the ability to download different data sets from the website. Currently, ‘csv’ files are considered the best file type for download, as they can be opened directly in a variety of spreadsheet packages (e.g. Microsoft Excel, LibreOffice) as well as used directly in a range of statistical and analytical software packages. In addition, the file size tends to be minimal, since there is no formatting of the text and data.
Thus, data can be accessed with any common internet browser and opened with an
extensive range of software types over different platforms, e.g. Microsoft
Office suite on the PC, Linux operating systems, Unix, Apple OS X.
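As an illustration of this workflow, the following minimal sketch (Python with the pandas library) loads a hypothetical downloaded Atlas extract for analysis. The file name and the `country` column are assumptions for illustration only; the real column set will follow D1.1 Guidance on Stream Barrier Surveying and Reporting (Part B).

```python
import pandas as pd

# Load a hypothetical Atlas extract downloaded from the AMBER website.
barriers = pd.read_csv("amber_barrier_atlas_extract.csv")

print(barriers.head())                     # inspect the first records
print(barriers["country"].value_counts())  # count barriers per country (assumed column)
```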
## 3.3 Interoperable
Since the barrier inventory database will be the first comprehensive barrier
inventory in Europe, it is hoped the data structure detailed in D1.1 Guidance
on Stream Barrier Surveying and Reporting (Part B) will become the standard
for barrier inventories. The database will already be highly interoperable, as
it will combine data sets from different origins, comprising a base set of 20
core variables, but will also not discard any additional data collated on
single barrier datasets. In order for this to be achieved, the following
procedures will be adopted:
* the data will utilise English as a common language and the International System of Units (SI) for measurements
* any categorical data will refer to documentation on what the categories represent and how the categories were created (method), the data collection method will also be referred to within the database.
* scientifically accurate and non-ambiguous vocabularies will be used where possible, or the most commonly accepted terms (in English) if there is no specific scientific definition of the variable.
* words within the data and within column/row titles will be kept to a minimum to make it clear which columns/rows contain the same type of data (until further data has been collected throughout Europe, the specifics of this cannot be detailed).
## 3.4 Reusable
Most of the data (generated from AMBER and from National databases for the
barrier inventory) will be open access and will not have restricted use. There
may also be options to link into national databases to get automated updates
of barrier data, although the ability to do this on a large scale has not been
assessed and is likely to vary greatly depending on the structure of the
database from which the data is obtained and the permissions given by the data
owners.
Restrictions on the release of data to open access, to allow time to publish, will be in place. As a guideline, a period of six months between collection of the whole data set and open access release is estimated. However, many data collection activities create outputs that feed into other analyses and models, so there may be cases where the data will be withheld for longer prior to publishing, due to dependence on a data set later within the project timeline.
# 4 ALLOCATION OF RESOURCES
Costs of making data FAIR within the project are integrated within the
specific tasks, particularly WP5 (dissemination) and not separately costed.
# 5 DATA SECURITY
The Swansea Server is a SFTP (Secure File Transfer Protocol) server, backed up
every evening. Copies of data on this server from beneficiaries are also kept
by the beneficiaries.
The JRC server will operate indefinitely under the auspices of the EC. Funding
to sustain this will be applied for through specific grants in the last 2
years of AMBER. The Swansea Server will retain the data for at least 10 years.
## 5.1 Data Protection
In May 2018 the General Data Protection Regulation (GDPR) comes into force
(Regulation EU 2016/679). _http://eur-lex.europa.eu/legal-
content/en/TXT/?uri=CELEX%3A32016R0679_ . AMBER will comply with current
European and national data protection law. Full details of conformity to data
protection will be provided in deliverable D7.2 (POPD - H - Requirement No.2).
Data Controllers and Data Processors have been designated (as required by the GDPR), and a legal agreement between Swansea University and the other institutions potentially involved with personal data is currently being written. The basic structure of data protection is detailed below.
_Data protection concerns_
There are three main areas where data protection is a concern due to the
collection of personal data:
1. Collection of drone data over river basins where faces or people may be inadvertently recorded on drone film footage.
2. Audio recordings and opinions taken during questionnaires on river barriers (dams; weirs etc).
3. Emails from voluntary registration on the AMBER app or website.
The personal data being collected is as follows:
* Images of the public (potential to identify them or invade privacy)
* Audio recordings (potential to identify people through the recordings)
* Emails
However, in (1) and (2) there is no intention to retain this personal data and
thus there is no processing of personal data, but there is a small risk that
personal data will inadvertently be retained. Images of people within drone footage will be blurred (except those of drone operators, who will be asked to sign an agreement that their image can be used). Audio recordings will be
destroyed after (timely) transcription.
In (3) we are retaining personal data (emails) for the duration of the project
(ends 31 May 2020). This is also more complex since the beneficiary (partner)
who will be using the data employs a 3rd party outside the consortium to
collect the data. Potentially there may be another 3rd party to host the
website, and another agreement will need to be drawn up in such a case. Also, at the end of the project an EC body (JRC) will retain data collected by the AMBER app. It is intended that we write to the registered users by email, asking if they wish to continue to be registered. If they do, these emails will then pass to this EC body (JRC); however, if they do not, or do not respond to our request, we will destroy that individual’s personal data
(emails).
Thus, there are two aspects to protecting personal data within this project.
The first is ensuring in drone and questionnaire work, that personal data is
not retained. The second is ensuring that personal data from app/website
registration is properly controlled. As such, a legal agreement needs to be
drawn up between the parties involved.
_Structure of data management agreement:_
**_Overall data controller (1)_**
Institution: **Swansea University** (SU), UK
_General Responsibilities:_
Ensuring that the legal agreement and the EC law is abided by through contact
with data processors and a co-data controller. Overall authority of data
control.
_Specific responsibilities:_
Routine contact with data controllers to ensure personal data is not being
circulated outside the signatories to this agreement and to ensure that data
controllers’ responsibilities are being followed.
**_Co-data controller and data processor (2) – app/website data_**
Institution: **World Fish Migration Foundation** (WFMF), Netherlands
_General Responsibilities:_
To be co-data controller regarding the emails (i.e. collaborate with SU to
ensure email data is correctly controlled) and to utilise this data for
sending emails to registered users (as a data processor).
_Specific responsibilities:_
To collaborate with (1) in determining rules for controlling email data. To
ensure email data does not go beyond Natural Apptitude or WFMF. To deal with
email data at project end (destruction or change over of responsibilities to
JRC).
**_Data Processor (3) – drone data_ **
Institution: **Durham University** (DU), UK
_General Responsibilities:_
To follow the guidelines in the legal agreement (determined and regulated by data controller (1)) and to follow EU and UK law in regard to data protection.
_Specific responsibilities:_
To coordinate the assurance of data protection for all the drone work.
**_Data Processor (4) – drone data_ **
Institution: **Instytut Rybactwa Srodladowego Im Stanistawa Sakowicza** (SSIFI), Poland
_General Responsibilities:_
To follow the guidelines in the legal agreement (determined and regulated by data controller (1)) and to follow EU and UK law in regard to data protection.
_Specific responsibilities:_
To coordinate the assurance of data protection for all the drone work.
**_Data Processor (5) – questionnaire audio recordings_ **
Institution: **University of Oviedo** (UNIOVI), Spain
_General Responsibilities:_
To follow the guidelines in the legal agreement (determined and regulated by data controller (1)) and to follow EU and UK law in regard to data protection.
_Specific responsibilities:_
To destroy audio data on opinions after it has been transcribed (in a timely
manner). Also to ensure questionnaire data collected does not contain personal
data.
Note:
The Joint Research Centre (Ispra) may receive personal data (public emails)
during a transfer process, following the conclusion of the AMBER project, but
they will not hold or process personal data within AMBER prior to this. This
hand-over will be detailed in D6.2.
No one except WFMF and Natural Apptitude (the app developer) is permitted to hold or be given personal data relating to AMBER during the lifetime of the AMBER
project. Personal data of those working directly on the AMBER project (such as
beneficiaries’ emails and addresses) can be held and circulated, following
relevant EU and national data protection law.
In summary, only WFMF and Natural Apptitude will be handling personal data (emails) during the AMBER project. However, some institutions have been given specific authority to ensure that personal data is not inadvertently collected in the questionnaire (University of Oviedo) and in the drone work (Durham University and the Instytut Rybactwa Srodladowego Im Stanistawa Sakowicza), through destroying audio data and blurring faces in video footage, respectively.
# 6 APPENDIX 1 – LINKS BETWEEN TASKS, DATA, DELIVERABLES AND DISSEMINATION
**Table 3.** Summary of data sources and types collected within the AMBER project (WP1, 2, 3). ‘Further utility’ lists applications for the data outside the direct scope of the project; ‘(internal)’ refers to the file type used whilst being worked on within AMBER, and ‘(external)’ is the file type as it will be presented to the public for open access.
<table>
<tr>
<th>
Task
</th>
<th>
Data
</th>
<th>
Origin
</th>
<th>
Uses
</th>
<th>
Format
</th>
<th>
Estimated
Size
</th>
<th>
Further utility
</th> </tr>
<tr>
<td>
**T1.2.1**
</td>
<td>
Collation of barrier data throughout
Europe
</td>
<td>
National and Regional authorities where data exists; specific river studies
</td>
<td>
To create the Barrier Atlas: (i) to inform policy decisions; (ii) strategic decision making; (iii) numerous models with further data output (listed here)
</td>
<td>
.xls (internal use)
.csv (external use)
</td>
<td>
10GB
</td>
<td>
Scientific investigations
</td> </tr>
<tr>
<td>
**D1.2**
</td>
<td>
Metadata on the barrier inventory (T1.2.1)
</td>
<td>
Created by AMBER based on the type of data collected in T1.2.1
</td>
<td>
Overview of data within the barrier inventory
</td>
<td>
.xls (internal use)
.csv (external use)
</td>
<td>
1GB
</td>
<td>
Understanding the barrier data; procuring further barrier data within Europe
</td> </tr>
<tr>
<td>
**T1.2.2**
</td>
<td>
Validation data
</td>
<td>
AMBER experts collecting field data on barriers
</td>
<td>
To allow comparability between survey methods and countries in T1.2.1; to give
more realistic estimates of total barrier numbers in
Europe, and within Member States; to be included as data within the European
barrier inventory (D1.3)
</td>
<td>
.xls (internal use)
.csv (external use)
</td>
<td>
5GB
</td>
<td>
Representative of intense and comprehensive barrier surveys
</td> </tr>
<tr>
<td>
**D1.3**
</td>
<td>
Barrier inventory
</td>
<td>
Combination of the data obtained from collated European barrier data
(T1.2.1); validation data
(T.1.2.2) and Case Study data
</td>
<td>
The basis of the European barrier map (the ‘Atlas’)
</td>
<td>
.csv
GIS theme (.shp; .shx; .dbf)
</td>
<td>
16GB
</td>
<td>
Research; shaping policy; promotion of project
</td> </tr>
<tr>
<td>
**T2.1**
</td>
<td>
Europe-wide connectivity and biodiversity data
</td>
<td>
Compilation of stream surveys of plant/invertebrate/fish from national WFD databases within Europe
</td>
<td>
To produce a predictive model of barrier effects on ecology
</td>
<td>
.csv
</td>
<td>
50GB
</td>
<td>
-data already available-
</td> </tr>
<tr>
<td>
**T2.2.1**
</td>
<td>
Fish guilds predicted from habitat and barrier data
</td>
<td>
Fisheries; barrier; hydrological and stratified habitat data for selected
rivers (pre-existing). Prediction of expected ecological guilds based on this
data (generated with AMBER).
</td>
<td>
Assessing the effectiveness of Restoration Alternatives Analysis
</td>
<td>
.xls (internal use)
.csv (external use)
GIS theme (.shp; .shx; .dbf)
</td>
<td>
10GB
</td>
<td>
</td> </tr>
<tr>
<td>
**T2.2.2**
</td>
<td>
Drone generated river habitat data
</td>
<td>
Drone flight film and photos in selected river catchments
</td>
<td>
Developing rapid habitat assessment methodology through image interpretation.
</td>
<td>
.mp4 (video)
.jpg (photo)
.xls (predicted habitats)
.csv (predicted habitats)
</td>
<td>
20GB
</td>
<td>
Research for improving image interpretation; promotional media; examining the
catchments from the
air
</td> </tr>
<tr>
<td>
**T2.2.3**
</td>
<td>
European sediment connectivity data
</td>
<td>
Barrier Inventory data (D1.3) and available hydrological data → output of sediment connectivity (movement) in rivers throughout Europe.
</td>
<td>
Creating sediment connectivity (movement) map for Europe, based on barriers.
</td>
<td>
.xls (internal)
.csv (external)
GIS theme (.shp; .shx; .dbf)
</td>
<td>
10GB
</td>
<td>
Widespread research applications
</td> </tr>
<tr>
<td>
**T2.3**
</td>
<td>
Effect of climate change on river connectivity
</td>
<td>
Stream flow & climate data for 441 catchments in 15 countries (European
Environment Agency data); national WFD databases of catchment topography/size → output of habitat deficit, stress days and habitat change
</td>
<td>
Illustrate predictive model of analyzing effect of climate change on
connectivity (based on barriers)
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
20GB
</td>
<td>
Research; example for strategic planning of climate change scenarios for
environment agencies
</td> </tr>
<tr>
<td>
**T2.5.1**
</td>
<td>
eDNA detection thresholds
</td>
<td>
AMBER eDNA research for metabarcoding protocols
</td>
<td>
Thresholds to develop the metabarcoding toolkit
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
1GB
</td>
<td>
Widespread use for application of metabarcoding; improvement/research to further develop metabarcoding
</td> </tr>
<tr>
<td>
**T2.5.2**
</td>
<td>
Presence/absence of aquatic biota based on eDNA sampling in test catchments
</td>
<td>
AMBER field collection and analysis of eDNA, processed with metabarcoding
toolkit (T2.5.1) and barrier data collected in Case Studies
</td>
<td>
Illustrate use of eDNA toolkit to
determine species presence/absence
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
1GB
</td>
<td>
Example for other metabarcoding field
exercises
</td> </tr>
<tr>
<td>
**T2.6**
</td>
<td>
Ecosystem services and interaction with stakeholders
</td>
<td>
Ecosystem Services evaluated in the Case Studies; and stakeholders/stakeholder
interests identified in the Case Studies.
</td>
<td>
Data to inform model development
</td>
<td>
.xls (internal)
.csv (external)
**NB. Data protection considerations**
</td>
<td>
1GB
</td>
<td>
Example of relationships between ESS, barriers, and stakeholders
</td> </tr>
<tr>
<td>
**T3.1.1/ T3.1.2**
</td>
<td>
Hydropower potential and passability
</td>
<td>
Structural and hydrological and passability data collected by AMBER from test
catchment in Germany > output of hydropower generation potential; dam
construction costs at different locations
</td>
<td>
For prioritization in the barrier mitigation and hydropower placement decision
tool
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
1GB
</td>
<td>
Example of assessing hydropower potential (though data likely to be combined
with other data within decision tool)
</td> </tr>
<tr>
<td>
**T3.2.1**
**(D3.1)**
</td>
<td>
Hydrodynamic
conditions at river infrastructures
</td>
<td>
Flow velocities, shear and turbulence values associated with barriers and
fishways; hydrodynamics for key biological species
</td>
<td>
Determines hydrodynamic parameters/thresholds for species and how structures
thus permit/prevent passage: For Agent Based Model
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
1GB
</td>
<td>
Useful data for research, regulatory bodies and hydropower industry
</td> </tr>
<tr>
<td>
**T3.2.2**
</td>
<td>
Behaviour and locomotory performance of weak swimmers
</td>
<td>
Behaviour and locomotory performance of weak swimming species (e.g. crayfish) under conditions found at barriers (AMBER experiment at SOTON labs). Similar data for invertebrates and macrophytes will also be collated.
</td>
<td>
Used to develop response criteria for range of organisms in Agent Based Model.
**NB. Data collection has ethics considerations (working with animals)**
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
1GB
</td>
<td>
Research
</td> </tr>
<tr>
<td>
**T3.2.4**
</td>
<td>
Field data of passability of species (focus on non-salmonids)
</td>
<td>
Movement data of non-salmonid spp., including weak swimmers, invertebrates and macrophytes, from surveys and tagging exercises in the Case Study sites.
</td>
<td>
For testing Agent Based Model. **NB. Data collection has ethics considerations
(tagging)**
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
2GB
</td>
<td>
Important information for regulatory bodies: informing EU habitats
directive and Convention on
Biological Diversity and
movement of invasives.
</td> </tr>
<tr>
<td>
**T3.3 (D3.2,**
**MS3)**
</td>
<td>
Cost-Benefit of restoring stream connectivity
</td>
<td>
Data collected in Case Studies assessing cost-benefit (including non-market benefits/costs) and data from non-market benefit inventories, of various restoration options; includes MS3 Evaluation of Natural Capital data.
</td>
<td>
Will feed in to barrier planning and decision tool.
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
2GB
</td>
<td>
Regulators/Public: data to assist conflict resolution in barrier management;
Research
</td> </tr>
<tr>
<td>
**T3.3 (D3.5;**
**MS10)**
</td>
<td>
Social attitudes to dams in rivers
</td>
<td>
Questionnaire on social attitudes to dams in rivers
</td>
<td>
Will feed in to barrier planning and decision tool.
</td>
<td>
.xls (internal)
.csv (external)
**NB. Data protection considerations**
</td>
<td>
1GB
</td>
<td>
Regulatory bodies/public: for understanding and informing conflict resolution
</td> </tr>
<tr>
<td>
**D3.3**
</td>
<td>
Inventory of barriers and river structures within German test catchment
</td>
<td>
Data collected on location and properties of barriers within the German test
catchment
</td>
<td>
**NB. Likely to be integrated into the validation data**
(T1.2.2)
</td>
<td>
.xls (internal)
.csv (external)
</td>
<td>
3GB
</td>
<td>
Representative of intense and comprehensive barrier surveys
</td> </tr> </table>
**Table 4.** Data collected during WP4 (Case Studies). NB. Much of this field
data is collated for use in specific tasks in WP1,2 and 3 ( **Table 2** ).
<table>
<tr>
<th>
Task
</th>
<th>
Data
</th>
<th>
Origin
</th>
<th>
Uses
</th>
<th>
Format / Estimated Size
</th>
<th>
Further utility
</th> </tr>
<tr>
<td>
**T4.1.1**
</td>
<td>
River Nalon field data (Spain)
</td>
<td>
Field work in Case Study areas
</td>
<td>
Data feeds into tasks in WPs 1 to 3.
</td>
<td>
.mp4; .jpg; .xls; .csv; GIS theme (.shp; .shx; .dbf)
Approx. 4GB per case study (T4.1.1–T4.1.7)
</td>
<td>
Case study examples for public/regulators; catchment management within the
specific catchments; publicity
</td> </tr>
<tr>
<td>
**T4.1.2**
</td>
<td>
River Allier (France) field data
</td> </tr>
<tr>
<td>
**T4.1.3**
</td>
<td>
River Munster
(Ireland) field data
</td> </tr>
<tr>
<td>
**T4.1.4**
</td>
<td>
River Gary (Scotland) field data
</td> </tr>
<tr>
<td>
**T4.1.5**
</td>
<td>
River Vistula (Poland) field data
</td> </tr>
<tr>
<td>
**T4.1.6**
</td>
<td>
Lowland river (various countries) field data
</td> </tr>
<tr>
<td>
**T4.1.7**
</td>
<td>
River
Guadalhorce
(Spain) field data
</td> </tr>
<tr>
<td>
**T4.2.1**
**(D4.3)**
</td>
<td>
Trans-European
Status of Atlantic
Salmon
</td>
<td>
A trans-European river by river GIS map showing status of Salmon derived from
barrier inventory (D1.3) and connectivity and biodiversity data (T2.1)
</td>
<td>
[Output directly for external (policy shaping/AMBER promotional use)]
</td>
<td>
.xls (internal)
.csv (external)
GIS theme (.shp; .shx; .dbf)
5GB
</td>
<td>
Informing policy decisions; national and local conservation/
restoration efforts; promotion of AMBER
</td> </tr> </table>
**Table 5.** Data from WP5, 6, 7.
<table>
<tr>
<th>
Task
</th>
<th>
Data
</th>
<th>
Origin
</th>
<th>
Uses
</th>
<th>
Format
</th>
<th>
Estimated
Size
</th>
<th>
Further utility
</th> </tr>
<tr>
<td>
**T5.1/T6.1**
</td>
<td>
Parallel Projects database
</td>
<td>
Collated during project
</td>
<td>
Linking AMBER to other projects
</td>
<td>
.xls
**NB. Data protection considerations**
</td>
<td>
1GB
</td>
<td>
For future projects
</td> </tr>
<tr>
<td>
**T5.1/T6.1**
</td>
<td>
AMBER member
details and contacts
</td>
<td>
Collated during project
</td>
<td>
Communication within project
</td>
<td>
.xls
**NB. Data protection considerations**
</td>
<td>
1GB
</td>
<td>
For future projects and for contact regarding further information on AMBER or
future collaborations
</td> </tr>
<tr>
<td>
**T5.2.1**
</td>
<td>
Stakeholder database
</td>
<td>
Collated during project
</td>
<td>
Feedback pre-output; dissemination of results and information
</td>
<td>
.xls
**NB. Data protection considerations**
</td>
<td>
1GB
</td>
<td>
Links stakeholders into outputs of project to ensure maximum use
</td> </tr>
<tr>
<td>
**T5.3.2**
</td>
<td>
Registered app users
</td>
<td>
Database with details of number, location and other information relating to
app use (including non-registered and registered users)
</td>
<td>
Monitor uptake and use of app; feed into improvements to app; managing citizen
science activity and improving website which presents the data.
</td>
<td>
.xls (internal only)
**NB. Database itself not to be made public. Data protection issues.**
</td>
<td>
1GB
</td>
<td>
**Complete database not to be released** ; some analysed data may be presented
for publicity (eg number of users)
</td> </tr>
<tr>
<td>
**D7.1/7.2**
</td>
<td>
Ethics documentation
</td>
<td>
Database of ethics documentation for the AMBER project
</td>
<td>
Keep track of ethics documentation for beneficiaries. Internal (AMBER) use
only.
</td>
<td>
.xls (internal only)
</td>
<td>
1GB
</td>
<td>
**-Confidential: Internal use only-**
</td> </tr>
<tr>
<td>
**T 6.2 (D6.2)**
</td>
<td>
AMBER project metadata
</td>
<td>
Database detailing all the data collected and produced by AMBER (listed within **Tables 1, 2 and 3**), including details of data that is not publicly available (with source contacts)
</td>
<td>
For external users
</td>
<td>
.xls
</td>
<td>
1GB
</td>
<td>
Allows ease of access and understanding of available AMBER data to all
external user types.
</td> </tr> </table>
**Table 6.** Summary of how data leads to deliverables and is then
disseminated to specific audiences WP 1, 2, 3, 4.
**Table 7.** Summary of outputs from non-specific data sources (WP5). NB. WP6
and WP7 outputs are internal management documents and not for external
audiences.
**_Types and formats of data_ **
Within the RADIOFOREGROUNDS project, we consider three types of data products.
1. **Maps** . Most of the data generated during this project will correspond to either sky maps or component separated maps. The standard format to be used within this project is the same one adopted by the CMB community at large: the HEALPIX ( _http://healpix.jpl.nasa.gov_ ) pixelization scheme. HEALPix is an acronym for Hierarchical Equal Area isoLatitude Pixelization of a sphere. As suggested in the name, this pixelization produces a subdivision of a spherical surface in which each pixel covers the same surface area as every other pixel. All Planck and QUIJOTE maps are provided as FITS files under this scheme. This is the same scheme in which all the Planck maps appear inside the PLA, or the scheme in which the WMAP data are provided in NASA’s LAMBDA ( _http://lambda.gsfc.nasa.gov_ ) archive.
2. **Catalogues**. The project will also provide complementary information to the PCCS (Planck Catalogue of Compact Sources), including the flux densities, polarization fractions and angles of specific radio sources at the QUIJOTE frequencies. Here, we will provide the results as extensions in binary FITS tables, which could be added to the ESA database.
3. **Models**. We will also provide specific physical models of the different radio foreground components. Although these models will be described in detail in the relevant publications, they will also be implemented in a specific software tool, which will allow specific predictions/simulations to be carried out at the requested frequencies, or even some basic analysis of the data (e.g. aperture photometry, combination of frequencies, etc.); a sketch of one such analysis follows this list.
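As an illustration of the kind of basic analysis such a tool could expose, the following minimal sketch implements aperture photometry on a HEALPix map using the healpy package. The function, its parameters and the example source position are illustrative assumptions, not the project's actual tool.

```python
import numpy as np
import healpy as hp

def aperture_photometry(m, lon_deg, lat_deg, r_ap, r_in, r_out):
    """Sum of map values in a disc, minus the median background estimated
    in a surrounding annulus. Radii are in degrees; the map is assumed to
    be a HEALPix array in RING ordering and Galactic coordinates."""
    nside = hp.get_nside(m)
    vec = hp.ang2vec(lon_deg, lat_deg, lonlat=True)
    disc = hp.query_disc(nside, vec, np.radians(r_ap))
    annulus = np.setdiff1d(
        hp.query_disc(nside, vec, np.radians(r_out)),
        hp.query_disc(nside, vec, np.radians(r_in)),
    )
    # Subtract the annulus median from each aperture pixel as a local background.
    return np.sum(m[disc] - np.median(m[annulus]))

# Example: flux towards a hypothetical source at l=30, b=5 degrees.
m = hp.read_map("quijote_11ghz_map.fits")  # placeholder file name
print(aperture_photometry(m, 30.0, 5.0, r_ap=1.0, r_in=1.5, r_out=2.5))
```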
**_Maps and models_**. The proposed format follows a **standard HEALPix FITS binary table** (a minimal writing sketch follows the list below), with:
* N side = 256
* Maps in RING ordering
* Maps in Galactic coordinates.
* Units. In general, they are K_CMB (or MJy/sr for Planck 545 and 857 GHz channels).
* Map columns called MAP_I, MAP_Q, MAP_U.
* Noise covariance columns called COV_II, COV_IQ, COV_IU, COV_QQ, COV_QU, COV_UU.
* Maps without polarisation information contain only MAP_I and COV_II columns.
* Name of the experiment in the TELESCOP header.
* Name of the channel in the CHANNEL header.
* Effective central frequency for that channel in the FREQUENC header.
* VERSION number of the map. This is relevant for maps that might be updated during the lifetime of the project.
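The following minimal sketch (Python with astropy) writes a file in this layout for a hypothetical polarised channel. The experiment name, channel label, frequency and array values are placeholders; a real pipeline would fill the columns with measured maps and covariances.

```python
import numpy as np
from astropy.io import fits

nside = 256
npix = 12 * nside**2  # standard HEALPix pixel count for Nside=256

# Placeholder zero-filled columns following the proposed naming scheme.
names = ["MAP_I", "MAP_Q", "MAP_U",
         "COV_II", "COV_IQ", "COV_IU", "COV_QQ", "COV_QU", "COV_UU"]
cols = fits.ColDefs([
    fits.Column(name=n, format="E", array=np.zeros(npix, dtype=np.float32))
    for n in names
])
hdu = fits.BinTableHDU.from_columns(cols)

# Standard HEALPix keywords plus the project-specific headers listed above.
hdu.header["PIXTYPE"] = "HEALPIX"
hdu.header["ORDERING"] = "RING"       # maps in RING ordering
hdu.header["NSIDE"] = nside
hdu.header["COORDSYS"] = "G"          # Galactic coordinates
hdu.header["TUNIT1"] = "K_CMB"        # units of MAP_I (likewise for Q, U)
hdu.header["TELESCOP"] = "QUIJOTE"    # name of the experiment (placeholder)
hdu.header["CHANNEL"] = "11GHz"       # name of the channel (placeholder)
hdu.header["FREQUENC"] = 11.0         # effective central frequency (placeholder)
hdu.header["VERSION"] = 1             # map version number

hdu.writeto("quijote_11ghz_map.fits", overwrite=True)
```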
**_Re-using existing data_ **
All the products of ESA’s Planck collaboration are publicly available in the
Planck Legacy Archive (PLA, _http://pla.esac.esa.int/pla/_ ). NASA’s WMAP maps
are also publicly available through the LAMBDA platform (
_http://lambda.gsfc.nasa.gov/_ ). Other ancillary data sets (see table 1.1)
will be obtained from the PLA, the LAMBDA archive, or the CADE database (
_http://cade.irap.omp.eu/_ ). In the case that HEALPIX maps for those
ancillary data are not found in those archives, they will be generated within
the group based on the existing public data. All the maps adapted for this
project will be degraded to a common angular resolution of one degree.
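A minimal sketch of this adaptation step is shown below, assuming a generic ancillary map with a hypothetical file name. For simplicity it ignores the input beam, which a real adaptation pipeline would account for before smoothing to the one-degree target resolution.

```python
import numpy as np
import healpy as hp

# Read an ancillary map, smooth it with a 1-degree FWHM Gaussian beam,
# and repixelize to the project's common Nside=256 scheme.
m = hp.read_map("ancillary_map.fits")
m_smoothed = hp.smoothing(m, fwhm=np.radians(1.0))
m_common = hp.ud_grade(m_smoothed, nside_out=256)

hp.write_map("ancillary_map_1deg_nside256.fits", m_common, overwrite=True)
```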
**_Expected size of the RADIOFOREGROUNDS data base_ **
The total size of our database is expected to be approximately 3 GB. In more
detail:
<table>
<tr>
<th>
**Data source**
</th>
<th>
</th>
<th>
**Description**
</th>
<th>
**Estimated volume (MB)**
</th> </tr>
<tr>
<td>
QUIJOTE
</td>
<td>
</td>
<td>
Four Healpix maps containing I,Q,U Stokes parameters, and noise covariances.
Frequencies: 11, 13, 17, 19 GHz.
Angular Resolution: 1 degree.
</td>
<td>
4 files of 27MB each
</td> </tr>
<tr>
<td>
PLANCK
</td>
<td>
</td>
<td>
Seven Healpix maps containing I,Q,U Stokes parameters, and noise covariances
(30, 44, 70, 100, 143, 217, 353 GHz), plus two Healpix maps containing only
Stokes I parameter (545 and 857 GHz). Angular resolution: 1 degree.
</td>
<td>
7 files of 27MB and 2 files of
6MB
</td> </tr>
<tr>
<td>
WMAP
</td>
<td>
</td>
<td>
Five Healpix maps containing I,Q,U Stokes parameters, and noise covariances.
Frequencies: 23, 33, 41, 61, 94 GHz. Angular Resolution: 1 degree.
</td>
<td>
5 files of 27MB each
</td> </tr>
<tr>
<td>
Ancillary
</td>
<td>
</td>
<td>
Healpix maps containing Stoke I parameters, Stokes Q and U (when available),
and noise covariances. Remaining experiments in Table 1.1. Angular Resolution:
1 degree.
</td>
<td>
Estimated maximum number
of 10 files of 27MB each
</td> </tr>
<tr>
<td>
Catalogues
</td>
<td>
</td>
<td>
Tables with celestial coordinates (RA-DEC or Galactic), and fluxes at
different frequencies. Detailed format TBC.
</td>
<td>
200 MB maximum
</td> </tr>
<tr>
<td>
Models of the foreground emission
</td>
<td>
</td>
<td>
Healpix maps containing the coefficients of the parameterised models of the
foreground emission derived in the project.
</td>
<td>
27MB each map. We expect a maximum number of 100 maps.
</td> </tr> </table>
The data products and software tools to be generated within the
RADIOFOREGROUNDS project will be of enormous importance not only for the
Cosmology community, but also to other communities in Astrophysics. In
particular, this data will be useful to study the gas and dust content of the
Milky Way, the cosmic ray distribution and the Galactic magnetic field
structure (especially at large scales), the physics of molecular clouds, SNRs,
HII regions and other regions with AME, and for the study of evolutionary
models of radio sources both in intensity and polarization. It will also
provide a very valuable resource to estimate the effect of radio foregrounds
on the detection of the CMB B-mode of polarization with future satellite and
sub-orbital experiments, helping to design the configuration of such
experiments in an optimal way. We believe that our proposed software tools
will be very helpful for these communities.
# 2. FAIR data

## 2.1. Making data findable, including provisions for metadata
This section considers the standards that RADIOFOREGROUNDS will use to
represent data generated by the project, and the additional standards around
data security, etc. that might be useful to govern the data used within and
generated by the project. In addition, this section also includes a
consideration and selection of the metadata that will be most effective in
describing the RADIOFOREGROUNDS data set.
This includes a consideration and evaluation of existing standards and our
reasoning for selecting specific standards. At present, the consortium has
agreed to avoid proprietary data formats as far as possible, as these will
make it difficult for both the consortium and any external stakeholders to
utilise the RADIOFOREGROUNDS data after the close of the project. This is for
three specific reasons – first, because proprietary programmes evolve and data
formats may become defunct, thus maintaining proprietary data formats
represents a significant need of investment to keep the data relevant and
accessible. Second, because access to the data would be restricted to those
who have access to the appropriate analysis tools if a proprietary data format
was utilised. Third, because it would be difficult to combine RADIOFOREGROUNDS
data with other data as proprietary data formats often raise interoperability
issues. In addition to these, RADIOFOREGROUNDS will also consider standards
around other issues that could govern the storage and representation of
RADIOFOREGROUNDS data. This includes data security standards such as ISO
27001. As the RADIOFOREGROUNDS data management plan develops alongside the
project, partners will consider each of these relevant standards and make an
informed selection for RADIOFOREGROUNDS toolset and evaluation data. The next
iteration of this document in M18 will provide more information.
Finally, the project will consider and select effective metadata for
describing RADIOFOREGROUNDS toolset and evaluation data. Effective metadata
will assist project partners and potential additional data users by providing
“clear and detailed data descriptions and annotation”, version numbers,
accessibly written accompanying documentation and any contextual information that is relevant when the data is re-used (UK Data Service, 2016). This consideration of metadata is linked to the next section on data exploitation,
in that the metadata provided should consider the uses to which the data can
be put in order to provide sufficient and relevant information to potential
users.
## 2.2. Making data openly accessible
With respect to the Observatory data provided to RADIOFOREGROUNDS, the
partners have agreed that the project coordinator will manage access to the
data during the duration of the project. Each node has agreed to provide
access to the data required for the project, provided that the data is only
accessed by consortium partners and only for project activities. If needed,
partners will provide the anonymised data directly to the coordinator, who
will store the data in their existing ICT infrastructure. The anonymisation of
the data means that it is acceptable to share it within the consortium,
however partners have agreed not to seek access to raw data. Any partner with
a user password will be able to access the historical data, and the simulated
real-time data to develop or test their algorithm or software. With respect to
the data that will be used for the final testing of the RADIOFOREGROUNDS
solution, this data will be housed within IAC facilities, and partners will be
able to access it during the testing.
The issue of access to RADIOFOREGROUNDS toolset and evaluation data after the
end of the project will be initially considered in the next iteration of this
document in M18, and will be finalised in the last iteration in M36. In
principle, and as described in the original proposal, **the project will make
publicly available all maps, catalogues and models described in Section 1 of
this document** . The consortium will decide if additional data (e.g. maps of
small sky regions) is of interest to the community and can be open access as
well. The data will be made accessible in FITS format, as described in Section 1. The website will provide a list of public FITS files grouped by category. Additionally, the project will develop a REST API that provides access to the FITS files and other metadata to the community. Users will be able to download the data from the website, or perform queries against the API using only a browser address bar (only GET methods will work for this approach). Other request types (POST, PUT, DELETE) will require a RESTful browser add-on or programmatic access (the more practical way to consume a REST web service). API documentation will be provided to end-users, a user manual will describe the relevant processes for data access, and basic API calls will be detailed separately.
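As a purely hypothetical illustration of the GET-based access pattern described above, the following sketch queries an assumed map-listing route; the base URL, route name, query parameters and response fields are placeholders that will be fixed in the project's API documentation.

```python
import requests

BASE = "http://www.radioforegrounds.eu/api"  # hypothetical API base URL

# GET request equivalent to typing the URL into a browser address bar.
resp = requests.get(f"{BASE}/maps", params={"telescop": "QUIJOTE", "channel": "11GHz"})
resp.raise_for_status()

for entry in resp.json():          # assumed JSON list of map descriptors
    print(entry["name"], entry["url"])
```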
**_Storage and processing._ **
With respect to storage, each of the types of data will be stored and backed-
up slightly differently. IAC will store the provided data in their existing
infrastructure. All of this data will be backed-up in the cluster,
automatically, so that the data can be recovered in the event of an incident.
The security of this data will be maintained via the security policies,
methodologies and mechanisms that Treelogic already has in place for
protecting their sensitive commercial data. These follow existing data and
information security standards.
Information is stored in the cluster using the Hadoop Distributed File System (HDFS). Concretely, Treelogic uses the 0.20.20x distributions of Hadoop, which focus on security issues by utilizing the following:
* _Mutual Authentication with Kerberos RPC (SASL/GSSAPI) on RPC connections_ **:** SASL/GSSAPI was used to implement Kerberos and mutually authenticate users, their processes, and Hadoop services on RPC connections.
* _“Pluggable” Authentication for HTTP Web Consoles:_ meaning that implementers of web applications and web consoles could implement their own authentication mechanism for HTTP connections. This could include (but was not limited to) HTTP SPNEGO authentication.
* _Enforcement of HDFS file permissions:_ Access control to files in HDFS could be enforced by the NameNode based on file permissions - Access Control Lists (ACLs) of users and groups.
* _Delegation Tokens for Subsequent Authentication checks:_ these were used between the various clients and services after their initial authentication in order to reduce the performance overhead and load on the Kerberos KDC after the initial user authentication. Specifically, _delegation tokens_ are used in communication with the NameNode for subsequent authenticated access without using the Kerberos Servers.
* _Block Access Tokens for Access Control to Data Block_ **:** when access to data blocks were needed, the NameNode would make an access control decision based on HDFS file permissions and would issue _Block access tokens (using HMAC-SHA1)_ that could be sent to the DataNode for block access requests. Because DataNodes have no concept of files or permissions, this was necessary to make the connection between the HDFS permissions and access to the blocks of data.
* _Job Tokens to Enforce Task Authorization:_ _Job tokens_ are created by the JobTracker and passed onto TaskTrackers, ensuring that Tasks could only do work on the jobs that they are assigned. Tasks could also be configured to run as the user submitting the job, making access control checks simpler.
* _From “Pluggable Authentication” to HTTP SPNEGO Authentication_ : Although the 2009 security design of Hadoop focused on pluggable authentication, the Hadoop developer community decided that it would be better to use Kerberos consistently, since Kerberos authentication was already being used for RPC connections (users, applications, and Hadoop services). Now, Hadoop web consoles are configured to use HTTP SPNEGO Authentication, an implementation of Kerberos for web consoles. This provided some much-needed consistency.
* _Network Encryption:_ Connections utilizing SASL can be configured to use a Quality of Protection (QoP) of confidential, enforcing encryption at the network level – this includes connections using Kerberos RPC and subsequent authentication using delegation tokens. Web consoles and MapReduce shuffle operations can be encrypted by configuring them to use SSL. Finally, HDFS File Transfer can also be configured for encryption.
Treelogic will use an Apache Kafka endpoint to provide test data for the partners. The 0.9.x release used by the RADIOFOREGROUNDS project includes a number of features that, whether used separately or together, will increase security in a Kafka cluster. This includes the following security measures (a client-side connection sketch follows the list):
* Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL (Kerberos)
* Authentication of connections from brokers to ZooKeeper
* Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL. (However, there is a performance degradation when SSL is enabled, and the magnitude of this degradation depends on the CPU type and the JVM implementation utilized.)
* Authorisation of read/write operations by clients
* Authorisation is pluggable and integration with external authorisation services is supported (Apache Kafka, 2016)
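The following minimal client-side sketch (Python with the kafka-python package) shows how a partner might connect to such a secured endpoint over SSL. The broker address, topic name and certificate paths are placeholders, not the project's actual configuration.

```python
from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "radioforegrounds-test-data",              # hypothetical topic name
    bootstrap_servers="broker.example.org:9093",
    security_protocol="SSL",                   # encrypt client-broker traffic
    ssl_cafile="ca.pem",                       # CA certificate used to verify the broker
    ssl_certfile="client.pem",                 # client certificate (authentication)
    ssl_keyfile="client.key",
    consumer_timeout_ms=10_000,                # stop iterating after 10 s of silence
)

for record in consumer:
    print(record.topic, record.offset, len(record.value))
```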
RADIOFOREGROUNDS toolset data and anonymised evaluation data will be stored by
individual partners and in the consortium’s file repository that is managed by
IAC. The sharing of this data within the consortium will create back-ups
should an incident occur, however, like the Observatory data, storing this
data within IAC’s file repository would trigger automated back-ups and enable
recovery. Finally, IAC will store the personal data associated with the
informed consent, if needed, within a separate, but equally secure, storage
space that is not accessible to project partners to protect the personal data
of those participating in the project. IAC’s existing data and information
security protocols and tools will also protect this data.
**_Role-based access control_ **
RBAC is a secure method of restricting account access to authorized users.
This method enables the account owner to add users to the account and assign
each user to specific roles. Each role has specific permissions defined by
Rackspace. RBAC allows users to perform various actions based on the scope of
their assigned role.
## 2.3. Making data interoperable
RADIOFOREGROUNDS data products will follow the **Flexible Image Transport
System** ( **FITS** ) open standard (see _https://fits.gsfc.nasa.gov/_ ). This
is the standard data format widely used by astronomers to transport, analyse,
and archive scientific data files. We will use the rules established by NASA
to create and use the FITS files.
Concerning the information included in the headers of the FITS files and the
formats, we will closely follow the standards adopted by ESA in the Planck
Legacy Archive (PLA, _http://pla.esac.esa.int/pla/_ ), both for maps and
catalogues, which can be found here:
_https://wiki.cosmos.esa.int/planckpla2015/index.php/Main_Page_ .
We note that in the particular case of map data products, we have adopted the **H**ierarchical **E**qual **A**rea iso**L**atitude **Pix**elization (**HEALPix**) scheme (_http://healpix.jpl.nasa.gov/_).
A “Hierarchical Progressive Survey”, also called **HiPS** , allows a dedicated
client/browser tool to access and display a survey progressively, based on the
principle that “the more you zoom in on a particular area the more details
show up”. This method is based on HEALPix sky tessellation.
## 2.4. Increase data re-use (through clarifying licences)
We expect that the final data products of RADIOFOREGROUNDS, and in particular,
the frequency maps and catalogues, should be used for decades. This is the
reason why the data products will follow the FITS open standards of the
astrophysical community.
**_Long-term archiving and preservation (including open access)._ **
RADIOFOREGROUNDS partners will use this section of the DMP to outline a
strategy for long-term preservation of the data products beyond the end of the
project. A consideration of these issues needs to take place alongside the
planning of the research process for generating RADIOFOREGROUNDS toolset and
evaluation data, and this section will be updated to reflect these
developments.
In any case, the current baseline is that the IAC node ( _http://www.iac.es_ )
will use its current infrastructure located at the IAC Headquarters (La
Laguna, Tenerife) for archiving and long-term preservation. The applications
will be implemented using virtual servers. In this way, the virtual server can
be allocated with CPU, memory or disk resources as needed. This virtualization
system consists of the VMware hypervisor that is installed on 6 hosts with 120
cores and 750GB of RAM in total. These hosts are connected with fibres and
alternate paths to an EMC VNX 5500 storage enclosure, where the virtual server
disks are located. In addition, hosts are connected with multiple 2GB
aggregates to the corporate network and are protected by a PaloAlto 5050
firewall from external and/or internal attacks.
The specific processes and procedures that will be put into place to guide the
long-term preservation of the data will be included in this section during the
development of the project. This includes a detailed description of how long
the data might be preserved, its exact volume and characteristics as well as
information about how the veracity of the data will be ensured. The project
will evaluate if the current proposed baseline is sufficient, or a larger
system is required. Given the dependency of this evaluation on the larger
development of the research processes and the eventual characteristics of the
final data, this section will be updated in M18 of the project and finalised
in M36.
# 3. Allocation of resources
As discussed in the last section, the IAC will provide its infrastructure
located at the IAC Headquarters (La Laguna, Tenerife) for archiving and long-
term preservation. This infrastructure is now hosting the project web site (
_http://www.radioforegrounds.eu_ ), and will contain in the future the data
base and associated software tools.
# 4. Data security
A detailed description of data security has been included in Section 2.
Concerning the IAC infrastructure for archiving and long-term preservation,
daily backups of each virtual server are made with the Avamar backup system. These copies are stored in a data processing centre (CPD) on a different Canary Island (La Palma). The infrastructure is designed in such a way that the service is
not affected by faults of the equipment or during maintenance operations.
# 5. Ethical aspects
The types of data described above raise specific issues related to
intellectual property, data protection and research ethics that the project
will have to manage appropriately. Where relevant, Spanish law is considered
alongside European law, as Spain is the primary location of the research and
the location in which the data collection and processing activities are taking
place. The following discussion outlines how RADIOFOREGROUNDS will manage each
of the relevant legal requirements, and describes how the agreed data
governance processes around access storage and sharing will also assist in
managing these requirements. Consequently, this section makes consistent
reference to the material to be discussed in Section 1. This section begins by
considering issues related to research ethics through the management of
informed consent.
## 5.1 Informed consent
Issues related to research ethics can largely be addressed through the management of informed consent when the research is being conducted with healthy adult volunteers. However, as the participants are employees of a partner organisation, there is some risk that they may feel pressured to participate. This risk will also be managed through the informed consent process.
**Informed consent** is central to ethical research practice, as adult healthy
volunteers should be empowered to manage their participation and the use of
their information during social science research. Providing transparent and
adequate information to these participants about the purpose of the research,
the data that will be collected, the research funders, the ways in which their
data will be utilised and who will benefit from the research is important to
ensure that participants understand the potential implications of their
participation (See Annex A for a draft of the informed consent form). The
creation of an **information sheet** provides this information in appropriate
detail and in language that is meaningful to the participant (See Annex A for
the draft information sheet). It also sets out information about:
* how their data will be anonymised or pseudonymised
* how their data will be stored and shared with other researchers
* how participants may access the data they provided
* whether they can make corrections
* how they can request their data be removed, and
* where they can go if they have any questions, comments or complaints.
In addition, the information sheet explains any unintended effects that may
result from the research. Combining each of these pieces of information will
enable potential participants to evaluate whether they would like to
participate in the research and whether they might experience any unintended
or adverse effects.
However, given that this research will be carried out with employees of one of
the partner organisations, ensuring voluntary participation will require a few
additional steps. First, following good practice, the information sheet will
advise participants that their participation is purely voluntary. In addition,
partners’ personnel will undertake recruitment and advise participants that
their participation is voluntary. Finally, during the research activity
itself, those conducting the research will invite participants to re-consider
their participation and to excuse themselves from the research activity. Given
that the project does not involve many sensitive topics, this should be
sufficient to ensure voluntary participation. Nevertheless, the project will
remain vigilant about this potential conflict and will carry out a rolling
risk assessment to ensure voluntary participation.
## 5.2 Personal data protection
However, seeking informed consent will raise issues around the protection of
personal data, as personal data, including names and contact information, will
be needed to record informed consent. The consortium will manage this using
the following steps:
1. Participants will immediately be given a participant number linked to their name, and this will replace their name in any stored or shared data.
2. The link between a participant’s name and number will be stored in a proprietary storage facility by the project coordinator.
3. This information will not be shared with the project partners, and any enquiries about participants’ personal information will be routed through the coordinator.
4. Participants with particularly identifying features or experiences may be managed by mixing these with other participants’ characteristics (e.g., switching places of birth) to make each participant less identifiable. Where necessary, identifying features that cannot be anonymised may be removed from the data.
5. Participants will be given the right to review their data and make any corrections or erasures should they have any concerns.
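Steps 1 and 2 amount to a simple pseudonymisation routine. Purely as an illustration, and not the project's actual procedure, a minimal Python sketch might look like this (names and storage details are placeholders):

```python
import secrets

def pseudonymise(participants):
    """Split records into a coordinator-only key table and shareable data."""
    key_table = {}       # name -> participant number; coordinator storage only
    shareable = []       # records safe to store or share with partners
    for record in participants:
        pid = "P-" + secrets.token_hex(4)          # e.g. "P-1f3a9c2b"
        key_table[record["name"]] = pid
        shareable.append({"participant": pid, "responses": record["responses"]})
    return key_table, shareable

key, safe = pseudonymise([{"name": "Ana", "responses": {"q1": "yes"}}])
```

The key table would be written only to the coordinator's restricted storage, while the pseudonymised records are what partners would see.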
The project will also avoid the collection of data that is not necessary for
the purposes of the research ( _purpose limitation and data minimisation
principles_ ). Each of these processes will assist in the anonymisation and
pseudonymisation of any personal data, and storing this data with the
coordinator will ensure that participants are adequately protected with
reference to confidentiality. In addition, the information sheet will enable
the project to meet requirements around transparency and provide a mechanism
through which participants can exercise their rights of access, correction and
erasure. The information sheet will also assist the project in meeting
requirements around data retention, as the information sheet sets out how long
the data will be stored and with whom it may be shared.
In addition to these, the project will also meet Spanish data protection
requirements. The project coordinator will register with the AEPD (Agencia
Española de Protección de Datos) as a processor of personal data. In addition, should a data breach occur, the
coordinator will inform both the AEPD and research participants about the
breach and provide advice on any consequences.
Thus, the overlapping requirements around ethical research practice and the
protection of personal data can be met simultaneously using both the
information sheet and informed consent forms for the RADIOFOREGROUNDS
research. While the amount of personal data that will be collected by the
project is relatively minimal, the project will use the data protection
principles to guide the collection of all data about human participants,
whether personal or not, to ensure that we meet ethical research requirements.
Attention to both will ensure participants receive the maximum level of
protection and consideration.
## 5.3 Intellectual property rights
As noted above, the Observatory data is subject to intellectual property
protections, and the consortium will take the following specific steps to
address this. First, as noted above, the data will be anonymised so that any
sensitive data about partners’ customers and collaboration is removed. Second,
the data governance procedures around access, storage and sharing discussed in
Chapter 6, below, will ensure that consortium members respect partners’
intellectual property rights. Finally, each of the partners has agreed to
only use the data for the purposes of the RADIOFOREGROUNDS project and the
development and testing of RADIOFOREGROUNDS algorithms and software. This has
been agreed via the RADIOFOREGROUNDS consortium agreement, a legally binding
document that governs the project and partnership arrangements.
The Consortium Agreement and this document will also guide the intellectual
property rights claimed by the consortium with respect to RADIOFOREGROUNDS
toolset data. The consortium will agree a license that adequately describes
how the data will be used and shared within the consortium and at the close of
the project. Underpinning this will be the agreement, contained within the
Consortium Agreement, that each partner owns the intellectual property,
including data, which they create. Nevertheless, the Consortium Agreement also
provides for joint or multiple ownership, and in these cases, relevant
partners will agree on the license to be used. Consideration of these
intellectual property rights will also govern the extent to which
RADIOFOREGROUNDS toolset data can be made openly accessible at the close of
the project. If this option is selected, partners will agree an open license
to manage the use of this data, and will likely select a license such as CC-BY,
a Creative Commons license that requires users to attribute the data to
those who originally created it. The outcome of these discussions will feed
into the RADIOFOREGROUNDS intellectual property rights and innovation
committee that will undertake the final decision regarding licensing. This
issue will be re-visited in the next iteration of this plan in M18.
# 6. Other issues
There are no other issues.
# Executive Summary
This Data Management Plan (DMP) provides an analysis of the main elements of
the data management policy used by the project with regard to all the datasets
that have been generated by the project. The DMP has evolved during the
lifespan of the project. This is the final version of the DMP produced during
the project.
# Background
The purpose of this Data Management Plan (DMP) is to provide an analysis of
the main elements of the data management policy used by the project with
regard to all the datasets that have been generated by the project.
The DMP was not a fixed document, but evolved during the lifespan of the
project. This is the final version, representing the position after project
completion.
The DMP addresses the points below on a dataset-by-dataset basis and reflects
the final position of the consortium regarding the data that has been produced.
The approach to the DMP follows that outlined in the “Guidelines on Data
Management in Horizon 2020” (Version 2.1, 15 February 2016).
**Dataset reference and name: ** Identifier for the data set to be produced.
**Dataset description: ** Description of the data that will be generated or
collected, its origin (in case it is collected), nature and scale and to whom
it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities
for integration and reuse.
**Standards and metadata: ** Reference to existing suitable standards of the
discipline. If these do not exist, an outline on how and what metadata will be
created.
**Data sharing: ** Description of how data will be shared, including access
procedures, embargo periods (if any), outlines of technical mechanisms for
dissemination and necessary software and other tools for enabling re-use, and
definition of whether access will be widely open or restricted to specific
groups. Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.). In case the
dataset cannot be shared, the reasons for this should be mentioned (e.g.
ethical, rules of personal data, intellectual property, commercial, privacy-
related, security-related).
**Archiving and preservation (including storage and backup): ** Description
of the procedures that will be put in place for long-term preservation of the
data. Indication of how long the data should be preserved, what is its
approximate final volume, what the associated costs are and how these are
planned to be covered.
# 1 Admin Details
**Project Title:** Audio Commons: An Ecosystem for Creative Reuse of Audio
Content
**Project Number:** 688382
**Funder:** European Commission (Horizon 2020)
**Lead Institution:** Universitat Pompeu Fabra (UPF)
**Project Coordinator:** Prof Xavier Serra
**Project Data Contact:** Sonia Espi, [email protected]
**Project Description: ** The democratisation of multimedia content creation
has changed the way in which multimedia content is created, shared and
(re)used all over the world, yielding significant amounts of user-generated
multimedia resources, big part shared under open licenses. At the same time,
creative industries need to reduce production costs in order to remain
competitive. There is, therefore, an opportunity for creative industries to
incorporate such content in their productions, but there is a lack of
technologies for easily accessing and incorporating that type of content in their
creative workflows. In the particular case of sound and music, a huge amount
of audio material like sound samples, soundscapes and music pieces, is
available and released under Creative Commons licenses, both coming from
amateur and professional content creators. We refer to this content as the
'Audio Commons'. However, there exist no practical ways in which Audio Commons
can be embedded in the production workflows of the creative industries, and
licensing issues are not easily handled across the production chain. As a
result, most of this content remains unused in professional environments. The
aim of this project is to create an ecosystem of content, technologies and
tools to bring the Audio Commons to the creative industries, enabling
creation, access, retrieval and reuse of Creative Commons audio content in
innovative ways that fit the requirements of the use cases considered (e.g.,
audiovisual, music and video games production). Furthermore, we tackle rights
management challenges derived from the content reuse enabled by the created
ecosystem and research about emerging business models that can arise from it.
Our project will benefit creative industries by providing new and innovative
creativity supporting tools and reducing production costs, and will benefit
content creators by offering a channel to expose their works to professional
environments and to allow them to (re)licence their content.
# 2 Dataset Information
## DS 2.1.1: Requirements survey
**Dataset reference and name**
DS 2.1.1: Requirements survey
**Dataset description**
Results from survey of creative industry content users in Task 2.1: "Analysis
of the requirements from creative industries". This data supports Deliverable
D2.1: "Requirements report and use cases", and has over 660 responses.
WP: WP2 / Task: Task 2.1
Responsible: QMUL (& MTG-UPF)
**Standards and metadata**
Text document (CSV file)
**Data sharing**
Anonymized form available via Zenodo. Corresponding DOI:
**10.5281/zenodo.832644**
**Archiving and preservation (including storage and backup)**
Available on Zenodo.
Final size (Bytes): 653 kB
## DS 2.2.1: Audio Commons Ontology
**Dataset reference and name**
DS 2.2.1: Audio Commons Ontology
**Dataset description**
Definition of Audio Commons Ontology, the formal ontology for the Audio
Commons Ecosystem. Data form of D2.2: Draft ontology specification and D2.3: Final ontology specification.
WP: WP2 / Task: Task 2.2
Responsible: QMUL
**Standards and metadata**
OWL Web Ontology Language
**Data sharing**
Available at https://w3id.org/ac-ontology/aco as OWL in multiple
serialization formats and HTML documentation (via HTTP content negotiation).
**Archiving and preservation (including storage and backup)**
Maintained on GitHub in repository _AudioCommons/ac-ontology_.
Snapshot of current version (v1.2.3) uploaded to Zenodo and available at
10.5281/zenodo.2553184
Size (Bytes): 65.1K
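Because the ontology is published as OWL with content negotiation, it can be loaded directly with standard RDF tooling. As a minimal sketch (assuming the server returns an RDF serialization such as RDF/XML or Turtle to the client):

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF

g = Graph()
# Content negotiation should return an RDF serialization of the ontology.
g.parse("https://w3id.org/ac-ontology/aco")

# List the classes the ontology defines.
for cls in g.subjects(RDF.type, OWL.Class):
    print(cls)
```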
## DS 2.6.1: Audio Commons Mediator data
**Dataset reference and name**
DS 2.6.1: Audio Commons Mediator data
**Dataset description**
Freesound and Jamendo content exposed in the Audio Commons Ecosystem. Not
strictly a “dataset”, rather a service providing access to data.
WP: WP2 / Task: Task 2.6
Responsible: MTG-UPF (v1) & QMUL (v2)
**Standards and metadata**
Audio Commons Ontology
**Data sharing**
Available via ACE Mediator versions 1 and 2.
_http://m.audiocommons.org/_ _http://m2.audiocommons.org/_
**Archiving and preservation (including storage and backup)**
Dynamic service availability, no plans to provide a “snapshot”.
Estimated final size (Bytes): N/A
## DS 3.3.1: Business model workshop notes and interviews
**Dataset reference and name**
DS 3.3.1: Business model workshop notes and interviews
**Dataset description**
Notes/transcripts from workshop in Task 3.3 "Exploration of Business Models in
the ACE". This data will support Deliverables D3.4 and D3.5.
WP: WP3 / Task: Task 3.3
Responsible: Surrey-CoDE
**Standards and metadata**
Text documents
**Data sharing**
Data collected and stored according to ethics policy and approval. Can be made
available upon request and following a confidentiality agreement. To request
access, contact Dr Carla Bonina ([email protected]).
**Archiving and preservation (including storage and backup)**
Workshop recordings and notes stored in a secured project drive.
Estimated final size (Bytes): 100K
## DS 4.2.1: Semantic annotations of musical samples
**Dataset reference and name**
DS 4.2.1: Semantic annotations of musical samples
**Dataset description**
Ground truth annotations of datasets used to evaluate the algorithms included
in the AC tool for the annotation of music samples. Supporting data for
deliverables D4.4, D4.10, D4.12.
WP: WP4 / Task: Tasks 4.2 and 4.4
Responsible: MTG-UPF
**Standards and metadata**
Ground truth annotations are stored using standard CSV format.
**Data sharing**
Ground truth annotations public in Zenodo:
https://zenodo.org/record/2546754#.XEcmny2ZOL4. The audio they refer to is not
always openly available due to licensing constraints, but instructions are
provided about how to obtain the audio. Ground truth annotations contain
references to the original audio files.
**Archiving and preservation (including storage and backup)**
Archived and stored in Zenodo research data repository.
Size (Bytes): 2M
## DS 4.3.1: Semantic annotations of musical pieces
**Dataset reference and name**
DS 4.3.1: Semantic annotations of musical pieces
**Dataset description**
Results of music piece descriptions such as bpm, tonality or chords. The
specific audio properties included in the semantic annotation are chords,
tempo, beats, global-key, keys, tuning, instruments. Supporting data for
deliverables D4.3, D4.8, D4.13.
WP: WP4 / Task: Task 4.3
Responsible: QMUL
**Standards and metadata**
Annotations are stored using the standard JSON format, with a converter to a
Semantic Web format (JSON-LD), following the Audio Commons Ontology definition.
**Data sharing**
Public: Access via Audio Commons API
**Archiving and preservation (including storage and backup)**
Data stored in the ACE Server. Annotation size estimate: 66 kB per file × 100k
files = 6.6 GB. The amount of data will grow along with the usage of the web
service.
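Purely to illustrate the shape such a JSON record might take (the field names below follow the properties listed in the description but are otherwise invented, not the ontology's actual terms):

```python
import json

# Hypothetical annotation record for one music piece; field names are
# illustrative, not the Audio Commons Ontology's actual terms.
annotation = {
    "tempo": 120.0,                                   # beats per minute
    "global-key": "D minor",
    "tuning": 440.0,                                  # reference frequency in Hz
    "beats": [0.0, 0.5, 1.0, 1.5],                    # beat positions in seconds
    "chords": [{"label": "Dm", "start": 0.0, "end": 1.9}],
    "instruments": ["piano"],
}
print(json.dumps(annotation, indent=2))
```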
## DS 4.3.2: MediaEval AcousticBrainz Genre
**Dataset reference and name**
DS 4.3.2: MediaEval AcousticBrainz Genre
**Dataset description**
MediaEval AcousticBrainz Genre dataset contains genre and subgenre annotations
of music recordings extracted from four different online metadata sources,
including editorial metadata databases maintained by music experts and
enthusiasts (AllMusic and Discogs) as well as collaborative music tagging
platforms (Lastfm and Tagtraum). In addition, it includes music features
precomputed from audio for every annotated music recording. All music features
are taken from the community-built database AcousticBrainz and were extracted
from audio using Essentia, an open-source library for music audio analysis.
For the purposes of the AcousticBrainz Genre Task held within the MediaEval
Benchmarking Initiative for Multimedia Evaluation in 2017 and 2018, the dataset
is split into development, validation and testing sets in a 70%-15%-15%
proportion. The development set contains annotations from AllMusic
(1353213 recordings annotated by 21 genres and 745 subgenres), Discogs (904944
recordings, 15 genres, 300 subgenres), Lastfm (566710 recordings, 30 genres,
297 subgenres), and Tagtraum (486740 recordings, 31 genres, 265 subgenres).
WP: WP4 / Task: Task 4.3
Responsible: MTG-UPF
**Standards and metadata**
Ground truth annotations are provided using standard TSV files. Music features
are provided in JSON files.
**Data sharing**
Full dataset description available here:
_https://multimediaeval.github.io/2018-AcousticBrainz-Genre-Task/data/_
Dataset contents available in Zenodo:
* _https://zenodo.org/record/2553414_
* _https://zenodo.org/record/2554044_
**Archiving and preservation (including storage and backup)**
Archived and stored in Zenodo research data repository.
Size (Bytes): 40G
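As a minimal sketch of how the TSV ground truth could be read (the file name and column layout below are assumptions based on the description above, not a documented format):

```python
import csv

# Assumed layout: recording identifier followed by one or more genre/subgenre
# labels per row; the file name is illustrative.
with open("acousticbrainz-mediaeval-discogs-train.tsv", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        recording_id, labels = row[0], row[1:]
        print(recording_id, labels[:3])
```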
## DS 4.4.1: Evaluation results of annotations of musical samples
**Dataset reference and name**
DS 4.4.1: Evaluation results of annotations of musical samples
**Dataset description**
Results of evaluation of automatic methods for the semantic annotation of
music samples. These results include the output of the analysis algorithms run
on the datasets annotated with ground truth data. Supporting data for
deliverables D4.4, D4.10 and D4.12.
WP: WP4 / Task: Task 4.4
Responsible: MTG-UPF
**Standards and metadata**
Ground truth annotations are stored using standard CSV format.
**Data sharing**
Automatically generated annotations public in Zenodo:
https://zenodo.org/record/2546643#.XEcKpS2ZOL4.
The audio they refer to is not always openly available due to licensing
constraints, but instructions are included for obtaining the audio. Provided
annotations contain references to the original audio files.
**Archiving and preservation (including storage and backup)**
Archived and stored in Zenodo research data repository.
Size (Bytes): 4.7M
## DS 4.5.1: Evaluation results of annotations of musical pieces
**Dataset reference and name**
DS 4.5.1: Evaluation results of annotations of musical pieces
**Dataset description**
Results of evaluation of automatic methods for the semantic annotation of
music pieces. Results include human evaluations via questionnaire. Supporting
data for deliverables D4.5, D4.11.
WP: WP4 / Task: Task 4.5
Responsible: QMUL
**Standards and metadata**
Tabular (e.g. CSV) and freeform text
**Data sharing**
Statistical analysis: Public in D4.11. User evaluations: data collected and
stored according to ethics policy and approval.
**Archiving and preservation (including storage and backup)**
Project document server, personally identifiable data password-protected.
Consent forms stored securely offline (e.g. paper in locked filing cabinet).
Estimated final size (Bytes): 100K
## DS 4.6.1: Evaluation results of musical annotation interface
**Dataset reference and name**
DS 4.6.1: Evaluation results of musical annotation interface
**Dataset description**
Results of evaluation of interface for manually annotating musical content, in
terms of its usability and its expressive power for annotating music samples
and music pieces. The evaluation was carried out with real users and in
combination with the evaluation of Task 5.4. Supporting data for deliverable
D4.9
WP: WP4 / Task: Task 4.6
Responsible: MTG-UPF
**Standards and metadata**
Free text and Tabular (e.g. CSV)
**Data sharing**
Project partners only.
**Archiving and preservation (including storage and backup)**
Anonymized data stored in project document server.
Estimated final size (Bytes): 1M
## DS 4.7.1: Outputs of integrated annotation technology: Musical content
**Dataset reference and name**
DS 4.7.1: Outputs of integrated annotation technology: Musical content
**Dataset description**
Annotations of Freesound and Jamendo content. Success in Task 4.7 will result
in at least 70% of Freesound (musical content) and Jamendo content annotated
with Audio Commons metadata as defined in the Audio Commons Ontology.
WP: WP4 / Task: Task 4.7
Responsible: MTG-UPF & Jamendo
**Standards and metadata**
Annotations for Freesound are stored using standard JSON format.
Annotations for Jamendo are stored using standard JSON format and include the
Jamendo identifier as part of the “_id” field, which has the form “jamendo-
tracks:<jamendo-id>”. Using the Jamendo id, further metadata and audio can be
requested through the Jamendo API (https://developer.jamendo.com/).
**Data sharing**
Freesound integration analysis results available in Zenodo:
https://zenodo.org/record/2546812#.XEc2ZC2ZOL4
Jamendo integration analysis results available in Zenodo:
https://doi.org/10.5281/zenodo.2551256
**Archiving and preservation (including storage and backup)**
Data stored in Zenodo.
Estimated final size (Bytes): 160M (Freesound analysis output) + 6.6GB
(Jamendo analysis output)
## DS 5.1.1: Timbral Hierarchy Dataset
**Dataset reference and name**
DS 5.1.1: Timbral Hierarchy Dataset
**Dataset description**
Data relate to Deliverable D5.1 which: (i) generated a hierarchy of terms
describing the timbral attributes of audio; (ii) determined the search
frequency for each of these terms on the www.freesound.org audio database.
WP: WP5 / Task: Task 5.1
Responsible: Surrey-IoSR (& MTG-UPF)
**Standards and metadata**
Data comprises excel and csv files, Python code, figures and documentation.
**Data sharing**
Public. DOI: 10.5281/zenodo.167392
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 6.5M
## DS 5.2.1: Timbral Characterisation Tool v0.1 Development Dataset
**Dataset reference and name**
DS 5.2.1: Timbral Characterisation Tool v0.1 Development Dataset
**Dataset description**
Audio files, test interfaces, and results of listening experiments on timbre
perception, carried out to inform the specification of required enhancements
to existing metrics, and of modelling approaches for significant timbral
attributes not yet modelled.
WP: WP5 / Task: Task 5.2
Responsible: Surrey-IoSR
**Standards and metadata**
Various (Datasets include multiple audio files as well as test interfaces, and
other ancillary files)
**Data sharing**
Data collected and stored anonymously according to ethics policy and approval.
Public. DOI:10.5281/zenodo.2545488
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 50MB
## DS 5.2.2: Timbral Characterisation Tool v0.1
**Dataset reference and name**
DS 5.2.2: Timbral Characterisation Tool v0.1
**Dataset description**
Computer code implementing the timbral models developed in Task 5.2 and
delivered in D5.2.
WP: WP5 / Task: Task 5.2
Responsible: Surrey-IoSR
**Standards and metadata**
Computer code plus documentation.
**Data sharing**
Public. DOI:10.5281/zenodo.2545492
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 150kB
## DS 5.3.1: Timbral Characterisation Tool v0.1 Evaluation Dataset
**Dataset reference and name**
DS 5.3.1: Timbral Characterisation Tool v0.1 Evaluation Dataset
**Dataset description**
Audio files, test interfaces, and results of evaluation of automatic methods
for the semantic annotation of non-musical content, including listening tests
where appropriate. Annotations will be evaluated against the timbral
descriptor hierarchy defined in Task 5.1. Supporting data for Deliverables
D5.3, D5.7
WP: WP5 / Task: Task 5.3
Responsible: Surrey-CVSSP & Surrey-IoSR
**Standards and metadata**
Various (Datasets include multiple audio files as well as test interfaces, and
other ancillary files)
**Data sharing**
Data collected and stored anonymously according to ethics policy and approval.
Public. DOI:10.5281/zenodo.2545494
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 1.5GB
## DS 5.4.1: Evaluation results of non-musical annotation interface
**Dataset reference and name**
DS 5.4.1: Evaluation results of non-musical annotation interface
**Dataset description**
Results of evaluation of interface for manually annotating non-musical
content, in terms of its usability and its expressive power for annotating.
The evaluation was carried out with real users and in
combination with the evaluation of Task 4.6. Supporting data for deliverable
D5.5.
WP: WP5 / Task: Task 5.4
Responsible: MTG-UPF
**Standards and metadata**
Free text and Tabular (e.g. CSV)
**Data sharing**
Project partners only.
**Archiving and preservation (including storage and backup)**
Anonymized data stored in project document server.
Estimated final size (Bytes): 1M
## DS 5.5.1: Outputs of integrated annotation technology: Non-Musical content
**Dataset reference and name**
DS 5.5.1: Outputs of integrated annotation technology: Non-Musical content
**Dataset description**
Annotations of Freesound content. Success in Task 5.5 will result in at least
70% of Freesound (non-musical) content annotated with Audio Commons metadata
as defined in the Audio Commons
Ontology. This will incorporate datasets DS 4.2.1 and DS 4.3.1.
WP: WP5 / Task: Task 5.5
Responsible: MTG-UPF
**Standards and metadata**
Annotations for Freesound are stored using standard JSON format.
**Data sharing**
Freesound integration analysis results available in Zenodo:
https://zenodo.org/record/2546812#.XEc2ZC2ZOL4
**Archiving and preservation (including storage and backup)**
Data stored in Zenodo.
Estimated final size (Bytes): 160M (Freesound analysis output)
## DS 5.6.1: FSDKaggle2018
**Dataset reference and name**
DS 5.6.1: FSDKaggle2018
**Dataset description**
Freesound Dataset Kaggle 2018 (or FSDKaggle2018 for short) is an audio dataset
containing 18,873 audio files annotated with labels from 41 general audio
categories from Google's AudioSet Ontology. All audio samples in this dataset
are gathered from Freesound. All sounds in Freesound are released under
Creative Commons (CC) licenses. In particular, all Freesound sounds included
in FSDKaggle2018 are released under either CC-BY or CC0. For attribution
purposes and to facilitate attribution of these files to third parties, this
dataset includes a mapping of audio files to their corresponding licenses.
WP: WP5 / Task: Task 5.5
Responsible: MTG-UPF
**Standards and metadata**
Ground truth annotations are provided using standard CSV files. Audio files
are provided as uncompressed PCM (16 bit, 44.1 kHz, mono).
**Data sharing**
Ground truth annotations and audio publicly available in Zenodo:
https://zenodo.org/record/2552860#.XFD1cfwo-V4
**Archiving and preservation (including storage and backup)**
Archived and stored in Zenodo research data repository.
Estimated final size (Bytes): 5G
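Since the audio format is fully specified, a quick sanity check with Python's standard wave module is straightforward; a minimal sketch (the file name below is illustrative, not a guaranteed dataset entry):

```python
import wave

# Verify one FSDKaggle2018 file against the stated format:
# uncompressed PCM, 16 bit, 44.1 kHz, mono. File name is illustrative.
with wave.open("audio_train/00044347.wav", "rb") as w:
    assert w.getsampwidth() == 2        # 2 bytes per sample = 16 bit
    assert w.getframerate() == 44100    # 44.1 kHz
    assert w.getnchannels() == 1        # mono
```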
## DS 5.6.2: Timbral Characterisation Tool v0.2 Development Dataset
**Dataset reference and name**
DS 5.6.2: Timbral Characterisation Tool v0.2 Development Dataset
**Dataset description**
Audio files, test interfaces, and results of listening experiments on timbre
perception, carried out to inform the specification of required enhancements
to existing metrics, and of modelling approaches for significant timbral
attributes not yet modelled.
WP: WP5 / Task: Task 5.2
Responsible: Surrey-IoSR & Surrey-CVSSP
**Standards and metadata**
Various (Datasets include multiple audio files as well as test interfaces, and
other ancillary files)
**Data sharing**
Data collected and stored anonymously according to ethics policy and approval.
Public. DOI:10.5281/zenodo.2545496
**Archiving and preservation (including storage and backup)**
Estimated final size (Bytes): 1.3GB
## DS 5.6.3: Timbral Characterisation Tool v0.2
**Dataset reference and name**
DS 5.6.3: Timbral Characterisation Tool v0.2
**Dataset description**
Computer code implementing the timbral models developed in Task 5.2 and
delivered in D5.6.
WP: WP5 / Task: Task 5.2
Responsible: Surrey-IoSR and Surrey-CVSSP
**Standards and metadata**
Computer code plus documentation.
**Data sharing**
Public. DOI:10.5281/zenodo.2545498
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 1.0MB
## DS 5.7.1: Timbral Characterisation Tool v0.2 Evaluation Dataset
**Dataset reference and name**
DS 5.7.1: Timbral Characterisation Tool v0.2 Evaluation Dataset
**Dataset description**
Code used in the evaluation of automatic methods for the semantic annotation
of non-musical content as delivered in Deliverable D5.6. Supporting data for
Deliverable D5.7
WP: WP5 / Task: Task 5.3
Responsible: Surrey-CVSSP & Surrey-IoSR
**Standards and metadata**
Computer code plus documentation.
**Data sharing**
Public. DOI:10.5281/zenodo.1697212
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 500kB
## DS 5.7.2: Timbral Hardness Modelling Dataset
**Dataset reference and name**
DS 5.7.2: Timbral Hardness Modelling Dataset
**Dataset description**
Audio files, test interfaces, and results of listening experiments on
_hardness_ perception, carried out to inform the development and testing of a
model of _hardness_ perception, as delivered in Deliverable D5.6. Supporting
data for Deliverable D5.7 and journal paper by Pearce _et al._ [2019].
WP: WP5 / Task: Task 5.3
Responsible: Surrey-CVSSP & Surrey-IoSR
**Standards and metadata**
Computer code plus documentation.
**Data sharing**
Public. DOI:10.5281/zenodo.1548721
**Archiving and preservation (including storage and backup)**
Project document server, Zenodo.
Estimated final size (Bytes): 1.5GB
# Introduction
The WhoLoDancE Movement Library (WML) is a web-based tool that allows end
users to browse, search and annotate the multimodal recordings that have been
acquired during the project. It integrates a data management and user
management back-end system, as well as an end-user interface targeting dance
practitioners and experts. The latest version of the WML is an improved
version of the older application. By upgrading several libraries to their
latest versions, the WML tool has gained flexibility as well as compatibility
with more devices and browsers. The new version of the WML and annotator also
brings several changes to the user interface and user experience, new
functionalities, and alternative viewers for the recordings.
_Table 1. Changes and improvements during the 2nd period of WhoLoDancE_

| Area | Additions/improvements |
| --- | --- |
| **General modifications to the UI & UX** | Upgrade of jQuery to the latest version; total redesign (upgrade to the Bootstrap 4 framework); error handling |
| **Home Page** | Redesign: search bar transferred to the middle of the page; browse options moved below the search bar |
| **Results Page** | Redesign and new functionalities: tag filtering system; extra metadata for each recording; option for editing metadata; search for playlists; additional filter options; search using the database for faster and more accurate results |
| **Mocap Viewer Page** | Redesign and new functionalities: timeline structure; playlist display option; creation of new playlists; managing recordings by adding them to or removing them from a playlist; metadata field; option for editing metadata |
| **Choreomorphy Viewer Page** | New viewer option with several discrete functionalities: altering the avatar; automatically rotating/following the camera; modifying the scale of the avatar; adding trails and traces |
| **Playlists Pages** | New pages related to playlist manipulation: personal channel showing created and saved playlists; option for managing the recordings of a playlist; option for creating new playlists |
| **User Management** | Actions determine a user’s role (a user might have several roles) |
| **Database** | Partial redesign of the database schema and query writing for efficiency; new tables related to new functionalities (copying recording metadata into the PostgreSQL database; enrichment of metadata using the ontology; enrichment of annotations using the ontology); updates to the existing tables; editing recording metadata using the WML (updating both the CKAN and PostgreSQL databases); type of segment in all recordings (PostgreSQL and CKAN databases) |
| **General** | Upgraded Spring Security; upgraded technologies |
# Architecture & data management
In this section, we describe the main components of the WhoLoDancE Movement
Library (WML) that have been upgraded regarding both the interface and the
back-end system. The WML architecture has been presented in detail in _D5.4
Final release testing and validation data management platform report_ . In
addition, we present the components of WML such as Annotation System, Movement
Library front end, and Repository in relation to the global WhoLoDancE
architecture and their relationship to other components (Figure 1). Several
upgrades have been made in order to improve the efficiency of the WML, as well
as the user experience. The approach adopts a flexible, layered architecture
designed for platform efficiency.
As with the previous version, the WML, being a web-based application, has been
developed according to the MVC architecture model (Model-View-Controller).
More specifically, the Spring Web MVC framework has been used. Spring MVC
separates an application into three interconnected parts: the model, the view
and the controller.
_Figure 1. WhoLoDancE overall architecture_
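Spring MVC is a Java framework; purely as a language-neutral illustration of how the three MVC parts divide responsibilities (the names below are invented for the example, not the WML's actual classes), a minimal Python sketch might be:

```python
# Model: holds the data and nothing else.
class Recording:
    def __init__(self, title: str, genre: str):
        self.title = title
        self.genre = genre

# View: renders the model, with no business logic.
def render(recording: Recording) -> str:
    return f"<h1>{recording.title}</h1><p>{recording.genre}</p>"

# Controller: mediates between the request, the model and the view.
def show_recording(repository: dict, recording_id: int) -> str:
    return render(repository[recording_id])

repo = {1: Recording("grand_battement_02_A_001", "Ballet")}
print(show_recording(repo, 1))
```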
## WML back end
The WML back-end system has been upgraded in order to meet the functional and
non-functional requirements and users’ needs as they have been defined during
the evaluation process with the internal (members of the consortium) and
external dance experts.
These requirements suggested new specifications, for all of the different
layers of the implementation of the WML as a system, starting from the data
management and back-end. In particular, changes have been made to the data
management, updating the schema and content of the database. Some extra tables
have been added, as described above. These updates targeted both an enhanced
performance of the back-end system, as well as a richer representation of
movement recordings and their descriptors. In order to organise the knowledge
that relates the recordings with the dance, movement and other concepts that
describe metadata, annotations and other descriptors, the ontology that was
introduced in _D3.1 Report on semantic representation models_ has been
extended and integrated with the WML system. More details about the new
extended version of the ontology are provided in _D6.4 Final Report on the
resulting extension and integration of the ASTE engine in WhoLoDancE_.
An important component of this new version is the ontology. It was used for
semantic enrichment of the metadata for each recording regarding the ‘Ballet
movement’, i.e., ballet specific vocabulary that consists of the syllabus and
terminology of this particular genre. Ballet, as a dance genre, is one of the
examples where particular movements have names and introduce a particular
vocabulary which is common not only among the practitioners of the dance
genre, but also among other dance practices. In addition, the vocabulary of
movements and its corresponding terminology implies particular rules about the
difficulty of the steps, hierarchies of movements and relationship with more
generic movement principles, qualities and actions such as turn, step, jump
etc. More details about the computational applications of particular
vocabularies such as the “Ballet Movement” sub-ontology can be found in
related papers [4][5]. More examples are also given in “ _D6.4 Final Report on
the resulting extension and integration of the ASTE engine in WhoLoDancE”_ .
Furthermore, a part of this information was incorporated in the annotation
process. More specifically, the Ballet movement has been added as an extra
field of choice in order to describe the movement of the dancer.
Currently, the WML repository has a total of 786 recordings. These recordings
have been incorporated in the database following the schema shown in detail in
Figure 2. They were migrated from the CKAN data management
platform, omitting unnecessary data, so as to be tailored to the WML needs.
Taking advantage of this form, the search has been redesigned and the response
time has been reduced.
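The recordings were migrated from CKAN, whose action API exposes dataset records as JSON. Purely as an illustrative sketch of how such a migration could start (the base URL and field choices below are placeholders; the WhoLoDancE CKAN endpoint is internal to the project):

```python
import requests

# Placeholder base URL; CKAN's standard action API path is /api/3/action.
CKAN = "https://ckan.example.org/api/3/action"

resp = requests.get(f"{CKAN}/package_search", params={"q": "ballet", "rows": 10})
resp.raise_for_status()
for pkg in resp.json()["result"]["results"]:
    # Keep only the fields the WML needs; everything else is omitted.
    print(pkg["name"], pkg.get("notes", ""))
```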
## WML data storage
As described in D5.4 the Data storage layer represents the infrastructure
which implements the storage of the multimodal recordings. The Data storage
layer has been enriched so as to support the extra functionalities and
improvements that have been made in the WML. The Annotations Database
component consists of the following tables:
1. Recordings: contains the metadata of the recordings.
2. Dance Genre: contains the dance genres that describe the recordings.
3. Movement Principle: contains the specified vocabulary that describes the recordings.
4. Users: contains the users that are registered in the WML.
5. Actions: contains the actions that a role can perform while interacting with the WML.
6. Roles: contains the role(s) that a user has in the WML.
7. Annotations: contains the annotations that are added by the dance experts.
8. Categories: contains the categories from which a user can choose to annotate a recording.
9. Labels: contains a specific vocabulary for each category regarding the annotations.
10. Tags: contains the keywords that refer to each of the recordings.
11. Playlist: contains the collections of recordings that a user has saved in their profile, either private or public.
12. Ballet Movement: contains a specific vocabulary extracted from the ontology, holding the specified movement information for the recordings.
The tables that were added in the newest version of the WML were the
Recordings, Dance Genre, Movement Principle, Actions, Tags, Playlist and
Ballet Movement. The Categories as well as the Labels were enriched by adding
the Ballet Movement and the related concepts that make up this particular
vocabulary.
_Figure 2. WhoLoDancE Movement Library schema_
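To make the additions concrete, the following is a minimal, hypothetical PostgreSQL DDL sketch for two of the new tables described above; the table and column names are guesses based on the description and Figure 2, not the project's actual schema:

```python
# Hypothetical PostgreSQL DDL; names and columns are illustrative only.
DDL = """
CREATE TABLE ballet_movement (
    id    SERIAL PRIMARY KEY,
    label TEXT NOT NULL  -- e.g. 'Grand_Jeté', extracted from the ontology
);

CREATE TABLE recording_ballet_movement (
    recording_id       INTEGER REFERENCES recordings(id),
    ballet_movement_id INTEGER REFERENCES ballet_movement(id),
    PRIMARY KEY (recording_id, ballet_movement_id)
);
"""
print(DDL)
```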
## WML–ontology integration
In this section we describe the changes that have been made in the database as
well as the use of the ontology. In particular, an initial version of the
ontology has been described in the deliverable D6.3 [11] and reflects the
conceptual framework of the WhoLoDancE project for recording and organizing
the movement content and educational scenarios [1][2][6][7]. The ontology has
been extended to include more details about the recordings’ metadata,
annotations and tags, providing interrelations between descriptors (qualities,
principles, actions) and education-related details (level, dance genre,
dance syllabi and specific vocabularies), and to integrate ontologies that have
been produced by Athena RC and published in related conference papers [4][5].
The ontology and its integration in the Educational platform are described in
detail in “D6.4 Final Report on the resulting extension and integration of the
ASTE engine in WhoLoDancE”.
Moreover, in order to integrate the ontology with the recordings, we have used
Apache Jena, a free and open-source Java framework. Taking advantage of the
wealth of information extracted from the ontology, the metadata of the
recordings as well as the annotations were enriched. For example, after this
process the recording with title “grand_battement_02_A_001” received the ballet
movement “Grand_Jeté” as metadata. The Eclipse RDF4J framework, an open-source
Java framework for processing RDF data, was also used.
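The project performed this enrichment in Java with Apache Jena and RDF4J; purely as an illustration of the idea, an equivalent lookup in Python with rdflib might look like this (the ontology file path, namespace and class IRI are placeholders, not the ontology's real identifiers):

```python
from rdflib import Graph

g = Graph()
g.parse("dance-ontology.owl")   # local copy of the ontology; path illustrative

# Namespace and class IRI below are placeholders for this sketch.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wld: <http://example.org/wholodance#>
SELECT ?movement ?label WHERE {
    ?movement a wld:BalletMovement ;
              rdfs:label ?label .
}
"""
for movement, label in g.query(query):
    print(movement, label)
```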
Regarding the information that derived from the ontology, there were the
following additions:
* 87 ballet movements were added as metadata in the WML.
* 76 recordings were enriched from the above metadata.
In Figure 3, an overview of the applied Dance Ontology is shown, comprising
concepts describing the Recording, Annotations, Movement, Movement Descriptors
and their subcategories Movement Principle, Movement Qualities, Action and
Human Body Part, but also concepts related to metadata such as Dance Genre,
Dance Company, Dance Performance and Dance Performer, and concepts related to
the educational aspect such as Learning Unit, Part_of_Class, Learning Level,
etc. Figure 4 shows the metrics of the asserted classes, object properties and
datatype properties in the final version of the ontology.
_Figure 3. An overview of the Ontology using Protégé_
_Figure 4. Dance-Ontology metrics_
## WML user management & security
As described in D5.2 [10], the user management system is a core part of the WML
platform. It provides basic security and describes the ability of the
administrator to manage user access to various resources and functionalities.
The following Figure 5 shows the part of the database schema that is dedicated
to user management and role handling.
_Figure 5. User Management data schema_
Through the user management system, the first step of using the WML platform
is completing the registration. After a successful registration process, the
following message is shown, and the user can access the WML platform through
the login form and interact with the tool (Figures 6, 7 and 8).
_Figure 7. Successful registration message_
_Figure 8. Log-in form_
_Figure 6. Registration form_
An important component of the WML is security. To ensure the protection of the
data within the platform, the Spring Security framework was used for
authentication and authorization in the WML. Having been upgraded to the latest
version, it provides protection throughout the platform.
# Functionality & user interface
## Evaluating design decisions
The WhoLoDancE Movement Library and the annotator interface have been developed
through a user-centred, iterative design approach. The user interface has been
evaluated at different stages. More details about the evaluation methodology
and results are provided in the deliverables _“D7.2 First evaluation of
personalised experience”_ and _“D7.3 Evaluation of Learning Personalized
Experience Final public report”_, as well as in a published paper [3]. A
large number of the changes made to the user interface and to the
functionality of the platform resulted from the iterative design process
and from the requirements and specifications that emerged during the evaluation
with UI/UX and dance experts who represent the potential users of the
platform.
## Search by keywords and browse using dance genre
### Description
Figure 9 and Figure 10 show the application’s main interface, the old and new
version, respectively. Both pages were designed in order to meet users’ needs
for both searching and browsing the WhoLoDancE repository.
The Home page’s main goal is to guide users by providing an effective and
direct medium for discovering, searching and browsing the WhoLoDancE
recordings. In both pages (old and new version), the appearance as well as the
functionalities are similar.
_Figure 9. Old version of the WhoLoDancE Movement Library’s Home page_
_Figure 10. New version of the WhoLoDancE Movement Library’s Home page_
### Related requirement
The WhoLoDancE Movement Library meets the users’ need to discover data
effectively. Searching by keywords that refer to the recordings’ descriptions
and characteristics is covered through the use of the search bar.
However, there are other cases in which users are not familiar with the
specialized dance vocabulary and expressions used in the WhoLoDancE ontology
and are simply interested in exploring the repository. Browsing the
recordings by dance genre offers that opportunity.
### Specifications
The WhoLoDancE Movement Library serves as a search engine that aims to
showcase the WhoLoDancE repository. The “Home” page has a decisive role in this
task. The old and new versions of the tool (Figure 9 and Figure 10) look
similar.
At the top of the page, users can still find the navigation bar. The navigation
bar is composed of the WhoLoDancE icon, which is also a link to the home page,
as well as a small drop-down menu in the right corner. The drop-down menu
includes two option buttons, “Playlists” and “Log Out”. The “Playlists” button
redirects users to their personal channel, in which they can find, play,
edit or delete their own playlists.
As Figure 10 shows, the search bar has been transferred from the navigation
bar to the middle of the page. Considering the observation that users are
prone to examine a layout by following the F rule (the F-layout refers to
specific design guidelines related to UI and UX improvement), altering the
position of the search bar was a necessary improvement.
Below the search bar, four circular icons are located, serving as the
medium for browsing the repository. Each circle corresponds to a specific
dance genre.
## Explore the search results
### Description
The search results page has been created to offer an efficient way of
presenting the results of interest, gaining insight into the recordings
through their metadata descriptions, managing the metadata information, and
browsing the WhoLoDancE repository.
Considering the large number of recordings, each with several distinct
metadata fields, it was essential to design an effective way of both searching
and managing them. As Figure 11 (old version of the results page) and
Figure 12 (new version of the results page) demonstrate, this page has
undergone significant changes.
_Figure 11. Old version of the search Results page_
_Figure 12. New version of the search Results page_
First, the filters panel has been removed from the left side of the results
section. In the latest version, the filters are located in a toggle panel just
above the results panel (Figure 13). When a specific filter is selected, the
option also appears as a tag label. Another design alteration concerns the
recordings’ metadata. As shown in Figures 11 and 12, in the latest version
recordings are enriched with further details as well as an option for editing.
An inline approach has been developed to facilitate the editing process
(Figure 14).
_Figure 13. New version of the search Results page - Filters tag system_
_Figure 14. New version of the search Results page - Edit metadata_
### Related requirement
The search results page serves as the intermediate between the home page and
the viewer page. After searching or browsing by using dance genre, users are
redirected to the search results page, where the recordings of their interest
are presented (Figure 12).
Through this page users are able not only to search for specific recordings
but also to locate their personal playlists. The process of searching has been
enhanced with mechanisms for filtering, paging and editing the results.
### Specifications
Through the search results page, users are informed of the total number of
recordings, as well as playlists, produced by their search actions. In order
to give users an insight into the recordings, each result is combined with
even more details than in the previous version. More specifically, a result
contains: Title, Description, Free tags, Movement Principle tags, Dance Genre,
Capture Venue, Capture day, Performer, Number of performers, Company and
Number of annotations. As shown in Figure 14, a new feature has been developed
to allow users to edit the information mentioned above.
The function for filtering results has been removed from the left side of the
page. In the latest version, the filtering system has been placed above the
results panel, at the top of the page. Instead of toggles containing checkbox
options, the filtering process has been enhanced with the use of tag labels.
The filter panel still contains discrete lists with checkboxes; however,
additional filters have been added, and each option selected by the user is
displayed as a tag (Figure 13).
## Mocap viewer/player
### Description
At the top of the page, users find the recording’s name. In the current
version, the title also works as a toggle button that presents all the details
of the recording (Figure 17). Users can not only read the recording metadata
but also edit some of those details.
Directly below is the custom player component shown in Figure 18. The player
has been developed to offer the ability to simultaneously watch and handle the
motion-capture file and the corresponding video. The player supports all basic
functions, such as play, pause, move forward and backward, seek to a specific
timestamp, mute, increase or decrease volume and show the current and total
time.
Moreover, there is a button for hiding the timeline and annotation structures
(the hide annotations button), a button that redirects users to another player
(the Choreomorphy viewer has been included as an extra view for the
recordings), as well as a button for adding the recording to a playlist.
_Figure 15. Old Version of the Viewer page - Player_
_Figure 16. New Version of the Viewer page - Edit metadata_
### Related requirement
The home, browse and mocap viewer pages constitute the basic components of the
WhoLoDancE Movement Library application. Regarding the latter, it was
essential to build a custom player serving as a view for the recordings. Each
recording includes both a video and a motion capture. The need to watch those
files simultaneously and to interact with them led to the player’s design and
development.
### Specifications
The viewer page constitutes an essential interface of the WhoLoDancE Movement
Library tool, as it includes several important functionalities. Both in the
old and new versions (Figure 15 and Figure 16), the viewer page is composed of
three discrete components: the custom player, the timeline structure and the
annotations table. In the latest version, however, a series of new
functionalities has been developed.
Regarding the player component, the new version has maintained all its key
features. The player still supports play, pause, move forward and backward,
seek to a specific timestamp, mute, increase or decrease volume and show the
current and total time. Moreover, it still offers the opportunity to interact
(zoom in/out, rotate, move) with the motion-capture 3D skeleton.
In the latest version, the player structure has been extended with options
for creating a playlist, adding the recording to or removing it from a
playlist (Figure 17), selecting another view (the Choreomorphy viewer) and
interacting with the timeline structure.
_Figure 17. New Version of the Viewer page - Add to playlist_
## Annotation timeline
### Description
Each recording stored in the WhoLoDancE Repository can be combined with
several annotations that aim to describe and analyse the dancer’s motion.
To that end, the new version of the WML application includes a new structure
for viewing the annotations.
More specifically, a specialized timeline structure has been developed (Figure
18, Figure 19) that offers the opportunity to watch a movement and the related
descriptions simultaneously. The timeline structure not only serves as an
annotation viewer, but also allows users to add new annotations and to edit or
delete them. Several functionalities have been included to create a robust
means of viewing the annotations.
_Figure 18. Annotation Timeline_
_Figure 19. Annotation Timeline on hover option_
### Related requirement
In the previous version, a table served both as a viewer and as a tool for the
management of the annotations. The new version also includes a timeline
structure, developed as an alternative view and management system for the
annotations.
The new structure provides a completely new perspective for viewing the
motion-capture files by relating time with comments. The timeline allows a
more flexible and effective synchronised view of annotations while the
recordings play.
### Specifications
The timeline structure serves as an alternative view option for the
annotations of the recordings. Within the timeline, the user can add a new
annotation, or edit and delete existing ones.
The range of the structure is dynamically adjusted according to the
recording’s duration. The time scale is displayed every 10 seconds. A vertical
red line, synchronized with the player’s seek bar, moves during the recording’s
playback and displays the current time. Moreover, when the mouse moves over
the timeline structure, a tooltip follows the cursor and shows the time. At
this point, users can, by double-clicking on an area of the timeline, seek to
the corresponding timestamp of the recording (Figure 18, Figure 19).
Zoom in, zoom out, slide left or right are also some of the functionalities
that were developed to enhance the timeline.
Depending on their duration, annotations can be divided into two categories:
those that refer to a specific moment in time are displayed with a dot, and
those that refer to a time period with a square. Each annotation belongs to
one of the following categories: “Action”, “Movement Quality”, “Movement
Principle” and “Other”. Depending on their category, annotations are presented
in a different colour (Figure 19). When the mouse hovers over an annotation, a
tooltip with details appears, as well as options for deleting and editing.
Finally, users can filter the annotations by simply using the checkboxes that
are placed below the timeline structure.
## Annotations table
_Figure 20. Annotations Table_
### Description
The annotations table is a structure that has been developed to provide users
with an effective tool for managing and viewing annotations. This structure,
which was also included in the previous version of the application, supports
the need to manage annotations quickly.
The table structure supports several functionalities such as pagination,
sorting and searching. It also allows users to add, edit and delete
annotations. Each action is also connected with the timeline structure.
### Related requirements
The annotations table has been created as an effective tool for viewing and
especially for managing annotations. Combined with the timeline structure,
users have the opportunity to select which of those two structures suits their
needs better. The most important aspects of the annotations table structure
are related to the processes of comparing, searching and sorting annotations.
### Specifications
As described in the D5.2 report, the annotations table is a specialized
structure that provides an efficient way to add, edit and delete annotations.
It also includes several useful features, such as searching the table with
keywords, regulating the number of annotations shown on each page, and sorting
the columns of the table. Undo and redo methods have also been implemented.
The table is enhanced to support the processes of adding new annotations, and
editing and deleting them. The add, edit and delete functionalities take place
inline on the table structure, offering extra flexibility and effectiveness.
## Add, edit, delete annotations
### Description
Two mechanisms have been developed to allow users to add, edit and delete
annotations: both the timeline and the annotations table serve as means of
viewing and managing annotations.
### Related requirements
The decision to create a tool for viewing the WhoLoDancE recordings and
combining the dancer’s motion with specific annotations automatically created
the functional requirement of easily adding, editing and deleting annotations.
That is the basic reason both view options are also combined with
functionalities for managing the annotations. The timeline structure offers a
faster way of reading and managing annotations. On the other hand, the table
is more effective when several annotations must be managed, as it allows the
process to proceed by comparing and sorting them.
### Specifications
The old version of the WhoLoDancE Movement Library tool allowed viewing, adding and managing annotations only through the Annotations table structure. In the latest version of the tool, however, a timeline structure has also been included.
Figure 21 demonstrates the add-annotation process in the table structure. Adding a new annotation starts from the “Add Annotation” button in the top right corner of the table. The edit and delete options are reached from the corresponding buttons included in every table row. Add and edit actions take place directly on the table structure, without using any popup windows.
_Figure 21. Annotation table_
Adding and editing annotations through the timeline structure, on the other hand, is achieved with popup windows (Figure 22). The timeline structure also includes an “Add Annotation” button in its top right corner, while the edit and delete options appear only when the mouse hovers over a specific annotation. Each action simultaneously affects both the timeline and the table structure.
_Figure 22. Using Timeline to add/edit annotations_
## Choreomorphy viewer
### Description
The WhoLoDancE Movement Library application has been developed to serve as an effective tool bridging the gap between users and the WhoLoDancE repository. To achieve that, it was essential to emphasize the processes of searching and viewing the recordings. Regarding the latter, a new viewer with specialized functionalities has been developed (Figure 23, Figure 24).
In the first version of the WhoLoDancE tool, the mocap viewer was the only view option provided to users. The latest version, however, comes with two distinct viewer interfaces. By clicking the Choreomorphy Viewer button at the top of the mocap viewer’s interface, users are redirected to the new player, whose interface provides an alternative view of the motion capture recording.
_Figure 23. Choreomorphy Viewer Page_
_Figure 24. Choreomorphy Viewer Page 2_
### Related Requirements
Viewing the recordings of the WhoLoDancE repository and managing annotations on them were among the most important needs the Movement Library tool tried to cover.
As the evaluation process showed, both the motion capture 3D skeleton and the video were, each for different reasons, extremely useful for understanding the movement of dancers.
The Choreomorphy Viewer has been developed to enhance the view structures by offering one more option.
### Specifications
The Choreomorphy Viewer page includes all the functionalities mentioned for the “Mocap Viewer” page. At the top of the page, clicking the title of the recording reveals the recording’s metadata as well as an option for editing them. Below the metadata panel, the components of the Choreomorphy player, the timeline and the annotations table are located.
The Choreomorphy player (Figures 23 and 24) is composed of three discrete structures: a view of the motion capture representing the dancer’s body as a 3D avatar in a cube, the video of the recording and, finally, the Choreomorphy viewer with a 3D avatar placed in a virtual environment. By selecting the monitor’s icon, users can keep only the Choreomorphy structure (Figure 24). All three represent the movement of the dancer, and they are fully synchronized not just with each other but also with the timeline structure.
Even though both 3D avatar components look similar, there are several differences, not only in the environments in which they are placed but also in their functionalities. In both views, users can rotate and zoom in/out of the scene. The Choreomorphy view, however, also offers options for altering the avatar, automatically rotating the camera to follow the avatar’s movement, modifying the scale of the avatar, and adding trails and traces. Those functionalities appear when the user clicks on the gear icon (Figure 24).
Initially, the Choreomorphy view component constituted a separate Unity project. The WebGL build option has then been used to let Unity publish the content as a JavaScript program that uses HTML5 technologies and the WebGL rendering API to run Unity content in a web browser.
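A conventional way to keep several views locked to the same playback time is an observer pattern around a single clock. The Python sketch below illustrates the idea only and is not taken from the tool’s code:

```python
class PlaybackClock:
    """Single source of truth for the current playback time; the mocap view,
    the video, the Choreomorphy avatar and the timeline all subscribe to it."""

    def __init__(self):
        self.time = 0.0
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def seek(self, t: float):
        self.time = t
        for notify in self._subscribers:
            notify(t)  # every registered view re-renders at time t

clock = PlaybackClock()
clock.subscribe(lambda t: print(f"mocap view    -> frame at {t:.2f}s"))
clock.subscribe(lambda t: print(f"video view    -> frame at {t:.2f}s"))
clock.subscribe(lambda t: print(f"timeline cursor -> {t:.2f}s"))
clock.seek(30.0)
```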
## Playlists
### Description
The latest version of the WML application also comes with a complete playlist system. To avoid the repeated process of searching among several recordings, the playlist system provides a more personalized experience by offering users the choice to create a personal channel in which they can save their own playlists.
### Related requirements
The vast number of recordings, the difficulties that might emerge from a lack of experience with the tool, and vocabulary that may be unfamiliar to users all played a decisive role in the decision to create personal channels and playlists.
Creating and managing a personal repository of grouped recordings of interest offers a different, more personalized perspective on the platform.
### Specifications
Assuming the role of a personal repository, this new feature allows users to gather recordings of interest in playlists and to directly search, select and display their selections.
Figure 25 demonstrates the interface of a personal channel. Under the title “Created playlists”, users find their created playlists, each combined with a title and the number of recordings included. Hovering over the image of a playlist reveals the “Play All” option. The interface also includes an option for creating a new playlist: the “Create Playlist” button reveals a dropdown menu (Figure 26) for specifying the new playlist’s characteristics. Title, description and privacy are the three attributes that describe a playlist.
The playlist’s title also serves as a link, which redirects users to the “Playlist Info” page.
_Figure 25. Personal channel and created playlists_
_Figure 26. Personal Channel - Create new playlist_
On the “Playlist Info” page (Figure 27), users can read the list of tracks included in a playlist, as well as details about it. Moreover, users can select the play button, change the playlist settings or even delete the playlist. Deleting a specific recording from the list is also supported.
_Figure 27. Playlist’s Info page_
Creating a new playlist is possible both through the viewer and through the Playlists/Profile pages. In the top right corner of the “Viewer” page, the “Add to Playlist” button is located; it reveals a dropdown menu that allows users to include the current recording in a new playlist or in any of the already created lists. Figure 28 demonstrates how the custom player is formed when the “play all” (all tracks of a list) button is selected.
_Figure 28. Mocap Viewer - Play the tracks of a playlist_
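The playlist feature maps naturally onto a small data model: a personal channel owns playlists, and each playlist carries the title, description and privacy attributes mentioned above, plus the recordings it groups. The following Python sketch is illustrative, and its identifiers and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Playlist:
    """A playlist as described above: title, description and a privacy flag,
    plus the list of recording identifiers it groups."""
    title: str
    description: str = ""
    private: bool = True
    recordings: List[str] = field(default_factory=list)

@dataclass
class Channel:
    """A user's personal channel holding their created playlists."""
    owner: str
    playlists: List[Playlist] = field(default_factory=list)

    def create_playlist(self, title, description="", private=True) -> Playlist:
        playlist = Playlist(title, description, private)
        self.playlists.append(playlist)
        return playlist

# "Add to Playlist" from the viewer: append the current recording.
channel = Channel(owner="dancer42")
warmup = channel.create_playlist("Warm-up phrases")
warmup.recordings.append("recording-0017")  # illustrative identifier
```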
# Testing and validation
This section presents the results of the testing and validation activities of
the WhoLoDancE platform.
## SWOT analysis
This section presents the SWOT analysis of the WhoLoDancE platform (Table 1).
<table>
<tr>
<th>
**Strengths**
</th>
<th>
**Weaknesses**
</th> </tr>
<tr>
<td>
**WhoLoDancE Movement Library**
■ Several functionalities are supported from the same platform, such as searching, browsing, viewing and annotating the WhoLoDancE recordings
■ Responsive design allowing use of the platform from personal devices, such as smartphones
**Search/browse**
■ Search the WhoLoDancE repository using metadata as keywords
■ Add or edit recording metadata
■ Manage results with filtering and pagination functionalities
**View recordings**
■ View motion capture recordings with different viewers
■ Interact with the 3D avatar of the mocap (zoom, rotate, move, change 3D avatar, add traces and trails)
**Annotations**
■ View, add and manage annotations through a timeline or a table structure
■ Custom annotations (without vocabulary restrictions) can be created
**Playlists**
■ Create and manipulate personal playlists
</td>
<td>
**WhoLoDancE Movement Library**
■ Offline use of the Platform is not supported
**Search/browse**
■ Browsing is focused only on dance genre selection
■ Search process requires specific vocabulary
**Annotations**
■ Recommended annotations are not offered
</td> </tr>
<tr>
<td>
**Opportunities**
</td>
<td>
**Threats**
</td> </tr>
<tr>
<td>
**WhoLoDancE Movement Library**
■ Could support specialized lessons, depending on the user’s dance interests and skills
■ Could be used as a tool for dance lessons preparation
■ Able to provide integration with other dance tools
**View recordings**
■ Project annotations directly on the 3D avatar of the mocap
■ Provide suggestions depending on similar traits
■ Provide suggestions depending on similar annotations
■ Provide suggestions depending on users’ interests and dance skills
**Annotations**
■ Annotating by clicking on specific body parts of the 3D avatar
</td>
<td>
**Choreomorphy viewer**
■ Using the Choreomorphy Viewer from personal devices, such as smartphones, might be impossible
**Search/browse**
■ Users may not be positive towards a searching process with specific vocabulary
**Annotations**
■ Users might have difficulties with annotating on a table structure or a timeline while watching the recordings
</td> </tr> </table>
## Testing of the platform
The table below lists the bugs and improvements reported during the testing of the platform, together with their implementation status.
<table>
<tr>
<th>
Bugs/Improvements
</th>
<th>
Status
</th> </tr>
<tr>
<td>
Show timeline with the annotations
</td>
<td>
Completed
</td> </tr>
<tr>
<td>
Alternative view options (Choreomorphy Viewer)
</td>
<td>
Completed
</td> </tr>
<tr>
<td>
Personal channel for each user
</td>
<td>
Completed
</td> </tr>
<tr>
<td>
Create, manage, view and display playlists
</td>
<td>
Completed
</td> </tr>
<tr>
<td>
Methods for editing metadata
</td>
<td>
Completed
</td> </tr>
<tr>
<td>
Enrich metadata shown in each result
</td>
<td>
Completed
</td> </tr> </table>
# Maintenance plan
The data management platform is installed on the ATHENA Research Center servers. The servers are maintained, and the content is backed up on a regular basis.
Overall, the project consortium, and ATHENA Research Center in particular, commits to keeping the platform operational and the data available for at least years after the end of the project. Migrations are planned to take place in line with the development of the standards and technologies adopted by the project.
After this period, the maintenance of the platform will be defined according to the potential exploitation plans of the project consortium. The strategies for covering the platform’s sustainability costs are closely related to the strategies and approaches the project will put in place for the exploitation and sustainability of the entire set of project results.
0746_SUMCASTEC_737164.md
# Section 1: Data summary
## 1.1 Purpose of the data collection/generation and relation to the
objectives of the project
The purpose of data collection/generation is to gather evidence that the developed lab-on-chip platform can isolate and neutralize CSCs, which will require characterization of the new device’s performance along with characterization of the biological response of the samples tested (e.g. cells) using the device. Additionally, it will be necessary to collect data from biological cells tested and characterized without utilizing the developed device (e.g. by traditional petri-dish culture) in order to benchmark the novel technology/procedures against established “gold” standards.
Therefore, the collected data will include the experimental procedures for characterization of the device along with the protocols for biological testing. Finally, simulation data and code, for example for automatic control of measurement instruments, will also be collected.
## 1.2 Types and formats of the data
The data will be collected in text, numerical and image formats and gathered in files whose extensions are defined by the equipment/software used for generation and collection. Some examples include, but are not limited to, .xlsx (Excel spreadsheet), .ppt (PowerPoint), .mat (Matlab), .txt (text), .s2p (touchstone) and .avi (video). Data will be generated by individuals or groups of researchers in all involved institutions.
## 1.3 Size of the data
The expected size cannot be predicted at this stage, but it is reasonable to assume that it will reach the tens-of-gigabytes range.
## 1.4 Targeted users of the collected data
The data will be useful to members of the scientific community who are willing
to reproduce and build on the described experiments or develop similar
technologies.
# Section 2: FAIR Data
## 2.1 Making data findable, including provisions for metadata
### 2.1.1 Discoverability of data (metadata provision)
Considering the strongly interdisciplinary nature of the project, SUMCASTEC’s consortium favors the adoption of a broad and domain-agnostic metadata standard that the EU recommends to its member states for recording information about research activity: the Common European Research Information Format (CERIF) standard, described at _http://www.eurocris.org/cerif/main-features-cerif_ .
An additional advantage of a CERIF-inspired standard is that the institution managing SUMCASTEC’s DMP (Bangor University) currently uses a research information system developed by Elsevier that implements the CERIF standard (PURE).
### 2.1.2 Identifiability of data
For publication data, unique identifiers such as Digital Object Identifiers will be used. For other data, the identification mechanism described in “Naming and conventions used” will be adopted.
### 2.1.3 Naming and conventions used
The following structure is proposed for a SUMCASTEC data set identifier:
“Project”_“Date”_”Time”_”Name”_”Type”_”Extension”_”Place”_”Creators”_“Target
user”_”Other”
Where:
* “Project” is the project name (SUMCASTEC by default).
* “Date” is the date in format “YYMMDD” which is chosen to allow data that was taken at similar dates to be stored in close locations. For the same reason the date and time fields are set to precede the name field.
* “Time”: is the time in format “HHMMSS” if relevant, or NA by default.
* “Name” is a short name for the data.
* “Type” describes the type of data (e.g. publication, measured data, simulation data, protocol description …).
* “Extension” describes the data file extension.
* “Place” describes the location where the data were produced.
* “Creators” defines the individual(s) who generated the data.
* “Target user” defines the target audience of the data, if known.
* “Other” is an optional field for additional details (whose default value is NA).
For example:
“SUMCASTEC”_“170519”_“092134”_“Sparameters”_“Measured”_“txt”_“Bangor”_“C. Palego”_“Partners and public”_“NA”
is a file named Sparameters that was taken on May 19th, 2017 at 9:21 AM and contains measured data with the txt extension. The data was generated in Bangor by C. Palego, and its storage targets SUMCASTEC partners as well as the general public. A simple Excel spreadsheet has been created and will be distributed to all partners for highly automated generation of file names using the described format. An example of a file name generated using this tool is visible in Figure 1.
**Figure 1: simple Excel utility to be distributed to all partners for generation of data names according to the DMP convention.**
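The convention is simple enough to automate in a few lines; the sketch below is a hypothetical Python equivalent of the Excel utility, not a project deliverable:

```python
def sumcastec_name(date, time="NA", name="", dtype="", ext="", place="",
                   creators="", target="NA", other="NA",
                   project="SUMCASTEC"):
    """Assemble a data set identifier following the DMP convention:
    Project_Date_Time_Name_Type_Extension_Place_Creators_TargetUser_Other.
    """
    fields = [project, date, time, name, dtype, ext, place, creators,
              target, other]
    return "_".join(f'"{f}"' for f in fields)

print(sumcastec_name(date="170519", time="092134", name="Sparameters",
                     dtype="Measured", ext="txt", place="Bangor",
                     creators="C. Palego", target="Partners and public"))
# -> "SUMCASTEC"_"170519"_"092134"_"Sparameters"_"Measured"_"txt"_"Bangor"_
#    "C. Palego"_"Partners and public"_"NA"
```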
### 2.1.4 Approach towards keywords
For publication data, the official keyword list provided by the publisher will be used. For other data, keywords will be selected by the data owner.
## 2.2 Making data openly accessible
### 2.2.1. Data to be made publicly available and rationale for keeping some
data closed
**Publications:** Partners will be free to publish and disseminate their own results according to the procedure defined and agreed in the Consortium Agreement. The consortium will comply with the Grant Agreement open access clause for the publications generated from the project, but will deposit them into institutional (closed) repositories like the University of Limoges’ Ucloud ( _https://ucloud.unilim.fr_ ) before moving them to public data repositories like Zenodo ( _https://zenodo.org_ ). The timing and approach for moving publications to the public repository are similar to those for the other data and are discussed in the next section.
**Other data:** SUMCASTEC's partners strive for maximum openness of data
collected and generated during the project but reserve the right to evaluate
which data will be made publicly available along with the time for publication
on a case by case basis. The "Guidelines for FAIR Data Management in Horizon
2020" recognize the need to balance openness and protection of scientific
information, commercialization and Intellectual Property Rights (IPR), privacy
concerns, security as well as data management and preservation questions. It
is expected that the dominant causes for enforcing data access restriction
during SUMCASTEC will be protection of IPR and commercialization strategies.
It is also expected that the openness stance regarding individual items can be
reviewed and updated periodically. For example, test results or experimental
protocols can be made publicly available after the consortium has filed for
the corresponding patents.
The decision as to data openness and availability time will be made through a
vote held by the steering board. If the amount and quality of data is deemed
to require an extraordinary board consultation, a meeting will be scheduled at
the earliest convenience. Otherwise the steering board will hold a vote in the
frame of the scheduled consortium meetings.
### 2.2.2. Methods to access the data
SUMCASTEC has chosen the Zenodo ( _https://zenodo.org_ ) repository for storing the project data, and a SUMCASTEC project account has been created for this purpose. Zenodo is a repository supported by CERN and the EU OpenAIRE project; it is open, free, searchable and structured, with flexible licensing allowing the storage of all types of data: datasets, images, presentations, publications and software. Additionally:
* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com (a platform providing a free and flexible tool for code development and storage), which could be used for storing code generated during the project (e.g. code for data analysis and automated measurement setup drivers).
* The repository can be set to restrict access to data under embargo status until a chosen date; the content then becomes publicly available automatically.
* Zenodo assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citable.
Finally, the documentation about the software needed to access the data will
be included by means of a text file that will be periodically updated.
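As an illustration of how such a workflow can be scripted, the sketch below creates a deposition, uploads a file and sets an embargo through Zenodo’s public REST deposition API (documented at developers.zenodo.org). The token, file name and metadata values are placeholders; the project itself does not prescribe this script:

```python
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "REPLACE_WITH_PROJECT_TOKEN"  # placeholder

# 1. Create an empty deposition.
r = requests.post(f"{ZENODO}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a file into the deposition's bucket.
with open("sparameters.txt", "rb") as fp:  # placeholder file
    requests.put(f"{deposition['links']['bucket']}/sparameters.txt",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach metadata, using the embargo support described above: the record
#    becomes publicly available automatically after the embargo date.
metadata = {
    "metadata": {
        "title": "SUMCASTEC measured S-parameters",   # placeholder title
        "upload_type": "dataset",
        "description": "Measured data; see the accompanying text file.",
        "creators": [{"name": "Palego, C.", "affiliation": "Bangor"}],
        "access_right": "embargoed",
        "embargo_date": "2019-01-01",                  # placeholder date
    }
}
requests.put(f"{ZENODO}/deposit/depositions/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
```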
### 2.2.3. Restricted area access
If an embargo is sought to give the consortium time to publish or seek IPR
protection, data will be accessible through Zenodo.org to consortium members
only until the agreed embargo expiration date.
## 2.3 Making data interoperable
### 2.3.1. Data interoperability and used vocabulary
The depositors will strive to adhere to standard formats, compliant as much as possible with available (open) software applications, following the CERIF guidelines. They will also strive to use a standard vocabulary for all data types present, to allow interdisciplinary interoperability.
## 2.4 Increase data re-use (through clarifying licences)
### 2.4.1. Data licensing
The flexible licensing capability embedded in Zenodo will be leveraged to partition the repository space into an open area and a restricted-access area, with the aim of transferring as much data as possible to the open area at the earliest convenience. Sharing of restricted-access data will be possible only with the depositor’s approval.
### 2.4.2. Reusability at the end of the project
The data produced and/or used in the project will be usable by third parties, both during and after the end of the project, insofar as it is placed in the open area of the Zenodo repository. Access by third parties will be encouraged through dissemination initiatives, for example by sharing the repository address and basic access instructions during conference presentations.
### 2.4.3. Data quality assurance process
The DMP manager will periodically assess the compliance of the repository entries with the preset format and content standards. The Plan is a living document whose content concerning data management will be updated from its creation (month 6 of the project) to the end of the project (month 42).
### 2.4.4. Re-usability duration
The length of time for which the data will remain re-usable will not be enforced by SUMCASTEC partners after the end of the project (unless it is deemed that further IPR protection steps need to be taken). However, it is foreseeable that re-usability will depend on the obsolescence of the demonstrated technology.
# Section 3: Allocation of resources
The chosen repository (ZENODO) is free of charge for educational and informational use. While no resources were specifically devoted to making SUMCASTEC’s data FAIR, all partner institutions have budgeted dissemination costs supporting Open Access publication. Therefore, they will make sure that every peer-reviewed journal article they publish is openly accessible, free of charge (Article 29.2 of the Model Grant Agreement).
_http://ec.europa.eu/research/openscience/index.cfm?pg=openaccess_
For some publishers supporting a green route to Open Access for journals, special issues and conference proceedings, a post-print version of the publication will be made available in the Zenodo repository. This is the version produced after the peer-review changes have been made, but it typically does not include the publication-specific formatting. This version may also be referred to as the author’s final draft, accepted author manuscript (AAM) or the author’s final peer-reviewed manuscript.
For example the IEEE supports this green route to Open Access for the IEEE
Transactions on Microwave Theory and Techniques.
# Section 4: Data security
## 4.1 Data recovery
By relying on the ZENODO repository, SUMCASTEC’s research output will be stored safely in the same cloud infrastructure as research data from CERN’s Large Hadron Collider, using CERN’s battle-tested repository software INVENIO (a fully customised digital library framework).
All files uploaded to Zenodo are stored in CERN’s EOS service in an 18-petabyte disk cluster. Each file copy has two replicas located on different disk servers.
## 4.2 Secure storage
Metadata and persistent identifiers in Zenodo are stored in a PostgreSQL instance operated on CERN’s Database on Demand infrastructure, with a 12-hour backup cycle and one backup sent to tape storage once a week.
## 4.3 Transfer of sensitive data
Transfer of sensitive data will occur uniquely from the University of Limoges
cloud infrastructure (Ucloud) that the consortium has chosen for internal data
storage and transfer.
# Section 5: Ethical aspects
The ethics aspects have been covered in the proposal and by obtaining (a) any ethics committee opinion required under national law and (b) any notification or authorization for activities raising ethical issues required under national and/or European law.
The documents submitted upon request by the coordinator to the Agency will be
added to the Zenodo repository.
# Section 6: Use of the DMP within the project
The plan is used by the SUMCASTEC partners as a reference for data management
(naming, providing metadata, storing and archiving) within the project each
time new project data are produced.
The project partners are introduced to the DMP and its use as part of WP5 activities. Relevant questions from partners will be specifically addressed within WP5. The work package will also provide support to the project partners on using Zenodo as the data management tool.
0747_Big Policy Canvas_769623.md
# Executive Summary
The Big Policy Canvas project participates in the Pilot on Open Research Data launched by the European Commission along with the Horizon 2020 programme. Hence, this deliverable sets out the project’s Data Management Plan, which conforms to the Guidelines on Data Management in Horizon 2020 and specifies the types of data to be generated and collected during the project, the metadata related to them, and the scheme for their archiving and preservation.
The Big Policy Canvas consortium will follow a series of dedicated activities
to publish the project outcomes so as to communicate and spread the knowledge
to all interested communities and stakeholders and get feedback from them.
The data to be generated and collected will be obtained with the collaboration of researchers, the external experts (both the Experts Committee and the Experts Advisory Group) and other collaborators. These include:
1. List of Needs
2. List of Trends
3. List of Technological Assets
4. Community Contacts
5. Community Feedback
6. Roadmap
7. Guidelines & Recommendations
In the case where these data will contain personal information - data which
relate to an individual who can be identified from those data and/or other
information which may come into the possession of any interested stakeholder,
and includes any expression of opinion or intention about the individual – Big
Policy Canvas will follow the respective, new EU General Data Protection
Regulation.
Furthermore, the Big Policy Canvas publication infrastructure consists of
several web-based publication platforms that together provide long-term open
access to all publishable, generated or collected results in the project: the
project’s website, ResearchGate, ownCloud, the BPC Knowledge base and other
prospective data archiving & publishing infrastructure.
To conclude, this document also addresses the data management process of the
project’s deliverables. The Big Policy Canvas consortium will follow the same
methodology for data sharing, storage and preservation of the forthcoming
deliverables, respecting the deliverables’ classification as this was defined
in the DoA.
## 1 Introduction
### 1.1 Purpose of the document
The present deliverable (D6.1), entitled “Data Management Plan”, is particularly associated with T6.1 “Dissemination and Communication Strategy” of WP6 and, as such, its main purpose is to document an initial data management plan for the project, highlighting the project’s data archiving and publishing infrastructure and the template under which the project’s results will be documented with respect to the management of the data they provide.
Hence, the present deliverable aims to fulfil the following main objective:
• To develop a plan for the data management of the project, identify the
infrastructure to be used for data archiving and publishing and list the
various expected project’s results (from the perspective of the data and
information they encapsulate).
### 1.2 Relation to other project work
WP6 is a horizontal component within the project work plan and aims at supervising the integrity and consistency of all dissemination efforts to achieve the goals mentioned above. In this context, Work Package 6 will maintain close collaboration with all the project’s WPs to ensure that all up-to-date information and knowledge produced within the project is effectively recognised and disseminated. A closer connection can nevertheless be identified with WP2 “Project Community Establishment, Networking Support and Project’s Engagement Activities”, which focuses on identifying key stakeholders working in the area of data-driven policy-making and policy-modelling; since WP6 may reinforce and facilitate community building, these two WPs are closely coupled.
**Figure 1-1 Relation of WP6 with the other WPs**
### 1.3 Structure of the document
The rest of this document is structured in the following major chapters:
**Chapter 2** refers to the Data Management Plan for the project, exposing the
methodological framework that will be used, the data archiving and publishing
infrastructure to be exploited and the expected project’s results.
**Chapter 3** summarizes the main conclusions of the document.
## 2 Data Management Plan
The Big Policy Canvas consortium will follow a series of dedicated activities
to publish the project outcomes so as to communicate and spread the knowledge
to all interested communities and stakeholders and get feedback from them. The goal of this section is to define how the results and research data that can be published during the project will be listed, and to describe how these data will be handled from their acquisition until after the project’s end: how they will be collected, processed or generated and following what methodology and standards; whether and how they will be shared and/or made open; and how they will be curated and preserved.
### 2.1 Methodological Framework for DMP
The general strategy for data management, in accordance with the EC Guidelines for FAIR Data Management in Horizon 2020 1 , will be based on the identification and classification of the generated and collected data, the standards and metadata to be used, their exploitation and availability, as well as their sharing, archiving and preservation. In that view, a methodology should be outlined that will make the research data generated in the context of the Big Policy Canvas project findable, accessible, interoperable and reusable.
The Big Policy Canvas DMP aims to cover the whole data life cycle. Hence, the task _T6.1 Dissemination and Communication Strategy and Data Management Plan_ in WP6 will be devoted to formulating and continuously evolving the Big Policy Canvas research data management plan in accordance with the H2020 guidelines regarding Open Research Data. In this task, the metadata, procedures and file formats for note-taking, recording, transcribing, storing visual data from participatory techniques, and anonymising semi-structured interview and focus group discussion data will be developed and agreed.
The Data Management Plan (DMP) is not required to provide detailed answers to all the questions in this first version. Rather, the DMP is intended to be a living document that will be updated over the course of the project whenever significant changes arise, such as (but not limited to) new data, new innovations, changes in the consortium members and others.
Regarding the **type of data** to be generated and collected, these will be
obtained with the collaboration of researchers, the external experts (both the
Experts Committee and Experts Advisory Group) and other collaborators.
In case these data contain personal information (data which relate to an individual who can be identified from those data and/or from other information which may come into the possession of any interested stakeholder, including any expression of opinion or intention about the individual), Big Policy Canvas will follow the respective, new EU General Data Protection Regulation 2 .
Regarding the **standards and metadata** to be used, publications (deliverables and papers) will serve as the main piece of metadata for the shared data. Therefore, the formats to be used mainly include .doc, .pdf and .xls files, which substantially reduce the amount of metadata, while other standards do not apply to this project.
In order to decide whether results (i.e. all kind of artefacts collected or
generated during the project) should be published or not, a list of questions
has been introduced by the project to facilitate their classification as
either _public (_ under the open access policy) or _non-public_ , as follows:
1. _Does a result provide significant value to others or is it necessary to understand a scientific conclusion?_
2. _Does a result include personal information that is not the author's name?_
3. _Does a result allow the identification of individuals even without the name?_
4. _Does a result include business or trade secrets of one or more partners of the project?_
5. _Does a result name technologies that are part of an ongoing, project-related patent application?_
6. _Can a result be abused for a purpose that is undesired by society in general or contradict with societal norms and the project’s ethics?_
7. _Does a result break national security interests for any project partner?_
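One way to operationalise this checklist, sketched below, is to treat question 1 as a publication trigger and any affirmative answer to questions 2–7 as a blocker. The aggregation rule is an assumption made for illustration, not a procedure defined by the project:

```python
def classify(valuable: bool, flags: dict) -> str:
    """Return 'public' or 'non-public' for a project result.

    `valuable` answers question 1; `flags` holds the answers to questions
    2-7, e.g. {"personal_info": False, "identifiable": False,
    "trade_secret": False, "patent_pending": False,
    "abuse_risk": False, "security": False}.
    """
    if not valuable:
        return "non-public"   # nothing gained by publishing (assumed rule)
    if any(flags.values()):
        return "non-public"   # any single concern blocks open access
    return "public"

print(classify(True, {"personal_info": False, "identifiable": False,
                      "trade_secret": False, "patent_pending": False,
                      "abuse_risk": False, "security": False}))  # -> public
```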
### 2.2 Data archiving and publishing infrastructure
The Big Policy Canvas publication infrastructure consists of several web-based
publication platforms that together provide long-term open access to all
publishable, generated or collected results in the project.
In the following subsections, we describe the used platforms.
#### 2.2.1 Project’s website
The project’s website will be used to provide a short description of the
project’s objective and its methodology, as well as a short presentation of
the consortium. A dedicated page for project’s public documents will be
available where the most important deliverables of the project will be
published in portable document format (.pdf). Furthermore, a blog post
section/page will be added to inform the public about events, workshops, news
and updates that are relevant to the project’s activities. The website will
also provide a link to the project’s private area, where a username/password will be requested by the site in order to upload material and comment on deliverables. All information on the Big Policy Canvas website can be accessed without creating an account; an account is only needed in order to upload material or comment on deliverables. The webpage is hosted by partner Lisbon Council (ipHost provider) at _http://www.bigpolicycanvas.eu_ . All webpage-related data will be backed up on a regular basis.
#### 2.2.2 ResearchGate
ResearchGate will be used to gather all Big Policy Canvas-related publications
and share them with interested researchers in order to further diffuse the
research done in the context of Big Policy Canvas.
ResearchGate is expected to raise awareness of the project’s publications and connect the project with relevant researchers.
#### 2.2.3 ownCloud
ownCloud will be the project’s internal document repository where all the
files exchanged within the consortium, including intermediate versions of the
deliverables, meetings’ material (agenda, notes, presentations, demos,
minutes, etc.) and any other document used for gathering inputs from the
project’s partners will be uploaded. ownCloud is hosted by ATOS and aims to
deliver out-of-the-box, collaborative content management, simplifying
capturing, sharing, and retrieval of information across virtual teams;
boosting productivity; and reducing network bandwidth requirements and email
volumes between project team members. Credentials are needed to access any of
the ownCloud material, as the platform usage is restricted only to the Big
Policy Canvas consortium and to the EC. Link:
https://repository.atosresearch.eu/index.php/apps/files/?dir=%2FBigPolicyCanvas
#### 2.2.4 Knowledge base
The Big Policy Canvas partners will set up a knowledge base that will incorporate all the project’s findings, producing a repository of value that will facilitate the rapid and effective uptake of novel technologies, tools, methodologies and applications that cover the public sector needs identified by the project and exploit available (big) data. This knowledge base is intended to act as the project’s basic infrastructure and will be constantly updated, maintaining the material identified and assessed by Big Policy Canvas both during and after the end of the project.
#### 2.2.5 Prospective data archiving & publishing infrastructure
Apart from the aforementioned publishing infrastructures, other data archiving and publishing infrastructures, used by other EU-funded research projects and/or suggested by the EC, will also be considered during the following months of the project, in accordance with the resulting needs. Indicative examples of such publishing infrastructures that will be examined for their utility to the project are Zenodo, Futurium and JoinUp.
### 2.3 Project’s results
In this section, the datasets used or produced by the Big Policy Canvas
partners are listed. In accordance with the EC guidelines for FAIR data
management, the necessary information for all the datasets that clarify the
way data are collected, documented, stored, preserved and shared are provided.
#### 2.3.1 List of Needs
##### Dataset Description
The dataset for the analysis of existing and emerging public administration needs is one of the main outcomes of WP3, and in particular of T3.1. The data collection techniques to be used will mainly be desk-based research, workshops and interviews with public administration representatives and experts from the BPC network. The dataset will be useful for all users of the project outcomes and will also act as input for the construction of the roadmap.
##### Standards and metadata
This dataset is stored in Google Sheets spreadsheets to facilitate contributions on the identified needs by all consortium members. Information on the need name, need description, need type (e.g. strategic, organisational, technical) and need source (source name, URL and the countries it addresses) is held; a sketch of such a record is given below.
##### Data Sharing
This dataset will be mainly shared through the WP3, WP4 and WP5 deliverables, which are all public.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved in the project internal repository (ownCloud)
and in the project website _http://www.bigpolicycanvas.eu/_ . Other
archiving and preservation repositories (e.g. Zenodo) will be also examined.
#### 2.3.2 List of Trends
##### Dataset Description
The dataset regarding the analysis of existing and emerging public administration trends, along with the public administration needs dataset described above, is one of the main outcomes of WP3, and in particular of T3.1. The data collection techniques to be used will mainly be desk-based research, workshops and interviews with public administration representatives and experts from the BPC network. The dataset will be useful for all users of the project outcomes and will also act as input for the construction of the roadmap.
##### Standards and metadata
This dataset is stored in Google Sheets spreadsheets to facilitate contributions on the identified trends by all consortium members. Information on the trend name, trend description, trend type (e.g. technical innovation, phenomenon, method, concept), the needs addressed by the recorded trend, and the trend source (source name, URL and the countries it addresses) is held.
##### Data Sharing
This dataset will be mainly shared through the WP3, WP4 and WP5 deliverables,
which are all public.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved in the project internal repository (ownCloud)
and in the project website _http://www.bigpolicycanvas.eu/_ . Other
archiving and preservation repositories (e.g. Zenodo) will be also examined.
#### 2.3.3 List of Technological Assets
##### Dataset Description
The dataset regarding the identification and reporting of methodologies, tools, technologies and applications originating from either the public or the private sector is created and reported in the context of WP4 (especially T4.1). The data collection techniques used refer mainly to desk-based research, workshops, focus groups with stakeholders met during events and workshops attended by BPC consortium members, interviews with IT experts from both the public and the private sector, and interviews and discussions through online communication means with public sector and policy-making experts of the BPC network. The dataset will be useful for all users of the project outcomes and will also act as input for the construction of the roadmap.
##### Standards and metadata
As in the case of the two aforementioned project results (i.e. the lists of needs and trends), this dataset is stored in Google Sheets spreadsheets to facilitate contributions on the identified technological assets by all consortium members. Information on the asset name, asset description, asset type (e.g. tool, database, platform, software), the asset’s origin (e.g. public sector, private sector, research domain), the asset’s application field, the needs served by the recorded asset, and the asset source (source name, URL and the countries it addresses) is held.
##### Data Sharing
This dataset will be mainly shared through the WP4 and WP5 deliverables, which
are all public.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved in the project internal repository (ownCloud)
and in the project website _http://www.bigpolicycanvas.eu/_ . Other
archiving and preservation repositories (e.g. Zenodo) will be also examined.
#### 2.3.4 Community Contacts
##### Dataset Description
The community contacts collected during the project will be stored in Excel spreadsheets, access to which will be restricted to the Big Policy Canvas consortium; they refer to the following sections and purposes:
* Contact users’ register for newsletter subscriptions, containing name and e-mail (both mandatory). This dataset is automatically generated when visitors sign up to the newsletter form available on the project website. The register will be used in order to send issues of the project newsletters.
* Contact user’s personal details with regard to messages sent to the website through the Contact form. It includes name, e-mail, message (all mandatory) and (possibly) phone. The contact details will be used to address the inquiry/request and to send information in the scope of the Big Policy Canvas project, after asking for and receiving his/her permission.
* Contacts identified as stakeholders that will build the Big Policy Canvas network and will provide their feedback and support in the dissemination of the relevant to the project information. These stakeholders may be either identified through web sources (e.g. targeted LinkedIn groups) and requested to provide their permission to be considered a Big Policy Canvas network member or may register on their own initiative in the collaboration portal of the project. This dataset will contain their name, surname, job function, domain field/expertise, e-mail contact and location, as well as their project interests or benefits and what they can contribute.
##### Standards and metadata
This dataset can be imported from, and exported to, .doc, .pdf or .xls files.
##### Data Sharing
The mailing list will be used for dissemination and feedback-gathering purposes, including disseminating the project newsletter to a targeted audience, inviting community contacts to a Big Policy Canvas event, requesting their opinion and feedback on a specific topic, etc. An analysis of newsletter subscribers may be performed in order to assess and improve the overall visibility of the project. As it contains personal data, access to the dataset is restricted to the Big Policy Canvas consortium.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved on ATOS’ servers.
#### 2.3.5 Community Feedback
##### Dataset Description
Community feedback refers to any kind of feedback produced by the Big Policy Canvas community, whether it comes from stakeholder interviews, questionnaires, focus groups, etc.
##### Standards and metadata
Regarding interviews and focus groups, data will be collected and stored using digital audio recording whenever interviewees permit it. In any case, the data from these sources will always be held in transcript form in the accessible .doc file format (Word). Information coming from questionnaires and any other similar written feedback can be imported from and exported to .doc, .pdf or .xls files.
##### Data Sharing
These datasets will be used to produce analytical reports on the most important public administration needs and trends, as well as to identify the most appropriate technological assets to address them. They will also be used to validate the BPC roadmap and the derived guidelines and recommendations. Due to personal data protection, only aggregated information on these datasets will be made accessible, protecting the identity of the engaged stakeholders where deemed necessary. Where copyright and IPR issues are raised, the contributors of feedback will hold the copyright, but they will be asked to assign a Creative Commons licence, so that Big Policy Canvas can freely use their contributions while respecting the terms of this licence.
**Archiving and Preservation (including storage and backup)**
These datasets will be preserved in the internal project repository (ownCloud).
#### 2.3.6 Roadmap
##### Dataset Description
This dataset will be the outcome of the analysis and matching of the identified public administration needs and trends with the identified technological assets covering specific needs, and will provide information on what has already been done and what is available at the moment. It is a result of WP5 activities, taking input from WP3 and WP4.
##### Standards and metadata
This dataset is a combination of .doc and .pdf documents.
##### Data Sharing
This dataset will be mainly shared through the WP5 deliverables (D5.1 and
D5.2), which are public.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved in the project internal repository (ownCloud)
and in the project website http://www.bigpolicycanvas.eu/. Other archiving and
preservation repositories (e.g. Zenodo) will be also examined.
#### 2.3.7 Guidelines & Recommendations
##### Dataset Description
Part of WP5 activities is also the elaboration of practical research
directions and recommendations to all interested BPC stakeholders. These
recommendations and research directions will stem from all the work
implemented during the project, especially under WP3, WP4 and WP5 activities,
building on the exchange with the community of stakeholders.
##### Standards and metadata
This dataset is a combination of .doc and .pdf documents.
##### Data Sharing
This dataset will be mainly shared through D5.3, which is a public
deliverable.
##### Archiving and Preservation (including storage and backup)
The dataset will be preserved in the project internal repository (ownCloud)
and in the project website http://www.bigpolicycanvas.eu/. Other archiving and
preservation repositories (e.g. Zenodo) will be also examined.
### 2.4 Data management of other project documents
#### 2.4.1 Project’s Deliverables
In this subsection, the data management process for the project’s deliverables that have been delivered so far is briefly described. For each of these datasets, the necessary information characterising the document (its content, its format and its metadata) and the way it has been shared and stored is provided. The Big Policy Canvas consortium will follow the same methodology for data sharing, storage and preservation of the forthcoming deliverables, respecting the deliverables’ classification as defined in the DoA.
In the following subsections, the project’s deliverables submitted so far are listed.
2.4.1.1 D1.1 – Project Management Handbook
##### Dataset Description
The deliverable defines the structures, the procedures and the supporting documents that need to be appropriately established in order to assure the quality of the project deliverables and project management activities. It identifies potential risks and a management plan to face these situations.
##### Standards and metadata
The document is stored in the cross-platform portable document format (.pdf).
Metadata will be added manually and include the title, the partner
organisations and keywords that classify this report, once the report has been
accepted by the EC.
##### Data Sharing
This document is classified as “Confidential” and thus, access to it is
restricted to the consortium and the EC.
##### Archiving and Preservation (including storage and backup)
The document, as well as all earlier versions of the document, is archived on the project-internal ownCloud repository. The repository is hosted on a server, which is backed up on a regular basis by ATOS.
2.4.1.2 D2.1 – Identified Stakeholders & Networking Activities Planning
##### Dataset Description
This deliverable describes the process of identification and clustering of
stakeholders, including the rationale for the involvement of stakeholders in
the project and the provisional identification process, as well as the future
networking activities of the project, outlining the strategic plan for
building the Big Policy Canvas community and the initial version of the
community building plan. Furthermore, it contains a preliminary list of
communities, related projects and stakeholders.
##### Standards and metadata
The document is stored in the cross-platform portable document format (.pdf).
Metadata will be added manually and include the title, the partner
organisations and keywords that classify this report, once the report has been
accepted by the EC.
##### Data Sharing
The document will be published openly on the Big Policy Canvas webpage. The
access will be free for everyone and without restrictions.
##### Archiving and Preservation (including storage and backup)
The document will be published on the Big Policy Canvas webpage. All earlier versions of the document are archived on the project’s internal ownCloud repository. The repository is hosted on a server, which is backed up on a regular basis by ATOS.
#### 2.4.2 Scientific Publications
With regard to the peer-reviewed scientific publications that will result from the project, an open access publishing approach (“gold” open access) will be followed, according to the Regulation and the Rules of Participation for H2020 3 . All publications will also become available on ResearchGate to support their further dissemination.
There are no scientific publications to document yet.
## 3 Conclusions
The deliverable at hand, entitled “Data Management Plan”, is preparatory to the activities to be conducted within WP6 on Dissemination, Communication and Sustainability, and details the Data Management Plan for the project where, apart from the presentation of the methodological framework, the project’s outcomes to be produced and disseminated are described with respect to the standards and metadata relating to them, their sharing, their archiving and their preservation. In this context, the project’s main data archiving and publishing infrastructure consists of (a) the project’s website, where a short description of the project’s objective and methodology is presented among other things, (b) ownCloud, which is the project’s internal document repository, (c) ResearchGate, which aims to gather all Big Policy Canvas-related publications, and (d) the project’s Knowledge Base, which will incorporate all the project’s findings. In what concerns the project’s results, as part of the tangible data of the project, the lists of needs, trends and technological assets are considered, as well as the project’s community contacts, their feedback and the project’s roadmap and guidelines. Of course, the project’s deliverables and scientific publications are also part of the project’s results, and so they are described as such.
0748_INNO-4-AGRIFOOD_681482.md
# Introduction
The current document constitutes the final version of the **Data Management
Plan** (DMP) elaborated in the framework of the **INNO-4-AGRIFOOD** project,
which received funding from the European Union’s Horizon 2020 Research and
Innovation Programme under Grant Agreement No 681482. INNO-4-AGRIFOOD aimed at
fostering, supporting and stimulating **online collaboration for innovation**
amongst **agri-food SMEs** across Europe. To this end, the project enhanced
the service portfolio and practices of **innovation intermediaries and SME
support networks** across Europe by providing them with a well-tailored blend
of demand-driven **value propositions** , including:
* A **new generation of value added innovation support services** aimed at empowering their agri-food SME clients to capitalise on the full potential of online collaboration for innovation.
* A **suite of smart and platform-independent ICT tools** to support and optimise the delivery of the novel online collaboration for innovation support services.
* A **series of highly interactive and flexible e-training courses** equipping them with the knowledge and skills required to successfully deliver these new services.
On top of the above mentioned, the accumulated experience and lessons learned
through INNO-4-AGRIFOOD has been translated into meaningful **guidelines** to
be diffused across Europe so as to fuel the replication of its results and
thus enable SMEs in other European sectors to tap into the promising potential
of online collaboration for innovation as well.
To this end, INNO-4-AGRIFOOD brought together and was implemented by a well-balanced and complementary **consortium**, which comprised **7 partners across 6 different European countries**, as presented in the following table.
## _Table 1: INNO-4-AGRIFOOD consortium partners_
<table>
<tr>
<th>
**Partner**
**No**
</th>
<th>
**Partner Name**
</th>
<th>
**Partner short name**
</th>
<th>
**Country**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Q-PLAN INTERNATIONAL ADVISORS (Coordinator)
</td>
<td>
Q-PLAN
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
2
</td>
<td>
Agenzia per la Promozione della Ricerca Europea
</td>
<td>
APRE
</td>
<td>
Italy
</td> </tr>
<tr>
<td>
3
</td>
<td>
IMP³rove – European Innovation Management Academy EWIV
</td>
<td>
IMP³rove
</td>
<td>
Germany
</td> </tr>
<tr>
<td>
4
</td>
<td>
European Federation of Food Science and Technology
</td>
<td>
EFFoST
</td>
<td>
Netherlands
</td> </tr>
<tr>
<td>
5
</td>
<td>
BioSense Institute
</td>
<td>
BIOS
</td>
<td>
Serbia
</td> </tr>
<tr>
<td>
6
</td>
<td>
National Documentation Centre
</td>
<td>
EKT/NHRF
</td>
<td>
Greece
</td> </tr>
<tr>
<td>
7
</td>
<td>
Europa Media szolgaltato non profitkozhasznu KFT
</td>
<td>
EM
</td>
<td>
Hungary
</td> </tr> </table>
In this context, the **final version of the DMP** presents the data management principles set forth in the framework of INNO-4-AGRIFOOD by its consortium partners (Chapter 2). Moreover, it builds upon the interim version of the DMP and provides an updated list of the datasets that have been processed and/or produced during the project, along with an up-to-date description of each one (Chapter 3), addressing crucial aspects pertaining to their management and taking into account the “ _Guidelines on Data Management in Horizon 2020_ ” provided by the European Commission (EC).
# Data management principles
## Data archiving and preservation
The datasets produced by INNO-4-AGRIFOOD that were deemed open for sharing and re-use are currently deposited in Zenodo ( _www.zenodo.org_ ), an open data repository, with a view to increasing data interoperability. This data repository, created by OpenAIRE and CERN, has been chosen to enable open access to the project’s open data free of charge. In fact, Zenodo builds and operates a simple service that enables researchers, scientists, EU projects and institutions, among others, to share and showcase research results (including datasets and publications) that are not part of the existing institutional or subject-based repositories of the research communities. In this respect, the Coordinator (Q-PLAN) has uploaded all open datasets to Zenodo, while all partners have disseminated them through their professional networks and other communication channels.
_Figure 1: CC BY-NC-ND 4.0_
On top of the aforementioned, INNO-4-AGRIFOOD has published its openly available data under the **Creative Commons licencing scheme** to foster their re-use and build an equitable and accessible environment for them. In fact, Zenodo provided the opportunity to publish the project’s data under a preferable Creative Commons licence. With that in mind, **the consortium has decided that the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence is an appropriate licensing scheme to ensure the widest re-use of the data**, while also taking into account the importance of recognising both the source and the authority of the data.
## Metadata and standards
All open datasets produced by INNO-4-AGRIFOOD are accompanied by data that
facilitates their understanding and re-use by interested stakeholders. This
includes basic details that assist interested stakeholders in locating the
dataset, such as its format and file type, as well as meaningful information
about who created or contributed to the dataset, its name and reference, its
date of creation and the conditions under which it may be accessed. To this
end, the project followed a metadata-driven approach so as to increase the
searchability of its datasets. With that in mind, the Zenodo data repository
created appropriate metadata to accompany the datasets uploaded to it,
extending their reach to a wider audience of interested stakeholders.
Moreover, complementary documentation (when needed) also encompasses details
on the methodology used to collect, process and/or generate the datasets,
definitions of variables, vocabularies and units of measurement, as well as
any assumptions made. Finally, whenever possible, consortium partners have
identified and utilised existing standards.
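For illustration only, the snippet below sketches how such descriptive metadata may look when expressed with the field names of Zenodo's deposition metadata schema; the concrete title, creator and keywords are hypothetical examples, not the actual record.

```python
# Hypothetical descriptive metadata for one open dataset, expressed with
# field names from Zenodo's deposition metadata schema (illustration only).
dataset_metadata = {
    "title": "INNO-4-AGRIFOOD - Service testing metrics",  # example title
    "upload_type": "dataset",
    "description": "Anonymised and aggregated metrics collected during the "
                   "three real-life deployment rounds of the project.",
    "creators": [{"name": "Q-PLAN INTERNATIONAL ADVISORS"}],
    "keywords": ["agri-food", "SME", "online collaboration", "innovation"],
    "access_right": "open",
    "license": "cc-by-nc-nd-4.0",  # licence identifier may differ on Zenodo
}
```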
## Data sharing
The Coordinator (Q-PLAN), in collaboration with the respective Work Package
Leaders of the project, determined how the data collected and produced in the
framework of INNO-4-AGRIFOOD has been shared. This included the definition of
access procedures and potential embargo periods, along with any software
and/or other tools required for data sharing and re-use. Where a dataset could
not be shared, the explicit reasons for this are clearly stated (e.g. ethical,
rules on personal data, intellectual property, commercial, privacy-related,
security-related). Consent has been requested from all data providers 1 in
order to allow their data to be shared. In this light, only anonymised and
aggregated data has been shared, to ensure that data providers cannot be
identified in any reports, publications and/or datasets resulting from the
project. The project partners have undertaken the necessary procedures to
anonymise the data in such a way that the data providers are no longer
identifiable.
## Ethical considerations
INNO-4-AGRIFOOD entailed activities which involved the **processing of data
that did not fall into any special category of personal data** 2 (i.e. non-
sensitive data). The collection/generation of this data from individuals
participating in the project's activities has been based upon a **process of
informed consent**. In fact, any personal data collected/generated in the
framework of INNO-4-AGRIFOOD has been processed according to the principles
laid out by **Regulation (EU) 2016/679 of the European Parliament and of the
Council of 27 April 2016** 3 4 on the protection of natural persons with
regard to the processing of personal data and on the free movement of such
data, which entered into force in May 2018, aiming to protect individuals'
rights and freedoms in relation to the processing of their personal data while
also facilitating the free flow of such data within the European Union. Along
these lines, **data was collected/generated only for specified, explicit and
legitimate purposes** relative to the project's objectives. Moreover, all
project partners tasked with processing data during the course of
INNO-4-AGRIFOOD fully abided by their respective applicable national as well
as EU regulations and were able at any time to demonstrate their compliance
during the entire timespan of the project (principle of accountability).
# Data management plan
## Overview
INNO-4-AGRIFOOD placed special emphasis on the management of the valuable data
that has been collected, processed and generated throughout its activities. In
this respect, the table below provides a list of the datasets identified by
INNO-4-AGRIFOOD consortium members, indicating the name of the dataset, its
linked Work Package and the respective leading consortium member (i.e. Work
Package Leader) as well as its status compared to the previous version of the
DMP.
### _Table 2: List of INNO-4-AGRIFOOD datasets_
<table>
<tr>
<th>
**No**
</th>
<th>
**Dataset Name**
</th>
<th>
**Linked**
**Work**
**Package**
</th>
<th>
**Work**
**Package**
**Leader**
</th>
<th>
**Status**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Analysis of the agri-food value chain
</td>
<td>
WP1
</td>
<td>
BIOS
</td>
<td>
\-
</td> </tr>
<tr>
<td>
2
</td>
<td>
Needs of agri-food SMEs in terms of online collaboration for innovation
support
</td>
<td>
WP1
</td>
<td>
BIOS
</td>
<td>
\-
</td> </tr>
<tr>
<td>
3
</td>
<td>
Skills of innovation intermediaries in terms of supporting online
collaboration for innovation
</td>
<td>
WP1
</td>
<td>
BIOS
</td>
<td>
\-
</td> </tr>
<tr>
<td>
4
</td>
<td>
Outcomes of the INNO-4-AGRIFOOD Cocreation Workshop – E-learning
</td>
<td>
WP2
</td>
<td>
IMP³rove
</td>
<td>
Updated 5
(M30)
</td> </tr>
<tr>
<td>
5
</td>
<td>
Case-based training material supplemented by theoretical information on the
topic
</td>
<td>
WP2
</td>
<td>
IMP³rove
</td>
<td>
\-
</td> </tr>
<tr>
<td>
6
</td>
<td>
Outcomes of the INNO-4-AGRIFOOD Cocreation Workshop – Services and tools
</td>
<td>
WP3
</td>
<td>
Q-PLAN
</td>
<td>
Updated 4
(M30)
</td> </tr>
<tr>
<td>
7
</td>
<td>
Pool of agri-food SMEs
</td>
<td>
WP4
</td>
<td>
APRE
</td>
<td>
Updated
(M30)
</td> </tr>
<tr>
<td>
8
</td>
<td>
Roster of specialists database
</td>
<td>
WP4
</td>
<td>
APRE
</td>
<td>
Updated
(M30)
</td> </tr>
<tr>
<td>
9
</td>
<td>
Service testing metrics
</td>
<td>
WP4
</td>
<td>
APRE
</td>
<td>
Updated
(M30)
</td> </tr>
<tr>
<td>
10
</td>
<td>
User data and learning curve of e-learning participants
</td>
<td>
WP5
</td>
<td>
EM
</td>
<td>
Updated 6
(M30)
</td> </tr>
<tr>
<td>
11
</td>
<td>
Feedback derived from e-learning participants
</td>
<td>
WP5
</td>
<td>
EM
</td>
<td>
Updated 5
(M30)
</td> </tr>
<tr>
<td>
12
</td>
<td>
Awareness creation, dissemination and stakeholder engagement
</td>
<td>
WP6
</td>
<td>
EFFoST
</td>
<td>
Updated
(M30)
</td> </tr> </table>
With the identified datasets of INNO-4-AGRIFOOD in mind, the current section
of the DMP provides meaningful information for each one, including:
* The name of the dataset.
* The type of study in the frame of which the dataset is produced.
* A concise description of the dataset.
* The methodology and tools employed for collecting/generating the data.
* The format and volume of the dataset.
* Any standards that will be used (if applicable) and/or metadata to be created.
* Potential stakeholders for whom the data may prove useful.
* Provisions regarding the confidentiality of the data.
### Important remark
The information provided within this section reflects the current views and
plans of INNO-4-AGRIFOOD consortium partners at this final stage of the
project (M30). The template employed for collecting the information from
project partners is annexed to this document.
## Analysis of the agri-food value chain
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Analysis of the agri-food value chain.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Agri-food value chain analysis aimed at revealing the primary value chain
areas and SME actors to be targeted by the project based on both secondary and
primary research.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Data derived from interviews with members of the Advisory and Beneficiaries
boards of INNO-4-AGRIFOOD, providing their opinions about the needs of SMEs
with respect to innovation support and the opportunities for online
collaboration for innovation.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
A semi-structured questionnaire was employed in order to collect qualitative
data during the interviews.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The dataset is stored within a .zip file which comprises 7 distinct documents
in .docx format. The total size of the (uncompressed) dataset is 1.12 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Each document of the dataset is accompanied by descriptive metadata including
title, author and keywords.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset provided INNO-4-AGRIFOOD consortium members with valuable
information from the perspective of agri-food stakeholders, fuelling and
complementing the agri-food value chain analysis conducted in the context of
the project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The outcomes of the study that produced the dataset have been published
through the _Agri-food Value Chain Analysis Report_, available at the web
portal of the project. The report contains only aggregated data so as to
ensure the confidentiality of the interviewees and their responses. The
dataset itself, used only in the context of the project, is not intended for
sharing and/or re-use, with a view to safeguarding the privacy of
interviewees. With that in mind, the dataset has been archived at the private
server of the Coordinator (Q-PLAN) and will be preserved for at least 5 years
following the end of the project, before eventually being deleted.
</td> </tr> </table>
## Needs of agri-food SMEs in terms of online collaboration for innovation
support
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Needs of agri-food SMEs in terms of online collaboration for innovation
support.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Interview-based survey of representatives of agri-food SMEs as well as
innovation intermediaries, aimed at revealing the needs, level of readiness
and profiles of agri-food SMEs in terms of online collaboration for
innovation.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset contains the responses (mostly qualitative) provided by
interviewees who participated in the study, addressing different aspects of
the current situation in the EU with respect to online collaboration for
innovation amongst SMEs in the agri-food sector as well as diverse topics
relevant to collaborating for innovation by employing online means (e.g.
specific attributes of platforms and tools needed for online collaboration,
support that SMEs may seek or need in this respect, etc.).
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
The collection of the data was realised through a semi-structured
questionnaire administered to survey participants in the frame of interviews.
An online (web) form was employed by interviewers in order to submit a record
to the dataset.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The dataset has been stored in spreadsheet (.xls) and .pdf formats, both of
which contain the 52 replies derived from the interview-based survey. The
size of the dataset in .xls format is 0.17 MB, while in .pdf format it is
0.76 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Descriptive metadata (i.e. title, author and keywords) have been created to
accompany the dataset.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The insights derived from the analysis of the data have been key in the
process of co-creating and developing the novel services and tools of
INNO-4-AGRIFOOD according to the needs of agri-food SMEs and their innovation
intermediaries.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The findings and conclusions of the study, based on the processing and
analysis of the data within this dataset, have been openly shared through the
_Agri-food SME Profiling and Needs Analysis Report_, which is published at
the web portal of INNO-4-AGRIFOOD. The report contains only aggregated data
to ensure the confidentiality of the interviewees and their responses. The raw
data collected through the interview-based survey will not be shared and/or
re-used (outside the framework of the project and/or beyond its completion) to
safeguard the privacy of data providers. Hence, the dataset, currently
archived at the private server of the Coordinator (Q-PLAN), shall be preserved
for at least 5 years following the end of the project, before eventually being
deleted.
</td> </tr> </table>
## Skills of innovation intermediaries in terms of supporting online
collaboration for innovation
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Skills of innovation intermediaries in terms of supporting online
collaboration for innovation.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Online survey of staff of innovation intermediaries and SME support networks
aimed at assessing the current level of their knowledge and skills in
providing support to the online collaboration for innovation endeavours of
agri-food SMEs.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The data collected comprises predominantly quantitative responses provided by
the participants of the online survey, including demographic information as
well as their perceived level of skills (gauged via a 5-point Likert scale) in
different skill areas, including the agri-food industry, support services,
collaboration, innovation management and soft skills.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
A structured questionnaire was used in order to collect the data. The
questionnaire was self-administered and survey participants were able to
access it online by following a dedicated link.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The dataset has been stored in standard spreadsheet format (.xlsx). In total,
79 respondents from the EU and 23 from the rest of the world filled in and
successfully submitted the questionnaire, resulting in 102 responses; the
dataset contains one record per response. The size of the dataset is 53 KB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
The dataset has been accompanied by descriptive metadata (i.e. title, author
and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset has been of great use to INNO-4-AGRIFOOD consortium members,
enabling them to unearth the insight required to set the stage for the need-
driven co-creation and development of the project’s e-learning curriculum and
modules.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The _Skills Mapping and Training Needs Analysis Report_ , available at the
web portal of the project, provides public access to the findings of the study
in the frame of which this dataset has been produced. Moreover, the report
contains only aggregated data so as to ensure the confidentiality of the
interviewees and their responses. Records of the database are available only
to consortium partners and are not intended for sharing and/or re-use, so as
to ensure the privacy of the study’s participants. The dataset itself is
archived at the private server of the Coordinator (Q-PLAN) and will be
preserved for at least 5 years following the end of the project, before
eventually being deleted.
</td> </tr> </table>
## Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – E-learning
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – E-learning.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
The INNO-4-AGRIFOOD Co-creation Workshop, which was held on 15 September 2017
in Amsterdam, the Netherlands, in order to co-create, along with stakeholders
of the agri-food ecosystem, the e-learning offer of INNO-4-AGRIFOOD.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset encompasses the feedback as well as the innovative concepts and
ideas provided by participants of the INNO-4-AGRIFOOD Co-creation Workshop
during the structured activities of the co-creative session dedicated to the
e-learning offer of the project. The data is mostly textual (short sentences)
and refers to (i) the appropriateness of the e-learning material developed at
the time of the workshop and (ii) supplementary ideas for consideration in the
process of developing the e-learning material of the project.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
In addition to the minutes recorded throughout the co-creation workshop, the
participants' input from the group discussions was tabulated for each of the
draft e-learning modules (which were provided as background information) using
pre-prepared templates. Comments of relevant consortium members on each module
were added remotely after the event. Conclusions were then drawn on the
content and weighting of elements within each module.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The data collected have been integrated within the report on the _Outcomes of
the INNO-4-AGRIFOOD Co-creation Workshop: Curriculum concept and key training
topics_. The report is stored in .pdf format and its size is approximately
1.11 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
The report in which the dataset has been integrated includes meaningful
information with respect to the context in which the data have been collected
as well as the methodology for collecting them. Descriptive metadata,
including the title and type of file has been created to accompany the report.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
Innovation support service designers and providers as well as relevant
trainers and educators would find the dataset most useful, especially those
who operate within the agri-food sector or are interested to do so.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The results of the analysis have been openly shared through the public report
on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop: Curriculum
concept and key training topics_, which is available free of charge at the
INNO-4-AGRIFOOD web portal.
</td> </tr> </table>
## Case-based training material supplemented by theoretical information on
the topic
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Case-based training material supplemented by theoretical information on the
topic.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Development of educative case studies based on the services provided in the
framework of INNO-4-AGRIFOOD blended with theoretical training building upon
existing material available to partners either from previous work or from open
sources.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The data collected includes simple responses (plain text in English) provided
in the frame of interviews.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Data required for the development of the case studies has been collected with
the help of semi-structured questionnaires administered during interviews by
project partners. Additional data has been gathered from the existing
knowledge base of project partners (e.g. previous project documentations,
previous service provision documentations, etc.) and/or from OER repositories
as well as other third-party secondary data sources.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
Data collected during the interviews conducted in the framework of case study
development has been stored as video files (.mp4) with a total size of 1.8 GB,
and as .pptx files with a size of 30 MB containing basic information about the
companies that received the I4A services. The scripts of the case studies
stemming from the interviews have been preserved in a .xlsx file with a total
volume of 1 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
All e-learning material developed on the basis of the case studies is SCORM
compliant, to enable its packaging and facilitate the re-use of the learning
objects. The Articulate software, which has been used to create the e-learning
material of the project, has generated the required Content Aggregation
Metadata File.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset would be quite useful for innovation intermediaries and
consultants as well as educators who would use this case-based e-learning
material in their own activities.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The e-learning material is currently openly available to all interested
stakeholders through the web portal of INNO-4-AGRIFOOD, protected under the
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
Public License (CC BY-NC-ND 4.0). By doing so, the e-learning content can be
freely used by any interested stakeholder for non-commercial purposes only,
while altering, transforming and/or building upon this material is not
allowed.
</td> </tr> </table>
## Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – Services and tools
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop – Services and tools.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
The INNO-4-AGRIFOOD Co-creation Workshop, which was held on 15 September 2017
in Amsterdam, the Netherlands, with a view to co-creating innovative ideas and
designs for the innovation support services and smart tools of the project,
building upon the valuable contribution of diverse agri-food stakeholders.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The data includes innovative concepts and ideas provided by the participants
of the workshop’s co-creative session that focused on the innovation support
services and smart tools of INNO-4-AGRIFOOD.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
The data were collected during the INNO-4-AGRIFOOD Co-creation Workshop and
documented as transcript notes.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The data have been integrated into the report on the _Outcomes of the
INNO-4-AGRIFOOD Co-creation Workshop: Innovation support services and ICT
tools_, which is stored in .pdf format. The size of the file is 5.53 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
The report in which the data have been incorporated provides insights into the
objectives and methodology of the INNO-4-AGRIFOOD Co-Creation Workshop,
elaborates on the outcomes of its session on the services and tools and
translates the aforementioned outcomes into meaningful conclusions and key
potential characteristics for innovation support services and tools. Basic
descriptive metadata are provided along with the report (i.e. title and type
of file).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset has contributed significantly to developing the services, smart
tools and e-learning modules of the project in line with the needs and
preferences of agri-food stakeholders in the context of INNO-4-AGRIFOOD.
Beyond the context of the project, innovation support service designers and
providers as well as ICT application developers and training providers could
potentially find the dataset and its accompanying report useful.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The report on the _Outcomes of the INNO-4-AGRIFOOD Co-creation Workshop:_
_Innovation support services and ICT tools_ , which includes the dataset, is
published on the web portal of the project. The report has been published
incorporating only anonymised and aggregated data.
</td> </tr> </table>
## Pool of agri-food SMEs
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Pool of agri-food SMEs.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Deployment of INNO-4-AGRIFOOD services and tools in real-life contexts.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset consists of 2 separate lists of agri-food SMEs which may have been
interested in benefiting from the innovation support services of the project
in the framework of its 3 iterative testing, validation and fine-tuning
rounds. The 1st list includes SMEs which are either clients or among the
professional network of INNO-4-AGRIFOOD consortium partners, and to which
services may be delivered by these partners. The 2nd list includes SMEs which
have been identified through other channels (e.g. through INNO-4-AGRIFOOD's
Beneficiaries and Advisory Boards, the online contact form of the project's
web portal, etc.), and to which services may have been delivered by external
innovation consultants.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
In addition to the professional networks of INNO-4-AGRIFOOD (1st list of the
dataset), several sources have been employed to identify suitable SMEs to
participate in the real-life deployment of the novel services and tools of the
project, including (among others) networks and member organisations of
INNO-4-AGRIFOOD's Advisory and Beneficiaries Boards as well as interested SMEs
which participated in the surveys launched in the context of the project or
expressed their interest through the online contact form of its web portal
(2nd list of the dataset).
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
A spreadsheet (in .xlsx format) with two separate tabs (one for each of the
two lists described above) has been used to store the Pool of agri-food SMEs,
which contains 198 records and reaches a volume of 19 KB. The complete dataset
contains the following data for each recorded SME: (1) for the first list:
(i) name of the SME; (ii) contact person (name and surname); (iii) country;
(iv) sector; (2) for the second list: (i) SME name / name and surname of the
person; (ii) country. In the case of the 1st list, information about the
INNO-4-AGRIFOOD consortium partner connected to a recorded SME has also been
included. A minimal sketch of this two-tab structure is provided after this
table.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Descriptive and structural metadata has been created and provided along with
the dataset.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset would be most useful for consortium partners during the real-life
deployment activities of the project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The dataset is stored at the private server of the Coordinator (Q-PLAN) and
will be preserved for at least 5 years following the end of the project,
before eventually being deleted. Copies of the dataset are available only to
relevant INNO-4-AGRIFOOD consortium partners and will not be disclosed or used
for purposes outside the framework of the project, unless otherwise allowed by
the external stakeholder that has provided the respective data.
</td> </tr> </table>
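For illustration, a minimal sketch (with hypothetical field values) of how the two-tab spreadsheet described above could be assembled with pandas:

```python
# Minimal sketch of the two-tab "Pool of agri-food SMEs" spreadsheet;
# all field values below are hypothetical examples.
import pandas as pd

network_smes = pd.DataFrame([{
    "SME name": "Example Farm Ltd", "Contact person": "Jane Doe",
    "Country": "Greece", "Sector": "Dairy", "Connected partner": "Q-PLAN",
}])
other_smes = pd.DataFrame([{
    "SME name / contact": "John Smith", "Country": "Italy",
}])

# One tab per list, mirroring the structure described in the table above.
with pd.ExcelWriter("pool_of_agrifood_smes.xlsx") as writer:
    network_smes.to_excel(writer, sheet_name="Partner networks", index=False)
    other_smes.to_excel(writer, sheet_name="Other channels", index=False)
```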
## Roster of specialists database
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Roster of specialists database.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Involvement of trained staff of innovation intermediaries and SME support
networks in the deployment of INNO-4-AGRIFOOD services and tools in real-life
contexts.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
A pool of appropriately qualified SME consultants who participated in the
testing of the project's services and tools by providing them to agri-food
SMEs. The Roster of Specialists Database (RSD) encompasses valuable
information about the recorded consultants, such as demographics and contact
details for them and their affiliated organisations, data about their progress
towards completing the project's e-learning offer and providing its services,
as well as miscellaneous data that helped INNO-4-AGRIFOOD consortium members
to better match them with appropriate agri-food SMEs to service.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
The RSD has been populated with consultants who successfully completed the
INNO-4-AGRIFOOD e-learning courses addressing the project's services,
participated in the project's 1st webinar and/or were personally trained by a
designated INNO-4-AGRIFOOD Coach. The database has been enriched as the
real-life deployment activities of INNO-4-AGRIFOOD progressed and the staff of
innovation intermediaries and SME support networks gained experience with the
project's services and e-learning modules.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The Roster of Specialists Database is stored in a standard spreadsheet format
and comprises 55 records of SME consultants across the EU. The dataset file
has a volume of 160 KB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Descriptive and structural metadata have been created to accompany the dataset
so as to increase its discoverability among the interested stakeholders.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
Agri-food SMEs who would like to receive support from innovation consultants
specialised in supporting online collaboration for innovation.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
Records of the dataset remained for internal use only during the lifecycle of
the project. With that in mind, only a copy of the dataset is hosted in the
Coordinator’s (Q-PLAN) private server and will be preserved for at least 5
years following the completion of the project, before eventually being
deleted.
</td> </tr> </table>
## Service testing metrics
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Service testing metrics.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Testing, validation and fine-tuning of the INNO-4-AGRIFOOD services and smart
tools.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset includes data collected during the iterative testing, validation
and fine-tuning of the INNO-4-AGRIFOOD services and smart tools, aimed at
managing ambiguity during the various iterations as well as measuring the
impact of improvements after each iteration. In particular, it contains both
qualitative and quantitative data on (i) the satisfaction of SMEs that
received INNO-4-AGRIFOOD services, (ii) the satisfaction of SMEs and
innovation consultants that have used the INNO-4-AGRIFOOD smart tools, (iii)
the impact of the INNO-4-AGRIFOOD services on the business of the SMEs that
received them, (iv) the activities performed in the framework of each
INNO-4-AGRIFOOD service provided in the context of the project, and (v)
different aspects of the services and smart tools that can be further
streamlined according to users’ needs and expectations.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
In line with the _INNO-4-AGRIFOOD Metrics Model_, this dataset has been
fuelled by the respective surveys that ran over the 3 real-life deployment
rounds of the project's services and smart tools, as well as by the service
stories that were produced under this framework. All surveys employed
questionnaire-based tools aimed at mining both qualitative and quantitative
data from agri-food SMEs and innovation consultants.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The dataset is stored in standard spreadsheet format (.xlsx). The volume of
the dataset's final version is 250 KB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Descriptive metadata has been attached to the dataset (such as title,
abstract, author, type of data, data collection method and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
Innovation support service designers and providers may find use in this
dataset.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
All records are openly available to all interested stakeholders through the
Zenodo data repository and incorporate only anonymised data, so as to ensure
data providers' confidentiality. The dataset is also published under a
CC BY-NC-ND 4.0 licence.
</td> </tr> </table>
## User data and learning curve of e-learning participants
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
User data and learning curve of e-learning participants.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Provision of e-training courses to staff of innovation intermediaries and SME
support networks.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset contains demographic data of the people who have registered to the
e-learning platform of INNO-4-AGRIFOOD and their affiliated organisations
along with data reflecting their e-learning progress.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Data has been provided voluntarily by the individuals who registered to the
INNO-4-AGRIFOOD e-learning platform through a dedicated online form which
aimed at creating the profile necessary for their registration. Moreover, the
e-learning platform automatically collected all necessary data about the
online activities of the participants, who accessed the system via a unique
username-password combination.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The MySQL database stores the table definitions in _.frm_ files, while for
InnoDB tables the data is stored in _.ibd_ files. The data is exported to a
standard spreadsheet format (.csv or other); a minimal export sketch is given
after this table. 613 registered participants have been recorded within the
dataset, resulting in a file of 1 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
The dataset is not intended for sharing and re-use and is therefore not
accompanied by metadata.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset has been used by selected INNO-4-AGRIFOOD consortium members for
analysing the learning behaviour of the e-learning participants in the frame
of the project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
The data of e-learning participants is confidential and used only in the
context of the project. With that in mind, the dataset is currently stored by
Europa Media, as the party responsible for the e-learning platform, and will
be preserved for at least 5 years following the completion of the project,
before eventually being deleted. The information is stored in accordance with
the GDPR. Moreover, the administrators of the e-learning platform have access
to the data provided by e-learning participants for 5 years after the project
ends, apart from their password information (which is known only to the
e-learning participants themselves). E-learning participants could configure
their profile to indicate which data they would like to share openly. Still,
the data on the participants' learning curve (e.g. statistics on accessing the
e-learning platform, following existing material, concluding tests, etc.) is
accessible only to the administrators of the e-learning platform. Any
meaningful analysis or conclusions drawn from these data has been shared in
the relevant reports produced by the project.
</td> </tr> </table>
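As referenced in the table above, a minimal sketch of the export step follows; the connection details and table name are hypothetical, and the mysql-connector-python package is assumed.

```python
# Minimal sketch: exporting one table of the e-learning platform's MySQL
# database to CSV. Credentials and the table name are placeholders.
import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="elearning", password="secret", database="elearning"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM participants")  # hypothetical table name

with open("participants.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([column[0] for column in cursor.description])  # header row
    writer.writerows(cursor.fetchall())

cursor.close()
conn.close()
```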
## Feedback derived from e-learning participants
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Feedback derived from e-learning participants.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Testing, validation and fine-tuning of the e-learning environment.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
The dataset includes feedback on technical and content-related aspects of the
e-learning environment of INNO-4-AGRIFOOD (including the e-learning platform
as well as its constituent e-learning modules), gathered from e-learning
participants with a view to evaluating its functionalities, graphics and
content.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Data has been provided voluntarily by e-learning participants of
INNO-4-AGRIFOOD via dedicated questionnaire-based feedback forms. The
questionnaires utilised by the feedback forms employed a Likert scale (1 -
Strongly Disagree to 5 - Strongly Agree) so that participants could quickly
provide their opinion on the functionalities and content of the different
e-learning modules as well as the platform as a whole. A minimal sketch of how
such responses can be aggregated is given after this table.
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The MySQL database stores the table definitions in .frm files, while for
InnoDB tables the data is stored in .ibd files. The data is exported to a
standard spreadsheet format (.csv or other). 613 registered participants have
been recorded within the dataset, resulting in a file of 1 MB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
As the dataset is closed (available only to Europa Media), no metadata has
been created to accompany it.
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset has been used by selected INNO-4-AGRIFOOD consortium members to
analyse user experience on the e-learning environment and thus provide the
basis for further improvement in the future iterations in the context of the
project.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
In order to ensure the privacy of the participants who provided their
feedback, the records of the database have remained confidential. With that in
mind, the dataset is currently stored within Europa Media’s private server and
will be preserved for at least 5 years following the completion of the
project, before eventually being deleted. Only the administrators of the
e-learning platform can access copies of the feedback provided.
</td> </tr> </table>
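As referenced in the table above, the following minimal sketch shows how such Likert-scale responses could be aggregated before reporting; the column names and values are hypothetical examples.

```python
# Minimal sketch: aggregating Likert-scale feedback (1-5) per e-learning
# module. Column names and values are hypothetical examples.
import pandas as pd

feedback = pd.DataFrame({
    "Module": ["M1", "M1", "M2", "M2"],
    "Content (1-5)":   [4, 5, 3, 4],
    "Usability (1-5)": [5, 4, 4, 4],
})

# Only aggregated scores (never individual responses) would leave the
# platform team, in line with the confidentiality provisions above.
print(feedback.groupby("Module").mean())
```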
## Awareness creation, dissemination and stakeholder engagement
<table>
<tr>
<th>
**Dataset name**
</th>
<th>
Awareness creation, dissemination and stakeholder engagement.
</th> </tr>
<tr>
<td>
**Type of study**
</td>
<td>
Assessment of the results and impact of the awareness creation, dissemination
and stakeholder engagement activities of the project employing an indicator-
based framework.
</td> </tr>
<tr>
<td>
**Dataset description**
</td>
<td>
Data collected during INNO-4-AGRIFOOD with a view to measuring and assessing
the performance and results of the project in terms of awareness creation,
dissemination and stakeholder engagement.
</td> </tr>
<tr>
<td>
**Methodologies for data collection / generation**
</td>
<td>
Primary data has been collected through the dissemination activity reports of
project partners regarding media products, events, external events, general
publicity, etc. Third-party tools have been employed as well (e.g. Google
Analytics, social media statistics, etc.).
</td> </tr>
<tr>
<td>
**Format and volume of the dataset**
</td>
<td>
The collected data are preserved in a spreadsheet format (.xlsx). The total
size of the file is 21 KB.
</td> </tr>
<tr>
<td>
**Metadata and**
**standards**
</td>
<td>
Descriptive metadata has been created and attached to the dataset (such as
title, type of data, data collection method and keywords).
</td> </tr>
<tr>
<td>
**For whom might the dataset be useful?**
</td>
<td>
The dataset would be meaningful to the European Commission as well as
researchers who study relevant aspects of EU-funded projects.
</td> </tr>
<tr>
<td>
**Confidentiality**
</td>
<td>
All records are openly available to all interested stakeholders through the
Zenodo data repository and incorporate only anonymised and aggregated data,
so as to ensure data providers' confidentiality. The dataset has been
published under the CC BY-NC-ND 4.0 licence so as to ensure its widest re-use.
</td> </tr> </table>
Flourish (644227) Deliverable D9.6
```
uint8[] data        # Actual point data, size is (row_step * height)
bool is_dense       # True if there are no invalid points
```
sensor_msgs/PointCloud2 contains the message type sensor_msgs/PointField,
which is detailed below.

sensor_msgs/PointField
```
# This message holds the description of one point entry in the
# PointCloud2 message format.
uint8 INT8    = 1
uint8 UINT8   = 2
uint8 INT16   = 3
uint8 UINT16  = 4
uint8 INT32   = 5
uint8 UINT32  = 6
uint8 FLOAT32 = 7
uint8 FLOAT64 = 8

string name      # Name of field
uint32 offset    # Offset from start of point struct
uint8  datatype  # Datatype enumeration, see above
uint32 count     # How many elements in the field
```
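Although not part of the message definition itself, a short sketch may illustrate how these field descriptions are used in practice; it assumes ROS 1's sensor_msgs Python helpers, and the topic subscription wiring is omitted.

```python
# Minimal sketch: unpacking a PointCloud2 using the layout described by its
# PointField entries (ROS 1). Topic subscription wiring is omitted.
from sensor_msgs import point_cloud2
from sensor_msgs.msg import PointCloud2

def handle_cloud(cloud_msg: PointCloud2) -> None:
    # read_points interprets data[] according to the PointField offsets and
    # datatypes, yielding one tuple per point.
    for x, y, z in point_cloud2.read_points(
            cloud_msg, field_names=("x", "y", "z"), skip_nans=True):
        print(x, y, z)
```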
Cameras: sensor_msgs/Image
```
# This message contains an uncompressed image
# (0, 0) is at top-left corner of image
#
Header header        # Header timestamp should be acquisition time of image
                     # Header frame_id should be optical frame of camera
                     # origin of frame should be optical center of camera
                     # +x should point to the right in the image
                     # +y should point down in the image
                     # +z should point into to plane of the image
                     # If the frame_id here and the frame_id of the CameraInfo
                     # message associated with the image conflict
                     # the behavior is undefined

uint32 height        # image height, that is, number of rows
uint32 width         # image width, that is, number of columns

# The legal values for encoding are in file src/image_encodings.cpp
# If you want to standardize a new string format, join
# ros-users@lists.sourceforge.net and send an email proposing a new encoding.

string encoding      # Encoding of pixels -- channel meaning, ordering, size
                     # taken from the list of strings in
                     # include/sensor_msgs/image_encodings.h

uint8 is_bigendian   # is this data bigendian?
uint32 step          # Full row length in bytes
uint8[] data         # actual matrix data, size is (step * rows)
```
GPS: sensor_msgs/NavSatFix
```
# Navigation Satellite fix for any Global Navigation Satellite System
#
# Specified using the WGS 84 reference ellipsoid

# header.stamp specifies the ROS time for this measurement (the
# corresponding satellite time may be reported using the
# sensor_msgs/TimeReference message).
#
# header.frame_id is the frame of reference reported by the satellite
# receiver, usually the location of the antenna. This is a
# Euclidean frame relative to the vehicle, not a reference ellipsoid.
Header header

# satellite fix status information
NavSatStatus status

# Latitude [degrees]. Positive is north of equator; negative is south.
float64 latitude

# Longitude [degrees]. Positive is east of prime meridian; negative is west.
float64 longitude

# Altitude [m]. Positive is above the WGS 84 ellipsoid
# (quiet NaN if no altitude is available).
float64 altitude

# Position covariance [m^2] defined relative to a tangential plane
# through the reported position. The components are East, North, and
# Up (ENU), in row-major order.
#
# Beware: this coordinate system exhibits singularities at the poles.
float64[9] position_covariance

# If the covariance of the fix is known, fill it in completely. If the
# GPS receiver provides the variance of each measurement, put them
# along the diagonal. If only Dilution of Precision is available,
# estimate an approximate covariance from that.
uint8 COVARIANCE_TYPE_UNKNOWN = 0
uint8 COVARIANCE_TYPE_APPROXIMATED = 1
uint8 COVARIANCE_TYPE_DIAGONAL_KNOWN = 2
uint8 COVARIANCE_TYPE_KNOWN = 3

uint8 position_covariance_type
```
Inertial measurement unit: sensor_msgs/Imu
```
# This is a message to hold data from an IMU (Inertial Measurement Unit)
#
# Accelerations should be in m/s^2 (not in g's), and rotational
# velocity should be in rad/sec.
#
# If the covariance of the measurement is known, it should be filled in
# (if all you know is the variance of each measurement, e.g. from the
# datasheet, just put those along the diagonal). A covariance matrix of
# all zeros will be interpreted as "covariance unknown", and to use the
# data a covariance will have to be assumed or gotten from some other source.
#
# If you have no estimate for one of the data elements (e.g. your
# IMU doesn't produce an orientation estimate), please set element 0
# of the associated covariance matrix to -1. If you are interpreting
# this message, please check for a value of -1 in the first element of
# each covariance matrix, and disregard the associated estimate.

Header header

geometry_msgs/Quaternion orientation
float64[9] orientation_covariance          # Row major about x, y, z axes

geometry_msgs/Vector3 angular_velocity
float64[9] angular_velocity_covariance     # Row major about x, y, z axes

geometry_msgs/Vector3 linear_acceleration
float64[9] linear_acceleration_covariance  # Row major x, y z
```
Thermometer: sensor_msgs/Temperature
```
# Single temperature reading.

Header header        # timestamp is the time the temperature was measured
                     # frame_id is the location of the temperature reading

float64 temperature  # Measurement of the Temperature in Degrees Celsius

float64 variance     # 0 is interpreted as variance unknown
```
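To make the use of these logged message types more concrete, the following minimal sketch (assuming the ROS 1 rosbag Python API; the bag file name and topic are hypothetical) shows how recorded GPS fixes can be read back:

```python
# Minimal sketch: iterating over GPS fixes stored in a ROS 1 bag file.
# The bag file name and topic are hypothetical.
import rosbag

with rosbag.Bag("field_campaign.bag") as bag:
    for topic, msg, stamp in bag.read_messages(topics=["/gps/fix"]):
        # msg is a sensor_msgs/NavSatFix as defined above
        print(stamp.to_sec(), msg.latitude, msg.longitude, msg.altitude)
```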
# Conclusion
This deliverable reports on the FLOURISH project's data management plan. We
provided a detailed description of the types of data the project will
generate, the data formats in which we will log and store data, and how we
will share data among FLOURISH partners and make relevant data available to
the public.
# INTRODUCTION
PD_manager voluntarily (since it was not approved under one of the thematic
areas for which it was obligatory) explored the flexible pilot under Horizon
2020 called the Open Research Data Pilot (ORD pilot). The ORD pilot aims to
improve and maximise access to and re-use of research data generated by
Horizon 2020 projects and takes into account the need to balance openness and
protection of scientific information, commercialisation and Intellectual
Property Rights (IPR), privacy concerns, security as well as data management
and preservation questions.
The scope of this document is to answer all questions related to FAIR ** 1 **
data management and provide information about PD_manager compliance with FAIR
principles. In general terms, research data should be 'FAIR', that is
findable, accessible, interoperable and reusable. These principles precede
implementation choices and do not necessarily suggest any specific technology,
standard, or implementation-solution.
# DATA SUMMARY
_What is the purpose of the data collection/generation and its relation to the
objectives of the project?_
Data were collected in two phases for the project needs:
1. During the 1st year of the project (Sep – Dec 2015), the involved partners (IRCCS Fondazione Ospedale San Camillo, IRCCS Santa Lucia Foundation and University of Ioannina - UOI) gathered preliminary data from 20 patients (in total) affected by Parkinson's disease, both in ON and OFF state, in order to feed the research of WP4 on the detection and evaluation of symptoms and the detection of fluctuations. Useful data about the usability and wearability of the devices and the feasibility of the recordings in daily in-hospital and out-of-hospital settings were also collected.
2. During the last year of the project (July 2017 – March 2018), the involved partners (IRCCS Fondazione Ospedale San Camillo, IRCCS Santa Lucia Foundation, University of Ioannina and University of Surrey) conducted a non-blinded, parallel two-group randomized controlled pilot study in which 133 patients were enrolled, of whom 75 were assigned to and tested in the PD_manager group and 58 in the control group (clinical diaries). In both groups the duration was 2 weeks, and the main outcomes were to assess (1) the acceptability and usability of the PD_manager system compared to traditional practices, for patients and caregivers (dyads), and (2) the usefulness of the intervention / value of the information provided by PD_manager for decision making with respect to patient management, its acceptability in clinical practice and the confidence/reliability in the information (clinicians).
_What types and formats of data the project generated/collected?_
The data from the 1st phase were:
* Clinical information (baseline)
* UPDRS items (not all of them) for annotation
* raw data from the MS Band sensors: 3-axis accelerometer, gyroscope, steps, heart rate and skin temperature
* raw data from the BQ Aquaris sensors: 3-axis accelerometer, gyroscope
* raw data from insoles: 3-axis accelerometer, pressures, steps
* video of the whole protocol for annotation
* cognitive battery usability questionnaire
* wearability questionnaire
* user needs questionnaire
The data from the 2nd phase were:
* Clinical information (baseline)
* UPDRS for annotation
* raw data from the MS Band sensors: 3-axis accelerometer, gyroscope,
* raw data from the BQ model M sensors: 3-axis accelerometer, gyroscope
* raw data from insoles: 3-axis accelerometer, pressures, steps
* Features from motor symptoms manifested in legs captured with the sensor insole.
* Features from motor symptoms manifested in upper limbs captured with the wristband
* Activity and sleep data from the wristband (it was optional and only a few patients could activate it)
* Speech quality (sound analysis, language deficit) captured with the smartphone microphone
* Data for non-motor symptoms and impulsivity through questionnaires on smartphone
* Cognitive status data captured with cognitive monitoring app
* Data on mood with smartphone app
* Adherence to medication data with the mobile app
_Will you re-use any existing data and how?_
The 2nd-phase data will be reused for studying fluctuations and developing a
more sophisticated method.

The 2nd-phase data will also be reused for validating the data mining studies
we have conducted (correlation of H&Y with UPDRS) and for further validating
the DSS (clinicians' decisions against PD_manager suggestions).

Raw data from the 2nd phase can also be reused by project partners for
building new methods for motor symptom monitoring and evaluation, since they
cover more days and different algorithms can be applied.
_What is the origin of the data?_
During the 1st phase a total of 20 patients were recruited (n=10 by IRCCS
(IT), n=5 by Fondazione Santa Lucia (IT), n=5 by UOI (GR)).

During the 2nd phase, 133 people with Parkinson's disease (n=133, with 133
caregivers) were enrolled into the study through clinical centers in England
(n=21, i.e. 10 each from Royal Surrey NHS Hospital Trust in Guildford and St
Peter's NHS Hospital Trust in Chertsey, Surrey), Greece (n=20, Ioannina) and
Italy (n=41, IRCCS Fondazione Ospedale San Camillo - Venice, and n=50, IRCCS
Fondazione Santa Lucia - Rome).
_What is the expected size of the data?_
For the 1st phase, each “.go” file (Moticon software), which includes all raw
data and the video for all 8 sessions of a patient, all synchronized and
compressed, is around 350 MB. However, the separate files for each patient are
between 2 and 3 GB because of the videos. Since the videos cannot be shared,
the shared dataset will be just a few MBs.

For the 2nd phase, the data size is around 3 GB for each patient.
_To whom might it be useful ('data utility')?_
PD Researchers; Researchers in signal processing; Researchers in medical data
mining
# FAIR DATA
## Making data findable, including provisions for metadata
_Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?_
NO, but DOI versioning in Zenodo is straightforward in case it is decided to
upload the data in a public repository
_What naming conventions do you follow?_
Name of the organization_patient_nr (2 digits)
e.g. UOI_patient_01
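For illustration, a minimal sketch of this naming convention (the helper function is hypothetical):

```python
# Minimal sketch of the naming convention described above.
import re

def record_name(organization: str, patient_nr: int) -> str:
    """Build a record name such as 'UOI_patient_01' (2-digit patient number)."""
    return f"{organization}_patient_{patient_nr:02d}"

assert record_name("UOI", 1) == "UOI_patient_01"
# Simple validity check for existing record names:
assert re.fullmatch(r"[A-Za-z]+_patient_\d{2}", "UOI_patient_07")
```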
_Will search keywords be provided that optimize possibilities for re-use?_
Parkinson’s, Parkinson’s Disease, sensor data, UPDRS annotation etc.
_Do you provide clear version numbers? What metadata will be created?_
Even though there aren't any versions, the dates of the data recordings are
available. Moreover, the protocol description will be available for
researchers.
_In case metadata standards do not exist in your discipline, please outline
what type of metadata will be created and how._
The data collection protocols will be provided as metadata (if that option is
selected), since they complement the information needed to use the PD_manager
data. The protocols are already described in detail in deliverables 4.1, 6.1
and 6.2, respectively.
## Making data openly accessible
_Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions. Note that in multi-beneficiary projects
it is also possible for specific beneficiaries to keep their data closed if
relevant provisions are made in the consortium agreement and are in line with
the reasons for opting out._
PD_manager will probably opt out of making some of its data openly available.
The reasons are:
* For the 2nd-phase dataset, the consent we have covers use only for the purpose of the pilot study and other related purposes within the project.
* For the 1st-phase dataset, commercial exploitation is still being explored for some of the modules.

However, the 1st-phase consent forms enable the consortium to change that
decision at a later stage, and all necessary steps have been taken to
streamline the process (approval) and especially the access (repositories) to
the data. The PD_manager data access Board is presented below.
_How will the data be made accessible (e.g. by deposition in a repository)?_
A version of CKAN (ckan.org) was deployed within the project (currently
running on _http://195.130.121.50/_). CKAN is a data management system that
makes data accessible by providing tools to streamline publishing, sharing,
finding and using data.
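For illustration, a minimal sketch of querying this CKAN instance through CKAN's standard Action API (the search term is a hypothetical example):

```python
# Minimal sketch: searching datasets on the project's CKAN deployment via
# the CKAN Action API. The query term is a hypothetical example.
import requests

CKAN_URL = "http://195.130.121.50"  # project CKAN instance cited above

resp = requests.get(
    f"{CKAN_URL}/api/3/action/package_search",
    params={"q": "parkinson"},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["name"], dataset.get("title"))
```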
_What methods or software tools are needed to access the data?_
Approval from the PD_manager Board is necessary. For processing the data,
Matlab or any similar software is needed. The synchronized data can be
accessed without any additional effort using Moticon's (www.moticon.de)
proprietary software.
_Is documentation about the software needed to access the data included?_
Nothing additional. You need to know how Matlab or Moticon software works.
_Is it possible to include the relevant software (e.g. in open source code)?_
Two open source alternatives to Matlab could be:
1. GNU Octave ( _www.gnu.org/software/octave/_ )
2. Scilab ( _www.scilab.org_ )
_Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible._
Initially, in an SFTP server running on University of Ioannina infrastructure.
Zenodo (zenodo.org) is also a good choice, especially now that it is linked
with GitHub, which we used during the mobile apps development.
_Have you explored appropriate arrangements with the identified repository?_
No need to; Zenodo's features and policies cover our needs. Specifically:
* It supports DOI versioning; uploads get a Digital Object Identifier (DOI) to make them easily and uniquely citable.
* It supports flexible licensing.
* It is integrated with GitHub, which was used within PD_manager for the development.
* It currently accepts up to 50 GB per dataset (one can have multiple datasets); there is no size limit on communities.
* All research outputs from all fields of science can be stored: publications (book, book section, conference paper, journal article, patent, preprint, report, thesis, technical note, working paper, etc.), posters, presentations, datasets, images (figures, plots, drawings, diagrams, photos), software and videos/audio, i.e. all the types of data in PD_manager.
* Zenodo was launched within an EU-funded project, and the knowledge bases were first filled with EU grant codes.
* The data is stored in the CERN Data Center. Both data files and metadata are kept in multiple online and independent replicas. CERN has considerable knowledge and experience in building and operating large-scale digital repositories and a commitment to maintain this data center to collect and store hundreds of PBs of LHC data as it grows over the next 20 years. In the highly unlikely event that Zenodo has to close operations, they guarantee that they will migrate all content to other suitable repositories, and since all uploads have DOIs, all citations and links to Zenodo resources (such as PD_manager data) will not be affected.
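Should the consortium decide to publish a dataset on Zenodo, a minimal sketch of a deposit through Zenodo's REST API could look as follows; the access token and file name are placeholders.

```python
# Minimal sketch: creating a Zenodo deposition and uploading one file via
# the Zenodo REST API. Token and file name are placeholders.
import requests

TOKEN = "YOUR_ZENODO_TOKEN"
API = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
r = requests.post(API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload a (de-identified) data file to the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("UOI_patient_01.zip", "rb") as fp:
    requests.put(
        f"{bucket}/UOI_patient_01.zip",
        data=fp,
        params={"access_token": TOKEN},
    ).raise_for_status()
```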
_If there are restrictions on use, how will access be provided?_
Through the UOI SFTP server.
_Is there a need for a data access committee?_
Yes, we have patient data and we need to know the intended data use and the
purpose of the studies.
_Are there well described conditions for access (i.e. a machine readable
license)?_
NO
_How will the identity of the person accessing the data be ascertained?_
Probably an official letter from the organization in which the person studies
or works will be requested, in addition to any documentation typically
provided by the person themselves.
## Making data interoperable
_Are the data produced in the project interoperable, that is allowing data
exchange and reuse between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?_
YES
_What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?_
Those provided by ZENODO
_Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?_
The data models are named according to the indications of FHIR (Fast
Healthcare Interoperability Resources, hl7.org/fhir), the next-generation
standards framework created by HL7.
_In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?_
We already adopted FHIR.
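For illustration only, the sketch below shows how a single step-count measurement could be expressed following the FHIR Observation resource model; the identifiers and values are hypothetical and are not taken from the actual PD_manager data models.

```python
# Hypothetical example of a measurement shaped after the FHIR Observation
# resource (hl7.org/fhir/observation.html); all values are placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {                                   # what was measured
        "coding": [{"system": "http://loinc.org",
                    "code": "55423-8",          # LOINC: number of steps
                    "display": "Number of steps"}]
    },
    "subject": {"reference": "Patient/example-001"},   # de-identified subject
    "valueQuantity": {"value": 4212, "unit": "steps"},
}
```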
## Increase data re-use (through clarifying licences)
_How will the data be licensed to permit the widest re-use possible?_
To be defined. In any case:
1. We will only provide data that has been de-identified
2. the patients that participated in the PD_manager studies are fully informed and provided their consent that access to their de-identified data can be granted in the future for specific scientific purposes
_When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible._
The final decision should be made within 3 years after the end of the project,
i.e. by March 2021.
_Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why._
They could be re-used under specific conditions.
_How long is it intended that the data remains re-usable?_
In case they are made openly available there will be no time restriction.
_Are data quality assurance processes described?_
Yes.
# ALLOCATION OF RESOURCES
_What are the costs for making data FAIR in your project?_
They are minor, since they include only server maintenance costs, or zero in
case we upload to Zenodo.
_How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions)._
They are indirect costs covered by the University
_Who will be responsible for data management in your project?_
A board consisting of one representative from each organization (permanent
staff), led by Prof Angelo Antonini. The other members are:

* Prof D Fotiadis from UOI
* Prof MT Arrendondo from UPM
* Prof G Spalletta from IRCCS Santa Lucia Foundation
* Prof H Gage from University of Surrey
* Dr D Miljkovic from JSI
* Dr A Marcante from IRCCS San Camillo Hospital
* Dr I Chkajlo from URI
* Dr R Vilzmann from Moticon
* Dr H Hatzakis from B3D
* Dr M Rafaelli from Live
_Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?_
A final decision will be made by this Board by March 2021.
# DATA SECURITY
_What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?_
Information will be kept in locked filing cabinets and on password-protected
computers, in a room with restricted access at the University of Ioannina.
Backups will be kept on an external hard disk locked in the same room.
Moreover, the final FTP will be SFTP (SSH File Transfer Protocol), which also
protects against password sniffing and man-in-the-middle attacks. It protects
the integrity of the data using encryption and cryptographic hash functions
and authenticates both the server and the user.
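As a sketch of how an approved user might retrieve a file from the SFTP server and verify that the transfer was not corrupted, assuming the Paramiko library and placeholder host, credentials, paths and checksum:

```python
# Minimal sketch: download a dataset over SFTP and check its SHA-256 digest.
# Host, credentials, remote path and expected checksum are all placeholders.
import hashlib
import paramiko

HOST, PORT = "sftp.example.uoi.gr", 22
EXPECTED_SHA256 = "..."   # published alongside the dataset

transport = paramiko.Transport((HOST, PORT))
transport.connect(username="pdmanager", password="...")
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.get("/datasets/phase1/phase1_dataset.zip", "phase1_dataset.zip")
finally:
    sftp.close()
    transport.close()

with open("phase1_dataset.zip", "rb") as fp:
    digest = hashlib.sha256(fp.read()).hexdigest()
assert digest == EXPECTED_SHA256, "checksum mismatch: transfer corrupted?"
```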
_Is the data safely stored in certified repositories for long term
preservation and curation?_
Zenodo allows up to 50 GB per dataset, which means we can upload the complete
1st-phase dataset there and split the 2nd-phase dataset into 2-3 parts.
# ETHICAL ASPECTS
_Are there any ethical or legal issues that can have an impact on data
sharing? These can also be discussed in the context of the ethics review. If
relevant, include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA)._
Yes, there are. The data were collected from consenting patients who were
fully informed that their de-identified data may be used for research also
after the end of the project. Details about the protocol and the information
sheets and consent forms are provided in the Ethics Deliverables.
_Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?_
YES.
For the 1st phase, the consent included the following statements from the
patients:
* I understand that all data collected during the study, may be looked at for monitoring and auditing purposes by authorized individuals working for, or reviewing the outcomes of the PD_manager project from regulatory authorities where it is relevant to my taking part in this research. I give permission for these individuals to have access to my records.
* I agree for my anonymised data and/or samples to be shared with the EU PD_Manager project partners
* I agree for my anonymised data and/or samples to be shared with other scientists conducting relevant research from outside the PD_manager project if this is the decision of the PD_manager project committee.
For the 2nd phase the consent included “I consent to my personal data being
used for the study as detailed in the information sheet. I understand that all
personal data relating to volunteers is held and processed in the strictest
confidence, and in accordance with the Data Protection Act (1998).”
According to the information sheet: “The information collected will be
analysed to meet the aims of the study. Under no circumstances will any of
your personal details be passed onto third parties or appear in any reports on
this study.”
The data are anonymised. The data sharing and ownership policies are the same
across the datasets and are in accordance with the Consortium Agreement (v. 3,
01/01/2015), as are the data access procedures and rights in relation to the
data gathered through the whole PD_MANAGER project.
For any data sharing request that is issued – i.e., a request to analyse data
collected within the project for different purposes – the Principal
Investigator will, after approval from the PD_manager Board, be asked to
submit the research purpose to the competent ethics committee and receive
approval before getting access to the data.
# OTHER ISSUES
_Do you make use of other national/funder/sectorial/departmental procedures
for data management?_
NO
_If yes, which ones?_
Not applicable
# FURTHER SUPPORT IN DEVELOPING YOUR DMP
The Research Data Alliance provides a Metadata Standards Directory that can be
searched for discipline-specific standards and associated tools.
The EUDAT B2SHARE tool includes a built-in license wizard that facilitates the
selection of an adequate license for research data. Useful listings of
repositories include:

* the Registry of Research Data Repositories;
* repositories like Zenodo (an OpenAIRE and CERN collaboration), which allow researchers to deposit both publications and data, while providing tools to link them.

Other useful tools include DMPonline and platforms for making individual scientific observations available, such as ScienceMatters.
0755_POLYPHEM_764048.md
# INTRODUCTION
This deliverable presents the Data Management Plan (DMP) governing data
management within the H2020 EU-funded project "Small-Scale Solar Thermal
Combined Cycle" (POLYPHEM – 764048). The aim of the document is to describe
the data management life cycle for all datasets to be collected, generated and
processed within the research activities of the POLYPHEM project. Among other
things, the document sets out:
* the handling of research data during and after the end of the project,
* the list of data collected, processed and generated,
* the methodology and standards to be applied,
* the data that will be made openly available and the procedure(s),
* the measures undertaken or to apply in order to facilitate the interoperability and reuse of the research data, and
* the rules of data curation and preservation.
In the frame of POLYPHEM, various types of research data are expected to be
collected, processed and/or generated: data collected from previous scientific
publications/patents, observed measurement data, design data created in the
frame of the project, numerical simulation and processing tools, etc. As
participants in the Open Research Data Pilot, for each of those research data,
the POLYPHEM partners will carefully study the possibility and pertinence of
making them findable, accessible, interoperable and re-usable (FAIR), to the
extent possible.
The DMP will be regularly updated. This document has been prepared following
the guidelines on FAIR data management in Horizon 2020.
The Common European Research Information Format (CERIF) will be used as
standard to build the database of the project results (data and metadata) in
order to make them easy to find and to interoperate. The results will be
preserved and made available in the repository Zenodo 1 which is referred to
in the European network OpenAIRE 2 .
The scheme presented in Figure 1 shows the principle of the data delivery,
conservation and restitution using standards at each step of the data
management process.
This DMP is created and will be updated with the respect of all national and
European legal requirements, such as the General Data Protection Regulation
(GDPR, Regulation (EU) 2016/679) 3 . It also complies with the requirements
of the article 29 of the Grant Agreement, specifically, in terms of obligation
to disseminate results (art. 29.1 of GA), open access to scientific
publications (art. 29.2 of GA) and open access to research data (art. 29.3 of
GA). It also respects the IPR protection framework applicable to the project,
potential conflicts of commercialization and dissemination of own results, as
defined in the article 8.3 of the project Consortium Agreement signed by the
beneficiaries.
The objective is to put useful information and recommendations on the
management of the project results into a prospective, descriptive and
upgradeable single document.
# DATA SUMMARY
## PURPOSE OF THE DATA COLLECTION/GENERATION
POLYPHEM will produce several datasets during the lifetime of the project. The
nature of the data will be both quantitative and qualitative and will be
analysed from a range of perspectives for project development and scientific
purposes. The created datasets will have the same structure, in accordance
with the guidelines of Horizon H2020 for the Data Management Plan.
The completion of the work plans associated with the 8 technical Work Packages
(WP) of POLYPHEM will generate new and original scientific and technical data.
Some of these data will be created by a group of participants as a result of
collaborative work, while others will be created by one specific partner in
individual work. Data will also be collected from previous scientific
publications or patents and will serve as reference cases, results or
knowledge for new research developments.
The collection, selection, classification and preservation of data is a
critical activity that will be maintained and carefully monitored throughout
the execution of the project. It will enable the exchange of relevant
technical information among the beneficiaries and therefore increase the
efficiency of the collaborative research work towards the objectives of the
project. The preservation of the data after the completion of the project will
allow research to continue by providing useful and re-usable information to
the partners engaged in the long-term development of similar technologies.
Technical specifications of instruments, components or processes, designs of
new components, and lessons learned from observations and experimental
operation will serve for conceptual improvements and future testing procedures
without repeating the same work.
Finally, the data management aims at sharing public results with communities
of professors, students, researchers, engineers, managers and policy makers,
during and after the end of the project. This will contribute to increasing
the impact of the project in the short, mid and long term.
## CATEGORIES, TYPES, FORMATS AND SIZES OF DATA GENERATED OR COLLECTED
All the data generated or collected during the project POLYPHEM will be made
available as electronic files (numerical files).
**2.2.1 _Categories_ **
In general, the data will be classified into 4 categories, each of which
contains sub-categories of datasets.

* Text-based data
  * Publication, article
  * Report, scientific survey
  * Experimental result (structured text)
  * Numerical simulation result (structured text)
  * Datasheet
  * Technical specification of instrument/process
* Audio-visual data
  * Scientific and technical presentation
  * Poster
  * Flyer, leaflet
  * Picture, image, drawing, illustration
  * Scheme, sketch, diagram
  * Video
* Models
  * Design of component
  * Technical drawing, construction plan
  * Heat transfer model
  * Optical model
  * Thermo-mechanical model
  * Techno-economical model
* Software data
  * Script
  * Executable code
  * Source code
* Archives (compressed datasets)
**2.2.2 _Types_ **
There are 2 types of electronic files: binary and ASCII (or Unicode).
A binary file is a series of bits with logical values 0 or 1 (or other derived
logical values like True/False, etc.).
An ASCII file is made of a series of characters encoded on 7 bits following
the rules of the ASCII standard (ISO 646). The original ASCII standard is
restricted to Latin characters (letters, numbers and signs); the Unicode
standard is used to extend ASCII to worldwide utilization.
**2.2.3 _Format_ **
The format of a file is determined by the encoding system, or standard, used
by the original software to generate the file. Proprietary formats (or closed
formats) can only be read using the original software (or similar software),
which is usually a commercial product. Open formats can be read by both
proprietary and free and open-source software. Open formats are also called
free file formats if they are not encumbered by any copyrights, patents,
trademarks or other restrictions, so that anyone may use them at no monetary
cost for any desired purpose.
In POLYPHEM, the formats used to produce the data will tend to respect the
international standards as they are defined by the International Standard for
Archival Description (ISAD). Open formats will be preferred, to the possible
extent, because they make the data more easily accessible and re-usable.
Each format is identified through an extension at the end of the filename.
Extensions respect international standards and are presented in the form of 3
or 4-letters acronyms.
**2.2.4 _Size_ **
The size of the datasets is generally in the range of KB to MB for the text-
based data, models and software, and from MB to GB for the audio-visual data.
**2.2.5 _Summary: Document Type Definition_ **
The basic parameters of the Document Type Definition (DTD) are summarized in
the following Table 1.
#### Table 1: Summary of the document type definition (categories and formats
of the datasets)
| Category | Type | Open Format/extension | Closed Format/extension |
|---|---|---|---|
| Text based data | ASCII, Unicode | .odt, .docx, .rtf, .ods, .xlsx, .txt, .sgml, .xml, .csv | .doc, .xls |
| Text based data | binary | .pdf, .eps | |
| Audio-visual data | binary | .odp, .pptx, .odc, .ora, .bmp, .jpeg, .jpg, .png, .gif, .odg, .eps, .wav, .mp3, .mpeg | .pps, .ppt, .vsd, .psd, .tiff, .wpg, .wmf, .emf, .wma, .ram, .avi, .mov, .wmv, .mp4 |
| Models | binary | .dwg, .eps | .dxf, .ora, .stp |
| Software data | binary | .exe, .dll | .elf, .m, .mat |
| Archives (compressed datasets) | binary | .zip | .rar |
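As a small illustration of how the Table 1 scheme could be applied in practice, the sketch below maps file extensions to DTD categories; the mapping is abbreviated to a few representative extensions, not the full table.

```python
# Minimal sketch: classify a project file into a Table 1 category by its
# extension. The mapping below is a deliberately abbreviated subset.
from pathlib import Path

CATEGORY_BY_EXT = {
    ".odt": "text-based data", ".docx": "text-based data",
    ".csv": "text-based data", ".pdf": "text-based data",
    ".pptx": "audio-visual data", ".png": "audio-visual data",
    ".mp4": "audio-visual data", ".dwg": "models", ".stp": "models",
    ".exe": "software data", ".m": "software data",
    ".zip": "archives", ".rar": "archives",
}

def classify(path):
    """Return the DTD category of a file, or 'unknown' if not mapped."""
    return CATEGORY_BY_EXT.get(Path(path).suffix.lower(), "unknown")

print(classify("receiver_design.dwg"))   # -> models
```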
## RE-USE OF DATA
The consortium of the POLYPHEM project already agreed on the access to data,
ruled by the terms of section 9 of the Consortium Agreement.
(9.3- Access rights for implementation) _“Access rights to results […] needed
for the performance of the own work of a Party under the Project shall be
granted on a royalty-free basis […].”_
(9.4- Access rights for exploitation) _“Access rights to results if needed for
exploitation of a Party's own results shall be granted on fair and reasonable
conditions. Access rights to results for internal research activities shall be
granted on a royalty-free basis”._
Specific terms have been agreed for the access to software (section 9.8.3 of
the CA)
_“Access rights to software that is results shall comprise access to the
object code; and, where normal use of such an object code requires an
application programming interface (hereafter API), access to the object code
and such an API; and, if a Party can show that the execution of its tasks
under the Project or the exploitation of its own results is technically or
legally impossible without access to the source code, access to the source
code to the extent necessary.”_
_“Fraunhofer ISE refuses to provide source code or API in this Project and
will not, in any case, access to another Party’s source code or API, unless
otherwise agreed individually.”_
The consortium of the POLYPHEM project is encouraged to make existing data
available for research. In general, the data (in total or in part), when made
accessible to the public, could be re-used by partners of POLYPHEM during and
after the project, or by external researchers, for the following aims:
* Implementation of the work programme of the project (execution of the tasks by the partners).
* Training of students, researchers, engineers by partners or by external academic institutions.
* Implementation of other research works on CSP technologies by partners or by external bodies.
## ORIGIN OF DATA
Most of the data will originate from the POLYPHEM participants. Experimental
results will be generated from experimental facilities, test benches and from
the operation of the prototype plant. Other data will be generated through the
utilization of software tools for simulation and for the design of components
and processes. Text-based data will be produced by the partners in activities
of reporting, design and processing of raw data. Audio-visual data will be
generated by the partners for communication purposes or by an external body
under a sub-contracting legal framework.
Previous CSP initiatives and projects worldwide, in which solar tower or solar
combined-cycle data have been or are still being collected, will be the origin
of part of the data collected, processed and generated in POLYPHEM.
## DATA UTILITY
In general, the audience who might use data generated or collected in the
project POLYPHEM are:
* The POLYPHEM Consortium;
* European Commission services, European Agencies, EU and national policy makers;
* Research institutions, universities, institutes and training centers across Europe and worldwide;
* CSP and renewable energy related industries;
* The private and public investment sector.
Open research data from POLYPHEM will be useful to other researchers to
underpin scientific publications by referring to the POLYPHEM results in
surveys or by incorporating the POLYPHEM results in comparative analysis with
their own project results.
More detailed description of the data and whom they might be useful to will be
given later in updated versions of the Data Management Plan, since data
collection and creation is an ongoing process.
# FAIR DATA
## MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA
**3.1.1 _Discoverability: metadata provision_ **
The repository Zenodo complies with the principles of FAIR data. The best
practices are implemented to make data findable (see
http://about.zenodo.org/principles/):
_“(Meta)data are assigned a globally unique and persistent identifier : A DOI
is issued to every published record on_
_Zenodo.”_
_“Data are described with rich metadata […]: Zenodo's metadata is compliant
with DataCite's Metadata Schema_ 4 _minimum and recommended terms, with a few
additional enrichments.”_
_“Metadata clearly and explicitly include the identifier of the data it
describes : The DOI is a top-level and a mandatory field in the metadata of
each record.”_
_“(Meta)data are registered or indexed in a searchable resource : Metadata of
each record is indexed and searchable directly in Zenodo's search engine
immediately after publishing. Metadata of each record is sent to DataCite
servers during DOI registration and indexed there.”_
A metadata template has been created for POLYPHEM consortium on the basis of
the compulsory requirements of Zenodo in order to better describe, easily
discover and trace the data collected and generated by the POLYPHEM project
during the life and after the end of the action. The template includes the
basic mandatory metadata required by the repository and additional metadata
that could be optionally provided by the project consortium depending on the
type and/or version of the research data uploaded, if appropriate. The
template will be sent to the relevant partners to be filled in and stored at
the Zenodo repository. The content of this template is listed in Table 2.
_**Table 2: Template of metadata for archiving the POLYPHEM datasets** _
| Metadata | Category | Additional comments |
|---|---|---|
| Type of data | Mandatory | |
| DOI | Mandatory | If not filled, Zenodo will assign an automatic DOI. Please keep the same DOI if the document is already identified with a DOI. |
| Responsible / author(s) | Mandatory | |
| Title | Mandatory | |
| Publication date | Mandatory | |
| Date of repository submission | Mandatory | |
| Version | Mandatory | |
| Description | Mandatory | |
| Keywords | Mandatory | Frequently used keywords. |
| Size | Mandatory | The approximate size. |
| Access rights | Mandatory | Open Access. Other permissions can be applied, when appropriate. |
| Terms of Access Rights | Optional | Description of the Creative Commons Licenses 5. POLYPHEM will open the data under Attribution, ShareAlike and Non Commercial licenses. |
| Communities | Mandatory | |
| Funding | Mandatory | European Union (EU), Horizon 2020, H2020-LCE-2017-RES-RIATwoStage, Grant N° 764048, POLYPHEM. |
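For illustration, a minimal sketch of how a partner could assemble a Table 2 record as a JSON document before depositing it on Zenodo; all field values below are placeholders, not real project data.

```python
# Minimal sketch: build one metadata record following the Table 2 template.
import json

record = {
    "type_of_data": "Experimental result",
    "doi": "",                      # left empty: Zenodo will assign one
    "authors": ["CNRS"],
    "title": "Receiver test-bench measurements (example)",
    "publication_date": "2020-01-15",
    "submission_date": "2020-01-20",
    "version": "1.0",
    "description": "Temperature and flow measurements from the test bench.",
    "keywords": ["CSP", "solar receiver"],
    "size": "120 MB",
    "access_rights": "Open Access",
    "license": "CC BY-NC-SA",
    "communities": ["polyphem"],
    "funding": "European Union (EU), Horizon 2020, Grant N° 764048, POLYPHEM",
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```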
**3.1.2 _Identification of data_ **
If a Digital Object Identifier (DOI) has already been assigned to a
publication, the POLYPHEM consortium will keep it to facilitate the
identification of the data. If no DOI has been attributed to the publication
or research output, the partners agree to use the DOI generated by the
repository.
**3.1.3 _Naming convention_ **
No naming convention is foreseen in the POLYPHEM data management.
Version numbers will be provided in the metadata table accompanying the
updated version of the file uploaded.
**3.1.4 _Search keywords_ **
The keywords search option will be provided to optimize the possibility of
data re-use and facilitate the discoverability of the data in the Zenodo
repository.
## MAKING DATA OPENLY ACCESSIBLE
**3.2.1 _Types of data made openly available_ **
According to article 26 of the GA, the partners who have generated the
research outputs are the owners of the generated data and have the right to
disseminate their results as long as there is no legitimate purpose or need to
protect the results. Each dissemination action should be notified to the other
partners at least 45 days in advance and accompanied by sufficient information
on the results to be disseminated (Art. 29.1 of GA).
As soon as the research data is generated and ready to be uploaded, it should
be deposited in the Zenodo repository. The underlying data of scientific
publications should be uploaded no later than the relevant publication
(Art. 29.3 of GA). However, the consortium has the right not to make research
results public in order to protect them. In this case, the non-public data
will be archived at the repository under either "closed" or "restricted"
status, depending on the allowed access rights. Please see sub-section 3.4
"Increase data re-use" for further details.
**3.2.2 _Deposition of data_ **
The created data and accompanying metadata will be deposited at the Zenodo
repository and stored in JSON-format according to a defined JSON-schema 6 .
Metadata is exported in several standard formats such as MARCXML, Dublin Core
7 , and DataCite Metadata Schema (according to the OpenAIRE Guidelines) .
Zenodo’s policies are described in the web-page
_http://about.zenodo.org/policies/_ . The information is also given in annex
1.
Several communities already exist in Zenodo. The POLYPHEM consortium proposes
to define and create in Zenodo an additional community identified as potential
users of the data generated or collected in the project. The scientific and
technical scope of this community will cover all aspects of concentrated solar
energy and its applications like solar power generation, solar fuels, high
temperature solar process heat, solar thermal water desalination.
A few existing communities encompassing the scope of POLYPHEM will tentatively
be associated with the targeted users of the POLYPHEM datasets, among others:
* Renewable Energy Potential
* Power Trading Agent Competition
* Continental Journal of renewable Energy
* International Journal of Renewable Energy and Environmental Engineering
* Catalonia Institute for Energy Research (CREC)
**3.2.3 _Methods needed to access the data_ **
All metadata is openly available in Zenodo under Creative Commons licenses,
and all open content is openly accessible through open APIs. In line with the
FAIR data guidelines, Zenodo does its best effort to make data accessible (see
http://about.zenodo.org/principles/):
_« (Meta)data are retrievable by their identifier using a standardized
communications protocol : Metadata for individual records as well as record
collections are harvestable using the OAI-PMH_ _protocol by the record
identifier and the collection name. Metadata is also retrievable through the
public REST API. »_
_« The protocol is open, free, and universally implementable: […] OAI-PMH and
REST are open, free and universal protocols for information retrieval on the
web. »_
_« The protocol allows for an authentication and authorization procedure,
where necessary: Metadata are publicly accessible and licensed under public
domain. No authorization is ever necessary to retrieve it. »_
_« Metadata are accessible, even when the data are no longer available: Data
and metadata will be retained for the lifetime of the repository. This is
currently the lifetime of the host laboratory CERN, which currently has an
experimental programme defined for the next 20 years at least. Metadata are
stored in high-availability database servers at CERN, which are separate to
the data itself. »_
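As an illustration of the OAI-PMH route quoted above, the following minimal sketch harvests Dublin Core titles from Zenodo's OAI-PMH endpoint; the community set name "user-polyphem" is an assumption, to be replaced by the set actually created for the project.

```python
# Minimal sketch: harvest record titles from Zenodo over OAI-PMH (oai_dc).
import requests
import xml.etree.ElementTree as ET

OAI = "https://zenodo.org/oai2d"
DC = "{http://purl.org/dc/elements/1.1/}"   # Dublin Core XML namespace

resp = requests.get(OAI, params={"verb": "ListRecords",
                                 "metadataPrefix": "oai_dc",
                                 "set": "user-polyphem"}, timeout=30)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for title in root.iter(f"{DC}title"):       # iterates all nested dc:title
    print(title.text)
```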
## MAKING DATA INTEROPERABLE
In order to make the research outputs and underlying data generated within the
POLYPHEM project interoperable, the consortium will use data in the standard
formats and prioritize the available (open) software, whenever possible. The
consortium will also respect the common standards officially applied to the
various formats that will be used for the data.
The repository Zenodo is organized and managed in order to make data
interoperable, to the maximum extent, in agreement with the FAIR data rules
and recommendations (see http://about.zenodo.org/principles/):
_« (Meta)data use a formal, accessible, shared, and broadly applicable
language for knowledge representation: Zenodo uses JSON Schema as internal
representation of metadata and offers export to other popular formats such as
Dublin Core or MARCXML. »_
_« (Meta)data use vocabularies that follow FAIR principles: For certain terms
we refer to open, external vocabularies, e.g.: license (Open Definition_ 8
_), funders (FundRef_ 9 _) and grants (OpenAIRE). »_
_« (Meta)data include qualified references to other (meta)data: Each
referenced external piece of metadata is qualified by a resolvable URL. »_
Moreover, in order to further enhance data exchange and re-use between
researchers, organizations, institutions, countries and others, the consortium
also intends to encourage the Zenodo community to follow up, as far as
possible, on the POLYPHEM data re-used by other community participants, so
that derivative works based on the re-used data can be retraced. The aim is to
make this interoperability concept viable through the possibility of
consulting the results of the re-used POLYPHEM data, to enrich and stimulate
further scientific reflection.
## INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)
All the openly accessible data and corresponding metadata uploaded on Zenodo
will be available for re-use, including after the end of the project. The
publications and underlying data will also be uploaded in compliance with the
6-month embargo allowed by the EC. Moreover, the POLYPHEM research data
uploaded on Zenodo, except the data uploaded under closed, embargoed or
restricted access, will be in open access under the Creative Commons licenses:
Attribution, ShareAlike, Non-Commercial and No-Derivatives. For the POLYPHEM
data, only the first three license types will be applied (see Table 3):
_**Table 3: Creative Commons licenses used for the diffusion and re-use of
POLYPHEM data** _
| Chosen License | Meaning | Abbreviation |
|---|---|---|
| Attribution | Permits all uses of the original work, as long as it is attributed to the original author. | BY |
| Non-commercial | License does not permit any commercial use of the original work. | NC |
| Share Alike | Any derivative work should use the same license as the original work. | SA |
Although the consortium is encouraged to extend open access to the data and
will contribute to this to the extent possible, it reserves the right to
upload data to the repository under justified restricted access, and to keep
it as such after the end of the project.
In this regard, during the lifetime of the project, the sharing of files under
restricted access will be possible only with the consent of the depositor or
author of their original version. The description of the potentially
"restricted" data, as well as the reasons for this choice, will be detailed in
the next versions of the DMP, informed by the particularities of the
implemented research activities and by the partners' evaluation of the
potential impact of making the results open.
According to the Zenodo policy, files under closed access will be protected
against any unauthorised access at all levels.
As for the files under embargo, the end date of the embargo must be provided.
The allowed 6-month embargo period for publications and underlying data will
be respected. Access to embargoed data will be restricted until the end of the
embargo period and will be opened automatically thereafter.
After the end of the project, uploaded data will be preserved in the
repository regardless of the access mode. The responsible partner(s) reserve
the possibility to make the "closed" and "restricted" data openly accessible
after the end of the project, with the consent of the relevant partners, if
their confidentiality considerations change.
Zenodo contributes to make the data reusable through the following rules and
practices (see http://about.zenodo.org/principles/):
_« (Meta)data are richly described with a plurality of accurate and relevant
attributes : Each record contains a minimum of DataCite's mandatory terms,
with optionally additional DataCite recommended terms and Zenodo's
enrichments. »_
_« (Meta)data are released with a clear and accessible data usage license :
License is one of the mandatory terms in Zenodo's metadata, and is referring
to a Open Definition license : Data downloaded by the users is subject to the
license specified in the metadata by the uploader. »_
_« (Meta)data are associated with detailed provenance : All data and metadata
uploaded is traceable to a registered Zenodo user. Metadata can optionally
describe the original authors of the published work. »_
_« (Meta)data meet domain-relevant community standards : Zenodo is not a
domain-specific repository, yet through compliance with DataCite's Metadata
Schema, metadata meets one of the broadest cross-domain standards available.
»_
# ALLOCATION OF RESOURCES
The research data collected, generated and/or processed and the project
research outputs will be uploaded and preserved during and after the end of
the project in the Zenodo repository. The repository allows uploading data
free of charge, with the size limited to 50 GB per record. The data will be
stored indefinitely (minimum 5 years). Currently there are no costs for
preserving data in this repository and, thus, no costs have been foreseen for
these matters by the project. If any unforeseen costs related to open access
to research data occur, they can be charged to the Programme, given their
eligibility for reimbursement according to articles 6 and 6.2 of the GA.
Moreover, each partner has devoted its own human resources to respect the
prescriptions set out by the deliverable D9.1 "Data Management Plan". CNRS
remains the partner responsible for the management and supervision of the data
within the POLYPHEM project, including data verification before uploading,
updating of uploaded data, and so on. The costs of the personnel assigned to
data management were foreseen in the initial project budget estimate and are
considered chargeable to the Programme.
Also, as required by article 18 of the GA, all records and data will be
preserved internally by the consortium for five years after the project. The
openly accessible, restricted and closed data shared through the repository
will be preserved after the end of the project. Access to data under
restricted or closed status will be possible through an express request
addressed to the POLYPHEM project coordinator.
# DATA SECURITY
The public repository Zenodo has been selected as a long-term secure storage
of the POLYPHEM project research outputs given its features fulfilling
technical and legal data security requirements and long term preservation.
Please consult the terms at _http://about.zenodo.org/infrastructure/_ and
repository’s features at _https://help.zenodo.org/features/_ .
The data will also be stored internally on the POLYPHEM project intranet. No
access external to the consortium will be possible. Further details on the
security storage of the data collected, generated and processed within the
project are available in the deliverable D10.1 “Project Management Handbook”.
# ETHICAL ASPECTS
There are no ethical issues affecting the POLYPHEM project research
activities. Thus, no specific ethical considerations need to be applied to
data sharing within the project.
However, while sharing any openly accessible data, the POLYPHEM consortium
will respect the relevant requirements described in the deliverable D11.1
"POPD – Requirement No.1" and apply the rule of notifying the partners of the
intention to disseminate any project-related data at least 45 days beforehand,
according to article 29.1 of the GA. Moreover, the consortium will respect the
obligations mentioned in article 34.1 of the GA "Ethics and Research
Integrity", in particular those related to compliance with:
* Ethical principles (including the highest standards of research integrity), and
* Applicable national, EU and international law, during the implementation of the project.
0759_MAKE-IT_688241.md
**Executive Summary**
This Project Handbook describes the internal procedures of the MAKE-IT
consortium in terms of management structures, communication and collaboration
as well as quality control measures. It also defines the way the partners are
dealing with Responsible Research and Innovation (RRI), especially considering
ethical issues related to personal data collection, analysis and storage. Open
source and open access are important elements of RRI, and the strategy of the
consortium in dealing with these aspects is reflected in the open data
management plan, which forms part of this document.
The main target group for this deliverable are the consortium partners
themselves as this handbook de.nes the project internal processes for securing
high quality research work to be performed across a set of complementary
partner institutions. It serves as a reference document for all MAKE-IT team
members including individuals joining in the project at a later stage.
Since the project is bringing together a set of diverse experts from different
fields and backgrounds, a core principle guiding internal processes is open
participation and flexibility. Transparency about the project status as well
as risk recognition is an additional principle that the project partners are
committed to.
Still, in order to effectively operate in a distributed team, we have defined
some procedures for how to best communicate and structure our collaboration.
Regular meetings are held via videoconference as well as face-to-face.
Communication is also taking place via e-mail and the project mailing list.
The main tool for sharing and collaborating on documents is SharePoint.
The consortium is committed to producing high quality research outcomes and
deliverables and thus quality control is important. Quality guidelines
describe the internal peer review process, which is applied to all project
deliverables. In order to continuously improve our internal processes regular
internal surveys are performed, normally before project meetings. These
surveys are intended for the whole group to serve as a self-reflection and
self-evaluation tool about the project structures.
In terms of ethics, the consortium is following the general rules defined by
the EC (c.f. chapter 3) and commits strongly to respect the individual and
their privacy at all times. Templates have been prepared for informed consent
as well as the exchange of primary research data amongst partners that may
contain personal data from study participants. Raising awareness about related
RRI issues is of concern for the management and is regularly stressed.
Finally, openness is a core value of the project and thus the consortium is
looking into open strategies with regards to the research outcomes. This
relates to software that is published under specific open licenses, following,
and where possible contributing to, open standards, as well as the research
publications, which should be made openly accessible as far as possible.
This handbook is understood as a living document and is updated if need arises
in order to improve the internal processes.
# 1. Introduction

The MAKE-IT project is committed to high quality output and responsible
research and innovation. Thus this document defines a set of procedures that
the consortium is committed to adhere to and to improve in the course of the
project.
Openness and transparency are two of the guiding principles reflected in the
different processes and methods described. At the same time there is a strong
awareness within the consortium related to privacy and data protection of
individual citizens. These core principles underlying the research work in
MAKE-IT correspond with the practices related to Responsible Research and
Innovation (RRI).
Section 2 below describes the management structures, including the nominees
for the various boards. Section 3 is dedicated to specific quality management
procedures, including communication structures and tools, the peer reviewing
process for high quality deliverables, as well as risk management, SWOT and
other quality assurance means. In Section 4 the technical infrastructure for
communication and collaboration is presented. Section 5 presents the RRI
policies and identifies the most relevant aspects for MAKE-IT, while Section 6
outlines the specific ethical guidelines that the project is following. In
Section 7 the consortium's strategy towards openness is described, relating to
open source in terms of software as well as open access in terms of
publications and other project results. Finally, Section 8 discusses the
implications of gender aspects for the project.
The appendix includes examples of templates mentioned throughout the project.
# 2. Management structure
Both the Grant Agreement (GA) and the Consortium Agreement (CA) specify a
number of bodies for the management of the project. Though the GA and CA,
being legal documents that can be found on SharePoint in _WP1 > Contracts _ ,
take precedence over this handbook, the following sections specify the
operational view of these bodies.
## 2.1. Work Package (WP)
The work package (WP) is the building block of the project. The WP leader
1. organises the WP,
2. prepares and chairs WP meetings,
3. organizes the production of the results of the WP,
4. represents the WP in the WP Leaders Committee (WPLC).
Current WP leaders are shown in Table 1.

| WP | WP name | WP leader |
|---|---|---|
| WP1 | Project Management and Coordination | Paul Tilanus, TNO |
| WP2 | Conceptual & Methodological framework | Jeremy Millard, DTI |
| WP3 | Case Explorations | Christian Voigt, ZSI |
| WP4 | Innovation Action research | Tijs van den Broek, TNO |
| WP5 | Technology and Use Scenarios | Olivier Jay, DTI |
| WP6 | Synthesis and Impact Analysis | Bastian Pelka, TUDO |
| WP7 | Dissemination, Exploitation and Communication | Massimo Menichinelli, IAAC |

Table 1: Current WP leaders
## 2.2. WP Leaders Committee (WPLC)

The WPLC consists of:
* the WP leaders of all (active) WPs,
* the scientific lead of the project,
* the consortium manager.

The additional 1 WPLC members are shown in Table 2.
The consortium manager organizes and chairs the WPLC meetings. The WPLC
manages the coordination between the WPs. The WPLC has a mandate from the
Project Management Board (PMB) for all day-to-day management.
The PMB members and task managers, even if not WPLC member, are welcome at the
WPLC meetings.
| Role | Person |
|---|---|
| Scientific lead | David Langley, TNO |
| Consortium manager | Paul Tilanus, TNO |

Table 2: WPLC members in addition to the WP leaders

## 2.3. Project Management Board (PMB)
The PMB consists of one representative of each partner. The current PMB-
members are listed in Table 3. The members of the PMB are referred to as
‘partner manager’.
The PMB takes all decisions that affect the direction of the project. The PMB
members are addressed for any issue, technical or administrative, concerning
that partner.
| Partner | Partner manager |
|---|---|
| TNO | Iris Blankers |
| DTI | Jeremy Millard |
| ZSI | Christian Voigt |
| TUDO | Bastian Pelka |
| IAAC | Massimo Menichinelli |
| FLZ | Roberto Vdovic |
| HLW | Karim Jafarmadar |
| AHHAA | Helin Haga |
| CIR | Jeremie Gay |

Table 3: Partner managers

## 2.4. MAKE-IT Advisory Board
The MAKE-IT Advisory Board (MAB) is a group of persons from outside the
project. The MAB will be consulted for important decisions that affect the
direction of research and/or are related to adoption of the results from the
MAKE-IT project. The MAB members are listed in Table 4.

| MAB member | Affiliation |
|---|---|
| Sherry Lassiter | MIT |
| Dale Dougherty | Maker Faire & Make Magazine |
| David Cuartielles | Arduino |
| Willem Vermeend | NL Smart Industry & IoT Academy |
| Katherine Stokes | NESTA |
| Tom Saunders | NESTA |

Table 4: MAB members
# 3\. Quality procedures and Code of Conduct
## 3.1. Internal communication structures & procedures
The Consortium Agreement (CA) specifies a number of rules for the governance of
the project. Though the CA, being a legal document that can be found on
SharePoint in _WP1 > Contracts_, takes precedence over this handbook, the
following describes the operational view of project meetings.
### 3.1.1. PMB Meetings
Every 6 months a PMB meeting will be scheduled. In principle the PMB meetings
will be collocated with the plenary workshops. If important decisions need to
be taken at PMB level, then an ad-hoc meeting can be scheduled.
The agenda will be distributed at least two weeks before the meeting. All
partner managers can enlist agenda items for the PMB meeting.
No minutes are taken at the PMB meetings, but decisions and actions of the PMB
are listed. These decisions and actions are shared with the WP leaders via the
consortium manager in the first WPLC meeting after the PMB meeting.
### 3.1.2. WPLC meetings
Every two weeks the WPLC has a conference call. The main purpose of these
meetings is the alignment of work between the WPs.
The agenda will be distributed at approximately one week before the meeting.
The decisions and action points of the WPLC meetings are communicated to all
PMB members by the consortium management via e-mail. For that purpose the
agenda of the WPLC meeting is extended, within two working days after the WPLC
meeting, with the actual participants list, the decisions and action points.
The extended agenda is shared on SharePoint ( _WP1 > Meetings > WPLC _ ). This
allows PMB members to react, e.g. if decisions are taken in a WPLC meeting and
a PMB member considers that decision to require PMB endorsement.
### 3.1.3. WP and task meetings
For meetings within the WP the WP leaders have full freedom to arrange them as
they wish. The only constraint will be the travel budget of the partners.
If a partner is not participating fully in the WP or task, and there is a risk
of that partner becoming a 'defaulting partner', as defined in the Consortium
Agreement, then the following steps will be taken.
* The manager of the task/WP will have a private discussion with the partner. The result will be recorded in an e-mail, sent in Cc to the consortium manager. In the unlikely case the WP leading partner is not fully participating, any partner in the WP can signal this to the Consortium manager, initiating the next step immediately.
* If this fails to produce the desired behaviour or if a WPL is not participating fully in the WP, the Consortium manager will have a private discussion with the partner. The result will be recorded in an e-mail, sent in Cc to the PMB.
* If this fails to produce the desired behaviour, the PMB starts the 'defaulting partner procedure' as defined in the Consortium Agreement.
## 3.2. External communication structures & procedures

The following key groups are identified in the external communication. In all
other cases the WPLC will propose how to proceed. Wherever there is a risk of
confidential information of any partner being published, the 'PMB check', as
described in section 3.3.1, has to be applied.
For all material used in the external communication, the quality
assurance/review procedures, as described in 3.3, apply.
### 3.2.1. MAB
All communication with the MAB members is coordinated by the scientific lead.
Support will be provided by those project members who already have a personal
relation with the MAB members and the consortium management.
### 3.2.2. EU
All communication with the European Commission (EC), and in particular with
the project officer, will be coordinated by the consortium management as
defined in Table 5 below:
| Role | Person |
|---|---|
| Consortium manager | Paul Tilanus, TNO |
| Consortium management support | Catelijne Rauch, TNO |

Table 5: Current consortium management
### 3.2.3. Related projects
Exchange of information with related projects will be coordinated by the
consortium management team (Table 5). Support can be provided by partners
already having personal relations with project members of the related project.
Project members should be aware that exchange of information with related
projects might require an NDA prior to the information exchange.
## 3.3. Quality of (non-)deliverables and peer review
Reviews are the key elements in the quality assurance of a project like MAKE-
IT. For the review process there is a distinction between review of
deliverables and the review of other material.
### 3.3.1. Deliverables
For deliverables good planning is possible, since a global description of the
content, the submission date and the partners working on it are set out in the
DoA. The review will be done in three stages:
* Structure or scope review
* Content review
* PMB check
Two independent reviewers are appointed by the WPLC for each deliverable, and
in principle both 2 perform the structure/scope and the content review.
Reviewers are considered independent when they are not authors of the
deliverable. Of course, others are free to review too, but the appointed
reviewers take on the quality assurance responsibility for the deliverable.
#### 3.3.1.1. Structure or scope review

The input for the structure review is the structure description of the
deliverable. The structure description consists of at least two levels in the
#### 3.3.1.2. Content review
The input to the content review is the full deliverable text; only supporting
parts – references, list of abbreviations and annexes – might still need
completion.
The **content review** starts at the latest **3 weeks before** the submission
date. **Review comments** are submitted to the deliverable editor **2 weeks
before** the submission date.
In general the content review contains four main attention areas.
* DoA coverage
  * Is the scope and the content of the deliverable consistent with the intention of the deliverable as stated in the DoA?
  * In case of deviations, are they fully and plausibly motivated?
  * Are the relations to other MAKE-IT work/deliverables clear? Deliverables are rarely produced in splendid isolation, so … a deliverable provides input to other work, or brings other work together, or …
* Target audience
  * Is the target audience clear?
  * In case of multiple target groups, is it clear what parts of the deliverable are intended for each audience?
  * Are the management summary, introduction and conclusions/recommendations at the level, and using the language, of the target audience? Note: The detailed content might be too detailed for all target groups, but not the sections mentioned above.
  * Are the conclusions fully backed by the preceding material (no "jumping to conclusions") and are recommendations actionable?
* Language and structure
  * Is the language used proper international English? Signal use of national variants – Dunglish, Gerlish, Itlish, … – and sociolects – legalish, techlish, ... In case of doubt, consult a native English speaker.
  * Is the text well-structured, e.g. using lists and tables where appropriate? Pages with a grey rectangle of text are suspicious :-)
  * Do chapters have a local introduction/purpose and local conclusions/recommendations?
  * Are illustrations and diagrams used to support the text where appropriate? If taken from external sources, is the attribution correct/complete?
  * Are references to literature included – sufficient but not overdone?
  * Is the terminology from the MAKE-IT glossary used as agreed (see also section 3.7)?
* Technical content
  * < For the editor/WP leader to guide the review process>
#### 3.3.1.3. PMB check
The PMB members receive the deliverable one week before the submission date.
They check that the deliverable does not disclose commercially sensitive
information of their organisation. If the deliverable contains material from
non-partners that is made available via their organisation, the PMB member
checks that the deliverable respects the confidentiality agreements made by
their organisation with the non-partners.
Note: the PMB check is not a classical review. It is an 'emergency brake' if
confidential material is about to be disclosed and this was not noted by
authors and reviewers.
Both submissions to reviewers are Cc-d by the deliverable editors to the
consortium manager. The submission to the PMB for the PMB check is done by the
consortium manager. Deliverables are uploaded and submitted by the consortium
management.
The timeline for deliverables is depicted in Figure 1.
Figure 1: Timeline of the review for deliverables
### 3.3.2. Non-deliverables

For non-deliverables, such as publications and dissemination material, the
procedure for deliverables will be used where applicable and with a timeline
that fits the material.
In all cases the WPLC is required to be informed via the WP leader about the
intention to publish MAKE-IT material as early as possible, with a minimum of
4 weeks. The WPLC will decide on the review procedure for that case. This is
enabled by WP leaders signalling planned academic publications or conference
contributions to the Scientific lead and signalling non-academic work to the
WP7 lead.
Since there are many types of material, this handbook cannot provide details
for all cases. We distinguish the following broad categories of material.
* Dissemination material (flyer, website, leaflets, popular science publications, …)
Default reviewer is the consortium manager, supported by one or more partner
managers.
* Scientific publication or conference presentation
Default reviewer is the scientific lead, supported by one or more partner
managers.
## 3.4. Risk management
In the GA the results of an initial risk assessment are listed. This is
considered the initial risk register.
When a partner or WP leader identifies
1. a new risk
2. a substantial rise of a risk, either because the chance of occurrence gets higher or the expected impact becomes bigger,
then this should be communicated with the consortium management as soon as
possible. At the latest at the next WPLC this risk, and potential measures,
will be discussed.
Periodically, approximately once every 3-4 months, the risk register will be
reviewed in the WPLC. On this occasion, risks that cannot occur any longer, or
became very small, will be removed. New risks can be added, with the
associated mitigating actions.
## 3.5. SWOT
A mid-term analysis of strengths, weaknesses, opportunities and threats (SWOT)
will be performed on the consortium team and the project. This will be done
during the plenary workshop in December 2016 and is to be used to refocus, if
needed, the project in the second project year.
The SWOT analysis is a structured planning method to evaluate the Strengths,
Weaknesses, Opportunities and Threats of a particular undertaking, be it for a
policy or programme, a project or product, or for an organization or
individual. It is generally considered to be a simple and useful tool for
analysing project objectives by identifying the internal and external factors
that are favourable and unfavourable to achieving that objective. Strengths
and weaknesses are regarded internal to the project while opportunities and
threats generally relate to external factors.
Strengths can be seen as characteristics of the project that give it an
advantage over others while weaknesses are regarded as characteristics that
place the team at a disadvantage relative to others. Opportunities comprise
elements that the project could exploit to its advantage whilst threats
include elements in the environment that could cause trouble for the project.
The project manager will communicate the results of the SWOT to the whole
consortium. The WPLC and the PMB will discuss and implement any measures that
might be needed to steer the project, as a result of the SWOT.
Figure 2: Template for the SWOT analysis
## 3.6. Project survey (incl. Responsible Research & Innovation - RRI)
Prior to the plenary workshops a short project survey, including questions
regarding Responsible Research and
Innovation (RRI, see chapter 5), will be sent to all project members by ZSI.
The questions will be discussed in the WPLC one month before the plenary
workshop. The idea of this survey is to identify room for improving the
cooperation within the project and awareness of the RRI principles.
## 3.7. Glossary/Definition of core concepts
During the kick-off meeting it was agreed that a glossary of relevant terms will be produced by WP2 (c.f. D2.1). The review of deliverables and other material will include a check that the terminology included in the glossary is used in a way that matches the glossary definition.
Though WP2 ends after June 2016, the glossary will be maintained as a living document. When needed, the WPLC can be requested to provide additional definitions of terms for consistent use within MAKE-IT.
## 3.8. Project templates
The MAKE-IT project intends to use a consistent ‘project style’. This is
implemented by providing templates for the deliverables, the presentations and
posters. More project style templates can be produced by WP7 when needed.
All available project style templates can be found on SharePoint in _WP1 > Templates_.
# 4\. Tools and collaboration infrastructure
## 4.1. Document sharing
One key element in a research project like MAKE-IT is
collecting/sharing/analysing information and the collaborative production of
reports on the results of the analysis.
For both purposes a SharePoint environment has been created (see Figure 3)
with URL: _https://ecity.tno.nl/sites/MAKE-IT/SitePages/Home.aspx_ .
Figure 3: SharePoint MAKE-IT > Home
Within this SharePoint environment directories are available for each WP and
all submitted deliverables. Furthermore, lists are maintained for project
members and external contact persons.
Partner managers should announce a new project member to the MAKE-IT SharePoint manager _Catelijne Rauch_. Name and e-mail address are sufficient for creation of the SharePoint access. All project members have to provide their contact details in the project member list.
If a project member leaves the project, this should also be reported to
Catelijne Rauch.
**4.2.**
**E-mail and telephone**
Day to day information exchange will be based on e-mail and telephone.
Basic rule for the exchange of information via e-mail: _never_ include a document larger than 50kB in an e-mail. Instead, provide in the e-mail a link to the document stored on SharePoint.
The available e-mail distribution lists are listed in Table 6.

| E-mail | Contains |
| --- | --- |
| [email protected] | All WP leaders, scientific lead, consortium management |
| [email protected] | All partner managers |
| [email protected] | All project employees working in WP1 |
| … | … |
| [email protected] | All project employees working in WP7 |
| [email protected] | All project employees working in MAKE-IT |

Table 6: Available e-mail distribution lists
Partner managers should announce a new project member to _Catelijne Rauch_ and
indicate the e-mail lists the new project member should be in. Project members
leaving the project will be deleted from the e-mail lists.
## 4.3. Online meetings
Online meetings, such as the WPLC meetings, will use ‘Skype for Business’.
This tool supports screen sharing, making it possible to discuss lists of
action points and decisions, presentations, etc.
Invitations for the meetings will include a link as shown in Figure 4\.
Figure 4: Link in a Skype for Business meeting request
Clicking this link joins the meeting; this requires only a suitable browser (on a Windows, Mac, Linux or Android based operating system).
## 4.4. Quarterly progress reports
One of the risks of working in a consortium is that one of the partners spends
a lot of effort without reaching a substantial result. To avoid this happening
without the WP leader and the consortium manager being aware, the effort of
each partner shall be reported every quarter.
The tool used for this monitoring is QPR, an Excel-based tool in which each partner reports the person months spent on each task in the recently closed quarter. Figure 5 shows a part of the Excel sheet.
Figure 5: QPR tool (partial)
The consortium management will consolidate all partner inputs. The WPLC checks whether the effort as reported is balanced with the outputs of that partner.
QPR timeline (a small date-computation sketch follows the list):
* The partner managers receive a request for QPR reporting on the first working day of the month after closing a quarter.
* The partner manager reports the effort at the latest on the 15th of the month after closing a quarter.
* The consolidated QPR report is available at the latest on the 22nd of the month after closing a quarter and will be on the agenda of the first WPLC after the 22nd.
# 5\. Responsible research and innovation (RRI)
## 5.1. What is RRI?
Responsible Research and Innovation (RRI) has been formulated and widely promoted as a guiding principle and policy concept by the European Commission to better align science with society and to meet the so-called grand challenges 3 .
The starting ground was laid in 2001 with the formulation of the “Science and Society Action Plan” to foster communication between science and society, which later, in 2007, was further shaped into the “Science in Society” programme in FP7. RRI as a concept was firstly mentioned in 2010, became an overarching strategic guiding principle in Horizon 2020, and was then further confirmed in the recent Rome Declaration on Responsible Research and Innovation in Europe 4 .
Although a rather young concept, RRI became an important umbrella term for principles that might not actually be new but which previously existed in isolation, in parallel. The formulation of the concept of RRI represents the approach of generating a holistic paradigm with different so-called key dimensions, which will be described in detail in the following.
As a guiding principle, RRI is “a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view on the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products” (Schomberg, 2013). Others’ definitions of RRI (c.f. Jacob et al., 2013; Owen et al., 2013) might slightly differ from Von Schomberg’s, but as described by Wickson & Carey (2014) the overall common accordance is that responsible research and innovation should
1. address significant socio-ecological needs and challenges,
2. actively engage different stakeholders,
3. anticipate potential problems, assess available alternatives and reflect on underlying values and beliefs, and
4. adapt according to these ideas.
Generally speaking, RRI is doing science and innovation with and for society by re-imagining the science-society relationship.
In other words, RRI is meant to provoke a paradigm shift among researchers and other stakeholders, such as civil society organisations, educators, policy makers and businesses, who actively take part in science and innovation developments.
According to the European Commission (Jacob et al., 2013), RRI comprises the following key dimensions 5 :
1. **Governance:** Governance of policymakers to prevent harmful or unethical developments in research and innovation
2. **Open Access**: Open access to research results and publications to boost innovation and increase the use of scientific results
3. **Ethics**: Research must respect ethical standards and fundamental rights to respond to societal challenges
4. **Gender**: Gender equality and, in a wider sense, diversity
5. **Public Engagement**: Engagement of all societal actors (researchers, industry, policy makers, civil society) in a reflective research process
6. **Science education**: Enhancement of current education processes to better equip future researchers and society as a whole with the necessary competences to participate in research processes
As can be seen in Figure 6, there are overlaps between these key dimensions and, overall, there are differences in their structure and layer. While some are rather narrow and concrete, others are broader and have a rather overarching function (European Commission, 2015) (such as the key dimension governance), and some remain on a rather abstract level. RRI and its key dimensions are an evolving concept, so the key dimensions are still subject to change. While some argue that the six key dimensions have to be complemented with a further two (European Commission, 2015), others claim that RRI should rather focus on process requirements (Kupper, Klaassen, Rijnen, Vermeulen, & Broerse, 2015).
In Figure 6, the two perspectives have been integrated for a better overview by the RRI Tools project 6 . While the inner circle shows the six key dimensions with their overlaps, the outer circle depicts the process requirements: **openness and transparency, anticipation and reflection, responsiveness and adaptive change, and diversity and inclusion**. In fact, the two perspectives complement each other in a constructive way: one focuses on the process of RRI, while the other puts forward policy agendas or visions. However, for better understanding and easy comprehension, we will put on the glasses of the six key dimensions, as they are more debated in scientific and public discourse.
Figure 6: Overview of key dimensions and process requirements of RRI according
to RRI-Tools project
The key dimensions can be perceived as a set of moral values that shall be
introduced in research and innovation. According to Kupper et al. (2015), for
RRI to become a success story and to provoke shifts in mentality, however, it
has to be based on further values such as democratic values regarding
participation and power, social and moral values regarding care for the future
and individual and institutional values of open-mindedness or receptiveness to
change.
In the following the six key dimensions will be described in more detail.
#### 5.1.1.1. Governance
Among the six key dimensions of RRI, governance has a slightly different
function compared to the others, as it is rather an organising and steering
principle that determines the success of all other RRI dimensions. In other
words, RRI relies on good governing structures for the promotion of RRI.
“Governing is any form of coordination that a stakeholder sets to foster and
mainstream (the process requirements and outcomes of) RRI within its own
organisation or in the interaction with other stakeholders” according to the
RRI Tools project 7 .
7\. _http://www.rri-tools.eu_
Governance methods range from foresight techniques (scenario studies, value
sensitive design, etc.), assessment (ethical committees, needs assessment,
technology assessment, etc.), agenda setting (consultation, co-creation, etc.)
to regulation (code of conduct, policies, funding guidelines, etc.).
Governance as an organising principle operates on different levels, from the level of funding agencies, which need to support the governance of RRI, to the level of institutional responsibilities. Organisations are called upon to set up RRI guidelines and policies and also to install the respective infrastructure and personnel support (e.g. RRI officers).
Currently, governance of RRI is rarely seen on a project level. The **MAKE-IT project** can be perceived as an attempt to tackle RRI on a project level. However, comprehensive RRI guidelines for projects are still missing, and this handbook will thus aim at meeting this need. It also has to be acknowledged that governance structures need to exist at least on the institutional level in order to be sustainable. On a project level, however, it makes sense to break down what RRI means specifically in this context and how RRI “can be done” in the project, since RRI is not a universal principle but a concept that needs adaptation.
#### 5.1.1.2. Open Access
In the narrower sense, open access is about enabling or giving the public access to research results and publications. It addresses only the final stage of research activity, the publication and dissemination phase. Open access, in this sense, is different from open science, open innovation and open data, although there are obvious overlaps.
For instance, in contrast to open access, open science implies opening up the whole science process in real time to the public, from choosing areas to investigate and formulating the research questions to choosing the methods, collecting data and finally discussing the results. Open science means democratising science and research through ICT.
To avoid confusion, in the following, we will refer to open access in the
narrower sense.
The value underlying open access is about democratising knowledge and removing barriers for the interested public, thus also empowering society. It enhances openness and transparency of the research process. The proposition is that open access is for the benefit of society but also for the benefit of research and innovation, as access for a more diverse range of stakeholders might contribute to the development of new knowledge and boost innovation potential. Furthermore, an argument often used to convince researchers is the fact that open access articles are cited more often than publications in traditional formats (Föger et al., 2016).
For some, open access means publishing in digital, online, free-of-charge publication formats, removing price barriers but not permission barriers (Gratis OA). For others, open access additionally means that literature shall be free of unnecessary copyright and licensing restrictions (c.f. RRI-Tools project).
There is a call for publicly funded research and innovation developments to be accessible free of charge for the public. In 2012, the European Commission proposed to all Member States that 60% of all scientific publications shall be open access by 2016, following the Green or Golden Road (c.f. chapter 7). With the launch of Horizon 2020 it has become mandatory to follow open access publication strategies (European Commission, 2012).
**MAKE-IT** will follow open access publication strategies and will also make
data available to the public at an earlier stage where suitable (c.f. chapter
7).
#### 5.1.1.3. Ethics
Ethics as a principle under the umbrella term of RRI has moved beyond fulfilling legal requirements and protecting objects of research. Certainly, complying with national and international standards and submitting proposals to ethics committees remains fundamental under this notion, but the principle of ethics is understood as a process, similarly to all other key dimensions, that urges researchers and stakeholders to question whether they comply with high moral standards and whether they ensure increased societal relevance and acceptability of research and innovation outcomes. Ethics shall thereby not be perceived as a constraint but rather as a guiding principle to help ensure high quality outcomes and to justify decisions.
The European Commission defines ethics as a key dimension of RRI as follows:
_“European society is based on shared values. In order to adequately respond
to societal challenges, research and innovation must respect fundamental
rights and the highest ethical standards. Beyond the mandatory legal aspects,
this aims to ensure increased societal relevance and acceptability of research
and innovation outcomes. Ethics should not be perceived as a constraint to
research and innovation, but rather as a way of ensuring high quality
results.” (p.4)_ 7
Ethics comprises three main aspects (European Commission, 2015):
1. Research integrity and good research practice: scientific misconduct and questionable research practices shall be avoided.
2. Research ethics for the protection of research objects (people, animals, and the environment). This is the aspect that is best developed in institutional guidelines as well as national and international laws and policies. It matches the traditional notion of ethics and is the aspect most referred to when speaking about ethics.
3. Societal relevance and ethical acceptability of research and innovation outcomes. This aspect is closest to the key dimension of ethics in the understanding of RRI as a cross-cutting principle. It relates to the grand challenges as formulated in the Lund Declaration in 2009 8 . In this sense, it is ethical if science and innovation contribute to facing and solving them.
Ethics further implies social justice and inclusion aspects: the widest range of societal actors and civil society shall benefit from research and innovation outcomes. In other words, products and services resulting from R&I activities shall be acceptable and affordable for different social groups. Researchers and innovators are asked to reflect upon the impact of their activities on “society” and to minimise potential negative outcomes.
Instruments that can be used to reflect upon potential negative and positive,
intended and unintended outcomes comprise, for instance, ELSI/ELSA tools
(Ethical, Legal and Social Implications/Aspects) and mechanisms for
multistakeholder/transdisciplinary processes of appraisal of ethical
acceptability. RRI is thus not “outsourced” to ethical committees but consists
in continuous reflective questioning.
Chapter 6 is dedicated especially to dealing with ethics in **MAKE-IT** .
#### 5.1.1.4. Gender
Gender equality means equal rights, opportunities, and responsibilities for both genders, so that individuals can exploit and realise their full potential independently of their sex.
Gender equality as a key dimension of RRI comprises two main aspects (European
Commission, 2015):
* The human capital dimension: Gender balanced teams in research and innovation and
* The science and innovation dimension: Inclusion and integration of gender perspectives in research and innovation content and process.
Firstly, to meet the human capital dimension of gender, emphasis shall be laid
on balanced research teams and gender balanced leading positions. This is
mainly a task for research and innovation institutions to set and follow
gender equality plans but also international research projects with different
institutions on board can emphasise gender balance for instance in the
compositions of advisory boards or key note speakers at conferences or panel
discussion boards. Promoting gender equality at all levels means contributing
to achieving excellence: Female scientists and innovators are given an
opportunity for promotion and making their voices heard. Furthermore, an
attractive work place with flexible and family friendly working conditions
might attract top-level female researchers (as long as household and family
tasks are mostly carried out by women).
Secondly, including gender in research and innovation activities as such, for instance in the formulation of the research question or in the selection of the data (collection), helps avoid gender bias in results. Output that is mainly based on a male perspective is not universally valid, since it cannot simply be transferred or adapted to the other half of the population. Gender bias is often unintentional, but making these biased perceptions, assumptions and presuppositions more explicit is one of the goals of gender as a key dimension of RRI.
The European Commission 9 underlines three objectives in Horizon 2020 in terms of gender balance in research and innovation activities:
1. fostering gender balance in Horizon 2020 research teams,
2. fostering gender balance in decision-making bodies (40% female in panels and 50% female participation in advisory groups) and
3. integrating sex and gender analysis in research and innovation.
Apart from the institutional change that is necessary to achieve equal participation in research and innovation activities, a research project can aim at addressing unconscious gender bias, e.g. the perception of women’s achievements in STEM (Science, Technology, Engineering, Mathematics), in the formulation of the research questions, and at analysing the breadth and depth of penetration of gender perspectives in research content. Furthermore, project members shall make sure that tasks and responsibilities are equally distributed and that both sexes are represented in advisory boards and other decision-making or consulting bodies. Similarly, it has to be made sure that there are also female first authors on research papers. All written materials, dissemination instruments, conceptual notions, reports, etc. should be critically analysed with gender-sensitive glasses on.
Gender analysis and gender monitoring throughout the project shall aim at
looking at both aspects of gender equality, at the human capital dimension
(where possible, apart from institutional conditions) and the research aspect
of gender (Föger et al., 2016).
In **MAKE-IT** we pay special attention to gender. On the one hand, gender is an aspect to be considered in the definition of the specific research questions, e.g. whether gender is equally represented in the Maker communities or there is a dominant gender, and whether gender influences governance structures in the different cases.
On the other hand, gender is also relevant when it comes to internal processes, such as the composition of research teams, of work package leaders and of advisory groups, the use of gender-sensitive language and the awareness of producing gender-sensitive content. We are aware of the current imbalance among the WP leaders, and less so in the advisory board, and will consider gender specifically in any new allocations.
In line with the Toolkit on Gender in EU-funded research (European Commission, 2009), MAKE-IT will strive to do gender-sensitive research. Gender as a research factor has to be addressed and taken into account particularly in the following project steps:
1. Research ideas and hypotheses: The main research questions have been formulated in the proposal. However, we will analyse and assess the relevance of gender in our research when specifying the research questions.
2. Project design and research methodology: As the toolkit suggests, the very moment research concerns humans, it has to differentiate between the two genders and analyse the gender-specific situation. In our research we will aim at representative data in the sense that both perspectives will be described.
3. Research implementation: Data-collection tools such as questionnaires, interview guidelines, etc. need to be gender-sensitive, use gender-neutral language and allow for differentiation between gender perspectives. In the data analysis we will pay particular attention to whether there are differences between males and females, for instance in the usage of FabLabs, in terms of artefacts that are produced, in terms of learning, etc.
4. Dissemination phase – reporting of data: We will use gender-neutral language in our publications. Furthermore, we will decide sensitively which visual materials to use. We will also aim at publishing gender-specific results.
#### 5.1.1.5. Science Education
Science education under the RRI umbrella is meant to meet several objectives
(European Commission, 2015; Föger et al., 2016):
1. To empower society to critically reflect and to improve their skills so they are able to challenge research, thus making them “science-literate” (in this sense, there is a great overlap with the key dimension of public engagement)
2. To enable future researchers and other societal actors to become good RRI actors
3. To make science attractive to children and teenagers with the purpose of promoting science careers, especially in STEM
4. To close the still significant gap between science and education.
Thereby, science education does not build on one-way communication channels but on channels that allow and encourage “the society” to talk back. According to the RRI-Tools project, RRI should be integrated at all levels of education, from primary to university level, and in different segments of education, i.e. formal, lifelong learning and informal learning activities. Inspirational
activities that make pupils reflect upon “good” research, its negative and
positive outcomes, about ethics and ethical dilemmas, gender inequalities,
etc. can have an empowering function. Other tools comprise courses in open
democracy, in co-design or co-research or “living labs” that enable
participants to shape the development of certain technologies or services.
In **MAKE-IT** we will particularly pay attention to activities that address
children and teenagers. Most FabLabs in our 10 case studies regularly offer
educational activities to young people and schools and the Maker movement has
started to get attention from schools and educational authorities.
#### 5.1.1.6. Public Engagement
Over the past two decades, the emphasis has shifted from the so-called “deficit” model, with its willingness to educate and inform about science through one-way communication channels, to public engagement, which means more elaborate and active involvement of citizens.
According to the International Association for Public Participation (Pearce, 2010), participation ranges from informing to active co-decision:
1. informing („…we will keep you informed…“)
2. consulting („…we will keep you informed, listen to and acknowledge concerns and aspirations, and provide feedback on how public input influenced the decision…“)
3. involving („…we will work with you to ensure that your concerns and aspirations are directly reflected in the alternatives developed and provide feedback on how public input influenced the decisions…“)
4. collaborating („…we will look to you for your advice and innovation in formulating solutions and incorporate your advice and recommendations into the decisions to the maximum extent possible…“)
5. empowering („…we will implement what you decide…“)
There is a vast range of tools and methods with different levels of
participation available, e.g. public consultations, public deliberations for
decision making, public participation in R&I processes.
The goal of opening up research and innovation processes to the public is to better meet the values, needs and expectations of society, and thus to improve R&I and to find solutions to the so-called grand challenges that society is facing (Cagnin, Amanatidou, & Keenan, 2012).
According to Föger et al. (2016), participation is not free of charge and cannot simply be “ordered”; such activities therefore have to be included in the budget allocation.
This key dimension of RRI is thus difficult to realise in **MAKE-IT**, but activities will be set up to involve children and teenagers.
## 5.2. RRI in MAKE-IT
The notion of Responsible Research and Innovation does not offer a checklist or one universal guideline on how to do RRI. It is also not in the spirit of RRI to have such set measures, as RRI is rather perceived as a process that requires continuous questioning and reflection. Thus, mechanisms have to be installed and embedded in the project to stimulate reflection within the consortium and to keep this reflection alive throughout the lifetime of the project.
We would like to point out that not all key dimensions are equally relevant for MAKE-IT. Apart from some projects that deal specifically with RRI, there are, to our knowledge, no projects that have installed RRI as a whole as a cross-cutting principle. Most projects address one or two key dimensions of RRI.
In MAKE-IT RRI principles will be implemented as far as possible and relevant,
whereby the responsibility for implementation and the monitoring will be
shared among all consortium members. WP leaders shall particularly pay
attention that RRI principles are reflected in their work package where
relevant.
To find out which key dimensions are particularly relevant for MAKE-IT, we conducted a workshop with all consortium members at the Kick-Off Meeting.
### 5.2.1. Results of The Hague workshop
In the framework of the Kick-Off meeting in The Hague in January 2016, we carried out a small workshop with all the consortium partners to gather ideas on what RRI means in this project and which RRI key dimensions might be of particular relevance. The workshop was meant to make the project partners familiar with the concept of RRI and to stimulate reflection and discussion on RRI themes.
After a short introduction to the concept of RRI and its key dimensions, the
partners were asked to note down on cards important aspects for MAKE-IT
related to any of the RRI dimensions and to cluster them accordingly.
Figure 7: Exercise on RRI dimensions
Under **Science education** , the partners mentioned two aspects: work in WP 2
in the conceptualisation and development of the methodological framework as
well as security issues in WP 5 relating to the use of machines.
When thinking about **Open Access**, the partners identified the need for documentation, open services and open data on fablabs.io. In WP 5, the IPR of fabricated parts could be an issue. Furthermore, the choice of journals in which the consortium publishes is an important aspect of open access (open access versus closed access journals).
With respect to **Governance**, a public policy in WP 2 was found useful, as well as privacy and data protection guidelines. Further, a potential impact on employability and employment was mentioned by the partners. In WP 5, the partners found that, in the technology and use scenarios, designers and makers share responsibility.
The **Ethics** key dimension of RRI seems to be the hottest topic in MAKE-IT
as it received the largest number of comments: There are security issues in
WP5 when people are exposed to fumes, for instance, when operating the
machines. Children as participants in Fablabs might be particularly affected
by any of these potentially harmful practices. As consortium partners we will
have to make sure to involve people in any of the research activities only
after brie.ng them and after their given consent. Partners posed the question
how to deal with liberty of expression when making use of digital fabrication:
the technology can be used for the good and the bad. On the one side, it
allows for convenient prosthetics, on the other hand, a gun can easily be
built. This is particularly relevant in WP 5. In action research (WP 4),
transparency and ethics are regarded as crucial to build on trust and
engagement. Giving access to all people irrespective of socio-economic class,
ethnic group or disability was another mentioned aspect (particularly relevant
in WP7 outreach activities). Social inclusion as a value to reach marginalised groups is considered very important.
Regarding **Public Engagement**, just one question was noted down: whether WP 4 technology innovation constitutes an example of social innovation.
The key dimension **Gender** shall be taken into account when deciding upon
how to approach and address stakeholders. In regard to gender the question is
whether to aim for proportionality or equality or both.
The results of the workshop served as a good starting point in the RRI
considerations in MAKE-IT. The exercise was particularly useful to sensitise
the project partners. Furthermore, it became clear that some key dimensions
are more important or relevant for MAKE-IT than others. The three core topics
that evolved were: Gender, ethics and open access.
In the following we will therefore concentrate on these key dimensions, which will be dealt with in more detail. However, the remaining three shall also remain in our mind-sets, as we would like to continuously stimulate reflection and discussion on RRI.
## 5.3. RRI management plan
In order to stimulate reflection and deliberation on Responsible Research and
Innovation and to keep these alive we have foreseen several instruments:
* Regular surveys: RRI-specific questions will be added to the regular management survey that is distributed a few days before each partner meeting. The questions will look at how the different key dimensions have been addressed in the past 6 months and what could be done to better address the respective key dimensions of RRI, or showcase lessons learned.
* RRI Self-Reflection-Tool: The RRI-Tools project has developed the so-called “RRI Self-Reflection-Tool”. It is an online tool for different stakeholder groups and for people with any level of knowledge of RRI. The tool is meant to provide food for thought to stimulate reflection on RRI key dimensions and process requirements. Participants can choose which questions they would like to reflect upon (since not all of them will be relevant) and receive suggestions at the end on how to further improve in terms of RRI. Further resources such as best practice examples, tools or literature will be recommended. In MAKE-IT we will invite the project partners to make regular use of the Self-Reflection-Tool.
* RRI reflections at Consortium Meetings: At every consortium meeting we would like to propose a short reflection on RRI issues and to discuss RRI topics based on the results of the Self-reflection-Tool and the experiences made by the consortium.
## 5.4. RRI instruments and tools
Our main instruments for implementing RRI are described in detail in the following sections. The main tools are:
* ethical guidelines, including forms for informed consent and the confidentiality agreement
* open data management plan
* RRI self-assessment tool and survey
# 6\. Ethical guidelines
Ethics is an integral part of responsible research, from the conceptual phase to the publication of research results. The MAKE-IT consortium is clearly committed to recognising potential ethical issues that may arise during the course of the project and has as such defined a set of procedures on how to deal with ethics in a responsible way.
The main aspects the project deals with in regard to ethics are the protection of identity, privacy, obtaining informed consent, and communicating benefits and risks to the involved target groups.
The studies performed in MAKE-IT may include data collection from individuals and organisations, both remotely and on site. In order to achieve the goals defined within the research tasks of the work programme, the consortium may collect personal data from study participants. Such data may include basic demographic data, responses to questionnaires or interaction data with technologies.
## 6.1. Data protection and privacy
During any data collection process, data protection issues involved with handling personal data will be addressed by the following strategies:
Volunteers to be enrolled will be exhaustively informed, so that they are able to autonomously decide whether they give their consent to participate or not. The purposes of the research, the procedures, as well as the handling of their data (protection, storage) will be explained. For online interviews these explanations will be part of the initial briefing of interviewees; for face-to-face interventions, informed consent (see below) shall be agreed and signed by both the study participants and the respective research partner.
The data exploitation will be in line with the respective national data protection acts. Since data privacy is under threat when data can be traced back to individuals (they may become identifiable and the data may be abused), we will anonymise all data.
The data gathered through questionnaires, interviews, observational studies at the workplace, focus groups, workshops and other possible data gathering methods during this research will be anonymised, so that the data cannot be traced back to the individual. Data will be stored only in anonymous form, so the identities of the participants will be known only to the research partners involved. Raw data such as interview protocols and audio files will be shared among the consortium partners only after the confidentiality agreement (see Annex II) has been signed. Reports based on interviews, focus groups and other data gathering methods will be based on aggregated information and comprise anonymous quotations respectively.
The collected data will be stored on password-protected servers at the partner institution responsible for data collection and analysis. The data will be used only within the project and will not be made accessible to any third party. It will not be stored after the end of the project (incl. the time for final publications) unless required by specific national legislation.
The stored data do not contain the names or addresses of participants and will
be edited for full anonymity before being processed (e.g. in project reports).
## 6.2. Communication strategy
Study participants will be made aware of the potential benefits and identified risks of participating in the project at all times.
The main means of communicating benefits and risks to the individual is the informed consent. Prior to consent, each individual participant in any of the MAKE-IT studies will be clearly informed of the study's goals, its possible adverse events, and the possibility to refuse to enter or to withdraw at any time with no consequences. This will be done through a project information sheet or the informed consent form, and it will be reinforced verbally.
In order to make sure that participants are able to recall what they agree
upon when signing, the informed consent forms will be provided in the native
language of the participants. In addition, the consortium partners will make
sure that the informed consent forms are written in a language suitable for
the target group(s).
## 6.3. Informed consent
As stated above, informed consent will be collected from all participants involved in MAKE-IT studies. An English version of the declaration of consent form is provided in Annex I of this document.
## 6.4. Relevant regulations and scientific standards
The consortium follows European regulations and scientific standards to perform ethical research. The following lists some of the basic regulations and guidelines.
The MAKE-IT project will fully respect citizens’ rights as reported by the EGE and as proclaimed in the Charter of Fundamental Rights of the European Union (2000/C 364/01), having as its main goal to enhance and foster the participation of European citizens in education, regardless of cultural, linguistic or social backgrounds. Regarding the personal data collected during the research, the project will make every effort to heed the rules for the protection of personal data as described in Directive 95/46/EC 10 .
In addition, the consortium follows the following European Regulations and
Guidelines:
1. The Charter of Fundamental Rights of the European Union: _http://www.europarl.europa.eu/charter/default_en.htm_
2. The European Convention on Human Rights: http://www.echr.coe.int/Documents/Convention_ENG.pdf
3. Horizon 2020 ethics self-assessment: _http://ec.europa.eu/research/participants/portal/doc/call/h2020/h2020msca-itn-2015/1620147-h2020_-_guidance_ethics_self_assess_en.pdf_
4. The EU Code of Ethics: _http://www.respectproject.org/ethics/412ethics.pdf_
5. The European Textbook on Ethics in Research: https://ec.europa.eu/research/sciencesociety/document_library/pdf_06/textbook-on-ethics-report_en.pdf
6. European data protection legislation: _http://ec.europa.eu/justice/data-protection/index_en.htm_
7. The RESPECT Code of Practice for Socio-Economic Research: _http://www.respectproject.org/code/index.php?id=de_
8. The Code of Ethics of the International Sociological Association (ISA): _http://www.isasociology.org/about/isa_code_of_ethics.htm_
### 6.4.1. National and Local Regulations and Standards
In addition to the more general and EU-wide guidelines, project partners have to adhere to, and respect, national regulations and laws, as well as the research-related organisational ethical approval requested by their own institutions. All partners are aware of their responsibilities in that respect and will follow the respective guidelines.
# 7\. Open access and open research data
The project firmly believes openness to be a major factor for innovation. There are many examples of how open innovation and open source are successful models, especially in domains where many different stakeholders are required to bring about effective change. Openness has many facets. The most important ones for the MAKE-IT consortium are, following Carlos Moedas’s (European Commissioner for Research, Science and Innovation) strategy of the 3 Os, Open Science, Open Innovation and Open Data 11 :
1. **Open project collaboration**. All partners are committed to developing (working) relationships with external partners for mutual benefit. Making contact with similar projects and establishing collaboration is considered beneficial for all. Open collaboration in MAKE-IT is understood in a trans-disciplinary way, opening research processes to the wider public and allowing new forms of collaboration as intended in the action research stream of the project.
2. **Open source technology**. From a technology perspective, the project builds upon open source technologies, such as the CAPs for Makers (especially fablabs.io), and wants to share its results with the community. Business models and exploitation strategies are not based on locking down access to project results, but on providing added value through services.
3. **Open access to scientific results**. From a scientific perspective, the consortium clearly favours open access to its scientific output, which is supported by several project members’ internal policies of supporting open access in general.
4. **Open access to research data**. MAKE-IT is part of a pilot action on open access to research data and is thus committed to providing access not only to project results and processes, but also to data collected during that process. The general policy of the MAKE-IT project is to apply “open by default” to its research data, with exceptions being made based on privacy, competitiveness and the relationship between researchers and cases; the ethical rules on anonymity described above (chapter 6) are thus highly relevant and need to be agreed with each of the case participants.
MAKE-IT is part of the H2020 pilot action on open access to research data and has started to develop a first data management plan. The open access strategy will be detailed in the following sections.
11\. _https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf_
## 7.1. Open access strategy for publications
In line with the EC policy initiative on open access 12 , which refers to
the practice of granting free online access to research articles, the project
is committed to follow a publication strategy considering a mix of both 'Green
open access' (immediate or delayed open access that is provided through self-
archiving) and 'Gold open access' (immediate open access that is provided by a
publisher) as far as possible.
All deliverables (reports, software, data, media, other) labelled as “public” will be made accessible via the MAKE-IT website (make-it.io). The publications stemming from the project work will also be made available on the website, as far as this does not infringe the publishers’ rights, as well as on the OpenAIRE platform _https://www.openaire.eu/_ .
All outcomes of the project labelled as “public” will be distributed under a specific free/open license, where the authors retain their rights but users can redistribute the content freely. The following are a few relevant sources for deciding on the specific license for each outcome:
* Data:
  * Definition of Open Data: _http://opendefinition.org/_
  * Licenses: _http://opendefinition.org/licenses/_
* Software:
  * Free Software definition: _http://www.gnu.org/philosophy/free-sw.html_
  * Free Software licenses: _http://www.gnu.org/licenses/licenses.html_
  * Open Source definition: _https://opensource.org/osd-annotated_
  * Open Source licenses: _https://opensource.org/licenses_
* Reports, publications, media:
  * Creative Commons explanation: _https://creativecommons.org/about/_
  * Licenses: _https://creativecommons.org/licenses/_
  * Choose a license: _https://creativecommons.org/choose/_
## 7.2. Data management plan (DMP)
This is a first version of the DMP for MAKE-IT, which provides an analysis of the main aspects to be followed by the project’s data management policy. The DMP evolves in the course of the project and will be updated accordingly as research data is collected. The data management plan will be facilitated by the DMP online tool 14 . Consortium partners can either register and fill in the requested information directly in the tool in several iterations throughout the duration of the project or contribute to the template that will be developed based on the tool.
At the time of writing it is expected that the project will produce the
following data:
1. WP2: aggregated datasets for trend analysis
2. WP3: case study data from interviews, workshops, questionnaires, etc.
3. WP4: case study data from surveys, platform data (e.g. from fablabs.io or happy lab platform), social media data and observational analysis.
4. WP5: platform usage data from Maker CAPs, such as fablabs.io
5. WP6: analysis of existing data, collected through the other research work packages
6. WP7: data from other CAPs regarding Dissemination, Exploitation, Communication of the MAKE-IT project
This initial list includes primary (empirical) and secondary (desk research, aggregated) data. For the currently identifiable primary research data sets that the project will produce, we follow the requested template description as defined by the European Commission 15 (Table 7):
| Data set reference & name | Data set description | Standards & metadata | Data sharing | Archiving & preservation |
| --- | --- | --- | --- | --- |
| DOI_1 MAKE-IT_TwitterAggregate_X | Aggregated Twitter feeds collected for the trends analysis of WP2; this will be included in Deliverable D2.1 as well as in an academic publication. The data will only show links (Twitter IDs), which will allow authors to delete their tweets at any time. | Twitter's Developer Agreement & Policy: _https://dev.twitter.com/overview/terms/agreement-and-policy_ | Twitter's Developer Agreement & Policy applies. Most tweets are public, but researchers are not allowed to republish any information that links back to the user or his/her location. Following this policy and internal ethical guidelines, only aggregated data will be made available; user and location will be omitted. Authors need to keep control of their tweets: if they delete a tweet or go private they express their wish not to be analysed, but if they are part of an archive this wish isn't respected (_https://twittercommunity.com/t/twitter-and-open-data-in-academia/51934/4_). | The aggregated data (links to Twitter IDs) supporting the publication will be made available on the project website for at least 5 years after project end. |
| DOI_2 MAKE-IT_Survey_X | Survey data collected at the different cases (possibly in WP3 and WP4); the data will be anonymised and will refer to aspects covered in the three core pillars of the project: collaboration, governance, value creation. | As indexed on the sharing platform, e.g. Zenodo: publication data, DOI, keywords, collections, license, uploaded by the Consortium. | Shared on Zenodo, an open digital repository; the license will most probably be Creative Commons Attribution Share-Alike. | Zenodo is developed by _CERN_ under the EU FP7 project _OpenAIREplus_ (grant agreement no. 283595); the service is currently free. Zenodo is working on a sustainability plan to deliver an open service in the future; if this is not the case, MAKE-IT will make the data accessible via its website for at least 5 years after project end. |
| DOI_3 MAKE-IT_Interview_X | Interviews conducted with individuals associated with any of the cases to be studied (WP3 and WP4); these need to be stored anonymously. Depending on the interviews and the specific cases, the data may be in the following formats: 1. audio files, 2. transcripts, 3. aggregated files, 4. interview guidelines. | As indexed on the sharing platform, e.g. Zenodo (see above). | Shared on Zenodo, an open digital repository; the license will most probably be Creative Commons Attribution Share-Alike. | MAKE-IT will possibly make use of Zenodo (see above). |
| DOI_4 MAKE-IT_MachineUsage_X | Usage of machines in the labs/maker spaces (if available); this data can include information about check-in, check-out, usage time, material and gender. It will be part of the case studies (WP3 and WP4), depending on agreements with the lab (and possibly their users). | As indexed on the sharing platform, e.g. Zenodo (see above). | Only in clear agreement with the organisations providing the data; shared on Zenodo, an open digital repository; the license will most probably be Creative Commons Attribution Share-Alike. | MAKE-IT will possibly make use of Zenodo (see above). |
| DOI_5 MAKE-IT_PlatformUsage_X | Platform usage data from fablabs.io (anonymous data); the data includes communication patterns, usage patterns, uploads, downloads, etc. | As indexed on the sharing platform, e.g. Zenodo (see above). | Shared on Zenodo, an open digital repository; the license will most probably be Creative Commons Attribution Share-Alike. | MAKE-IT will possibly make use of Zenodo (see above). |

Table 7: Currently identifiable primary research data sets

14\. _https://dmponline.dcc.ac.uk/_
15\. _https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
To summarise, the main open access points for MAKE-IT data, publications, and
innovation are:
* The project website: _www.make-it.io_
* Zenodo: _http://www.zenodo.org/_
* OpenAIRE _https://www.openaire.eu/_ for depositing publications and research data
## 7.3. Open access and open data handling process
The internal procedures to grant open access to any publication, research data or other innovation stemming from the MAKE-IT project (e.g. technology) follow a lightweight structure, while respecting ethical issues at all times.
The main workflow starts at the WP level, where each team is responsible for respecting ethical procedures at all times during the data gathering and processing steps. The WP team members are also responsible for anonymising any data, if applicable. For any publication the WPLC needs to be informed; agreement has to be reached within the WP for making any outcome openly available; the final approval is given by the PMB (see Figure 8):
Figure 8: Open Access workflow
Finally, it should be stressed that due to the nature of the Project, the Data
Management Plan has to be revised during the course of project activities,
especially those related to action research. Due to the open nature of this
type of research it is not possible to clearly specify all data sources and
collected outcomes from the beginning.
# 8\. Conclusions
This handbook describes the main procedures of the MAKE-IT project to operate
successfully and effectively in order to achieve high quality project results
following a responsible research and innovation (RRI) approach. Open access,
ethics, and engagement of all societal actors are amongst the key elements of
the European RRI framework (European Union, 2012). MAKE-IT is clearly
committed to respond to societal challenges in a responsible way by the
research topic itself as well as by the way the research is conducted.
While this handbook is provided in the form of a report and deliverable, it is a living document in the sense of being updated and challenged by the consortium in the course of the project. The processes described here are implemented in the daily work of the consortium, and most of the elements (e.g. the forms for informed consent, the data management plan, etc.) are separately available on the collaboration infrastructure, such as SharePoint.
The management reports will include updates on any crucial changes to the handbook as well as on the results of specific measures such as the SWOT analysis or any additional elements added to the project structure related to high quality responsible research.
# Executive Summary
CENTAUR was part of a pilot on open access being run within the H2020 research
program. As part of the pilot, CENTAUR was required to produce a Data
Management Plan. The H2020 research program is promoting open access of data
and publications as the European Commission believes that the wide
availability of data will lead to optimal use of public funding by reducing
duplication and encouraging and supporting future research and innovation in a
cost efficient manner. CENTAUR was an innovation project rather than a
research and development project. The project’s Data Management Plan attempts
to follow the principle of open data access whilst accepting the need for
confidentiality to address privacy needs to protect personal data, and to
provide for Intellectual Property Rights (IPR) protection and the commercial
confidentiality of the partners, especially for the non-University partners
who have contributed financially to the project activities. These constraints
and how the partners acted regarding these constraints were clearly set out in
the Project Consortium Agreement. The Data Management Plan now describes how
the consortium managed the competing needs of the partners with the
aspirations of the European Commission.
The Data Management Plan addresses how the partners collected data, catalogued it and, when appropriate, made it available on an open access basis during and after the project. The plan also described the review mechanism the consortium used to ensure that as much as possible of the data collected during the project was made available as soon as was practicable. All academic publications from the project were made available in an open access repository.
The Lead Beneficiary provided facilities for storage of open access data and
archived this data and deposited it in an enduring open access data repository
before the end of the project.
The Data Management Plan was reviewed at each General Assembly meeting. A
revised plan was issued annually. Each revision of the Data Management Plan
listed the open access data sets and also the data that was held confidential
and the reason for this categorisation was also described. This approach was
intended to provide an appropriate balance between the aspiration for open
access data and the need to retain some data within the consortium to support
effective market replication and exploitation so that public benefit, in terms
of jobs growth and enhanced flood protection, could be obtained via readily
available CENTAUR systems.
**_CONTENTS_**
Executive Summary
1 Introduction
1.1 Partners Involved in Deliverable
1.2 Project Details
1.3 Project Summary
2 Policies
3 Data Collection, Documentation Sharing and Storage
3.1 Overview
3.2 Data handling during and after the project
3.3 Summary of data being collected, processed and generated
3.3.1 Flow survey data for development of the dual drainage model
3.3.2 Virtual testing simulation data
3.3.3 Laboratory testing
3.3.4 Coimbra/Veolia pilot and demonstration testing
3.3.5 Flow Control Device design data
3.3.6 LMCS design data
3.3.7 Site selection methodology and results
4 Legal and Ethical Compliance
5 Long Term Storage and Archiving
6 Data security
7 Summary
8 References
Appendix A. Register of Completed Datasets
# 1 Introduction
## 1.1 Partners Involved in Deliverable
USFD – this deliverable has been drafted by USFD and has been commented on by
all partners in the CENTAUR consortium.
## 1.2 Project Details
CENTAUR - Cost Effective Neural Technique to Alleviate Urban Flood Risk
Funded by: European Commission – Contract No. 641931
Start Date: 01 September 2015
Duration: 36 months
Contact Details: [email protected]
Co-ordinating Institution: University of Sheffield
Website: _www.sheffield.ac.uk/centaur_
## 1.3 Project Summary
The project developed a radically new market-ready approach to real time
control (RTC) to be used within sewer networks with the aim of being able to
reduce local flood risk in urban areas in a highly cost effective manner.
Existing RTC projects (e.g. in the cities of Vienna, Dresden, and Aarhus) are
characterised by complex sensor networks, linked to high cost centralised
control systems governed by calibrated hydrodynamic modelling tools and often
fed by high cost and complex radar rainfall technology. Such systems are
expensive and complex to install and operate, requiring a high up-front
investment in new infrastructure, communication equipment and control systems,
and require highly trained staff. In contrast, CENTAUR has developed a
novel low cost decentralised, autonomous RTC system. The concept is to be able
to install such low cost RTC systems in existing infrastructure and for these
to require low levels of maintenance and staff input. During the project the
CENTAUR system was installed, tested and demonstrated in two networks, a
combined sewer network in Coimbra, Portugal and a stormwater network in
Toulouse. This RTC approach utilised data driven distributed intelligence
combined with local, low cost monitoring systems installed at key points
within existing sewer infrastructure. The system utilised mechanically robust
devices to control flow in order to reduce flood risk at vulnerable sites.
This system was informed and its control governed directly by sensors
distributed within the local network, without the need for an expensive
hydrodynamic model or real time rainfall measurements. The system delivered
many of the benefits of existing RTC systems, but avoided the high costs and
complex nature of extensive sensor networks, centralised control systems,
communications systems and infrastructure modifications. The developed system
has therefore proven to be of significant benefit to operators of small to
medium sized sewer networks, because of its low up-front capital cost and its
high cost benefit when used to control localised flooding.
# 2 Policies
The project participants at all times met their obligations on access rights
and non-disclosure of data as set out in the project Consortium Agreement.
Nothing in the Data Management Plan removed any rights or obligations set out
in the Consortium Agreement.
The project aimed to follow the H2020 guidelines as regards open access and
data management and also adhered to the principles of the data management
policy of the coordinating institution, the University of Sheffield.
H2020 Guidelines:
_https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf_
University of Sheffield Guidelines on the Management of Research Data:
_http://www.sheffield.ac.uk/library/rdm_
The Data Management Plan was reviewed by all partners at each General Assembly
meeting and a revision was re-issued every 12 months.
# 3 Data Collection, Documentation Sharing and Storage
## 3.1 Overview
The European Commission has recognised that to achieve the best return for
their funding of research and development activities, any of the resultant
research data should be capable of re-use. This is best achieved by making
data and publications openly accessible. The data from CENTAUR was made openly
accessible, subject to any constraints set out in the Consortium Agreement on
data ownership and its use by other parties. These constraints related to
compliance with any national legal requirements (e.g. Personal Data), the
protection of IPR and commercial confidentiality in order to achieve effective
market replication and exploitation of the CENTAUR technology and supporting
knowledge developed during the project.
Subject to the above constraints, data created within the project was made
available once it had been processed into a final format, organised and
catalogued, and was free of error. Appendix A contains a table of all completed
data sets, including whether the data is open access or not. Partners used
current best practice in terms of data collection, processing and validation
and ensured that sufficient resources were made available from the project
funds to complete these tasks. An adequate description of the context,
measurement and processing methods was also made available for the data that
was made publicly available. Detailed information was linked to each open
data set so that it was clear how it was structured. Adequate documentation
was also provided so that the open data sets were searchable by a third
party. Each open access data set included information on the sensors used,
their calibration and validation, and the file and parameter naming
conventions.
The co-ordinator listed the available open access data sets.
The co-ordinator hosted the open access data electronically and transferred
all accessible open access data to Zenodo (_http://www.zenodo.org_), an
enduring open access repository, before the completion of the project. An open
access software tool, used to identify potential locations for the
installation of a CENTAUR system, was stored on GitHub (_https://github.com/_),
making it readily available.
The peer-reviewed scientific publications arising from the work in CENTAUR
followed the requirements set out in the Grant Agreement and Consortium
Agreement. They were all openly accessible as this was a requirement of the
Grant Agreement. All publications were stored in an OpenAIRE compliant
repository and listed on the CENTAUR page in the OpenAIRE portal. The co-
ordinator also listed the details of all publications on the project website,
along with links to access the publications.
Appendix A of the Data Management Plan lists the completed data sets produced
during the project. It also lists those data sets that are open, and those
that are restricted to the members of the consortium along with the reason why
any data set had been restricted. Data sets were only restricted for one of
three reasons: to comply with national regulations for the protection of
personal data; to protect IPR for future exploitation; and for data that was
commercially confidential and the release of which would be financially
damaging to a partner.
After generating a data set, partners were required to list it in Appendix A of
the Data Management Plan and state whether the data was to be open access; if
this was not possible, the reason why the data set was not to be open access
was given. These decisions were reviewed periodically at subsequent General
Assembly meetings. If any objection was raised as to the status of a data set,
it was discussed at a General Assembly and a final decision on the status of
the data set was taken by the General Assembly following the decision-making
process described in the Consortium Agreement.
## 3.2 Data handling during and after the project
The project data was collected or generated primarily by the University of
Coimbra (UoC), University of Sheffield (USFD) and later in the project by
Veolia. These partners were supported in the field data collection by
Environmental Monitoring Solutions (EMS). Steinhardt generated data on the
system design; EAWAG generated simulation data. Aguas de Coimbra were involved
in the data being collected by UoC, but did not generate any data themselves.
As a general principle, the primary responsibility for storage and handling of
the data lay with the partner originally collecting it. An understandable data
structure was always used for any data collected. For field and laboratory
data collection, filenames incorporated the date of collection and where
appropriate the sensor id. This information was then linked to a spreadsheet
providing further details, including any calibration parameters and comments
on any issues affecting data quality. For both laboratory and virtual testing
datasets the filename incorporated the date of the test or simulation and/or
number of the test or simulation. The date and run number was linked to a
spreadsheet summarising the testing carried out and including the relevant
parameters for the test or simulation run. For field data, the datasets
covered a longer period, hence the filename included both start and end date
if applicable, but otherwise conformed to the same basic standards as the
laboratory and virtual testing data sets.
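As an illustration, the following is a minimal sketch of how such file names could be composed; the sensor identifier and the exact date pattern are hypothetical, since the plan describes the convention only in prose:

```python
from datetime import date
from typing import Optional

def field_filename(sensor_id: str, start: date, end: Optional[date] = None,
                   ext: str = "csv") -> str:
    """Build a file name embedding the sensor id and collection date(s),
    in the spirit of the convention described above."""
    if end is None:  # single-date laboratory or virtual test
        return f"{sensor_id}_{start:%Y%m%d}.{ext}"
    # field datasets covering a longer period include start and end dates
    return f"{sensor_id}_{start:%Y%m%d}-{end:%Y%m%d}.{ext}"

# e.g. field_filename("PT03", date(2017, 3, 1), date(2017, 5, 31))
# -> 'PT03_20170301-20170531.csv'
```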
Key metadata was stored alongside the data; for field and laboratory
measurements this included calibration data, sensor details, sensor location,
and details of the tests that were carried out. For virtual testing, the
metadata included information on the hydrodynamic model, the version of the
algorithm, the parameters used and the rainfall event(s) run. All data was
checked prior to storing; these checks were primarily ‘sense checks’ such as
mass balances and, where practical, cross-checking data between sensors for
consistency.
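For illustration, the sketch below shows a mass-balance ‘sense check’ of the kind described, comparing cumulative volumes recorded by two sensors; the 5% tolerance and the units of the series are assumptions, not project specifications:

```python
def mass_balance_ok(inflow_m3, outflow_m3, storage_change_m3=0.0,
                    rel_tol=0.05):
    """Check that total inflow ~= total outflow + change in storage,
    within a relative tolerance (5% here, an assumed value)."""
    total_in = sum(inflow_m3)
    total_out = sum(outflow_m3) + storage_change_m3
    if total_in == 0:
        return total_out == 0
    return abs(total_in - total_out) / total_in <= rel_tol

# e.g. mass_balance_ok([1.2, 0.9, 1.1], [1.0, 1.0, 1.1])  # -> True
```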
Data was backed up on a regular (weekly) basis, with the backup stored at a
different site by the partner that had collected it.
Some of the data was useful to other partners and was shared as needed via the
project’s user-controlled Google Drive folder. This data store was provided by
the University of Sheffield, was password protected, and provided an
appropriate level of protection for data used within the project. The folder
was managed by the Project Co-ordinator.
It was the responsibility of the partner collecting the data to deem it open
access, restricted to within the consortium, or restricted to within the
partner organisation, following the principles outlined above. For datasets
deemed suitable for open access, the Project Co-ordinator worked with the
partner that generated the data and organised its placement in the Zenodo
data repository, which enabled the data set to be assigned a unique DOI
(Digital Object Identifier). The data was linked to the CENTAUR community on
Zenodo (_https://zenodo.org/communities/centaur/_) and to the CENTAUR page on
OpenAIRE
(_https://www.openaire.eu/search/project?projectId=corda__h2020::a468749db757b4bb290b04b284706d8a_).
The project co-ordinator ensured that the data sets uploaded to this
repository were quality checked and organised in a structured manner that
allowed third parties to search and use the data. Discoverability of the data
sets was ensured by including a clear abstract/description and relevant
keywords within the Zenodo record; any publications referencing the data would
use the DOI. Keywords included the project name acronym, keywords listed in
the DoA and any additional keywords specific to the data set (e.g. laboratory
water depths). Software tools were deposited on GitHub, where version control
is a core feature of the platform.
After the project finished, the co-ordinator collected all the internal data
on the project’s Google Drive folder, archived it and will store it in an
institutional secure storage area for a period of at least 5 years. This is
the definitive record of the project data; the Google Drive service is
subscribed to institutionally by USFD, hence there are no direct costs
associated with the project. This data will be available to project partners
for this 5-year period, during which any follow-on publications or studies are
most likely to be completed. There is no need for data recovery as the Google
Drive is mirrored across multiple sites; accidental deletion is very unlikely
as files are moved to a ‘trash’ folder and only completely deleted if
subsequently removed from the trash folder.
Appendix B includes an example of the information that has been included as
part of the metadata for the Open Access data sets. The open access data is
expected to be useable for the foreseeable future after the project ends; the
repository used is free to use, hence no costs are involved, and it is
publicly funded so is expected to be enduring.
## 3.3 Summary of data being collected, processed and generated
A number of separate datasets were generated during the CENTAUR project. The
majority of datasets had common features in that the parameters recorded
related to flows and depths in sewer pipes, or on an urban catchment surface,
or to the status of the flow control device; these data sets were time series
collected at a single location. The other types of data are the Steinhardt
flow control device designs, the EMS LMCS designs and specifications, and the
data created by EAWAG from the use of site selection methodologies and tools.
### 3.3.1 Flow survey data for development of the dual drainage model
**3.3.1.1 Purpose**
To calibrate and verify the dual drainage model of the Coimbra pilot study
catchment.
#### 3.3.1.2 Relation to project objectives
A calibrated dual drainage model was required to allow the performance of the
urban drainage network to be better understood and allow selection of a site
to install the flow control device for pilot testing. The model was used in
virtual testing to assess the performance of the flow control device (see
3.3.2).
**3.3.1.3 Timescale**
Winter 2015 and Spring 2016.
#### 3.3.1.4 Types and formats
Observational data from installed pressure transducers and flow monitors. The
data was stored uncompressed and unencrypted in ASCII and/or spreadsheet
formats.
#### 3.3.1.5 Methodologies and standards
Data collection and analysis was guided by the document ‘A guide to short term
flow surveys of sewer systems’ (WRc, 1987).
#### 3.3.1.6 Access to data and required metadata
This data was not made accessible, as the associated metadata required to make
the data reusable includes location details of the sewerage network, which is
the confidential property of the water company which owns the sewer network.
This data can also be used to identify the flood risk of individual
properties, and its release could therefore have a significant financial
impact on individuals. This data was retained securely by the partners that
collected and initially used it (EMS and UoC). It was shared with USFD and
EAWAG, as they required it to complete their tasks. This sharing was done via
password protected files and via the password protected project Google site
folder. The key metadata included the locations of the data collection,
information on the surrounding drainage network, the sensor specifications and
calibration details. This information was stored alongside the stored flow
data.
### 3.3.2 Virtual testing simulation data
#### 3.3.2.1 Purpose
To develop and test the CENTAUR control algorithm using previously calibrated
hydrodynamic sewer network models.
#### 3.3.2.2 Relation to project objectives
Prior to implementing the flow control device on an operational sewer network
it was tested both in the laboratory and using hydrodynamic models to confirm
that the control algorithm was stable and safe.
**3.3.2.3 Timescale**
From Spring 2016 until Spring 2018.
#### 3.3.2.4 Types and formats
Simulation data from calibrated hydrodynamic models. The data was stored
uncompressed and unencrypted in ASCII and/or spreadsheet formats.
#### 3.3.2.5 Methodologies and standards
The models were produced in accordance with the ‘Code of Practice for the
Hydraulic Modelling of Sewer Systems’ (WaPUG, 2002).
#### 3.3.2.6 Access to data and required metadata
This data was not made accessible as the associated metadata required to make
the data re-usable included details of sewerage networks, which is the
confidential property of the water companies which own the sewers. This data
can also be used to identify the flood risk of individual properties. This
data was retained securely by the partners that collected and used it (EMS and
UoC). It was shared via password protected files and via the password
protected project Google site folder. The key metadata included details of the
network model, the version of the software and the model calibration
parameters used in the simulations. This information was stored in a
spreadsheet format alongside the results produced.
### 3.3.3 Laboratory testing
**3.3.3.1 Purpose**
To test the CENTAUR flow control device hardware and the control algorithm.
#### 3.3.3.2 Relation to project objectives
Prior to implementing the flow control device on an operational sewer network
it was tested both in the laboratory and using hydrodynamic models to confirm
that the control algorithm was stable and safe and that the hardware was
reliable and operated as expected for the pilot study.
**3.3.3.3 Timescale**
Summer 2016 to Autumn 2017
##### 3.3.3.4 Types and formats
Experimental data from the laboratory test facility constructed for CENTAUR.
The data was stored uncompressed and unencrypted in ASCII and/or spreadsheet
formats.
##### 3.3.3.5 Methodologies and standards
There are no relevant standards, however the data was collected by calibrated
sensors and checked for consistency before being accepted.
##### 3.3.3.6 Access to data and required metadata
This data has been made accessible via the Zenodo data repository; it can be
accessed via the DOI _10.5281/zenodo.1406296_.
Metadata concerning the laboratory rig dimensions and information on the
sensors was provided. Detailed technical information on the control algorithm
which operated the flow control device was commercially sensitive and was not
provided.
The data will primarily be of interest to anybody wishing to replicate results
presented in published papers; there is unlikely to be a significant amount of
re-use as the data is very context specific. The total amount of data shared
was 500 MB; the measured data was compressed on Zenodo, reducing the download
to 80 MB.
The data was made available on a Creative Commons Attribution-ShareAlike
licence ( _https://creativecommons.org/licenses/by-sa/4.0/_ ) .
At the time of writing Zenodo listed 48 unique views of the data and 56 unique
downloads.
### 3.3.4 Coimbra/Veolia pilot and demonstration testing
**3.3.4.1 Purpose**
To test the CENTAUR flow control device hardware and the control algorithm.
#### 3.3.4.2 Relation to project objectives
Following virtual and laboratory testing, the flow control device and control
algorithm was tested in the Coimbra sewer network in Portugal and then in a
demonstration site in Toulouse managed by Veolia.
#### 3.3.4.3 Timescale
From 2016 to September 2018.
##### 3.3.4.4 Types and formats
Observational data from the installed pressure transducers and the flow
control device status. The data was stored uncompressed and unencrypted in
ASCII and/or spreadsheet formats.
##### 3.3.4.5 Methodologies and standards
There are no relevant standards, however the data was collected by calibrated
sensors and checked for consistency before being accepted.
##### 3.3.4.6 Access to data and required metadata
This data was not made accessible as the associated metadata required to make
the data re-usable included details of sewerage networks, which is the
confidential property of the water companies which own/manage the sewers. This
data can also be used to identify the flood risk of individual properties.
This data was retained securely by the partners that collected and used it
(EMS, UoC and Veolia). It was shared via password protected files and via the
password protected project Google site folder. The performance data from the
demonstration site was also commercially sensitive as it can be used to
develop the commercial business case for the deployment of CENTAUR. The key
metadata included the locations of the data collection, information on the
surrounding drainage network, the sensor specifications and calibration
details. This information was stored alongside the data by UoC and Veolia.
### 3.3.5 Flow Control Device design data
**3.3.5.1 Purpose**
Design information for the developed flow control device.
#### 3.3.5.2 Relation to project objectives
The flow control device was a key part of the CENTAUR system, allowing flows
in the drainage network to be controlled.
#### 3.3.5.3 Timescale
The design developed between the start of the project and the finalisation of
the design for the demonstration site, i.e. September 2015 to December 2017.
#### 3.3.5.4 Types and formats
The data consisted of drawings, written specifications and tables showing the
calculated flow rates under different conditions. These were archived in pdf
format.
#### 3.3.5.5 Methodologies and standards
N/A
##### 3.3.5.6 Access to data and required metadata
This data was not made accessible as the design is a key part of the CENTAUR
IP and know-how. It was shared via password protected files and via the
password protected project Google site folder with the partners that required
technical information on the FCD (Veolia, Aguas de Coimbra, USFD, EMS). There
was no requirement for metadata beyond what was already stated within the
design documents.
### 3.3.6 LMCS design data
**3.3.6.1 Purpose**
Design information for the developed Local Monitoring and Control System
(LMCS).
#### 3.3.6.2 Relation to project objectives
The LMCS was a key part of the CENTAUR system, allowing monitoring of the
water levels, processing of data and communication of control actions to the
FCD.
#### 3.3.6.3 Timescale
The design of the LMCS developed throughout the project. The CE and ATEX
certification was completed in August 2018.
#### 3.3.6.4 Types and formats
The data consisted of circuit diagrams, code and written specifications. These
were archived in a pdf format.
#### 3.3.6.5 Methodologies and Standards
N/A
**3.3.6.6 Access to data and required metadata**
The data was not made publicly available, as the design was a key part of
EMS’s commercially valuable intellectual property and know-how. Any data
required to be shared among partners was shared via password protected files.
There was no requirement for metadata beyond that stated within the design
documents.
### 3.3.7 Site selection methodology and results
**3.3.7.1 Purpose**
Developing a methodology to select optimum sites for the deployment of
CENTAUR.
#### 3.3.7.2 Relation to project objectives
In order to efficiently market CENTAUR, a methodology to select sites from
commonly available catchment and drainage network data was required.
##### 3.3.7.3 Timescale
This part of Task 3.4 commenced early and developed throughout the project
between October 2016 and April 2018.
##### 3.3.7.4 Types and formats
The data output from the methodology scored/ranked the suitability of
different parts of the drainage network for installation of a CENTAUR system
and was in ASCII format.
The methodology is implemented as a Java-based software tool.
##### 3.3.7.5 Methodologies and standards
There are no relevant standards for the output data. The software tool was
version controlled through a GitHub repository.
##### 3.3.7.6 Access to data and required metadata
The output data was not made accessible as the associated metadata required to
make the data re-usable included details of sewerage networks, which is the
confidential property of the water companies which own the sewers. This data
was retained securely by EAWAG. It was shared via the password protected
project Google site folder with UoC, USFD and EMS, who required access to
complete some of their tasks.
The software tool is openly accessible through a GitHub repository
(_https://github.com/ldesousa/centaur.loc_); this repository includes the
relevant metadata to allow the tool to be run (i.e. instructions). The tool
can be utilised by other researchers and practitioners to systematically
investigate potential in-sewer storage.
# 4 Legal and Ethical Compliance
At all times the partners complied with national legal requirements as regards
the protection of personal data. The co-ordinating institution has a rigorous
policy on the collection and storage of personal data
(_http://www.sheffield.ac.uk/library/rdm/expectations_). This was adhered to
by all partners. An assessment at the start of the project by the Project
Co-ordinator found that no personal data was planned to be collected in this
project. No partners generated personal data during the project.
# 5 Long Term Storage and Archiving
The co-ordinator provided electronic storage facilities for open access data
and its metadata created by any partner during the project. Open access data
was uploaded to the Zenodo data archive. Any open access software tool
produced was stored on GitHub.
The co-ordinator did not provide long term storage for any personal data, or
data that is required to protect IPR and commercially confidential
information.
At the end of the project, the co-ordinator archived any files and data (not
containing personal data or commercially confidential information) on the
shared project Google drive and made this available to all the partners.
All peer-reviewed scientific publications relating to the results of CENTAUR
were openly accessible. The partner producing any publication was responsible
for storing these publications in an enduring repository which is compatible
with OpenAIRE (it can be institutional, subject-based or centralised) as soon
as possible, and at the latest on publication. Such publications were listed
and linked on OpenAIRE (at
_https://www.openaire.eu/search/project?projectId=corda__h2020::a468749db757b4bb290b04b284706d8a_),
and links for access were also provided on the project website.
# 6 Data security
Data was stored securely, to ensure its integrity and also to ensure
compliance with personal data protection regulations, IPR protection and
commercial confidentiality.
Devices that contained data were password protected and securely stored when
not in use. Data sets available online were in a password protected folder,
such as the project’s Google Drive.
Data that was open access was not password protected and was made available
via the open access data repository Zenodo.
# 7 Summary
The CENTAUR project endeavoured to make the data produced open access
following the H2020 guidelines. The partners took into account any constraints
on data availability described in the Consortium Agreement and any national
legal requirements on the protection of data.
The project beneficiaries ensured that sufficient resources were made
available from the project funds to ensure that all the data sets that are
uploaded onto the open access repository Zenodo are organised, catalogued and
practically free of error, and that sufficient metadata was provided so that a
third party can use the data.
The partners ensured all peer-reviewed scientific publications relating to the
results of CENTAUR were available through an open access route and were listed
on the CENTAUR page of the OpenAIRE portal.
The co-ordinator collated a list of all data collected during the project and
required partners to declare whether data was open access or restricted, in
line with the policy outlined in the Data Management Plan. Access to open data
was unrestricted, apart from where an embargo period was deemed necessary to
allow academic publications to be finalised. Appendix A of the Data Management
Plan lists all completed data sets and their availability. The co-ordinator
ensured that all open access data produced during the project was
appropriately archived and deposited in an enduring open access repository.
The Data Management Plan has been reviewed periodically at each General
Assembly and contains a record of the data sets collected and the status of
each data set as regards its availability.
# Data summary
Research data to be generated or collected and processed by the project are
described in Table 1. In the table, we identify two categories of research
data:
* **Open Research Data** – any form of non-confidential data needed to validate the results presented in scientific publications resulting from project research activities in Open Access Journals and Non-Confidential products of research (including but not limited to designs, code, etc.) created and/or used in the framework of the project, where “Non-Confidential” means that such data can be made (or is already) publicly available.
* **Restricted Research Data** – any form of confidential data and products of research (including but not limited to datasets, designs, code, etc.) created and/or used in the framework of the project, which present a high innovation level and potential for commercialization. For this category the Consortium will consider either keeping the data restricted to project participants for internal use, or applying for a patent in order to exploit the data commercially (in the latter case, appropriate IPR protection measures, e.g. an NDA, will be taken for data sharing outside the consortium).
This table will be constantly updated during the project, especially regarding
Open Research Data.
## Table 1 Dataset Summary
<table>
<tr>
<th>**No**</th>
<th>**Dataset name**</th>
<th>**Category**</th>
<th>**Related WP(s)**</th>
<th>**Partners involved in generation/processing**</th>
<th>**Dataset collection and publication date (for open datasets only)**</th>
</tr>
<tr>
<td>DS1</td>
<td>Test matrix</td>
<td>Restricted</td>
<td>WP1</td>
<td>KhAI, Ivchenko</td>
<td>30/10/2018</td>
</tr>
<tr>
<td>DS2</td>
<td>Test vehicle concept</td>
<td>Restricted</td>
<td>WP1</td>
<td>Ivchenko, KhAI</td>
<td>---</td>
</tr>
<tr>
<td>DS3</td>
<td>Test vehicle and test bench design</td>
<td>Restricted</td>
<td>WP2</td>
<td>Ivchenko, Motor Sich</td>
<td>---</td>
</tr>
<tr>
<td>DS4</td>
<td>Data on previous research on multiphase flow characteristics and heat transfer phenomena in the bearing chamber</td>
<td>Open</td>
<td>WP3</td>
<td>KhAI</td>
<td>14-07-2018 / 07/12/2018 (see all details in the Annex 1)</td>
</tr>
<tr>
<td>DS5</td>
<td>Two-phase modelling results</td>
<td>Open/Restricted</td>
<td>WP3</td>
<td>KhAI</td>
<td>---</td>
</tr>
<tr>
<td>DS6</td>
<td>Test data</td>
<td>Open/Restricted</td>
<td>WP4</td>
<td>Ivchenko</td>
<td>---</td>
</tr>
</table>
The category for each dataset specified in Table 1 was discussed by the
partners and agreed with the Topic Manager. Where the category is
“Open/Restricted”, the decision on providing open access to the dataset or
part of it will be made in the course of the project on a case-by-case basis,
subject to the consent of all partners and the Topic Manager. All updates to
the categories will be specified in further versions of the DMP.
Table 2 presents the detailed description of project data, purpose of their
collection/generation, relation to the objectives of the project, size, types
and formats, origin, potential re-use of existing data and data utility, which
means to whom these data might be useful.
## Table 2 Research data description
<table>
<tr>
<th>**No**</th>
<th>**Description**</th>
<th>**Purpose and relation to the project objectives**</th>
<th>**Origin**</th>
<th>**Format**</th>
<th>**Expected Size**</th>
<th>**Tools for accessing and/or processing**</th>
<th>**Re-use of existing data**</th>
<th>**Data utility**</th>
</tr>
<tr>
<td>DS1</td>
<td>Test matrix</td>
<td>To cover representative conditions of engine operation and generate sufficient data in order to understand the heat transfer phenomena.</td>
<td>Typical engine running conditions supplied by the Topic Manager</td>
<td>docx, xls, pdf</td>
<td>Several MB</td>
<td>Word, Excel, Adobe Reader</td>
<td>Base for experimental investigation of fluid flows and heat transfer phenomena in the bearing chamber.</td>
<td>Participants of AMBEC project</td>
</tr>
<tr>
<td>DS2</td>
<td>Test vehicle concept</td>
<td>To define the test rig of the bearing chamber and its associated systems, which allow to capture the heat transfer in the bearing chamber as a function of the variation of the key parameters</td>
<td>Geometry supplied by the Topic Manager.</td>
<td>docx, dwg, pdf</td>
<td>Several MB</td>
<td>Word, AutoCAD, Adobe Reader</td>
<td>Base for designing of the test rig</td>
<td>Participants of AMBEC project</td>
</tr>
<tr>
<td>DS3</td>
<td>Test vehicle and test bench design</td>
<td>To design the test vehicle and test rig systems which enable the integration of a representative bearing chamber in a test rig assembly, including the systems capable of conducting the variation of the test parameters defined in the test matrix</td>
<td>The results of DS2 processing</td>
<td>docx, dwg, pdf</td>
<td>Several GB</td>
<td>Word, AutoCAD, Adobe Reader</td>
<td>Base for manufacturing of test vehicle and test rig systems</td>
<td>Participants of AMBEC project</td>
</tr>
<tr>
<td>DS4</td>
<td>Data on previous research on multiphase flow characteristics and heat transfer phenomena in the bearing chamber</td>
<td>Analysis of the current state-of-the-art in the field of investigations of multiphase flow characteristics and heat transfer phenomena in the bearing chamber</td>
<td>Research articles in relevant journals, conference proceedings, summary reports of research projects</td>
<td>pdf, txt</td>
<td>Several GB</td>
<td>Web browser, Text editor, Adobe Reader</td>
<td>Understanding which methodologies are used for multiphase flow modelling. Select best practices for AMBEC project implementation</td>
<td>Participants of AMBEC project; researchers at universities and research centres working in the field of thermodynamics and heat transfer</td>
</tr>
<tr>
<td>DS5</td>
<td>Two-phase modelling results</td>
<td>Development of methodology for calculation of fluid flow and heat transfer coefficient distribution in different zones of the bearing chamber depending on influence of key parameters.</td>
<td>Geometry of the bearing chamber, key parameters</td>
<td>docx, xls, pdf, cas, data</td>
<td>Several GB</td>
<td>Word, Excel, Adobe Reader, ANSYS</td>
<td>A background for improvement of approaches for simulation of fluid flow and heat transfer in the bearing chamber based on the results of experimental investigation</td>
<td>Participants of AMBEC project; researchers at universities and research centres working in the field of thermodynamics and heat transfer</td>
</tr>
<tr>
<td>DS6</td>
<td>Test data</td>
<td>To generate sufficient data in order to understand the heat transfer phenomena in the bearing chamber.</td>
<td>Test matrix, test vehicle and test rig</td>
<td>docx, xls, pdf</td>
<td>Several MB</td>
<td>Word, Excel, Adobe Reader</td>
<td>Base for refinement of multiphase flows’ simulation methods</td>
<td>Participants of AMBEC project; researchers at universities and research centres working in the field of thermodynamics and heat transfer</td>
</tr>
</table>
# FAIR data
## Making data findable
## Metadata
Metadata is the data which enables others to identify and find open research
data in a repository. Proper and full metadata will allow other researchers to
determine the usefulness of specific datasets for their needs and, if so, to
reuse the data for their research. Data necessary for the validation and
support of scientific publications will be made findable through the Zenodo
research data repository (_https://zenodo.org_). In Zenodo, all metadata is
openly available under a CC0 license, and all open content is openly
accessible through open APIs. According to Zenodo principles, every published
record on Zenodo is assigned a DOI (Digital Object Identifier). Zenodo's
metadata is compliant with DataCite's Metadata Schema minimum and recommended
terms, with a few additional enrichments. The metadata of each record is sent
to DataCite servers during DOI registration and indexed there.
According to the requirements of Grant Agreement Article 29.2, the
bibliographic data will include:
* the terms “Clean Sky 2 Joint Undertaking”, “European Union (EU)” and “Horizon 2020”;
* the name of the action, acronym and grant agreement number;
* the publication date, the length of the embargo period if applicable, and a persistent identifier.
The datasets to be placed in a repository will be supplemented with
information on the methodology used to collect the data, analytical and
procedural information, definitions of variables, units of measurement, any
assumptions made, the format and file type of the data, and the software used
to collect and/or process the data. If a dataset requires any other specific
documentation to enable its reuse, it will be provided either in a file
header or in a ‘readme’ text file.
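As an illustration of how such a record could be created, the sketch below uses Zenodo's public REST API to open a new deposition with DataCite-style metadata; the token, title, author and licence identifier are placeholders, and field names should be verified against Zenodo's current API documentation before use:

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder

deposit = {
    "metadata": {
        "title": "AMBEC DS5: Two-phase modelling results",  # illustrative
        "upload_type": "dataset",
        "description": "Clean Sky 2 Joint Undertaking, European Union (EU), "
                       "Horizon 2020; AMBEC, grant agreement No 785493.",
        "creators": [{"name": "Surname, Given", "affiliation": "KhAI"}],
        "keywords": ["two-phase modelling", "fluid flow",
                     "heat transfer coefficient distribution",
                     "bearing chamber"],
        "access_right": "open",
        "license": "cc-by-nc-sa-4.0",  # must match Zenodo's licence vocabulary
    }
}

resp = requests.post(ZENODO_API, params={"access_token": TOKEN}, json=deposit)
resp.raise_for_status()
print(resp.json()["id"])  # deposition id; the DOI is assigned on publish
```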
## Search keywords
Keywords will be indicated for each entry in the repository to feed search
queries and optimize possibilities for re-use. Example keywords include:
DS5: two-phase modelling, fluid flow, heat transfer coefficient
distribution, bearing chamber, etc.
## Naming conventions and versions
All files in datasets placed in the repositories will be named using a
convention containing the project name, dataset number, dataset name, date and
version number:
**AMBEC_DSX_Dataset name_xxxx.yy.zz_vX.ext**
_(where .ext is a generic extension)_
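A minimal sketch of a helper enforcing this convention (assuming the date component xxxx.yy.zz means year.month.day, which the plan does not state explicitly):

```python
from datetime import date

def ambec_filename(ds_no: int, ds_name: str, version: int,
                   when: date, ext: str) -> str:
    """Compose AMBEC_DSX_Dataset name_xxxx.yy.zz_vX.ext"""
    return f"AMBEC_DS{ds_no}_{ds_name}_{when:%Y.%m.%d}_v{version}.{ext}"

# e.g. ambec_filename(5, "Two-phase modelling results", 1, date(2019, 3, 15), "xlsx")
# -> 'AMBEC_DS5_Two-phase modelling results_2019.03.15_v1.xlsx'
```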
## Making data openly accessible
**Restricted Research Datasets** will be accessible to consortium partners and
the Topic Manager. Such data will first of all be stored on the PCs of the
project participants which generate and/or collect the data, or on their
institutional secure servers. Internal access to the data will be provided via
the SAFRAN Extranet Portal WeShare or a secure ftp server in the case of large
datasets (to be identified later). Zenodo secure storage, which provides the
possibility to house closed and restricted content so that artefacts can be
captured and stored safely, will also be considered.
**Open access** will be provided to non-confidential project outputs. First of
all, scientific articles will be published in Open Access journals, adhering
to one of the following open access routes:
* Self-archiving (“green”): the final peer-reviewed manuscript will be deposited in the ZENODO repository. Open access to the publication will be ensured within at most 6 months.
* Open access publishing (“gold”): articles to be published in open access journals, or in hybrid journals that both sell subscriptions and offer the option of making individual articles openly accessible.
The copyright to research publications will be retained by the authors, and
adequate licences will be granted to publishers.
At the same time, the **open research data** needed to validate the results
presented in such publications will be deposited in the Zenodo repository,
making it possible for third parties to access, mine, exploit, reproduce and
disseminate these data. Where required, information about the tools and
instruments necessary for validating the results will also be provided. The
Open Access procedures set out in the Grant Agreement and described in the
Guidelines will be followed.
Most of the research data will be produced in common electronic
document/data/image formats (.docx, .pdf, .jpg, .eps, etc.) that can be
accessed via commonly used methods and open software. For CFD modelling,
.agdb, .wbpj, .iges, .csdoc and .smdb formats will be used for geometry and
meshing, and .cas and .data for solutions and results.
## Making data interoperable
To make AMBEC open research data interoperable, i.e. to allow data exchange
and re-use between researchers, institutions, organisations, countries, etc.,
standard formats, compliant as far as possible with available (open) software
applications, will be applied. In particular, re-combinations with different
datasets from different origins will be facilitated.
Distinct and standard terminology will be used in all datasets and in
descriptive metadata fields to allow accurate and quick indexing and retrieval
of relevant data. Appropriate keywords (see Section 2.1) will be used for
indexing and subject headings of data and metadata. The keywords will be
updated in the course of project implementation to ensure that the most recent
and adequate terminology is applied and thus to maintain interoperability.
This will also apply to metadata in Zenodo, which uses a formal, accessible,
shared, and broadly applicable language for knowledge representation.
## Increase data re-use
### Data licensing
AMBEC project will use one or several of the main Creative Commons licenses to
protect the ownership of datasets or their parts (see Table 1). Preliminarily,
preference will be given to the Attribution-NonCommercial-ShareAlike 4.0
International license (CC BY-NC-SA 4.0).
The decision regarding the appropriate licence will be made by the consortium
at the same time as the decision on providing open access to a dataset or a
specific part of it.
**_Date of data release_**
All open research data will be made available through the Zenodo repository
immediately after the consortium decision to provide open access. However, an
embargo period may be applied in the case of data associated with a research
publication for which “green” open access is selected. The AMBEC team will
respect the EC recommendation of a maximum embargo period of 6 months.
**_Re-use by third parties_ **
Re-use of restricted research data (see Table 1) will be limited to project
partners and Topic Manager and is regulated by AMBEC Consortium Agreement and
CS2 JU Implementation Agreement.
Re-use by third parties of open research data to be deposited to Zenodo
repository will be subjected to standard restrictions of applied license,
e.g.:
* Attribution: requires users to give appropriate credit, provide a link to the license and indicate if changes were made.
* ShareAlike: requires all derivative works based on the original data to use the same licence as the original.
* NonCommercial: prohibits the use of the dataset for commercial purposes.
Open research data deposited to Zenodo repository will remain re-usable
throughout the lifetime of the repository.
### Data quality
Each partner will be responsible for the quality of the data it collects
and/or produces and will apply its regular procedures and protocols for data
quality assurance and control.
# Allocation of resources
## Costs for making data FAIR
To respect the requirements of GA article 29.2, AMBEC partners will publish at
least 2 scientific articles to disseminate key project results in peer-
reviewed journals, which provide “green” or “gold” open access. Average open
access fee for AMBEC-relevant scientific journals (e.g. International Journal
of Heat and Mass Transfer (ISSN 0017-9310), Aerospace Science and Technology
(ISSN 1270-9638), Journal of Engineering for Gas Turbines and Power (ISSN
0742-4795), etc.) is about 2,000 Euro.
Fees associated with open access scientific publications will be the
responsibility of the authors’ organizations and will be covered by AMBEC
project costs. In the case of multiple authors from different partner
organizations, open access fee sharing will be discussed and agreed on a
case-by-case basis.
Machine-readable electronic copies of project publications as well as
bibliographic metadata and associated research data, needed to validate the
results presented in scientific publications, will be deposited to Zenodo
research data repository, which is free of charge.
## Responsibility for data management
Each partner is solely responsible for the management of the data it produces,
including data capture, data quality, metadata production, data storage and
backup, etc. As for open research data (see Table 1), the AMBEC project
technical leader, Dr. Taras Mykhailenko, will be responsible for data
management and deposition to the Zenodo repository.
## Long term data preservation
Issues of long-term preservation of AMBEC research data after project
completion (including data selection, data volume, preservation duration,
preservation repository(ies) and associated costs) will be studied during
M30-M36 and appropriate consortium decision(s) will be taken. Relevant
information will be presented in the final DMP.
# Data security
## Data storage and backup
Employees of AMBEC partner organizations who are involved in research
activities are responsible for the storage and regular backup of the data they
produce and/or process. For this purpose, regular practices and company
regulations will be applied.
Whatever the case, the following principles will be followed by all AMBEC
partners to ensure data security:
* store data in at least two separate storage media (e.g. hard drive and DVD) to avoid data loss;
* check data integrity periodically (one possible approach is sketched after this list);
* limit the use of USB flash drives;
* store data in proprietary formats which are widely used.
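One possible implementation of the periodic integrity check above (the paths and manifest name are placeholders): it records SHA-256 checksums for all stored files and later reports any file whose checksum has changed.

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_checksums(data_dir: Path, manifest: Path) -> None:
    """Write a manifest of checksums for every file under data_dir."""
    sums = {str(p): sha256(p) for p in data_dir.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(sums, indent=2))

def verify_checksums(manifest: Path) -> list:
    """Return the files whose current checksum no longer matches."""
    sums = json.loads(manifest.read_text())
    return [f for f, s in sums.items() if sha256(Path(f)) != s]

# e.g. record_checksums(Path("AMBEC_data"), Path("checksums.json"))
```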
Datasets will be stored in the AMBEC Private Collaborative Area of the SAFRAN
Extranet Portal WeShare, which provides secure co-production, storage,
organization, sharing and consultation of information.
Specifically, the “Exchange Documents” library will be used to store,
organize, sync, and share documents with project participants. The WeShare
library tool provides opportunities for co-authoring, versioning, and
check-out, allowing work on documents in parallel. Security and preservation
of data uploaded by partners to WeShare will be provided according to the
regulations and usual practices of SAFRAN. The WeShare portal is available
from 8 a.m. to 7 p.m. French time, Monday to Friday, on business days
excluding French holidays.
Two persons per partner organization will have access to the AMBEC Private
Collaborative Area. For this purpose, they will use a personal login and
password. User activity in WeShare will be tracked.
Open research data deposited to the Zenodo repository will be stored and
backed up in line with the repository’s policy, which includes multiple file
replicas in a distributed file system, backed up to tape on a nightly basis.
## Data transfer
Partners will communicate by email, whereas research data exchange will be
performed exclusively via the WeShare portal, which provides the option of
notifying partners by email about new data depositions.
In future, if necessary, a reliable and secure ftp server or Zenodo secure
storage will be used for the transfer of big data resulting from numerical
simulations and real experiments.
# Ethical aspects
No ethical issues have been identified.
# Other issues
No other issues to report at this time.
# 1.2. Re-using existing data
The partners in the HybridHeart consortium have previously gained experience
in their respected fields of research and therefore will together lead to
successful development of a soft actuated biocompatible heart. We have
explored the potential of reusable data but at this stage, we see no potential
of re-using the existing data, however this will be further discussed along
the course of the project.
# 1.3. The expected size of the data
The expected size of the data generated in WP 2 is 200 GB, whereas WP 3
expects to generate approximately 400 MB of data. The other WPs and partners
cannot yet predict the size of their data. This information will be updated in
the subsequent version of the DMP.
# 1.4. The data utility
The HybridHeart Proof-of-Principle established in this project will set a
baseline for the feasibility of novel artificial motile organ development
based on soft robotics technology combined with TE and wireless energy
transfer to follow. As such, this project will change the future of organ
replacement, using the latest advancement and new applications of soft
robotics and TE technologies, which will cause a foundational shift in
transplantation research and medicine with unlimited availability of safe,
biocompatible and off-the-shelf solutions for all patients.
The consortium envisions that the data will be useful for the following
stakeholders:
<table>
<tr>
<th>
**Target audience**
</th>
<th>
**Essential stakeholders**
</th> </tr>
<tr>
<td>
Medical community
</td>
<td>
Cardiologists, cardiac surgeons, researchers in the field of TE, professional
organizations such as the European Society of Cardiology, TERMIS.
</td> </tr>
<tr>
<td>
Soft robotics community
</td>
<td>
Researchers and scientific organizations, such as IEEE Robotics & Automation
Society, and Technical Committees on Soft Robotics.
</td> </tr>
<tr>
<td>
Medical device companies
</td>
<td>
Companies interested in novel types of artificial organs, such as Medtronic,
Heartware, Syncardia, Carmat, Thoratex, St Jude Medical.
</td> </tr>
<tr>
<td>
Patient advocacy groups
</td>
<td>
Patient organizations such as the European Heart Network and national
organizations.
</td> </tr>
<tr>
<td>
Regulatory bodies
</td>
<td>
Notified bodies.
</td> </tr>
<tr>
<td>
Healthcare payers
</td>
<td>
Health insurance companies at national level.
</td> </tr>
<tr>
<td>
General public
</td>
<td>
Governments, standardization institutes (OECD, ISO), press.
</td> </tr> </table>
# 2\. Findable, Accessible, Interoperable, and Reusable (FAIR) Data
### 2.1. Making data findable, including provisions for metadata
## Discoverability of the Data (metadata provision)
A digital object identifier (DOI) will be generated for all publications,
related documents (e.g. study protocol, data transfer agreements, data access
policies) as well as the datasets. During the course of the project, all data
will be recorded in lab journals and stored digitally and locally on the
secure internal drive of the consortium. We will look at www.fairsharing.org
and https://bioportal.bioontology.org/ for existing databases, standards,
metadata, and ontologies that can be used for the types of data that will be
generated in the project. If no metadata provision is available, each partner
will create a codebook or file explaining the variable names, calculations
used to analyse the data, exact scripts for calculations used in the analysis,
parameter settings, detailed methodology, etc. This codebook will be linked to
the generated data.
## Naming and keywords of the data
The data will be named as follows:
WP number/Institution name/Task or deliverable number/Subtask
number/filename/year
Example:
WP6/AMC/D6.3/D6.3.1/Datamanagementplan.2018.v1.0
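A small sketch of how such structured names could be validated and split into components; the pattern is inferred from the single example above and is illustrative only:

```python
import re

# WP number / institution / deliverable / subtask / filename.year.version
NAME_RE = re.compile(
    r"^WP(?P<wp>\d+)/(?P<inst>[A-Za-z]+)/D(?P<deliv>\d+\.\d+)/"
    r"D(?P<subtask>\d+\.\d+\.\d+)/(?P<file>[\w-]+)\.(?P<year>\d{4})\.v(?P<ver>[\d.]+)$"
)

def parse_name(name: str) -> dict:
    """Split a HybridHeart-style data name into its components."""
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"name does not follow the convention: {name}")
    return m.groupdict()

# e.g. parse_name("WP6/AMC/D6.3/D6.3.1/Datamanagementplan.2018.v1.0")
```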
Keywords will also be available to ensure data discoverability, following
existing standards such as the Medical Subject Headings (MeSH) terms.
### 2.2. Making data openly accessible
The data that will be generated and collected for the development and
validation of the artificial heart will be handled in accordance with the
General Data Protection Regulation (Regulation (EU) 2016/679). All partners
will own the full intellectual property rights (IPR) relating to their own
proprietary technologies. Access to existing IPR between the partners and the
terms and conditions regarding ownership of IP generated in the project will
be agreed upon in a prearranged consortium agreement (CA). IP generated within
the project shall be disclosed in confidence to the other partners in the
consortium.
When a partner wishes to commercially exploit knowledge of which (part of) the
IPR is in the hands of another consortium partner, the exploiting partner will
pay royalties or another appropriate form of remuneration. After protection of
findings, results will be disseminated via (high-impact) peer-reviewed
articles following the ‘green’ or ‘gold’ standard according to the EC Open
Access policy. To ensure accessibility of the publications we will also
publish the author version on partners’ institutions website and/or the
HybridHeart website.
Each partner/member of the consortium will make their own decision on when to
open their datasets, but the data will be in the public domain at the latest
when a related publication in a peer-reviewed journal is available. Any
restrictions on open access will be voluntary. There are no specific software
tools needed to access the data (standard file formats readable in open source
software will be used).
### 2.3. Making data interoperable
As stated in section 2.1, to enhance data interoperability we will search for
existing metadata, standards, and ontologies at _www.fairsharing.org_ and
_https://bioportal.bioontology.org/_. This information will be updated in the
next version of the DMP. During the course of the project, the consortium will
discuss the potential of placing the open access publications and the datasets
(including the associated metadata) in data repositories such as
_www.zenodo.org_ and _www.re3data.org_. However, this will not change the
obligations to protect the results, the confidentiality obligations or the
security obligations.
### 2.4. Increase data re-use (through clarifying licenses)
We will ensure the accessibility of the published articles by either
publishing in open access journals or making the author version available on
our website or partners’ websites. We will look into using available
repositories such as _www.zenodo.org_ to increase the discoverability of the
data. During the next general assembly meeting, we will discuss the terms and
the extent of reusing data generated in this project. It is possible that
access to certain datasets will be restricted and can only be granted upon
submission of a research proposal and subsequent approval by the consortium.
The data produced can be of interest to other researchers in the medical, soft
robotics and tissue engineering communities as well as to medical device
companies.
Each consortium member will be responsible for the quality assurance of how
his or her data will be reused. Prior to submitting deliverables or
publications, we will perform an internal review process within the consortium
to ensure the quality of the data/publications.
# 3\. Allocation of resources
The individual beneficiaries and partner organizations will be responsible for
the data management of the HybridHeart project. Data generated will be stored
locally in the internal server of the partner that generated and owned the
data. Long term preservation: collected data will be stored for up to 10 years
at each of partner organization. Costs for data storage and preservation will
be estimated at a later stage by using the “Data management costing tool”
provided by the UK Data Service
(http://www.dataarchive.ac.uk/media/247429/costingtool.pdf).
# 4\. Data security
All generated data will be stored digitally and locally on the internal server
of each partner organization. This local storage is subject to the EU and
national rules, which will be followed to protect the data. According to
standard protocols, the data will be regularly backed up.
# 5\. Ethical aspects
The HybridHeart project will comply with ethical principles and if applicable
international, EU and national law (in particular, EU Directive 2004/23/EC).
The consortium confirms that it will follow these ethical principles
regardless where the research is performed.
The consortium ensures to:
* Keep track of the material imported/exported between EU members of states and associated countries.
* Obtain the necessary accreditation/designation/authorization/licensing for using animals in research.
# Executive Summary
The overall objective of this deliverable is to provide an initial Data
Management Plan that describes what data will be generated during the project
execution, including formats and structure, and how the data (including
metadata) will be collected, stored, and made accessible. This deliverable is
mandatory since AfriAlliance participates in the European Commission's Open
Research Data Pilot initiative. The deliverable follows
the guidelines on FAIR Data Management in Horizon 2020, which prescribes the
inclusion of specific elements in the plan, including: 1) a summary of the
data being collected; 2) methods for making sure data are FAIR (findable,
accessible, interoperable, re-usable); 3) resources to be allocated; 4)
security of data, as well as any other aspects. The document describes the
_initial_ plans for Data Management and will be revised as soon as additional
elements regarding Data Management have been identified in the course of the
implementation of the AfriAlliance project.
# AfriAlliance Data Summary
AfriAlliance is a Coordination and Support Action project which nevertheless
consists of several distinct research activities to achieve its objectives,
such as studies into the motivations to participate in Working Groups in an
African context (WP1), specific short-term social innovation needs (WP2), the
barriers for online knowledge sharing (WP3) and an inventory of current
monitoring and forecasting efforts (WP4).
As a Coordination and Support Action, one of the main objectives of the
project is to share as broadly as possible any results generated by the
project with the broad water sector community, in particular with experts and
organizations active in the field of water and climate. This applies to both
data and metadata.
The Data Management Plan deliverable complements the previously submitted
Project Information Strategy deliverable, with the understanding that data
generated during the project are a subset of the overall information that will
be managed during the project (ref. D6.3, page 11). In particular, the scope
of the Data Management Plan concerns a subset of information mentioned in
Table 1 of Deliverable 6.3, an extract of which is repeated below:
**Table 1 AfriAlliance Information (Data) (extract from Deliverable D6.3)**
<table>
<tr>
<th>
Type of
Information
</th>
<th>
Owner
</th>
<th>
Access
Rights
</th>
<th>
Repository
</th>
<th>
Format
Used
</th>
<th>
Standards
Used
</th>
<th>
Quality
Control
</th>
<th>
Purpose / Use
</th> </tr>
<tr>
<td>
Input Data (e.g. survey information)
</td>
<td>
Task
Leaders
</td>
<td>
Partners
</td>
<td>
ProjectPlace
</td>
<td>
Different
</td>
<td>
Customized format (AA identity)
</td>
<td>
Content and format by WP leaders, with advice from PMT
</td>
<td>
Background data for further elaboration into Task deliverables
</td> </tr>
<tr>
<td>
Output Data (reports, papers, policy notes)
(*)
</td>
<td>
Task
Leaders
</td>
<td>
Open Access
</td>
<td>
ProjectPlace, Website
</td>
<td>
MS
Word, printed copies
</td>
<td>
Customized format (AA identity)
</td>
<td>
Content and format by WP leaders, with advice from PMT
</td>
<td>
AfriAlliance information to be shared within the platform and to the broad
water sector
</td> </tr> </table>
Ethical aspects concerning the plan are covered in the Ethical aspects
deliverables (D7.1 – D7.3).
To comply with the Horizon 2020 Open Research Data Pilot, AfriAlliance will
make available data potentially useful for others as well as all aspects that
are needed to replicate the undertaken research. In this context, the
following types of data can be distinguished (see Table 2).
**Table 2 Summary of AfriAlliance Data**
<table>
<tr>
<th>
Type of data
</th>
<th>
Description
</th>
<th>
AfriAlliance WP/tasks
</th> </tr>
<tr>
<td>
Empirical data
</td>
<td>
The data (set) needed to validate results of scientific efforts.
</td>
<td>
WP1: data from survey of motivations to participate in Working Groups and data
from surveys for the Social Network Analysis
WP2: data from interviews and Focus Group on short-term social innovation
needs
WP3: data from investigation of barriers and obstacles for online knowledge
sharing
WP4: inventory of current monitoring and forecasting efforts
</td> </tr>
<tr>
<td>
Associated metadata
</td>
<td>
The dataset’s creator, title, year of publication, repository, identifier,
etc., based on the ISO 19157 standard.
</td>
<td>
WP1-WP4
Questionnaire, interviews and user-driven metadata entry through the geoportal.
</td> </tr>
<tr>
<td>
Documentation
</td>
<td>
Such as code books, informed consent forms, etc.: these aspects are domain-
dependent and important for understanding the data and combining them with
other data sources.
</td>
<td>
WP1-WP4
Questionnaire, interviews and user-driven metadata entry through geoportal.
</td> </tr>
<tr>
<td>
Methods &
tools
</td>
<td>
(Information about) the software, hardware, tools, syntax queries, machine
configurations – i.e. domain-dependent aspects that are important for using
the data.
</td>
<td>
Data collection instruments
WP1: questionnaire and software to analyse and visualise the relationships
between stakeholders and their level of connectedness (SNA Analysis)
WP2: questionnaire, Focus Group Discussion protocol
WP3: questionnaire, Focus Group Discussion protocol
WP4: search terms and questionnaire, interviews and user-driven metadata entry
and search keywords through the AA geoportal.
</td> </tr> </table>
All generated data will use widely adopted data formats, including but not
limited to:
* Basic Data formats: CSV, XLS, XML
* Aggregated Data / Meta-data: PDF, HTM, MS files
Concerning Monitoring and Forecasting tools (WP4), the project will make
extensive use of existing data and repositories. In fact, the essence of the
data management concerning M&F tools is a more effective / more comprehensive
use of existing data rather than the generation of new (source) data.
Existing data which is going to be used for that purpose stems from many
different sources, especially generated locally in Africa.
# AfriAlliance FAIR Data
AfriAlliance will follow the FAIR approach to data, i.e. data will be managed
in order to make them:
* Findable
* Accessible
* Interoperable
* Reusable
## Making data findable, including provisions for metadata
### Discoverability of Data
Data generated in AfriAlliance will be available (for external use) via the
following resources (ref Table 1):
* AfriAlliance Website
* Akvo RSR (Really Simple Reporting) tool
* Web Catalog Service (WCS)
The Website will include most of the (aggregated and summarised) data
generated during the project, including links to the WCS tool, which uses
existing data.
The Akvo RSR tool will provide overall and summarised information about the
project, including results and impact. The tool will follow the International
Aid Transparency Initiative (IATI) standard for reporting.
The WCS will contain in particular all meta-data information concerning
existing data used by the foreseen improved monitoring and forecasting tool.
### Identifiability of Data
AfriAlliance will make use of repositories assigning persistent IDs to data to
allow easy finding (and citing) of AfriAlliance data.
### Naming Conventions
All AfriAlliance data will be named according to the following naming
conventions (illustrated in the sketch after this list):
* Basic Data: AA WPx <name of data> -<date generated>-version
* Meta Data: AfriAlliance <Descriptive Name of Data>-name generated-version
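As a hedged illustration, the Python sketch below generates names following these conventions; the helper names and the ISO date format are assumptions made for illustration, not a prescribed project tool.

```python
# Illustrative sketch: building file names per the AfriAlliance conventions.
# Helper names and the ISO date format are assumptions for illustration.
from datetime import date

def basic_data_name(wp: int, name: str, version: str, generated: date) -> str:
    """Basic Data: AA WPx <name of data>-<date generated>-version."""
    return f"AA WP{wp} {name}-{generated.isoformat()}-{version}"

def meta_data_name(descriptive_name: str, generated: date, version: str) -> str:
    """Meta Data: AfriAlliance <Descriptive Name of Data>-<date generated>-version."""
    return f"AfriAlliance {descriptive_name}-{generated.isoformat()}-{version}"

print(basic_data_name(1, "SNA-survey-results", "v1", date(2017, 3, 15)))
# -> 'AA WP1 SNA-survey-results-2017-03-15-v1'
```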
### Keywords
Data will be assigned relevant keywords to make them findable (e.g. through
internet browsing). Such keywords may vary depending on the Work Package where
data belong to.
### Versioning
All data (and data sets) will clearly mention the version (indicated both in
the naming and within the information included in the data) as well as contact
information (owner of the generated or aggregated data set).
### Standards
Data, and in particular meta-data, will follow an identified standard for
meta-data creation. Although there are many different standards, the initial
preference of the consortium is to follow ISO 19157 as it is specifically
adopted to ensure quality of geographic information, which is the core of
AfriAlliance data (used by the foreseen WCS). Several ISO standards exist;
ISO 19157 is a recent one, also adopted by the INSPIRE (Infrastructure for
Spatial Information in Europe) Directive and national implementations, and includes
metadata quality control mechanisms.
## Making Data Openly Accessible
### Data Openly Accessible
AfriAlliance will make all data generated by the project available, with the
exception of basic data with ethics constraints which will be kept within the
consortium and only available in ProjectPlace. WP4 data, the WCS and the
geoportal will be freely available with open access to all the metadata and
workflows. It must be noted that the WCS will contain little real hard data
(samples only).
### Modalities for Sharing Data
All data generated will be available in the resources mentioned in 2.1.1. In
particular, data will be made available with the following modalities:
Website: all generated data will have an easily identified section on the
website where most of the data will be posted. The website will also include
reference to IATI standard data, and will therefore also be the main source
for retrieving general project data. Moreover, an easily findable reference
will be provided for accessing the WCS tool.
The WCS tool, being a web-based application, will also exist as a “standalone”
resource (with a clear reference to the AfriAlliance project), designed to get
as many hits as possible via the most common web browsing modalities.
### Methods and tools needed to access data
Apart from widely known access methods (internet search based), it is
important to specifically mention that the WCS software source code will be
made available in an open source repository. The initial selection of the
consortium for this purpose is GitHub.
Search terms and user-driven metadata entry and search key-words will be made
available through the AA WP4 geoportal. Entry search keywords will be rather
simple terms such as, for example, monthly rainfall or country, and other
water- and climate-related searches, available from pre-coded drop-down menus.
### Data repositories
Most of the data generated will be stored in a web-based repository. This
includes the WP4 geoportal, which will contain only metadata, i.e. web-based
information on data sources, data quality, etc.
### Access to Data
No restrictions will apply, apart from the subset of source data (i.e. data
from questionnaires) whose use is restricted according to the Ethics
requirements.
## Making data interoperable
Interoperability of data is very important in AfriAlliance, especially in
relation to the geoportal.
The interoperability principle behind WP4 data is based on the concept of
“volunteered geographic information” (VGI), which is the harnessing of tools
to create, assemble, and disseminate geographic data provided voluntarily by
individuals (Goodchild, 2007). VGI is a special case of a broader phenomenon
known as user-generated content. Common standards and methodologies following
the general principle will be adopted, and will be further specified in
updated revisions of the plan.
## Owners and Access Rights
### Data Licence
Most of the data generated in AfriAlliance will be open source, licenced under
the Creative Commons Attribution License (CC-BY), version 4.0, in order to
make it possible for others to mine, exploit and reproduce the data.
WP4 geoportal WCS will be open source licenced using the GNU General Public
License Version 2 (June 1991) and the portal user guide documentation will be
provided and licensed under the Creative Commons Attribution-NonCommercial 3.0
License. Minor changes can be adopted where required by certain partners’
needs or regulations; those cases will be properly documented.
### Data Re-use
No restrictions will apply to the re-use of data, nor any restriction in
time.
### Third Parties Usage
AfriAlliance will make data publicly available to Third Parties, under the
condition that the source is referenced according to indications provided in
the data.
### Data Quality Assurance
Generally speaking, AfriAlliance will follow the quality assurance guidelines
provided in Deliverable 6.3 (Project Information Management strategy) to
ensure proper quality of data. With particular reference to quality of
metadata, the ISO19157 standard guidelines will be followed.
### Availability in Time
In principle, data will be available indefinitely.
# Allocation of Resources for Data Management
## Data Management Costs
Costs related to generating, storing, and distributing data are taken into
consideration in the respective Work Packages where the data specified in
Table 2 will be collected.
In WP2, data generated from the network analysis, as well as Action Group
results, will be covered by both staff time and other direct costs directly
allocated to those activities.
Dissemination material, which can be considered a particular subset of output
data in a CSA, has a specific budget line allocated to the respective WP
leader.
As regards data managed in WP4, Web Services and associated resources like
dissemination packages, and other production costs, have been allocated a
substantial budget (ref. DoA AfriAlliance for details).
## Data Ownership
Ownership of data is largely determined by Work Package Ownership. A more
specific attribution of ownership is indicated in Table 1 above.
## Long Term Preservation Costs
Long-term preservation costs relate to costs for servers/hosting and time for
updating data formats. Those costs are included in the concerned WP budgets.
# Data Security
Data Security aspects are covered in D7.1-3 (ethics).
# Other
The AfriAlliance Data Management Plan largely follows the guidelines (and
template) recommended by Horizon 2020 in the framework of the Open Data
programme of the European Commission.
In addition, it is worth mentioning that any additional internal guidelines in
terms of Information Management practices and IPR policies that are currently
followed (or will be approved in the future) in the Coordinator’s organization
(UNESCO-IHE) will be integrated, as appropriate, as part of the plan, after
previous discussion and agreement with the consortium members. Equally, any
regulations or policies prevailing in any organization of the consortium, as
well as any additional external practice/policy/standard that becomes relevant
for the plan, will be integrated in further revisions of the plan.
0770_HOBBIT_688227.md
# Data Management Lifecycle
HOBBIT continuously collects datasets (i.e., not limited to specific domains)
as the base for benchmarks. Those datasets are provided by both the project
industrial partners and members of the HOBBIT community.
To **keep the dataset submission process manageable** , we host an instance of
the _CKAN_ open source data portal software, extended with custom metadata
fields for the HOBBIT project. This instance is hosted at
_https://hobbit.ilabt.imec.be/_ . Figure 1 shows a screenshot of this CKAN
instance, in which several datasets are listed. Because the CKAN instance only
stores _metadata_ about the datasets, the datasets themselves need to be
stored elsewhere, such as the HOBBIT FTP storage. Users who want to add a
dataset of their own first need to request to be added to an organization
on the CKAN instance, after which they can add datasets to this organization.
If users have no storage available for their dataset, they can add their
dataset to the HOBBIT FTP server after contacting us. Because of this, storage
requirements for this CKAN instance are limited, which is why no data deletion
strategy is needed.
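For illustration, dataset metadata can be registered on such a CKAN instance programmatically through the CKAN Action API. Below is a minimal sketch using the third-party `ckanapi` client; the organization name, API key and field values are placeholders, not the project's actual configuration.

```python
# Illustrative sketch: registering dataset metadata on a CKAN instance via
# the CKAN Action API, using the third-party ckanapi client. All values are
# placeholders, not the project's actual configuration.
from ckanapi import RemoteCKAN

ckan = RemoteCKAN("https://hobbit.ilabt.imec.be", apikey="YOUR-API-KEY")

ckan.action.package_create(
    name="example-benchmark-dataset",    # URL slug on the CKAN instance
    title="Example Benchmark Dataset",
    notes="Short human-readable description of the dataset.",
    owner_org="example-organization",    # the submitter's CKAN organization
    resources=[{
        # the data itself lives elsewhere, e.g. on the HOBBIT FTP storage
        "url": "ftp://example.org/datasets/example.nt.gz",
        "format": "N-Triples",
    }],
)
```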
Datasets will be kept available on the HOBBIT platform for **at least the
lifetime of the server** , unless they are removed by their owners. After the
project, the HOBBIT platform will be maintained by the HOBBIT Association, and
so will the datasets. **Owners may add or remove** a dataset at any time.
In the previous version of this deliverable, we described a query interface
that was to be set up over the metadata of this CKAN instance. As there was no
need for such a query interface, either inside or outside of the project, and
the setup would be non-trivial, we removed this interface.
**Figure 1: Screenshot of the current CKAN deployment.**
# Data Management Plan
In conformance with the guidelines of the Commission, we will provide the
following information for every dataset submitted to the project. This
information will be obtained either by generating it automatically (e.g., the
identifier) or by asking the dataset provider upon submission.
## Dataset Reference and Name
The datasets submitted will be identified and referenced using a URL. This
URL can then be used to access the dataset (through a dump file, TPF
entrypoint or SPARQL endpoint), and it can also be used as an identifier to
attach metadata.
## Data Set Description
The submitter will be asked to provide a short textual, human-interpretable
description of the dataset, at least in English, and optionally in other
languages as well. Additionally, a machine-interpretable description will also
be provided (see 2.3 Standards and metadata).
## Standards and Metadata Publication
Since we are dealing with Linked Datasets, it makes sense to adhere to a
Semantic Web context for the description of the datasets as well. Therefore,
in line with the application profile for metadata catalogues in the EU,
_DCAT-AP_ , we will use W3C recommended vocabularies such as _DCAT_ and
_Dublin Core_ to provide metadata about each dataset. The metadata that is
currently associated with the datasets includes:
* Title
* URL
* Description
* External Description
* Tags
* License
* Organization
* Visibility
* Source
* Version
* Contact
* Contact Email
* Applicable Benchmark
This metadata is stored in the CKAN instance’s database, and can be viewed on
the dataset overview page, as shown in Figure 2.
**Figure 2: Screenshot of a dataset overview page, with the collected
metadata.**
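To illustrate, the fields above map naturally onto DCAT and Dublin Core terms. A minimal sketch with the rdflib Python library follows; the dataset URL and all literal values are placeholder assumptions, not actual HOBBIT records.

```python
# Illustrative sketch: describing a dataset with DCAT / Dublin Core terms in
# rdflib. The dataset URL and all literal values are placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("https://hobbit.ilabt.imec.be/dataset/example-dataset")

g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Example Dataset")))
g.add((ds, DCTERMS.description, Literal("Short description of the dataset.")))
g.add((ds, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/4.0/")))
g.add((ds, DCTERMS.publisher, Literal("Example Organization")))
g.add((ds, DCAT.keyword, Literal("benchmark")))

print(g.serialize(format="turtle"))  # machine-interpretable description
```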
## Data Sharing
Industrial companies are normally unwilling to make their internal data
available for competitions because this could reduce the competitiveness of
these companies significantly. However, HOBBIT aims to pursue a policy of
making data **open, as much as possible** . Therefore, several mechanisms are
put in place.
As per the original proposal, HOBBIT deploys a standard data management plan
that includes (1) employing **mimicking algorithms** that compute and
reproduce variables that characterize the structure of company-data, (2)
feeding these characteristics into **generators that are able to generate data
similar to real company data** without having to make the real company data
available to the public. The mimicking algorithms are implemented in such a
way that they can be used within companies and simply return parameters that
can be used to feed the generators. This preserves Intellectual Property
Rights (IPR) and circumvents the hurdle of making real industrial data public
by allowing deterministic synthetic data generators to be configured so as to
compute data streams that display the same variables as industry data while
being fully open and available for evaluation without restrictions.
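As a purely illustrative sketch of this idea (not the project's actual mimicking algorithms): a characterization step runs inside the company and exports only a few parameters, and a deterministic generator reproduces a synthetic stream from those parameters alone.

```python
# Illustrative sketch of the mimicking idea, not HOBBIT's actual algorithms:
# private data are reduced to characterizing parameters, and a deterministic
# generator rebuilds a similar synthetic stream from the parameters alone.
import numpy as np

def characterize(private_values: np.ndarray) -> dict:
    """Runs inside the company; only these parameters leave the premises."""
    return {"mean": float(private_values.mean()),
            "std": float(private_values.std()),
            "n": int(private_values.size)}

def generate(params: dict, seed: int = 42) -> np.ndarray:
    """Deterministic generator driven solely by the shared parameters."""
    rng = np.random.default_rng(seed)
    return rng.normal(params["mean"], params["std"], params["n"])

params = characterize(np.array([10.2, 9.8, 10.5, 10.1, 9.9]))
synthetic = generate(params)  # openly shareable benchmark data
```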
Since we provide a mimicked version of the original dataset in our benchmarks,
**open access will be the default behaviour** . However, on a case-by-case
basis, datasets are **protected** (i.e., visible only to specific user groups)
on request of the data owner, and in agreement with the HOBBIT platform
administrators.
## Current Status
The domain name has been changed to _https://hobbit.ilabt.imec.be/_ , due to
internal organization changes in imec. As described in the intermediate data
management plan, all organizations are available on the CKAN instance:
_https://hobbit.ilabt.imec.be/organization_
Each **organization** made its datasets available, either publicly or, for
sensitive data, only within the consortium. The number of datasets has
increased to 25, of which half are RDF datasets. 23 of those datasets
are publicly available under an open license. The server behind this CKAN
instance will remain active for at least one year after the project ends. In
this period, ownership will be transitioned to the HOBBIT Association.
0772_HiFi-ELEMENTS_769935.md
**Publishable executive summary**
The document describes how data are managed in this project. A significant
portion of the project resources, estimated at approximately 50%, is devoted
to data management. The purpose of this document is to verify that the managed
data are treated in a so-called FAIR way, that is, that data are findable,
accessible, interoperable and reusable as much as possible.
This Deliverable document is structured in the following way: the introduction
of this document lists the different categories of data storage used in this
project and, following from that, all possible file types. The following
section, “data summary”, describes the management of data in the context of
the FAIR principles.
As the main goal of this project is to interconnect different simulation tools
and component models with each other, it is strictly required that data in
this project are findable, which is enabled by a standardized naming
convention of signal data. Following up on that, the standardized naming
convention also allows for openly accessible data. The naming convention will
be made public, which is one purpose of this project. Some simulation
component models will also be made openly accessible to the public.
This will allow re-usability beyond and after HIFI-ELEMENTS.
# Purpose of the Document
The document describes how data are managed in this project. Data management
is understood in the sense of which data are produced, collected, stored and
maintained in this project. The purpose of this document is to verify that the
managed data are treated in a so-called FAIR way, that is, that data are
findable, accessible, interoperable and reusable as much as
## Document Structure
This Deliverable document is structured in the following way: The introduction
of this document lists the different categories of data storage used in this
project, and following from that all possible file types. The following
section data summary lists describes the management of data in the context of
the so-called FAIR principles (findable, accessible, interoperable, and re-
useable).
## Deviations from original Description in the Grant Agreement Annex 1 Part A
### Description of work related to deliverable in GA Annex 1 – Part A
The Data Management Plan is a mandatory deliverable for all H2020 projects.
The general definition of the contents of the Data Management Plan reads [1]:
Data Management Plans (DMPs) are a key element of good data management. A DMP
describes the data management life cycle for the data to be collected,
processed and/or generated by a Horizon 2020 project. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), a DMP
should include information on:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access and
* how data will be curated and preserved (including after the end of the project).
A DMP is required for all projects participating in the extended ORD pilot,
unless they opt out of the ORD pilot. However, projects that opt out are still
encouraged to submit a DMP on a voluntary basis.
### Time deviations from original planning in GA Annex 1 – Part A
There are no deviations with respect to timing of this deliverable.
### Content deviations from original plan in GA Annex 1 – Part A
There are no deviations from the Annex 1 – Part A with respect to the content.
# Introduction
The Data Management Plan of HIFI-ELEMENTS first introduces the storage systems
in which data are stored, and subsequently lists the types of data that have
to be managed within HIFI-ELEMENTS. Then, it explains what type of data is
stored where, and comments on the data life cycle are given.
## Storage systems
For the activities within HIFI-ELEMENTS, the following storage systems are
being used:
### Cloud services
#### Projectplace
Project coordination, data exchange between the different work packages at
management level, and the organization of appointments and online meetings are
handled using the web services provided by Projectplace
( _https://www.projectplace.com/_ ). The coordination data and parts of
major technical results (e.g. system diagrams, topology definitions, etc.) are
stored on that share. The manager of the project data is the contractor
UNIRESEARCH.
#### Wordpress
The official project website ( _http://www.hifi-elements.eu/_ ) is
maintained via the cloud-based hosting service WordPress.
#### SYNECT
The tool SYNECT by dSPACE is an essential part of the project and is described
as follows [2]:
SYNECT is a data management and collaboration software tool with a special
focus on model based development and ECU testing. The software is designed to
help you manage data throughout the entire development process. This data can
include models, signals, parameters, tests, test results, and more. SYNECT
also handles data dependencies, versions and variants, as well as links to the
underlying requirements. One key aspect of SYNECT is direct connection to
engineering tools, e.g., MATLAB®, Simulink®, TargetLink®, or AutomationDesk,
and application/product lifecycle management systems (ALM/PLM) so that you can
work with your preferred tools and seamlessly exchange data. SYNECT is ideal
for automotives, aerospace, industrial automation and medical engineering –
and wherever embedded systems are developed through model based design.
### Version Control Systems
#### Subversion
Documents or files that undergo multiple stages of revision or development
are stored in version control systems such as the open-source tool Subversion
( _https://subversion.apache.org/_ ). Subversion supports web-protocol-based
and server-based access to the versioned repository; forking of files into
different revisions and releases is supported, as well as renaming of files
from one repository revision to the next. Subversion can maintain both
text-based and binary files.
### Network File Shares
Other data are stored individually at each partner on proprietary (Network)
File Shares. Those network shares are in general only available on the
intranet of each partner and are accessible only by the members of the
project. The partner network shares are used for the daily work of each task
to keep and manage working data.
## Data Types
In this section, the different data types dealt with in the project are
presented.
### Documentation
For general documentation and presentation of project results and its
communication, only commonly used data formats are being used:
* Text documents (.txt)
* Adobe Acrobat Portable Document Format (.pdf)
* Microsoft Office Word (.docx)
* Microsoft PowerPoint (.pptx)
* HTML
The first four file formats are used at each project partner internally, but
also for exchange between the project partners and for presentation purposes.
All file formats can be opened and edited not only with the proprietary
software of the file format developers (e.g. Microsoft Office, Adobe Acrobat),
but also with open-source software like OpenOffice.
For exchanging presentations between the partners, each of them is recommended
to use PDF as the standard file format, so that the owner of the information
controls its distribution. However, users may also distribute presentations in
editable PowerPoint format if they wish to explicitly allow the dissemination
of presentation contents by other partners.
### Tabulated Data
Tabulated Data files are mainly used in the context of the simulation and
testing activities but also during meetings, for simple analysis and
visualisation of values. These fall mainly under the following categories:
* Simulation Data
* Testing Data
* Parameter Files
Tabulated data are mostly contained in the following types:
* Microsoft Excel (.xlsx)
* Comma Separated Values (.CSV)
* Plain text files (.txt)
* Binary data
### Program Files (Data)
HIFI-ELEMENTS also deals with different simulation tools and programs and with
the exchange of those programs between the partners. Therefore, program inputs
and outputs also need to be partially exchanged between the partners, and the
usage of the programs needs to be managed.
Those programs can be proprietary simulation software or commercial software
like GT-SUITE, KULI, Matlab, MOTOR CAD, Maxwell, Morphee/Xmod, etc. Depending
on the license agreement situation, the program files may be exchanged freely;
otherwise it needs to be ensured that each partner has the required license to
perform its task or receives this license, if possible, from another partner.
In- and output of program data are discussed in the following section ‘Model
Data’.
### Model Data
Pertinent to the simulation software described in the previous sub-section,
for each simulation software, model data are required, either serving as input
to the simulation or as an output that requires post-processing. In general,
each file format for the model data of a simulation task is proprietary and
requires conversion tools to openly readable file formats.
## FAIR Data
### Making Data findable, including provisions for metadata
As one main goal of this project is to interconnect different simulation tools
and component models with each other, it is strictly required that data in
this project are findable. First of all, between simulation tools signal data
have to be exchanged. In the context of this project data from signals become
findable when the naming of the signals follow a pre-defined structure.
Therefore, as an output of WP1, the deliverable D1.1 “Document describing the
safety requirements and modelling guidelines” explains the naming convention
(see Figure 1) of the in- and output signals from each component simulation
model.
This allows for the replacement of component models (a low-fidelity model by a
high-fidelity one, or plant models for different component hardware).
Furthermore, the connection of different component data signals is enabled
efficiently by the standardized naming convention. Thirdly, the standardised
naming convention allows easy access to data signals for post-processing of
results.
**Figure 1: Naming convention of component model in- and output signals, from D1.1 [4].**
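As a hedged illustration of how such a convention can be enforced in practice, the following Python sketch checks signal names against a regular expression. The pattern shown (`<component>_<signal>_<unit>`) is a hypothetical placeholder; the authoritative structure is the one defined in D1.1.

```python
# Illustrative sketch: validating signal names against a naming convention.
# The pattern <component>_<signal>_<unit> is a hypothetical placeholder for
# the actual convention defined in D1.1.
import re

SIGNAL_NAME = re.compile(r"^[A-Za-z]+_[A-Za-z0-9]+_[A-Za-z]+$")

def check_signal_names(names):
    """Return the names violating the convention, e.g. before coupling models."""
    return [n for n in names if not SIGNAL_NAME.match(n)]

print(check_signal_names(["Battery_Voltage_V", "temp sensor 1"]))
# -> ['temp sensor 1']
```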
### Making data openly accessible
Following up on the standards described in the previous sub-section, the
standardised naming convention also allows for openly accessible data. The
naming convention will be made public, which is one purpose of this project.
Some component models will also be made openly accessible to the public.
The FMU/FMI description will be made publicly available as well.
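Since component models are exchanged via the FMI standard, publicly released FMUs can be inspected and simulated with generic open-source tooling. A minimal sketch follows, assuming a hypothetical FMU file name and using the third-party FMPy library (which is not a project deliverable):

```python
# Minimal sketch: inspecting and simulating a published FMU with FMPy.
# "component_model.fmu" is a hypothetical file name.
from fmpy import read_model_description, simulate_fmu

fmu = "component_model.fmu"

md = read_model_description(fmu)            # parse the FMI model description
print([v.name for v in md.modelVariables])  # exposed in-/output signal names

result = simulate_fmu(fmu, stop_time=10.0)  # structured array of time series
```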
### Making data interoperable
By the principles of the signal data naming convention and the component model
interface standardization described above, the data that are generated and
used are thereby interoperable.
### Increase data re-use
The standardization of the interface for component models allows for the easy
exchange of such models and their reuse. This applies especially to those
component models which will become publicly available. Those simulation models
will be re-usable beyond and after HIFI-ELEMENTS.
## Allocation of resources
A significant portion of the project resources, estimated at approximately
50%, is devoted to data management.
## Data security
Principles of data security were taken into account when the layout of the
data management was defined. All data are password protected and only made
available to project members. On the publicly accessible website, only public
data are stored.
## Ethical aspects
Ethical aspects are respected and not compromised by this project. Only
technical data are treated in this project, and they therefore do not contain
person-related data. An exception is the financial reports to the EU, which
may contain some person-related data.
Storage locations of data are in the EU or other partner countries (Turkey).
## Other
Software development, and the data management that goes alongside it, will be
conducted using the Agile Model Development principles. These principles are
presented in the framework of WP1; see also [3].
# Overview of Data Management
**Table 3-1 Overview on Storage Systems**
<table>
<tr>
<th>
Type of
Storage
</th>
<th>
Storage Location
</th>
<th>
Indexing
</th>
<th>
Access
</th>
<th>
Versioning
</th>
<th>
Security/ Encryption
</th>
<th>
Security/Backup
</th>
<th>
Costs
</th> </tr>
<tr>
<td>
**Cloud –**
**Projectplace**
</td>
<td>
</td>
<td>
yes
</td>
<td>
Restricted, individual access rights
</td>
<td>
Yes
</td>
<td>
Password protected
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Cloud –**
**Wordpress**
</td>
<td>
Germany
</td>
<td>
</td>
<td>
Public
</td>
<td>
No
</td>
<td>
Password
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**SYNECT**
**Server**
</td>
<td>
Amazon Cloud hosted, Ireland
</td>
<td>
Data queries (SQL)
</td>
<td>
Restricted, individual access rights
</td>
<td>
Yes
</td>
<td>
Encryption (https protocol)
</td>
<td>
Incremental backup
</td>
<td>
Hosting costs
</td> </tr>
<tr>
<td>
**Version**
**Control –**
**Subversion**
</td>
<td>
Not partner specific, dSPACE, Germany
</td>
<td>
no
</td>
<td>
Restricted,
HIFI “global” access rights
</td>
<td>
Yes
</td>
<td>
Password
</td>
<td>
Incremental backup
</td>
<td>
Open
Source Tool, only file storage costs
</td> </tr>
<tr>
<td>
**Version**
**Control –**
**Subversion**
</td>
<td>
Partner specific servers,
e.g. at FEV
</td>
<td>
no
</td>
<td>
Restricted, individual access rights
</td>
<td>
Yes
</td>
<td>
</td>
<td>
Incremental backup
</td>
<td>
Open
Source Tool, only file storage costs
</td> </tr>
<tr>
<td>
**File Shares**
</td>
<td>
Partner specific servers
</td>
<td>
yes
</td>
<td>
User access limited.
</td>
<td>
No
</td>
<td>
</td>
<td>
Incremental backup
</td>
<td>
various
</td> </tr> </table>
**Table 3-2 Overview on Data File Types**
<table>
<tr>
<th>
File Type
</th>
<th>
Example
</th>
<th>
Type of
Storage
</th>
<th>
Access/Licensing
</th>
<th>
Versioning
</th>
<th>
Confidential/
Classified/Public
</th> </tr>
<tr>
<td>
**Text**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Adobe Acrobat**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Word**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**PowerPoint**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**HTML**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Excel**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**CSV**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Binary data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
SYNECT Database
Backup File (*.bak)
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Program File Data**
</td>
<td>
MATLAB/
Simulink
</td>
<td>
</td>
<td>
Partially proprietary
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
KULI
</td>
<td>
</td>
<td>
Commercial License
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
xMOD
</td>
<td>
File
</td>
<td>
License to HIFI users
</td>
<td>
YES
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
SYNECT Client
</td>
<td>
</td>
<td>
License to HIFI users
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
SYNECT Client Add-
Ons (.ADDON /
.ADDONZ)
</td>
<td>
</td>
<td>
No license required.
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
MOTOR CAD
</td>
<td>
File
</td>
<td>
Commercial License
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Maxwell
</td>
<td>
File
</td>
<td>
Commercial License
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
**Scripts**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Python Scripts (.PY)
</td>
<td>
File
</td>
<td>
No license
</td>
<td>
Yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
MATLAB Scripts
(.M)
</td>
<td>
File
</td>
<td>
License to
execute
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
**Model Data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
Enterprise
Architect SysML
</td>
<td>
File
</td>
<td>
Commercial License
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
**MATLAB**
**SIMULINK Models**
**(.SLX/.MDL)**
</td>
<td>
File
</td>
<td>
Commercial License. **Limited access for IP**
**protection**
</td>
<td>
yes
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
</td>
<td>
**KULI Models**
</td>
<td>
File
</td>
<td>
Commercial License.
**Limited access for IP**
**protection**
</td>
<td>
yes
</td>
<td>
**Confidential**
</td> </tr>
<tr>
<td>
</td>
<td>
**xMOD models**
(.xmodel/.dll/.rtdll)
</td>
<td>
File
</td>
<td>
License to HIFI users
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
FMI 2.0 FMU Files (.FMU)
</td>
<td>
</td>
<td>
In general no license. May contain a license check or a reference to a
specific runtime system for simulation.
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
**Simulation Data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
ASAM MDF-4
Capturing Results
(.MF4)
</td>
<td>
</td>
<td>
XML format, license for
reader/writer tooling
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
MATLAB Data (.MAT)
</td>
<td>
</td>
<td>
Commercial license
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
xMOD ASCI files
(.CSV, .txt)
</td>
<td>
File
</td>
<td>
No license
</td>
<td>
Yes
</td>
<td>
</td> </tr>
<tr>
<td>
**Testing Data**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
ASAM XIL API
Stimulation Files
(.STI)
</td>
<td>
</td>
<td>
XML format, license for
reader/writer tooling
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
xMOD API testing format (.xcce)
</td>
<td>
File
</td>
<td>
No license
</td>
<td>
No
</td>
<td>
</td> </tr>
<tr>
<td>
**Parameter**
**Files**
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
ASAM Parameter
Values (.CDFX)
</td>
<td>
</td>
<td>
XML format, license for
reader/writer tooling
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
xMOD calibration parameters based on XML (.XPAR)
</td>
<td>
File
</td>
<td>
XML format, no license
</td>
<td>
Yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
MATLAB Scripts
(.M)
</td>
<td>
</td>
<td>
License to
execute
</td>
<td>
yes
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
MATLAB Data (.MAT)
</td>
<td>
</td>
<td>
Commercial license
</td>
<td>
yes
</td>
<td>
</td> </tr> </table>
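Several of the tool-specific formats listed in Table 3-2 can be converted to openly readable formats with freely available tooling. As a hedged illustration, the following Python sketch reads an ASAM MDF-4 capture with the open-source asammdf library and exports it to CSV; the file and signal names are placeholders, not project artefacts.

```python
# Illustrative sketch: post-processing an ASAM MDF-4 capture (.MF4) with the
# open-source asammdf library. File and signal names are placeholders.
from asammdf import MDF

mdf = MDF("capture_results.mf4")          # hypothetical capture file
speed = mdf.get("EngineSpeed")            # one recorded signal (placeholder)
print(speed.samples.mean(), speed.unit)   # quick sanity check on the samples

# Export to an openly readable format, as discussed for model data above
mdf.export(fmt="csv", filename="capture_results")
```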
# Discussion and Conclusions
The document described how data are managed in this project. A significant
portion of the project resources, estimated at approximately 50%, is devoted
to data management. The purpose of this document was to verify that the
managed data are treated in a so-called FAIR way, that is, that data are
findable, accessible, interoperable and reusable as much as possible.
# Recommendations
It is recommended that this document be updated if significant changes occur
in the data management procedure used in this project.
# Risk Register
## Risk Register
With reference to the critical risks and mitigation actions, this deliverable
is not linked to any open risk. See D8.1 – “Project Handbook” and the
monitoring file of the Steering Committee
( _https://service.projectplace.com/pp/pp.cgi/r1293387004_ ).
Newly identified risks are listed in the table below. Currently, no risks are
identified.
<table>
<tr>
<th>
Risk
number
</th>
<th>
Description of Risk
</th>
<th>
Proposed Risk Mitigation Measure
</th>
<th>
Probability / effect
</th>
<th>
Current estimation
of risk occurence
(comments)
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## Quality Assurance
The Steering Committee is the body for quality assurance. The procedure for
review and approval of deliverables is described in the deliverable report
D8.1 – “Project Handbook”. The quality will be ensured by checks and approvals
of Work package leaders, see front pages of all deliverables.
0774_SafeWaterAfrica_689925.md
(3) what methodology & standards will be applied, (4) whether data will be
shared /made open access & how and (5) how data will be curated & preserved.
# Overall Dataset Framework
This document contains the second version of the DMP, which, according to the
document
“Guidelines on FAIR Data Management in Horizon 2020”, aims to make our
research data findable, accessible, interoperable and reusable (FAIR). In
SafeWaterAfrica, data management procedures are included in WP8 and can be
summarized according to the framework shown in **Figure 1**, which presents
the complete workflow of dissemination and publication.
**Figure 1** : SafeWaterAfrica workflow of dissemination and publication
DMP: Data Management Plan
PEDR: Plan for Exploitation and Dissemination of Results
OA: Open Access
SC: Steering Committee
Dissemination Manager: Jochen Borris, Fraunhofer
Data Manager: Manuel Andrés Rodrigo Rodrigo, UCLM
The procedure for the management of data begins with the production of a
dataset by one or several of the partners. According to the figure, they
should inform the Data Manager about the data by filling in the template shown
in Annex 1, which includes the metadata. The dataset is then archived by the
partner that produced it, while the metadata are managed by the Data Manager.
The data archived by the partner may be in the form of tables and,
occasionally, documents such as reports, technical drawings, pictures, videos
and material safety data sheets. Software used to store the research results
mainly includes:
* applications of the office suites of Microsoft, Open and Libre Office, e.g. Word and Excel, and
* Origin Data Analysis and Graphing by Originlab.
Following checkup by the Data Manager, the metadata will be included in the Annex II section of the next edition of the DMP and, depending on the decision tree shown, the data can be considered for publication.
The DMP addresses the required points on a dataset-by-dataset basis and
reflects the current status of reflection within the consortium about the data
that will be produced. The DMP presents in detail only the procedures for
creating ‘primary data’ (data not available from any other source) and for
their management. In the internal procedures to grant open access to any
publication, research data or other innovation generated in the EU project,
the main workflow starts at the WP level. If a WP team member considers making
research data open access, it will inform the project steering committee about
its plans. The project steering committee will then discuss these plans in the
consortium and decide whether the data will be made openly accessible or not.
The general policy of the EU project is to apply “open access by default” to
its research data. Project results to be made openly accessible for the public
will be labelled “public” in the project documentation (table, pictures,
diagram, reports etc.). All project results labelled “public” will be
distributed under specific free/open license, where the authors retain the
authors’ rights and the users can redistribute the content freely by
acknowledgement of the data source.
With regard to the five points covered in the template proposed in the
“Guidelines on Data Management in Horizon 2020” (Data set reference and name,
Data set description, Standards and metadata, Data sharing and Archiving and
Preservation), they are included in the Table template proposed in Annex I and
there are common procedures that will be described together for all datasets
included in the next sections of this document.
# Data Set Reference and Name
For easy identification, all datasets produced in SafeWaterAfrica will also be
provided with a short name (data set reference) following the format SWA-
DS-xxyyy, where xx refers to the work package in which the data are produced
and yyy is a sequential reference number assigned by the Data Manager upon
reception of a dataset proposal. This name will be included in the template
and will not be filled in by the partner that proposes the dataset. In
contrast, the partner that produces the dataset will propose a descriptive
name (1), consisting of a sentence in which the content of the dataset is
clearly reflected. This sentence should be shorter than 200 characters and
will be checked and, if necessary, modified by the Data Manager for the sake
of uniformity.
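A minimal Python sketch of how such references could be generated is shown below; the zero-padding widths are our reading of the xx/yyy pattern, not a confirmed project rule.

```python
# Illustrative sketch: building SWA-DS-xxyyy dataset references.
# Padding widths are assumptions based on the xx/yyy pattern above.
def dataset_reference(work_package: int, sequence: int) -> str:
    return f"SWA-DS-{work_package:02d}{sequence:03d}"

print(dataset_reference(3, 7))  # -> 'SWA-DS-03007'
```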
# Data Set Description
It consists of a plain text of at most 200 words that briefly summarizes the
content, methodology and organization of the dataset, in order to give the
reader a first clear idea of its main aspects. It will be filled in by the
partner that produces the dataset (2) and checked upon reception and, if
necessary, modified by the Data Manager for the sake of uniformity.
# Standards and Metadata
Metadata is structured information that describes, explains, locates, or
otherwise makes it easier to retrieve, use, or manage an information resource.
Metadata is often called data about data or information about information.
Metadata included in our DMP are classified into three groups (see the
illustrative sketch after this list):
* Descriptive metadata, which designate a resource for purposes such as discovery and identification. In the DMP of SafeWaterAfrica, these metadata are to be filled in by the partner that proposes the Dataset and include elements such as the contributors (3) (institution partners that contribute the dataset), creator/s (4) (author/s of the dataset), and subjects (5) (up to six keywords that clearly identify the content).
* Administrative metadata, which provide information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it. In the DMP of SafeWaterAfrica, these metadata are to be filled in by the partner that proposes the Dataset and include elements such as language (6) (most likely
English), file format (7) (Excel, CSV, …) and type of resource (8) (table,
figure, picture…). It is proposed to use commonly adopted metadata standards
in this project based on the digital object identifier system® (DOI). To this
end, a DOI for the final version of the metadata form for each Dataset will
be obtained by the Data Manager.
* Structural metadata, which indicate how compound objects are put together. In the DMP of SafeWaterAfrica, these metadata are to be filled in by the partner that proposes the Dataset in Table 1 and include elements such as the parameters (9) included in the dataset (with information about the methodology used to obtain them according to international standards, equipment, etc.), the structure of the data table (10) (showing clearly how the data are organized) and additional information for the dataset (11) (such as the decimal delimiter, the column delimiter, etc.)
* Upon reception of the first version of the Dataset, this information will be checked by the Data Manager and, if necessary, modified for the sake of uniformity and clarity.
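As a hedged illustration of the three groups, the sketch below arranges the numbered template fields as a structured record in Python; all values are placeholders, and the authoritative template is the one in Annex I.

```python
# Illustrative sketch: the Annex I metadata template as a structured record.
# All values are placeholders; numbers in comments refer to the template items.
metadata = {
    "descriptive": {
        "contributors": ["Partner institution"],        # (3)
        "creators": ["Author name"],                    # (4)
        "subjects": ["water", "disinfection"],          # (5) up to six keywords
    },
    "administrative": {
        "language": "English",                          # (6)
        "file_format": "xlsx",                          # (7)
        "resource_type": "Table",                       # (8)
    },
    "structural": {
        "parameters": "e.g. COD in mg/L, method used",  # (9)
        "table_structure": "one row per sample",        # (10)
        "additional_info": "decimal '.', column ';'",   # (11)
    },
}
```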
# Data Sharing
The data sharing procedures and rights in relation to the data collected
through the SafeWaterAfrica project are the same across the different datasets
and are in accordance with the Grant Agreement. The partner that produces the
dataset should report the status (12) of the dataset: public, if the data are
going to be published, or private, if no diffusion outside the consortium is
intended (because the data are considered sensitive). In the case of public
data, a link to sample data can also be included to allow potential users a
rapid determination of the relevance of the data for their use (13). This link
will be checked by the Data Manager, and the partner that produces the Dataset
is responsible for keeping it alive for the whole duration of SafeWaterAfrica.
With respect to the access procedure, in accordance with Grant Agreement
Article 17, data must be made available upon request, or in the context of
checks, reviews, audits or investigations. If there are ongoing checks etc.,
the records must be retained until the end of these procedures.
Each partner must ensure open access to all peer-reviewed scientific
publications relating to its results. As per Article 29.2, the partners must:
* As soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim
to deposit at the same time the research data needed to validate the results
presented in the deposited scientific publications.
* Ensure open access to the deposited publication — via the repository — at the latest:
  * On publication, if an electronic version is available for free via the publisher, or
  * Within six months of publication in any other case.
* Ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following: the terms “European Union (EU)” and “Horizon 2020”; the name of the action, acronym and grant number; the publication date, and length of embargo period if applicable; and a persistent identifier.
Data will also be shared when the related deliverable or paper has been made
available at an open access repository, via the gold or the green model. The
normal expectation is that data related to a publication will be openly
shared. However, to allow the exploitation of any opportunities arising from
the raw data and tools, data sharing will proceed only if all co-authors of
the related publication agree. The Lead author, who is the author with the
main contribution and who is listed first, is responsible for getting
approvals and then sharing the data and metadata in the repository of its
institution or, alternatively, in the **Fraunhofer ePrints** repository
( _http://eprints.fraunhofer.de/_ ), an open access repository for research
data.
# Archiving and Preservation
The archiving and preservation procedures in relation to the data collected
through the SafeWaterAfrica project are the same across the different datasets
and are in accordance with the Grant Agreement.
The research data are generated at the sites of the partners, and stored and
archived at each site in accordance with the rules of each organisation and
with the relevant national legislation. Additionally, the data are copied to
the project intranet, which is available to all beneficiaries. The
project uses the software Atlassian Confluence. This wiki software
installation is provided by the coordinator Fraunhofer IST. The software runs
on a separate server on the campus in Braunschweig, Germany. Access is limited
to the IT administrators and to the beneficiaries via any internet browser,
secured by personal accounts. Differential back-ups are made each night on
magnetic tape. Server and tapes are stored in a locked room. The electricity
grid is backed up by batteries. The Confluence server will be provided also
after the end of the project for at least five years.
# Legal Issues
The SafeWaterAfrica partners are to comply with the ethical principles as set
out in Article 34 of the Grant Agreement, which states that all activities
must be carried out in compliance with:
* The ethical principles (including the highest standards of research integrity e.g. as set out in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and Commission recommendation (EC) No 251/2005 of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers (OJ L 75, 22.03.2005, p. 67), the European Code of Conduct for Research Integrity of ALLEA (All European Academies) and ESF (European Science Foundation) of March 2011
(
_http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegr_
_ity.pdf_ )
* Applicable international, EU and national law.
Furthermore, activities raising ethical issues must comply with the ‘ethics
requirements’ set out in Annex 1 of the Grant Agreement. At this point, the
DMP warrants that 1) research data are placed at the disposal of colleagues
who want to replicate the study or elaborate on its findings, 2) all primary
and secondary data are stored in a secure and accessible form and 3) the
freedom of expression and communication.
Regarding confidentiality, all SafeWaterAfrica partners must keep any data,
documents or other material confidential during the implementation of the
project and for at least five years (preferably 10 years) after the period set
out in Article 3 (42 months, starting 2016-06-01). Further detail on
confidentiality can be found in Article 36 of the Grant Agreement.
0775_CARBOMET_737395.md
**Appendix - CarboMet Privacy notice**
The University of Manchester is the data controller for information collected
by CarboMet
**About us:**
Metrology of Carbohydrates for Enabling European BioIndustries (CarboMet) is a
Coordination and Support Action (CSA) funded by Horizon 2020 FET-OPEN (Grant
agreement no. 737395).
CarboMet will facilitate engagement between key players and stakeholders of
the glycoscience community across Europe to identify the current state of the
art and in particular future innovation and technological challenges in
carbohydrate metrology.
In order to fulfil its objectives, CarboMet will carry out a number of
activities:
* Communication via a dedicated website and social media accounts (Twitter & LinkedIn);
* Online surveys for community input;
* Scoping meetings and workshops including training workshops in advanced technologies;
* Creation of policy briefings and white papers in hot topics;
* A periodic e-newsletter for communication and dissemination of CarboMet activities.
**What information are we collecting and by whom:**
We will collect information about key players and stakeholders from across the
European glycoscience field. The information we will collect about individuals
includes:
* names;
* roles and positions;
* organizations;
* glyco topics of interest;
* contact details i.e. email addresses.
This will be collected by the CarboMet Project Coordination team (see here for
details _https://carbomet.eu/contact/_ ).
**How is information collected?**
We will obtain information from you in the following ways:
1. Information you give us directly. For example, we may obtain information about you when you take part in one of our events (via Eventbrite*) or when you sign up to our mailing list (via our website through MailChimp*).
2. Social Media. When you interact with us on social media platforms such as LinkedIn* and Twitter* we may obtain information about you. The information we receive will depend on the privacy preferences you have set on those platforms.
3. Public information. We supplement our information with information from publicly available sources such as university websites, corporate websites, and public social media accounts.
**Why is it being collected and under what legal basis**
We may use your information for a number of different purposes.
We will rely on your consent for the following uses:
* Providing you with information you have asked for e.g. the CarboMet newsletter;
* Obtaining feedback to better understand how we can improve CarboMet activities;
* Seeking your views or comments;
* Sending you communications which you have requested and that may be of interest to you
e.g. CarboMet newsletters, event invitations, etc.
We will use our legitimate interests for the following uses:
* CarboMet event management, including communications during, before and after events;
* Keeping a record of your relationship with us;
* Sharing attendance at meetings and names on published papers
**How can I opt out or withdraw my consent to these uses of data?**
You can opt out at any time by unsubscribing from the various third parties*
we use or by contacting the CarboMet Project Coordination team
( _https://carbomet.eu/contact/_ ). We endeavor to act on withdrawals of
consent as soon as we can.
**Who will the information be shared with?**
Your information will be shared within the CarboMet Project Coordination team
as required. In addition, it may be shared in order to fulfil CarboMet
activities, on a considered and confidential basis, with a range of external
organisations, including the following:
* On occasion, and only where necessary with representatives from the European Commission Horizon 2020 FET-OPEN as part of CarboMet monitoring and review process i.e. the CarboMet Project Officer and external assessors;
* Companies and organisations providing services on behalf of CarboMet e.g. for hotel accommodation during events.
Other than as set out above, we will not publish or disclose any personal
information about you to other external enquirers or organisations unless you
have requested it or consented to it, or unless it is in your vital interests
to do so (e.g. in an emergency situation).
We will not share your information with third parties for marketing purposes.
**How long will we keep your information?**
We will keep your information for the duration of the CarboMet programme (unless you have opted out of some processing or withdrawn your consent), i.e. from 1st January 2017 until 31st December 2020.
Further information about the University of Manchester’s processing of
personal data for research is available from:
_https://www.manchester.ac.uk/discover/privacy-
information/dataprotection/privacy-notices/_
**Contact details:**
Any questions regarding this policy and our privacy practices should be sent by email to [email protected]
*Third parties. We use the following third parties. You can check their respective privacy policies via the links provided:
* Eventbrite _https://www.eventbrite.co.uk/support/articles/en_US/Troubleshooting/eventbrite-privacy-policy?lg=en_GB_
* LinkedIn _https://www.linkedin.com/legal/privacy-policy?trk=uno-reg-guest-home-privacypolicy_
* Mailchimp _https://mailchimp.com/legal/privacy/_
* Twitter _https://twitter.com/en/privacy_
<table>
<tr>
<th>
**HISTORY OF CHANGES**
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
**Version**
</td>
<td>
**Publication date**
</td>
<td>
**Change**
</td> </tr>
<tr>
<td>
1.0
</td>
<td>
31.01.2019
</td>
<td>
▪ Initial version
</td> </tr> </table>
0777_AiRT_732433.md
# Executive summary
This document includes the last revision of deliverable D7.2 of the AiRT project, which covers the Data Management Plan. The AiRT consortium participates in the Open Research Data Pilot of the H2020 programme. Therefore, this document describes how the research data collected during the project have been made accessible by the partners, and the repositories selected to deposit the data.
The document is based on information included in deliverable D6.3 (Report on monitoring and evaluation of communication and dissemination activities). The Data Management Plan has been revised during the project to include information related to deliverables and tasks. This deliverable is led by UPV, although all the partners contribute to the task.
# Introduction
The partners in the AiRT Project participate in the Open Research Data Pilot
in Horizon 2020 (ORD pilot). The aim of this pilot is “to improve and maximise
access to and re-use of research data generated by Horizon 2020 projects”
(Participant Portal H2020 Online Manual, 2017).
In this document, we introduce the first version of the Data Management Plan for the AiRT Project together with its later revisions. The aim of the deliverable was to describe how the partners would manage the research data collected during the project, including the guidelines to collect, register, preserve, research and publish the data. The deliverable has been continuously updated to cover any new data generated through each deliverable, specifying the typology of the data collected and how they have been made accessible by the consortium. There have been three updates of the data management plan: one in month 12 (D7.2 v2), one at the end of the project (D7.2 v3), and the last one included in this version (D7.2 v4), which presents the final results.
The starting point of the deliverable was Deliverable D6.3 (Report on
monitoring and evaluation of communication and dissemination activities). This
document has been written following guidelines “on FAIR Data Management in
Horizon 2020” (European Commission, 2016) and guidelines “to the rules on open
access to scientific publications and open access to research data in Horizon
2020” (European Commission, 2017).
The structure of the document is as follows. After this introduction, section 3 explains which data have been disseminated as open access. Section 4 describes how these data have been made FAIR (findable, accessible, interoperable, reusable). Section 5 explains how the project has covered the costs of making data FAIR. Section 6 explains how the consortium selected repositories that ensure data security, while section 7 covers the ethical aspects related to the use of information. Section 8 includes the results of the Data Management Plan after the different revisions undertaken in this deliverable.
# Data summary
The main objective of the AiRT Project is “to develop the world’s first indoor
RPAS specifically designed for professional use by the CIs”. Moreover,
specific objectives were defined in the project proposal
(http://airt.webs.upv.es/the-project/objectives/). Table 1 presents the
relationship between objectives and deliverables, indicating which of them are
public and confidential deliverables.
During the development of the project and of the RPAS, different data have been obtained and generated in relation to each deliverable. All the partners have had access to documents and deliverables in Basecamp (Figure 1), where different folders were created to ease access to the information. One of the folders was a repository for the deliverables, including their different versions (drafts and releases). Additionally, copies of documents deposited in Basecamp were also saved by the project coordinator and by the coordinators and authors of deliverables. For data requiring more storage space, such as videos, the project coordinator and the partner in charge of filming saved them directly or uploaded them to other repositories such as Vimeo (Figure 2) and Zenodo.
As a general rule, the partners made available the data that are not defined as confidential due to their association with Intellectual Property Rights or because they are strategic for the Exploitation Plan (Table 1). The Dissemination and Communication Plans, explained in Deliverables D6.1 to D6.5, indicate which research results would be published and communicated. This document uses information from these deliverables as a basis to explain how scientific research articles and their related data would be made available through open access means.
Figure 1. Internal management of data and information
Figure 2. Vimeo repository
Table 1. Objective, deliverables and type of information. Source:
http://airt.webs.upv.es/the-project/deliverables/
<table>
<tr>
<th>
**Specific objective in the project**
</th>
<th>
**Deliverable**
</th>
<th>
**Public or Confidential?**
</th> </tr>
<tr>
<td>
SO1. Analysis of CIs needs, ethical/ security issues and risk analysis. The
Airt RPAS system will lay special focus on the needs of Creative Industries
while shooting indoor.
</td>
<td>
D2.1. CI’s needs for indoor filming using RPAS
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D2.2. Ethical aspects and safety of RPAS use indoor
</td>
<td>
Public
</td> </tr>
<tr>
<td>
SO2. Adaptation of indoor positioning system (IPS) for the RPAS.
</td>
<td>
D3.1. Hardware IPS optimized system
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D3.2. IPS with improved update rate and I2C Protocol
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D3.3. Environmental map
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D3.4. Technical validation of IPS
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
SO3. User-friendly, intuitive interface Graphical user interface of indoor
navigation.
</td>
<td>
D5.1. End-user friendly adapted software
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
SO4 Adaptation of RPAS. Safety feature and requirements defined by CIs will be
integrated in an innovative RPAS by the partner AeroTools.
</td>
<td>
D4.1. RPAS design specifications
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D4.2. Prototype according to specifications (Manufacture of 3 prototypes)
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
</td>
<td>
D4.3. Integration of advanced functionalities
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D4.4. Test reports
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
SO5. Integration and validation. The adapted IPS by Pozyx Labs will be
integrated within the RPAS and a technical validation test performed.
</td>
<td>
D5.2. System integration
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
D5.3. Technical validation
</td>
<td>
Confidential
</td> </tr>
<tr>
<td>
SO6. Demonstration. The benefits of the AiRT system for Creative Industries,
the new services which can be provided and its possible exploitation will be
presented.
</td>
<td>
D5.4. RPAS user guide
</td>
<td>
Public
</td> </tr>
<tr>
<td>
D5.5. Report on results of demonstration
</td>
<td>
Public
</td> </tr>
<tr>
<td>
SO7. Elaboration of a proposal for a European legislation for indoor RPAS
safety and security (policy handbook).
</td>
<td>
D6.5. Development of policy on a best-practice model
</td>
<td>
Public
</td> </tr> </table>
The European Commission has defined the guidelines related to the rules on
open access to scientific publications and research data (Figure 3). The
objective is “to make research data findable, accessible, interoperable and
reusable (FAIR)” (European Commission, 2016). They consider open access (OA)
as “providing online access to scientific information that is free of charge
to the end-user and re-usable” (European Commission, 2017). Moreover, they
differentiate between open access information included in peer-reviewed
scientific research articles and that included in the research data.
Concerning peer-reviewed scientific research articles, they indicate two
routes to open access:
* Self-archiving/green open access: the manuscript is archived in an online repository. Some publishers establish a period of embargo before open access. This is the route we will use for papers in peer-reviewed journals.
* Open access publishing/gold open access: the article is immediately published in open access mode. There is a publication cost for the authors.
It is compulsory that every peer-reviewed scientific publication is open
access (green or gold). The guidelines also encourage providing open access in
the case of monographs, books, conference proceedings, and grey literature
(reports and others). In our project, books, communications to workshops and
conferences, and public deliverables will be also open access, through gold or
green open access, as we explain in this document.
In relation to research data, they refer to data available in digital forms
and resulted from statistics, experiments, observations, surveys, interviews
and images. Data obtained would be used for reasoning, discussion and
calculation. They would be located in a repository. Some repositories
facilitate to deposit both publications and data, such as Zenodo and many
academic publishers.
Figure 3. Open access to scientific publication and research data in the wider
context of dissemination and exploitation.
Source: European Commission (2017)
The Dissemination Plan (D6.3) has described the different publications that
would be elaborated during the project (Tables 2 to 5). These publications
include peer-reviewed scientific publications, books, conference proceedings
and the own deliverables. Tables 2 to 5 include the works to disseminate
during the project. Data Management Plan has been reviewed during the project
to incorporate any update in data and information able to be included in the
open access data pilot.
Table 2. AiRT project’s books and data. Source: Elaboration from deliverable
D6.3
<table>
<tr>
<th>
**Dissemination**
**activity**
</th>
<th>
**Associated deliverable**
</th>
<th>
**Origin of data**
</th>
<th>
**Data generated**
</th>
<th>
**Data utility**
</th> </tr>
<tr>
<td>
Research Book
</td>
<td>
D6.4 Report on Workshop conclusions
</td>
<td>
Information produced at the workshop (April 2018)
</td>
<td>
File in text format.
Springer publishing
(2018)
</td>
<td>
CIs, academia, technology firms, customers/users
</td> </tr>
<tr>
<td>
Policy book
</td>
<td>
D6.5 Development of policy on a best-practice model
</td>
<td>
Secondary data and own analyses
</td>
<td>
File in text format. Published online and in paper
</td>
<td>
EU commissions
(EGE, CEN, DG
Health and Food
Safety, and others),
CIs,
users/customers, drone
manufacturers (including hardware and software)
</td> </tr> </table>
Table 3. AiRT project’s peer reviewed scientific papers and data. Source:
Elaboration from deliverable D6.3
<table>
<tr>
<th>
**Dissemination**
**activity**
</th>
<th>
**Associated deliverable**
</th>
<th>
**Origin of data**
</th>
<th>
**Data generated**
</th>
<th>
**Data utility**
</th> </tr>
<tr>
<td>
Regarding safety and security issues
</td>
<td>
D2.2. Ethical aspects and safety of RPAS use indoor
</td>
<td>
Secondary data and own analyses
</td>
<td>
Qualitative data in text format. Paper sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr>
<tr>
<td>
Regarding digital applications and interface
</td>
<td>
D5.1: End-user friendly adapted software
</td>
<td>
Secondary data and own analyses and development
</td>
<td>
Qualitative and quantitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr>
<tr>
<td>
Regarding
Distributed System
Architecture
</td>
<td>
D5.2: System integration
</td>
<td>
Own analyses and development
</td>
<td>
Qualitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific
community, Companies
specialized in System’s architectures
</td> </tr>
<tr>
<td>
Regarding 3D mapping technology development
</td>
<td>
D3.3. Environmental map
</td>
<td>
Secondary data and own analyses and development
</td>
<td>
Qualitative and quantitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr>
<tr>
<td>
Regarding Creative industries specific needs/ requirements
</td>
<td>
D2.1. Needs of CIs for indoor filming using RPAS
</td>
<td>
Analysis of information in focus
groups
</td>
<td>
Qualitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr>
<tr>
<td>
Regarding new European Aviation regulation for RPAS
</td>
<td>
D6.5 Development of policy on a best-practice model
</td>
<td>
Secondary data and own analyses
</td>
<td>
Qualitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr>
<tr>
<td>
Regarding indoor positioning system for RPAS
</td>
<td>
D3.2. IPS with improved update rate and I2C Protocol
</td>
<td>
Secondary data and own analyses and development
</td>
<td>
Qualitative and quantitative data in text format. Paper to be sent to a peer-reviewed journal.
</td>
<td>
Scientific academic community,
specialised in CIs and ICT
</td> </tr> </table>
Table 4. AiRT project’s participation in international conferences and data.
Source: Elaboration from deliverable D6.3
<table>
<tr>
<th>
**Dissemination**
**activity**
</th>
<th>
**Associated deliverable**
</th>
<th>
**Origin of data**
</th>
<th>
**Data generated**
</th>
<th>
**Data utility**
</th> </tr>
<tr>
<td>
Regarding CIs and related fields
</td>
<td>
Project proposal.
D2.1. Needs of CIs for indoor filming using
RPAS
</td>
<td>
Document of the project proposal.
Analysis of information in focus groups
</td>
<td>
Qualitative data in text format. Papers sent and to be sent to conferences
</td>
<td>
Scientific academic community, specialised in CIs and ICT, drone manufacturers
(including hardware and software)
</td> </tr>
<tr>
<td>
Regarding drones and related fields
</td>
<td>
D6.5
Development of
</td>
<td>
Own analyses and
</td>
<td>
Qualitative and quantitative data
</td>
<td>
Scientific academic community, specialised in CIs
</td> </tr>
<tr>
<td>
</td>
<td>
policy on a best-practice model
D4.3 Integration of advanced functionalities
D5.1: End-user friendly adapted software
D5.2: System integration
</td>
<td>
developments
</td>
<td>
in text format. Papers to be sent to conferences
</td>
<td>
and ICT, drone manufacturers (including hardware and software)
</td> </tr>
<tr>
<td>
Regarding related technology
(Embedded
Wireless Systems and Networks)
</td>
<td>
D4.3 Integration of advanced functionalities
D5.1: End-user friendly adapted software
D5.2: System integration
</td>
<td>
Own analyses and developments
</td>
<td>
Qualitative and quantitative data in text format. Paper to be sent to
conference
</td>
<td>
Scientific academic community, specialised in CIs and ICT, drone manufacturers
(including hardware and software)
</td> </tr> </table>
Table 5. AiRT project’s public deliverables and data. Source: Elaboration from deliverable D6.3
<table>
<tr>
<th>
**Dissemination**
**activity**
</th>
<th>
**Associated deliverable**
</th>
<th>
**Origin of data**
</th>
<th>
**Data generated**
</th>
<th>
**Data utility**
</th> </tr>
<tr>
<td>
D2.1. CI’s needs for indoor filming
using RPAS
</td>
<td>
The own deliverable
</td>
<td>
Analysis of information in focus groups
</td>
<td>
Tables with data from analysis with QDAMiner.
File in text format.
</td>
<td>
Drone manufacturers (including hardware and software); scientific academic
community, specialised in CIs and ICT; CI’s users/customers, including drone
operators
</td> </tr>
<tr>
<td>
D2.2. Ethical aspects and safety of RPAS use indoor
</td>
<td>
The own deliverable
</td>
<td>
Analysis of information in focus groups.
State of the art.
Secondary data.
Own analysis.
</td>
<td>
Qualitative data in text format.
</td>
<td>
Drone manufacturers (including hardware and software); scientific academic
community, specialised in CIs and ICT; CI’s users/customers, including drone
operators
</td> </tr>
<tr>
<td>
D5.4. RPAS user guide
</td>
<td>
The own deliverable
</td>
<td>
Secondary data.
Own analysis.
</td>
<td>
Qualitative data in text format.
</td>
<td>
Drone manufacturers (including hardware and software); scientific academic
community, specialised in CIs and ICT; CI’s users/customers, including drone
operators
</td> </tr>
<tr>
<td>
D5.5. Report on results of demonstration
</td>
<td>
The own deliverable
</td>
<td>
Own analysis
</td>
<td>
Qualitative data in text format.
</td>
<td>
Drone manufacturers (including hardware and software); scientific academic
community, specialised in CIs and ICT; CI’s users/customers, including drone
operators
</td> </tr> </table>
<table>
<tr>
<th>
D6.5. Development of policy on a bestpractice model
</th>
<th>
The own deliverable
</th>
<th>
Secondary data
and own analyses
</th>
<th>
Qualitative data in text format.
</th>
<th>
Drone manufacturers (including hardware and software); scientific academic
community, specialised in CIs and ICT; CI’s users/customers, including drone
operators
</th> </tr> </table>
# Open access
In this section, we present how the AiRT project consortium has ensured that scientific publications and the data associated with them follow the guidelines on FAIR data (findable, accessible, interoperable, reusable) defined by the European Commission (2016). The presentation for each type of document included in Tables 2 to 5 is shown in Tables 6 to 9. The main repositories for self-archiving are Riunet (the institutional repository of UPV, compatible with OpenAire), Zenodo and ResearchGate, depending on the information and files. For underlying data we have also used these repositories, and Vimeo for videos. Other repositories might be used depending on the subject. The main advantages of Zenodo are its multidisciplinary scope, its capacity to accept files of up to 50 GB, and its operation by CERN, which provides higher security. Riunet is the repository of UPV and accepts all types of data. Researchers in the project use ResearchGate to disseminate all their works, both related and unrelated to the project. Moreover, we created a project in this repository to include any work related to the project.
Table 6. Open access for books
<table>
<tr>
<th>
</th>
<th>
**FAIR data for Research Book and Policy Book**
</th> </tr>
<tr>
<td>
Findable and accessible
</td>
<td>
Books will be published with Springer, in gold open access. Therefore, the
entire version will be accessible for other researchers and firms.
</td> </tr>
<tr>
<td>
</td>
<td>
Metadata: every book will include information about the following metadata
(guideline from
OpenAire). These metadata might change depending on publisher’s rules
* Identifier given to the book/chapters by books’ editors
* Title: for book and chapters
* Creator: names of the partners involved
* Funder: EU H2020, name of the action, acronym
* Project identifier: name of the project and project ID
* Publisher
* Source: name of the section in which publisher position the book and link to the book’s webpage
* Publication year
* Keywords
* Contributors: authors in the chapters
* Size: number of pages and size of the doc/file (MB)
* File format
* Language
* Version (draft, author’s version, editor’s version)
* Rights: specify restrictions from the publisher, like embargo
* Description: abstract
* Coverage: in case data refer to specific countries/ locations
</td> </tr>
<tr>
<td>
</td>
<td>
Underlying data, when they are not related to IPRs or the exploitation plan, will be deposited in Riunet (compatible with OpenAire) or Zenodo, once any embargo related to data included in the books has finished. Participants also use ResearchGate to disseminate their research, and we created a project there to include the AiRT project’s dissemination.
</td> </tr>
<tr>
<td>
Interoperable
</td>
<td>
The methods used to analyse the data, and any software used, are explained in the related chapters and deliverables (see Table 9). In case these explanations are not enough, we will add more details with the underlying data in the repositories.
</td> </tr>
<tr>
<td>
Reusable
</td>
<td>
All the underlying data that are also open access will allow other researchers to reuse them. Underlying data that are related to the exploitation plan and IPRs will not be open access. Data will be accessible for at least 3 years after the end of the project.
</td> </tr> </table>
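For illustration, the metadata fields listed in Table 6 can be gathered into a single structured record before a book is deposited. The following minimal Python sketch uses invented placeholder values (titles, links and sizes are not real publication data); only the field names come from the list above.

```python
# Minimal sketch: the OpenAire-style metadata fields from Table 6 as a
# structured record. All values below are illustrative placeholders.
import json

book_metadata = {
    "identifier": "assigned-by-books-editors",  # identifier given by the editors
    "title": "Example book title",
    "creator": ["Partner A", "Partner B"],      # names of the partners involved
    "funder": "EU H2020, name of the action, AiRT",
    "project_identifier": "AiRT, project ID 732433",
    "publisher": "Springer",
    "source": "https://www.springer.com/...",   # link to the book's webpage
    "publication_year": 2018,
    "keywords": ["RPAS", "creative industries"],
    "contributors": ["Authors of the chapters"],
    "size": "92 pages, 2 MB",
    "file_format": "PDF",
    "language": "en",
    "version": "editor's version",              # draft / author's / editor's version
    "rights": "embargo until 2019-12-31",       # restrictions from the publisher
    "description": "Abstract of the book.",
    "coverage": "Europe",                       # countries/locations the data refer to
}

print(json.dumps(book_metadata, indent=2, ensure_ascii=False))
```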
Table 7. Open access for Peer reviewed scientific papers
<table>
<tr>
<th>
</th>
<th>
**FAIR data**
</th> </tr>
<tr>
<td>
Findable and accessible
</td>
<td>
The papers will be findable and accessible through:
* The journal’s website _._
* Versions of the papers will be deposited in a repository such as ResearchGate and others similar. Authors will check in SHERPA/ROMEO before submitting a paper whether the journal allows self-archiving. Publishers tend to allow authors to archive, in other repositories, final draft post-referee in Word format (not in editor’s pdf) and indicating where it is published with the link to the journal. Other repositories will be checked with publishers. The aim is to disseminate our works as much as possible.
</td> </tr>
<tr>
<td>
Journals are being selected in relation to the following subjects:
a) Regarding safety and security issues
b) Regarding digital applications and interface
c) Regarding Distributed System Architecture
d) Regarding 3D mapping technology development
e) Regarding Creative industries specific needs/requirements
f) Regarding new European Aviation regulation for RPAS
g) Regarding indoor positioning system for RPAS
</td> </tr>
<tr>
<td>
Metadata: every paper will include information about (guideline from OpenAire):
– Identifier: DOI
* Title
* Creator: names of the partners involved
* Funder: EU H2020, name of the action, acronym
* Project identifier: name of the project and project ID
* Publisher
* Source: publication name and link to the journal webpage
* Publication year
* Keywords
* Contributors: authors in the paper
* Size: number of pages and size of the doc/file (MB)
* File format
* Language
* Version (draft, accepted version)
* Rights: specify restrictions from the publisher, like embargo
* Description: abstract
* Coverage: in case data refer to specific countries/ locations
</td> </tr>
<tr>
<td>
Underlying data:
– Underlying data will be deposited in the repository of the journal, as an additional file to the paper, when the journal facilitates this option. This will be used for Excel files when the journal so indicates.
– In other cases, the repositories Riunet, Zenodo and ResearchGate will be used to deposit underlying data. For example, videos and transcriptions from focus groups will be deposited in these repositories once the embargo of the publications has finished.
Vimeo might also be used in the case of videos.
</td> </tr>
<tr>
<td>
Interoperable
</td>
<td>
The methods used to analyse the data, and any software used, are explained in the related papers and deliverables (see Table 9). In case these explanations are not enough, we will add more details with the underlying data in the repositories.
</td> </tr>
<tr>
<td>
Reusable
</td>
<td>
All the underlying data that are also open access will allow other researchers to reuse them. Underlying data that are related to the exploitation plan and IPRs will not be open access. Data will be accessible for at least 3 years after the end of the project.
</td> </tr> </table>
Table 8. Open access for Participation in international conferences
<table>
<tr>
<th>
</th>
<th>
**FAIR data**
</th> </tr>
<tr>
<td>
Findable and accessible
</td>
<td>
Versions of the communications will be deposited as follows:
– Final draft in Word format (not in the editor’s pdf), indicating where it is published, with the link to the book. These files will be deposited in Riunet or ResearchGate. Riunet is Universitat Politècnica de València’s own repository and is compatible with OpenAire.
Other repositories will be checked with editors.
</td> </tr>
<tr>
<td>
International conferences are being selected in relation to the following subjects:
a) Regarding CIs and related fields
b) Regarding drones and related fields
c) Regarding related technology (Embedded Wireless Systems and Networks)
</td> </tr>
<tr>
<td>
Metadata: every communication will include information about (guideline from OpenAire):
– Identifier given to the communication by conference editors or book’s editors
– Title
* Creator: names of the partners involved
* Funder: EU H2020, name of the action, acronym
* Project identifier: name of the project and project ID
* Publisher
* Source: name of the conference, publication name and link to the editor’s webpage
* Publication year
* Keywords
* Contributors: authors in the communication
* Size: number of pages and size of the doc/file (MB)
* File format
* Language
* Version (draft, accepted version)
* Rights: specify restrictions from the publisher, like embargo
* Description: abstract
* Coverage: in case data refer to specific countries/ locations
</td> </tr>
<tr>
<td>
Interoperable
</td>
<td>
The methods used to analyse the data, and any software used, are explained in the related communications and deliverables (see Table 9). In case these explanations are not enough, we will add more details with the underlying data in the repositories.
</td> </tr>
<tr>
<td>
Reusable
</td>
<td>
All the underlying data that are also open access will allow other researchers to reuse them. Underlying data that are related to the exploitation plan and IPRs will not be open access. Data will be accessible for at least 3 years after the end of the project.
</td> </tr> </table>
Table 9. Open access for Public Deliverables
<table>
<tr>
<th>
</th>
<th>
**FAIR data**
</th> </tr>
<tr>
<td>
Findable and accessible
</td>
<td>
Public deliverables will be deposited in:
* The CORDIS website, as they have been submitted to the European Commission. (http://cordis.europa.eu/project/rcn/206031_en.html)
* The AiRT Project website
* Repositories: Riunet or Zenodo. Other repositories will be checked.
</td> </tr>
<tr>
<td>
Name of the deliverables:
1. D2.1. CI’s needs for indoor filming using RPAS
2. D2.2. Ethical aspects and safety of RPAS use indoor
3. D5.4. RPAS user guide
4. D5.5. Report on results of demonstration
5. D6.5. Development of policy on a best-practice model
</td> </tr>
<tr>
<td>
Metadata: every deliverable includes
* The title
* Version
* Creator, and author/reviewer
* Date and due month
* Funder: EU H2020, name of the action, acronym, proposal number
* Logo of the project
* Type of CC license
* Text: document written, including data and analysis
</td> </tr>
<tr>
<td>
Underlying data:
* In case data in the deliverables are used for peer-reviewed scientific papers, the underlying data might be deposited in the repository of the journal, as an additional file to the paper, when the journal facilitates this option. This will be used for Excel files when the journal so indicates.
* In other cases, the repositories Riunet and Zenodo will be used to deposit underlying data. Among these data we can cite transcriptions from focus groups. However, videos from focus groups would need other means due to their size. In these cases, Vimeo would be used in addition to Zenodo or Riunet.
Underlying data from deliverables will be kept closed in the repositories until the embargo on the related publications has finished, to avoid their use before our publications.
</td> </tr>
<tr>
<td>
Interoperable
</td>
<td>
The methods used to analyse the data, and any software used, are explained in the deliverables.
</td> </tr>
<tr>
<td>
Reusable
</td>
<td>
Public deliverables will be Creative Commons licensed:
– CC BY-NC-SA until the end of the embargo on any publication related to them (12 months)
– Then, the license will change to CC BY
</td> </tr> </table>
# Allocation of resources
This section includes information about the costs related to making data FAIR (findable, accessible, interoperable, reusable) and how these costs would be covered. Table 10 presents a summary of this information. The AiRT Project covered the costs of the two books published in open access through Springer Publishing. The rest of the works have been made open access through self-archiving. The SHERPA/ROMEO website has been checked to determine whether self-archiving through other repositories is allowed.
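As an illustration only, this SHERPA/RoMEO lookup could be scripted against the service’s v2 API. The endpoint, parameters and filter syntax below are assumptions based on the public v2 documentation (https://v2.sherpa.ac.uk) and may need adapting; a free API key is required, so this is a sketch to adapt rather than a verified recipe.

```python
# Hypothetical sketch of automating the SHERPA/RoMEO check described above.
# Endpoint and parameters are assumptions based on the public v2 API
# documentation and may need adapting; a free API key is required.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR-SHERPA-API-KEY"  # placeholder: obtain one from v2.sherpa.ac.uk

def journal_policy_by_issn(issn: str) -> dict:
    """Return the raw JSON record for a journal, looked up by ISSN."""
    params = urllib.parse.urlencode({
        "item-type": "publication",
        "api-key": API_KEY,
        "format": "Json",
        # Assumed filter syntax: a JSON-encoded list of [field, op, value].
        "filter": json.dumps([["issn", "equals", issn]]),
    })
    url = f"https://v2.sherpa.ac.uk/cgi/retrieve?{params}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    record = journal_policy_by_issn("0000-0000")  # replace with the journal's ISSN
    # The publisher policy section describes which versions may be archived.
    print(json.dumps(record, indent=2)[:2000])
```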
Table 10. Allocation of resources
<table>
<tr>
<th>
**Dissemination activity**
</th>
<th>
**_Where it will be preserved*_ **
</th>
<th>
**Cost**
</th>
<th>
**Years preserved**
</th> </tr>
<tr>
<td>
Books (2)
</td>
<td>
* Publisher repository
(Springer)
* Riunet (OpenAire compatible) in case publisher allows it
* ResearchGate in case publisher allows it
</td>
<td>
* Springer: around €5,000 per book in open access.
* Riunet and the rest of the repositories, such as ResearchGate, have no costs for researchers
</td>
<td>
Not limited
</td> </tr>
<tr>
<td>
Peer reviewed scientific papers (5-6)
</td>
<td>
* Journal webpage
repository
* Riunet, Zenodo and others (ResearchGate), in this case following copyright rules of each journal**
</td>
<td>
– Riunet and the rest of the repositories have no costs for researchers
</td>
<td>
* Journals preserve papers without time limit.
* Self-archiving in other repositories at least 3 years after the end of the project.
</td> </tr>
<tr>
<td>
Participation in
international conferences
</td>
<td>
* Publisher repository when there is a book with proceedings
* Riunet (OpenAire compatible) in case publisher allows it
</td>
<td>
* Costs are usually included in the conference fees
* Riunet and the rest of the repositories have no costs for researchers
</td>
<td>
Self-archiving in other repositories at least 3 years after the end of the
project.
</td> </tr>
<tr>
<td>
Public Deliverables
</td>
<td>
– CORDIS, Riunet, Zenodo
</td>
<td>
No costs
</td>
<td>
Self-archiving in
repositories at least 3 years after the end of the project.
</td> </tr> </table>
* Incompatibility between repositories will be checked before uploading any document or data.
** They usually only allow depositing the author’s version, not the editor’s version.
# Data security
The repositories selected are all certified repositories from big academic publishers such as Springer. The journals selected for publications are all included in the Journal Citation Reports.
Riunet, the repository of the Universitat Politècnica de València, is compatible with OpenAire. The latter and Zenodo are also certified repositories created and used by important European institutions. They offer secure support for depositing research data to make them open access. Using more than one repository ensures that data are not lost. ResearchGate is a common repository for researchers and ensures dissemination across the five continents. Data and documents in the repositories will be maintained for at least three years after the end of the project. Data security is also ensured through the different copies of documents and data maintained by the project coordinator and by the authors of the deliverables and other works.
Although we have not worked with sensitive information, we have been very cautious with any data derived from personal information and from shootings that involved people talking. In all cases, a consent form was signed.
# Ethical aspects
Ethical aspects have been taken into account during the development of the project and in each deliverable. All participants from the creative industries signed consent forms allowing them to be filmed, including what they said while they were being filmed. They were informed that the images, videos and interviews would be used for research purposes. They were also informed about the aim of the project and what each partner would do. Participants were also encouraged to ask questions about the project and their participation.
# Advances in open data management
In this revision, the advances in the dissemination of results, including open data management, are presented. Tables 11 to 14 include the advances and explanations related to the previous Tables 6 to 10. Table 11 presents advances in open access publishing related to books, while Table 12 includes advances in peer-reviewed scientific papers. Table 13 shows advances associated with conferences and Table 14 advances linked to public deliverables. In February 2018, an account in Zenodo was opened. Deliverables were uploaded, and the rest of the documents will be uploaded to Zenodo, ResearchGate and Riunet, in the format that each publisher allows, when the embargo period finishes. Universitat Politècnica de València makes open access to the authors’ versions through the Riunet repository compulsory. Library staff are in charge of the embargo deadlines in Riunet. Therefore, when they publish the open versions of each work, we will also publish them in ResearchGate and Zenodo.
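As a sketch of this workflow, a public deliverable could be deposited through Zenodo’s documented REST deposit API (https://developers.zenodo.org). This is a minimal example, not the project’s actual procedure: the token, file name, metadata values and the grant identifier format are placeholders to be checked against the current Zenodo documentation before use.

```python
# Minimal sketch, assuming a personal Zenodo access token, of depositing
# a public deliverable through Zenodo's REST API. File name, metadata
# values and the grant id format are illustrative placeholders.
import requests  # third-party: pip install requests

TOKEN = "YOUR-ZENODO-TOKEN"
BASE = "https://zenodo.org/api/deposit/depositions"

# 1) Create an empty deposition (a draft record).
r = requests.post(BASE, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2) Upload the deliverable file into the deposition's file bucket.
with open("D5.4_RPAS_user_guide.pdf", "rb") as fp:
    requests.put(
        f"{deposition['links']['bucket']}/D5.4_RPAS_user_guide.pdf",
        data=fp,
        params={"access_token": TOKEN},
    ).raise_for_status()

# 3) Attach descriptive metadata (title, authors, license, EU grant).
metadata = {
    "metadata": {
        "title": "D5.4. RPAS user guide",
        "upload_type": "publication",
        "publication_type": "deliverable",
        "description": "Public deliverable of the AiRT project.",
        "creators": [{"name": "AiRT consortium"}],
        "license": "cc-by-nc-sa-4.0",  # per the embargo policy in Table 9
        "grants": [{"id": "10.13039/501100000780::732433"}],  # assumed H2020 grant id format
    }
}
requests.put(
    f"{BASE}/{deposition['id']}",
    params={"access_token": TOKEN},
    json=metadata,
).raise_for_status()

# Publishing (POST to deposition['links']['publish']) is left out so the
# draft can be reviewed before it becomes public.
```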
Table 11. Advances in open access of dissemination results: Books
<table>
<tr>
<th>
**Books**
</th>
<th>
**Allocation**
</th> </tr>
<tr>
<td>
Policy book:
* Title: Ethics and civil drones. European Policies and proposals for the industry
* Editors: María de Miguel Molina and Virginia Santamarina Campos
* Number of chapters: 6
* Number of pages: 92
* Publisher: Springer
* Reusable: under a CC BY license
* Year: 2018
* ISBN: 978-3-319-71086-0
* DOI: 10.1007/978-3-319-71087-7
* Reference to the H2020 project and funding: Introduction chapter is dedicated first to introduce the project and information about H2020 funding
</td>
<td>
Open access at:
_https://www.springer.com/la/book/9783319710860_
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Web of the AiRT project: link to publisher at _http://airt.webs.upv.es/the-
project/deliverables/_
</td> </tr> </table>
<table>
<tr>
<th>
Workshop book:
* Title: Drones and the Creative Industry. Innovative strategies for European SMEs
* Editors: Virginia Santamarina Campos and Marival Segarra Oña
* Number of chapters: 11
* Number of pages: 161
* Publisher: Springer
* Reusable: under a CC BY license
* Year: 2018
* ISBN: 978-3-319-95260-4
* DOI: 10.1007/978-3-319-95261-1
* Reference to the H2020 project and funding: Introduction chapter is dedicated first to introduce the project and information about H2020 funding
</th>
<th>
Open access at:
_https://www.springer.com/us/book/9783319952604_
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Web of the AiRT project: link to publisher at _http://airt.webs.upv.es/the-
project/deliverables/_
</th> </tr>
<tr>
<td>
Book Title: Innovative Approaches to Tourism and Leisure
* Chapter Title: Importance of Indoor Aerial Filming for Creative Industries (CIs): Looking Towards the Future.
* Pages: 51-66
* Authors: Santamarina Campos, Virginia; Miguel Molina, María Blanca De; Segarra-Oña, Marival; de-Miguel-Molina, María – Editors:
* Publisher: Springer
* ISBN/EAN: 978-3-319-67603-6
* Year: 2018
* Reference to the H2020 project and funding: yes, in acknowledgements
(https://link.springer.com/chapter/10.1007/978-3319-67603-6_4)
</td>
<td>
Research gate: _https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from 30 Dec 2017
Check the editor’s policy: January 2019
Riunet (UPV):
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
ResearchGate:
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
</td> </tr>
<tr>
<td>
Book Title: Tourism, Economy and Environment: New Trends and research
perspectives.
* Chapter Title: Indoor drones for the creative industries: the importance of identifying needs and communication strategies for new product development.
* Pages: 71-84
* Authors: Segarra-Oña MV, de-Miguel-Molina B, Santamarina-Campos V, de-Miguel-Molina M.
* Editors: Ferrari G, Garau G, Mondéjar-Jiménez J.
* Publisher: Chartridge Books Oxford
* ISBN/EAN: 1911033328/9781911033325
* Year: 2017
* Reference to the H2020 project and funding: yes, in acknowledgements (page 83)
</td>
<td>
Research gate: _https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from Dec 2017
Check the editor’s policy: January 2019
Riunet (UPV):
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Book Title: Derecho de los drones
* Chapter Title: Sujetos y políticas regulatorias de la Unión Europea sobre drones.
* Authors: Santamarina-Campos V, de-MiguelMolina M.
* Publisher: Wolters Kluver
* Pages: 87-107
* ISBN: 978-84-9020-763-5
* Year: 2018
* Reference to the H2020 project and funding: yes, in page 107
</td>
<td>
Web of the publisher: _https://tienda.wolterskluwer.es/p/derecho-de-losdrones_
Research gate: _https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from Nov 2018
Check the editor’s policy: December 2019
Riunet (UPV):
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
ResearchGate:
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
</td> </tr>
<tr>
<td>
Book Title: Derecho de los drones
* Chapter Title: El mercado de los drones y sus servicios en Europa.
* Authors: de-Miguel-Molina B, de-Miguel-Molina M.
* Publisher: Wolters Kluver
* Pages: 59-86
* ISBN: 978-84-9020-763-5
* Year: 2018
* Reference to the H2020 project and funding: yes, in page 85
</td>
<td>
Web of the publisher: _https://tienda.wolterskluwer.es/p/derecho-de-losdrones_
Research gate: _https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from Nov 2018
Check the editor’s policy: December 2019
Riunet (UPV):
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
ResearchGate:
1. Author version (open after embargo): Word document including information of the publication and link to the journal web.
2. Editorial version (closed)
</td> </tr>
<tr>
<td>
Book title: Distributed Computing and Artificial Intelligence, 15th International Conference. DCAI 2018. Advances in Intelligent Systems and Computing.
* Chapter Title: Intelligent Flight in indoor drones
* Authors: Tipantuña-Topanta GJ, Abad F, Mollá R, Poza-Luján JL, Posadas Yagüe JL.
* Editors: De La Prieta F., Omatu S., Fernández-Caballero A.
* Publisher: Springer
* Pages: 247-254
* DOI: https://doi.org/10.1007/978-3-319-94649-8_30
* ISBN: 978-3-319-94648-1
* Year: 2018
* Reference to the H2020 funding: yes, in acknowledgements
</td>
<td>
Link to the publisher:
_https://link.springer.com/chapter/10.1007/978-3-319-94649-8_30_
Embargo: from July 2018
Check the editor’s policy: July 2019
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Book title: Distributed Computing and Artificial Intelligence, 15th International Conference. DCAI 2018. Advances in Intelligent Systems and Computing.
* Chapter Title: Distributed system integration driven by tests.
* Authors: Poza-Luján JL, Posadas-Yagüe JL, Kröner S.
* Editors: De La Prieta F., Omatu S., Fernández-Caballero A.
* Publisher: Springer
* Pages: 221-229
* DOI: https://doi.org/10.1007/978-3-319-94649-8_27
* ISBN: 978-3-319-94648-1
* Year: 2018
* Reference to the H2020 funding: yes, in acknowledgements
</td>
<td>
Link to the publisher:
_https://link.springer.com/chapter/10.1007/978-3-319-94649-8_27_
Embargo: from July 2018
Check the editor’s policy: July 2019
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Book title: Highlights of Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection. PAAMS 2018. Communications in Computer and Information Science.
* Chapter Title: Virtual Environment Mapping Module to Manage Intelligent Flight in an Indoor Drone.
* Authors: Tipantuña-Topanta GJ., Abad F., Mollá R., Posadas-Yagüe JL., Poza-Lujan JL.
* Editors: Bajo J. et al.
* Publisher: Springer
* Pages: 82-89
* DOI: https://doi.org/10.1007/978-3-319-94779-2_8
* ISBN: 978-3-319-94778-5
* Year: 2018
* Reference to the H2020 funding: yes, in acknowledgements
</td>
<td>
Link to the publisher:
_https://link.springer.com/chapter/10.1007/978-3-319-94779-2_8_
Embargo: from June 2018
Check the editor’s policy: July 2019
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr> </table>
Table 12. Advances in open access for Peer reviewed scientific papers
<table>
<tr>
<th>
**Papers**
</th>
<th>
**Allocation**
</th> </tr>
<tr>
<td>
**Journal** : International Journal of Micro Air Vehicles
**Paper** : Ethics for civil indoor drones: a qualitative analysis
**Authors** : de-Miguel-Molina, María; Santamarina
Campos, Virginia; Carabal-Montagud, Maria-Angeles; de-Miguel-Molina, Blanca
**Year** : 2018
**Reference to the H2020 project and funding** : yes, on page 11 (funding)
DOI: _https://doi.org/10.1177/1756829318794004_
Keywords: Civil drones, safety, security, privacy, European policies, ethics
Pages: 1-12
Size of the doc: 802 KB
File format: .pdf
Version: accepted version
Language: English
Embargo: 1 year
Underlying data: NO
</td>
<td>
**Open access at the journal website** :
_http://journals.sagepub.com/doi/full/10.1177/1756829318794004_
Research gate:
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
</td> </tr>
<tr>
<td>
**Journal** : World Academy of Science, Engineering and Technology,
International Science Index 137, _International Journal of Mechanical,
Aerospace, Industrial, Mechatronic and Manufacturing_
_Engineering_ , 12(5), 519 - 523.
**Paper** : Development of an Indoor Drone Designed for the Needs of the
Creative Industries.
</td>
<td>
**Open access at the journal website** :
_https://waset.org/Publications/Mechanical-and-Mechatronics-Engineering_
Link: _https://waset.org/Publication/development-of-an-indoor-drone-designed-for-the-needs-of-the-creative-industries/10009012_
</td> </tr> </table>
<table>
<tr>
<th>
**Authors** : Campos, V., de-Miguel-Molina, M., Kröner, S., de-Miguel-Molina, B. (2018).
**Year** : 2018
**Reference to the H2020 project and funding** : Yes, in acknowledgements, page 522.
DOI: Digital Article Identifier (DAI): urn:dai:10.1999/1307-6892/10009012
Keywords: Virtual reality, 3D reconstruction, indoor positioning system, UWB, RPAS, aerial film, intelligent navigation, advanced safety measures, creative industries
Pages: 5
Size of the doc: 244 KB
File format: .pdf
Version: accepted version
Language: English
Embargo: 1 year
Underlying data: NO
</th>
<th>
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
</th> </tr>
<tr>
<td>
**Journal** : World Academy of Science, Engineering and Technology, International Science Index 137, _International Journal of Mechanical, Aerospace, Industrial, Mechatronic and Manufacturing Engineering_ , 12(5), 465 - 471.
**Paper** : Application of Design Thinking for Technology Transfer of Remotely Piloted Aircraft Systems for the Creative Industry.
**Authors** : Campos, V., de-Miguel-Molina, M., de-Miguel-Molina, B., Montagud, M.
**Year** : 2018
**Reference to the H2020 project and funding** : Yes, in acknowledgements, page 470.
DOI: Digital Article Identifier (DAI): urn:dai:10.1999/1307-6892/10008957
Keywords: Design thinking, design for effectiveness, methodology, active toolkit, storyboards, storytelling, PAR, focus group, innovation, RPAS, indoor drone, robotics, TRL, aerial film, creative industries, end-users.
Pages: 7
Size of the doc: 298 KB
File format: .pdf
Version: accepted version
Language: English
Embargo: 1 year
Underlying data: NO
</td>
<td>
**Open access at the journal website** :
_https://waset.org/Publications/Mechanical-and-Mechatronics-Engineering_
Link: _https://waset.org/Publication/application-of-design-thinking-for-technology-transfer-of-remotely-piloted-aircraft-systems-for-the-creative-industry/10008957_
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
</td> </tr>
<tr>
<td>
Journal: World Academy of Science, Engineering and Technology, International
Science Index 137, _International Journal of Mechanical, Aerospace,
Industrial, Mechatronic and Manufacturing_
_Engineering_ , 12(5), 542 - 545.
Paper: Regulation, Co-Regulation and Self-
Regulation of Civil Unmanned Aircrafts in Europe.
Authors: de-Miguel-Molina, M., Campos, V., Oña, M., de-Miguel-Molina, B.
Year: 2018
Reference to the H2020 project and funding: Yes, in acknowledgements, page 548.
DOI: Digital Article Identifier (DAI): urn:dai:10.1999/1307-6892/10008841
Keywords: Ethics, regulation, safety, security.
Pages: 4
Size of the doc: 163 KB
File format: .pdf
Version: accepted version
Language: English
Embargo: 1 year
Underlying data: NO
</td>
<td>
**Open access at the journal website** :
_https://waset.org/Publications/Mechanical-and-Mechatronics-Engineering_
Link: _https://waset.org/Publication/regulation-co-regulation-and-self-regulation-of-civil-unmanned-aircrafts-in-europe/10008841_
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
Research gate: link to publisher at
_https://www.researchgate.net/project/TechnologyTransfer-Of-Remotely-Piloted-
Aircraft-Systems-Rpas-For-_
_The-Creative-Industry_
</td> </tr>
<tr>
<td>
Journal: (to be sent in January 2019)
Paper: User involvement in the design of a new RPAS for creative industries
Authors: de-Miguel-Molina, B., de-Miguel-Molina, M., Santamarina-Campos, V.,
Segarra-Oña, M.V.
Year: 2019
Reference to the H2020 project and funding: Yes, in acknowledgements.
DOI:
Keywords:
Pages:
Size of the doc:
File format: .pdf
Version:
Language: English
Embargo: 1 year
Underlying data: YES
</td>
<td>
Version in the journal webpage
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
Raw data: will be available at the Zenodo account
</td> </tr> </table>
Table 13. Advances in open access of dissemination results: Participation in
international conferences
<table>
<tr>
<th>
**Participation in international conferences**
</th>
<th>
**Allocation**
</th> </tr>
<tr>
<td>
Conference: Innovative Approaches to Tourism and Leisure.
Fourth International Conference IACuDiT, Athens 2017.
* Title: Importance of Indoor Aerial Filming for Creative Industries (CIs): Looking Towards the Future
* Authors: Virginia Santamarina-Campos, Blanca de-Miguel-Molina, Marival Segarra-Oña, María de-Miguel-Molina
* Publisher: Springer
* Reference to the H2020 funding: yes, it can be checked at the link of the publisher
* Year: 2018
</td>
<td>
Link of the publisher:
_https://link.springer.com/chapter/10.1007/978-3319-67603-6_4_
Research gate:
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from Dec 2017
Check the editor’s policy: January 2019
</td> </tr>
<tr>
<td>
Conference: 4th International Multidisciplinary Scientific
Conference on Social Sciences and Arts SGEM 2017
* Title: Transferring technology from the Remotely Piloted Aircraft Systems (RPAS) industry for the creative industry: Why and What for?
* Authors: Santamarina Campos, Virginia; Segarra-Oña,
Marival; Miguel Molina, María Blanca De; de-Miguel-Molina, María
* Publisher: SGEM
* Reference to the H2020 funding: Yes, in acknowledgments of the paper
* Year: 2017
* Publication name: SGEM2017 Conference Proceedings
* DOI: 10.5593/sgemsocial2017/62/S23.013
* ISBN: 978-619-7408-24-9
* Book 6, Vol. 2, pp. 107-114
</td>
<td>
Link of the publisher: _https://www.sgemsocial.org/index.php/call-for-papers/impact-factor-2_
_https://sgemworld.at/ssgemlib/spip.php?article4990_
Research gate:
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Check (Dec 2018): inclusion in the Web of Science expected in the next months, as with previous Conference Proceedings.
Check the editor’s policy once it is accessible through the Web of Science.
</td> </tr>
<tr>
<td>
Conference: 4th International Multidisciplinary Scientific
Conference on Social Sciences and Arts SGEM 2017
* Title: What are the creative Industries needs for indoor filming? How to implicate the customer from the beginning of the NPD
* Authors: Segarra-Oña, Marival; Miguel Molina, María
Blanca De; Santamarina Campos, Virginia; de-Miguel-Molina, María
* Publisher: SGEM
* Reference to the H2020 funding: Yes, in acknowledgments of the paper
* Year: 2017
* Publication name: SGEM2017 Conference Proceedings
* DOI: 10.5593/sgemsocial2017/41/S16.036
* ISBN: 978-619-7408-23-2
* Book 4, Vol. 1, pp. 285-292
</td>
<td>
Link of the publisher: _https://www.sgemsocial.org/index.php/call-for-papers/impact-factor-2_
_https://sgemworld.at/ssgemlib/spip.php?article4699_
Research gate:
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Check (Dec 2018): inclusion in the Web of Science expected in the next months, as with previous Conference Proceedings.
Check the editor’s policy once it is accessible through the Web of Science.
</td> </tr>
<tr>
<td>
Conference: ICUAS 2018 - International Conference on Unmanned Aircraft
Systems.
* Title: Development of an Indoor Drone Designed for the Needs of the Creative Industries.
* Authors: Campos, V., Molina, M., Kröner, S., Molina, B.
* Publisher: WASET (https://waset.org/Publications)
* Reference to the H2020 funding: yes, in acknowledgments of the paper. It can be checked at the link of the publisher
(https://waset.org/abstracts/90557)
* Year: 2018
Location: The Netherlands
</td>
<td>
Link of the publisher: in open access
(https://waset.org/conference/2018/05/amsterdam/program)
Research gate:
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Conference: ICUAS 2018 - International Conference on Unmanned Aircraft
Systems.
* Title: Application of Design Thinking for Technology Transfer of Remotely Piloted Aircraft Systems for the Creative Industry.
* Authors: Campos, V., Molina, M., Molina, B., Montagud, M.
* Publisher: WASET (https://waset.org/Publications)
* Reference to the H2020 funding: Yes, in acknowledgments of the paper. It can be checked at the link of the publisher
(https://waset.org/abstracts/91486)
* Year: 2018
Location: The Netherlands
</td>
<td>
Link of the publisher: in open access
(https://waset.org/conference/2018/05/amsterdam/program)
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
**Version deposited in** :
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Conference: ICUAS 2018 - International Conference on Unmanned Aircraft
Systems.
* Title: Regulation, Co-Regulation and Self-Regulation of Civil Unmanned Aircrafts in Europe.
* Authors: Molina, M., Campos, V., Oña, M., Molina, B.
* Publisher: WASET (https://waset.org/Publications)
* Reference to the H2020 funding: Yes, in acknowledgments of the paper. It can be checked at the link of the publisher
( _https://waset.org/abstracts/75761_ )
* Year: 2018
Location: The Netherlands
</td>
<td>
Link of the publisher: in open access
(https://waset.org/conference/2018/05/amsterdam/program)
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
**Version deposited in**:
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr> </table>
<table>
<tr>
<td>
Conference: Business meets Technology
* Title: Design Thinking, Business Model Canvas and Intellectual Property Rights. Applying management tools to the AiRT project
* Authors: Blanca de-Miguel-Molina, María de-Miguel-Molina, Marival Segarra-Oña, Virginia Santamarina-Campos
* Publisher: University of Applied Sciences Ansbach / Shaker Verlag
* Publication title: Business Meets Technology. Proceedings of the 1st International Conference of the University of Applied Sciences Ansbach 25th to 27th January 2018.
* Editors: Barbara E. Hedderich, Michael S.J. Walter, Patrick M. Gröner – Pages: 108-111.
* Reference to the H2020 funding: Yes, in acknowledgments of the paper.
* Year: 2018
* ISBN: 978-3-8440-6170-3
* Location: Germany
</td>
<td>
Link of the publisher:
https://www.shaker.de/de/content/catalogue/index.asp?lang=&ID=8&ISBN=978-3-8440-6170-3
Research gate: link to publisher at
_https://www.researchgate.net/project/Technology-Transfer-Of-Remotely-Piloted-Aircraft-Systems-Rpas-For-The-Creative-Industry_
Embargo: from Sept. 2018
Check the editor’s policy: Oct. 2019
**Version deposited in**:
Riunet (UPV):
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
ResearchGate:
a) Author version (open after embargo): Word document including information of the publication and link to the journal web.
b) Editorial version (closed)
</td> </tr>
<tr>
<td>
Conference: XXXVIII Jornadas de Automática
* Title: Arquitectura distribuida para el control autónomo de drones en interior
* Authors: Poza-Luján JL, Posadas-Yagüe JL, Tipantuña-Topanta GJ, Abad F, Mollá R.
* Publisher: Servicio de Publicaciones de la Universidad de Oviedo
* Reference to the H2020 funding: yes, in acknowledgements (page 934)
* Year: 2017
* Location: Oviedo
</td>
<td>
Link to the publisher: open access at _http://ja2017.es/actas.html_ ;
_http://ja2017.es/files/ActasJA2017.pdf_
_http://digibuo.uniovi.es/dspace/handle/10651/46949_
_http://digibuo.uniovi.es/dspace/bitstream/10651/46949/1/postprintUPV.pdf_
</td> </tr>
<tr>
<td>
Conference: 15th International Conference on Distributed Computing and
Artificial Intelligence.
* Title: Intelligent Flight in indoor drones
* Authors: Tipantuña-Topanta GJ, Abad F, Mollá R, Poza-Luján JL, Posadas-Yagüe JL.
* Reference to the H2020 funding: yes, in acknowledgements
* Publisher: Springer
* DOI: https://doi.org/10.1007/978-3-319-94649-8_30
* ISBN: 978-3-319-94648-1
* Year: 2018
* Location: Toledo
</td>
<td>
Link to the publisher:
https://link.springer.com/chapter/10.1007/978-3-319-94649-8_30
</td> </tr>
<tr>
<td>
Conference: 15th International Conference on Distributed Computing and
Artificial Intelligence.
* Title: Distributed system integration driven by tests.
* Authors: Poza-Luján JL, Posadas-Yagüe JL, Kröner S.
* Reference to the H2020 funding: yes, in acknowledgements
* Publisher: Springer
* DOI: https://doi.org/10.1007/978-3-319-94649-8_27
* ISBN: 978-3-319-94648-1
* Year: 2018
* Location: Toledo
</td>
<td>
Link to the publisher:
https://link.springer.com/chapter/10.1007/978-3-319-94649-8_27
</td> </tr>
<tr>
<td>
Conference: 16th International Conference on Practical
Applications of Agents and Multi-Agent Systems
* Title: Virtual environment mapping module to manage intelligent flight in an indoor drone.
* Authors: Tipantuña-Topanta GJ, Abad F, Mollá R, Posadas-Yagüe JL, Poza-Luján JL.
* Reference to the H2020 funding: yes, in acknowledgements
* Publisher: Springer
* DOI: https://doi.org/10.1007/978-3-319-94779-2_8
* ISBN: 978-3-319-94778-5
* Year: 2018
* Location: Toledo
</td>
<td>
Link to the publisher:
https://link.springer.com/chapter/10.1007/978-3-319-94779-2_8
</td> </tr> </table>
Table 14. Advances in open access of dissemination results: Deliverables
<table>
<tr>
<th>
**Public Deliverables**
</th>
<th>
**Allocation**
</th> </tr>
<tr>
<td>
D1.3. Final Public Report
</td>
<td>
a) Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
b) Zenodo: will be available at _https://zenodo.org/communities/airt_project/search?page=1&size=20_
</td> </tr>
<tr>
<td>
D2.1. CI’s needs for indoor filming using RPAs
</td>
<td>
1. Is a public deliverable, available in open access at: _http://airt.webs.upv.es/the-project/deliverables/_
2. Zenodo: _https://zenodo.org/record/1258195#.W9YIWKdDm3c_
</td> </tr>
<tr>
<td>
D2.2. Ethical security and safety issues
</td>
<td>
1. Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Zenodo: _https://zenodo.org/record/1258202#.W9YIkKdDm3c_
</td> </tr>
<tr>
<td>
D5.4. User guide
</td>
<td>
1. Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Zenodo: _https://zenodo.org/record/1258206#.W9YIu6dDm3c_
</td> </tr>
<tr>
<td>
D5.5. Report on results of
demonstration
</td>
<td>
a) Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
b) Zenodo: _https://zenodo.org/record/1258213#.W9YI-KdDm3c_
</td> </tr>
<tr>
<td>
D6.1. Project communication and
dissemination plan
</td>
<td>
1. Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Zenodo: _https://zenodo.org/record/1258217#.W9YJFadDm3c_
</td> </tr>
<tr>
<td>
D6.2. Communication materials
</td>
<td>
1. Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Zenodo: _https://zenodo.org/record/1258219#.W9YJOqdDm3c_
</td> </tr>
<tr>
<td>
D6.4. Report on workshop conclusions
</td>
<td>
1. Is a public deliverable, the link to the publisher is available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Springer: _https://rd.springer.com/book/10.1007/978-3-319-95261-1_
</td> </tr>
<tr>
<td>
D6.5. Policy Book
</td>
<td>
a) Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
b) Springer (see Table 11)
</td> </tr>
<tr>
<td>
D7.2. Open Research Data Pilot (version 1)
</td>
<td>
1. Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
2. Zenodo: _https://zenodo.org/record/1258225#.W9YJVKdDm3c_
</td> </tr>
<tr>
<td>
D7.2. Open Research Data Pilot (final version)
</td>
<td>
a) Is a public deliverable, available in open access at: http://airt.webs.upv.es/the-project/deliverables/
b) Zenodo: will be available at _https://zenodo.org/communities/airt_project/search?page=1&size=20_
</td> </tr> </table>
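Where a reader prefers to harvest the deliverable metadata programmatically rather than follow the links in Table 14, the public Zenodo REST API can be queried. The following is a minimal sketch, assuming the third-party `requests` package is installed; it uses the D2.1 record ID from the table above, and the exact metadata fields returned may vary with Zenodo's API version.

```python
# Minimal sketch: fetch metadata for one of the Zenodo records listed in
# Table 14 via the public Zenodo REST API (https://zenodo.org/api/records/).
import requests

def fetch_zenodo_record(record_id: int) -> dict:
    """Return the JSON metadata of a public Zenodo record."""
    url = f"https://zenodo.org/api/records/{record_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = fetch_zenodo_record(1258195)  # D2.1 record from Table 14
    meta = record["metadata"]              # field names per Zenodo's schema
    print(meta["title"], "-", meta["publication_date"])
```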
**Glossary**
<table>
<tr>
<td>
**Open Access**
</td>
<td>
Open access to scientific publication and research data in the wider context of dissemination and exploitation
</td> </tr>
<tr>
<td>
**AiRT**
</td>
<td>
Arts indoor technology transfer
</td> </tr>
<tr>
<td>
**RPAS**
</td>
<td>
Remotely Piloted Aircraft Systems
</td> </tr>
<tr>
<td>
**CIs**
</td>
<td>
Creative Industries
</td> </tr>
<tr>
<td>
**CORDIS**
</td>
<td>
Community Research and Development Information Service
</td> </tr>
<tr>
<td>
**FAIR**
</td>
<td>
Findable, Accessible, Interoperable, Reusable
</td> </tr>
</table>
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0778_LIQUEFACT_700748.md
|
# Executive Summary
Recent events have demonstrated that Earthquake Induced Liquefaction Disasters
(EILDs) are responsible for tremendous structural damage and fatalities,
causing in some cases half of the economic loss caused by earthquakes. With
the causes of liquefaction being substantially acknowledged, it is important
to recognize the factors that contribute to its occurrence, to estimate
hazards, then to practically implement the most appropriate mitigation
strategy considering the susceptibility of the site to liquefaction and the
type and size of the structure. The LIQUEFACT project addresses the mitigation
risks to EILD events in European communities with a holistic approach. The
project deals not only with the resistance of structures to EILD events, but
also with the resilience of the collective urban community in relation to
their quick recovery from an occurrence. The LIQUEFACT project sets out to
achieve a more comprehensive understanding of EILDs, the applications of the
mitigation techniques, and the development of more appropriate techniques
tailored to each specific scenario, for both European and worldwide
situations.
# Introduction, Goal and Purpose of this document
The LIQUEFACT project is a collaborative project involving 11 partners from 6
different countries
(UK, Italy, Portugal, Slovenia, Norway and Turkey) including representation
from 4 EU Member States and is organised in three phases (Scoping, Research
and Implementation) across nine work packages (WPs), each of which
encapsulates a coherent body of work. The first 7 WPs highlight the major
technical activities that will take place throughout the project and have been
scheduled to correlate with one another. The final 2 WPs (WP8 and WP9) are the
continuous activities which will take place throughout the duration of the
project.
In order to ensure the smooth running of the project for all project partners,
management structures and procedures are necessary to facilitate effective and
efficient working practices. Following the management information included in
the Grant Agreement (GA) and its annexes, the Consortium Agreement (CA),
Commission rules as contained in the Guidance Notes and organisational Risk
Management policies and procedures including Corporate Risk Strategy, Policy
and Guidance and Health and Safety Policies this manual highlights important
procedures to be carried out in order to monitor, coordinate and evaluate the
management activities of the project.
Goal: **This document aims to aid the LIQUEFACT project consortium to meet
their responsibilities regarding research data quality, sharing and security
through the provision of an initial data management plan in accordance with
the Horizon 2020 Guidelines on Open Access.**
# Admin Details
**Project Name:** LIQUEFACT Data Management Plan
**Project Identifier:** LIQUEFACT
**Grant Title:** 700748
**Principal Investigator / Researcher:** Professor Keith Jones
**Project Data Contact:** Professor Keith Jones, +44(0) 1245 683907.
[email protected]
**Description:** Assessment and mitigation of liquefaction potential across
Europe: a holistic approach to protect structures/ infrastructure for improved
resilience to earthquake-induced liquefaction disasters.
**Funder:** European Commission (Horizon 2020)
**Institution:** Anglia Ruskin University
<table>
<tr>
<th>
**Task**
</th>
<th>
**Data**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
T1.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Questionnaire
</td>
<td>
Qualitative and Quantitative
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Glossary/Lexicon
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
Ground characterization; Geophysical prospecting; Soil Geotechnical and
Geophysical tests; Ground investigations; Lab testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T2.6
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Numerical modelling; Experimental data.
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T3.2
</td>
<td>
Field trials and pilot testing; Simulations; Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Soil characterization (Mechanics)
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.2
</td>
<td>
Centrifugal Modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.3
</td>
<td>
Field trials; Lab and Field testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.4
</td>
<td>
Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
Individual and Community resilience measures/metrics
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
Cost/Benefit Models
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr> </table>
# Data Summary
* Quantitative and qualitative data will be collected in line with the overarching aims and objectives of the LIQUEFACT project to help deliver a holistic approach to the protection of structures and infrastructure and to improve community resilience to earthquake induced liquefaction disasters across Europe.
* It is important to recognise the opportunity for mitigation strategies to help aid protection for both people, places and communities through a more comprehensive understanding of Earthquake Induced Liquefaction Disasters (EILDs).
* Data collection will aid the development and application of techniques, applicable across European and global situations.
* Site-specific data collection at differing case study sites across Europe will be undertaken alongside data gathering from the academic and community fields to better inform decision making.
* It is hoped that this data will be useful to a wide-ranging, spatially and temporally diverse audience across the policy-practitioner interface.
# FAIR Data
## 2.1
* It is anticipated that data will be made available in varying forms for varying uses
* Identification mechanisms will be utilised to improve the usability of the data within differing contexts
* Data cleansing will be considered in order to present clear and considered formatting
* Versions, Keywords and Digital Object Identifiers will be explored in principle to aid the applicability of data
* Anglia Ruskin University adheres to the Research Data Management Guidelines;
* encouraging scientific enquiry and debate and increase the visibility of research
* encouraging innovation and the reuse of existing datasets in different ways, reducing costs by removing the need to collect duplicate research data
* encouraging collaboration between data users and data creators
* maximising transparency and accountability, and to enable the validation and verification of research findings and methods
## 2.2
* Appropriate data will be made available through the use of an online portal or reputable repository, details of which are yet to be confirmed but may include Zenodo or _www.Re3data.org_
* Generic software tools will be predominantly used, including MS Office and SPSS
* A Technical Data Report will be provided for each data set through the creation and statement of the aims, objectives and methodology
## 2.3
* Text mining tools and methods will help external actors to extract common and relevant data
* Commonly used ontologies will be utilised
* A glossary of terms will be collated by project partners
* Data files will be saved in an easily-reusable format, commonly used by the research community, including the following format choices: .txt; .xml; .html; .rtf; .csv; .SPSSportable; .tif; .jpeg; .png (a conversion sketch follows below)
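To illustrate the re-use intent behind these format choices, here is a minimal sketch, assuming the third-party `pyreadstat` and `pandas` packages are installed, of converting an SPSS portable file into plain CSV plus a simple variable-label listing. The file names are hypothetical placeholders, not project artefacts.

```python
# Minimal sketch: re-save an SPSS portable (.por) file in plain-text formats
# so the data remain reusable without proprietary software.
import pyreadstat

def export_spss_portable(src: str, dst_csv: str) -> None:
    """Read an SPSS portable file and re-save it as UTF-8 CSV."""
    df, meta = pyreadstat.read_por(src)        # data frame + variable metadata
    df.to_csv(dst_csv, index=False, encoding="utf-8")
    # Keep the variable labels alongside the CSV as simple documentation.
    with open(dst_csv + ".labels.txt", "w", encoding="utf-8") as fh:
        for name, label in zip(meta.column_names, meta.column_labels):
            fh.write(f"{name}: {label or ''}\n")

# Hypothetical file names for illustration only.
export_spss_portable("questionnaire_t1_2.por", "questionnaire_t1_2.csv")
```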
## 2.4
* Data will be stored either on each institution’s back-up server or on a separate data storage device that is kept in a secure and fireproof location, separate from the main data point
* Data will be released no later than the publication of findings and within three years of project completion and in line with the commercial sensitivity of the data
* Primary data will be securely retained, in an accessible format, for a minimum of five years after project completion
# Allocation of Resources
* At this stage costs have not been accounted for in the H2020 LIQUEFACT project budget.
* Data Management Plans will be regularly updated by the Project Coordinator with data collection, collation and usability the responsibility of all partners involved in the project.
* By providing this data it is anticipated that future utilisation will contribute to the long term success of the LIQUEFACT project and enhance EILD improvements across and between countries and organisations
# Data Security
This research aims to follow these principles;
* Avoid using personal data wherever possible.
* If the use of personal data is unavoidable, consider partially or fully anonymising the information to obscure the identity of the individuals concerned.
* Use secure shared drives to store and access personal data and sensitive business information, ensuring that only those who need to use this information have access to it.
* Use remote access facilities to access personal data and sensitive business information on the central server instead of transporting it on mobile devices and portable media or using third party hosting services
* Personal equipment (such as home PCs or personal USB sticks) or third party hosting services (such as Google Mail) should not be used for high or medium risk personal data or business information.
* If email is used to send personal data or business information outside the consortium environment, it should be encrypted. If you are sending unencrypted personal data or business information to another email account, indicate in the email title that the email contains sensitive information so that the recipient can exercise caution about where they open it.
* Do not use high or medium risk personal data or business information in public places. When accessing email remotely, exercise caution to ensure that you do not download unencrypted high or medium risk personal data or business information to an insecure device.
* Consider the physical security of personal data or business information, for example use locked filing cabinets/cupboards for storage.
* The fifth principle of the Data Protection Act 1998 states that personal data processed for any purpose or purposes should not be kept for longer than is necessary for that purpose or purposes. It is therefore important to implement retention and disposal policies so that personal data and sensitive business information is not kept for longer than necessary.
# Ethical Aspects
* Ethical considerations in making research data publicly available are clearly designed and discussed by Anglia Ruskin University regarding data sharing throughout the entire data cycle.
* Ensuring compliance with the Data Protection Act 1998.
* Informed consent will be obtained from all participants for their data to be shared/made publicly available. Providing participants with sufficient information to make an informed decision regarding involvement
* Data will always be anonymised with examples of direct or sensitive identifiers removed
* The user (licensor) will be given due credit for work when it is distributed, displayed, performed, or used to derive a new work.
# Other Procedures
* Data Protection Act 1998
* National Data Protection Laws
* Anglia Ruskin University Research Training, Ethics and Governance as part of the Research
Policy and Support group within the Research and Innovation Development Office
* Anglia Ruskin University's Research, Innovation and Knowledge Exchange strategy 2016-2017
* DMP Online - _https://dmponline.dcc.ac.uk/_
* Zenodo - _https://zenodo.org/_
* OpenAIRE - _https://www.openaire.eu/_
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0779_LIQUEFACT_700748.md
|
# Executive Summary
Recent events have demonstrated that Earthquake Induced Liquefaction Disasters
(EILDs) are responsible for tremendous structural damage and fatalities,
causing in some cases half of the economic loss caused by earthquakes. With
the causes of liquefaction being substantially acknowledged, it is important
to recognize the factors that contribute to its occurrence, to estimate
hazards, then to practically implement the most appropriate mitigation
strategy considering the susceptibility of the site to liquefaction and the
type and size of the structure. The LIQUEFACT project addresses the mitigation
of risks to EILD events in European communities with a holistic approach. The
project deals not only with the resistance of structures to EILD events, but
also with the resilience of the collective urban community in relation to
their quick recovery from an occurrence. The LIQUEFACT project sets out to
achieve a more comprehensive understanding of EILDs, the applications of the
mitigation techniques, and the development of more appropriate techniques
tailored to each specific scenario, for both European and worldwide
situations.
# Introduction, Goal and Purpose of this document
The LIQUEFACT project is a collaborative project involving 11 partners from
six different countries
(UK, Italy, Portugal, Slovenia, Norway and Turkey) including representation
from four EU Member States and is organised in three phases (Scoping, Research
and Implementation) across nine work packages (WPs), each of which
encapsulates a coherent body of work. The first seven WPs highlight the major
technical activities that will take place throughout the project and have been
scheduled to correlate with one another. The final two WPs (WP8 and WP9) are
the continuous activities which will take place throughout the duration of the
project.
In order to ensure the smooth running of the project for all project partners,
management structures and procedures are necessary to facilitate effective and
efficient working practices. Following the management information included in
the Grant Agreement (GA) and its annexes, the Consortium Agreement (CA),
Commission rules as contained in the Guidance Notes and organisational Risk
Management policies and procedures including Corporate Risk Strategy, Policy
and Guidance and Health and Safety Policies this manual highlights important
procedures to be carried out in order to monitor, coordinate and evaluate the
management activities of the project.
Goal: **This document aims to aid the LIQUEFACT project consortium to meet
their responsibilities regarding research data quality, sharing and security
through the provision of an initial data management plan in accordance with
the Horizon 2020 Guidelines on Open Access.**
# Admin Details
**Project Name:** LIQUEFACT Data Management Plan - DMP title
**Project Identifier:** LIQUEFACT
**Grant Title:** 700748
**Principal Investigator / Researcher:** Professor Keith Jones
**Project Data Contact:** Professor Keith Jones, +44(0) 1245 683907.
[email protected]
**Description:** Assessment and mitigation of liquefaction potential across
Europe: a holistic approach to protect structures/ infrastructure for improved
resilience to earthquake-induced liquefaction disasters.
**Funder:** European Commission (Horizon 2020)
**Institution:** Anglia Ruskin University
<table>
<tr>
<th>
**Task**
</th>
<th>
**Data**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
T1.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Questionnaire
</td>
<td>
Qualitative and Quantitative
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Glossary/Lexicon
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
Ground characterization; Geophysical prospecting; Soil Geotechnical and
Geophysical tests; Ground investigations; Lab testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T2.6
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Numerical modelling; Experimental data.
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T3.2
</td>
<td>
Field trials and pilot testing; Simulations; Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Soil characterization (Mechanics)
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.2
</td>
<td>
Centrifugal Modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.3
</td>
<td>
Field trials; Lab and Field testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.4
</td>
<td>
Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
Individual and Community resilience measures/metrics
</td>
<td>
Qualitative and Quantitative
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
Cost/Benefit Models
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr> </table>
# Data Summary
* Quantitative and qualitative data will be collected in line with the overarching aims and objectives of the LIQUEFACT project; to help deliver a holistic approach to the protection of structures, infrastructure and resilience to Earthquake Induced Liquefaction Disasters (EILDs) across Europe.
* It is important to recognise the opportunity for mitigation strategies to help aid protection for both people, places and communities through a more comprehensive understanding of EILDs.
* Data collection will aid the development and application of techniques, applicable across European and global situations.
* Site specific data collection at differing case study sites across Europe will be undertaken alongside data gathering from the academic and community fields to better inform decision making.
* It is hoped that this data will be useful to a wide-ranging, spatially and temporally diverse audience across the policy-practitioner interface.
# FAIR Data
## 2.1
* Open access will be provided to all scientific publications in line with the guidance provided by the Commission in their letter dated 27 March 2017 (The open access to publications obligations in Horizon 2020).
* Self-archiving through suitable repositories within six months of publication (12 months for social science and humanities publications); or Open access publishing on the publisher/journal website.
* It is anticipated that data will be made available in varying forms for varying uses.
* Identification mechanisms will be utilised to improve the usability of the data within differing contexts.
* Data cleansing will be considered in order to present clear and considered formatting.
* Versions, Keywords and Digital Object Identifiers will be explored in principle to aid the applicability of data.
* Anglia Ruskin University adheres to the Research Data Management Guidelines;
* encouraging scientific enquiry and debate and increase the visibility of research.
* encouraging innovation and the reuse of existing datasets in different ways, reducing costs by removing the need to collect duplicate research data.
* encouraging collaboration between data users and data creators.
* maximising transparency and accountability, and to enable the validation and verification of research findings and methods.
## 2.2
* Appropriate data will be made available through the use of an online portal or reputable repository, details of which are yet to be confirmed but may include the LIQUEFACT website ( _www.liquefact.eu_ ), Zenodo, or _www.Re3data.org_ .
* Generic software tools will be predominantly used including MS Office and SPSS.
* A Technical Data Report will be provided for each data set through the creation and statement of the aims, objectives and methodology.
## 2.3
* Text mining tools and methods will help external actors to extract common and relevant data.
* Commonly used ontologies will be utilised.
* A glossary of terms will be collated by project partners.
* Data files will be saved in an easily-reusable format, commonly used by the research community, including the following format choices: .txt; .xml; .html; .rtf; .csv; .SPSSportable; .tif; .jpeg; .png.
## 2.4
* Data will be stored either on each institution’s back-up server or on a separate data storage device that is kept in a secure and fireproof location, separate from the main data point.
* Data will be released no later than the publication of findings and within three years of project completion.
* Primary data will be securely retained, in an accessible format, for a minimum of five years after project completion.
# Allocation of Resources
* At this stage costs have not been accounted for in the H2020 LIQUEFACT project budget.
* Data Management Plans will be regularly updated by the Project Coordinator with data collection, collation and usability the responsibility of all partners involved in the project.
* By providing this data it is anticipated that future utilisation will contribute to the long term success of the LIQUEFACT project and enhance EILD improvements across and between countries and organisations.
# Data Security
This research aims to follow these principles;
* Avoid using personal data wherever possible.
* If the use of personal data is unavoidable, consider partially or fully anonymising the information to obscure the identity of the individuals concerned (see the sketch after this list).
* Use our secure shared drives to store and access personal data and sensitive business information, ensuring that only those who need to use this information have access to it.
* Use remote access facilities to access personal data and sensitive business information on the central server instead of transporting it on mobile devices and portable media or using third party hosting services.
* Personal equipment (such as home PCs or personal USB sticks) or third party hosting services (such as Google Mail) should not be used for high or medium risk personal data or business information.
* If email is used to send personal data or business information outside the university environment, it should be encrypted. If you are sending unencrypted personal data or business information to another university email account, indicate in the email title that the email contains sensitive information so that the recipient can exercise caution about where they open it.
* Do not use high or medium risk personal data or business information in public places. When accessing email remotely, exercise caution to ensure that you do not download unencrypted high or medium risk personal data or business information to an insecure device.
* Consider the physical security of personal data or business information, for example use locked filing cabinets/cupboards for storage.
* The fifth principle of the Data Protection Act 1998 states that personal data processed for any purpose or purposes should not be kept for longer than is necessary for that purpose or purposes. It is therefore important to implement our retention and disposal policies so that personal data and sensitive business information is not kept for longer than necessary.
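As an illustration of the partial-anonymisation principle above, the following is a minimal sketch, using the Python standard library only, that replaces direct identifiers with salted, non-reversible tokens before data leave the secure environment. The field names, values, and salt are hypothetical; in practice the salt must itself be stored as securely as the raw data.

```python
# Minimal sketch: pseudonymise direct identifiers with salted SHA-256 hashes
# so that records can be shared without exposing who provided them.
import hashlib

def pseudonymise(value: str, salt: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

SALT = "project-secret-salt"  # hypothetical; keep out of version control

record = {"name": "Jane Doe", "email": "[email protected]", "response": "Agree"}
safe_record = {
    key: pseudonymise(val, SALT) if key in ("name", "email") else val
    for key, val in record.items()
}
print(safe_record)  # identifiers replaced, survey response retained
```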
# Ethical Aspects
* Ethical considerations in making research data publicly available are clearly designed and discussed by Anglia Ruskin University regarding data sharing throughout the entire data cycle.
* Ensuring compliance with the Data Protection Act 1998.
* Informed consent will be obtained from all participants for their data to be shared/made publicly available. Providing participants with sufficient information to make an informed decision regarding involvement.
* Data will always be anonymised with examples of direct or sensitive identifiers removed.
* The user (licensor) will be given due credit for work when it is distributed, displayed, performed, or used to derive a new work.
# Other Procedures
* Data Protection Act 1998
* Anglia Ruskin University Research Training, Ethics and Governance as part of the Research
Policy and Support group within the Research and Innovation Development Office
* Anglia Ruskin University's Research, Innovation and Knowledge Exchange strategy 2016-2017
* DMP Online
* Zenodo
* OpenAIRE
|
https://phaidra.univie.ac.at/o:1140797
|
Horizon 2020
|
0780_MOSES_642258.md
|
**1\. Introduction**
## 1.1. Moses project
The background of the project, its concepts and the technologies are described
into the document in [AD1], “ _**Moses Project Grant Agreement and Annexes** _
”, Horizon 2020 Grant Agreement No. 642258\.
The main objective of Moses is “to put in place and demonstrate at the real
scale of application an information platform devoted to water procurement and
management agencies to facilitate planning of irrigation water resources”. To
achieve these goals, the MOSES project combines in an innovative and
integrated platform a wide range of data and technological resources: EO data,
probabilistic seasonal forecasting and numerical weather prediction, crop
water requirement and irrigation modelling, and an online GIS Decision Support
System. The following entities compose the Moses project Consortium:
1. Esri Italia Spa (Esri), Italy
2. Agenzia Regionale per la Prevenzione, l'Ambiente e l'Energia dell'Emilia-Romagna (ArpaeER), Italy
3. Agencia Estatal de Meteorologia (AEMET), Spain
4. Institutul National de Hidrologie Si Gospodarire a Apelor (INHGA), Romania
5. Administratia Nationala de Meteorologie R.A. (ANM), Romania
6. Alma Mater Studiorum - Università Di Bologna (UNIBO), Italy
7. Asociacion Feragua de Comunidades de Regantes de Andalucia (FER), Spain
8. Serco Belgium Sa (SERCO), Belgium
9. Technische Universiteit Delft (DUT), Netherlands
10. Universidad de Castilla - La Mancha (UCLM), Spain
11. Universite Chouaib Doukkali (UCD), Morocco
12. Agromet Srl (AM), Italy
13. Consorzio di Bonifica di Secondo Grado per il Canale Emiliano Romagnolo (CER), Italy
14. Aliara Agrícola Sl (ALI), Spain
15. Aryavarta Space Organization (ASO), India
16. Consorzio di Bonifica della Romagna (CBR), Italy
The project started on July 1, 2015, while the Kick-Off meeting took place in
Rome on July 14 and 15, 2015.
## 1.2. Purpose of the document
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy implemented by the members of the
consortium with regard to all the datasets generated within the project.
The document is organized as follows:
* Chapter 2 provides a brief description of the Release Candidate version of the MOSES DSS (web-site and platform);
* Chapter 3 describes in detail the main points about dataset management required by Annex 1 of [RD3]. Both the datasets used as input for the processing platform and the outputs of the system (MOSES products) will be analyzed. We will follow a dataset-by-dataset approach, as suggested in [RD3];
* Chapter 4 describes in detail the structure of the web services which expose REST interfaces to the datasets in Chapter 3;
* Finally, Chapter 5 provides details about the management of scientific publications resulting from the project, including the exploited datasets and tools.
This document has evolved during the lifespan of the project. New releases of
this document have been delivered when new dataset types were introduced or
data structures changed, in order to address design issues as they arose.
## 1.3. Definitions, acronyms and abbreviations
The following table lists acronyms and abbreviations used in this document.
<table>
<tr>
<th>
AD
</th>
<th>
Applicable Document
</th> </tr>
<tr>
<td>
CMS
</td>
<td>
Content Management System
</td> </tr>
<tr>
<td>
DA
</td>
<td>
Demonstration Area
</td> </tr>
<tr>
<td>
DSS
</td>
<td>
Decision Support System
</td> </tr>
<tr>
<td>
EASME
</td>
<td>
European Agency for Small and Medium Enterprises
</td> </tr>
<tr>
<td>
ECM
</td>
<td>
Early Season Crop Maps
</td> </tr>
<tr>
<td>
EO
</td>
<td>
Earth Observation
</td> </tr>
<tr>
<td>
GIS
</td>
<td>
Geographical Information System
</td> </tr>
<tr>
<td>
ISCM
</td>
<td>
In-Season Crop Maps
</td> </tr>
<tr>
<td>
IT
</td>
<td>
Information Technology
</td> </tr>
<tr>
<td>
LAI
</td>
<td>
Leaf Area Index
</td> </tr>
<tr>
<td>
NA
</td>
<td>
Not Applicable
</td> </tr>
<tr>
<td>
RD
</td>
<td>
Reference Document
</td> </tr>
<tr>
<td>
SF
</td>
<td>
Seasonal Forecast
</td> </tr>
<tr>
<td>
SWS
</td>
<td>
Synthetic Weather Series
</td> </tr>
<tr>
<td>
SYGMA
</td>
<td>
System for Grant MAnagement
</td> </tr>
<tr>
<td>
UAA
</td>
<td>
Utilized Agricultural Area
</td> </tr>
<tr>
<td>
WBS
</td>
<td>
Work Breakdown Structure
</td> </tr>
<tr>
<td>
WP
</td>
<td>
Work Package
</td> </tr> </table>
## 1.4. Applicable and reference documents
The following documents are applicable:
[AD1] “ _**Moses Project Grant Agreement and Annexes** _ ”, Horizon 2020 Grant
Agreement No. 642258
[AD2] _**“AMENDMENT to Moses Project Grant Agreement and Annexes** _ ”,
Horizon 2020 Reference No. AMD-642258-5
The following documents are used as references:
[RD1] Moses Consortium Agreement, version 2, 2015-05-18
[RD2] Guidelines on Open Access to Scientific Publications and Research Data
in Horizon 2020, Version 2.1, 15 February 2016
[RD3] Guidelines on Data Management in Horizon 2020, Version 2.1, 15 February
2016
[RD4] Moses Deliverable D2.2 (Design Definition File)
[RD5] Moses Deliverable D3.1 (Crop Mapping Package)
# 2\. Integrated Exploitation Platform
## MOSES website
The MOSES project website is available at the link _http://www.moses-
project.eu/_ . The website has been developed using the well-known CMS
Wordpress, and is hosted by the company Webfaction. A screenshot of the
welcome page of the website is shown in the following Figure 1.
**Figure 1 - Screenshot of the MOSES public web site**
## MOSES platform
All data coming from the MOSES platform have been centralized into a webGIS
portal. The public URL of the project’s portal is the following:
_https://moses.esriitalia.it/portal_ . The portal is also accessible from the
link “Portal” of the project’s official website: _www.moses-project.eu/_ A
screenshot of the welcome page of the portal is shown in the following Figure
2.
**Figure 2 - Screenshot of the MOSES webGIS portal**
As the webGIS portal is a tool mainly for GIS-expert users, a specific web
application for the final user was developed during the Beta and Release
Candidate phases of the project. This application has been designed according
to the feedback of the MOSES Demonstration Area partners, and considerable
tailoring and customization work has been performed.
The web application has been deployed for every DA, and the URL is:
_https://moses.esriitalia.it/mosesviewer_rc_xx/_ , where xx = {it, sp, mo, ro}
is the identifier of the DA. Web applications are profiled and users can
access them with their MOSES portal credentials. A screenshot of the web
application is shown in the following Figure 3.
**Figure 3 - Screenshot of the MOSES front-end web application**
# 3\. Datasets generated by MOSES platform
During the activities of the project, several datasets have been generated by
the MOSES platform. The following table lists the high-level products (refer
to [RD3] for more details).
<table>
<tr>
<th>
**ID**
</th>
<th>
**Product Name**
</th>
<th>
**Brief Description**
</th> </tr>
<tr>
<td>
1
</td>
<td>
Crop raster maps
</td>
<td>
Raster images showing the crop classification in the demonstration areas. Two
kinds of dataset will be produced each year for each demonstration area: an
“early season” classification of macro-classes and an in-season crop
classification, updated on a weekly or fortnightly basis
</td> </tr>
<tr>
<td>
2
</td>
<td>
Seasonal probabilistic forecast
</td>
<td>
XML files containing the seasonal probabilistic forecast of 6 climatological
indices (expressed as anomalies with respect to the climatological averages).
Each file refers to a cell of the downscaling grid on the demonstration area.
The frequency of generation depends on the length of the irrigation season and
it ranges from once per year to once per month.
</td> </tr> </table>
<table>
<tr>
<th>
**ID**
</th>
<th>
**Product Name**
</th>
<th>
**Brief Description**
</th> </tr>
<tr>
<td>
3
</td>
<td>
Synthetic weather series generated from seasonal probabilistic forecasts
</td>
<td>
Comma separated value files of synthetic daily weather data (minimum and
maximum temperature, precipitation) computed by a weather generator fed by
climate data and seasonal probabilistic forecast. Each file refers to a cell
of the local meteo grid on the demonstration area. The frequency of emission
depends on the length of the irrigation season and it ranges from once per
year to once per month.
</td> </tr>
<tr>
<td>
4
</td>
<td>
Phenological stage data
</td>
<td>
Raster images and statistical tables of the phenological stage of the
monitored crops in the demonstration areas, computed by the “crop water demand
processor”. During crop growing season, these datasets will be updated on a
weekly basis
</td> </tr>
<tr>
<td>
5
</td>
<td>
Short-term forecasts of crop water demand
</td>
<td>
Raster images and statistic tables containing the short-term forecasts of crop
water demand, computed by the “crop water demand processor”. During crop
season, these datasets will be updated on a weekly basis
</td> </tr>
<tr>
<td>
6
</td>
<td>
Crop water demand monitoring data
</td>
<td>
Raster images and statistical tables of current crop water demand, computed by
the “crop water demand processor”. During crop season, these datasets will be
updated on a weekly basis
</td> </tr>
<tr>
<td>
7
</td>
<td>
Seasonal irrigation forecasts
</td>
<td>
This product is composed of two comma-separated value files: seasonal
irrigation climate and seasonal irrigation forecast, where season means a
3-month period.
Both the outputs are the statistical distribution of irrigations estimated by
the soil water balance processor and expressed as percentiles for each
computational unit.
The frequency of emission depends on the length of the irrigation season and
it ranges from once per year to once per month.
</td> </tr>
<tr>
<td>
8
</td>
<td>
Short-term irrigation forecasts
</td>
<td>
Comma separated value files containing the status of crop water availability,
forecasts of rainfall and crop water needs for the next 7 days and model
assessment of previous irrigations, for each computational unit of the
demonstration area. This product is updated on a daily basis.
</td> </tr>
<tr>
<td>
9
</td>
<td>
In field measures of water balance components
</td>
<td>
Tables containing direct (in field) measurements of water balance components,
collected in the Demonstration Area during crop season.
</td> </tr> </table>
The following paragraphs report, for each product, the datasets generated
with their main characteristics.
## 3.1. Crop raster maps
This product consists of the following two datasets, which are described in
detail in the subsequent paragraphs:
* Early crop map
* In season crop map
### 3.1.1 Early Season Crop Maps (ECM)
<table>
<tr>
<th>
Product ID.
</th>
<th>
ECM
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Early (Functional) Crop Map
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
_Mapping of broad crop classes at a very early stage in the irrigation season
used to estimate crop water requirements from mid spring to summer season in
combination with seasonal probabilistic forecasts and a soil water balance
model._
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
_Early season crop classification mapping relies on the possibility to
discriminate irrigated crops prior to the growing season start; it is based on
a few satellite images, selected at given time windows: classes are aggregated
crops, also indicated as “Crop Functional Groups”; it is intended that if such
mapping is not feasible the seasonal irrigation forecast module will derive
such information from alternative sources (Land Use/Land cover maps,
statistics, ground surveys, etc.)._
</td> </tr>
<tr>
<td>
Output Layers 2 :
</td>
<td>
1. _Vector: EARLY_CROP_CLASS_
2. _Raster: EARLY_CROP_CLASS_
_Id of the crop map class (integer value)._
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
_N/A_
</td> </tr>
<tr>
<td>
Temporal/spatial applicable domains:
</td>
<td>
_Yearly/District area (TBC)_
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
_One year_
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
_Demonstration area_
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
1. _Vector derived from the EO input data / 1:10000 scale_
2. _Raster derived from the EO input data / 20x20m_
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
_UTM WGS84_
</td> </tr> </table>
<table>
<tr>
<th>
Product ID.
</th>
<th>
ECM
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Early (Functional) Crop Map
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
_EO data:_
* _L8 from USGS_
* _S2 from DHUS_
_Non-EO data:_
* _Agronomic scheme [RD5]_
* _Orchards and vineyards mask [RD5]_
* _AOI mask [RD5]_
* _Agricultural mask (UAA Utilized Agricultural Area) [RD5]_
* _Verification data: ground truth from three ground surveys (selected crop fields Shapefile)_
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
_5 GB / rolling policy one year_
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
_One year_
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
_MOSES_ECM_YYYYMMDD_SSyyyyddd_SSyyyyddd_SSyyyyddd.shp where:_
* _MOSES_ECM is the product identifier_
* _yyyyddd is the sensing day of the three images used as input, in format year/doy_
* _SS satellite ID (S2 or L8)_
* _YYYYMMDD is the generation time of the early crop map_
_(a parsing sketch follows after this table)_
</td> </tr>
<tr>
<td>
Product ID.
</td>
<td>
ECM
</td> </tr>
<tr>
<td>
Product Name
</td>
<td>
Early (Functional) Crop Map
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
_Amount of data generated: 1 GB per year_
_Given the demonstration purpose of the alpha release of the MOSES platform,
all Early Crop Maps generated (like all other outputs generated during
the project) are saved on a storage server set up in Esri Italia premises._
_In details, the storage server has been implemented as a virtual disk (with
1TB capacity) inside the private Storage Area Network (SAN) owned by Esri
Italia._
_The system is managed with VMware HA Cluster and VMware DRS technologies,
which ensure a high degree of redundancy and availability. Virtual disks are
distributed on shared arrays on the SAN, and every array implements RAID-5
(rotating parity) level, plus a spare disk that comes into operation in case
of malfunctioning of any single unit._
_The storage server will be kept available for the whole duration of the
project. Procedures for maintenance of data after the end of the project will
be defined during the development of the beta release of the system._
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
_Access to the storage server is possible through FTP protocol, exploiting any
FTP client application with the following parameters:_
_IP: 84.253.153.145_
_Username: client_
_Password: mosesClient_
_Port: 21_
_Early Crop Maps can be found in the server folder:_
_DA_XX/YYYY/ECM/_
_Where XX represents the demonstration area code (currently only "IT", namely
the "Consorzio di Bonifica di Romagna") and YYYY is the year (currently only
data belonging to crop season 2016 are available)._
_Inside this folder, it is possible to find two subfolders, named “RASTER” and
“VECTOR”, which contain the product in the two formats._
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
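To illustrate how a consumer of the ECM product might apply the naming convention documented above, here is a minimal sketch that validates and parses an ECM file name. The example file name is hypothetical; the pattern simply transcribes the convention as stated in the table and is not part of the MOSES software itself.

```python
# Minimal sketch: parse ECM product file names of the form
# MOSES_ECM_YYYYMMDD_SSyyyyddd_SSyyyyddd_SSyyyyddd.shp
import re
from datetime import datetime

ECM_PATTERN = re.compile(
    r"MOSES_ECM_(?P<generated>\d{8})"   # generation date, YYYYMMDD
    r"_(?P<img1>(S2|L8)\d{7})"          # satellite ID + sensing day (year/doy)
    r"_(?P<img2>(S2|L8)\d{7})"
    r"_(?P<img3>(S2|L8)\d{7})\.shp$"
)

def parse_ecm_name(name: str) -> dict:
    """Return the components of an ECM file name, or raise on a mismatch."""
    m = ECM_PATTERN.match(name)
    if not m:
        raise ValueError(f"not an ECM product file: {name}")
    info = m.groupdict()
    info["generated"] = datetime.strptime(info["generated"], "%Y%m%d").date()
    return info

# Hypothetical file name for illustration only.
print(parse_ecm_name("MOSES_ECM_20160715_S22016120_L82016135_S22016150.shp"))
```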
**3.1.2 In-Season Crop Maps (ISCM)**
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**In-Season Crop Map**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
\- _Mapping the extent of specific crops at quarterly/monthly frequency using
satellite-observed multispectral data._
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
\- _Layer of segments/fields containing information about the presence of
specific crops._
</td> </tr>
<tr>
<td>
**Output Layers:**
</td>
<td>
\- Id_icm : ID of the crop map class (integer value).
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
\- _N/A_
</td> </tr>
<tr>
<td>
**Temporal/spatial applicable domains:**
</td>
<td>
\- _Quarterly/District area_
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
\- _Quarterly_
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
\- _Demonstration area_
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
\- _Vectorial derived from the EO input data / 1:10000_
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
\- _UTM WGS84_
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
* _EO data:_
* _S2 Bottom of Atmosphere (BOA) reflectance from satellite images downloaded in the pre-processing module._
_Other data:_
* _Training set/ground truth [RD5]_
* _Orchards and vineyards mask [RD5]_
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
* _EO data:_
* _7 GB / 2 months (forecast considering all four DA, satellite image acquisition every 5 days and 12 bands) Other data:_
* _Less than 50 MB/yearly_
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
\- _15-30 days_
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
\- _Vector/Raster_
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
* _MOSES_ICM_SSYYYYMMDD1_SSYYYYMMDD2_YYYYMMDD:_
_Where:_
* _MOSES_ICM is the product identifier_
* _YYYYMMDD1 is the sensing day of the first image used as input_
* _YYYYMMDD2 is the sensing day of the second image used as input_
* _SS satellite ID (S2)_
* _YYYYMMDD is the generation time of the crop map_
</td> </tr>
<tr>
<td>
**Product ID.**
</td>
<td>
</td> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**In-Season Crop Map**
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Amount of data generated: less than 1GB per year_
_Same solution adopted for storage of Early Crop Maps (see paragraph 3.1.1)_
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
_Data access through FTP protocol (connection parameters reported in the “data
sharing” section of paragraph 3.1.1)._
_In-Season Crop Maps can be found in the server folder:_
_DA_XX/YYYY/ISCM/_
_Where XX represents the demonstration area code and YYYY is the year._
_Inside this folder, it is possible to find a subfolder named “VECTOR”, which
contains the product (a retrieval sketch follows after this table)._
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
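As an illustration of the data-sharing arrangement described above, the following minimal sketch, using only the Python standard library, lists the ISCM vector products for one demonstration area and year over FTP, with the connection parameters and folder layout documented in paragraph 3.1.1. The demonstration area code and year are examples from that paragraph.

```python
# Minimal sketch: list In-Season Crop Map products on the MOSES storage
# server using the FTP parameters documented in paragraph 3.1.1.
from ftplib import FTP

def list_iscm_products(da_code: str, year: int) -> list[str]:
    """List ISCM vector products for one demonstration area and year."""
    with FTP("84.253.153.145") as ftp:          # server from paragraph 3.1.1
        ftp.login(user="client", passwd="mosesClient")
        ftp.cwd(f"DA_{da_code}/{year}/ISCM/VECTOR")
        return ftp.nlst()

# Example values: Italian DA ("IT"), crop season 2016.
for name in list_iscm_products("IT", 2016):
    print(name)
```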
## 3.2. Seasonal probabilistic weather forecast
<table>
<tr>
<th>
Product ID.
</th>
<th>
Seasonal Forecast (SF)
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Seasonal probabilistic forecast
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Seasonal probabilistic forecast includes the multi-model ensembles of seasonal
anomalies forecast, with respect to the reference climate, for 6 climate
indices, needed as input of the weather generator.
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
This product provides specific information on the output of the statistical
calibration processor, needed as input of the weather generator processor.
The statistical downscaling processor is meant to remove all systematic biases
at local scale from the multi-model seasonal forecast EUROSIP outputs and to
calibrate the predictions of local climate indices on the observed local
climate using the same reference period.
For each cell of the local analysis meteo grid the forecasts are produced as
anomalies with respect to the climate for the 6 indices needed as input by the
Weather Generator processor, namely:
o total precipitation (Prec3M);
o probability of wet days (WetDaysFrequency);
o probability of a wet day after a wet day (WetWetDaysFrequency);
o average minimum temperature (Tmin);
o average maximum temperature (Tmax);
o average difference of maximum temperature between dry and wet days (DeltaTmaxDryWet).
</td> </tr> </table>
<table>
<tr>
<th>
Product ID.
</th>
<th>
Seasonal Forecast (SF)
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Seasonal probabilistic forecast
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
The XML files contain the following elements (full dots) and corresponding attributes (rings):
· point: description of the computation area to which forecasts refer
◦ name – geographical name of location
◦ code – conventional point code
◦ lon – WGS84 longitude of center of computation area
◦ lat – WGS84 latitude of center of computation area
◦ info – other information
· climate: description of reference climate
◦ from – year in which reference climate begins
◦ to – year in which the reference climate ends
· models: description of the systems contributing to the multi-model ensemble
◦ number – number of systems contributing to the ensemble
◦ name – acronym for all the systems contributing to the ensemble
◦ members – number of ensemble members for each system
◦ repetitions – number of repetitions (typically 1)
◦ year – year to which the seasonal forecast refers
◦ season – acronym of the season to which the seasonal forecast refers
· forecast: includes all ensemble member forecast values for the 6 climate index anomalies
◦ var – describes each forecast field including:
▪ type – acronym of the field
▪ attribute – full field or anomaly (anomalies in our case)
▪ value – all ensemble member values for the field
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
lon,lat: decimal degrees PREC3M: mm
WetDaysFrequency: %
WetWetDaysFrequency: %
Tmin: °C
Tmax: °C
DeltaTmaxDryWet: °C
</td> </tr> </table>
<table>
<tr>
<th>
Product ID.
</th>
<th>
Seasonal Forecast (SF)
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Seasonal probabilistic forecast
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
Seasonal / District area
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
3-months
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
Computation area
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
Depending on the resolution of the local analysis grid (e.g., for Italy, ERG5
analysis: 5 km)
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
WGS84
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
The ensemble multi-model seasonal forecast anomalies over the computation area
are extracted from the calibrated multi-model seasonal prediction produced
over the corresponding national domain.
These calibrated predictions are obtained by applying a MOS (Model Output
Statistics) statistical downscaling scheme using as input the multi-model
operational EUROSIP seasonal predictions (for more details see D3.2 - Seasonal
probabilistic forecasting).
An Identities table of the cells belonging to the analysis grid in .csv format
is needed to feed this processor. The table has to contain the following
fields:
* Id_meteo: identifier of the cell (5 digits)
* Table_name: name of the datatable in the meteo db (typically: GRD_XXXXX where XXXXX is the id_meteo)
* Meteo_name: name of the location
* Longitude: longitude of the central point of the cell in decimal degrees
* Latitude: latitude of the central point of the cell in decimal degrees
* Height: height in meters of the central point of the cell
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
The inputs are not archived in the system
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
Monthly
</td> </tr> </table>
<table>
<tr>
<th>
Product ID.
</th>
<th>
Seasonal Forecast (SF)
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Seasonal probabilistic forecast
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
Extensible Markup Language file (.xml) with the following elements (full dots) and corresponding attributes (rings):
· point
◦ name – alphanumeric string
◦ code – integer, 4 digits
◦ lon – float, 6 digits, precision 3 digits
◦ lat – float, 6 digits, precision 3 digits
◦ info – alphanumeric string
· climate
◦ from – integer, 4 digits
◦ to – integer, 4 digits
· models
◦ number – integer, 1 digit
◦ name – alphanumeric, 4 digits, array with dimension ‘number’
◦ member – integer, 2 digits, array with dimension ‘number’
◦ repetition – integer, 1 digit
◦ year – integer, 4 digits
◦ season – alphanumeric, 3 digits
· forecast
◦ var
▪ type – alphanumeric string
▪ attribute – alphanumeric string
▪ value – float array with dimension nrModels * nrMembers
(a parsing sketch follows after this table)
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
**File name convention:** **GRD_XXXXX.xml** where:
● **XXXXX** is the identifier of the cell that refers to the local meteo
analysis grid
**Folder name convention:**
see “Data Sharing” section of this table
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
About 10 kB for each XML file (corresponding to each cell of the local climate
observational analysis grid).
Example for a DA with 100 weather grid cells and 12 forecasts per year:
Annual storage: 100 * 12 * 10 kB = 12 MB
All files and folder are archived on a storage server made available on Esri
Italia premises (same solution adopted for storage of Early Crop Maps
described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph 3.1.1)
Weather forecasts are saved in folders that can be found in the server folder:
DA_XX/YYYY/SF/MMM/
Where XX represents the demonstration area code, YYYY the year and MMM the
acronym of the 3-months forecast period.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
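Since the table above only describes the XML layout textually, the following minimal sketch shows how such a file could be read with Python's standard library. The tag and attribute names follow the element/attribute description; the file path and the whitespace-separated encoding of array-valued attributes are assumptions.

```python
# Minimal sketch: reading one seasonal-forecast XML file from
# DA_XX/YYYY/SF/MMM/ (path and cell file name are illustrative).
import xml.etree.ElementTree as ET

tree = ET.parse("DA_IT/2017/SF/JJA/GRD_00042.xml")
root = tree.getroot()

point = root.find("point")
print(point.get("name"), point.get("lon"), point.get("lat"))

models = root.find("models")
n_models = int(models.get("number"))

for var in root.find("forecast").findall("var"):
    # Assumption: array-valued attributes are whitespace-separated.
    values = [float(v) for v in var.get("value").split()]
    # One value per ensemble member: nrModels * nrMembers in total.
    print(var.get("type"), var.get("attribute"), len(values), "values")
```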
## 3.3. Synthetic weather series generated from seasonal probabilistic
forecasts
<table>
<tr>
<th>
Product ID.
</th>
<th>
Synthetic Weather Series (SWS)
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Synthetic weather series generated from seasonal probabilistic forecasts
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
The synthetic series of seasonal forecasts of daily temperature, precipitation
and, if available, potential evapotranspiration are one the input data sources
that feed the seasonal version of soil water balance processor in order to
produce seasonal irrigation forecasts.
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
The SyntheticSeries is an intermediate product between two different MOSES
processors: it is the output of the Weather Generator processor and one of
the inputs of the Seasonal irrigation forecast processor. It provides, for
each cell of the weather grid on the computation area, a synthetic series of
daily temperature and precipitation data that represents the probabilistic
seasonal forecast.
In more detail, the number _n_ of years of the synthetic series depends on
the number of _members_ and _repetitions_ of each _model_ of the
probabilistic seasonal anomalies forecast (see the XML file description).
Each year of the series is composed of observed data for the previous 9
months and a synthetic series generated by the weather generator processor
for the 3-month forecast period.
The weather generator processor is fed by climate data and a probabilistic
seasonal anomalies forecast XML file (see the SeasonalForecasts description).
For instance, if the seasonal forecast refers to the summer season JJA (June,
July and August), each 12-month period of the SyntheticSeries is composed of
9 months of observed daily data (from the 1st of September of the year
before the forecast until the 31st of May of the forecast year) and 3 months
of generated daily data (June, July and August).
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
Each record is composed of the following fields:
1. **date** : date of generated year
2. **tmin** : daily minimum air temperature
3. **tmax** : daily maximum air temperature
4. **tavg** : daily average air temperature
5. **prec** : total daily precipitation
6. **etp** : total daily evapotranspiration (not mandatory, to be implemented)
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
* tmin: °C
* tmax: °C
* tavg: °C
* prec: mm
* etp: mm
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
Seasonal / District area
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
3-months
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
Each file covers one cell of the weather grid
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
Same resolution as the weather grid
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
UTM WGS84
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
This product is the output of the Weather generator processor fed by:
* probabilistic seasonal forecast anomalies XML file (see the Seasonal Forecasts description)
* climate data (daily temperature and precipitation, at least 20 years) (for more details see D3.3 - Irrigation forecasting package)
* observed weather data (daily temperature and precipitation, at least the last 9 months before the forecast until the first day of seasonal forecast)
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
* Seasonal forecast anomalies XML: about 10 kB per file
* Climate data: about 300 kB for each weather grid cell
* Observed data: about 20 kB for each weather grid cell
Rolling policy: annual
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
monthly (when seasonal irrigation forecast is requested)
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
Comma separated value file (.csv), with the following fields:
* date, ISO8601 format (YYYY-MM-DD)
* tmin, float, precision: 1 digit
* tmax, float, precision: 1 digit
* tavg, float, precision: 1 digit
* prec, float, precision: 1 digit
* etp, float, precision: 1 digit
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
**File name convention:** **GRD_XXXXX.csv** where:
* **XXXXX** is the identifier of the cell that refers to the local meteo analysis grid
**Folder name convention:**
* see “Data Sharing” section of this table
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Each synthetic series requires about 1 MB of storage.
Example of annual storage for a DA with 100 weather grid cells and three
seasonal irrigation forecasts per year:
Annual storage: 100 * 3 * 1 MB = 300 MB
All files and folder are archived on a storage server made available on Esri
Italia premises (same solution adopted for storage of Early Crop Maps
described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph 3.1.1)
Synthetic weather series are saved in folders that can be found in the server
folder:
DA_XX/YYYY/SWS/MMM
Where XX represents the demonstration area code, YYYY is the year and MMM the
acronym of the 3-months forecast period.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
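The sketch below illustrates how one cell file of the synthetic series could be loaded and inspected, following the CSV format and naming convention above. The path is illustrative, and the presence of a header row carrying the listed field names is an assumption.

```python
# Minimal sketch: loading one synthetic-weather-series cell file from
# DA_XX/YYYY/SWS/MMM/ (illustrative path; header row assumed).
import csv

with open("DA_IT/2017/SWS/JJA/GRD_00042.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Each 12-month block holds 9 observed months followed by the 3 generated
# forecast months; here we simply sum the generated JJA precipitation.
jja_prec = sum(float(r["prec"]) for r in rows
               if r["date"][5:7] in ("06", "07", "08"))
print(f"JJA precipitation summed over all series years: {jja_prec:.1f} mm")
```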
## 3.4. Phenological stage data
This MOSES product consists of the following two datasets, which are described
in detail in the subsequent tables:
* Leaf Area Index (LAI)
* Normalized Difference Vegetation Index (NDVI)
### 3.4.1 Leaf Area Index (LAI)
<table>
<tr>
<th>
Product ID.
</th>
<th>
LAI
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Leaf Area Index
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
_Mapping and monitoring of Leaf Area Index (LAI) biophysical variable by means
of remote sensing multispectral data. The monitoring is performed at 5 days
temporal resolution or more depending on the availability of remote sensing
multispectral observations._
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
_The LAI is the amount of one-sided leaf area per unit area of ground.
Different plant functional types possess a different amount, range and
temporal evolution of leaf area, leaf biomass and leaf area density. The
product also includes layers containing zonal statistics (mean and standard
deviation) over segments/fields (= the unit map defined in the “in season
crop map module”)._
</td> </tr>
<tr>
<td>
Output Layers:
</td>
<td>
_Vector layers:_
1. _LAI_mean_DOYaa_
2. _LAI_std_DOYaa_
_Raster:_
1. _LAI_DD_MM_AA_
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
_m²/m²_
</td> </tr>
<tr>
<td>
Temporal/spatial applicable domains:
</td>
<td>
_Daily/District area_
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
_Demonstration area_
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m (Sentinel) to 30 x 30m
(Landsat 8)_
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
_EO data:_
* _L8 TOA or S2 BOA reflectance (ref.: DD 3.4)_
_Non-EO data:_
* _csv file of input crop parameters and others (ref.: D3.4)_
* _UCM vector_
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
_EO data: 2 GB / quarterly_
_Non-EO data: 10 MB / rolling policy: one time_
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
_Vector /Raster_
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
_Raster_
_MOSES_LAI_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_LAI is the product identifier_
* _YYYYMMDD is the generation time of the LAI_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
_Amount of data generated: 1.5 GB per 15 days_
Raster images and vector data are archived on a storage server made available
on Esri Italia premises (same solution adopted for storage of Early Crop Maps
described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
Data access is possible through FTP protocol (connection
parameters reported in “data sharing” section of paragraph 3.1.1)
Raster images of LAI parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/LAI/
Where XX represents the demonstration area code and YYYY is the year.
The “CWD” folder contains all outputs of the Crop Water Demand processor,
divided into “RASTER” and “VECTOR” products. Inside the “RASTER” sub-folder it
is possible to find specific directories for each product; inside the “LAI”
folder, the user can find the “tiff” files named according to the
above-mentioned convention.
LAI in vector format is available as a field of the “Unit Crop Map” shapefile,
which can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. A single file contains the LAI values together with the
NDVI, KC, CWD and CWDF data described in the following paragraphs, since they
are all computed by the same processor.
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
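The raster naming convention above is shared, with an optional method tag, by the other MOSES raster products described in this chapter. The following sketch decodes such a name; the exact underscore separators are an assumption, since the convention is only shown informally in the tables.

```python
# Minimal sketch: decoding MOSES_<PRODUCT>[_<ME>]_YYYYMMDD_SS_yyyyDOY.tif
# (ME appears only for Kc/CWD/CWDF/GIWR/GIWRF; separators assumed).
import re
from datetime import datetime, timedelta

PATTERN = re.compile(
    r"MOSES_(?P<product>[A-Z]+)(?:_(?P<method>an|em))?"
    r"_(?P<generated>\d{8})_(?P<sat>S2|L8)_(?P<sensing>\d{7})\.tif"
)

def parse_name(name: str) -> dict:
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"not a MOSES raster name: {name}")
    d = m.groupdict()
    year, doy = int(d["sensing"][:4]), int(d["sensing"][4:])
    d["sensing_date"] = (datetime(year, 1, 1) + timedelta(days=doy - 1)).date()
    return d

print(parse_name("MOSES_LAI_20170612_S2_2017160.tif"))  # illustrative name
```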
### 3.4.2. Normalized Difference Vegetation Index (NDVI)
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
**NDVI**
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**Normalized Difference Vegetation Index**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
_Mapping and monitoring of Normalized Difference Vegetation index (NDVI) by
means of remote sensing multispectral data. The monitoring is performed at 5
days temporal resolution or more depending on the availability of remote
sensing multispectral observations._
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
_The NDVI is an index of plant “greenness” or photosynthetic activity, and is
one of the most commonly used vegetation indices. The product also includes
layers containing zonal statistics (mean and standard deviation) over
segments/fields (= the unit map defined in the “in season crop map module”)._
</td> </tr>
<tr>
<td>
**Output Layers:**
</td>
<td>
_Vector layers:_
1. _NDVI_mean_DOYaa_
2. _NDVI_std_DOYaa_
_Raster:_
_1\. NDVI_DD_MM_AA_
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
_dimensionless_
</td> </tr>
<tr>
<td>
**Temporal/spatial applicable domains:**
</td>
<td>
_Daily /District Area_
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
_Demonstration Area_
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m (Sentinel) to 30 x 30m
(Landsat 8)_
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
_EO data:_
_L8 TOA or S2 TOC reflectance (ref.: DD 3.4)._
_UCM vector_
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
_EO data_
_2 GB / quarterly_
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
_Vector/raster_
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
_Raster_
_MOSES_NDVI_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_NDVI is the product identifier_
* _YYYYMMDD is the generation time of the NDVI_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Amount of data generated: 1.5 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph
3.1.1)
**Raster images** of NDVI parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/NDVI/
Where XX represents the demonstration area code and YYYY is the year.
Inside the “NDVI” folder user can find the “tiff” files named according to the
above mentioned convention.
NDVI in **vector format** is available as a field of the “Unit Crop Map”
shapefile, that can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the NDVI, LAI, KC, CWD and CWDF
data described in these paragraphs.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
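For reference, the standard NDVI formula (not restated in the table above, but well established) is NDVI = (NIR − Red) / (NIR + Red), computed per pixel from the input reflectance bands:

```python
# NDVI from near-infrared and red reflectance; the guard against a zero
# denominator is our addition, purely for numeric safety.
def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

print(ndvi(0.45, 0.08))  # dense vegetation: roughly 0.70
```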
## 3.5. Short term forecast of crop water demand
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
**CWDF**
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**Short term forecast of Crop Water Demand**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
_Seven-day forecast of the crop water demand, combining remote sensing
multispectral data and short-term forecast meteorological data._
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
_The crop water demand forecast is an estimate of the amount of water required
for optimal growth of a plant in the following 7 days. It is the cumulative
7-day value of the forecast maximum crop evapotranspiration ETmax. The CWDF
product includes forecasts using two different estimates of the forecast ETmax
(cf. 3.4). The product also includes layers containing zonal statistics (mean
and standard deviation) of the 7-day ETmax forecast over segments/fields
(= the unit map defined in the “in season crop map module”)._
</td> </tr>
<tr>
<td>
**Output Layers:**
</td>
<td>
_Vector layers:_
1. _CWDF_an_mean_
2. _CWDF_an_std_
3. _CWDF_emp_mean_
4. _CWDF_emp_std_
_Raster:_
1. _CWDF_analytical_DD_MM_AA_
2. _CWDF_empirical_DD_MM_AA_
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
_mm/week_
</td> </tr>
<tr>
<td>
**Temporal/spatial applicable domains:**
</td>
<td>
_Weekly /District area_
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
_weekly_
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
_Demonstration Area_
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m to 30 x 30m_
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
_EO data:_
* _L8 TOA or S2 TOC (rf,: DD 3.4)_
_No EO DATA:_
* _csv file of short term meteorological forecast (ref.: D3.4)_
* _Fruit mask if available_
* _UCM vector_
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
_EO data: 2 GB / quarterly_
_Non-EO data: 10 MB / daily_
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
_Raster_
_MOSES_CWDF_ME_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_CWDF is the product identifier_
* _YYYYMMDD is the generation time of the CWDF_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
* _ME is the method used to estimate the CWDF: analytical (an) or empirical (em)_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Amount of data generated: 1.5 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph
3.1.1)
**Raster images** of CWDF parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/CWDF/
Where XX represents the demonstration area code and YYYY is the year.
The folder contains the “tiff” files of both “analytical” and
“empirical” CWDF, named according to the above-mentioned convention.
CWDFs in **vector format** are available as fields of the “Unit Crop Map”
shapefile, that can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the CWDF, NDVI, LAI, KC and CWD
data described in these paragraphs.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
## 3.6. Short term forecast of Gross irrigation water requirements
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
**GIWRF**
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**Short term forecast of Gross Irrigation Water Requirements**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
_Seven-day forecast of the gross irrigation water requirements (GIWR),
combining remote sensing multispectral data and short-term forecast
meteorological data._
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
_The gross irrigation water requirements forecast is an estimate of the amount
of irrigation water required for optimal growth of a plant in the following 7
days. It is defined as the crop water demand minus precipitation. The GIWRF
product includes forecasts using two different estimates of the forecast ETmax
(cf. 3.4). The product also includes layers containing zonal statistics (mean
and standard deviation) of the 7-day GIWR forecast over segments/fields
(= the unit map defined in the “in season crop map module”)._
</td> </tr>
<tr>
<td>
**Output Layers:**
</td>
<td>
_Vector layers:_
1. _GIWRF_an_mean_
2. _GIWRF_an_std_
3. _GIWRF_emp_mean_
4. _GIWRF_emp_std_
_Raster:_
1. _GIWRF_analytical_DD_MM_AA_
2. _GIWRF_empirical_DD_MM_AA_
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
_mm/week_
</td> </tr>
<tr>
<td>
**Temporal/spatial applicable domains:**
</td>
<td>
_Weekly /District area_
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
_weekly_
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
_Demonstration Area_
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m to 30 x 30m_
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
_EO data:_
* _L8 TOA or S2 TOC reflectance from the pre-processing module (ref.: DD 3.4)_
_Non-EO data:_
* _csv file of short term meteorological forecast (ref.: D3.4)_
* _Fruit mask, if available_
* _UCM vector_
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
_EO data: 2 GB / quarterly_
_Non-EO data: 10 MB / daily_
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
_Raster_
_MOSES_GIWRF_ME_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_GIWRF is the product identifier_
* _YYYYMMDD is the generation time of the GIWRF_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
* _ME is the method used to estimate the GIWRF: analytical (an) or empirical (em)_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
_Amount of data generated: 1.5 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph
3.1.1)
**Raster images** of GIWRF parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/GIWRF/
Where XX represents the demonstration area code and YYYY is the year.
The folder contains the “tiff” files of both “analytical” and
“empirical” GIWRF, named according to the above-mentioned convention.
GIWRFs in **vector format** are available as fields of the “Unit Crop Map”
shapefile, that can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the GIWRF, CWDF, NDVI, LAI, KC and
CWD data described in these paragraphs.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
## 3.7. Crop water demand monitoring data
This MOSES product consists of the following three datasets, which are
described in detail in the subsequent tables:
* Crop Coefficient(Kc)
* Crop Water Demand (CWD)
* Gross Irrigation Water Requirement (GIWR)
### 3.7.1 Crop Coefficient (Kc)
<table>
<tr>
<th>
Product ID.
</th>
<th>
Kc
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Crop Coefficient
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
_Mapping and monitoring of the crop coefficient by means of remote sensing
multispectral data. The monitoring is performed at 5 days temporal resolution
or more depending on the availability of remote sensing multispectral
observations._
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
_The crop coefficient is a crop property used to predict evapotranspiration;
it corresponds to the ratio between crop evapotranspiration (ETc) and
reference evapotranspiration (ET0)._
_The product includes crop coefficients estimated by two different methods
(empirical and analytical). The product also includes layers containing zonal
statistics (mean and standard deviation) over segments/fields (= the unit map
defined in the “in season crop map module”)._
</td> </tr>
<tr>
<td>
Output Layers:
</td>
<td>
_Vector layers:_
1. _Kc_an_mean_
2. _Kc_an_std_
3. _Kc_emp_mean_
4. _Kc_emp_std_
_Raster:_
1. _Kc_analytical_DD_MM_AA_
2. _Kc_empirical_DD_MM_AA_
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
_mm/mm_
</td> </tr>
<tr>
<td>
Temporal/spatial applicable domains:
</td>
<td>
_Daily/District Area_
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
_Demonstration Area_
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m to 30 x 30m_
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
_EO data:_
1. _L8 TOA or S2 TOC reflectance data (ref.: DD 3.4)_
_Non-EO data:_
1. _csv file of input crop parameters and miscellaneous (ref.: D3.4)_
2. _csv file of observed meteorological data_
3. _Fruit mask, if present_
4. _UCM vector_
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
_EO data: 4 GB / quarterly_
_Non-EO data:_
1. _10 MB / rolling policy: one time_
2. _10 MB / daily_
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
_Raster_
_MOSES_KC_ME_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_KC is the product identifier_
* _YYYYMMDD is the generation time of the Kc_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
* _ME is the method used to estimate the Kc: analytical (an) or empirical (em)_
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
_Amount of data generated: 3 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
Data access is possible through FTP protocol (connection
parameters reported in “data sharing” section of paragraph 3.1.1)
Raster images of KC parameter are saved in the following folder on the server:
DA_XX/YYYY/CWD/RASTER/KC/
Where XX represents the demonstration area code and YYYY is the year.
The folder contains the “tiff” files of both “analytical” and “empirical” KCs,
named according to the above-mentioned convention.
KCs in vector format are available as fields of the “Unit Crop Map” shapefile,
that can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the KC, NDVI, LAI, CWD and CWDF
data described in these paragraphs.
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
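The Kc definition above implies the usual FAO-56 relation between the crop coefficient, the reference evapotranspiration and the crop evapotranspiration; the sketch below makes the arithmetic explicit (values are illustrative):

```python
# FAO-56 relation: crop evapotranspiration ETc = Kc * ET0; since the CWD
# product equals the maximum crop evapotranspiration, CWD follows directly.
def crop_water_demand(kc: float, et0_mm_per_day: float) -> float:
    return kc * et0_mm_per_day

print(crop_water_demand(1.15, 5.0))  # 5.75 mm/day (illustrative mid-season Kc)
```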
### 3.7.2 Crop Water Demand (CWD)
<table>
<tr>
<th>
Product ID.
</th>
<th>
CWD
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Crop water demand
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
_Mapping and monitoring of the crop water demand and irrigation water
requirements by means of remote sensing multispectral data. The monitoring is
performed at 7 days temporal resolution or more depending on the availability
of remote sensing multispectral observations._
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
_The crop water demand is an estimate of the amount of water required for
optimal growth of a plant, and it is equal to the maximum crop
evapotranspiration ETmax. The CWD product includes two different estimates of
ETmax, both employing the FAO-56 method but using two crop coefficients
estimated by different methods (empirical and analytical). The product also
includes layers containing zonal statistics (mean and standard deviation)
over segments/fields (= the unit map defined in the “in season crop map
module”)._
</td> </tr>
<tr>
<td>
Output Layers:
</td>
<td>
_Vector layers:_
1. _CWD_an_mean_
2. _CWD_an_std_
3. _CWD_emp_mean_
4. _CWD_emp_std_
_Raster:_
1. _CWD_analytical_DD_MM_AA_
2. _CWD_empirical_DD_MM_AA_
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
_mm/day_
</td> </tr>
<tr>
<td>
Temporal/spatial applicable domains:
</td>
<td>
_Daily/District area_
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
_Demonstration area_
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m to 30 x 30m_
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
_EO data:_
1. _L8 TOA or S2 TOC reflectance data (ref.: DD 3.4)_
_Non-EO data:_
1. _csv file of input crop parameters and miscellaneous (ref.: D3.4)_
2. _csv file of observed meteorological data_
3. _Fruit mask, if present_
4. _UCM vector_
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
_EO data: 4 GB / quarterly_
_Non-EO data:_
1. _10 MB / once per year_
2. _10 MB / daily_
3. _less than 50 MB / quarterly_
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
_Raster_
_MOSES_CWD_ME_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_CWD is the product identifier_
* _YYYYMMDD is the generation time of the CWD_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
* _ME is the method used to estimate the CWD: analytical (an) or empirical (em)_
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
_Amount of data generated: 3 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
Data access is possible through FTP protocol (connection
parameters reported in “data sharing” section of paragraph 3.1.1)
Raster images of CWD parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/CWD/
Where XX represents the demonstration area code and YYYY is the year.
The folder contains the “tiff” files of both “analytical” and
“empirical” CWDs, named according to the above-mentioned convention.
CWDs in vector format are available as fields of the “Unit Crop Map”
shapefile, that can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the CWD, NDVI, LAI, KC and CWDF
data described in these paragraphs.
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
### 3.7.3 Gross Irrigation Water Requirement (GIWR)
<table>
<tr>
<th>
Product ID.
</th>
<th>
GIWR
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Gross Irrigation Water Requirement
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
_Mapping and monitoring of the irrigation water requirements by means of
remote sensing multispectral data. The monitoring is performed at 7 days
temporal resolution or more depending on the availability of remote sensing
multispectral observations._
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
_The gross irrigation water requirement is an estimate of the amount of
irrigation water required for optimal growth of a plant. It is defined as the
crop water demand minus precipitation. The GIWR product includes estimates
using two different estimates of ETmax (cf. 3.4). The product also includes
layers containing zonal statistics (mean and standard deviation) over
segments/fields (= the unit map defined in the “in season crop map module”)._
</td> </tr>
<tr>
<td>
Output Layers:
</td>
<td>
_Vector layers:_
1. _GIWR_an_mean_
2. _GIWR_an_std_
3. _GIWR_emp_mean_
4. _GIWR_emp_std_
_Raster:_
1. _GIWR_analytical_DD_MM_AA_
2. _GIWR_empirical_DD_MM_AA_
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
_mm/day_
</td> </tr>
<tr>
<td>
Temporal/spatial applicable domains:
</td>
<td>
_Daily/District area_
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
_Demonstration area_
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
_Vectorial derived from the EO input data / 1:10000_
_Raster derived from the EO input data / from 10x10m to 30 x 30m_
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
_UTM WGS84_
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
_EO data:_
1. _L8 TOA or S2 TOC reflectance data (ref.: DD 3.4)_
_Non-EO data:_
1. _csv file of input crop parameters and miscellaneous (ref.: D3.4)_
2. _csv file of observed meteorological data_
3. _Fruit mask, if present_
4. _UCM vector_
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
_EO data: 4 GB / quarterly_
_Non-EO data:_
1. _10 MB / once per year_
2. _10 MB / daily_
3. _less than 50 MB / quarterly_
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
_Vector/Raster_
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
_Raster_
_MOSES_GIWR_ME_YYYYMMDD_SS_yyyyDOY.tif where:_
* _MOSES_GIWR is the product identifier_
* _YYYYMMDD is the generation time of the GIWR_
* _SS is the satellite ID (S2 or L8)_
* _yyyyDOY is the sensing day used as input, in year/day-of-year format_
* _ME is the method used to estimate the GIWR: analytical (an) or empirical (em)_
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
_Amount of data generated: 3 GB per 15 days_
Raster images and vector data (shapefiles) are archived on a storage server
made available on Esri Italia premises (same solution adopted for storage of
Early Crop Maps described in paragraph 3.1.1).
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
Data access is possible through FTP protocol (connection
parameters reported in “data sharing” section of paragraph 3.1.1)
Raster images of the GIWR parameter are saved in the following folder on the
server:
DA_XX/YYYY/CWD/RASTER/GIWR/
Where XX represents the demonstration area code and YYYY is the year.
The folder contains the “tiff” files of both the “analytical” and
“empirical” GIWRs, named according to the above-mentioned convention.
GIWRs in vector format are available as fields of the “Unit Crop Map”
shapefile, which can be found in the directory:
DA_XX/YYYY/CWD/VECTOR/
Inside this directory, files are named according to the following convention:
UCM_DA_YYYY_SEA_DOY
Where DA is the identifier of the Demo Area, e.g. “SP”; SEA is the identifier
of the irrigation season (initials of the reference months, e.g. “JJA” for
June-July-August); YYYY refers to the current year and DOY is the Day Of Year
of the computation. The same file contains the CWD, GIWR, NDVI, LAI, KC,
GIWRF and CWDF data described in these paragraphs.
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
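Following the definition above (crop water demand minus precipitation), the GIWR can be sketched as below; clipping negative values at zero is our assumption, since an irrigation requirement cannot be negative:

```python
# GIWR = crop water demand minus precipitation (per the description above);
# the clipping at zero is an assumption, not stated in the table.
def giwr(cwd_mm: float, prec_mm: float) -> float:
    return max(cwd_mm - prec_mm, 0.0)

print(giwr(5.2, 3.0))  # 2.2 mm/day of irrigation water required
```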
## 3.8. Seasonal irrigation forecast
This MOSES product consists of the following two datasets, which are described
in detail in the following tables:
* Seasonal irrigation climate
* Seasonal irrigation forecast
### 3.8.1. Seasonal irrigation climate
<table>
<tr>
<th>
Product ID.
</th>
<th>
SeasonalIrriClimate
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
Seasonal irrigation climate
</td> </tr>
<tr>
<td>
Purpose
</td>
<td>
Seasonal irrigation climate is the 3-month crop irrigation assessment
computed by the MOSES soil water balance processor for the climatic period,
e.g. for the Italy demonstration area from 1991 until the year of the
seasonal forecast.
</td> </tr>
<tr>
<td>
Description (Content Specification):
</td>
<td>
This product provides, for each distinct combination of the unit map,
information about the total crop water needs for the analyzed season of the
climate series. In more detail, the data provided are the irrigation
statistical distribution in millimeters, expressed as percentiles for each
unit map. This information has to be integrated with the seasonal irrigation
forecast output (see the specific table of the product SeasonalIrriForecasts)
in order to compare the seasonal irrigation forecast with the irrigation
climate and to evaluate the signal of the forecast with respect to the
climate.
</td> </tr>
<tr>
<td>
Layers(*):
</td>
<td>
1. ID_CASE: identifier of the distinct combination of crop map, soil map and meteo grid.
2. CROP: identifier of the crop for the water balance processor
3. SOIL: identifier of the soil type for the water balance processor
4. METEO: identifier of the meteo cell that refers to the meteo grid of the demonstration area
5. p5: the 5th percentile of the total seasonal irrigation quantity for the crop, computed from climate data
6. p25: the 25th percentile of the total seasonal irrigation quantity for the crop, computed from climate data
7. p50: the 50th percentile of the total seasonal irrigation quantity for the crop, computed from climate data
8. p75: the 75th percentile of the total seasonal irrigation quantity for the crop, computed from climate data
9. p95: the 95th percentile of the total seasonal irrigation quantity for the crop, computed from climate data
</td> </tr>
<tr>
<td>
Measurement Unit:
</td>
<td>
Millimeters
</td> </tr>
<tr>
<td>
Field of Applicability (Temporal/Spatial):
</td>
<td>
Monthly / District area
</td> </tr>
<tr>
<td>
Temporal coverage
</td>
<td>
3-months
</td> </tr>
<tr>
<td>
Spatial Coverage / Area:
</td>
<td>
Computation area
</td> </tr>
<tr>
<td>
Spatial Resolution / Scale (Data Grid):
</td>
<td>
Vectorial, the same resolution as the unit map
</td> </tr>
<tr>
<td>
Geographic projection / Reference system:
</td>
<td>
UTM WGS84
</td> </tr>
<tr>
<td>
Input Data/Sources:
</td>
<td>
This product is the result of integrating the following input data:
* early crop map
* soil information
* climate series of observed weather data (e.g. 20 years of daily temperature and precipitation data)
</td> </tr>
<tr>
<td>
Input Data Archiving and rolling policies
</td>
<td>
* Climate data archiving: about 300 kB for each weather grid cell
* Rolling policy: annual update, if possible
</td> </tr>
<tr>
<td>
Frequency of update (refresh rate):
</td>
<td>
monthly (when requested)
</td> </tr>
<tr>
<td>
Format:
</td>
<td>
Comma separated value file (.csv) with the following fields:
* ID_CASE, integer with 5 digits
* CROP, alphanumeric string
* SOIL, alphanumeric string
* METEO, integer of 5 digits
* p5 (mm), float, precision: 2 digits
* p25 (mm), float, precision: 2 digits
* p50 (mm), float, precision: 2 digits
* p75 (mm), float, precision: 2 digits
* p95 (mm), float, precision: 2 digits
</td> </tr>
<tr>
<td>
Naming convention:
</td>
<td>
MOSES_SeasonalIrriClimate_AAAAA_MMM.csv where:
* MOSES_SeasonalIrriClimate is the product identifier
* AAAAA is the computation area name, composed of 5 capital letters (e.g. ITALY, SPAIN, MAROC, ROMAN)
* MMM is the 3-month period of the seasonal forecasts, composed of the initial letters of the forecast months
</td> </tr>
<tr>
<td>
Archiving and preservation
</td>
<td>
Values for the DA Italy:
About 100 kB for each monthly .csv emission.
About 8 MB for the corresponding shapefile (see data sharing).
The files are archived on a storage server made available on Esri Italia
premises (same solution adopted for storage of Early Crop Maps described in
paragraph 3.1.1).
</td> </tr>
<tr>
<td>
Data sharing:
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph 3.1.1)
Seasonal irrigation climate data are saved in folders that can be found in the
server folder:
DA_XX/YYYY/SWB/SEASONAL/MMM
Where XX represents the demonstration area code, YYYY the year and MMM the
acronym of the 3-month forecast period.
This dataset, together with the corresponding SeasonalIrriForecast, is
automatically processed on the MOSES geoDataBase, using the Unit Crop Map of
the corresponding DA.
A copy of the resulting maps is saved as shapefile (zipped) in the same
directory, with this naming convention:
swbSeasonal_YYYY_MMM.zip
Where YYYY is the year and MMM the 3-month forecast period.
</td> </tr>
<tr>
<td>
Standards and metadata:
</td>
<td>
</td> </tr> </table>
### 3.8.2. Seasonal irrigation forecast
<table>
<tr>
<th>
Product ID.
</th>
<th>
**SeasonalIrriForecast**
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
**Seasonal irrigation forecast**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Seasonal irrigation forecasts are the 3-month probabilistic forecasts of crop
water needs computed by the MOSES soil water balance processor.
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
This product provides, for each distinct combination of the unit map,
information about the total crop water needs for the forecast season. In more
detail, the data provided are the irrigation statistical distribution in
millimeters, expressed as percentiles for each unit map.
This information has to be integrated with the seasonal irrigation climate
output (see the specific table of the product SeasonalIrriClimate) in order to
compare the seasonal irrigation forecast with the irrigation climate and to
evaluate the signal of the forecast with respect to the climate.
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
1. **ID_CASE** : identifier of the distinct combination of crop map, soil map and meteo grid.
2. **CROP:** identifier of the crop for the water balance processor
3. **SOIL:** identifier of the soil type for the water balance processor
4. **METEO:** identifier of the meteo cell that refers to the meteo grid of the computation area
5. **p5**: the 5th percentile of the total seasonal irrigation quantity for the crop
6. **p25**: the 25th percentile of the total seasonal irrigation quantity for the crop
7. **p50**: the 50th percentile of the total seasonal irrigation quantity for the crop
8. **p75**: the 75th percentile of the total seasonal irrigation quantity for the crop
9. **p95**: the 95th percentile of the total seasonal irrigation quantity for the crop
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
Millimeters
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
Monthly / District area
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
3-months
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
Computation area
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
Vectorial, the same resolution as the unit map
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
UTM WGS84
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
This product is the result of integrating the following input data:
* early crop map
* soil information
* synthetic series of daily temperature and precipitation generated by seasonal probabilistic forecast
(for more details see SyntheticSeries description and the D3.2 - Seasonal
probabilistic forecasting)
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
Archiving: each synthetic series requires about 1 MB of storage (to be
multiplied by the number of cells of the meteo grid)
Rolling policy: annual
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
monthly (when requested).
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
Comma separated value file (.csv) with the following fields:
* ID_CASE, integer with 5 digits
* CROP, alphanumeric string
* SOIL, alphanumeric string
* METEO, integer of 5 digits
* p5 (mm), float, precision: 2 digits
* p25 (mm), float, precision: 2 digits
* p50 (mm), float, precision: 2 digits
* p75 (mm), float, precision: 2 digits
* p95 (mm), float, precision: 2 digits
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
**MOSES_SeasonalIrriForecast_AAAAA_YYYY_MMM.csv** where:
* **MOSES_SeasonalIrriForecast** is the product identifier
* **AAAAA** is the computation area name, composed of 5 capital letters (e.g. ITALY, SPAIN, MAROC, ROMAN)
* **YYYY** is the year of emission of the seasonal forecast
* **MMM** is the 3-month period of the seasonal forecast, composed of the initial letters of the forecast months
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Values for the DA Italy:
About 100 kB for each monthly .csv emission.
About 8 MB for the corresponding shapefile (see data sharing).
The files are archived on a storage server made available on Esri Italia
premises (same solution adopted for storage of Early Crop Maps described in
paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph 3.1.1)
Seasonal irrigation forecast data are saved in folders that can be found in
the server folder:
DA_XX/YYYY/SWB/SEASONAL/MMM
Where XX represents the demonstration area code, YYYY the year and MMM the
acronym of the 3-month forecast period.
This dataset, together with the corresponding SeasonalIrriClimate, is
automatically processed on the MOSES geoDataBase, using the Unit Crop Map of
the corresponding DA.
A copy of the resulting maps is saved as zipped shapefile, in the same
directory, with this naming convention:
**swbSeasonal_YYYY_MMM.zip**
Where YYYY is the year and MMM the 3-month forecast period.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
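As both product descriptions prescribe integrating the forecast with the climate percentiles, the following sketch joins the two CSV products on ID_CASE to extract a simple forecast signal. File names follow the stated conventions; the column names assume headers without the "(mm)" unit annotation.

```python
# Minimal sketch: compare forecast and climate irrigation percentiles per
# ID_CASE (illustrative file names; plain column headers assumed).
import csv

def load(path: str) -> dict:
    with open(path, newline="") as f:
        return {row["ID_CASE"]: row for row in csv.DictReader(f)}

climate = load("MOSES_SeasonalIrriClimate_ITALY_JJA.csv")
forecast = load("MOSES_SeasonalIrriForecast_ITALY_2017_JJA.csv")

for id_case, fc in forecast.items():
    cl = climate.get(id_case)
    if cl is None:
        continue  # unit-map combination absent from the climate product
    anomaly = float(fc["p50"]) - float(cl["p50"])
    print(id_case, f"median seasonal irrigation anomaly: {anomaly:+.2f} mm")
```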
## 3.9. Short term irrigation forecast
<table>
<tr>
<th>
Product ID.
</th>
<th>
**ShortTermIrriForecast**
</th> </tr>
<tr>
<td>
Product Name
</td>
<td>
**Short-term irrigation forecast**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Short-term irrigation forecasts are the 7-day forecasts of crop water needs
computed by the MOSES soil water balance module.
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
This product provides, for each distinct combination of the unit map
(intersection of the crop, soil and meteo maps), information about the status
of crop water availability, forecasts of rainfall and crop water needs for the
next 7 days, and the model assessment of irrigation for the previous 14 days.
This set of information provides a framework for the irrigation needed by
crops for the next week, taking into account the actual irrigation carried out
in the previous 14 days (e.g. if the model computes 40 mm for the previous 14
days, whereas the farmer has irrigated 60 mm, it is possible to decrease the
short term irrigation forecast by 20 mm).
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
1. **dateForecast** : date of the last observed weather data;
2. **ID_CASE** : identifier of the distinct combination of crop map, soil map and meteo grid;
3. **CROP:** identifier of the crop for the water balance module;
4. **SOIL:** identifier of the soil type for the water balance;
5. **METEO:** identifier of the meteo cell that refers to the meteo grid of the demonstration area;
6. **readilyAvailableWater** : current readily available water for the crop [mm].
7. **soilWaterDeficit**: difference between field capacity and the actual quantity of water, summed over all the layers of the rooting depth [mm].
8. **forecast7daysPrec** : 7-days forecast of precipitation (sum) [mm].
9. **forecast7daysMaxTransp** : 7-days forecast of maximum crop transpiration (sum) [mm].
10. **forecast7daysIRR**: 7-day forecast of irrigation needs (sum) [mm].
11. **previousAllSeasonIRR**: summed irrigation simulated by means of observed weather data during the whole irrigation season, until the date of the forecast [mm].
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
Millimeters
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
Daily / District area
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
7-days
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
Demonstration area
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
Vectorial, the same resolution as the unit map
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
UTM WGS84
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
This product is the result of integrating several data sources:
* early crop map and in-season crop map
* soil information
* observed weather data (daily temperature and precipitation)
* 7-days weather forecast data (daily temperature and precipitation)
(for more details see D3.3 - Irrigation forecasting package)
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
(Values for the DA Italy)
Weather input: about 20 Mb / rolling policy: everyday
Soil and parameters input: about 1 Mb / rolling policy: stable
Crop map: see crop map product (paragraph 3.1)
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
Daily
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
Comma separated value file (.csv) with the following fields:
* dateForecast, ISO8601 (YYYY-MM-DD)
* ID_CASE, integer with 5 digits
* CROP, alphanumeric string
* SOIL, alphanumeric string
* METEO, integer of 5 digits
* readilyAvailableWater (mm), float, precision: 1 digit
* soilWaterDeficit (mm), float, precision: 1 digit
* forecast7daysPrec (mm), float, precision: 1 digit
* forecast7daysMaxTransp (mm), float, precision: 1 digit
* forecast7daysIRR (mm), integer
* previousAllSeasonIRR (mm), integer
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
**MOSES_ShortTermIrriForecasts_AAAAA_YYYYMMDD.csv**
where:
* **MOSES_ShortTermIrriForecasts** is the product identifier
* **AAAAA** is the demonstration area name, composed of 5 capital letters (e.g. ITALY, SPAIN, MAROC, ROMAN)
* **YYYYMMDD** is the emission date of the short term forecasts
</td> </tr>
<tr>
<td>
**Archiving and preservation:**
</td>
<td>
Values for the DA Italy:
About 400 kB for each daily .csv emission.
About 8 MB for the corresponding shapefile (see data sharing).
The files are archived on a storage server made available on Esri Italia
premises (same solution adopted for storage of Early Crop Maps described in
paragraph 3.1.1).
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data access is possible through FTP protocol (connection parameters reported
in “data sharing” section of paragraph 3.1.1)
Short-term irrigation forecast data are saved in folders that can be found in
the server folder:
DA_XX/YYYY/SWB/INSEASON/
Where XX represents the demonstration area code and YYYY is the year.
This dataset is automatically processed on the MOSES geoDataBase, using the
Unit Crop Map of the corresponding DA.
A copy of the resulting maps is saved as zipped shapefile, in the same
directory, with this naming convention:
swbShortTerm_YYYYMMDD.zip
Where YYYYMMDD is the date of emission of the forecast.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
</td> </tr> </table>
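The worked example in the content specification (model assessment 40 mm, farmer irrigated 60 mm, forecast reduced by 20 mm) can be sketched as follows; the clipping at zero is our assumption:

```python
# Adjust the 7-day irrigation forecast by the surplus (or deficit) between
# the farmer's actual irrigation and the model's 14-day assessment.
def adjusted_forecast(forecast7days_irr: float,
                      model_prev14: float,
                      actual_prev14: float) -> float:
    surplus = actual_prev14 - model_prev14  # water given beyond the model estimate
    return max(forecast7days_irr - surplus, 0.0)  # clipping at zero is assumed

# Example from the description: model 40 mm, farmer 60 mm -> minus 20 mm.
print(adjusted_forecast(30.0, 40.0, 60.0))  # 10.0 mm
```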
## 3.10. In-field measures of water balance components and IRRINET water
balance data
Canale Emiliano Romagnolo (CER) provides the following datasets to the MOSES
platform:
* IRRINET data (irrigation requirement data of the crops in the Italian DA)
* DA-IT database with data collected during in-field measurement campaigns
The two datasets are detailed in the following tables.
### 3.10.1 IRRINET
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**IRRINET**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
_Irrigation scheduling & water balance data _
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
_The product provides information related to the irrigation requirements of
the crops for a specific day within the irrigation season:_
* _ET0: reference evapotranspiration_
* _ETmax: evapotranspiration of the crop in optimal condition_
* _ETact: actual evapotranspiration of the crop_
* _IrriDate: forecast date of the next irrigation for the crop_
* _IrriAmount: irrigation amount of the next irrigation gift_
* _SoilMoisture: soil moisture content_
* _RootDepth: depth of crop roots_
* _DegDay: sum of the growing degree days_
* _IrriNeeds: sum of the irrigation requirements of the crop_
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
_POINT layer: the above information is provided as point attributes_
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
* _ET0: mm/ha of the day_
* _ETmax: mm/ha of the day_
* _ETact: mm/ha of the day_
* _IrriDate: date in American format (MM/DD/YYYY)_
* _IrriAmount: mm/ha_
* _SoilMoisture: mm/ha_
* _RootDepth: mm_
* _DegDay: integer_
* _IrriNeeds: mm/ha_
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
_Daily based values for the required date within crop life cycle_
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
_Area covered by the service_
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
_Free_
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
_WGS84 projection in decimal degree_
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
_Query parameter: specific date within the crop life cycle_
_Inputs to be stored before the call for each plot/crop_
* _Plot coordinates_
* _Irrigation system (category)_
* _Crop type_
* _Start date of the crop_
* _Harvesting date_
* _Kind of rootstock_
* _Planting density_
* _Inter rows management: weeds/tillage_
* _Planting year_
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
_Around 10Kb per plot_
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
_Daily_
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
_JSON/XML stream_
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
_No files_
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
_HTTP API calls are freely available to project members. Authentication may
be needed. An XML/JSON parser is needed to extract each piece of information
from the stream (see the sketch after this table)._
_API documentation will be made available._
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
_International Meteorological Standards, FAO ID56. Metadata: crop water
requirement, seasonal water stress/replenishment._
</td> </tr> </table>
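Because the table above only fixes the transport (an HTTP API delivering a JSON/XML stream, with possible authentication) and defers the details to the forthcoming API documentation, the following Python sketch uses a hypothetical endpoint URL and hypothetical parameter names purely for illustration:

```python
# Minimal sketch of consuming the IRRINET JSON stream described above.
# The endpoint and parameter names are hypothetical placeholders.
import requests

API_URL = "https://example.org/irrinet/api"  # hypothetical endpoint

def fetch_irrinet(plot_id: str, date: str, token: str) -> dict:
    """Query the irrigation requirements for one plot on a given date."""
    response = requests.get(
        API_URL,
        params={"plot": plot_id, "date": date},        # hypothetical names
        headers={"Authorization": f"Bearer {token}"},  # auth may be needed
        timeout=30,
    )
    response.raise_for_status()
    # Expected fields per the description above: ET0, ETmax, ETact,
    # IrriDate, IrriAmount, SoilMoisture, RootDepth, DegDay, IrriNeeds.
    return response.json()

record = fetch_irrinet("00123", "2018-06-01", token="<token>")
print(record.get("IrriAmount"))
```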
### 3.9.2 DA-IT database
<table>
<tr>
<th>
**Product ID.**
</th>
<th>
</th> </tr>
<tr>
<td>
**Product Name**
</td>
<td>
**DA-IT Database**
</td> </tr>
<tr>
<td>
**Purpose**
</td>
<td>
Collection of data from various measurements made in the DA-IT
</td> </tr>
<tr>
<td>
**Description (Content Specification):**
</td>
<td>
The product contains the measured data collected in the DA-IT during the
measurement campaigns (ground truth) for the following classes and parameters:
Soil Moisture
* EM38: apparent EC, soil moisture volumetric content
* TDR: soil moisture volumetric content
* Gravimetric samples: soil moisture volumetric content
Crop biometrics
* Plant Height: height of the plant canopy
* Plant Width: width of the plant canopy
* Plant Length: length of the plant canopy
* Canopy Cover: spatial arrangement of the aboveground plant vegetation
* Canopy Volume: volume of the plant aboveground vegetation
* FAPAR: fraction of absorbed photosynthetically active radiation
* LSW: Leaf Specific Weight, dry matter weight per leaf area unit at full maturity
* LAI: Leaf Area Index, one-sided green leaf area per unit ground surface area
* Phenology: plant development stage
Crop Yield
* Yield: commercial production
Irrigation
* Irr. Volume: volume of irrigation water supplied
* Irr. Method: irrigation technology applied (Sprinkler = 1; Drip = 2; Mini-Sprinkler = 3; Surface = 4)
</td> </tr>
<tr>
<td>
**Layers(*):**
</td>
<td>
POINT layer: the above information is provided as point attributes
</td> </tr>
<tr>
<td>
**Measurement Unit:**
</td>
<td>
EM38 = mS/m, m³/m³
TDR = m³/m³
Gravimetric sample = m³/m³
Plant Height = m
Plant Width = m
Plant Length = m
Canopy Cover = % or fraction of the ground area
Canopy Volume = m³
FAPAR = % or fraction of PAR
LSW = g DM/cm²
LAI = m²/m²
Phenology = BBCH scale or other applicable scale
Yield = t/ha
Irr. Volume = m³/ha
Irr. Method = code (1-4)
</td> </tr>
<tr>
<td>
**Field of Applicability (Temporal/Spatial):**
</td>
<td>
3 or 4 times during the crop growth cycle, at specific growth stages (early
vegetation, rapid development, full vegetative growth, senescence); 10×10 m
pixel
</td> </tr>
<tr>
<td>
**Temporal coverage**
</td>
<td>
The data are collected during the growing season from March/April to the end
of October.
</td> </tr>
<tr>
<td>
**Spatial Coverage / Area:**
</td>
<td>
DA-IT area
</td> </tr>
<tr>
<td>
**Spatial Resolution / Scale (Data Grid):**
</td>
<td>
Varies with the parameter, from less than 1 m² to 10×10 m
</td> </tr>
<tr>
<td>
**Geographic projection /** **Reference system:**
</td>
<td>
WGS84 projection in decimal degree
</td> </tr>
<tr>
<td>
**Input Data/Sources:**
</td>
<td>
Measurement instruments
</td> </tr>
<tr>
<td>
**Input Data Archiving and rolling policies**
</td>
<td>
N.A.
</td> </tr>
<tr>
<td>
**Frequency of update (refresh rate):**
</td>
<td>
Approx. every 5 weeks from April to October
</td> </tr>
<tr>
<td>
**Format:**
</td>
<td>
CSV file and Shape file
</td> </tr>
<tr>
<td>
**Naming convention:**
</td>
<td>
DA-IT MOSES Database
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
About 25 KB per parameter and per date, totalling approx. 3 MB (3000 KB)
</td> </tr>
<tr>
<td>
**Data sharing:**
</td>
<td>
Data are available for MOSES internal use until publication.
</td> </tr>
<tr>
<td>
**Standards and metadata:**
</td>
<td>
No standards available
</td> </tr> </table>
# 4\. Structure of Web Services
The datasets generated by the MOSES processors are organized in hierarchic
layer structures in order to be published by ArcGIS Server and made available
through the WebGIS interface. The web service structure is replicated for
each Demonstration Area in order to coherently separate and secure the data
of the different users.
In the following tables, we list the available web services and, for each
one, the public URL of the service, a description and the structure of the
layers inside the service. The Demonstration Area the products refer to is
specified in the URL and identified by two alphanumeric characters (the DA
short name specified in the global configuration file). As examples, we
report below the URLs of the products computed in the Italian Demonstration
Area. Identical services are being realized for the other DAs in order to
publish the products generated in the 2018 irrigation season.
<table>
<tr>
<th>
**Early Crop Map**
</th>
<th>
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/earlyCropMap/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes one layer (id = 0), with the output of the
Early Crop Map processor for the current irrigation season and DA (Italy, in
the example URL reported above), namely maps of broad crop classes at a very
early stage in the irrigation season.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **sde.SDE.BETA_ECM_IT**
</td> </tr> </table>
**Table 1 - Early Crop Map web service description**
<table>
<tr>
<th>
**Unit (in-season) Crop Map**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/unitCropMap/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes one layer (id = 0) with the output of the Unit
Crop Map processor, namely the processor that extracts the soil units of the
DA that are uniform according to a set of characteristics such as cultivated
crop, soil composition, belonging to the same meteorological observation and
forecast grid, type of exploited irrigation system, etc.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **UnitCropMap**
</td> </tr> </table>
**Table 2 - In season Crop Map web service description**
<table>
<tr>
<th>
**Seasonal irrigation forecast**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/seasonalIrrigationForecast/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes the output of the Soil Water Balance processor
in “Seasonal” configuration on the reference DA, which produces the
statistical distribution of irrigations, expressed as percentiles, for each
computational unit. It contains three layers, named “median of seasonal
irrigation forecast [mm]” (layer ID = 0), “median of seasonal irrigation
climate [mm]” (layer ID = 1) and “seasonal irrigation anomaly forecast [mm]”
(layer ID = 2), whose content is described in the following tables.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **median of seasonal irrigation forecast [mm]**
Layer 1: **median of seasonal irrigation climate [mm]**
Layer 2: **seasonal irrigation anomaly forecast [mm]**
</td> </tr> </table>
**Table 3 - Seasonal irrigation forecast web service description**
<table>
<tr>
<th>
**In-season irrigation forecast**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/inSeasonIrrigationForecast/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes the output of the Soil Water Balance processor
in “InSeason” configuration, which produces the short-term (seven-day)
irrigation forecasts of crop water needs. It publishes the daily forecasts
issued since the beginning of the current crop season.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **Irrigation forecast (7 days)**
Layer 1: **Previous irrigation assessment (all season)**
Layer 2: **Precipitation forecast (7 days)**
Layer 3: **ET crop (7 days) [mm]**
Layer 4: **Previous irrigation assessment (14 days)**
Layer 5: **Readily available water**
Layer 6: **current soil water deficit [mm]**
Layer 7: **root depth [m]**
</td> </tr> </table>
**Table 4 - In-season irrigation forecast web service description**
<table>
<tr>
<th>
**Current In-season irrigation forecast**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/currentInSeasonIrrigationForecast/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes the last available output generated by the Soil
Water Balance processor in “In-Season” configuration on the DA specified in
the URL. Every day, as soon as the SWB processor generates new output, the
content of the feature class published by the service is overwritten. Unlike
the previous service, this one can easily be used to visualize the current
irrigation forecasts through web maps or applications.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Same layers and content of the “in-season irrigation forecast” service
</td> </tr> </table>
**Table 5 – Current In-season irrigation forecast web service description**
<table>
<tr>
<th>
**Weather forecast on the meteorological grid covering the DA**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/WeatherForecast/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes the last available 7-day weather forecasts
computed on the meteorological grid covering the DA specified in the URL.
Weather forecasts are updated every day, i.e. at each execution of the Soil
Water Balance processor. Short-term weather forecasts are actually an input
of that processor, but they are also published as a product since they can be
useful to potential MOSES clients.
The web service publishes a single layer (ID = 0) containing all weather data.
</td> </tr>
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **precipitation forecast [mm]**
</td> </tr> </table>
**Table 6 - Weather forecast web service description**
<table>
<tr>
<th>
**Crop Water Demand processor products**
</th> </tr>
<tr>
<td>
URL
</td>
<td>
https://moses.esriitalia.it/adminarcgis/rest/services/RCDAIT/cropWaterDemand/MapServer
</td> </tr>
<tr>
<td>
Description
</td>
<td>
The REST web service publishes the outputs of the Crop Water Demand processor
generated during the current irrigation season on the DA specified in the URL.
It allows access to all data generated since the beginning of the current
irrigation season.
<tr>
<td>
Layer structure
</td>
<td>
Layer 0: **NDVI**
Layer 1: **LAI**
Layer 2: **Crop coefficient analytical**
Layer 3: **Crop coefficient empirical**
Layer 4: **CWD empirical [mm/day]**
Layer 5: **CWD analytical [mm/day]**
Layer 6: **CWD forecast empirical [mm/week]**
Layer 7: **CWD forecast analytical [mm/week]**
Layer 8: **Gross irrigation requirement emp [mm/day]**
Layer 9: **Gross irrigation requirement analytical [mm/day]**
</td> </tr> </table>
**Table 7 - Crop Water Demand web service description**
Data published by all web services may be accessed by means of standard
queries. The server supports both the HTTP GET and POST methods for
request-responses.
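As an illustration, here is a minimal Python sketch of such a standard query against layer 0 of the Early Crop Map service, using the generic ArcGIS REST `query` operation; the attribute names returned depend on the published feature class:

```python
# Minimal sketch: query all features of layer 0 of the Early Crop Map
# MapServer via the standard ArcGIS REST "query" operation.
import requests

SERVICE = ("https://moses.esriitalia.it/adminarcgis/rest/services/"
           "RCDAIT/earlyCropMap/MapServer")

params = {
    "where": "1=1",             # select all features
    "outFields": "*",           # return every attribute
    "returnGeometry": "false",  # attributes only
    "f": "json",                # JSON instead of the HTML page
}
response = requests.get(f"{SERVICE}/0/query", params=params, timeout=30)
response.raise_for_status()
features = response.json().get("features", [])
print(f"{len(features)} features returned")
```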
# 5\. Scientific publications
According to the requirements set by [RD2], the MOSES consortium will provide
open access to all scientific publications resulting from the project. Open
access will also be guaranteed to the datasets exploited in the publications.
The consortium will create a specific section of the project website, called
Publications, where all scientific papers will be listed and a machine-
readable copy of the final version of each article will be linked (green open
access approach). In order to guarantee reliability and continuous access to
the publications, the full-text articles linked by our website will be
physically stored in a public online repository, such as Zenodo
(_http://zenodo.org/_).
The list of publications on the website and the uploads of the full-text
articles in the public repository will be updated whenever needed.
Furthermore, the complete list of the publications produced by the project’s
partners, together with the access modes, will be included in Deliverable 6.2
(Communication and Dissemination Report).
# 1\. INTRODUCTION
This Data Management Plan (DMP) has been prepared mostly by following the
document “Guidelines on FAIR Data Management in Horizon 2020” (Version 3.0,
26 July 2016). This final version of the DMP presents the data management
strategy of the StemForYouth (SFY) H2020 project as well as the description
of all the research data sets.
The general goal of this SWAFS project is to bring teenagers closer to
Science and Technology. Students are thus at the core of the project, and it
is especially important to implement Responsible Research and Innovation
(RRI) keys in all activities of the project. The project has to ensure that
RRI concepts are assimilated by the students, in all their significant
dimensions, as possible future researchers and responsible citizens.
For instance, through the Citizen Science projects co-created and implemented
in their schools, students themselves collect, treat and analyse research
data. Having a comprehensive data management plan in order to allow Open
Access to Research Data is thus of vital importance, not only for the
researchers participating in the project but also for disseminating RRI best
practices to the youngest participants.
# 2\. DATA SUMMARY
**2.1 What is the purpose of the data collection/generation and its relation
to the objectives of the project?**
The data sets of this project have been mainly generated in WP4 (Citizen
Science at School), and WP7 (Trial and outreach activities), and assessed by
WP8 (Assessment and recommendations).
_**Citizen Science Data** _
The scientific research data collected in WP4 and WP7 are related to the
introduction of Citizen Science at School. The Citizen Science experiments
have been performed through a collective research process. The young boys and
girls have participated in the governance of the research projects, designed
the experiments, conducted them, and in some cases analysed the data and
interpreted the results. The experiments and their data gathering have been
approved by the Ethics Committee of the Universitat de Barcelona.
In relation to the main objective of the project (to bring teenagers closer
to Science and Technology), the introduction of Citizen Science at school
follows the latest research in science education, which advocates a reduced
emphasis on memorisation of facts-based content and increased engagement in
the process of inquiry through hands-on or learning-by-doing activities. It
has also been demonstrated that students’ participation and motivation
increase strongly when they take part in Citizen Science projects, as a
result of the close contact with scientists, the perception of their ability
to solve issues that are important for the community, and their empowerment
as true owners and disseminators of the project results.
_**Trial and outreach activities** _
The scientific research data collected under the WP8 framework, during the
trials implementation (Task 7.1 and Task 7.4), are related to students’
attitudes towards STEM and their future career choices. High school students
participated in Phases I and II of the trials by carrying out STEMforYouth
activities generated as part of WP6 from the following sub-courses:
Mathematics, Engineering, Physics, Chemistry, Medicine and Astronomy.
The objective was to identify students’ attitudes towards STEM, their
interest in STEM, and their present and future career choices, before and
after the implementation of the STEMforYouth sub-courses. In general,
students increased their motivation for learning and improved their attitudes
towards STEM, stimulated by the learning methodologies employed in the
sub-courses: hands-on activities, inquiry-based learning, collaborative
learning, and learning via experiments. In addition, students working on
specific modules, such as Mathematics, found the subject more useful for
daily-life purposes. Students also acquired and reinforced their knowledge.
**2.2 What types and formats of data do the project generate/collect?**
_**Citizen Science Data** _
In relation to the Citizen Science experiments, the project collects **human
decision-making** data. The experiments take place in public spaces, and
passers-by freely decide to participate, completing surveys and playing games
on a tablet.
The experiments are divided into three parts, each generating data with
different characteristics. In the first part we collect sociodemographic data
and, in some experiments, survey data on the participants’ perception of the
topic of study (e.g. air quality in Games xAire or coastal environmental
pollution in Games xPalaio Faliro). In the second part we collect the
decision-making data proper by means of social dilemma games, capturing the
interactions between the participants in the behavioural games. Finally, in
the third part of the experiments we (optionally) collect survey data about
the topic of study, the decision-making process or the participants’
experience.
The data are captured in a MySQL database; the behavioural actions as well as
the survey data are shared as CSV tables. The questions and answers of the
surveys are included in an XLS file named _QuestionsAnswersSurvey_. Each
repository has its own _README_ file with detailed information about each
field. Here is an example of the files and some of the fields that can be
found (a short analysis sketch follows the field lists below):
_Files_
* README.txt: detailed explanation of the metadata
* QuestionsAnswersSurvey.xls: set of the surveys’ questions and answers
* session.csv: data of each game
* users.csv: data of valid users
* dictator.csv: data collected in Dictator’s Game
* snowdrift.csv: data collected in Snowdrift Game
* trust.csv: data collected in Trust Game
Fields in Snowdrift Game (snowdrift.csv)
* id: choice identifier
* user_id: user's identifier
* rival_id: rival's identifier
* rol: role (E: symmetric, A: advantage, D: disadvantage)
* choice: choice (C: Cooperate, D: Defect)
* guess: guess choice (C: Cooperate, D: Defect)
* gain: total score
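As an illustration of how these tables fit together, here is a minimal Python sketch, assuming pandas is installed and the files above sit in the working directory; the users.csv column names are assumptions to be checked against the README:

```python
# Minimal sketch: cooperation rate per role in the Snowdrift Game.
import pandas as pd

snowdrift = pd.read_csv("snowdrift.csv")
users = pd.read_csv("users.csv")

# choice is "C" (Cooperate) or "D" (Defect); rol is E/A/D as listed above.
cooperation_rate = (
    snowdrift.assign(cooperated=snowdrift["choice"].eq("C"))
    .groupby("rol")["cooperated"]
    .mean()
)
print(cooperation_rate)

# Join with the valid-user table to add sociodemographics; the "id" column
# name in users.csv is an assumption, to be checked against the README.
merged = snowdrift.merge(users, left_on="user_id", right_on="id",
                         suffixes=("", "_user"))
```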
_**Trial and outreach activities** _
The data gathered from the Trial and Outreach Activities were mainly analysed
through a quantitative approach to capture **students’ beliefs and
attitudes**. The questionnaires were administered to the students before and
after the implementation of the STEMforYouth activities. The students freely
accepted to fill in the questionnaires, aware that they were participating in
a data collection process with a research purpose.
The questionnaires collect two types of data: sociodemographic data and
attitudes data. Regarding the sociodemographic data, the questionnaires
collect students’ pseudonym, age, sex, country, and future academic and
career preferences. The attitudes questions collect data regarding:
* Students’ image of the scientist: how they see STEM professionals, including implicit stereotypes about STEM professionals such as their vocational nature, their reserved nature and their high intellectual capacity.
* Students’ attitudes towards STEM, like enjoyment and self-concept.
* Students’ perceptions about the utility of STEM disciplines, including ‘career purpose’ and ‘daily-life purpose’ dimensions.
The data were collected in paper-based or computer-based format. In the
latter case, the questionnaires were generated on Google Forms. This format
was used for the participant schools with a sufficient number of computers
for all the students and suitable Internet access; this was a minority group
in a couple of countries.
The aforementioned data have been uploaded in XLS format, together with a
.TXT file with general information and codes, called “README”, and an XLS
file with the questions (a minimal loading sketch follows the field list
below).
_Files_
* README.txt: detailed explanation of the metadata.
* Questions.xls: set of questions included in the attitudes questionnaire.
* STEMforYouth_data.xls: data of the students’ attitudes towards STEM.
Fields in the collected data (STEMforYouth_data.xls)
* id: student’s identifier.
* Country: country where students implemented the STEMforYouth activity or activities.
* Age: student’s age.
* Gender: student’s gender (0: female and 1: male).
* Student’s answer to their academic and professional plans
* Student’s answer to the Likert scale (1: Strongly Agree, 2: Agree, 3: Somewhat Agree, 4: Somewhat Disagree; 5: Disagree; 6 Strongly Disagree).
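A minimal loading sketch for this dataset, assuming pandas with an Excel engine (xlrd/openpyxl) is installed; the Likert column prefix is an assumption for illustration only:

```python
# Minimal sketch: read the attitudes data and reverse the Likert coding so
# that higher values mean stronger agreement.
import pandas as pd

df = pd.read_excel("STEMforYouth_data.xls")

# In the raw data 1 = Strongly Agree ... 6 = Strongly Disagree.
likert_cols = [c for c in df.columns if c.startswith("Q")]  # assumed prefix
df[likert_cols] = 7 - df[likert_cols]

print(df.groupby(["Country", "Gender"]).size())  # respondents per group
```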
_**Do you re-use any existing data and how?** _
## Citizen Science Data
Data from previous Citizen Science experiments on the same themes could be
used for comparison purposes, for calibration or to complete the set of
collected data. These existing data from Universitat de Barcelona are already
deposited in repositories such as Dryad, GitHub and Zenodo with a CC0 1.0
license, allowing re-use. The data gathered through the citizen science
experiments could also be crossed with socio-economic demographic data (such
as average life expectancy, average wage, or average house prices in a given
neighbourhood or region) made publicly available by public administrations in
open repositories.
## Trial and outreach activities data
Only data from the STEMforYouth project was analysed.
_**What is the origin of the data?** _
## Citizen Science Data
The data are collected during Citizen Science experiments. The volunteers
freely and consciously deliver their data, which result from their
participation in the experiment. In addition, the experiments are designed to
solve, or propose solutions to, issues relevant to the community, based on
the evidence collectively gathered.
## Trial and outreach activities data
The data were collected during the trials implementation in six European
countries: Poland, Italy, Greece, Czech Republic, Slovenia and Spain.
Students freely accepted to fill in the questionnaires, conscious that they
were part of a research study. The trials were carried out to test the
attractiveness, innovativeness and usefulness of the STEMforYouth sub-courses
through their implementation in a wide variety of contexts.
_**What is the size of the data?** _
## Citizen Science Data
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**Size**
</th> </tr>
<tr>
<td>
**STEMForYouth: Games xBadalona**
</td>
<td>
_67 KB_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xViladecans**
</td>
<td>
_87 KB_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xBarcelona**
</td>
<td>
_72 KB_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xPalaioFaliro**
</td>
<td>
_113 KB_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xAire**
</td>
<td>
_186 KB_
</td> </tr> </table>
## Trial and outreach activities data
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
</th>
<th>
**Size**
</th> </tr>
<tr>
<td>
**STEMForYouth: Trials and Outreach Data**
</td>
<td>
_538 KB_
</td> </tr> </table>
_**To whom might it be useful ('data utility')?** _
## Citizen Science Data
Each data set could be analysed by the students that designed the experiments
and by the researchers participating in these dynamics. In addition, the Open
Data might be useful to different collectives, such as:
1. Other scientists with convergent research lines in terms of collective decision making.
2. Public institutions concerned by the social questions raised by the experiments. The data may serve as evidence to support some policies.
3. Teachers and students that will use the Citizen Science toolkit produced in the frame of StemForYouth in order to introduce Citizen Science at school.
## Trial and outreach activities
The data could be useful for research purposes. They can be interesting for
different researchers, such as:
1. Researchers whose research area is students’ attitudes.
2. Researchers whose research area is students’ academic and professional plans.
# 3\. FAIR DATA
**3.1 Making data findable, including provisions for metadata**
_**Are the data produced and/or used in the project discoverable with
metadata, identifiable and locatable by means of a standard identification
mechanism (e.g. persistent and unique identifiers such as Digital Object
Identifiers)?** _
Yes, the data are associated with metadata and locatable by means of a DOI in
both cases.
## Citizen Science Data
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**DOI**
</th> </tr>
<tr>
<td>
**STEMForYouth: Games xBadalona**
</td>
<td>
_10.5281/zenodo.1308963_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xViladecans**
</td>
<td>
_10.5281/zenodo.1308974_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xBarcelona**
</td>
<td>
_10.5281/zenodo.1308972_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xPalaioFaliro**
</td>
<td>
_10.5281/zenodo.1314180_
</td> </tr>
<tr>
<td>
**STEMForYouth: Games xAire**
</td>
<td>
_10.5281/zenodo.1314207_
</td> </tr> </table>
## Trial and outreach activities
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**DOI**
</th> </tr>
<tr>
<td>
**STEMForYouth: Trials and Outreach Data**
</td>
<td>
_10.5281/zenodo.1472067_
</td> </tr> </table>
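Since the numeric suffix of a 10.5281/zenodo.* DOI is the Zenodo record id, the record metadata can also be retrieved programmatically through the public Zenodo REST API; a minimal sketch:

```python
# Minimal sketch: fetch the metadata of one of the datasets listed above.
import requests

doi = "10.5281/zenodo.1308963"        # STEMForYouth: Games xBadalona
record_id = doi.rsplit(".", 1)[-1]    # -> "1308963"
r = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
r.raise_for_status()
meta = r.json()["metadata"]
print(meta["title"], meta.get("version"), meta.get("license"))
```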
_**What naming conventions do you follow?** _
## Citizen Science Data
All the data set names will contain, in this order: STEMForYouth / name or
reference of the experiment.
## Trial and outreach activities
A single file including STEMForYouth Data.
_**Do you provide search keywords that optimize possibilities for re-use?** _
## Citizen Science Data
The keywords that describe our data are: Human Decision Making, Social
Dilemmas, Citizen Science, STEMForYouth, Public Experiments, Collective
Experiments, Action Research, Human Behaviour, Collective Action, Game Theory,
Cooperation.
## Trial and outreach activities
The keywords that describe our data are: STEM, STEM education, STEMforYouth,
Attitudes, Image of Scientist, Academic plans, Career Choice, Enjoyment,
Utility, Astronomy, Engineering, Mathematics, Physics, Chemistry, Medicine.
_**Do you provide clear version numbers?** _
Yes, in both cases, the public repository provides the dataset version
following the convention of _Semantic Versioning 2.0.0_ .
_**What metadata has been created? In case metadata standards do not exist in
your discipline, please outline what type of metadata has been created and
how.** _
In both cases, the metadata created carefully explain and describe the
content and meaning of each field of the database. Each dataset repository
contains a README file with its associated metadata.
**3.2 Making data openly accessible**
_**Which data produced and/or used in the project have been made openly
available as the default? If certain datasets cannot be shared (or need to be
shared under restrictions), explain why, clearly separating legal and
contractual reasons from voluntary restrictions.** _
## Citizen Science Data
The full data set has been made openly available. It does not contain any
personal data, as the players use a pseudonym and the general
sociodemographic data collected, such as gender and age range, do not allow
their identification.
## Trial and outreach activities
The full data set has also been made openly available. It does not contain
any personal data. Students employed a pseudonym, and the sociodemographic
data collect only basic information such as gender, age and country, which
does not allow the students’ identification.
_**Note that in multi-beneficiary projects it is also possible for specific
beneficiaries to keep their data closed if relevant provisions are made in the
consortium agreement and are in line with the reasons for opting out.** _
The data sets generated through the Citizen Science experiments and the Trial
and outreach activities will be all made openly available.
The Citizen Science Data associated with the Citizen Science pilot
experiments (Games xBadalona/Viladecans/Barcelona) are already open. The
Citizen Science Data associated with the replication of the Citizen Science
pilots (Games xPalaio Faliro and Games xAire) will be made openly available
by June 2019, when the associated research paper will be ready to be
submitted for peer-review publication.
Data from trials and outreach activities will be made openly available within
a maximum of 30 months (by the 30th of April 2021). This is because we aim to
produce research papers during this time and publish them in journals or
books. If this research work and the research instruments employed and
designed by Jose M. Diego-Mantecón are published before the expected time (30
months), the data may be opened earlier.
_**How will the data be made accessible (e.g. by deposition in a
repository)?** _
For the Citizen Science Data, the data are deposited in Zenodo using standard
CSV files for data tables.
For the trials and outreach activities, the data are deposited in Zenodo using
standard XLS files for data tables.
_**What methods or software tools are needed to access the data?** _
No specific software is necessary to access the data.
_**Is documentation about the software needed to access the data included?** _
No.
_**Is it possible to include the relevant software (e.g. in open source
code)?** _
Not applicable.
_**Where are the data and associated metadata, documentation and code
deposited? Preference should be given to certified repositories which support
open access where possible.** _
## Citizen Science Data
The data are deposited in Zenodo (OpenAire/CERN repository) and the code
associated with the games platform in Github, traditionally associated with
the Open Source movement.
## Trials and outreach activities
The data is deposited in Zenodo (OpenAire/CERN repository).
_**Have you explored appropriate arrangements with the identified
repository?** _
Yes.
_**If there are restrictions on use, how will access be provided?** _
Citizen Science Data and Trials and outreach activities data will be open and
free access. The only restriction is time: some data are already open, and
the others will be opened after 8 months (30th June 2019) or 30 months (30th
April 2021). This is the time expected for the research work and paper
publication, as described in earlier sections of this document.
_**Is there a need for a data access committee?** _
No.
_**Are there well described conditions for access (i.e. a machine readable
license)?** _
Yes.
_**How will the identity of the person accessing the data be ascertained?** _
We rely on the protocols of Zenodo and GitHub (Open Source and Open Data),
although it will generally be difficult to identify the person.
**3.3 Making data interoperable**
_**Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different
origins)?** _
## Citizen Science Data
The data produced in the project follow the standard format of behavioural
data obtained through social dilemmas. This way the results can easily be
compared with any existing data sets.
## Trials and outreach activities data
The data gathered during the trials and outreach activities capture students’
attitudes towards STEM and their career plans. These data could be of
interest to others and can be related to other studies in similar contexts.
_**What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?** _
## Citizen Science Data
The data and metadata produced in the project use the standard vocabulary
(see 2.2) of the field of social dilemmas, which is well documented in a
variety of scientific articles. Similarly, the use of social dilemmas to
investigate human behaviour is a well-established methodology, and a number
of similar data sets can be found.
## Trials and outreach activities data
The data produced in the project use the standard vocabulary used in the field
of student attitudes and beliefs, well documented in a variety of scientific
articles.
_**Will you be using standard vocabularies for all data types present in your
data set, to allow inter-disciplinary interoperability?** _
Yes. The data are also intended to be fully comprehensible to the students
participating in the project, as well as to the scientific community.
_**In case it is unavoidable that you use uncommon or generate project
specific ontologies or vocabularies, will you provide mappings to more
commonly used ontologies?** _
Yes.
**3.4 Increase data re-use (through clarifying licenses)**
_**How will the data be licensed to permit the widest re-use possible?** _
All the data will have a Creative Commons License: CC BY-SA 4.0.
( _https://creativecommons.org/licenses/by-sa/4.0/_ )
_**When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.** _
## Citizen Science Data
The data related to the Citizen Science pilot experiments are already
available. The data related to the replication of the Citizen Science pilots
will be made available as soon as the corresponding research papers are
published, and in any case no later than the 30th of June 2019.
## Trials and outreach activities data
Data will be opened within a maximum of 30 months (no later than the 30th of
April 2021). This is because we aim to produce research papers during this
time and publish them in journals or books. If this research work and the
research instruments employed and designed by Jose M. Diego-Mantecón are
published before the expected time (30 months), the data may be opened
earlier.
_**Are the data produced and/or used in the project useable by third parties,
in particular after the end of the project? If the re-use of some data is
restricted, explain why.** _
Yes, the data can be used by third parties under a Creative Commons License:
CC BY-SA 4.0.
_**How long is it intended that the data remains re-usable?** _
Always. In the Zenodo repository, items will be retained for the lifetime of
the repository. This is currently the lifetime of the host laboratory, CERN,
which has an experimental programme defined for at least the next 20 years.
In any case, a DOI and a permalink will be provided.
_**Are data quality assurance processes described?** _
## Citizen Science Data
Yes. The data quality is assessed by the researchers of Universitat de
Barcelona who helped conduct the Citizen Science experiments. The
documentation attached to each database includes a discussion of data
quality. Scientific papers using the data will also validate the data
quality.
Zenodo and GitHub only guarantee a minimal quality process. For instance, all
data files are stored along with an MD5 checksum of the file content. Files
are regularly checked against their checksums to ensure that the file content
remains constant.
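A minimal sketch of that verification idea (the stored digest below is an illustrative placeholder, not a real checksum):

```python
# Minimal sketch: recompute a file's MD5 digest and compare it with the
# digest recorded at deposit time.
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large datasets need not fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

stored = "0123456789abcdef0123456789abcdef"  # illustrative placeholder
print("unchanged" if md5_of("session.csv") == stored else "content changed")
```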
## Trials and outreach activities data
Yes. The data quality is assessed by the researchers of the University of
Cantabria who designed the attitudes questionnaire. In particular, the
questionnaire employed is an amended version of Diego-Mantecón’s (2012)
student mathematics-related beliefs instrument and has been designed under
his anthropological model. Diego-Mantecón’s model seeks to validate results
in cross-cultural projects by considering key factors affecting human
behaviour, and therefore human performance, in any discipline or subject.
# 4\. ALLOCATION OF RESOURCES
**4.1. What are the costs for making data FAIR in your project?**
For the Citizen Science Data and the Trials and outreach activities data,
there is no cost associated with the deposit in repositories, as all the
processes described are free of charge. In addition, an offline copy of all
data sets will be saved on a hard disk funded by the EU project (300-600
euros approx.).
**4.2. How will these costs be covered?**
Note that costs related to open access to research data are eligible as part
of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).
Hard disks will be funded by the EU project in the case of the Citizen Science
experiments and Trials and outreach activities data.
**4.3. Who will be responsible for data management in your project?**
Julián Vicens, researcher of OpenSystems, Universitat de Barcelona.
**4.4. Are the resources for long term preservation discussed (costs and
potential value, who decides and how what data will be kept and for how
long)**
Long-term preservation is already guaranteed in Zenodo and GitHub.
# 5\. DATA SECURITY
**5.1 What provisions are in place for data security (including data recovery
as well as secure storage and transfer of sensitive data)?**
The data are stored on an in-house UB server. In addition, a copy is kept on
an external disk. Data files and metadata in Zenodo are backed up nightly and
replicated into multiple copies in the online system.
**5.2 Is the data safely stored in certified repositories for long term
preservation and curation?**
Yes, the Zenodo repository provides this certification:
_https://zenodo.org/policies_
# 6\. ETHICAL ASPECTS
**6.1. Are there any ethical or legal issues that can have an impact on data
sharing?**
_These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA)._
_**Citizen Science Data** _
The Citizen Science experiments passed through the Ethics Committee of
Universitat de Barcelona. The data collection does not include any personal
data according to the Spanish LOPD (Ley Orgánica de Protección de Datos de
Carácter Personal, Organic Law for Personal Data Protection) or equivalent
laws of Poland and Greece.
_**Trials and outreach activities Data** _
The trials implementation passed through the Ethics Committee of Universidad
de Cantabria. The data collection does not include any personal data according
to the Spanish LOPD (Ley Orgánica de Protección de Datos de Carácter Personal,
Organic Law for Personal Data Protection) or equivalent laws of Poland,
Greece, Slovenia, Czech Republic and Italy.
**6.2. Is informed consent for data sharing and long term preservation
included in questionnaires dealing with personal data?**
We do not share personal data.
# 7\. OTHER ISSUES
**7.1. Do you make use of other national/funder/sectorial/departmental
procedures for data management? If yes, which ones?**
None.
▪ Specify how access will be provided in case there are any restrictions
This is a cloud implementation with no limitation on what we choose to provide.
* 2.3. Making data interoperable
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
We will adhere to the crystal structure standard. We plan on working in
collaboration with the " _Crystallography Open Database_ " for linking our
data which existing crystal structure data.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?
Yes; a mapping will be defined during the project for interchangeable
keywords and units of data.
* 2.4. Increase data re-use (through clarifying licenses)
* Specify how the data will be licensed to permit the widest reuse possible
After an initial period, the data will be open access.
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed.
Data will be made open access within 1 year of their creation; this is to
facilitate internal checking and publication.
* Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
Yes, the data will be open access.
* Describe data quality assurance processes
The metadata will provide all the details of the way in which the data have
been generated. Where appropriate, links to peer-reviewed journal articles
will be provided, as well as DOI numbers.
* Specify the length of time for which the data will remain re-usable
We have provisioned 10 years of hosting in the project budget; the data will
be available for this time.
* 3\. Allocation of resources
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
We propose to purchase cloud hosting on a virtual private server. A VPS with
150 GB storage with "a2 hosting" costs €24.89 per month; for 10 years of
hosting, the total cost is €2986.80 (see the check below). The data will be
backed up on servers hosted in TCD. We intend the hosting cost to be paid by
the TRANSPIRE project.
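The quoted total follows directly from the monthly rate over the ten-year hosting period, a simple check:

```latex
24.89~\tfrac{\text{EUR}}{\text{month}} \times 12~\tfrac{\text{months}}{\text{year}} \times 10~\text{years} = 2986.80~\text{EUR}
```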
* Clearly identify responsibilities for data management in your project
Data management will be done by Thomas Archer.
* Describe costs and potential value of long term preservation of long term data management
Cloud hosting needs to be paid on a monthly basis, and both the price and our
requirements are expected to fluctuate over time. The TRANSPIRE account, with
appropriate funds, should be kept open until January 2027 to maintain this
resource. We estimate that €2986.80 will be sufficient funding.
Supplemental funding cannot be guaranteed to maintain this resource but is
expected to come from additional projects.
* 4\. Data security
* Address data recovery as well as secure storage and transfer of sensitive data
Data will be synced daily with a server hosted in TCD. The TCD server itself
has a ZFS RAID-Z2 file system with daily snapshotting as well as 2 redundant
copies of the data.
* 5\. Ethical aspects
* To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former
In creating this resource we will not infringe on the copyright held by
journals. Any published image we host must not be a duplicate of a piece of
work that we do not have the rights to publish. All work from this project
will be published in open access journals.
* 6\. Other
The purpose of this document is to provide detail and guarantees about the
preservation of the data collected during the SafeWaterAfrica project, as
well as of any results derived from the associated research.
It has been prepared by taking into account the template of the “Guidelines
on Data Management in Horizon 2020” and the “Guidelines on FAIR Data
Management in Horizon 2020”, and it is oriented to the project’s
participating organizations, the European Commission and all stakeholders
involved in the project. The elaboration of the DMP will allow
SafeWaterAfrica partners to address all issues related to data. The DMP is a
Deliverable on Month 6 (D8.1). However, it aims to be a live document and,
hence, it will evolve during the SafeWaterAfrica project according to the
progress of project activities. Consequently, future deliverables (D8.2,
D8.3, and D8.4) will evaluate and revise this Data Management Plan, if
necessary.
The Grant Agreement, the Consortium Agreement and “Annex I – Description of
Work” of the Grant Agreement are to be referred to for the type of data,
storage, recruitment process, confidentiality, ownership, management of
intellectual property and access. The Grant Agreement was signed on
2016-04-29, while the Consortium Agreement entered into force on 01/06/2016.
The procedures that will be implemented for data collection, storage, access,
sharing policies, protection, retention and destruction will follow the
requirements of the national legislation of each partner and will be in line
with EU standards.
The DMP covers (1) the handling of research data during and after the
project, (2) what data will be collected, processed or generated, (3) what
methodology and standards will be applied, (4) whether and how data will be
shared or made open access, and (5) how data will be curated and preserved.
# OVERALL DATASET FRAMEWORK
This document contains the first version of the DMP. A number of questions in
connection with the DMP are still under discussion in the consortium;
therefore, the current (month 6) DMP version does not provide answers to all
of them. In addition, we aim to make our research data findable, accessible,
interoperable and reusable (FAIR), and in order to do this some modifications
are also expected once the recommendations of the document “Guidelines on
FAIR Data Management in Horizon 2020” are fully applied. It is planned to
address this with the next update of the DMP, to be issued towards the end of
project month 12. Further DMP updates are then planned towards the halfway
point of the project and towards its end.
In SafeWaterAfrica, data management procedures are included into the WP8 and
can be summarized according to the framework shown in Figure 1, in which the
complete workflow of dissemination and publication is shown.
Figure 1. SafeWaterAfrica workflow of dissemination and publication
DMP: Data Management Plan
PEDR: Plan for Exploitation and Dissemination of Results
OA: Open Access
SC: Steering Committee
Dissemination Manager: _Jochen Borris_, _Fraunhofer_
Data Manager: _Manuel Andrés Rodrigo Rodrigo_, _UCLM_
The procedure for the management of data begins with the production of a
dataset by one or several of the partners. According to the figure, they
should inform the Data Manager about the data by filling in the template
shown in Annex 1, in which the most important metadata are included. The
dataset is then archived by the partner that has produced it, while the
metadata are managed by the Data Manager. The data archived by the partner
may be in the form of tables and, occasionally, of documents such as reports,
technical drawings, pictures, videos and material safety data sheets.
Software used to store the research results mainly includes:
* applications of the office suites of Microsoft, Open and Libre Office, e.g. Word and Excel, and Origin Data Analysis and Graphing by OriginLab.
Following checkup by the Data Manager, the metadata will be included in the
Annex II section of the next edition of the DMP and, depending on the
decision tree shown, data can be considered for publication.
The DMP addresses the required points on a dataset-by-dataset basis and
reflects the current status of reflection within the consortium about the
data that will be produced. The DMP presents in detail only the procedures
for creating ‘primary data’ (data not available from any other sources) and
for their management. In the internal procedures to grant open access to any
publication, research data or other innovation generated in the EU project,
the main workflow starts at the WP level. If a WP team member considers
putting research data open access, it will inform the project steering
committee about its plans. The project steering committee will then discuss
these plans in the consortium and decide whether the data will be made openly
accessible or not. The general policy of the EU project is to apply “open
access by default” to its research data.
Project results to be made openly accessible to the public will be labelled
“public” in the project documentation (tables, pictures, diagrams, reports,
etc.). All project results labelled “public” will be distributed under a
specific free/open license, where the authors retain their rights and the
users can redistribute the content freely with acknowledgement of the data
source.
With regard to the five points covered in the template proposed in the
“Guidelines on Data Management in Horizon 2020” (Data set reference and name,
Data set description, Standards and metadata, Data sharing and Archiving and
Preservation), they are included in the Table template proposed in Annex I and
there are common procedures that will be described together for all datasets
included in the next sections of this document.
# DATA SET REFERENCE AND NAME
For easy identification, all datasets produced in SafeWaterAfrica will also
be provided with a short name (dataset reference) following the format
SWA-DS-xxyyy, where xx refers to the work package in which the data are
produced and yyy is a sequential reference number assigned by the Data
Manager upon reception of a dataset proposal (see the sketch below). This
name will be included in the template and will not be filled in by the
partner that proposes the dataset. Conversely, the partner that produces the
dataset will propose a descriptive name (1), consisting of a sentence in
which the content of the dataset is clearly reflected. This sentence should
be shorter than 200 characters and will be checked and, if necessary,
modified by the Data Manager for the sake of uniformity.
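A minimal Python sketch of generating and validating references in this format (the helper names are illustrative, not part of the project tooling):

```python
# Minimal sketch of the SWA-DS-xxyyy convention: xx = work package,
# yyy = sequential number assigned by the Data Manager.
import re

def make_reference(work_package: int, sequence: int) -> str:
    return f"SWA-DS-{work_package:02d}{sequence:03d}"

def is_valid_reference(ref: str) -> bool:
    return re.fullmatch(r"SWA-DS-\d{5}", ref) is not None

print(make_reference(8, 1))                # SWA-DS-08001
print(is_valid_reference("SWA-DS-08001"))  # True
```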
# DATA SET DESCRIPTION
It consists of a plain text of at most 200 words that very briefly summarizes
the content, methodology and organization of the dataset, in order to give
the reader a first clear idea of the main aspects of the dataset. It will be
filled in by the partner that produces the dataset (2) and checked upon
reception and, if necessary, modified by the Data Manager for the sake of
uniformity.
# STANDARDS AND METADATA
Metadata is structured information that describes, explains, locates, or
otherwise makes it easier to retrieve, use, or manage an information resource.
Metadata is often called data about data or information about information.
Metadata that are going to be included in our DMP are going to be classified
into three groups:
* Descriptive metadata, which designate a resource for purposes such as discovery and identification. In the DMP of SafeWaterAfrica, these metadata need to be filled in by the partner that proposes the dataset and include elements such as the contributors (3) (institution partners that contribute the dataset), creator/s (4) (author/s of the dataset) and subjects (5) (up to six keywords that clearly identify the content).
* Administrative metadata, which provide information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it. In the DMP of SafeWaterAfrica, these metadata need to be filled in by the partner that proposes the dataset and include elements such as language (6) (most likely English), file format (7) (Excel, CSV, …) and type of resource (8) (table, figure, picture, …). It is proposed to use commonly adopted metadata standards in this project, based on the digital object identifier system® (DOI). For this purpose, the DOI of the final version of the metadata form for each dataset will be obtained by the Data Manager.
* Structural metadata, which indicate how compound objects are put together. In the DMP of SafeWaterAfrica, these metadata need to be filled in by the partner that proposes the dataset in Table 1 and include elements such as the parameters (9) included in the dataset (with information about the methodology used to obtain them according to international standards, equipment, etc.), the structure of the data table (10) (showing clearly how the data are organized) and additional information for the dataset (11) (such as the decimal delimiter, the column delimiter, etc.).
Upon reception of the first version of the dataset, this information will be
checked by the Data Manager and, if necessary, modified for the sake of
uniformity and clarity.
# DATA SHARING
The data sharing procedures and rights in relation to the data collected
through the SafeWaterAfrica project are the same across the different
datasets and are in accordance with the Grant Agreement. The partner that
produces the dataset should state the status (12) of the dataset: public, if
the data are going to be published, or private, if no diffusion outside the
consortium is intended (because the data are considered sensitive). In the
case of public data, a link to sample data can also be included to allow
potential users to rapidly determine the relevance of the data for their use
(13). This link will be checked by the Data Manager, and the partner that
produces the dataset is responsible for keeping it alive for the whole
duration of SafeWaterAfrica.
With respect to the access procedure, in accordance with Grant Agreement
Article 17, data must be made available upon request, or in the context of
checks, reviews, audits or investigations. If there are ongoing checks etc.,
the records must be retained until the end of these procedures.
Each partner must ensure open access to all peer-reviewed scientific
publications relating to its results. As per Article 29.2, the partners must:
* As soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.
* Ensure open access to the deposited publication — via the repository — at the latest:
* On publication, if an electronic version is available for free via the publisher, or
* Within six months of publication in any other case.
* Ensure open access, via the repository, to the bibliographic metadata that identify the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following: the terms “European Union (EU)” and “Horizon 2020”; the name of the action, acronym and grant number; the publication date, and length of embargo period if applicable; and a persistent identifier.
Data will also be shared when the related deliverable or paper has been made
available at an open access repository, via the gold or the green model. The
normal expectation is that data related to a publication will be openly
shared. However, to allow the exploitation of any opportunities arising from
the raw data and tools, data sharing will proceed only if all co-authors of
the related publication agree. The Lead author, who is the author with the
main contribution and who is listed first, is responsible for getting
approvals and then sharing the data and metadata in the repository of its
institution or, alternative, in the repository **Fraunhofer ePrints** (
_http://eprints.fraunhofer.de/_ ) , an open access repository for research
data.
# ARCHIVING AND PRESERVATION
The archiving and preservation procedures in relation to the data collected through the SafeWaterAfrica project are the same across the different datasets and are in accordance with the Grant Agreement. The data will be managed by collaborators of the participants as well as by other scientists interested in SafeWaterAfrica.
Information should be stored for at least 5 years (and preferably 10 years) after the end of the project. In the meantime, backups should be made at least once a month.
The knowledge generated by the project among partners, the scientific community, target users and the public at large during the project is managed in two ways, depending on the data source:
* The non-sensitive data will be organized into the open access repositories of the partners that produce them or, alternatively, into Fraunhofer ePrints, which will contain all the knowledge produced by the project partners. Restricted access is foreseen for knowledge that will be used for exploitation purposes; open access applies to all other knowledge. Specific attention must be paid to providing open access to the data collected during the field tests, considering the ethical standards described in D9.1 and D9.2. To this end, only raw data defined as open access will be organized in an exportable format to be used by the scientific community and practitioners for their own purposes. Registered access for data download will be the only requirement for their use, in order to understand which organizations are interested in using the data and for which particular scope.
* To manage and store the sensitive non-public data obtained, all partners from SafeWaterAfrica must comply with relevant European and national regulations as well as with the standards of practice defined by relevant professional boards and institutions.
The link(s) to the open access dataset(s) will be proposed by the partner that produces the dataset(s) (14). This link will be checked by the Data Manager, and the partner that produces the dataset is responsible for keeping it alive for the whole duration of SafeWaterAfrica.
With regard to the management of copyright and Intellectual Property Rights (IPR), IPR ownership is defined by the Consortium Agreement and the Grant Agreement of the project. Access will be provided upon acceptance of the terms and conditions of use, as appropriate. Materials generated under the project will be disseminated in accordance with the Consortium Agreement. Those who use the data (as opposed to any resulting manuscripts) shall cite it as follows:
_The data were created by the SafeWaterAfrica project, funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 689925. For reuse of these data, please contact the SafeWaterAfrica Consortium:_ _www.safewaterafrica.eu_
Regarding the citation of the data, the Data Manager will include in the final
version of the metadata template relevant data about how the dataset has to be
referenced, including creators, year, title of the dataset and DOI.
# LEGAL ISSUES
The SafeWaterAfrica partners are to comply with the ethical principles as set
out in Article 34 of the Grant Agreement, which states that all activities
must be carried out in compliance with:
* The ethical principles (including the highest standards of research integrity, e.g. as set out in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct), Commission Recommendation (EC) No 251/2005 of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers (OJ L 75, 22.03.2005, p. 67), and the European Code of Conduct for Research Integrity of ALLEA (All European Academies) and ESF (European Science Foundation) of March 2011 ( _http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf_ )
* Applicable international, EU and national law.
Furthermore, activities raising ethical issues must comply with the ‘ethics requirements’ set out in Annex 1 of the Grant Agreement. At this point, the DMP warrants that 1) research data are placed at the disposal of colleagues who want to replicate the study or elaborate on its findings, 2) all primary and secondary data are stored in a secure and accessible form, and 3) freedom of expression and communication is preserved.
Regarding confidentiality, all SafeWaterAfrica partners must keep any data, documents or other material confidential during the implementation of the project and for at least five years (preferably 10 years) after the period set out in Article 3 (42 months, starting 2016-06-01). Further detail on confidentiality can be found in Article 36 of the Grant Agreement.
0784_LIQUEFACT_700748.md
# Executive Summary
Recent events have demonstrated that Earthquake Induced Liquefaction Disasters (EILDs) are responsible for tremendous structural damage and fatalities, in some cases causing half of the economic loss attributable to an earthquake. With the causes of liquefaction substantially acknowledged, it is important to recognize the factors that contribute to its occurrence, to estimate the hazard, and then to implement the most appropriate mitigation strategy in practice, considering the susceptibility of the site to liquefaction and the type and size of the structure. The LIQUEFACT project addresses the mitigation of risks from EILD events in European communities with a holistic approach. The project deals not only with the resistance of structures to EILD events, but also with the resilience of the collective urban community in relation to its quick recovery from an occurrence. The LIQUEFACT project sets out to achieve a more comprehensive understanding of EILDs, of the application of mitigation techniques, and of the development of more appropriate techniques tailored to each specific scenario, for both European and worldwide situations.
# Introduction, Goal and Purpose of this document
The LIQUEFACT project is a collaborative project involving 11 partners from
six different countries (UK, Italy, Portugal, Slovenia, Norway and Turkey)
including representation from four EU Member States and is organised in three
phases (Scoping, Research and Implementation) across nine work packages (WPs),
each of which encapsulates a coherent body of work. The first seven WPs
highlight the major technical activities that will take place throughout the
project and have been scheduled to correlate with one another. The final two
WPs (WP8 and WP9) are the continuous activities which will take place
throughout the duration of the project.
In order to ensure the smooth running of the project for all project partners, management structures and procedures are necessary to facilitate effective and efficient working practices. Following the management information included in the Grant Agreement (GA) and its annexes, the Consortium Agreement (CA), the Commission rules contained in the Guidance Notes, and organisational risk management policies and procedures (including the Corporate Risk Strategy, Policy and Guidance and the Health and Safety Policies), this manual highlights important procedures to be carried out in order to monitor, coordinate and evaluate the management activities of the project.
Goal: **This document aims to aid the LIQUEFACT project consortium in meeting their responsibilities regarding research data quality, sharing and security through the provision of a data management plan in accordance with the Horizon 2020 Guidelines on Open Access, and to make provision for the introduction of the General Data Protection Regulation (GDPR) on 25 May 2018.**
# Admin Details
**Project Name:** LIQUEFACT Data Management Plan
**Project Identifier:** LIQUEFACT
**Grant Title:** 700748
**Principal Investigator / Researcher:** Professor Keith Jones
**Project Data Contact:** Professor Keith Jones, +44(0) 1245 683907.
[email protected]
**Description:** Assessment and mitigation of liquefaction potential across
Europe: a holistic approach to protect structures/ infrastructure for improved
resilience to earthquake-induced liquefaction disasters.
**Funder:** European Commission (Horizon 2020)
**Institution:** Anglia Ruskin University
<table>
<tr>
<th>
**Task**
</th>
<th>
**Data**
</th>
<th>
**Type**
</th> </tr>
<tr>
<td>
T1.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T1.2
</td>
<td>
Questionnaire
</td>
<td>
Qualitative and Quantitative
</td> </tr>
<tr>
<td>
T1.4
</td>
<td>
Glossary/Lexicon
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T2.1
</td>
<td>
Ground characterization; Geophysical prospecting; Soil Geotechnical and
Geophysical tests; Ground investigations; Lab testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T2.6
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr>
<tr>
<td>
T3.1
</td>
<td>
Numerical modelling; Experimental data.
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T3.2
</td>
<td>
Field trials and pilot testing; Simulations; Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.1
</td>
<td>
Soil characterization (Mechanics)
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.2
</td>
<td>
Centrifugal Modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.3
</td>
<td>
Field trials; Lab and Field testing
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T4.4
</td>
<td>
Numerical modelling
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T5.2
</td>
<td>
Individual and Community resilience measures/metrics
</td>
<td>
Qualitative and Quantitative
</td> </tr>
<tr>
<td>
T5.3
</td>
<td>
Cost/Benefit Models
</td>
<td>
Quantitative
</td> </tr>
<tr>
<td>
T7.1
</td>
<td>
Reference list/Bibliography
</td>
<td>
Qualitative
</td> </tr> </table>
# 1. Data Summary
Quantitative and qualitative data will be collected in line with the overarching aims and objectives of the LIQUEFACT project: to help deliver a holistic approach to the protection of structures and infrastructure and to resilience to Earthquake Induced Liquefaction Disasters (EILDs) across Europe. It is important to recognise the opportunity for mitigation strategies to help protect people, places and communities through a more comprehensive understanding of EILDs. Data collection will aid the development and application of techniques applicable across European and global situations. Site-specific data collection at differing case study sites across Europe will be undertaken alongside data gathering from the academic and community fields to better inform decision making. It is hoped that this data will be useful to a wide-ranging, spatially and temporally diverse audience across the policy-practitioner interface.
# 2. FAIR Data
## 2.1 Open Access
Open access will be provided to all scientific publications in line with the guidance provided by the Commission in their letter dated 27 March 2017 (The open access to publications obligations in Horizon 2020), either by self-archiving in suitable repositories within six months of publication (12 months for social science and humanities publications) or by open access publishing on the publisher/journal website. It is anticipated that data will be made available in varying forms for varying uses.
Identification mechanisms will be utilised to improve the usability of the data within differing contexts. Data cleansing will be considered in order to present clear and considered formatting. Versions, keywords and Digital Object Identifiers will be explored in principle to aid the applicability of the data. Anglia Ruskin University adheres to the following Research Data Management Guidelines:
* Encouraging scientific enquiry and debate and increase the visibility of research.
* Encouraging innovation and the reuse of existing datasets in different ways, reducing costs by removing the need to collect duplicate research data.
* Encouraging collaboration between data users and data creators.
* Maximising transparency and accountability, and to enable the validation and verification of research findings and methods.
## 2.2 Repository
Appropriate data will be made available through the use of an online portal or
reputable repository, details of which are yet to be confirmed but may include
the LIQUEFACT website ( _www.liquefact.eu_ ) and Zenodo. Generic software
tools will be predominantly used including MS Office and SPSS. A Technical
Data Report will be provided for each data set through the creation and
statement of the aims, objectives and methodology.
## 2.3 Exceptions
In circumstances where the anonymisation of data sets is not possible, the LIQUEFACT project will, to protect the rights of the individuals concerned, exclude certain data sets from publication in the online repository. This data will be retained in accordance with the Anglia Ruskin University Research Data Management Guidelines and held for a minimum of 5 years after project completion.
A table of exceptions is included below:
<table>
<tr>
<th>
Data Set
</th>
<th>
Related Results
</th>
<th>
Reason for Exclusion
</th> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 2.4 Metadata
Text mining tools and methods will help external actors to extract common and relevant data. Commonly used ontologies will be utilised. A glossary of terms will be collated by project partners. Data files will be saved in easily reusable formats commonly used by the research community, including: .txt, .xml, .html, .rtf, .csv, SPSS portable, .tif, .jpeg and .png.
## 2.5 Storage
Data will be stored either on each institution’s back-up server or on a
separate data storage device that is kept in a secure and fireproof location,
separate from the main data point. Data will be released no later than the
publication of findings and within three years of project completion. Primary
data will be securely retained, in an accessible format, for a minimum of five
years after project completion.
# 3. Allocation of Resources
At this stage costs have not been accounted for in the H2020 LIQUEFACT project
budget. Data Management Plans will be regularly updated by the Project
Coordinator with data collection, collation and usability the responsibility
of all partners involved in the project. By providing this data it is
anticipated that future utilisation will contribute to the long term success
of the LIQUEFACT project and enhance EILD improvements across and between
countries and organisations.
# 4. Data Security
This research aims to follow these principles:
* Avoid using personal data wherever possible.
* If the use of personal data is unavoidable, consider partially or fully anonymising the information to obscure the identity of the individuals concerned.
* Use our secure shared drives to store and access personal data and sensitive business information, ensuring that only those who need to use this information have access to it.
* Use remote access facilities to access personal data and sensitive business information on the central server instead of transporting it on mobile devices and portable media or using third party hosting services.
* Personal equipment (such as home PCs or personal USB sticks) or third party hosting services (such as Google Mail) should not be used for high or medium risk personal data or business information.
* If email is used to send personal data or business information outside the university environment, it should be encrypted. If you are sending unencrypted personal data or business information to another university email account, indicate in the email title that the email contains sensitive information so that the recipient can exercise caution about where they open it.
* Do not use high or medium risk personal data or business information in public places. When accessing email remotely, exercise caution to ensure that you do not download unencrypted high or medium risk personal data or business information to an insecure device.
* Consider the physical security of personal data or business information, for example use locked filing cabinets/cupboards for storage.
* The fifth principle of the General Data Protection Regulation 2018 states that personal data processed for any purpose or purposes should not be kept for longer than is necessary for that purpose or purposes. It is therefore important to implement our retention and disposal policies so that personal data and sensitive business information is not kept for longer than necessary.
# 5. GDPR
Anglia Ruskin University is fully compliant with the General Data Protection Regulation (GDPR), which came into force on 25 May 2018. All personal data is handled securely and confidentially in accordance with information security best practice policies. When it is necessary to share information with beneficiaries or third parties, appropriate protection measures are in place.
# 6. Ethical Aspects
Ethical considerations in making research data publicly available are clearly addressed by Anglia Ruskin University throughout the entire data cycle, ensuring compliance with the GDPR. Informed consent will be obtained from all participants for their data to be shared or made publicly available, and participants will be provided with sufficient information to make an informed decision regarding involvement. Data will always be anonymised, with direct or sensitive identifiers removed. The user (licensor) will be given due credit for work when it is distributed, displayed, performed, or used to derive a new work.
# 7. Other Procedures
* Data Protection Act 1998
* General Data Protection Regulation 2018
* Anglia Ruskin University Research Training, Ethics and Governance as part of the Research Policy and Support group within the Research and Innovation Development Office
* Anglia Ruskin University's Research, Innovation and Knowledge Exchange strategy 2016-2017
* DMP Online
* Zenodo
0785_KConnect_644753.md
# 1 Introduction
This deliverable is the final version of the data management plan. In this
document, the data generated by the KConnect project is identified and the
data management, archiving, preservation and licensing plans are given.
To begin with, Section 2 summarises the changes in this deliverable compared to the initial Data Management Plan (KConnect Deliverable 6.1). Section 3 then presents a topology of the types of medical data and texts and of the KConnect components that process them. Beginning with Section 4, each subsequent section describes a single data resource identified in the KConnect project. The format followed in each section corresponds to the structure proposed in the European Commission Guidelines on Data Management in Horizon 2020 [1]: Name, Description, Standards and Metadata, Data Sharing Conditions, Archiving and Preservation, and Licensing Information. In summary, Sections 4 to 9 deal with data for which no privacy issues exist (knowledge base, machine translation training data, and annotations and indexes), while Sections 10 to 13 deal with data in which care needs to be taken to ensure that privacy is preserved (search logs and medical records).
# 2 Updates Compared to Previous DMP
This section lists the updates of this document compared to the initial Data
Management Plan (KConnect Deliverable 6.1).
* A topology of data and processing components is provided (as requested in the first project review)
* The list of datasets in the Knowledge Base is presented in detail, along with licensing information for each dataset
* A separate section on Hungarian MeSH is added (more restrictive licensing)
* The section on Qulturum (RJL) data is updated
* Information on documents indexed by HON and TRIP is added
# 3 Topology of the Data and Processing Components
This section begins by presenting the five main classes of medical text data
in KConnect. Then the KConnect components used in the processing of the text
data are linked to the data classes.
## 3.1 Classes of Text Data
There are five main classes of text data processed, analysed and indexed in
KConnect:
1. Non-Patient-Specific Medical Text - well curated
2. Non-Patient-Specific Medical Text - less curated
3. Patient-Specific Medical Text
4. Structured Medical Data
5. Data Generated by Search Engines
Each of these classes is described in more detail in the following sections.
Furthermore, for each dataset described in the individual sections below, the
class of data is written in parentheses next to the dataset name.
### 3.1.1 Non-Patient-Specific Medical Text (well curated)
This class contains documents that in general undergo a well-documented
process of quality control (such as peer review or strict editorial control).
This class of documents includes [along with an indication of the language in
which the majority of such documents appear]:
* Clinical Guidelines (national, regional, local) [multiple languages]
* Randomised Controlled Trials [English]
* Systematic Reviews [English]
* Regulatory Information [English]
* Medical Publications [English]
* Lists of Clinical Questions [English]
* Patient Information Leaflets [multiple languages]
### 3.1.2 Non-Patient-Specific Medical Text (less curated)
This class contains documents over which there is in general no quality
control process. This class of documents includes:
* Wikipedia [multiple languages]
* Health web sites [multiple languages]
### 3.1.3 Patient-Specific Medical Text
This class contains medical records. In their original form, medical records
do contain text that is specific to particular patients, although in general,
medical records are anonymised before being made available to be processed by
KConnect tools. Medical records are usually written in the language of the
country or region in which they are produced.
### 3.1.4 Structured Medical Data
This class contains data that is stored in a structured way, including medical
vocabularies, thesauri, and ontologies. These sources are usually available
with the highest coverage in English, but translations of some of them are
also available.
### 3.1.5 Data Generated by Search Engines
This class contains data that is generated as part of the functioning of a
search engine, and in the case of KConnect, contains search logs. The search
logs contain queries that can be entered in multiple languages.
## 3.2 Link between Text Data and Processing Components
We now present which KConnect tools are used to process which classes of text
data. The leftmost column of Table 1 shows the data classes described above,
while the columns show the processing components developed in KConnect.
Shading in a table cell indicates that a data class is processed by the
corresponding KConnect component. Below is a brief description of each
component, along with the KConnect deliverable in which more information can
be found:
* GATE – General Architecture for Text Engineering, responsible for annotating the documents in KConnect with medical concepts – KConnect Deliverable 1.5
* MIMIR – Multiparadigm Indexing and Retrieval, provides semantic search using GATE annotations - KConnect Deliverable 1.5
* Machine Translation – medical-specific machine translation built on the MOSES framework – KConnect Deliverable 1.6
* Trustability Estimation – Machine learning system for estimating the level of trust of a website based on the HONcode principles – KConnect Deliverable 1.7
* Readability Estimation – Machine learning system that predicts the level of medical expertise required to understand a medical webpage – KConnect Deliverable 1.7
* Search Log Analysis – System for analysing queries in search logs, visualising results and applying machine learning to estimate user characteristics – KConnect Deliverable 1.7
* Knowledge Base – Store of over 1.2 billion medical statements in the Ontotext GraphDB system – KConnect Deliverable 2.2
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
**Processing Components**
</th> </tr>
<tr>
<td>
**Data class**
</td>
<td>
**GATE**
</td>
<td>
**MIMIR**
</td>
<td>
**Machine**
**Translation**
</td>
<td>
**Trustability Estimation**
</td>
<td>
**Readability Estimation**
</td>
<td>
**Search Log Analysis**
</td>
<td>
**Knowledge Base**
</td> </tr>
<tr>
<td>
Non-Patient-Specific Medical Text
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
… well curated
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
… less curated
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Patient-Specific Medical Text
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Structured Medical Data
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Data Generated by Search Engines
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr> </table>
**Table 1. Data classes and processing component used for each class**
# 4 Knowledge Base
**4.1 Name**
Knowledge Base (Structured Medical Data)
## 4.2 Description
The knowledge base is a warehouse of semantically integrated data sets
published originally by third parties. It includes information on drugs, drug
targets, drug interactions, diseases, symptoms, adverse events, anatomies and
imaging modalities. In addition to the data sets it includes link sets that
map data between the different data sets and/or provide semantic
relationships. The data is available as RDF and is loaded into a GraphDB [2]
repository.
Information on all of the data included is given in Table 2. This table
includes the name of the dataset, its type, a short description, the language
of the dataset, a link to the graph where the dataset is stored, and finally
the license type. More details on each license type are given in Section 4.6.
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**Type**
</th>
<th>
**Description**
</th>
<th>
**Language**
</th>
<th>
**Graph**
</th>
<th>
**License Type**
</th> </tr>
<tr>
<td>
CPT
</td>
<td>
An extended UMLS subset - New language version of an UMLS data set.
</td>
<td>
Current Procedural Terminology, 2015. Spanish translation.
</td>
<td>
Spanish
</td>
<td>
http://linkedlifedata.com/resource/cpt_spanish
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
DrugBank
</td>
<td>
Original data set
</td>
<td>
Bioinformatics and cheminformatics resource that combines detailed drug data
with comprehensive drug target information.
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/drugbank
</td>
<td>
Free for noncommercial use
</td> </tr>
<tr>
<td>
ICD 10 CM Swedish
</td>
<td>
An extended UMLS subset - New language version of an UMLS data set.
</td>
<td>
An extended UMLS subset. International
Classification of Diseases, Clinical Modification, 10th revision. Swedish
translation.
</td>
<td>
Swedish
</td>
<td>
http://linkedlifedata.com/resource/icd10_swe
</td>
<td>
Being cleared up
</td> </tr>
<tr>
<td>
ICPC Hungarian
</td>
<td>
UMLS subset
</td>
<td>
International Classification of Primary Care.
</td>
<td>
Hungarian
</td>
<td>
http://linkedlifedata.com/resource/icpc_hungarian
</td>
<td>
UMLS - Category 0
</td> </tr>
<tr>
<td>
ICPC Swedish
</td>
<td>
UMLS subset
</td>
<td>
International Classification of Primary Care.
</td>
<td>
Swedish
</td>
<td>
http://linkedlifedata.com/resource/icpc_swedish
</td>
<td>
UMLS - Category 0
</td> </tr>
<tr>
<td>
MedDRA Czech
</td>
<td>
UMLS subset
</td>
<td>
Medical Dictionary for Regulatory
Activities Terminology (MedDRA), 18.0
</td>
<td>
Czech
</td>
<td>
http://linkedlifedata.com/resource/meddra_czech
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MedDRA French
</td>
<td>
UMLS subset
</td>
<td>
Medical Dictionary for Regulatory
Activities Terminology (MedDRA), 18.0
</td>
<td>
French
</td>
<td>
http://linkedlifedata.com/resource/meddra_french
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MedDRA German
</td>
<td>
UMLS subset
</td>
<td>
Medical Dictionary for Regulatory
Activities Terminology (MedDRA), 18.0
</td>
<td>
German
</td>
<td>
http://linkedlifedata.com/resource/meddra_german
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MedDRA Hungarian
</td>
<td>
UMLS subset
</td>
<td>
Medical Dictionary for Regulatory
Activities Terminology (MedDRA), 18.0
</td>
<td>
Hungarian
</td>
<td>
http://linkedlifedata.com/resource/meddra_hungarian
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MedDRA
Spanish
</td>
<td>
UMLS subset
</td>
<td>
Medical Dictionary for Regulatory
Activities Terminology (MedDRA), 18.0
</td>
<td>
Spanish
</td>
<td>
http://linkedlifedata.com/resource/meddra_spanish
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MeSH Czech
</td>
<td>
UMLS subset
</td>
<td>
Medical Subjects Heading
</td>
<td>
Czech
</td>
<td>
http://linkedlifedata.com/resource/mesh_czech
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MeSH French
</td>
<td>
UMLS subset
</td>
<td>
Medical Subjects Heading
</td>
<td>
French
</td>
<td>
http://linkedlifedata.com/resource/mesh_french
</td>
<td>
UMLS - Category 3
</td> </tr> </table>
<table>
<tr>
<th>
**Dataset Name**
</th>
<th>
**Type**
</th>
<th>
**Description**
</th>
<th>
**Language**
</th>
<th>
**Graph**
</th>
<th>
**License Type**
</th> </tr>
<tr>
<td>
MeSH German
</td>
<td>
UMLS subset
</td>
<td>
Medical Subjects Heading
</td>
<td>
German
</td>
<td>
http://linkedlifedata.com/resource/mesh_german
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MeSH Spanish
</td>
<td>
UMLS subset
</td>
<td>
Medical Subjects Heading
</td>
<td>
Spanish
</td>
<td>
http://linkedlifedata.com/resource/mesh_spanish
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
MeSH Swedish
</td>
<td>
UMLS subset
</td>
<td>
Medical Subjects Heading
</td>
<td>
Swedish
</td>
<td>
http://linkedlifedata.com/resource/mesh_swedish
</td>
<td>
UMLS - Category 3
</td> </tr>
<tr>
<td>
RadLex
</td>
<td>
Original data set. Bioontology Bioportal version
</td>
<td>
A comprehensive lexicon for standardized indexing and retrieval of radiology
information resources.
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/radlex
</td>
<td>
Free for noncommercial use
</td> </tr>
<tr>
<td>
UMLS Semantic Network
</td>
<td>
Original data set
</td>
<td>
Hierarchy of UMLS semantic types
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/semanticnetwork
</td>
<td>
Free
</td> </tr>
<tr>
<td>
SNOMED CT
Swedish
</td>
<td>
An extended UMLS subset - New language version of an UMLS data set.
</td>
<td>
Systematized Nomenclature of Medicine - Clinical Terms
</td>
<td>
Swedish
</td>
<td>
http://linkedlifedata.com/resource/snomed_swe
</td>
<td>
UMLS - Appendix 2
</td> </tr>
<tr>
<td>
SNOMED CT
English
</td>
<td>
UMLS subset
</td>
<td>
Systematized Nomenclature of Medicine - Clinical Terms
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/snomedct_english
</td>
<td>
UMLS - Appendix 2
</td> </tr>
<tr>
<td>
UMLS Symptoms
</td>
<td>
UMLS subset
</td>
<td>
A subset of all UMLS concepts describing symptoms
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/umls/symptoms
</td>
<td>
UMLS - Category 0,1,2
</td> </tr>
<tr>
<td>
UMLS English
</td>
<td>
UMLS subset
</td>
<td>
Unified Medical Language System
</td>
<td>
English
</td>
<td>
http://linkedlifedata.com/resource/umls_english
</td>
<td>
UMLS - Category 0,1,2
</td> </tr>
<tr>
<td>
UMLS French
</td>
<td>
UMLS subset
</td>
<td>
Unified Medical Language System
</td>
<td>
French
</td>
<td>
http://linkedlifedata.com/resource/umls_french
</td>
<td>
UMLS - Category 0,1,2
</td> </tr>
<tr>
<td>
UMLS German
</td>
<td>
UMLS subset
</td>
<td>
Unified Medical Language System
</td>
<td>
German
</td>
<td>
http://linkedlifedata.com/resource/umls_german
</td>
<td>
UMLS - Category 0,1,2
</td> </tr>
<tr>
<td>
UMLS Spanish
</td>
<td>
UMLS subset
</td>
<td>
Unified Medical Language System
</td>
<td>
Spanish
</td>
<td>
http://linkedlifedata.com/resource/umls_spanish
</td>
<td>
UMLS - Category 0,1,2
</td> </tr> </table>
**Table 2. Datasets in the Knowledge Base**
## 4.3 Standards and metadata
The data is available in different RDF formats: RDF-XML, NTriple, Turtle,
TriG, TriX and RDF-JSON. It can be queried via SPARQL and the KB exposes the
OpenRDF REST API.
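Since the knowledge base exposes a SPARQL endpoint over the OpenRDF REST API, it can also be queried programmatically. The following is a minimal sketch, assuming a hypothetical repository URL (the actual endpoint is deployment-specific and not given in this document); it counts the statements in one of the named graphs listed in Table 2:

```python
import requests

# Hypothetical OpenRDF/Sesame repository endpoint; the real URL depends on
# the deployment and is not specified in this document.
ENDPOINT = "http://example.org/openrdf-sesame/repositories/kconnect"

# Count the triples stored in the French MeSH named graph (see Table 2).
query = """
SELECT (COUNT(*) AS ?n)
FROM <http://linkedlifedata.com/resource/mesh_french>
WHERE { ?s ?p ?o }
"""

resp = requests.get(
    ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
print(resp.json()["results"]["bindings"][0]["n"]["value"])
```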
## 4.4 Data sharing conditions
Data sharing varies according to the sharing conditions associated with the
original data sets, further described in Section 4.6.
## 4.5 Archiving and preservation
Archiving and preservation varies according to the Archiving and preservation
arrangements associated with the original data sets. Ontotext stores backups
of the data sets converted to RDF and the corresponding link sets on its
servers.
## 4.6 Licensing information
Licensing varies according to the licensing of the original data sets. In general, UMLS Category 0, 1 and 2 data can be freely used for non-commercial purposes, provided the original data are not modified (with some further limitations). Category 3 allows usage of the data for internal purposes only.
The rightmost column of Table 2 provides the license type for each dataset in
the Knowledge Base. The links to the license agreements for each of the
license types are in Table 3.
<table>
<tr>
<th>
**License Type**
</th>
<th>
**Link to License Information**
</th> </tr>
<tr>
<td>
UMLS, SNOMED
</td>
<td>
_https://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/release/lice_
_nse_agreement_appendix.html_
</td> </tr>
<tr>
<td>
DrugBank
</td>
<td>
_http://www.drugbank.ca/about_
</td> </tr>
<tr>
<td>
RadLex
</td>
<td>
_http://www.rsna.org/radlexdownloads/_
</td> </tr> </table>
**Table 3. License information for the datasets in the Knowledge Base**
# 5 Hungarian MeSH
**5.1 Name**
Medical Subject Heading Hungarian translation (Structured Medical Data)
## 5.2 Description
MeSH is the National Library of Medicine's controlled vocabulary thesaurus. It
consists of sets of terms naming descriptors in a hierarchical structure that
permits searching at various levels of specificity.
MeSH descriptors are arranged in both an alphabetic and a hierarchical
structure.
**5.3 Standards and metadata**
The data is available in XML format.
## 5.4 Data sharing conditions
Data sharing, download, or any other form of distribution is only permitted with the written permission of Akademiai Publisher.
**5.5 Archiving and preservation**
n/a
**5.6 Licensing information**
The Hungarian translation is a property of Akademiai Publisher (proprietary
licence).
# 6 Summary Translation Test Data
**6.1 Name**
Khresmoi Summary Translation Test Data 1.1 (Non-Patient-Specific Medical Text
- well curated)
## 6.2 Description
This dataset contains data for development and testing of machine translation
of sentences from summaries of medical articles between Czech, English,
French, and German. The original sentences are sampled from summaries of
English medical documents crawled from the web in 2012 and identified to be
relevant to 50 medical topics. Within KConnect, this data will be translated
to Hungarian, Polish, Spanish and Swedish.
The original sentences in English were randomly selected from automatically
generated summaries of documents from the CLEF 2013 eHealth Task 3 collection
[1] which were found to be relevant to 50 test topics provided for the same
task. Out-of-domain and ungrammatical sentences were manually removed. The
sentences are provided with information on document ID and topic ID. The topic
descriptions are provided as well. The sentences were translated by medical
experts into Czech, French, and German and reviewed. The data sets can be
used, for example, for the development and testing of machine translation in
the medical domain.
## 6.3 Standards and metadata
The data is provided in two formats: plain text and SGML. They are split
according to the section (dev/test) and language (CS – Czech, DE - German, FR
- French, EN – English). All the files use the UTF-8 encoding. The plain text
files contain one sentence per line and translations are identified by line
numbers. The SGML format suits the NIST MT scoring tool. The topic description format is based on XML; each topic description (<query>) contains the tags shown in Table 4.
<table>
<tr>
<th>
**Tag**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
<id>
</td>
<td>
topic ID
</td> </tr>
<tr>
<td>
<discharge_summary>
</td>
<td>
reference to discharge summary
</td> </tr>
<tr>
<td>
<title>
</td>
<td>
text of the query
</td> </tr>
<tr>
<td>
<desc>
</td>
<td>
longer description of what the query means
</td> </tr>
<tr>
<td>
<narr>
</td>
<td>
expected content of the relevant documents
</td> </tr>
<tr>
<td>
<profile>
</td>
<td>
profile of the user
</td> </tr> </table>
**Table 4. Translation data format**
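For illustration, a topic description in this format might look as follows (all values are invented; the tag set is that of Table 4):

```xml
<query>
  <id>qtest.36</id>
  <discharge_summary>summary-00036.txt</discharge_summary>
  <title>deep facial cut treatment</title>
  <desc>What treatment is advised for a deep facial cut?</desc>
  <narr>Relevant documents describe wound care and suturing of deep facial lacerations.</narr>
  <profile>A member of the general public looking for first-aid advice.</profile>
</query>
```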
**6.4 Data sharing conditions**
Access to this data set is widely open under the license specified below.
## 6.5 Archiving and preservation
The data set is distributed by the LINDAT/Clarin project of the Ministry of
Education, Youth and Sports of the Czech Republic and is available here:
_http://hdl.handle.net/11858/00-097C-0000-0023-866E-1_
## 6.6 Licensing information
The data set is made available under the terms of the Creative Commons
Attribution-Noncommercial (CC-BY-NC) license, version 3.0 unported. A full
description and explanation of the licensing terms is available here:
_http://creativecommons.org/licenses/by-nc/3.0/_
# 7 Query Translation Test Data
**7.1 Name**
Khresmoi Query Translation Test Data 1.0 (Data Generated by Search Engines)
## 7.2 Description
This data set contains data for development and testing of machine translation of medical queries between Czech, English, French, and German. The queries come from the general public and medical experts. Within KConnect, this data will be translated to Hungarian, Polish, Spanish and Swedish.
The original queries in English were randomly selected from real user query
logs provided by Health on the Net foundation (750 queries by general public)
and from the Trip database query log (758 queries by medical professionals)
and translated to Czech, German, and French by medical experts. The test sets
can be used, for example, for the development and testing of machine
translation of search queries in the medical domain.
## 7.3 Standards and metadata
The data is split into 8 files, according to the section (dev/test) and
language (CS - Czech, DE - German, FR - French, EN – English). The files are
in plain text using the UTF-8 encoding. Each line contains a single query.
Translations are identified by line numbers.
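Because translations are aligned purely by line number, the language files can be paired with a few lines of code; a minimal sketch (the file names are assumptions, not the actual names in the distribution):

```python
# Pair the English and Czech development queries line by line.
# File names are illustrative; the real names in the distribution may differ.
with open("khresmoi-query-dev.en", encoding="utf-8") as en_file, \
     open("khresmoi-query-dev.cs", encoding="utf-8") as cs_file:
    pairs = [(en.strip(), cs.strip()) for en, cs in zip(en_file, cs_file)]

for english, czech in pairs[:3]:
    print(f"{english}  ->  {czech}")
```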
**7.4 Data sharing conditions**
Access to this data set is widely open under the license specified below.
## 7.5 Archiving and preservation
The data set is distributed by the LINDAT/Clarin project of the Ministry of
Education, Youth and Sports of the Czech Republic and is available here:
_http://hdl.handle.net/11858/00-097C-0000-0022-D9BF-5_
## 7.6 Licensing information
The data set is made available under the terms of the Creative Commons
Attribution-Noncommercial (CC-BY-NC) license, version 3.0 unported. A full
description and explanation of the licensing terms is available here:
_http://creativecommons.org/licenses/by-nc/3.0/_
# 8 HON Annotated Websites
**8.1 Name**
HON annotated websites (Non-Patient-Specific Medical Text - less curated)
## 8.2 Description
The dataset comprises websites crawled and indexed by the HON search engine
annotated and indexed by the KConnect semantic annotation pipeline, in order
to create a searchable index with links to the KConnect knowledge base.
## 8.3 Standards and metadata
Texts are annotated using a Text Encoding Initiative (TEI) compliant
framework, GATE [3, 4], to create documents encoded with UTF-8, in GATE XML
format.
Annotations are linked to the knowledge base using URIs, and are searchable using SPARQL.
## 8.4 Data sharing conditions
Snapshots of this data can be and have been shared for scientific use (CLEF eHealth), but cannot be shared for commercial purposes as it consists of
crawled websites.
## 8.5 Archiving and preservation
The dataset is continuously updated as new sites are crawled. Snapshots of the
dataset at specific times are not kept due to limited storage available.
**8.6 Licensing information**
Licensing for research use can be negotiated on an individual basis.
# 9 TRIP Annotated Scientific Papers
**9.1 Name**
TRIP annotated scientific papers (Non-Patient-Specific Medical Text - well
curated)
## 9.2 Description
The dataset comprises scientific papers collected by TRIP and annotated and
indexed by the KConnect semantic annotation pipeline, in order to create a
searchable index with links to the KConnect knowledge base.
## 9.3 Standards and metadata
Texts are annotated using a Text Encoding Initiative (TEI) compliant
framework, GATE [3, 4], to create documents encoded with UTF-8, in GATE XML
format.
Annotations are linked to the knowledge base using URIs, and are searchable using SPARQL.
## 9.4 Data sharing conditions
The full dataset can generally not be shared due to copyrights owned by
various publishers of papers in the dataset.
## 9.5 Archiving and preservation
The dataset is continuously updated as new sites are crawled. Snapshots of the
dataset at specific times are not kept due to limited storage available.
**9.6 Licensing information**
Licensing for research use can be negotiated on an individual basis.
# 10 HON Search Logs
**10.1 Name**
HONSearchLogs (Data Generated by Search Engines)
## 10.2 Description
Search Engine Logs provided by the Health On the Net Foundation (HON). This
data set contains the query logs collected from various search engines
maintained by HON. The search engine logs are collected over a period of over
3 years (since November 2011) and are continuing to be collected.
The search engine logs contain the following information:
* query term
* users’ IP address – which enables determining the geographical distribution of the search
* exact date and time of the query
* language
* information on the search engine used to perform the search (honSearch, honSelect, …)
* information on the link followed
## 10.3 Standards and metadata
The search logs will be provided in the XML format, for which the metadata will be provided. An illustration of the format draft is given in Figure 1.
**Figure 1. Search Log format draft**
## 10.4 Data sharing conditions
This data set is provided by HON for the project partners. This data can be
used for analysis of users’ behaviour linked to the search engine usage.
With the goal of preserving the users' personal data, the original content of the search logs is modified by HON. This modification consists of masking part of the users' IP address, while keeping the parts of the IP that still enable analysis of the users' approximate whereabouts. In the format draft shown above, the alterations of the original query logs are marked with “*”.
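A minimal sketch of such a masking step is shown below, assuming that the last octet of an IPv4 address is the part being masked (the exact masking scheme used by HON is not specified in this document):

```python
def mask_ip(ip: str) -> str:
    """Replace the final octet of an IPv4 address with "*", keeping the
    network part that still allows coarse geographical analysis."""
    parts = ip.split(".")
    if len(parts) != 4:
        raise ValueError(f"not an IPv4 address: {ip!r}")
    return ".".join(parts[:3] + ["*"])

print(mask_ip("192.0.2.147"))  # -> 192.0.2.*
```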
## 10.5 Archiving and preservation
The original search logs are archived and kept on HON premises for a period of 5 years. These archives consist of the original, untreated search logs. The possibility of longer-term preservation of the anonymised logs is under investigation.
## 10.6 Licensing information
The HONSearchLogs will be made available on demand by the partners. The data
are distributed under the terms of the Creative Commons Attribution-ShareAlike
(CC-BY-SA), version 3.0 unported. A full description and explanation of the
licensing terms is available here: _https://creativecommons.org/licenses/by-
sa/3.0/_
# 11 TRIP Database Search Logs
**11.1 Name**
Trip Database search logs (Data Generated by Search Engines)
## 11.2 Description
As users interact with the Trip Database ( _https://www.tripdatabase.com_ ), the site captures the user’s activity. It records search terms and articles viewed. In addition, this data is linked to the user, so that information about profession, geography, professional interests, etc. can be considered. This may be useful in helping to understand the search process, important documents, linked concepts, etc.
There is considerable data going back multiple years, and it is constantly being collected.
**11.3 Standards and metadata**
There are no official standards.
## 11.4 Data sharing conditions
The data can be shared with the KConnect consortia with prior permission.
Outside of KConnect the sharing of the data will be by negotiation.
Currently, the data needs to be requested and downloaded from the Trip Database, but an API is being considered.
## 11.5 Archiving and preservation
The data is stored on the Trip servers and these are backed up and saved on a
daily basis. The production of the search logs is independent of the KConnect
project and is increasingly core to the development of the Trip Database. As
such the costs are seen as core to Trip.
**11.6 Licensing information**
There is currently no formal licensing information.
# 12 KCL Patient Records
## 12.1 Name
The South London and Maudsley NHS Foundation Trust (SLAM) Hospital Records
(Patient-Specific Medical Text)
## 12.2 Description
The South London and Maudsley NHS Foundation Trust (SLAM) is the largest
provider of mental health services in Europe. The hospital electronic health
record (EHR), implemented in 2007, contains records for 250,000 patients in a
mixture of structured and free text fields.
At the NIHR Biomedical Research Centre for Mental Health and Unit for Dementia
at the Institute of Psychiatry, Psychology and Neuroscience (IOPPN), King’s
College London we have developed the Clinical Record Interactive Search
application (CRIS, _http://www.slam.nhs.uk/about/corefacilities/cris_ ) ,
which allows research use of the pseudonymised mental health electronic
records data (with ethics approval since 2008).
## 12.3 Standards and metadata
Through this model we will be able to provide access to a regular snapshot of
the complete set of pseudonymised records in XHTML format.
**12.4 Data sharing conditions**
Records can be accessed by collaborators either onsite or through a remote
secure connection.
**12.5 Archiving and preservation**
The record system is maintained by hospital IT services.
**12.6 Licensing information**
Data access is governed through a patient led oversight committee.
# 13 Qulturum EHR and Guidelines
## 13.1 Name
Region Jönköping County (RJL) EHRs and National/Regional/Local Guidelines
(Non-Patient-Specific Medical Text - well curated, and Patient-Specific
Medical Text)
## 13.2 Description
RJL has provided 14 anonymised electronic patient records and their schema for
the development of the prototype at Findwise.
RJL is providing a connection to a test server via a WebClient at RJL. The
development of the prototype in the proposed test environment will access
fictional EHRs held in the Educational System.
The solution will provide near-live textual analysis of a patient’s EHR. A
patient’s EHR will be passed through the pipeline and annotated before
indexing. The index will however not be stored permanently. This will ensure
that Personal Health Information will not be permanently stored or duplicated.
The final solution may display structured information relating to the process
of the patient’s treatment. Confirmation and supply of access details, schema
and the presence of the required information is still outstanding. Again
however, this information will not be stored but "read live” from the Cambio
COSMIC Intelligence database in the FW/KConnect solution.
## 13.3 Standards and metadata
Any data relating to a patient will not be stored, duplicated or annotated
permanently by the
FW/KConnect solution. Annotation added to a patient’s record lasts only as
long as the clinician is viewing the patient record via the FW/KConnect
solution.
Currently only National/Regional and Local Guidelines will be annotated and
indexed ready for searching. The created index is stored and used by the Mimir
Index Service. This information will either be collected by crawling the
related public websites or read from files supplied by RJL. There are no
required permissions regarding the use of this information.
## 13.4 Data sharing conditions
The solution access and authentication have been designed so that the KConnect
services are accessed via Cambio COSMIC (which requires a secure login/pass
from the user). The access of patient health information via the FW/KConnect
solution is therefore the same as Cambio COSMIC. Only those healthcare workers
with the correct permissions/authorisation to access a patient’s data are
allowed to. Any access of a patient’s data is automatically recorded. No
patient data/information is permanently stored or created by the current
proposed FW/KConnect solution.
## 13.5 Archiving and preservation
No archiving or preservation of data or information is required, apart from the record logs of those users who have accessed the system.
## 13.6 Licensing information
There is currently no formal licensing required apart from the use of medical
terminologies used in the Knowledge Base.
# 14 Conclusion
This deliverable presents the final version of the Data Management Plan for
the KConnect project. It identifies data that has been collected or used in
the KConnect project. For each dataset, it presents a description of the
dataset, describes the standards and metadata adopted, outlines the data
sharing conditions, states the archiving and preservation policy, and finally
gives licensing information for the data.
Furthermore, the data is linked to a set of five classes of medical text data,
and the KConnect components used to process each class of data are presented.
0786_EuDEco_645244.md
# Executive summary
EuDEco is a Coordination and Support Action (CSA) funded under the ICT-15-2014 call: Big data and Open Data Innovation and take-up. Its activities cover a wide range, from collecting information via desk and field research, through working with third parties (experts and projects), to analysing use cases and information received from others. Some activities involve data collected by project partners for project purposes, but there will be other activities where EuDEco partners will work with data collected by third parties for other purposes.
The objective of the present document is to draft the preliminary concept of
the EuDEco consortium in terms of data management, to define the approach of
the consortium in terms of the handling, storing, making available, archiving
and protecting the data generated or received.
EuDEco has currently identified eight datasets and defined the preliminary approach for each of them in terms of handling, archiving, protecting, and making available. In general, EuDEco will follow as open an approach to publication as possible, meaning that all datasets that do not contain private data will be published as soon as possible. Some derogation might be implemented for academic papers for a pre-defined period.
The present Data Management Plan (DMP) will be reviewed by the consortium on a
regular basis. The final DMP will be included in the exploitation plan which
is another deliverable of EuDEco due in M32.
# Introduction
This section details the purpose and scope of D7.2 – Data Management Plan of
EuDEco, its structure and relationship to other deliverables.
## Purpose and scope
The purpose of the Data Management Plan (DMP) is to provide an analysis of the
main elements of the data management policy that will be used by the EuDEco
project with regard to all the datasets that will be collected or processed by
the project. The DMP is not a fixed document but evolves during the lifespan
of the project.
## Structure of the document
The document is structured as follows:
* Chapter 2 provides an overview of EuDEco and provides insight why a DMP is relevant for EuDEco;
* Chapter 3 provides information on roles and responsibilities within the consortium and in connection with the DMP, and reviews the datasets that EuDEco will work with during the project’s implementation;
* Chapter 4 provides preliminary information on EuDEco’s approach in connection with accessing, sharing, and protecting the datasets EuDEco will work with;
* Last but not least, Chapter 5 will go into details of storage, preservation and archiving of data.
## Relationships to other deliverables
D7.2 is in tight relationship with work packages (WPs) 1 to 6 because the
datasets that are mentioned in Chapter 3 are collected and processed in these
WPs. In addition the DMP influences the activities of all tasks within WP7
(e.g., dissemination plan and activities, content of the communication
materials). The first version of the DMP contains the draft versions of the
EuDEco consortium regarding data management while the final version of the
plan will be integrated into D7.4 – Final exploitation plan (due in M32).
# Overview of EuDEco and why a DMP is necessary
EuDEco aims to develop a model of the European data economy and to facilitate the reuse of data in Europe. In order to reach these ambitious objectives, diverse activities are foreseen, which include the development of a heuristic, a refined and a final model of the data economy. The fine-tuning of the model is supported by the User Expert Group (UEG) – which has been launched by the EuDEco project and involves experts of European projects focusing on open and big data topics or on the reuse of data –, by the Advisory Board (AB) – consisting of renowned experts of the subject –, and finally by use cases – which will verify the models developed. Based on the use cases and lessons learnt, recommendations will be defined aiming at minimising the burdens of data reuse in Europe and supporting policy makers in establishing an environment that is in favour of reusing data. Last but not least, an observatory of the European data economy will be established in the third year of the project, aimed at monitoring the evolution of data reuse in Europe.
EuDEco is a Coordination and Support Action (CSA) type of project, not a research project that would generate, collect or process big amounts of data and would therefore be obliged to develop a data management plan. However, it has received funding in the frame of the ICT-15-2014 call: Big data and Open Data Innovation and take-up, and agreed to take part in the pilot for open research data, which made it necessary to develop a DMP. In addition, the consortium also agrees on the necessity of creating such a plan, since each WP will include activities where partners will work with data generated by other projects and/or organisations (use cases) or where EuDEco will collect and publish data (observatory). The first version of the DMP (the present document) has been elaborated and submitted to the European Commission in M6 and contains the preliminary thoughts of the consortium in terms of data management in EuDEco.
# Product of research - Dataset information
The research objectives require different data for different stages of the
project. In the first stage we mostly gather qualitative data from publicly
accessible sources such as academic databases, national legislation portals
and online libraries. Table 1 and Table 2 provide detailed information about
the datasets.
<table>
<tr>
<th>
**Name of dataset**
</th>
<th>
Case study data
</th>
<th>
Model data
</th>
<th>
Survey data
</th>
<th>
Observatory data
</th> </tr>
<tr>
<td>
**Work package**
</td>
<td>
WP1 – Task 1.5
</td>
<td>
WP2-WP4
</td>
<td>
WP4 – Tasks 4.2-4.4
</td>
<td>
WP5 – Task 5.4
</td> </tr>
<tr>
<td>
**Short description**
</td>
<td>
Together with the research on framework conditions, a series of case studies
is conducted to lay the foundation for the heuristic model. The case studies
will lead to deep insight into challenges and opportunities in specific
settings.
</td>
<td>
In order to elaborate the refined and the final model of the data economy
different data will be collected via diverse methods which include interviews,
workshops but also desk research. The data collected (both qualitative and
quantitative) will feed the models and the recommendations too.
</td>
<td>
It is planned to conduct a survey within the scope of the analysis of
requirements and barriers. The findings will provide a useful basis for the
development of related recommendations. Based on the final design of the
survey, qualitative and/or quantitative data is collected and analysed.
</td>
<td>
To allow an initial analysis of the state of the art of the European data
economy by means of the observatory, initial data has to be collected taking
the specified determinants and indicators into account.
</td> </tr>
<tr>
<td>
**Collection/ acquisition**
</td>
<td>
The data is collected from different sources including documents and
stakeholders.
</td>
<td>
The data is collected via interviews, workshops and desk research.
</td>
<td>
Practitioners and researchers will be asked to participate in a survey.
</td>
<td>
The data is collected from different sources including public statistics.
</td> </tr>
<tr>
<td>
**Relevant standards**
</td>
<td>
No standards. A common set of questions is used.
</td>
<td>
No standards.
</td>
<td>
No standards.
</td>
<td>
No standards.
</td> </tr>
<tr>
<td>
**Visibility/ publication level**
</td>
<td>
The results are disclosed in D1.3. The deliverable is public.
</td>
<td>
The current status of the model is disclosed in D2.1,
D3.1 and D4.1. The deliverables are publicly available. It is ensured that the
data has been anonymised.
</td>
<td>
The final design of the survey and the results are disclosed in D4.2-D4.4. It
is evaluated whether making raw data available is useful. If so, it is ensured
that the data has been anonymised.
</td>
<td>
The data used is largely openly available. The exact sources are disclosed in
D5.3. The deliverable is public.
</td> </tr>
<tr>
<td>
**Responsible partner**
</td>
<td>
FRAUNHOFER
</td>
<td>
All
</td>
<td>
</td>
<td>
FRAUNHOFER
</td> </tr> </table>
Table 1 Dataset information – part 1
<table>
<tr>
<th>
**Name of dataset**
</th>
<th>
UEG and Network of Interest
(NoI) members and
participants of project events
</th>
<th>
Projects and initiatives related to the data economy
</th>
<th>
Big data conferences
</th>
<th>
Highly related, high-quality academic articles and studies
</th> </tr>
<tr>
<td>
**Work package**
</td>
<td>
WP6 – Task 6.3 and WP7 – Task 7.5
</td>
<td>
WP6 – Task 6.3
</td>
<td>
WP7 – Task 7.3
</td>
<td>
WP7
</td> </tr>
<tr>
<td>
**Short description**
</td>
<td>
Engagement activities of EuDEco include the creation of a UEG and a NoI. To
that end, a small database with key contacts and contact information is
created. Similar data is collected in WP7 in connection with the final
conference as well as the clustering workshops.
</td>
<td>
EuDEco develops a database of national and international (EU-level) projects
and initiatives that deal with the data economy. This project pool is used to
stay aware of activities related to EuDEco.
</td>
<td>
Pool of events (organised by third parties) collected and stored in a
database. The events are relevant from the point of view of EuDEco.
</td>
<td>
Similarly to the project pool,
EuDEco continuously searches for and collects relevant academic publications
and studies. These materials contribute to the development of the common
knowledge base and to the development of the project deliverables.
</td> </tr>
<tr>
<td>
**Collection/ acquisition**
</td>
<td>
The data is collected from direct interaction with stakeholders.
</td>
<td>
The data is collected via desk research of publicly available information.
</td>
<td>
The data is collected via desk research.
</td>
<td>
The data is collected via desk research.
</td> </tr>
<tr>
<td>
**Relevant standards**
</td>
<td>
No standard. An internal template has been defined.
</td>
<td>
No standard. An internal template has been defined.
</td>
<td>
No standards.
</td>
<td>
No standards.
</td> </tr>
<tr>
<td>
**Visibility/ publication level**
</td>
<td>
Part of the data is made public (e.g., the list of UEG members) but the
consortium keeps confidential all data that can be deemed as personal (i.e.,
contact details of individual people).
</td>
<td>
The database contains only publicly available information.
</td>
<td>
A list of related events/conferences will be made available on the
EuDEco website.
</td>
<td>
Links to the studies and articles will be published on the project website.
</td> </tr>
<tr>
<td>
**Responsible partner**
</td>
<td>
SIGMA
</td>
<td>
SIGMA
</td>
<td>
IVSZ
</td>
<td>
IVSZ
</td> </tr> </table>
Table 2 Dataset information – part 2
# Access, sharing and protection of data
As a CSA funded by Horizon 2020 (H2020), EuDEco will follow an approach in
terms of information/data sharing that is as open as possible. All reports,
studies and results of the project will be made publicly accessible via the
EuDEco website. However, academic publications may be subject to publication
restrictions. The published documents will follow the European Commission’s
(EC) rules and contain the necessary visual identity elements as well as a
disclaimer.
Sensitive data such as personal data of people, who registered for EuDEco
events, will be shared with the EC only (if requested).
All project results, datasets and other outputs of the project will be owned
by the consortium. Data that has been identified as public will be shared with
project stakeholders, the EC and the interested public. Parts of the research
may be shared with peers via relevant academic portals. The EuDEco consortium
does not plan to charge third parties a fee for accessing and reusing data.
The consortium currently considers licensing of data irrelevant for EuDEco.
In terms of formats, we will mostly use DOC (Microsoft Word) for text-based
documents and XLS (Microsoft Excel) for quantitative data; these files will be
made publicly available in PDF (Portable Document Format). MP3 (MPEG Audio
Layer III) or WAV (Waveform Audio File Format) will be used for audio files,
and MOV (QuickTime Movie) or WMV (Windows Media Video) for video files. These
file formats have been chosen because they are accepted standards and in
widespread use.
# Storage, preservation and archiving of data
Data collected or processed as well as draft versions of the project documents
(containing only one partner’s or several partners’ contributions) are
currently stored on the computers and servers of each partner organisation as
well as on the external file server operated by FRAUNHOFER. Individual
partners are responsible for ensuring the backup of their own systems, while
FRAUNHOFER ensures the backup of the content server.
Some databases/documents which are jointly edited by two or more project
partners are usually shared and stored at Google Drive, with settings set to
limit access to the consortium partners. The final version of the publishable
project results will be stored in a separate folder on the content server as
well as published as soon as possible on the website.
In order to simplify the tracking of versions, the storage of the draft and
final versions of datasets and documents, as well as the backup, the
possibility of moving data and co-editing activities to SharePoint has been
discussed at the latest project meeting, without any final decision. Using
SharePoint would allow easy tracking of versions, joint editing and easy
storage of files. SharePoint, as a cloud-based service, does not require any
specific back-up activity from the consortium.
0789_TWEETHER_644678.md
# INTRODUCTION
In December 2013, the European Commission announced their commitment to open
data through the Pilot on Open Research Data, as part of the Horizon 2020
Research and Innovation Programme. The Pilot’s aim is to “improve and maximise
access to and re-use of research data generated by projects for the benefit of
society and the economy”.
In the frame of this Pilot on Open Research Data, results of publicly-funded
research should be disseminated more broadly and faster, for the benefit of
researchers, innovative industry and citizens.
On the one hand, Open Access not only accelerates the discovery process and
eases the path of research results to the market (thus meaning a return on
public investment), but also avoids a duplication of research efforts, leading
to a better use of public resources and a higher throughput. On the other
hand, this Open Access policy is also beneficial for the researchers
themselves. Making the research publicly available increases the visibility of
the performed research, which translates into a significantly higher number of
citations 1 as well as an increase in the potential for collaboration with
other institutions in new projects, among others. Additionally, Open Access
offers small and medium-sized enterprises (SMEs) access to the latest research
for utilisation.
Under H2020, each beneficiary must ensure open access to all peer-reviewed
scientific publications relating to its results. These open access
requirements are based on balanced support for both 'Green open access'
(immediate or delayed open access provided through self-archiving) and 'Gold
open access' (immediate open access provided by a publisher).
Apart from open access to publications, projects must also aim to deposit the
research data needed to validate the results presented in the deposited
scientific publications, known as "underlying data". In order to effectively
supply this data, projects need to consider at an early stage how they are
going to manage and share the data they create or generate.
In this document, we introduce the first version of the Data Management Plan
(DMP) elaborated for the TWEETHER project. The DMP describes how to select,
structure, store and make public the information used or generated during the
project, considering both scientific publications and generated research data.
In particular, the DMP addresses the following issues:
* What data will be collected / generated in the course of the project?
* What data will be exploited? What data will be shared/made open?
* What standards will be used / how will metadata be generated?
* How will data be curated / preserved, including after project completion?
This DMP will be updated during the project lifetime.
# TWEETHER PROJECT
The TWEETHER project answers the urgent need to provide high capacity
everywhere through the realisation of a W-band wireless system with a capacity
and coverage of 10 Gbps/km² for the backhaul and access markets, considered by
operators a key investment opportunity. Such a system, combined with the
development of beyond state-of-the-art affordable millimetre wave devices,
will make it possible to overcome the economic obstacle that causes the
digital divide and will pave the way towards the full deployment of small
cells.
This system merges for the first time novel approaches in vacuum electron
devices, monolithic millimetre wave integrated circuits and networking
paradigms to implement a novel transmitter that will foster future wireless
communication networks.
In particular, the TWEETHER project will develop a novel, compact, low-cost
and high-yield Traveling Wave Tube (TWT) power amplifier with 40 W output
power. This TWT will be the only device capable of providing wideband
operation and enough output power to distribute the millimetre wave frequency
signal over a useful distance.
On the other hand, an advanced, high-performance W-band transceiver chipset,
enabling the low-power operation of the system, will be fabricated. More
concretely, this chipset will include various GaAs-based monolithic microwave
integrated circuits (MMICs) comprising elements such as power amplifiers,
down- and up-converters, an 8-way multiplier, and an SPDT switch.
These novel W-band elements will be integrated using advanced micro-
electronics and micromechanics to achieve compact front-end modules, which
will be assembled and packaged with interfaces and antennas for a field test
to be deployed at the campus of the _Universitat Politecnica de Valencia_ , to
prove the breakthrough of the TWEETHER system in the millimetre wave wireless
network field.
Therefore, TWEETHER follows a highly innovative approach, so the most relevant
audience of the project will be the scientific community working in millimetre
wave technology and wireless systems. In addition, due to the strong impact of
the system, other expected audiences are the industrial community,
standardization bodies working on the W-band and on the definition of
Multimedia Wireless Systems (MWS), and potential users such as telecom
operators.
# CONSIDERATIONS FOR PUBLIC INFORMATION
The H2020 open access policy seeks to ensure that the information generated by
the projects participating in that programme is made publicly available. However,
as stated in EC guidelines on Data Management in H2020 2 , “ _As an
exception, the beneficiaries do not have to ensure open access to specific
parts of their research data if the achievement of the action's main
objective, as described in Annex I, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access_ .”
In line with this, the TWEETHER consortium will decide what information is
made public according to aspects such as potential conflicts with
commercialization, IPR protection of the knowledge generated (by patents or
other forms of protection), and risks to achieving the project
objectives/outcomes.
The TWEETHER project is pioneering research that is of key importance to the
electronic and telecommunication industry. Effective exploitation of the
research results depends on the proper management of intellectual property.
Therefore, the TWEETHER consortium will follow this strategy (Figure 1): if
the research findings result in a ground-breaking innovation, the members of
the consortium will consider two forms of protection: withholding the data for
internal use, or applying for a patent in order to commercially exploit the
invention and obtain financial gain in return. In the latter case,
publications will be delayed until the patent filing. Conversely, if the
technology developments are not going to be withheld or patented, the results
will be published for knowledge sharing purposes.
**Figure 1. Process for determining which information is to be made public
(from EC’s document “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020 – v1.0 – 11 December 2013”)**
# OPEN ACCESS TO PUBLICATIONS
The first aspect to be considered in the DMP is related to the open access
(OA) to the publications generated within the TWEETHER project, meaning that
any peer-reviewed scientific publication made within the context of the
project will be available online to any user at no charge. This aspect is
mandatory for new projects in the Horizon 2020 programme (article 29.2 of the
Model Grant Agreement).
The two ways considered by the EC to comply with this requirement are:
* Self-archiving / ‘green’ OA: In this option, the beneficiaries deposit the final peer-reviewed manuscript in a repository of their choice. In this case, they must ensure open access to the publication within a maximum of six months (twelve months for publications in the social sciences and humanities).
* Open access publishing / ‘gold’ OA: In this option, researchers publish their results in open access journals, or in journals that sell subscriptions and also offer the possibility of making individual articles openly accessible via the payment of author processing charges (APCs) (hybrid journals). Again, open access via the chosen repository must be ensured upon publication.
Publications arising from the TWEETHER project will preferably be made public
through the 'gold' OA option, in order to provide the widest dissemination of
the published results through the publishers' own webpages. In other cases,
the scientific publications will be deposited in a repository (‘green’ OA).
Most publishers allow depositing a copy of the article in a repository,
sometimes with a period of restricted access (embargo) 3 . In Horizon 2020,
the embargo period imposed by the publisher must not exceed 6 months (or 12
months for the social sciences and humanities). This embargo period will
therefore be taken into account by the TWEETHER consortium when choosing the
open access modality for the fulfilment of the open access obligations
established by the EC.
Additionally, according to the EC recommendation, whenever possible the
TWEETHER consortium will retain ownership of the copyright for its work
through the use of a ‘License to Publish’, which is a publishing agreement
between author and publisher. With this agreement, authors can retain
copyright and the right to deposit the article in an Open Access repository,
while providing the publisher with the necessary rights to publish the
article. Additionally, to ensure that others can be granted further rights to
use and reuse the work, the TWEETHER consortium may ask the publisher to
release the work under a Creative Commons license, preferably CC-0 or CC-BY.
Besides these two factors (retaining ownership of the publication and the
embargo period), the TWEETHER consortium will also consider the relevance of
the journal in which it intends to publish, measured by means of the “impact
factor” (IF). We expect the work carried out in the TWEETHER project to lead
to results with a very high impact, which we aim to publish in high-IF
journals. Therefore, we will also consider this factor when selecting the
journals in which to publish the TWEETHER project results.
Here we provide a list of the journals initially considered for the
publications to be generated in the TWEETHER project with information about
the open access policy of each journal.
<table>
<tr>
<th>
**Publisher**
</th>
<th>
**Journal**
</th>
<th>
**Impact factor (2013)**
</th>
<th>
**Author charges (for OA)**
</th>
<th>
**Comments about open access**
</th> </tr>
<tr>
<td rowspan="8">
Institute of Electrical and Electronics Engineers (IEEE)
</td>
<td>
IEEE Wireless Communications
</td>
<td>
6.524
</td>
<td rowspan="7">
$1,750
</td>
<td rowspan="7">
A paid open access option is available for these journals.
If funding rules apply, authors may post the author’s post-print version in
the funder’s designated repository. The publisher’s version/PDF cannot be
used.
</td> </tr>
<tr>
<td>
IEEE Communications Magazine
</td>
<td>
4.460
</td> </tr>
<tr>
<td>
IEEE Journal on Terahertz Technology
</td>
<td>
4.342
</td> </tr>
<tr>
<td>
IEEE Electron Device Letters
</td>
<td>
3.023
</td> </tr>
<tr>
<td>
IEEE Transactions on Microwave Theory and Techniques
</td>
<td>
2.943
</td> </tr>
<tr>
<td>
IEEE Transactions on Electron Devices
</td>
<td>
2.358
</td> </tr>
<tr>
<td>
IEEE Transactions on Components, Packaging, and Manufacturing Technology
</td>
<td>
1.236
</td> </tr>
<tr>
<td>
IEEE Journal of the Electron Devices Society
</td>
<td>
Started 2013
</td>
<td>
$1,350
</td>
<td>
A fully open-access publication. The publisher’s version/PDF can be archived
on the author’s personal website, employer’s website or funder’s designated
website. A Creative Commons Attribution License is available if required by
the funding agency.
</td> </tr>
<tr>
<td>
Springer
</td>
<td>
Journal of Infrared, Millimeter, and Terahertz Waves
</td>
<td>
1.891
</td>
<td>
2,200€
</td>
<td>
Springer’s Open Choice eligible journals publish open access articles under
the liberal Creative Commons Attribution 4.0 International (CC BY) license.
Otherwise, the author’s post-print can be posted in any open access repository
12 months after publication (the publisher’s version/PDF cannot be used).
</td> </tr>
<tr>
<td>
AIP
</td>
<td>
Applied Physics Letters
</td>
<td>
3.515
</td>
<td>
$2,200
</td>
<td>
A paid open access option is available for this journal.
If funding rules apply, the publisher’s version/PDF may be used on the
author’s personal website, institutional website or institutional repository.
</td> </tr> </table>
From this list, we can see that the majority of the journals targeted by the
TWEETHER project are IEEE journals, which offer an open access modality and
allow the author’s post-print version to be deposited in a repository. This is
in line with the Horizon 2020 requirements.
All publications will acknowledge the project funding. This acknowledgment
must also be included in the metadata of the generated information, since it
maximises the discoverability of publications and ensures the acknowledgment
of EU funding. The terms to be included in the metadata are:
* "European Union (EU)" and "Horizon 2020"
* the name of the action, acronym and the grant number
* the publication date, length of embargo period if applicable, and a persistent identifier (e.g DOI, Handle)
Finally, in the Model Grant Agreement, “scientific publications” means
primarily journal articles. Whenever possible, TWEETHER will also provide
access to other types of scientific publications, such as presentations,
public deliverables, etc.
# RESEARCH DATA
The scientific and technical results of the TWEETHER project are expected to
be of maximum interest to the scientific community. Throughout the duration of
the project, once the relevant protections (e.g. IPR) are secured, the
TWEETHER partners may disseminate (subject to their legitimate interests) the
obtained results and knowledge to the relevant scientific communities through
contributions to journals and international conferences in the fields of
wireless communications and millimetre-wave technology.
Apart from the open access to publication explained in the previous section,
the Open Research Data Pilot also applies to two types of data 4 :
* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Other data, including associated metadata, as specified and within the deadlines laid down in a data management plan, to be developed by the project. In other words, beneficiaries will be able to choose which data, additionally to the data underlying publications, they make available in open access mode.
According to this requirement, the underlying data related to the scientific
publications will be made publicly available (see Section 8). This will allow
other researchers to make use of that information to validate the results,
thus providing a starting point for their investigations, as intended by the
EC through its open access policy.
These data will include a description of the procedures followed to obtain
those results (e.g., software used for simulations, experimental setups,
equipment used, etc.) as well as data generated following those procedures
(experimental measurements results, spreadsheets, images, etc.).
In addition, other types of data generated during the project could include
the specifications of the TWEETHER system and the services it supports, the
datasheets and performance figures of the technological developments of the
project, and the field trial results with the KPIs (Key Performance
Indicators) used to evaluate the system performance, among others.
Since a huge amount of data is generated in a European project such as
TWEETHER, we will make a selection of relevant information, disregarding data
that is not needed to validate the published results. Moreover, we will
analyse on a case-by-case basis all data generated during the project before
making them open, in order to remain aligned with the exploitation and
protection policy. As a result, the publication of research data will mainly
be carried out by those partners involved in the scientific development of the
project (i.e., academic and research partners), while those partners focused
on the “development” of the technology will limit this publication of
information for strategic/organizational reasons (commercial exploitation).
A more detailed description of the information expected to be generated in
TWEETHER and whether and how it will be exploited or made publicly available
is provided in Section 8.
4 _EC document: “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020” – version_
_1.0 – 11 December, 2013_
# METADATA
Metadata refers to “data about data”, i.e., the information that describes the
published data with sufficient context or instructions to be intelligible to
other users. Metadata must allow proper organization, search and access to the
generated information, and can be used to identify and locate the data via a
web browser or web-based catalogue.
Two types of metadata will be considered within the frame of the TWEETHER
project: that corresponding to the project publications, which has already
been described in Section 4, and that corresponding to the published research
data.
In the context of data management, metadata will form a subset of data
documentation that explains the purpose, origin, description, time reference,
creator, access conditions and terms of use of a data collection.
The metadata that best describes the data depends on the nature of the data.
For the research data generated in TWEETHER, it is difficult to establish a
global criterion for all data, since the nature of the initially considered
data sets will differ; the metadata will therefore be based on a generalised
metadata schema such as the one used in ZENODO 4 , which includes elements
such as the following (a sketch of such a record follows the list):
* Title: free text
* Creator: Last name, first name
* Date
* Contributor: It can provide information referred to the EU funding and to the TWEETHER project itself; mainly, the terms "European Union (EU)" and "Horizon 2020", as well as the name of the action, acronym and the grant number
* Subject: Choice of keywords and classifications
* Description: Text explaining the content of the data set and other contextual information needed for the correct interpretation of the data.
* Format: Details of the file format
* Resource Type: data set, image, audio, etc.
* Identifier: DOI
* Access rights: closed access, embargoed access, restricted access, open access.
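To make the schema concrete, the following is a minimal sketch of such a
metadata record expressed as a Python dictionary; the field names and all
values are illustrative assumptions rather than ZENODO’s exact schema.

```python
# Illustrative sketch of a TWEETHER data set metadata record following the
# elements listed above; field names and values are example assumptions,
# not ZENODO's exact schema.
import json

metadata = {
    "title": "W-band TWT S-parameter measurements",    # free text (example)
    "creator": "Doe, Jane",                            # Last name, first name
    "date": "2016-01-15",
    "contributor": "European Union (EU), Horizon 2020, "
                   "TWEETHER, grant no. 644678",       # funding acknowledgment
    "subject": ["TWT", "W-band", "S-parameters"],      # keywords
    "description": "Cold S-parameter measurements of the TWT structure.",
    "format": "Touchstone",                            # file format details
    "resource_type": "dataset",
    "identifier": "10.5281/zenodo.0000000",            # placeholder DOI
    "access_rights": "open access",
}

# Serialize the record so it can travel alongside the data files.
print(json.dumps(metadata, indent=2))
```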
Additionally, a readme.txt file could be used as an established way of
accounting for all the files and folders comprising the project and explaining
how all the files that make up the data set relate to each other, what format
they are in or whether particular files are intended to replace other files,
etc.
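As a minimal sketch of such a readme.txt, assuming hypothetical file names and
descriptions, the inventory could be generated as follows:

```python
# Minimal sketch: generate a readme.txt accounting for the files of a data
# set, as suggested above; file names and descriptions are hypothetical.
from pathlib import Path

files = {
    "twt_cold_sparams.s2p": "Measured cold S-parameters (Touchstone format).",
    "twt_cold_sparams_sim.s2p": "Simulated counterpart of the measured file.",
}

lines = ["TWEETHER data set - file inventory", ""]
for name, description in files.items():
    lines.append(f"{name}: {description}")

Path("readme.txt").write_text("\n".join(lines) + "\n")
```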
# DATA SHARING, ARCHIVING AND PRESERVATION
A repository is the mechanism to be used by the project consortium to make the
project results (i.e., publications and scientific data) publicly available
and free of charge for any user. According to this, several options are
considered/suggested by the EC in the frame of the Horizon 2020 programme to
this aim:
For depositing scientific publications:
* Institutional repository of the research institutions (e.g., RiuNet at UPV)
* Subject-based/thematic repository
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)

For depositing generated research data:
* A research data repository which allows third parties to access, mine, exploit, reproduce and disseminate free of charge
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)
The academic institutions participating in TWEETHER have appropriate
repositories available, which are in fact linked to OpenAIRE
(https://www.openaire.eu/participate/deposit/idrepos):
# Lancaster University - Lancaster E-Prints
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations, Books, chapters and sections, Other special item types
Website URL: http://eprints.lancs.ac.uk/
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: http://eprints.lancs.ac.uk/cgi/oai2
# Hochschulschriftenserver - Universität Frankfurt am Main
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations, Unpublished reports and working papers
Website URL: http://publikationen.ub.uni-frankfurt.de/
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: http://publikationen.ub.uni-frankfurt.de/oai
# Universitat Politècnica de Valencia (UPV) – RiuNet
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations, Learning Objects, Multimedia and audio, visual materials, Other
special item types
Website URL: http://riunet.upv.es/
Compatibility: OpenAIRE 2.0+ (DRIVER OA, EC funding)
OAI-PMH URL: https://riunet.upv.es/oai/driver,
_https://riunet.upv.es/oai/openaire_
Note that all these repositories make use of the OAI-PMH protocol (Open
Archives Initiative Protocol for Metadata Harvesting), which allows the
content to be properly found by means of the defined metadata.
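As an illustration of how such metadata can be harvested, here is a minimal
sketch of an OAI-PMH request issued with Python’s requests library; the
endpoint is the RiuNet URL listed above, and pagination and error handling are
omitted for brevity.

```python
# Minimal sketch: harvest Dublin Core metadata records via OAI-PMH.
# "verb" and "metadataPrefix" are standard OAI-PMH request parameters;
# the endpoint is the RiuNet OAI-PMH URL listed above.
import requests

OAI_ENDPOINT = "https://riunet.upv.es/oai/openaire"

response = requests.get(
    OAI_ENDPOINT,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
response.raise_for_status()

# The response is XML; a real harvester would parse it and follow
# resumptionToken elements to page through the full record set.
print(response.text[:500])
```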
These institutional repositories will be used to deposit the publications
generated by the institutions detailed above.
Apart from these repositories, the TWEETHER project will also use the
centralised repository ZENODO to ensure the maximum dissemination of the
information generated in the project (research publications and data), as this
repository is the one mainly recommended by the EC’s OpenAIRE initiative in
order to unite all the research results arising from EC funded projects.
Indeed, ZENODO 5 is an easy-to-use and innovative service that enables
researchers, EU projects and research institutions to share and showcase
multidisciplinary research results (data and publications) that are not part
of existing institutional or subject-based repositories. Namely, ZENODO
enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly
available uploads in order to make content easily and uniquely citable, and
this repository also makes use of the OAI-PMH protocol (Open Archives
Initiative Protocol for Metadata Harvesting) to facilitate content search
through the use of defined metadata. This metadata follows the schema defined
in INVENIO 6 (a free software suite enabling one to run one’s own digital
library or document repository on the web) and is exported in several standard
formats such as MARCXML, Dublin Core and the DataCite Metadata Schema,
according to the OpenAIRE Guidelines.
On the other hand, with ZENODO as the repository, the short- and long-term
storage of the research data will be secured, since the data are stored safely
in the same cloud infrastructure as research data from CERN’s Large Hadron
Collider. Furthermore, ZENODO uses digital preservation strategies to store
multiple online replicas and to back up the files (data files and metadata are
backed up on a nightly basis).
Therefore, this repository fulfils the main requirements imposed by the EC for
data sharing, archiving and preservation of the data generated in TWEETHER.
# DESCRIPTION OF DATA SETS TO BE GENERATED OR COLLECTED
This section provides an explanation of the different types of data sets to be
produced in TWEETHER, which have been identified at this stage of the project.
As the nature and extent of these data sets can evolve during the project,
more detailed descriptions will be provided in future versions of the DMP.
The descriptions of the different data sets, including their reference, file
format, level of access, and the metadata and repository to be used
(considerations described in Sections 6 and 7), are given below.
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SP_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_SP_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the measured or simulated S-parameter results for
the TWT structure.
It will mainly consist of small-signal calculations of the cold simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Touchstone format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse this data, CST Software or Magic Software is necessary.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_PS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_PS_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise results of the power levels at the relevant ports
of the TWT structure. They will include the DC bias conditions together with
the input and output power at all ports. The results will be either based on
measured values or obtained from simulations.
It will mainly consist of small-signal calculations of the hot simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
MDIF or XPA format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse this data, CST Software or Magic Software is necessary.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_CHIPSET_DS
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Semi-conductor Radio Chipset Datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This dataset contains the datasheets of the III-V semiconductor products used
by the two radios of the TWEETHER project.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System general architecture, network interfaces, system data sheet, sub-
assemblies datasheets, range diagrams, photos of equipment. General
information useful for potential users.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System Deployments
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System coverage capabilities. Deployment methods to optimize coverage,
frequency re-use process. Scenario graph. General information useful for
potential users.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_MM-A_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
W-band Millimetre Antennas
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Adaptation S parameters, bandwidth, radiating diagrams: co-polar & cross-
polar. Antennas datasheet: graphs and tables.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial description
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise a description of the wireless network architecture
including the hardware, interfaces and services that will be deployed at the
UPV campus and used for the field trial. In addition, it will provide
information about sites (number of sites and its location), the expected
objectives to be achieved and the envisaged scenarios for the system.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial long term KPI measurements
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the results of the measurement campaign carried
out to evaluate the performance of the field trial deployed at the UPV campus
integrating the technology developed in TWEETHER.
It will include data obtained from the Network Monitoring System (PRTG
software or similar), which collects KPIs from the network elements. Some
examples of KPIs are throughput, RSSI (received signal strength indicator) and
dropped packets. Those data will be publicly accessible through a URL.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_3
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial bandwidth tests
</td> </tr>
<tr>
<td>
**Data set description**
</td>
</td>
<td>
This data set will comprise descriptive information of the bandwidth tests
used to evaluate the network at specific times. Those tests will employ
traffic generator software to send and receive traffic between the hosts
comprising the network, providing a measurement of the maximum available
bandwidth as well as latency and jitter values.
It will mainly consist of a doc-type document with details of the steps to be
followed in this test and the results obtained, as well as examples of the
scripts (or their description) used to obtain those results.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Word or PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To perform this test, the iperf tool (or similar) is required.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
Apart from the data sets specified above, which will be made open, other data
generated in TWEETHER, such as the detailed circuit specifications and
realisation, and the terminal integration, should be kept confidential to
avoid jeopardising future exploitation.
0790_ACINO_645127.md
## Summary
Producing the Data Management Plan (DMP) within the project is a part of the
Horizon 2020 pilot action on open access to research data in which ACINO
participates. The purpose of the DMP is to describe the main elements of the
data management policy that will be used by the ACINO project with regard to
all the datasets to be generated by the project.
In other words, the Data Management Plan will describe the format and the way
to store, archive and share the data created within the project as well as the
use of the plan itself by the project participants. The data may include, but
is not limited to, code, publications and measured data, for example from
field trials. The Plan is a living document whose data management content is
updated from its creation (month 6 of the project) to the end of the project
(month 36).
# Introduction
Following the EC template, the data management plan includes the following
major components.
* Data set reference and name
* Data set description
* Standards and metadata
* Data sharing
* Archiving and preservation

_Figure 1. Structure (template) of the data management plan._
Specifically, for ACINO these components are summarized below.
The figure summarizes the components for ACINO: the reference and name follow
the pattern ACINO [Name] [Type] [Place] [Date] [Owner] [Target User]; the
description expands on it (e.g., “Traffic meas. field trial, Kista, June 2015,
planned publication in JLT”); the metadata is kept in a text file if it is not
part of the data file; and both sharing and archiving rely on zenodo.org
integrated with GitHub.

_Figure 2. Main components of the ACINO data management plan._
Details for each component are given in the following sections.
# Data set reference and name
The following structure is proposed for ACINO data set identifier:
ACINO [Name] [Type] [Place] [Date] [Owner] [Target User]
Where
* “Name” is a short name for the data.
* “Type” describes the type of data (e.g. code, publication, measured data).
* “Place” describes the place where the data were produced.
* “Date” is the date in the format “YYYY-MM-DD”.
* “Owner” is the owner or owners of the data (if any).
* [Optional] “Target user” is the target audience of the data.
* “_” (underscore) is used as the separator between the fields.
For example,
“ACINO_Field trial_Measurement_data_Kista_2015-06-30_Acreo_Internal.dat” is a
data file from the field trial in Kista, Sweden from 2015-06-30 made and owned
by Acreo with extension .dat (MATLAB). More information about the data is
provided in the metadata (see the following section).
All the data fields in the identifier above, apart from the target user, are
mandatory. If the owner cannot be specified, “Unspecified-owner” should be
indicated.
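As a hedged illustration of this convention, the following Python sketch
assembles an identifier from the fields above; the helper name and the
fallback for a missing owner are our own assumptions.

```python
# Sketch of a builder for the ACINO data set identifier described above.
# The function name is an illustrative assumption; the field order, the
# "_" separator and the "Unspecified-owner" fallback follow the text.

def build_identifier(name, dtype, place, date, owner=None, target_user=None):
    """Assemble an ACINO data set identifier from its fields."""
    fields = [
        "ACINO",
        name,                          # short name for the data
        dtype,                         # e.g. code, publication, measured data
        place,                         # where the data were produced
        date,                          # "YYYY-MM-DD"
        owner or "Unspecified-owner",  # mandatory, with the stated fallback
    ]
    if target_user:                    # optional field
        fields.append(target_user)
    return "_".join(fields)

# Roughly reproduces the example from the text (before the file extension):
print(build_identifier("Field trial", "Measurement data",
                       "Kista", "2015-06-30", "Acreo", "Internal"))
```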
# Data set description and metadata
The previous section defined a data set identifier. The data set description
is essentially an expanded description of the identifier with more details.
The data set description is organized as metadata, in a similar way to the
identifier but with more detail, and, depending on the file format, will
either be incorporated as part of the data file or kept as a separate file (in
its simplest form) in text format. In the case of a separate metadata file, it
will have the same name with the suffix “METADATA”.
For example, the metadata file name for the data file from the previous
section will look as follows:
“ACINO_Field trial_Measurement
data_Kista_2015-06-30_Acreo_Internal_METADATA.txt”
The metadata file can also describe a number of files (e.g. a number of log
files).
The project may consider the possibility of providing the metadata in XML or
JSON format, if necessary for convenience of parsing and further processing.
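Building on the example above, a minimal sketch of emitting such a companion
METADATA file, plus the optional JSON form, might look as follows; the key set
is an assumption.

```python
# Sketch: write the metadata describing a data file into a companion
# "<data file name>_METADATA.txt" file, as described above; the key set
# is illustrative. The JSON variant is the optional machine-readable form.
import json
from pathlib import Path

data_file = Path(
    "ACINO_Field trial_Measurement data_Kista_2015-06-30_Acreo_Internal.dat"
)

metadata = {  # illustrative keys expanding the identifier fields
    "name": "Field trial",
    "type": "Measurement data",
    "place": "Kista",
    "date": "2015-06-30",
    "owner": "Acreo",
    "target_user": "Internal",
    "description": "Traffic measurements from the Kista field trial.",
}

# Plain-text metadata file: same name as the data file plus the suffix.
meta_file = data_file.with_name(data_file.stem + "_METADATA.txt")
meta_file.write_text("\n".join(f"{k}: {v}" for k, v in metadata.items()))

# Optional JSON form, for convenience of parsing and further processing.
meta_file.with_suffix(".json").write_text(json.dumps(metadata, indent=2))
```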
# Data sharing
ACINO has chosen the zenodo.org repository for storing the project data, and
an ACINO project account has been created 1 . Zenodo.org is a repository
supported by CERN and the EU OpenAIRE project 2 ; it is open, free, searchable
and structured, with flexible licensing allowing for the storage of all types
of data: datasets, images, presentations, publications and software. In
addition to that,
* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com 3 , where the project code will be stored. GitHub provides a free and flexible tool for code development and storage.
* Zenodo assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citable.
All the above makes Zenodo a good candidate as a _unified_ repository for all
foreseen project data (presentations, publications, code and measurement data)
from ACINO.
Information on using Zenodo by the project partners with application to the
ACINO data will be circulated within the consortium and addressed within the
respective workpackage (WP6).
The process of making the ACINO data public and publishable at the repository
will follow the procedures described in the Consortium Agreement. For the
code, the project partners will follow the internal “Open Source Management
Process” document.
All the public data of the project will be openly accessible at the
repository. Non-public data will be archived at the repository using the
“closed access” option.
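For concreteness, here is a hedged sketch of creating a deposition and
attaching a file through Zenodo’s public REST API using Python’s requests
library; the endpoint and call shape follow Zenodo’s API documentation as we
understand it, and the token and file names are placeholders.

```python
# Hedged sketch of depositing ACINO data on zenodo.org via its REST API.
# Endpoint and parameters follow Zenodo's public API documentation as we
# understand it; ACCESS_TOKEN and the file name are placeholders.
import requests

ACCESS_TOKEN = "<zenodo-access-token>"  # placeholder
API = "https://zenodo.org/api/deposit/depositions"

# 1. Create an empty deposition.
deposition = requests.post(
    API, params={"access_token": ACCESS_TOKEN}, json={}, timeout=30
).json()

# 2. Attach a data file to the new deposition.
with open("measurements.dat", "rb") as fh:
    requests.post(
        f"{API}/{deposition['id']}/files",
        params={"access_token": ACCESS_TOKEN},
        data={"name": "measurements.dat"},
        files={"file": fh},
        timeout=60,
    )

# Publishing (which mints the DOI) would be a further POST to the
# deposition's "publish" action endpoint.
```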
# Archiving and preservation
The Guidelines on Data Management in Horizon 2020 require defining procedures
that will be put in place for long-term preservation of the data and backup.
The zenodo.org repository possesses these archiving capabilities including
backup and will be used to archive and preserve the ACINO data.
# Use of the Data Management plan within the project
The plan is used by the ACINO partners as a reference for data management
(naming, providing metadata, storing and archiving) within the project each
time new project data are produced.
The project partners are introduced to the DMP and its use as part of WP6
activities. Relevant questions from partners will also be addressed within
WP6. The workpackage will also provide support to the project partners on
using Zenodo as the data management tool.
0792_IOSTACK_644182.md
# Executive summary
Open data is becoming increasingly important for maximizing the excellence and
growth of the research activity in Europe. In this sense, the motivation of
the IOStack project is very aligned with the foundations of open data: IOStack
aims at building a Software-Defined Storage toolkit on top of OpenStack, which
is the largest open source project in cloud technologies. Thus, our next step
is to plan how this natural synergy between IOStack and the open source
community is materialized into a set of open data assets available to the
general public.
The present document provides the data management plan of the IOStack project,
in particular for the management of open data. It describes the overall open
data strategy in IOStack as well as the concrete actions that the consortium
will undertake to transform the resulting project data assets into open data.
Essentially, these actions are directed to the three main data assets produced
in IOStack: publications, datasets and software source code. We also describe
how we will make use of dissemination mechanisms to increase the impact and
visibility of the open data generated in IOStack.
# Open Data in IOStack
Open data is becoming increasingly important for maximizing the excellence and
growth of the research activity in Europe. Since the beginning of the 2000’s,
Europe has been leading a major initiative to make publicly funded research
projects _actually public_ by giving special consideration to the _openness
and transparency_ of the management of a project’s results. Needless to say,
most of these results and research assets can be classified as _data_ (e.g.,
research papers, datasets). Clearly, the wave of open data continues and is
being strengthened in the H2020 framework.
The value of open data is clear: it improves circulation, access to and
transfer of scientific knowledge and tools, which in turn, optimizes the
impact of publicly-funded scientific research. In this sense, the motivation
of the IOStack project is very aligned with the foundations of open data.
Actually, IOStack aims at building a Software-Defined Storage toolkit on top
of OpenStack, which is the largest open source project in cloud technologies.
This level of commitment with an open source community gives a sense on the
open data strategy of the project as a whole.
It is worth mentioning that some partners of the IOStack consortium already
adhere to open data standards in other EU-funded projects. For instance, URV
(coordinator of the IOStack project) is currently implementing _green_ open
data policies for datasets, research papers and software source code in the
context of the FP7 CloudSpaces project 1 . Thus, our experience in opening up
prior research results guarantees an effective application of open data
policies in IOStack as well.
In this document, we define the policies and mechanisms that will help us to
transform the project’s outputs and research results into open data.
# Types of Data Generated/Collected in IOStack
In this project, we consider three main sources of assets that can be subject
to open data policies: _research papers_ , _datasets_ and _software source
code_ .
**Research papers** : In IOStack, research papers are the main vector for
propagating our research contributions to the appropriate audience, both
conferences and journals. During the project, we will target high-quality
publications in order to maximize the impact of our research discoveries. In
any case, as we detail later on, all the publications related to the IOStack
project will be made publicly available following _green_ open data standards.
**Datasets** : Often, a research publication is based on or has as a result a
dataset. Datasets may contain any type of information that makes it possible
to reproduce or verify the claims supported in the publication itself. In
IOStack, we foresee the generation of various datasets ranging from company
use cases workloads to data capturing the performance results of benchmarking
our SDS toolkit. Such information will be of great interest for the community
in order to foster research in this field. Datasets will also be made publicly
available in conjunction with the necessary metadata and tools for processing
the dataset.
**Software source code** : The ultimate objective of IOStack is to build an
open-source SDS toolkit for OpenStack. From an engineering perspective, such
an ambitious goal cannot be achieved as a single, monolithic piece of
software, but rather as a set of advanced software components converging on a
single architecture. Our objective is both to achieve proper software
management in IOStack and to transform the source code into open data from the
very beginning. As we detail next, all the partners are contributing to a
public and centralized code management system. This makes the development of
the project open and transparent for the public.
In what follows, we depict a battery of actions to convert the previous three
types of data assets into open data.
# IOStack Open Data Policies and Standards
Next, we aim at describing the overall open data strategy of IOStack as well
as the concrete measures to make data assets publicly available (see Fig. 1).
The figure shows the four action lists of the strategy: datasets (methodology,
analysis tools, guidelines), research publications (central repository,
publications metadata), software source code (public repository, community
involvement), and the dissemination plan (social media, conferences and
events, mailing lists).

Figure 1: High-level open data strategy in IOStack.
As can be observed in Fig. 1, the open data strategy of IOStack is based on 4
main action lists: one action list for each type of data asset produced during
the project, plus the dissemination plan, which is a particular action list to
promote the impact of the open data produced in IOStack. Thus, IOStack
implements an integral plan for generating open data and promoting it in order
to achieve the widest dissemination possible.
All the elements in Fig. 1 have a common denominator: the IOStack web site 2
. The IOStack web site is being actively maintained and offers easy access to
all the data assets of the project (publications, code, datasets), the
project’s deliverables and the social media accounts of the project 3 .
We continue by depicting the different action lists of the IOStack open data
strategy.
## Research Papers
Before going any further, we should consider that there are two main
approaches to implementing open data for research papers: _gold_ and _green_
open data [1]. In the former case, researchers publish in an Open Access (OA)
journal, where the publisher of a scholarly journal provides free online
access. In the latter case, researchers deposit a version of their published
works into a subject-based or institutional repository.
Although the _gold_ open data approach has gained strength in recent years,
_we advocate for the green approach_ due to a strong reason: today, most
high-impact conferences and journals are not yet Open Access (OA).
Consequently, adopting a pure gold open data approach may be to the detriment
of the potential impact of IOStack publications. For this reason we adopt a
green open data approach in IOStack.
In what follows, we describe an action list to enable better access to the
scientific publications of IOStack in order to convert them into _green_ open
data.
* **Self-archiving** : Self-archiving is considered a valid route to make a research paper open data (green). URV has created a repository to archive all the publications related to the project. Concretely, the repository for publications is embedded into the IOStack official web site and can be accessed at “http://iostack.eu/publications”. The repository offers a user-friendly interface that permits to navigate across multiple publications.
* **Deposit procedure** : In each publication entry in the repository, we deposit a machine-readable copy (e.g., PDF) of the final version or final peer-reviewed manuscript accepted for publication. We will attempt to deposit the final version of the manuscript as soon as possible, trying to avoid any embargo period.
* **Durability and availability**: Internally, the server that hosts the publication repository (and the IOStack web site) uses disk-level redundancy to withstand failures and data corruption. Moreover, URV backs up the data on that server to other machines every week. To maximize the durability and availability of open access to our research publications, each partner will also self-archive its own publications, so that in case of a catastrophic event in URV’s infrastructure the publications remain available. For example, URV already keeps two separate repositories: one for the publications of IOStack and one for the publications of the Arquitectures i Serveis Telemàtics (AST) research group (“http://ast-deim.urv.cat/web/publications”).
* **Publication metadata**: Every paper in the IOStack publication repository contains associated metadata describing the type and topic of the publication (abstract), as well as the original publisher, venue and Digital Object Identifier (DOI).
* **Standard methodologies**: Apart from the way research publications are made publicly available, we believe it is also important to implement standard and open methodologies during the elaboration of research articles. To this end, as part of the benchmarking framework of IOStack (D2.2), the consortium will resort to existing open benchmarks and datasets in order to validate research contributions.
With this initial set of actions, we aim to transform the research papers of
IOStack into green open data that is easily accessible to the general public.
## Datasets
In many cases, a research publication has an associated dataset, either as a
source of information from which novel observations are extracted or as a
result of the research process. Our aim is to deposit, at the same time, the
research data needed to validate the results of the associated research
publications. Next, we specify the action list that we undertake to implement
green open data policies for datasets:
* **Self-archiving**: Similarly to the approach adopted for research publications, URV has created a repository to store all the datasets related to IOStack. To ease locating datasets, the repository for datasets is also embedded into the IOStack official web site and can be accessed at “http://iostack.eu/datasets”.
* **Durability and availability**: In the general case, the procedure to maintain the availability and durability of datasets is the same as for research publications, since all these data assets reside on the same physical servers. A distinguishing point for datasets, however, is that we also make active use of them in a data processing cluster located at the AST research group labs (URV). Internally, this cluster implements 3-way replication, so datasets have an additional physical infrastructure that maximizes their durability in case the servers hosting the IOStack web site and the publications are damaged.
* **Open formats and metadata**: Datasets will be generated using open formats instead of proprietary ones (e.g., Microsoft Excel). Concretely, we expect to make extensive use of the Comma-Separated Values (CSV) format, which is generic enough to express very different types of information. Of course, for every dataset in the repository we will provide the metadata required to understand the _topic_, _purpose_ and _collection/generation methodology_, as well as an explanation of the different _fields_ of the dataset. This will improve researchers’ access to the datasets generated in IOStack.
* **Parsing tools**: Sometimes it is necessary to parse a dataset to extract particular parts of the information contained inside it. If parsing tools are necessary for the correct analysis of a dataset, we will provide them jointly with the dataset in the repository (see the sketch after this list).
With this action list on datasets, we will facilitate open access to them and
convert them into green open data.
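
As a minimal illustration of this open-format policy, the sketch below reads a dataset’s metadata sidecar and its CSV contents using only the Python standard library; the file names and metadata fields are hypothetical.

```python
import csv
import json

# Hypothetical file names: each dataset entry is assumed to ship its CSV data
# together with a small metadata sidecar describing topic, purpose,
# collection methodology and the meaning of each field.
with open("dataset-metadata.json", encoding="utf-8") as f:
    metadata = json.load(f)

print("Topic:", metadata.get("topic"))
print("Purpose:", metadata.get("purpose"))

with open("dataset.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Each row is a plain dict keyed by the CSV header names, so the data
        # can be analyzed without proprietary tooling.
        print(row)
        break  # show only the first record in this sketch
```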
## Source Code
The ultimate objective of IOStack is to provide an SDS toolkit (i.e., software
source code) on top of the OpenStack platform. This means that, during the
development of the project, we should adopt open data policies for the
produced source code from the very beginning. Our strategy not only releases
the results at the end of the project as open data, but also keeps the source
code open throughout the entire software life-cycle. In turn, this paves the
way for the involvement of the OpenStack community in IOStack as well.
* **Central code repository**: To make the source code open to the general public, we created a code repository for IOStack on GitHub at “https://github.com/iostackproject”. This repository has also been linked from the IOStack web site (“http://iostack.eu/software”). GitHub is currently one of the most popular code management systems due to the advanced features and easy management it provides to developers. This has various potential benefits for the management and dissemination of IOStack source code: for instance, GitHub is well known across developer communities, which facilitates access to the source code of IOStack. Moreover, GitHub offers plenty of options to fork/branch/merge versions of a software project, enabling third parties to easily extend the source code developed in IOStack (even for internal use).
* **Availability and durability**: GitHub is a cloud-based system. This means that, internally, GitHub stores code repositories across several physical machines, even in distinct geographical regions. Therefore, the availability and durability of IOStack source code is ensured by and delegated to GitHub, in contrast to our self-archiving approach for research papers and datasets.
* **Licensing**: Whenever possible, we will retain the copyright and grant adequate licenses to the source code created in IOStack. In general, the code will be released under open licenses such as the Apache License 2.0 or the GNU General Public License 3.0. Broadly speaking, these licenses give the user of the software the freedom to use it for any purpose, to distribute it, to modify it, and to distribute modified versions, under the terms of the license, without concern for royalties [2]. However, the intellectual property of the source code is preserved: for instance, the Apache License requires preservation of the copyright notice and disclaimer related to the project [3] (a typical per-file notice is sketched after this list).
* **Source code metadata and “how to”**: As is standard practice in the open source community, every software project in the IOStack repository will include a README file to help users with installation, testing and first steps with the software. This will ease the use and adoption of the source code produced in IOStack.
This action list will turn the IOStack source code into open data assets.
Next, we explain how we will maximize the impact of open data in IOStack
through our dissemination plan.
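
As an illustration of the copyright-notice preservation required by the Apache License, the sketch below shows the canonical Apache 2.0 per-file notice as it might appear at the top of an IOStack source file; the copyright holder line is an assumption, while the license text itself is the standard Apache 2.0 boilerplate.

```python
# Copyright 2015 The IOStack Project Authors (hypothetical copyright line)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```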
## Disseminating Open Data
From a practical viewpoint, generating open data is only part of the data
management work to be done in IOStack. We believe that disseminating open
data results is as important as generating them, since a hidden open data
item is of little use to the general public.
In the following, we depict the major actions related to dissemination of open
data assets in IOStack:
* **Dataset link in research publications**: In IOStack, all research publications that make use of datasets generated within the project should cite the repository where the datasets are hosted. Although this is common practice in the research community, it is worth remarking that research publications may have high visibility, which can benefit the dissemination of open data in IOStack.
* **Promotion in conferences/events**: One great benefit of publishing in high-impact conferences is direct access to the research community during the presentation of a paper. For this reason, if a research publication involves producing/collecting datasets, the responsible IOStack partner will disseminate not only the particular research contributions of the publication but also the datasets.
* **Social media and mailing lists**: IOStack already has a Twitter account to disseminate the news and events related to the project (“http://twitter.com/iostackproject”). The consortium will use this account collaboratively to amplify the dissemination of our open data contributions. Moreover, MPStor (leader of the dissemination plan) will make use of large mailing lists to notify industrial/research organizations about the advances and contributions of the project in this respect.
* **Internal reutilization**: The IOStack consortium will maximize collaboration between partners to exploit the open data generated during the project. In addition to saving unnecessary effort, we will act as testbeds for our own open data. This will lead to future enhancements of the action lists defined in this document.
# IOStack Open Data Assets
In this section, we enumerate the data assets available in the IOStack
project to date. Of course, this is only a snapshot of the current state of
IOStack’s results. As the project advances, we will keep the open data assets
of the project updated in the data management plan.
## Research Papers
### SDGen: Mimicking Datasets for Content Generation in Storage Benchmarks
* _Author Partners_ : URV and IBM.
* _Published at_ : 13th USENIX Conference on File and Storage Technologies (FAST’15). February 16-19, 2015, Santa Clara, CA, USA.
* _Deposit format_ : PDF File.
* _Available at_ : http://iostack.eu/publications/download/publications/2-fast-sdgen http://ast-deim.urv.cat/web/publications?view=publication&task=show&id=558 https://www.usenix.org/conference/fast15/technical-sessions/presentation/gracia-tinedo
* _Archiving and preservation_ : The publication is freely available and archived at the IOStack repository, AST lab servers and the USENIX Association.
* _Type of open data_ : Green open data.
## Datasets
There are datasets in the collection/generation phase, but they are not
available yet.
## Source Code
### Storlets
* _Responsible Partner_ : IBM.
* _Software description_ : The Storlet project provides computation-close-to-data functionalities to the IOStack architecture for object storage.
* _Available at_ : https://github.com/iostackproject/swift-storlets.
* _Archiving and preservation_ : Freely available and archived at GitHub.
* _License_ : Apache License 2.0.
* _Status_ : Development.
### SDS Controller for Object Storage
* _Responsible Partner_ : URV, IBM and BSC.
* _Software description_ : The SDS Controller for object storage will provide unified management, orchestration and automation of the services that form the IOStack toolkit.
* _Available at_ : https://github.com/iostackproject/SDS-Controller-for-Object-Storage.
* _Archiving and preservation_ : Freely available and archived at GitHub.
* _License_ : Apache License 2.0.
* _Status_ : Development.
### IO Bandwidth Differentiation
* _Responsible Partner_ : BSC.
* _Software description_ : The Bandwidth Differentiation service will enable the IOStack toolkit to regulate the bandwidth assigned to each tenant in a multi-tenant analytics platform on Swift.
* _Available at_ : https://github.com/iostackproject/IO-Bandwidth-Differentiation and https://github.com/iostackproject/IO-Bandwidth-Differentiation-Client.
* _Archiving and preservation_ : Freely available and archived at GitHub.
* _License_ : Apache License 2.0.
* _Status_ : Development.
### SDGen
* _Responsible Partner_ : URV and IBM.
* _Software description_ : SDGen is a synthetic data generator that can emulate the compression properties of real datasets, a fundamental aspect when benchmarking data reduction techniques in IOStack.
* _Available at_ : https://github.com/iostackproject/SDGen.
* _Archiving and preservation_ : Freely available and archived at GitHub.
* _License_ : GNU General Public License 3.0.
* _Status_ : Released.
# Final Remarks
Nowadays, open data is becoming a key enabler for the European Research Area
to maximize the impact and return of publicly funded research. In this
document, we described the strategy and actions that we are undertaking in
IOStack to transform the data assets of the project (datasets, publications,
source code) into open data. Our objective is to make the project’s results
as easy to access as possible for the general public and European research
institutions.
However, our efforts to promote the generation and management of open data
in IOStack must continue, and for this reason the current manuscript is not a
definitive version of IOStack’s data management plan. Rather, this document
will evolve both _quantitatively_ (in the number of open data items
available) and _qualitatively_ (refining the presented action lists, possibly
including new actions) as the project progresses.
# Executive Summary
This document presents the Data Management Plan of the VERTIGO STARTS project.
It was created based on the “Horizon 2020 Data Management Plan” template
document issued by the European Commission. After an overview of this
template, which is based on the FAIR (_Findable, Accessible, Interoperable and
Re-usable_) criteria, it presents the various processes of data production and
use in VERTIGO (Data Summary) and then answers all the questions the template
raises in terms of the FAIR criteria.
The Data Management Plan is produced in the framework of VERTIGO WP3 -
Deployment of the brokerage online platform, under the responsibility of IRCAM
as workpackage leader and project coordinator.
The main data sets taken into consideration are the ones produced by the
project partners and by third parties concerned with STARTS activities,
including other STARTS projects and stakeholders involved in the STARTS
Residencies program.
Table of Abbreviations
<table>
<tr>
<th>
AGPL
</th>
<th>
Affero General Public License
</th> </tr>
<tr>
<td>
CNIL
</td>
<td>
Commission Nationale de l'Informatique et des Libertés (France)
</td> </tr>
<tr>
<td>
CNRS
</td>
<td>
Centre National de la Recherche Scientifique (France)
</td> </tr>
<tr>
<td>
CSS
</td>
<td>
Cascading Style Sheets
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
FAIR
</td>
<td>
Findable, Accessible, Interoperable and Re-usable
</td> </tr>
<tr>
<td>
SASS
</td>
<td>
Syntactically Awesome Stylesheets
</td> </tr>
<tr>
<td>
JSON
</td>
<td>
JavaScript Object Notation
</td> </tr>
<tr>
<td>
URI
</td>
<td>
Uniform Resource Identifier
</td> </tr>
<tr>
<td>
SSH
</td>
<td>
Secure Shell
</td> </tr>
<tr>
<td>
SSL
</td>
<td>
Secure Sockets Layer
</td> </tr>
<tr>
<td>
W3C
</td>
<td>
World Wide Web Consortium
</td> </tr> </table>
# 1. An introduction to the “Horizon 2020 Data Management Plan”
The Horizon 2020 DMP has been designed to be applicable to any Horizon 2020
project that produces, collects or processes research data. As part of making
research data findable, accessible, interoperable and re-usable (FAIR), a
DMP should include information on:
* the handling of research data during & after the end of the project
* what data will be collected, processed and/or generated
* which methodology & standards will be applied
* whether data will be shared/made open access and
* how data will be curated & preserved (including after the end of the project).
The Horizon 2020 DMP contains a set of key questions (in blue in the rest of
the document) to be answered with a level of detail appropriate to each
project.
It is not required to provide detailed answers to all the questions in the
first version of the DMP that needs to be submitted by month 6 of the project.
Rather, the DMP is intended to be a living document in which information can
be made available on a finer level of granularity through updates as the
implementation of the project progresses and when significant changes occur.
Therefore, DMPs should have a clear version number and include a timetable for
updates. As a minimum, the DMP should be updated in the context of the
periodic evaluation/assessment of the project. If there are no other periodic
reviews envisaged within the grant agreement, an update needs to be made in
time for the final review at the latest.
This DMP may be updated as the policy evolves.
## FAIR Data Management at a glance: issues to cover in Horizon 2020 DMP
This table provides a summary of the Data Management Plan (DMP) issues to be
addressed, as outlined above.
<table>
<tr>
<th>
**DMP component**
</th>
<th>
</th>
<th>
**Issues to be addressed**
</th> </tr>
<tr>
<td>
**1\. Data summary**
</td>
<td>
•
</td>
<td>
State the purpose of the data collection/generation
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Explain the relation to the objectives of the project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify the types and formats of data generated/collected
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify if existing data is being re-used (if any)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify the origin of the data
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
State the expected size of the data (if known)
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Outline the data utility: to whom will it be useful
</td> </tr>
<tr>
<td>
2. **FAIR Data**
2.1. Making data findable, including provisions for metadata
</td>
<td>
•
•
</td>
<td>
Outline the discoverability of data (metadata provision) Outline the
identifiability of data and refer to standard identification mechanism. Do you
make use of persistent and unique identifiers such as Digital Object
Identifiers?
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Outline naming conventions used
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Outline the approach towards search keyword
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Outline the approach for clear versioning
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify standards for metadata creation (if any). If there are no standards in
your discipline describe what type of metadata will be created and how
</td> </tr>
<tr>
<td>
2.2 Making data openly
accessible
</td>
<td>
•
</td>
<td>
Specify which data will be made openly available? If some data is kept closed
provide rationale for doing so
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify how the data will be made available
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify what methods or software tools are needed to access the data? Is
documentation about the software needed to access the data included? Is it
possible to include the relevant software (e.g. in open source code)?
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify where the data and associated metadata, documentation and code are
deposited
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify how access will be provided in case there are any restrictions
</td> </tr>
<tr>
<td>
2.3. Making data interoperable
</td>
<td>
•
</td>
<td>
Assess the interoperability of your data. Specify what data and metadata
vocabularies, standards or methodologies you will follow to facilitate
interoperability
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify whether you will be using standard vocabulary for all data types
present in your data set, to allow interdisciplinary interoperability? If not,
will you provide
</td> </tr>
<tr>
<td>
</td>
<td>
</td>
<td>
mapping to more commonly used ontologies?
</td> </tr>
<tr>
<td>
2.4. Increase data re-use (through clarifying licences)
</td>
<td>
•
</td>
<td>
Specify how the data will be licenced to permit the widest reuse possible
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify when the data will be made available for re-use. If applicable,
specify why and for what period a data embargo is needed
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify whether the data produced and/or used in the project is useable by
third parties, in particular after the end of the project? If the re-use of
some data is restricted, explain why
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Describe data quality assurance processes
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Specify the length of time for which the data will remain reusable
</td> </tr>
<tr>
<td>
**3\. Allocation of resources**
</td>
<td>
•
</td>
<td>
Estimate the costs for making your data FAIR. Describe how you intend to cover
these costs
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Clearly identify responsibilities for data management in your project
</td> </tr>
<tr>
<td>
</td>
<td>
•
</td>
<td>
Describe costs and potential value of long term preservation
</td> </tr>
<tr>
<td>
**4\. Data security**
</td>
<td>
•
</td>
<td>
Address data recovery as well as secure storage and transfer of sensitive data
</td> </tr>
<tr>
<td>
**5\. Ethical aspects**
</td>
<td>
•
</td>
<td>
To be covered in the context of the ethics review, ethics section of DoA and
ethics deliverables. Include references and related technical aspects if not
covered by the former
</td> </tr>
<tr>
<td>
**6\. Other**
</td>
<td>
•
</td>
<td>
Refer to other national/funder/sectorial/departmental procedures for data
management that you are using (if any)
</td> </tr> </table>
# Data Summary
What is the purpose of the data collection/generation and its relation to the
objectives of the project?
This chapter presents the various data production processes in the project. It
covers data produced by the project itself for its own use and/or
communication, and data produced by external users: one of the central
features of the starts.eu platform published by the project is to be the main
support for communication and matchmaking within the STARTS community
(presentation of involved artists, projects and institutions, publication of
news, of calls, etc.). A specific case is the STARTS Residencies process
managed by VERTIGO, which presents available Tech Projects and Producers,
enables artists and producers to apply to yearly calls for residencies, and
then follows the selected residencies and publishes documentation on their
process and outcomes.
**Common provisions for data published on the project web platforms starts.eu
and vertigo.starts.eu:**
The starts.eu domain belongs to IRCAM and is hosted by IRCAM on its own
servers located in France. As for the content published on these websites, the
following disclaimer applies:
_This website is the property of IRCAM, Institut de Recherche et de
Coordination Acoustique/Musique, based at 1, place Igor Stravinsky, 75004
Paris, a non-profit organization state-approved by decree dated the 24th of
December 1976, Siret number: 309 320 612 00018._
_This site, along with all the material it contains, is the property of IRCAM
and is protected in accordance with the Intellectual Property Code. As such,
all reproductions or representations (partial or complete) of this website,
and all extractions from our databases, by whatever means, without specific
authorization from IRCAM, are strictly prohibited._
_This website enables external users to enter their data, including textual
and multimedia contents, and to publish these data online. This concerns in
particular presentations of R&D projects and of organisations willing to
participate in the VERTIGO residencies program as Producers, as well as
physical persons registering to the platform, such as artists with their
personal data, portfolios and biographies. All data (texts, images, videos,
sounds) are provided under the sole responsibility of these users, and IRCAM
disclaims all liability as to the authenticity and truthfulness of the
supplied information._
## Data production processes
The hierarchy of data production processes as part of the project is presented
hereinafter. The item numbers are used in the rest of the document for
referencing the related processes.
1. _Data generated by project partners for the project execution_
1. _Private data_
These data are produced and shared only by the project consortium members for
its execution: internal reports, deliverables, legal documents, calendar, etc.
They are stored in private repositories managed and hosted by IRCAM in its own
servers and accessible only by login/password to the consortium participants:
project data cloud, project management tools, internal mailing lists, etc.
2. _Public data for communication_
These data are published in the project web platforms starts.eu and
vertigo.starts.eu. Projects partners have access to the web platform
backoffice through an individual password/login.
3. _Open source software publication_
Most of the project’s software development is implemented in open source form
and is published in relevant external repositories such as github.
2. _Data generated by third parties (starts.eu)_
1. _Registration of physical persons: login and user profile_
Physical persons can register to starts.eu with an email address and password
and can define their user profile, including data fields to be published (such
as their name or pseudonym) and others stored privately (such as their email
address) and accessible only to the project consortium.
2. _Registration of other entities: projects, legal entities, etc._
Physical persons who commit to being mandated to represent an entity, such as
a legal entity (cultural institution, research lab, company, etc.) or a
project (such as a Tech collaborative project), can enter information about it
and define the data to be published (text, logo, photo, URL, location, etc.)
and the data to be kept private and used solely for platform management.
3. _Data production through the platform operation_
Registered users can use the platform for publishing information, exchanging
with other users, etc. This is for instance the case of other STARTS projects
publishing related news on starts.eu.
4. _Applications to STARTS Residencies calls_
Artists, possibly together with legal entities (“Producers”), can apply to
STARTS Residencies calls. They therefore have to upload various materials
(text, photos, videos) in digital form (CV, portfolio, residency project,
etc.). The call clearly defines which of these materials are to be published
and which are to be kept private. In the latter case, access is restricted to
the project’s participants and to the call’s reviewers and jury members, who
sign a non-disclosure agreement covering the related information.
5. _Third parties’ calls managed by the platform_
The platform also enables third parties, referred to as “Call organisers”, to
publish their own calls for residencies. By using the platform, Call
organisers commit to conforming to the VERTIGO STARTS Residencies Charter
(full text in the Exhibit), and in particular to differentiating the private
or public dissemination status of application data and to defining in the call
text the use to be made of private data.
6. _Data produced in the process of running STARTS Residencies_
Residencies selected as part of the STARTS Residencies program operated by
VERTIGO are expected to produce data in digital form presenting their process
and outcomes, including as part of a blog provided by the project. The
ownership status of these data and the licenses granted for their
dissemination are defined in a co-production contract jointly signed at the
beginning of the residency by all parties involved: the artist(s), the
representative(s) of the Tech project, the representative of VERTIGO and,
optionally, the representative(s) of the producer(s).
3. _Data produced by the project from third parties’ inputs_
1. _Modifications of user-entered data_
The project may apply slight modifications to user-entered data such as text
rephrasing, image reframing or resizing, video transcoding, etc. before
publishing them.
2. _Data produced from users’ activity_
Various kinds of data and figures are produced on the basis of the users’
activity, such as usage statistics, or data linked to their profiles (such as
chatting with other users or following them). These data are used either to
monitor platform usage through anonymous statistics or for the platform
operation itself (such as storing user settings).
What types and formats of data will the project generate/collect?
* _the software package (Vertigo-Mezzo) used for the project web platform under the starts.eu domain; it is written in Python, JavaScript, CSS, SASS and HTML_
* _data generated and collected through the software by the users, such as text (SQL, JSON), numbers (SQL, JSON), images (JPEG, PNG), videos (WebM, MP4) and audio (MP3, WAV)_
◦ _regarding the platform models_
◦ _regarding the models of each product designed and published by residencies_
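
To make these formats concrete, the following minimal sketch shows how a user-entered content record combining the media types above might be serialized; all field names and file paths are hypothetical.

```python
import json

# Illustrative shape of a user-entered content record; field names and file
# paths are hypothetical, while the media formats are the ones listed above.
record = {
    "title": "Residency portfolio",
    "body": "Short presentation text entered by the user.",
    "images": ["portfolio/cover.jpg", "portfolio/studio.png"],
    "videos": ["media/teaser.webm", "media/teaser.mp4"],
    "audio": ["media/demo.mp3"],
}

print(json.dumps(record, indent=2))
```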
Will you re-use any existing data and how?
* _Public data uploaded by users, presenting their works for example, will be republished as is (B.x) or with slight modifications (C.1)._
What is the origin of the data?
* _A.x: project partners_
* _B.x and C.1: third parties registered to the platform_

What is the expected size of the data?
* _Very difficult to estimate; it could be dozens of gigabytes of user data by the end of the project._
To whom might it be useful ('data utility')?
* _To the STARTS community in general as data of common interest for the community_
* _To the project and the other STARTS projects as a support of their activity_
# FAIR data
## Making data findable, including provisions for metadata
Are the data produced and/or used in the project discoverable with metadata,
identifiable and locatable by means of a standard identification mechanism
(e.g. persistent and unique identifiers such as Digital Object Identifiers)?
* _Each data piece managed by the platform will have a unique URI._
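
As an illustration of how such unique URIs could be minted, the sketch below shows one possible scheme (hypothetical, not the platform’s actual implementation): a human-readable slug derived from the title, plus a short random suffix to guarantee uniqueness.

```python
import re
import uuid

BASE_URL = "https://starts.eu"  # the platform domain; the path scheme below is illustrative

def slugify(title: str) -> str:
    """Lowercase the title and collapse non-alphanumeric runs into hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

def unique_uri(content_type: str, title: str) -> str:
    """Build a unique URI for a platform resource; the random suffix keeps
    URIs unique even when two resources share the same title."""
    return f"{BASE_URL}/{content_type}/{slugify(title)}-{uuid.uuid4().hex[:8]}"

print(unique_uri("residencies", "Sound and Matter"))
```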
What naming conventions do you follow?
* _No convention: it will depend mostly on titles used for the contents._
Will search keywords be provided that optimize possibilities for re-use?
* _Yes, by providing contextual metadata in the HTML headers as well as keywords selected from a thesaurus for each resource._
Do you provide clear version numbers?
* _Yes, only for the software._
What metadata will be created? In case metadata standards do not exist in your
discipline, please outline what type of metadata will be created and how.
* _HTML header metadata_
* _Content metadata (dates, locations, etc.), which are generated from user operations (such as the date of a user process) or from user data (such as geolocation derived from a postal address)._
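
A minimal sketch of how the contextual HTML header metadata mentioned above might be rendered for a resource page follows; the field set is illustrative, and the production platform may emit more tags.

```python
from html import escape

def meta_headers(title, description, keywords):
    """Render contextual metadata for the HTML <head> of a resource page."""
    keyword_list = ", ".join(keywords)
    return "\n".join([
        f"<title>{escape(title)}</title>",
        f'<meta name="description" content="{escape(description)}">',
        f'<meta name="keywords" content="{escape(keyword_list)}">',
    ])

print(meta_headers(
    "Example residency",
    "Documentation of a STARTS residency outcome.",
    ["STARTS", "residency", "art-science"],
))
```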
## Making data openly accessible
Which data produced and/or used in the project will be made openly available
as the default? If certain datasets cannot be shared (or need to be shared
under restrictions), explain why, clearly separating legal and contractual
reasons from voluntary restrictions.
_As for data produced by the project (A.x), the project is a CSA and its own
data production is limited to the management of its specific support and
coordination processes: managing the STARTS community, managing residencies
calls and their execution, and communicating on its activity._
_As for data produced by external users or by their behaviour (B.x and C.x),
the related data production processes clearly specify which data are to be
publicly disclosed._
How will the data be made accessible (e.g. by deposition in a repository)?
* _Web (public) with secured access through SSL certificates_
* _DB and files backups on servers (private)_
What methods or software tools are needed to access the data?
* _Web browsers for public data_
* _Private and secured SSH sessions for private data_
Is documentation about the software needed to access the data included?
* _Not for public data: all resources are accessible through HTML5 standards_
* _Yes, for private data which can be restored in another instance of the platform software as explained in the Mezzo documentation_
Is it possible to include the relevant software (e.g. in open source code)?
* _Yes, Mezzo and all related dependencies are fully open source._
Where will the data and associated metadata, documentation and code be
deposited? Preference should be given to certified repositories which support
open access where possible.
* _Vertigo-Mezzo and Ulysses are to be made available on GitHub._
* _All data are stored on IRCAM’s servers physically located in its headquarters in France._
Have you explored appropriate arrangements with the identified repository?
* _IRCAM and all partners of the VERTIGO consortium have full management of all repositories and can allow any collaborator to access them._
If there are restrictions on use, how will access be provided?
This is also specified for each data production process. As a summary:
* _Public data: no restrictions, as any data published on the platform can be accessed_
* _Private data: only consortium members may read users’ data with private status, together with, for applications, the reviewers and jury members who have signed a non-disclosure agreement (B.4), and third-party call organisers committing to fulfil the VERTIGO STARTS Residencies Charter (B.5)._
Is there a need for a data access committee?
* _A priori no, the access rules having been defined and formalised in the project methodology. If unforeseen questions nevertheless arise, they will be handled by the Project Management Board._
Are there well described conditions for access (i.e. a machine-readable
license)?
* _For the platform software: the AGPL licence (https://fr.wikipedia.org/wiki/GNU_Affero_General_Public_License)_
* _For the user data accessible through the platform: any statement and law applicable to the project in the About/Privacy section and the disclaimer statements, including the one applicable to the project platform, given at the beginning of Part 3._
How will the identity of the person accessing the data be ascertained?
* _Any uploaded data is linked to a user registered in the platform and then in the database, which maps these access rights._
_Any downloaded data is linked to a cookie generated by the platform itself
and linked anonymously to the user session._
## Making data interoperable
Are the data produced in the project interoperable, that is allowing data
exchange and re-use between researchers, institutions, organisations,
countries, etc. (i.e. adhering to standards for formats, as much as possible
compliant with available (open) software applications, and in particular
facilitating re-combinations with different datasets from different origins)?
* _Web standards and formats are used for all standard data types (text, images, video, sound…)._
* _If specific data not conforming to these standards are produced and need special software or tools to be re-used (for instance, technical data inherent to the artworks resulting from artistic residencies), these data will be stored in their original format for further use._
What data and metadata vocabularies, standards or methodologies will you
follow to make your data interoperable?
* _Web standards as documented and published by the W3C_
Will you be using standard vocabularies for all data types present in your
data set, to allow interdisciplinary interoperability?
* _No_
In case it is unavoidable that you use uncommon or generate project specific
ontologies or vocabularies, will you provide mappings to more commonly used
ontologies?
* _No mapping is foreseen_
## Increase data re-use (through clarifying licences)
How will the data be licensed to permit the widest re-use possible?
* _Residency outcomes and contents are published by users under the terms of the licence they have chosen case by case. If a license is defined, a link to the license terms should be accessible through the platform. This holds for the public/private status of user-entered data (B.1-4 and B.6) and for third-party calls, which must state the intended use of private data (B.5)._
When will the data be made available for re-use? If an embargo is sought to
give time to publish or seek patents, specify why and how long this will
apply, bearing in mind that research data should be made available as soon as
possible.
* _Re-use as soon as the data are published online through the platform_
Are the data produced and/or used in the project useable by third parties, in
particular after the end of the project? If the re-use of some data is
restricted, explain why.
_Data already published within the project time frame can be re-used for
publication by third parties. As for data with private status, the goal is to
enable their reuse by further STARTS projects in case IRCAM is not one of the
partners and thus cannot continue operating the platform. However, since there
is no legal entity with a lifetime longer than the project (for instance, the
STARTS program) that could be mentioned in the licensing agreements, this will
require the concerned third parties to ask the registered users for permission
to manage the data in subsequent projects._
How long is it intended that the data remains re-usable?
* _As long as the platform is online._
Are data quality assurance processes described?
* _The platform provides some guidelines to facilitate data publication in the context of the Web_
* _Rejections of non-standard formats_
* _Reviews and moderation by different committees before any publication_

Further to the FAIR principles, DMPs should also address:
* _N/A_
# Allocation of resources
What are the costs for making data FAIR in your project?
* _The related costs concern:_
<table>
<tr>
<th>
◦
</th>
<th>
_Mostly: the related aspects of the development of the project web platform as
part of workpackage WP3, as well as the production of the data management
deliverables._
</th> </tr>
<tr>
<td>
◦
</td>
<td>
_The part of workpackage WP2 dedicated to the definition of the forms for
applications and applicable to Artists, Tech Projects and Producers and
distinguishing between public and private data;_
</td> </tr>
<tr>
<td>
◦
</td>
<td>
_The corresponding parts of the call formulations in WP4, as well as for the
production of the VERTIGO STARTS Residencies charter for third parties calls
and the production and management of co-production contracts for residencies._
</td> </tr>
<tr>
<td>
◦
</td>
<td>
_Elements of the web site editorial content including the IRCAM legal
disclaimer as part of WP5 and the management of existing data for the project
communication._
</td> </tr>
<tr>
<td>
◦
</td>
<td>
_The use of existing data for the project dissemination as part of WP6,
including the use of platform usage statistics._
</td> </tr> </table>
How will these be covered? Note that costs related to open access to research
data are eligible as part of the Horizon 2020 grant (if compliant with the
Grant Agreement conditions).
* _The related costs are to be covered as part of the EU Grant for the project._
Who will be responsible for data management in your project?
* _Hugues Vinet, project coordinator, assisted by Guillaume Pellerin, leader of the web platform development WP3 workpackage (IRCAM)._
Are the resources for long term preservation discussed (costs and potential
value, who decides and how what data will be kept and for how long)?
* _WP3 is in charge of the platform development and sustainability. The sustainability plan is to be finalised as part of deliverable D3.7 - Report on Web Platform Development, Usage Statistics and Sustainability Plan – Final._
* _In addition to the data managed through the platform, the data produced as part of the project are stored in the project’s private cloud and in other repositories managed by IRCAM, such as the project mailing lists. The project management board is in charge of specifying requests related to the long-term preservation of these data._
# Data security
What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?
* _SSL Encryption of all web content by specific certificates_
* _SSH encryption for access to the infrastructure by the system administrators, and external backups_
* _As defined as an initial constraint at the beginning of the VERTIGO project, the platform does not use any system or digital service hosted outside Europe for its administrative and platform management._
Is the data safely stored in certified repositories for long term preservation
and curation?
* _Not up to now. A potential platform for backing up data would be Huma-Num, a long-term preservation service provided by the CNRS in France, with external and international backups through dedicated scientific networks._
# Ethical aspects
Are there any ethical or legal issues that can have an impact on data sharing?
These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).
* _N.A._
Is informed consent for data sharing and long-term preservation included in
questionnaires dealing with personal data?
* _Yes_
# Other issues
Do you make use of other national/funder/sectorial/departmental procedures for
data management? If yes, which ones?
• _Procedures defined by the CNIL (Commission Nationale de l'Informatique et
des Libertés, France)_
# Further support in developing the DMP
The Research Data Alliance provides a Metadata Standards Directory that can be
searched for discipline-specific standards and associated tools.
The EUDAT B2SHARE tool includes a built-in license wizard that facilitates the
selection of an adequate license for research data.
Useful listings of repositories include:
* _Registry of Research Data Repositories_
* _Some repositories, like Zenodo (an OpenAIRE and CERN collaboration), allow researchers to deposit both publications and data, while providing tools to link them._
Other useful tools include DMP online and platforms for making individual
scientific observations available such as ScienceMatters.
# Executive Summary
The present document is a deliverable of the RECAP project, funded by the
European Commission’s Directorate-General for Research and Innovation (DG
CNECT), under its Horizon 2020 Innovation Action programme (H2020).
The deliverable presents the third and final version of the project Data
Management Plan (DMP). This final version lists the various datasets that have
been collected, processed or produced by the RECAP project and outlines the
main data sharing and management principles that have been followed.
Furthermore, it incorporates all critical changes, such as changes in
consortium policies and any external factors that had an impact on data
management within the project and might influence it even after the project’s
duration.
The deliverable is structured in the following chapters:
Chapter 1 includes an introduction to the deliverable.
Chapter 2 includes the description of the datasets along with the documented
changes and additional information.
# 1\. Introduction
The RECAP project aims to develop and pilot test a platform for the delivery
of public services that will enable improved implementation of the CAP,
targeting public Paying Agencies, agricultural consultants and farmers. The
RECAP platform will make use of large volumes of publicly available data
provided by satellite remote sensing, and of user-generated data provided by
farmers through mobile devices.
This deliverable, D1.10 “Data Management Plan (3)”, documents all updates to
the RECAP project data management life cycle for all datasets that have been
collected, processed and/or generated, and describes how the results will be
shared, including access procedures and preservation, according to the
guidelines in Horizon 2020 and the General Data Protection Regulation (GDPR).
Although the DMP is developed by DRAXIS, its implementation involves
contributions from all project partners. Since this is the final version of
the project Data Management Plan, all Work Packages are included, even though
some of them may not have undergone any changes.
# 2\. DMP Components in RECAP
## 2.1 DMP Components in WP1 – Project Management (DRAXIS)
<table>
<tr>
<th>
DMP Component
</th>
<th>
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
</td>
<td>
</td>
<td>
Contact details of project partners and advisory board
Databases containing all the necessary information regarding the project
partners and Advisory Board members.
The project partners data are stored in a simple table in the RECAP wiki, with
the following fields:
Name
Email
Phone
Skype id
The advisory board members data is described by the following fields:
Name
Description
Affiliation
Organisation
Country
Proposed by
Furthermore, interviews have been conducted with the Advisory Board members
and webinars have been held in order to inform them about the project status
and progress. Most interviews and webinars have been conducted remotely,
either using Skype or WebEx.
The expected size of the data is not applicable, as size is not a
meaningful measure here. In total, we have conducted 9 interviews and 2
webinars.
Moreover, 2 consortium meetings have been conducted remotely in order to
discuss the project progress and address any important issues. Work Package
leaders have sent input on how they handle the data produced during the
project.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The data regarding the interviews, webinars and consortium meetings are
stored on the DRAXIS server and are not directly accessible from outside.
Moreover, these data cannot be made available to third parties. However, the
interviews are available in D1.2 Report on Advisory Board meetings (1), D1.7
Report on Advisory Board meetings (2), D1.8 Report on Advisory Board meetings
(3) and D1.9 Report on Advisory Board meetings (4). The dissemination level of
these deliverables is public and they are available on the project’s website
and Wiki and in Zenodo through the following Digital Object Identifiers
(DOIs):
D1.2 Report on Advisory Board meetings (1): DOI:
_https://doi.org/10.5281/zenodo.1442621_
D1.7 Report on Advisory Board meetings (2): DOI:
_https://doi.org/10.5281/zenodo.1442637_
D1.8 Report on Advisory Board meetings (3): DOI:
_https://doi.org/10.5281/zenodo.1442640_
D1.9 Report on Advisory Board meetings (4): DOI:
_https://doi.org/10.5281/zenodo.1476012_
The naming convention used is: Data_WP1_1_Advisory Board.
Regarding the input for the DMP, the data are also stored on the DRAXIS
server and are not directly accessible from outside. These data are presented
in the respective deliverables, which are publicly available either through
the project website and Wiki or through Zenodo with the following DOIs:
D1.3 Data Management Plan (1): DOI:
_https://doi.org/10.5281/zenodo.1442627_
D1.5 Data Management Plan (2): DOI:
_https://doi.org/10.5281/zenodo.1442633_
The naming convention used is: Data_WP1_2_Data Management Plan.
As part of any stored data, metadata were generated, which include sufficient
information with appropriate keywords to help external and internal users
locate data and related information.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The raw datasets are not publicly available.
However, all the data are made publicly available as part of the
aforementioned deliverables, through the RECAP wiki, the RECAP website and
Zenodo.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Data are publicly available as part of the aforementioned deliverables and can
be accessed and re-used by third parties indefinitely without a license.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data have been collected for internal use in the project and are not
intended for long-term preservation. No personal information will be kept
after the end of the project. Furthermore, DRAXIS pays special attention to
security and respects the privacy and confidentiality of users’ personal data
by fully complying with the applicable national, European and international
framework, and the European Union’s General Data Protection Regulation
2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
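
Because every public deliverable is archived in Zenodo under a DOI, its citation metadata can also be retrieved programmatically. The following minimal sketch assumes the standard DOI content negotiation offered by doi.org for DataCite records (the DOI used is the one listed above for D1.2):

```python
import json
import urllib.request

# DOI taken from the table above; content negotiation via doi.org is a
# standard DataCite feature and is assumed to be available for Zenodo records.
DOI = "10.5281/zenodo.1442621"  # D1.2 Report on Advisory Board meetings (1)

request = urllib.request.Request(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(request) as response:
    record = json.load(response)

# Print a couple of the returned citation fields.
print(record.get("title"))
print(record.get("publisher"))
```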
## 2.2 DMP Components in WP2 – Users’ needs analysis & coproduction of services (UREAD)
<table>
<tr>
<th>
Data Summary
</th>
<th>
</th>
<th>
</th>
<th>
The purpose of the data collection is the generation of user needs for scoping
the initial requirements (Deliverable 2.2) and for the coproduction phase
(Deliverable 2.4); where applicable, results are also used to produce
peer-reviewed papers.
Collating data from end users is an integral part of the RECAP project:
co-production of the final product helps ensure that a useful product is
created.
Questionnaire data (including written responses (.docx and .xlsx) and
recordings (.mp3)) comprise the majority of the data.
The data originate from the Paying Agency partners in the RECAP project,
farmers in the partner countries, and agricultural consultants and
accreditation bodies in the partner countries.
Written responses are likely to be fairly small in size (less than 1 GB over
the course of the project). Recordings are larger files, likely 10-20 GB over
the course of the project.
The data are essential for the technical team to develop the RECAP platform;
other partner teams throughout the project, as well as the wider research
community once results are published, will also benefit.
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The data are stored on the University of Reading servers and labelled with the
work package, country of origin and type of data. As they contain
confidential and personal data, the raw data will not be made available
externally, but anonymized data can be made available upon request and after
an evaluation of the request (i.e. its purpose, goals, etc.).
The data are available to the public through the D2.4 Report on coproduction
of services, either through the project website and Wiki or through Zenodo
with the following DOI: _https://doi.org/10.5281/zenodo.1744847_. The naming
conventions used are:
Data_WP2_1_User requirements Data
Data_WP2_1_UK_User requirements Data
As part of any stored data, metadata were generated, which include sufficient
information to link them to the research publications/outputs, to identify
the funder and discipline of the research, and appropriate keywords to help
external and internal users locate data.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The data will be kept closed until the end of the project because they contain
personal data and therefore cannot legally be made public. Anonymized and
summarised data will be made available in any public deliverable or through
any other relevant publications.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Any data published in papers will be immediately available for meta-analysis.
However, it is not legal to release personal data such as the questionnaire
responses.
Raw data contain personal data and cannot legally be made available.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
Data quality is assured by asking partners to fill out paper questionnaires in
their own languages. These are then translated and stored in spreadsheets.
Separately, the interviews are recorded, translated and transcribed. This
ensures accurate data recording and translation.
</th> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
The cost of publishing papers in open access format is the key cost in this
part of the project. During the project, money from the RECAP budget will be
used to cover journal fees (approximately £1,000 per paper). Papers are likely
to be published after the completion of the project; in that case, the
university has a fund to which we can apply in order to cover the costs of
open access publishing. The data are stored on University of Reading servers.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The University of Reading servers are managed by the university IT services
and are regularly backed up and secure. Data will be kept for 6 years after
the end of the project. Furthermore, the university pays special attention to
security and respects the privacy and confidentiality of users’ personal data
by fully complying with the applicable national, European and international
framework, and the European Union’s General Data Protection Regulation
2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
## 2.3 DMP Components in WP3 – Service integration and customisation (DRAXIS – NOA)
### 2.3.1 System Architecture
<table>
<tr>
<th>
DMP Component
</th>
<th>
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
</td>
<td>
Functional and non-functional aspects, technical capabilities, components
descriptions and dependencies, API descriptions, information flow diagrams,
internal and external interfaces, software and hardware requirements and
testing procedures related data specified and validated among the RECAP
technical and pilot partners.
Technical requirements reports have been created in order to describe the
aforementioned procedures and requirements for all the pilots. These reports
were the basis upon which the system has been developed and modified.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The reports are stored on the DRAXIS server and are not directly accessible
from outside. Moreover, these data cannot be made available to third parties.
However, they are both discoverable and accessible to the public through
D3.1 RECAP System Architecture. The deliverable contains a table stating all
versions of the document, along with who contributed to each version, what
the changes were, and the date each new version was created. Moreover, the
deliverable is publicly available either through the project website and Wiki
or through Zenodo with the following DOI:
_https://doi.org/10.5281/zenodo.1442649_.
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
The naming convention used is: Data_WP3_1_System Architecture Data. As part of
any stored data, metadata are generated, which include sufficient information
with appropriate keywords to help external and internal users to locate data.
</th> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
All data are made publicly available as part of the D3.1: System architecture
and through Zenodo.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Data are publicly available as part of the D3.1: System Architecture and can
be accessed and re-used by third parties indefinitely without a license.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data have been collected for internal use in the project and are not
intended for long-term preservation. Furthermore, DRAXIS fully complies with
the applicable national, European and international framework, and the
European Union's General Data Protection Regulation 2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.2 RECAP Platform
<table>
<tr>
<th>
DMP Component
</th>
<th>
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
</td>
<td>
</td>
<td>
Various data like users’ personal information, farm information, farm logs,
reports and shapefiles containing farm location have been generated via the
platform. All of these data are useful for the self-assessment process and the
creation of meaningful tasks for the farmers. The data described above are
saved in the RECAP central database.
All user actions (login, logout, account creation, visits on specific parts of
the app) are logged and kept in the form of a text file. This log is useful
for debugging purposes.
Reports containing information on user devices (browsers and mobile phones)
as well as the number of mobile downloads (taken from the Play Store for
Android downloads and the App Store for iOS downloads) are useful for
marketing and exploitation purposes, as well as for decisions regarding the
supported browsers and operating systems.
Furthermore, inspection results have been generated by the inspectors through
the system. The inspection results are available through the farmer’s
electronic record and are saved in the RECAP central database. Inspectors are
able to discover all inspection results, whereas farmers are only able to
discover results of their farms. The administrator of the app is able to
discover all the inspection results generated by the platform.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The data are not directly accessible from outside. These data cannot be made
available to third parties. However, the data are available to the public
through the deliverables D3.3 Software components development, D3.4 1st
version of product backlog and development report and D3.5 Final version of
revised product backlog and development report.
The dissemination level of these deliverables is public and they are available
in the project’s website and Wiki and in Zenodo through the Digital Object
Identifier (DOI):
D3.3 Software components development: DOI:
_https://doi.org/10.5281/zenodo.1442655_
D3.4 1st version of product backlog and development report: DOI:
_https://doi.org/10.5281/zenodo.1442659_
D3.5 Final version of revised product backlog and development report: DOI:
_https://doi.org/10.5281/zenodo.1475999_
The naming convention used is: Data_WP3_2_RECAP platform Data. Every action on
the platform will produce meaningful metadata such as time and date of data
creation or data amendments and owners of actions that took place as well as
associated farmer, inspector and inspection type will be saved along with the
inspection results to enhance the discoverability of the results. However,
only the administrator of the platform will be able to discover all the data
generated by it.
The database is not discoverable to other network machines operating on the
same LAN, VLAN with the DB server or other networks. Therefore, only users
with access to the server (RECAP technical team members) are able to discover
the database.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only registered users and administrators have access to the data. The data
produced by the platform are personal data and cannot be shared with others
without the user’s permission. No open data will be created as part of RECAP.
The database will only be accessible by the authorized technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
RECAP will be integrated with third party applications, currently being used
by the local governments, in order to re-use information already inserted in
those systems.
Moreover, the content and data are available in the pilot languages
(English, Greek, Lithuanian, Spanish and Serbian).
The raw data are not publicly available.
However, the RECAP platform itself is open source, offered under the GNU
General Public License version 3, and accessible through Zenodo
with the DOI: _https://doi.org/10.5281/zenodo.1451796_ .
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All platform-generated data have been saved on the RECAP database server.
Encryption has been used to protect personal user data such as emails and
passwords (see the sketch after this table). All data are transferred via SSL
connections to ensure secure exchange of information.
If updates are needed, the old data are overwritten, all actions are
audited in detail, and a log containing the changed text is kept for
security reasons. The system is backed up weekly and the backups are kept for
3 days. All backups are hosted on a remote server to avoid disaster scenarios.
All servers are hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting. Finally, IP restriction enforces the secure storage of
data.
DRAXIS pays special attention to security and respects the privacy and
confidentiality of the users' personal data by fully complying with the
applicable national, European and international framework, and the European
Union's General Data Protection Regulation 2016/679. Moreover, "Personal Data
Protection Policy " and "Terms and Conditions" have been included in the RECAP
platform, in order to inform the users of how RECAP collects, processes,
discloses and protects the incoming information.
The RECAP platform will not keep personal data or other information after the
end of the action on 31-10-2018.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All farmer generated data will be protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
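The table above states that stored passwords are protected by encryption but
does not specify the mechanism. Purely as an illustration, a minimal sketch of
one standard approach follows (salted key derivation with PBKDF2 from the
Python standard library); the actual RECAP implementation may well differ.

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a salted hash so the raw password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("farmer-secret")
assert verify_password("farmer-secret", salt, digest)
```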
### 2.3.3 Software Development Kit (SDK)
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Various data like users’ personal information, farm information, farm logs,
reports and shapefiles containing farm location have been generated via the
platform. All of these data are useful for the agricultural consultants or
even the Paying Agencies to create added value services on the top of the
RECAP platform.
The SDK tool was developed based on the user requirements identified and
collected through questionnaires.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The data collected from the questionnaires are not directly accessible from
outside and are stored on the University of Reading servers. These data cannot
be made available to third parties. However, the data are available to the
public through the deliverables D3.3 Software components development, D3.4 1st
version of product backlog and development report, D3.5 Final version of
revised product backlog and development report and D2.4 Report on co-
production of services. The dissemination level of these deliverables is
public and they are available in the project’s website and Wiki and in Zenodo
through the Digital Object Identifier (DOI):
D3.3 Software components development: DOI:
_https://doi.org/10.5281/zenodo.1442655_
D3.4 1st version of product backlog and development report: DOI:
_https://doi.org/10.5281/zenodo.1442659_
D3.5 Final version of revised product backlog and development report: DOI:
_https://doi.org/10.5281/zenodo.1475999_
D2.4 Report on co-production of services: DOI:
_https://doi.org/10.5281/zenodo.1744847_
The naming convention used is: Data_WP3_3_RECAP SDK tool Data.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only registered users (agricultural consultants-developers) are able to use
the RECAP SDK tool.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Through the SDK, users are able to re-use the RECAP data and generate added
value services for themselves and their clients. The SDK has been developed in
a common and user-friendly programming language, PHP.
The RECAP SDK tool is open source, offered under the GNU General Public
License version 3, and accessible through Zenodo with the DOI:
_https://doi.org/10.5281/zenodo.1475193_ .
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
DRAXIS pays special attention to security and respects the privacy and
confidentiality of the users' personal data by fully complying with the
applicable national, European and international framework, and the European
Union's General Data Protection Regulation 2016/679. The RECAP platform will
not keep personal data or other information after the end of the action on
31-10-2018.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.4 User uploaded photos
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
RECAP users are able to upload photos from a farm. These photos are
timestamped and geolocated and are saved in the RECAP database. The purpose of
the images is to demonstrate compliance or non-compliance. The most common
file type expected is jpg.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Metadata related to the location and the time of the taken photo as well as a
name, description and tag for the photo are saved. These metadata help the
discoverability of the photos within the platform. Farmers are able to
discover photos related to their farms (uploaded either by them or the
inspectors) and Paying Agencies are able to discover all photos to which they
have been granted access (see the sketch after this table).
The images folder is not discoverable by systems or persons in the same or
other servers in the same LAN/VLAN as the storage/database server.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Only if the farmer allows it may some photos be used openly within the RECAP
platform as good-practice examples. Otherwise, the photos are accessible only
by the relevant RECAP users.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Photos are saved in jpeg format.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Farmers are able to download photos and use them in any way they want.
Inspectors and paying agencies have limited abilities of reusing the data,
depending on the access level given by the farmer.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
User generated photos are saved on the RECAP server. SSL connections are
established so that all data are transferred securely.
If updates are needed, the old data are overwritten, all actions are audited
in detail, and a log containing the changed text is kept for security
reasons. The system is backed up weekly and the backups are kept for 3 days.
All backups are hosted on a remote server to avoid disaster scenarios.
All servers are hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting. Finally, IP restriction enforces the secure storage of
data.
DRAXIS pays special attention to security and respects the privacy and
confidentiality of the users' personal data by fully complying with the
applicable national, European and international framework, and the European
Union's General Data Protection Regulation 2016/679. Moreover, "Personal Data
Protection Policy " and "Terms and Conditions" have been included in the RECAP
platform, in order to inform the users of how RECAP collects, processes,
discloses and protects the incoming information.
The RECAP platform will not keep uploaded photos after the end of the action
on 31-10-2018.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
All user generated data are protected and will not be shared without the
farmer’s consent.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
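As an illustration of the photo metadata described above (name, description,
tag, location and time saved with each upload), the sketch below builds a JSON
sidecar record; the field names are hypothetical, since the RECAP database
schema is not published in this DMP.

```python
import json
from datetime import datetime, timezone

def photo_metadata(name, description, tag, lat, lon):
    """Assemble the record stored alongside an uploaded farm photo."""
    return {
        "name": name,
        "description": description,
        "tag": tag,
        "location": {"lat": lat, "lon": lon},   # geolocation of the shot
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
        "format": "jpeg",
    }

record = photo_metadata("field-3-hedge", "Buffer strip along watercourse",
                        "cross-compliance", 39.36, 22.94)
with open("photo_1234.meta.json", "w", encoding="utf-8") as fh:
    json.dump(record, fh, indent=2)
```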
### 2.3.5 E-learning material
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
As part of RECAP videos and presentations have been created in order to
educate farmers and inspectors on the current best practices. Some of them are
available for the users to view whenever they want and some other will be
available only via live webinars.
</td> </tr>
<tr>
<th>
Making data findable, including provisions for metadata
</th>
<th>
Metadata such as video format, duration, size, time of views, number of
participants for live webinars will be saved along with the videos and the
presentations in order to enhance the discoverability of the results. All
registered users are able to discover the e-learning material via a dedicated
area that lists all the available sources.
The database and the storage area are not discoverable to other network
machines operating on the same LAN, VLAN with the DB server or other networks.
Therefore, only users with access to the server (RECAP technical team members)
are able to discover the database and the storage area.
</th> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
The e-learning material is only accessible through the RECAP platform. All
RECAP users have access to that material.
The database is only accessible by the authorized technical team.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The e-learning material is mainly created by the paying agencies and there is
a possibility to re-use existing material from other similar systems.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
Videos and PowerPoint presentations are saved on the RECAP database server.
All data are transferred via SSL connections to ensure secure exchange of
information.
The system is weekly backed up and the back-ups are kept for 3 days. All
backups are hosted on a remote server to avoid disaster scenarios. DRAXIS pays
special attention to security and respects the privacy and confidentiality of
the users' personal data by fully complying with the applicable national,
European and international framework, and the European Union's General Data
Protection Regulation 2016/679. Moreover, "Personal Data Protection Policy "
and "Terms and Conditions" have been included in the RECAP platform, in order
to inform the users of how RECAP collects, processes, discloses and protects
the incoming information.
The RECAP platform will not keep e-learning material after the end of the
action on 31-10-2018.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
### 2.3.6 CC laws and rules
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Cross-compliance laws and inspection lists with checkpoints are used both by
the inspectors during the inspections and by the farmers to perform a form of
self-assessment. The lists have been given to the technical team by the
Paying Agencies in various formats (Excel, Word) and have been transformed
into electronic form (see the sketch after this table).
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The datasets are not available to the public but only to the RECAP consortium.
However, all registered users have access to the laws and the inspection
checklists via the RECAP platform.
The naming convention used is: Data_WP3_4_RECAP CC rules Data. Metadata
related to the different versions of the checklists and the newest updates of
the laws, along with dates and times, will also be saved.
These metadata facilitate discovery of the most up-to-date content.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All content related to CC laws and inspections are securely saved on the RECAP
database server. All data are transferred via SSL connections to ensure secure
exchange of information.
The system is weekly backed up and the backups are kept for 3 days. All
backups are hosted on a remote server to avoid disaster scenarios. DRAXIS pays
special attention to security and respects the privacy and confidentiality of
the users' personal data by fully complying with the applicable national,
European and international framework, and the European Union's General Data
Protection Regulation 2016/679. Moreover, "Personal Data Protection Policy "
and "Terms and Conditions" have been included in the RECAP platform, in order
to inform the users of how RECAP collects, processes, discloses and protects
the incoming information.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
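To illustrate the transformation of the checklists into electronic form, a
minimal sketch follows, assuming a checklist exported to CSV with hypothetical
column names; the actual conversion performed by the technical team is not
documented here.

```python
import csv
import json

# Hypothetical CSV columns: "checkpoint", "law_reference", "applies_to".
checkpoints = []
with open("cc_checklist.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        checkpoints.append({
            "checkpoint": row["checkpoint"],
            "law_reference": row["law_reference"],
            "applies_to": row["applies_to"],
        })

# Electronic form of the checklist, ready to be loaded by the platform.
with open("Data_WP3_4_cc_rules.json", "w", encoding="utf-8") as fh:
    json.dump(checkpoints, fh, indent=2, ensure_ascii=False)
```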
### 2.3.7 Information extraction and modeling from remotely sensed data
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Collection of Very High Resolution (VHR) satellite imagery and farmer
declarations. Generation of satellite based spectral indices and remote
sensing classification products. Generation of soil loss estimation products
based on the Revised Universal Soil Loss Equation (RUSLE) using Rainfall
erosivity (R-factor), Soil Erodibility (K-factor), Topography (LS-factor), Cover
Management (C-factor) and Support Practices (P-factor) data.
All data sets were used to establish a mechanism for detecting breaches of
cross-compliance and to introduce the concept of smart sampling of the fields
to be inspected. The products were used in the pilot implementation.
Processing of open and commercial satellite data for monitoring CAP
implementation is at the core of RECAP.
Data are available in raster and vector format, accessible through a MapServer
application on top of a PostGIS database.
Historical, Landsat-based spectral indices have been used to assist a
time-series analysis at the preliminary research phase of the development.
Sentinel-2 data were used exclusively for the output remote sensing products
delivered to the RECAP platform.
The origin of the data was USGS for Landsat ( _http://glovis.usgs.gov/_ )
and ESA for Sentinel, delivered through the Hellenic National Sentinel Data
Mirror Site ( _http://sentinels.space.noa.gr/_ ) and the Copernicus Open
Access Hub ( _https://scihub.copernicus.eu/dhus/#/home_ ) . Farmers’
declarations, along with access to the Land Parcel Identification System
(LPIS), and VHR imagery have been provided by the Paying Agencies that
participate in the project. VHR imagery was used in the preliminary research
phase of the RS component development.
Sentinel-2 and Landsat-8 images are around 1 GB each, compressed.
For 5 pilot cases, and a need to have at least one image per month on a yearly
basis, with cloud cover percentage under the required threshold, we end up
with imagery amounting to at least 12 GB and at most 200 GB per pilot case.
Indices and classification products account for an additional 90% of generated
data for each pilot. VHR imagery is of the order of 20 GB in total. Vector data
are a few MBs in size.
Data and products are useful for the Paying Agencies, the farmers themselves
and the farmer consultants. They are ingested to the RECAP platform and
disseminated to project stakeholders, while their usefulness was demonstrated
during the pilot cases. VHR satellite data were not redistributed, and a
relevant agreement has been signed to ensure that these data are used only for
the development and demonstration activities of RECAP.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Data (rasters) are stored on the National Observatory of Athens servers and
labeled with the area-of-interest id, timestamp and type of data. MapServer and
PostGIS provide a built-in keyword search tool that is used.
The image data and the processed products are available to all stakeholders
through a PostGIS database. Registered users have unlimited access to the
products for the duration of the project, with the exception of the
VHR satellite data and farmers’ declarations.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Spectral Indices and EO-based classification objects are made available
through the RECAP platform. Commercial VHR satellite imagery that was used in
the context of the pilots was restricted due to the associated
restrictions of the satellite data vendor and the Joint Research Center (JRC).
Farmers’ declarations are considered to be personal data and hence will not be
open for reuse.
Data and products are made accessible through an API on top of a PostgreSQL
database.
No special software is needed. A user can create scripts to access and query
the database and retrieve relevant datasets (see the sketch after this table).
The data and associated metadata are deposited on NOA’s servers.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
PostGIS and MapServer are widely used tools for managing geospatial
information. No standard vocabulary will be used and no ontology mapping is
foreseen.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
The EO-based geospatial products that have been generated in RECAP are made
available for re-use for the project’s lifetime and beyond, and will remain
re-usable for at least two years after the project’s conclusion. No particular
data quality assurance process is followed, and no relevant warranties will be
provided.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Costs for maintaining a database of the EO-based products that will be
generated to serve the pilot demonstrations are negligible. Fees have been
paid for the publication _https://doi.org/10.5281/zenodo.2161483_ .
Data are stored on NOA’s servers.
The cost of long-term preservation of the products generated for the pilots is
minimal. However, if this is to scale up and go beyond the demonstration
phase, then making data FAIR will incur significant costs. Generating FAIR
spectral indices and EO-based classification products for large geographical
regions with frequent updates has a potential for cross-fertilization of
different fields (e.g. precision farming, CAP compliance, environmental
monitoring, disaster management, etc.).
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
NOA servers are managed by the IT department. They are regularly backed up and
secure. NOA fully complies with the applicable national, European and
international framework, and the European Union's General Data Protection
Regulation 2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
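As mentioned in the table, users can write their own scripts against the
product database. A minimal sketch follows, assuming the psycopg2 PostgreSQL
driver; the host, credentials, table and column names are hypothetical, since
the NOA schema is not published in this DMP.

```python
import psycopg2  # common PostgreSQL driver; any client would do

# Hypothetical connection details and schema, for illustration only.
conn = psycopg2.connect(host="db.example.noa.gr", dbname="recap",
                        user="reader", password="...")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT product_id, acquired_on, product_type
        FROM eo_products
        WHERE aoi_id = %s AND acquired_on BETWEEN %s AND %s
        ORDER BY acquired_on
        """,
        ("pilot-gr-01", "2017-01-01", "2017-12-31"),
    )
    for product_id, acquired_on, product_type in cur.fetchall():
        print(product_id, acquired_on, product_type)
```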
### 2.3.8 Maps
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
The following maps have been provided by the pilot countries and are used by
the RECAP platform in the form of map layers:
* Habitat
* Natura sites
* Nitrate Vulnerable Zones
* Botanical Heritage Sites
* Watercourse maps
* Slope map (or DEM)
* Administrative boundaries and settlements
* Land Use / Land Cover Maps, as detailed as possible
* ILOT and sub-ILOT
* LPIS (WMS or SHP)
These maps are needed because useful information regarding compliance with the
rules is derived from them. None of the maps are produced as part of this
project; as explained, they have been provided to the technical team by the
pilots and are reused. The formats of the maps differ: indicative formats for
vectors are ESRI Shapefile, GeoJSON, GML, etc., and for rasters GeoTIFF.
Similarly, the size varies a lot, from 1 KB to 10 GB.
Vector data are stored in a PostGIS database and raster data in the file
system, and both are served to the RECAP platform through GeoServer (see the
sketch after this table).
</th> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
All registered users have access to the above maps. The users are able to
identify the maps by their distinctive name.
The naming convention used is: Data_WP3_5_RECAP Maps Data.
Metadata are generated related to the different versions of the maps.
Metadata help the easy discoverability of the most up to date content.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
Maps are saved in standard formats that are commonly used through OGC
services.
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP3 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All maps are saved on the RECAP server. SSL connections are established so
that all data are transferred securely.
If updates are needed, the old data are overwritten, all actions are audited
in detail, and a log containing the changed text is kept for security
reasons. The system is backed up weekly and the backups are kept for 3 days.
All backups are hosted on a remote server to avoid disaster scenarios.
All servers are hosted behind firewalls inspecting all incoming requests
against known vulnerabilities such as SQL injection, cookie tampering and
cross-site scripting. Finally, IP restriction enforces the secure storage of
data.
DRAXIS pays special attention to security and respects the privacy and
confidentiality of the users' personal data by fully complying with the
applicable national, European and international framework, and the European
Union's General Data Protection Regulation 2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
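Since the map layers are served through GeoServer, a client can fetch them
with a standard OGC WMS GetMap request. The sketch below uses only the Python
standard library; the server URL and layer name are hypothetical, as the RECAP
server is internal.

```python
from urllib.parse import urlencode
from urllib.request import urlretrieve

# Hypothetical GeoServer endpoint and layer, for illustration only.
base = "https://geoserver.example.org/geoserver/recap/wms"
params = urlencode({
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "recap:nitrate_vulnerable_zones",
    "bbox": "22.0,39.0,23.0,40.0",   # minx,miny,maxx,maxy in EPSG:4326
    "srs": "EPSG:4326",
    "width": 512,
    "height": 512,
    "format": "image/png",
})
urlretrieve(f"{base}?{params}", "nvz.png")  # save the rendered map layer
```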
### 2.3.9 Examples of BPS applications
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Examples of BPS applications submitted in previous years have been shared with
the technical team. As part of the user journey, the farmers have to enter
details similar to the ones they entered in the BPS application; hence the
use of such data allowed the effective design of the database and provided
training material for the classifiers of the Remote Sensing component.
The data have been delivered in Excel sheets by all pilots (see the sketch
after this table).
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
Only the technical team have access to these data and they have not been used
on the RECAP platform.
The naming convention used is: Data_WP3_6_BPS Examples Data.
No metadata will be produced.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
All data are securely saved on DRAXIS's and NOA's premises. Both DRAXIS and
NOA pay special attention to security and respect the privacy and
confidentiality of the users' personal data by fully complying with the
applicable national, European and international framework, and the European
Union's General Data Protection Regulation 2016/679. Furthermore, the
technical team has signed three Confidentiality Agreements with the Greek
Paying Agency in order to use these data:
ID: 16211, Date: 17/02/2017
ID: 28222, Date: 24/03/2017
ID: 53535, Date: 14/06/2017
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
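As a sketch of how such Excel sheets might be ingested as classifier training
material, assuming the pandas library and hypothetical column names (the real
BPS sheets differ per pilot country):

```python
import pandas as pd

# Column names are hypothetical; actual BPS sheets vary per pilot country.
bps = pd.read_excel("bps_applications_2016.xlsx")

# Keep only the fields needed to train the remote-sensing classifiers and
# drop any column that could identify the applicant.
training = bps[["parcel_id", "declared_crop", "area_ha"]].dropna()
training.to_csv("Data_WP3_6_training.csv", index=False)
```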
## 2.4 DMP Components in WP4 – Deployment and operation (INI)
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
The main purpose of the data collection within WP4 is to i) monitor the
effective implementation of the pilots; and ii) evaluate the RECAP
Platform.
WP4 data collection addresses the main objectives of the project since it
allows the evaluation of the RECAP Platform in the 5 participating territories
(Greece, Spain, Lithuania, UK and Serbia) and takes into
account the different end-users groups (e.g. Farmers/ Agricultural
Consultants, Inspectors, Certification Bodies, Paying Agencies).
WP4 data collection is mainly made through the following documents:
* WP4 Monitoring Sheet (Excel) and Pilot Implementation Report (Word) for
monitoring the implementation of the Pilots. These documents are filled out by
the 5 Pilot Teams;
* Evaluation Questionnaire (Google Forms or Excel) for collecting feedback
from the Pilot participants as users of the RECAP Platform. The Evaluation
Questionnaire includes a Privacy Notice and is filled out by the Pilot
participants (users of the RECAP Platform). Data collected through the
Evaluation Questionnaire are exclusively for analytical and statistical
purposes and will not be re-used.
As a result, the origin of WP4 data is mainly from:
* Partners of the project;
* Pilot participants (Farmers/ Agricultural Consultants, Inspectors,
Certification Bodies, Paying Agencies).
WP4 data collection is only used for the evaluation of the RECAP Platform, and
the definition of potential recommendations for its improvement.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The raw data collected in WP4 are not made publicly available as it includes
confidential and personal data.
Once treated and anonymized, the results of the implementation and the
evaluation of the 5 Pilots conducted in WP4 are made public in D4.3
Intermediate Evaluation and Adaptation Report, D4.4 Final Evaluation Report
and D4.5 Report on procedures followed and lessons learnt.
The dissemination level of these deliverables is public and they are available
in the project’s website and Wiki and in Zenodo through the Digital Object
Identifier (DOI):
D4.3 Intermediate Evaluation and Adaptation Report: DOI:
_https://doi.org/10.5281/zenodo.1442676_
D4.4 Final Evaluation Report: DOI: _https://doi.org/10.5281/zenodo.1744861_
D4.5 Report on procedures followed and lessons learnt: DOI:
_https://doi.org/10.5281/zenodo.1885901_
Data are stored on INI’s servers and labelled with the task name, country of
origin and the type of data. Data will be searchable by country, task name and
data type.
The naming convention used is: Data_WP4_1_Intermediate Pilot
Evaluation_<Country> Data
As part of any stored data, metadata were generated, which include sufficient
information:
* to link it to the research publications/ outputs,
* to identify the funder and discipline of the research, and with appropriate
keywords to help external and internal users to locate data.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
All raw data collected in WP4 are for internal use within the project
consortium, as the objective of WP4 is to validate the RECAP platform
developed in WP3. As raw data contain personal data, the databases are not
publicly available.
Data will be stored on INI’s servers.
Raw data will be treated in order to produce D4.3, D4.4 and D4.5, which are
public deliverables and are accessible through Zenodo.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Collection and generation of WP4 data started in the fall of 2017, and all
the specifications and periods of use and re-use have been established in
deliverable D4.1 Pilot Plan, which is publicly accessible through
Zenodo with the DOI: _https://doi.org/10.5281/zenodo.1442670_ . Data quality
has been assured by asking partners to fill out the evaluation questionnaire
in their own languages. The feedback collected has been translated into
English in order to ensure accurate data collection and analysis.
Data collected through the Evaluation Questionnaire are exclusively for
analytical and statistical purposes and will not be re-used. Once treated and
anonymized, the results of the implementation and the evaluation of the 5
Pilots conducted in WP4 are made public in D4.3, D4.4 and D4.5 (see the
anonymisation sketch after this table).
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP4 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
The data are collected for internal use in the project, and not intended for
long-term preservation. WP4 leader (INI) keeps two daily incremental backups,
one on a separate disk and another one on a remote server within Spain. For
the purpose of the evaluation, the following personal data were collected,
through the Evaluation Questionnaire:
* Pilot Country
* End user profile
* Email
* Age
* Education
* Name and Surname
* Home or Farm Address
* Phone number
* Social accounts links
* CAP claimant identification number
* Holding No
* Location of the parcels
For abovementioned personal data all GDPR principles were followed and
performed with respective actions by the pilot partners:
1. Lawfulness, fairness and transparency - All data collection practices during the project are not breaking the law. Personal data are collected in a fair way in relation to the data subject. Nothing is hidden from data subjects and reasons for collection were clearly stated and well explained to every data subject.
2. Purpose limitation – Purpose of collection is not only clearly stated, yet the collected data will be stored only until such purpose is completed. In addition, there was no processing of the data for the archiving purposes in the public interest or for scientific, historical or statistical purposes.
3. Data minimisation – Collected personal data are minimised as much as possible to achieve the purpose of the project.
4. Accuracy - Inaccurate or incomplete data were erased or rectified.
5. Storage limitation – All personal data collected during the project will be deleted after the project (when it is no longer necessary).
6. Integrity and confidentiality (security) – All personal data related to a data subject are stored and processed in a manner that ensures their appropriate security.
7. Accountability – All partners integrate all appropriate technical and organisational measures within the company to secure the overall effectiveness, compliance with the law, etc.
All the involved parties in the questionnaire collection and pilot
implementation fully comply with the applicable national, European and
international framework, and the European Union's General Data Protection
Regulation 2016/679. Specifically,
INI abides by the Spanish regulation in terms of protection of personal data
(Ley Orgánica 15/1999 de 13 de diciembre and Real Decreto 1720/2007 de 21 de
diciembre) and is controlled each year by an Auditor regarding the Policy of
Data Protection and receive compliance Certificate.
INO, responsible for the Serbian pilot, is compliant with the regulations of
respected Serbian law (Zakon o zaštiti podataka o ličnosti -Sl. glasnik RS",
br. 97/2008, 104/2009 - dr. zakon, 68/2012 - odluka US i 107/2012) as well as
with the regulations of the reformed EU General Data Protection Regulation
(GDPR).
Strutt & Parker, responsible for the UK pilot, have a standard approach to
GDPR across the BNP group and they also have a Chief Data Officer.
OPEKEPE, responsible for the Greek pilot, handled the data based on the ISO
27001:2013 Information technology- Security techniques- Information security
management systems- Requirements and following GDPR.
INTIA, responsible for the Spanish pilot, provided the data encrypted.
NMA, responsible for the Lithuanian pilot with regards to the inspections,
reviewed all the personal data processed in NMA and a register of personal
data processing records was prepared. Legal acts regulating NMA activities
were also reviewed and changed in accordance with GDPR. A Data Protection
Officer was also appointed in NMA.
LAAS, responsible for the Lithuanian pilot with regards to the farmers and
agricultural consultants, has already implemented the IT solutions which are
necessary for security and accounting of data processing. A Data Protection
Officer was also appointed in LAAS.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
An Informed Consent Form has been prepared for the participation to Pilot
Activities. It was translated in local languages by the pilot partners, and
included in the RECAP Platform. The agreement is asked in the process of
signing up into the RECAP Platform.
Evaluation Questionnaire includes a Privacy Notice that specifies that the
treatment of the data is confidential, complies with GDPR and is carried out
exclusively for analytical and statistical purposes.
In the frame of Focus Group or Individual Interviews with Pilot participants,
a clear verbal explanation is provided to each interviewee and focus group
participant.
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
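The anonymisation step referenced above is not specified in detail in this
DMP. The sketch below shows one common treatment, purely as an illustration:
direct identifiers are dropped and the email is replaced with a salted one-way
hash so repeated answers can still be linked.

```python
import csv
import hashlib

# Direct identifiers collected by the Evaluation Questionnaire (see above).
DIRECT_IDENTIFIERS = {"Name and Surname", "Home or Farm Address",
                      "Phone number", "Social accounts links",
                      "CAP claimant identification number"}

def anonymise(row):
    out = {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
    # Pseudonymise the email with a salted one-way hash (a fixed salt is
    # used here only to keep the sketch short; a real pipeline would not).
    out["Email"] = hashlib.sha256(b"wp4-salt" + row["Email"].encode()).hexdigest()
    return out

with open("evaluation_raw.csv", newline="", encoding="utf-8") as src, \
     open("evaluation_anonymised.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    fields = [f for f in reader.fieldnames if f not in DIRECT_IDENTIFIERS]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        writer.writerow(anonymise(row))
```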
## 2.5 DMP Components in WP5 – Dissemination & Exploitation (ETAM)
<table>
<tr>
<th>
DMP Component
</th>
<th>
Issues to be addressed
</th> </tr>
<tr>
<td>
Data Summary
</td>
<td>
Data collection is necessary for the elaboration of the Dissemination and
Communication Strategy, the establishment and management of the Network of
Interest, the Market Assessment and the Business Plan. Specifically, they are
necessary for the target groups' tracking procedure and for profiling Paying
Agencies, agricultural consultants and farmers' collective bodies.
Regarding the types and formats of data collected, these are lists of
communication recipients and target groups' lists in Excel files containing
organisations/ bodies and their e-mail addresses.
Parts of the lists have been developed in previous projects of the WP leader.
The rest of the data have been developed through desk research. Regarding the
data utility, they are useful to the WP leader for carrying out communication
and dissemination and for the development of the business plan.
Early in May 2018, ETAM contacted everyone whose information was held to make
them aware and to ensure compliance with the General Data Protection
Regulation (GDPR) that came into effect on 25 May 2018.
</td> </tr>
<tr>
<td>
Making data findable, including provisions for metadata
</td>
<td>
The data are available through the public deliverables and are accessible
through Zenodo:
D5.1 Communication and dissemination plan: DOI: _https://doi.org/10.5281/zenodo.1442678_
D5.2 Market Assessment Report: DOI: _https://doi.org/10.5281/zenodo.1442680_
D5.3 Dissemination pack: DOI: _https://doi.org/10.5281/zenodo.1442682_
D5.4 Network of interest meeting report (1): DOI: _https://doi.org/10.5281/zenodo.1442688_
D5.5 Project Workshops (1): DOI: _https://doi.org/10.5281/zenodo.1442690_
D5.7 Network of interest meeting report (2): DOI: _https://doi.org/10.5281/zenodo.1442696_
D5.9 Project Workshops (2): DOI: _https://doi.org/10.5281/zenodo.1486689_
D5.10 Network of interest meeting report (3): DOI: _https://doi.org/10.5281/zenodo.1476524_
The naming conventions used are:
Data_WP5_1_Communication and dissemination Data
Data_WP5_2_Market Assessment Data
Data_WP5_3_Network of Interest Data
Data_WP5_4_Project Workshops Data
As part of any stored data, metadata were generated, which include sufficient
information with appropriate keywords to help external and internal users to
locate data and related information.
</td> </tr>
<tr>
<td>
Making data openly accessible
</td>
<td>
Data concerning e-mail addresses are not openly available, as they are
personal data.
Deliverables publicly posted on the website of RECAP, on the RECAP Wiki and
Zenodo make available all respective data.
No particular methods or software tools are needed to access the data.
Data are stored on ETAM server.
</td> </tr>
<tr>
<td>
Making data interoperable
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Increase data re-use
</td>
<td>
Deliverables publicly posted on the website of RECAP, on the RECAP Wiki and
Zenodo make available all respective data without any restrictions.
</td> </tr>
<tr>
<td>
Allocation of resources
</td>
<td>
Resources have been allocated according to the project plan and WP5 allocated
resources. No additional costs are foreseen for making this dataset FAIR.
</td> </tr>
<tr>
<td>
Data security
</td>
<td>
ETAM fully complies with the applicable national, European and international
framework, and the European Union's General Data Protection Regulation
2016/679.
</td> </tr>
<tr>
<td>
Ethical aspects
</td>
<td>
N/A
</td> </tr>
<tr>
<td>
Other issues
</td>
<td>
N/A
</td> </tr> </table>
# 3. Conclusion
This final DMP reflects the data management strategy regarding the collection,
management, sharing, archiving and preservation of data and the procedure that
RECAP followed in order to efficiently manage the data collected and/or
generated during the project.
# Abbreviations
<table>
<tr>
<th>
API
</th>
<th>
Application Programming Interface
</th> </tr>
<tr>
<td>
BPS
</td>
<td>
Basic Payments Scheme
</td> </tr>
<tr>
<td>
CAP
</td>
<td>
Common Agricultural Policy
</td> </tr>
<tr>
<td>
CC
</td>
<td>
Cross Compliance
</td> </tr>
<tr>
<td>
DEM
</td>
<td>
Digital Elevation Model
</td> </tr>
<tr>
<td>
DMP
</td>
<td>
Data Management Plan
</td> </tr>
<tr>
<td>
DOI
</td>
<td>
Digital Object Identifier
</td> </tr>
<tr>
<td>
ESA
</td>
<td>
European Space Agency
</td> </tr>
<tr>
<td>
EU
</td>
<td>
European Union
</td> </tr>
<tr>
<td>
IP
</td>
<td>
Internet Protocol
</td> </tr>
<tr>
<td>
jpeg
</td>
<td>
Joint Photographic Experts Group
</td> </tr>
<tr>
<td>
JRC
</td>
<td>
Joint Research Center
</td> </tr>
<tr>
<td>
mp3
</td>
<td>
Moving Picture Experts Group Layer-3
</td> </tr>
<tr>
<td>
LAN
</td>
<td>
Local Area Network
</td> </tr>
<tr>
<td>
LPIS
</td>
<td>
Land Parcel Identification System
</td> </tr>
<tr>
<td>
OGC
</td>
<td>
Open Geospatial Consortium
</td> </tr>
<tr>
<td>
PDF
</td>
<td>
Portable Document Format
</td> </tr>
<tr>
<td>
RS
</td>
<td>
Remote Sensing
</td> </tr>
<tr>
<td>
RUSLE
</td>
<td>
Revised Universal Soil Loss Equation
</td> </tr>
<tr>
<td>
SQL
</td>
<td>
Structured Query Language
</td> </tr>
<tr>
<td>
SSL
</td>
<td>
Secure Sockets Layer
</td> </tr>
<tr>
<td>
USGS
</td>
<td>
United States Geological Survey
</td> </tr>
<tr>
<td>
VHR
</td>
<td>
Very High Resolution
</td> </tr>
<tr>
<td>
VLAN
</td>
<td>
Virtual LAN
</td> </tr>
<tr>
<td>
WMS
</td>
<td>
Web Map Service
</td> </tr>
<tr>
<td>
XML
</td>
<td>
Extensible Markup Language
</td> </tr> </table>
# 1. INTRODUCTION
Effective research data management is an important and valuable component of
the responsible conduct of research. This document provides a data management
plan (DMP), which describes how data will be collected, organised, managed,
stored, secured, back- uped, preserved, and where applicable, shared.
In the EU Framework Programme for Research and Innovation _Horizon 2020_ a
pilot action on open access to research data will be implemented. Open access
can be defined as the practice of providing on-line access to scientific
information that is free of charge to the enduser and that is re-usable. In
the context of research and innovation, 'scientific information' can refer to
(i) peer-reviewed scientific research articles (published in scholarly
journals) or (ii) research data (data underlying publications, curated data
and/or raw data).
Moreover, H2020 projects are required to develop a Data Management Plan (DMP),
in which they will specify what data will be open and define the details of
their handling. The DMP can be divided into two main categories:
* Utilization of research data that are generated (and collected) within the context of the project
* Dissemination of the scientific results generated from the project.
Since, in terms of disseminating VITAL results, the PMT will focus on applying
(where applicable) the gold open-access model, the rest of the document
focuses on the first of the aforementioned categories, i.e. the management
plan for the research data generated within the research context of VITAL.
The scope of the current DMP is to make the VITAL data easily:
* Discoverable
* Accessible
* Assessable and intelligible
* Useable beyond the original purpose for which it was collected
* Interoperable to specific quality standards
# 2. ETHICS AND INTELLECTUAL PROPERTY
## 2.1. Ethics
Overall, the VITAL project consortium confirms that it will assure that if the
items mentioned hereunder are applicable to the project they will be conformed
to:
* Directive 95/46/EC (Protection of personal data)
* Opinion 23/05/2000 of the European Group on Ethics in Science and New Technologies concerning ‘Citizens Rights and New Technologies: A European Challenge’ and specifically those relating to:
* ICT (Protection of privacy and protection against personal intrusion)
* Ethics of responsibility (Right to information security)
* Article 15 (Freedom of expression and research and data protection)
The project will ensure that the consortium agreement (or addendums thereof)
is constructed to enable such assurances to be formally made and adhered to by
consortium partners.
In addition, with respect to Directive 95/46/EC (Protection of personal of
data), individual work packages will be specifically requested to ensure that
any models, specifications, procedures or products also enable the project end
users to be compliant with this directive.
The VITAL partners also will abide by professional ethical practices and
comply with the Charter of Fundamental Rights of the European Union (c.f.,
http://www.europarl.europa.eu/charter/pdf/text_en.pdf).
## 2.2. IPR and Knowledge Management Plan
A Consortium Agreement has been signed at an early stage of the project in
order to define the important points necessary to obtain the best possible
management (financial conditions, Intellectual Property Rights (IPR),
planning) of intellectual property. IPR will be managed in line with a
principle of equality of all the partners towards the foreground knowledge and
in full compliance with the general Commission policies regarding ownership,
exploitation rights and confidentiality. In general, knowledge, innovations,
concepts and solutions that are not going to be protected by patent
applications by the participants will be made public after agreement between
the partners, to allow others to benefit from these results and exploit them.
However, where results require patents to show the impact of VITAL, we will
perform freedom to operate searches to determine that this does not infringe
on patents belonging to others. Additionally, we will consider the
intellectual property rights belonging to third parties and consortium members
to ensure no infringement on intellectual property rights. The unified
consortium agreement will be used as a reference for all IPR cases. The
Consortium Agreement identifies the background intellectual property of each
of the partners that may be used to achieve the project objectives. The
corresponding list of patents at the disposal of the partners for the duration
of the project is also included.
The principle of territoriality for industrial property will be applied within
the VITAL project and the best instrument (several national patent
registrations, European Patent application or an international application)
will be selected in each case. Concerning the standards-related activities,
they are considered to be part of the sharable foreground knowledge and
contributing partner(s) are equally owners of the Use and Author Right. Each
partner shall abstain from using or introducing into VITAL any background or
side-ground work that would or might require unexpected licensing of the work.
The basic philosophy of VITAL is to implement an open source policy for most
but not all results. This balances the need to protect the individual
interests of each partner with the need to make a quick and lasting impact on
the wider community. This open source approach to dissemination of VITAL
results, including its prototypes and test environments, will ensure that
critical innovations can be patented in a reasonable way.
The Consortium Agreement will provide rules for handling confidentiality and
IPR to the benefit for the Consortium and its partners. All the project
documentation will be stored electronically and as paper copies. Classified
Documents will be handled according to proper rules with regard to
classification (as described above), numbering and locked storing and
distribution limitations. The policy, that will govern the IPR management in
the scope of VITAL, is driven by the following principles:
### 2.2.1. Ownership of knowledge
* Knowledge shall be the property of the partner carrying out the work leading to that knowledge.
* Where several partners have jointly carried out work generating the knowledge and where their respective share of the work cannot be ascertained, they shall have joint ownership of such knowledge. The contractors concerned shall agree amongst themselves the allocation and terms of exercising ownership of that knowledge in accordance with the provisions of this contract.
### 2.2.2. Protection of knowledge
* Where knowledge is capable of industrial or commercial application, its owner shall provide for its adequate and effective protection, in conformity with relevant legal provisions, including the consortium agreement.
* Where a partner does not intend to protect or to extend the protection of its knowledge in a specific country or intends to waive the protection of its knowledge, the Commission shall be informed at least 30 days prior to the corresponding deadline. In such a case and where the Commission considers it necessary to protect such knowledge in a particular country, it may, with the agreement of the contractor concerned, adopt protective measures.
### 2.2.3. Use and dissemination
* The partners shall use or cause to be used the knowledge arising from the project, which they own, in accordance with their interests. The contractors shall set out the terms of use in a detailed and verifiable manner, notably in the plan for using and disseminating the knowledge.
* If dissemination of knowledge would not adversely affect its protection or its use, the contractors shall ensure that it is disseminated within a period of two years after the end of the project.
### 2.2.4. Access rights
The access rights for execution of the project are the following:
* Project partners shall enjoy access rights to the knowledge and to the pre-existing know-how, if that knowledge or pre-existing know-how is needed to carry out their own work under that project. Access rights to knowledge shall be granted on a royalty-free basis. Access rights to pre-existing know-how shall be granted on a royalty-free basis, unless otherwise agreed before signature of the contract.
* Subject to its legitimate interests, the termination of the participation of a project partner shall in no way affect its obligation to grant access rights pursuant to the previous paragraph to the other contractors until the end of the project.
The access rights for use of knowledge are the following:
* Partners shall enjoy access rights to knowledge and to the pre-existing know how, if that knowledge or pre-existing know-how is needed to use their own knowledge. Access rights to knowledge shall be granted on a royalty-free basis, unless otherwise agreed before signature of the contract. Access rights to pre-existing know-how shall be granted under fair and non-discriminatory conditions to be agreed.
* Subject to the partners’ legitimate interests, access rights may be requested under the conditions laid down in the previous paragraph until two years after the end of the project or after the termination of the participation of a partner, whichever falls earlier, unless the partners concerned agree on a longer period.
The consortium agreement that will be signed before the end of the contract
negotiations with the Commission will gather the basic aspects of the IPR
management:
* Confidentiality
* Ownership of results / joint ownership of results / difficult cases (i.e. pre-existing know-how so closely linked with result difficult to distinguish pre-existing know-how and result)
* Legal protection of results (patent rights)
* Commercial exploitation of results and any necessary access right
* Commercial obligation
* Relevant Patents, know-how, and information Sublicense
* Pre-existing know-how excluded from contract
Nevertheless, many specific IPR cases that will need a concrete solution based
on the principles previously fixed may exist. In these conflict situations, the
Project Management Team will be responsible for arbitrating a solution. In
case any member of this Board is directly affected by the conflict, it
will not participate in the arbitration process.
# 3. STRUCTURE OF VITAL DMP
Following the template recommended by the EC [1], the Data Management Plan
(DMP) includes the following major components, as described in the figure
below.
[Figure: DMP components — data set reference and name; data set description;
standards and metadata; data sharing; archiving and preservation]
**Figure 3-1: Structure (template) of the data management plan**
Specifically, in VITAL, the aforementioned components are applicable as
summarized below:
[Figure: VITAL data management — reference and name: VITAL [Name] [Type]
[Place] [Date] [Owner] [Target User]; description: e.g. “Traffic meas.
experiments, Nice, June 2015, planned publication in NOMS”; metadata: text
file, if not part of the data file; sharing: Zenodo.org integrated with
GitHub; archiving: GitHub (through Zenodo.org)]
**Figure 3-2: Main components of the VITAL Data Management Plan**
# 4. DATA SET REFERENCE AND NAME
The following structure is proposed for the VITAL data set identifier:
VITAL_[Name]_[Type]_[Place]_[Date]_[Owner]_[Target User]
where:
* “Name” is a short name for the data.
* “Type” describes the type of data (e.g. code, publication, measured data).
* “Place” describes the place where the data were produced.
* “Date” is the date in format “YYYY-MM-DD”.
* “Owner” is the owner or owners of the data (if any).
* [Optional] “Target user” is the target audience of the data.
* “_” (underscore) is used as the separator between the fields.
For example,
“VITAL_Field_Experiment_data_Trento_2015-06-30_Create-Net_Internal.dat” is a
data file from a field experiment in Trento, Italy from 2015-06-30 made and
owned by Create-Net with extension .dat (MATLAB). More information about the
data is provided in the metadata (see the following section).
All the data fields in the identifier above, apart from the target user, are
mandatory. If owner cannot be specified, “Unspecified-owner” should be
indicated.
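A minimal sketch of a helper that assembles identifiers following this
convention is given below (illustrative only; it is not part of the VITAL
tooling):

```python
def vital_id(name, dtype, place, date, owner="Unspecified-owner",
             target_user=None, ext="dat"):
    """Build a VITAL data set identifier with underscore-separated fields."""
    fields = ["VITAL", name, dtype, place, date, owner]
    if target_user:              # the target-user field is optional
        fields.append(target_user)
    return "_".join(fields) + "." + ext

# Reproduces the example identifier from the text above.
print(vital_id("Field", "Experiment_data", "Trento", "2015-06-30",
               "Create-Net", "Internal"))
# VITAL_Field_Experiment_data_Trento_2015-06-30_Create-Net_Internal.dat
```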
# 5. DATA SET DESCRIPTION AND METADATA
The previous section defined the data set identifier. The data set description is essentially an expanded version of the identifier with more details.
The data set description is organized as metadata, structured in the same way as the identifier but with more detail, and, depending on the file format, it will be incorporated either as part of the data file or as a separate file (in its simplest form, in text format). In the case of a separate metadata file, it will have the same name as the data file, with the suffix “METADATA”.
For example, the metadata file name for the data file from the previous section will look as follows:
“VITAL_Field_Experiment_data_Trento_2015-06-30_Create-Net_Internal_METADATA.txt”
The metadata file can also describe a number of files (e.g. a set of log files).
The project may also provide the metadata in XML or JSON format, if necessary, for convenience of parsing and further processing.
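As an illustration only, a JSON metadata sidecar could be produced as in the sketch below; the field names are hypothetical expansions of the identifier fields, not a fixed VITAL schema.

```python
# Hypothetical sketch: writing a companion metadata file in JSON, one of the
# formats the text above mentions. Field names and values are illustrative.
import json

data_file = "VITAL_Field_Experiment_data_Trento_2015-06-30_Create-Net_Internal.dat"
metadata = {
    "name": "Field_Experiment",
    "type": "data",
    "place": "Trento, Italy",
    "date": "2015-06-30",
    "owner": "Create-Net",
    "target_user": "Internal",
    "description": "Traffic measurement experiment; see deliverable for setup.",
    "files": [data_file],  # one metadata file may describe several files
}

# The metadata file keeps the data file's name plus the METADATA suffix;
# the document's example uses .txt, a .json extension is assumed here.
with open(data_file.replace(".dat", "_METADATA.json"), "w") as fh:
    json.dump(metadata, fh, indent=2)
```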
The project will develop several data types related to the VNF (Virtual Network Function) Descriptors, NS (Network Service) Descriptors, VNF Catalogues, etc.; these will be encoded in the metadata format in a consistent way, so that the data types can be described and filtered uniformly.
# 6. DATA SHARING
VITAL has chosen the zenodo.org repository for storing the project data, and a VITAL project account has been created 1 . Zenodo.org is a repository supported by CERN and the EU OpenAIRE project 2 ; it is open, free, searchable and structured, with flexible licensing allowing storage of all types of data: datasets, images, presentations, publications and software. In addition:
* The repository has backup and archiving capabilities.
* The repository allows for integration with github.com 3 , where the project code will be stored. GitHub provides a free and flexible tool for code development and storage.
* Zenodo assigns all publicly available uploads a Digital Object Identifier (DOI) to make each upload easily and uniquely citable.
All the above makes Zenodo a good candidate as a _unified_ repository for all
foreseen project data (presentations, publications, code and measurement data)
from VITAL.
Information on using Zenodo by the project partners with application to the
VITAL data will be circulated within the consortium and addressed within the
respective work package (WP6).
The process of making the VITAL data public and publishable at the repository
will follow the procedures described in the Consortium Agreement. For the
code, the project partners will follow the internal “Open Source Management
Process” document.
All the public data of the project will be openly accessible at the
repository. Non-public data will be archived at the repository using the
“closed access” option.
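As an illustration, a minimal sketch of a deposit with the “closed access” option through Zenodo's public REST API (documented at developers.zenodo.org) could look as follows; the token, file name and metadata values are placeholders, and the publish step is omitted.

```python
# Minimal sketch of depositing a non-public file on Zenodo via its REST API.
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "<your-zenodo-token>"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(ZENODO, params=TOKEN, json={}).json()

# 2. Attach the data file (placeholder name).
with open("measurements.dat", "rb") as fh:
    requests.post(f"{ZENODO}/{dep['id']}/files", params=TOKEN,
                  data={"name": "measurements.dat"}, files={"file": fh})

# 3. Describe the record; non-public data uses access_right "closed".
meta = {"metadata": {"title": "VITAL measurement data (example)",
                     "upload_type": "dataset",
                     "description": "Placeholder description.",
                     "creators": [{"name": "Surname, Name"}],
                     "access_right": "closed"}}
requests.put(f"{ZENODO}/{dep['id']}", params=TOKEN, json=meta)
```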
# 7. ARCHIVING AND PRESERVATION
The Guidelines on Data Management in Horizon 2020 require defining procedures
that will be put in place for long-term preservation of the data and backup.
The zenodo.org repository possesses these archiving capabilities including
backup and will be used to archive and preserve the VITAL project data.
Further, the VITAL project data will also be stored in a project-managed repository tool, called Redmine, which is managed by the project coordinator and offers flexible live data storage. The Redmine repository will link directly to the project website, where access information for the different data types will be provided. This will permit users and research collaborators to have easy and convenient access to the project research data.
A GitHub account will be linked to the repository in order to preserve and
backup the software produced in the project.
# 8. USE OF DMP WITHIN THE PROJECT
The VITAL project partners use this plan as a reference for data management
(naming, providing metadata, storing and archiving) within the project each
time new project data are produced.
The project partners are introduced to the DMP and its use as part of WP6
activities. Relevant questions from partners will also be addressed within
WP6. The work package will also provide support to the project partners on
using Zenodo as the data management tool.
The DMP will be used as a living document, keeping the project partners informed about the use, monitoring and updates of the shared infrastructure.
0802_AEROWORKS_644128.md
# **1** Introduction
## 1.1 Summary
As part of Horizon 2020, the AEROWORKS project participates in a pilot action on open research data. The aim is to indicate what kind of data the project will collect, how the data will be preserved, and which sharing policies are adopted to make these data readily available to the research community.
## 1.2 Purpose of document
This DMP details what kind of research data will be created during the project's lifespan and prescribes how these data will be made available, and thus reusable and verifiable, by the larger research community. The project's efforts in the area of open research data are outlined, giving particular attention to the following issues:
* The types of open and non-open data that will be generated or collected by the consortium, via experimental campaigns and research, during the project's lifespan;
* The technologies and infrastructures that will be used to securely preserve the data long-term;
* The standards used to encode the data;
* The data exploitation plans;
* The sharing/access policies applied to each dataset.
The plan can be considered also as a checklist for the future and as a
reference for the resource and budget allocations related to data management.
## 1.3 Methodology
The content of this document builds upon the input of the project's industrial partners and all the peers of work packages 5, 6, 7 and 8. A short questionnaire, outlining the DMP's objectives and stating the required information in a structured manner, was prepared by LTU and disseminated to the partners. The compiled answers have been integrated into a coherent plan.
The present DMP will evolve as the project progresses in accord with the
project's efforts in this area. At any time, the DMP will reflect the current
state of the consortium's agreements regarding data management, exploitation
and protection of rights and results.
## 1.4 Outline
For each partner involved in the collection or generation of research data, a short technical description is given stating the context in which the data has been created. The different datasets are identified by project-wide unique identifiers and categorized through additional metadata such as, for example, the sharing policy attached to them.
The considered storage facilities are outlined and tutorials are provided for their use (submitting and retrieving the research data). A further appendix lists the format standards that will be used to encode the data and provides references to technical descriptions of these formats.
## 1.5 Partners involved
**Partners and Contribution**

| Short Name | Contribution |
| --- | --- |
| LTU | Coordinating and integrating inputs from partners |
# **2** Data sharing, access and preservation
The digital data created by the project will be curated differently depending on the sharing policies attached to them. For both open and non-open data, the aim is to preserve the data and make them readily available to the interested parties for the whole duration of the project and beyond.
## 2.1 Non-open research data
The non-open research data will be archived and stored long-term in the REDMINE portal administered by LTU. The REDMINE platform is currently being employed to coordinate the project's activities and tasks and to store all the digital material connected to AEROWORKS.
## 2.2 Open research data
The open research data will be archived on the Zenodo platform. Zenodo is an EU-backed portal built on the well-established GIT version control system and the Digital Object Identifier (DOI) system. The portal's aims are inspired by the same principles that the EU sets for the pilot; Zenodo thus represents a very suitable and natural choice in this context.
The repository services offered by Zenodo are free of charge and enable peers to share and preserve research data and other research outputs, in any size and format: datasets, images, presentations, publications and software. The stored data and the associated metadata are preserved through well-established practices such as mirroring and periodic backups.
Finally, each uploaded dataset is assigned a unique DOI making the data
uniquely identifiable and thus traceable and referenceable.
# **3** Description of AEROWORKS data sets
This section will list the data sets produced within the AEROWORKS project.
0803_TWEETHER_644678.md
# INTRODUCTION
In December 2013, the European Commission announced their commitment to open
data through the Pilot on Open Research Data, as part of the Horizon 2020
Research and Innovation Programme. The Pilot’s aim is to “improve and maximise
access to and re-use of research data generated by projects for the benefit of
society and the economy”.
In the frame of this Pilot on Open Research Data, results of publicly-funded
research should be disseminated more broadly and faster, for the benefit of
researchers, innovative industry and citizens.
On the one hand, Open Access not only accelerates the discovery process and eases the transfer of research results to the market (thus providing a return on public investment), but also avoids duplication of research efforts, leading to a better use of public resources and higher throughput. On the other hand, this Open Access policy also benefits the researchers themselves. Making research publicly available increases the visibility of the performed research, which translates into a significantly higher number of citations 1 as well as an increase in the potential for collaboration with other institutions in new projects, among others. Additionally, Open Access offers small and medium-sized enterprises (SMEs) access to the latest research for utilisation.
Under H2020, each beneficiary must ensure open access to all peer-reviewed
scientific publications relating to its results. These open access
requirements are based on a balanced support to both 'Green open access'
(immediate or delayed open access that is provided through self-archiving) and
'Gold open access' (immediate open access that is provided by a publisher).
Apart from open access to publications, projects must also aim to deposit the
research data needed to validate the results presented in the deposited
scientific publications, known as "underlying data". In order to effectively
supply this data, projects need to consider at an early stage how they are
going to manage and share the data they create or generate.
During the first months of the project, TWEETHER elaborated the first version
of the Data Management Plan (DMP), which described how the scientific
publications and research data generated during the project was going to be
stored and made public. In particular, this DMP addressed the following
issues:
* What data will be collected / generated in the course of the project?
* What data will be exploited? What data will be shared/made open?
* What standards will be used / how will metadata be generated?
* How will data be curated / preserved, including after project completion?
Since the DMP is expected to mature during the project, this deliverable
provides an updated version of the previous DMP with a review of the data sets
that will be collected, processed or generated inside the project and with
more information about the mechanisms used to share or make the publications
and the data open.
Namely, the main updates of this deliverable are:
* Inclusion of Section 4.1
* Inclusion of Section 5.1
* Description of the new data set related to the measurements on the W-band chipsets presented in Section 8 (Data set reference: DS_CHIPSET_SP).
# TWEETHER PROJECT
The TWEETHER project will provide high capacity everywhere by the realisation
of a W-band wireless system with a capacity and coverage of 10Gbps/km² for the
backhaul and the access markets, considered by operators a key investment
opportunity. Such a system, combined with the development of beyond state-of-
the-art affordable millimetre wave devices, will permit to overcome the
economical obstacle that causes the digital divide and will pave the way
towards the full deployment of small cells.
This system merges for the first time novel approaches in vacuum electron
devices, monolithic millimetre wave integrated circuits and networking
paradigms to implement a novel transmitter to foster the future wireless
communication networks.
In particular, the TWEETHER project is developing a novel, compact, low-cost and high-yield Traveling Wave Tube (TWT) power amplifier with 40 W output power. This TWT will be the only device capable of providing wideband operation and enough output power to distribute the millimetre wave frequency signal over a useful distance.
In addition, an advanced, high-performance W-band transceiver chipset, enabling the low-power operation of the system, is currently being fabricated. More specifically, this chipset includes various GaAs-based monolithic microwave integrated circuits (MMICs) comprising elements such as power amplifiers, down- and up-converters and an 8x frequency multiplier.
These novel W-band elements will be integrated by using advanced micro-
electronics and micromechanics to achieve compact front end modules, which
will be assembled and packaged with interfaces and antennas for a field test
to be deployed at the campus of the _Universitat Politecnica de Valencia_ to
prove the breakthrough of the TWEETHER system in the millimetre wave wireless
network field.
Therefore, TWEETHER addresses a highly innovative approach, its most relevant audience being the scientific community working in millimetre wave technology and wireless systems. In addition, due to the strong impact of the system, other expected audiences are the industrial community, standardization bodies working on the W-band and on the definition of Multimedia Wireless Systems (MWS), and potential users such as telecom operators. Defining an appropriate open data strategy will, on one hand, help increase the visibility of the performed research inside the scientific community and the industrial ecosystem and, on the other hand, ensure proper management of the intellectual property.
# CONSIDERATIONS FOR PUBLIC INFORMATION
The H2020’s open access policy pursues that the information generated by the
projects participating in that programme is made publicly available. However,
as stated in EC guidelines on Data Management in H2020 2 , “ _As an
exception, the beneficiaries do not have to ensure open access to specific
parts of their research data if the achievement of the action's main
objective, as described in Annex I, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access_ .”
In agreement with this, the TWEETHER consortium will decide what information is made public according to aspects such as potential conflicts with commercialization, IPR protection of the knowledge generated (by patents or other forms of protection), risks to achieving the project objectives/outcomes, etc.
The TWEETHER project is pioneering research that is of key importance to the electronics and telecommunications industry. Effective exploitation of the research results depends on the proper management of intellectual property. Therefore, the TWEETHER consortium will follow this strategy (Figure 1): if the research findings result in a ground-breaking innovation, the members of the consortium will consider two forms of protection: withholding the data for internal use, or applying for a patent in order to commercially exploit the invention for financial gain. In the latter case, publications will be delayed until the patent filing. On the contrary, if the technology developments are not going to be withheld or patented, the results will be published for knowledge-sharing purposes.
[Figure: flowchart — research results pass through a selection step and are either protected (patent publication or withheld, with scientific publication only after patent filing) or disseminated and shared (open access publication in a repository of publications and research data), linking the Dissemination Plan and the Data Management Plan]

**Figure 1. Process for determining which information is to be made public (from EC’s document “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 – v1.0 – 11 December 2013”)**
# OPEN ACCESS TO PUBLICATIONS
The first aspect to be considered in the DMP is related to the open access
(OA) to the publications generated within the TWEETHER project, meaning that
any peer-reviewed scientific publication made within the context of the
project will be available online to any user at no charge. This aspect is
mandatory for new projects in the Horizon 2020 programme (article 29.2 of the
Model Grant Agreement).
The two ways considered by the EC to comply with this requirement are:
* Self-archiving / ‘green’ OA: In this option, the beneficiaries deposit the final peer-reviewed manuscript in a repository of their choice. In this case, they must ensure open access to the publication within a maximum of six months (twelve months for publications in the social sciences and humanities).
* Open access publishing / ‘gold’ OA: In this option, researchers publish their results in open access journals, or in journals that sell subscriptions and also offer the possibility of making individual articles openly accessible via the payment of author processing charges (APCs) (hybrid journals). Again, open access via the chosen repository must be ensured upon publication.
Publications arising from the TWEETHER project will be deposited in a
repository (‘green’ OA) and, whenever possible, the option ‘gold’ OA will be
used in order to provide the widest dissemination of the published results.
With respect to the ‘green’ OA option, it should be mentioned that most publishers allow depositing a copy of the article in a repository, sometimes with a period of restricted access (embargo) 3 .
In Horizon 2020, the embargo period imposed by the publisher must be shorter
than 6 months (or 12 months for social sciences and humanities). This embargo
period will be therefore taken into account by the TWEETHER consortium to
choose the open access modality for the fulfilment of the open access
obligations established by the EC.
Additionally, according to the EC recommendation, whenever possible the TWEETHER consortium will retain ownership of the copyright for its work through the use of a ‘License to Publish’, a publishing agreement between author and publisher. With this agreement, authors can retain copyright and the right to deposit the article in an Open Access repository, while granting the publisher the rights necessary to publish the article. Additionally, to ensure that others can be granted further rights for the use and reuse of the work, the TWEETHER consortium may ask the publisher to release the work under a Creative Commons license, preferably CC-0 or CC-BY.
Besides these two aspects (retaining ownership of the publication and the embargo period), the TWEETHER consortium will also consider the relevance of the journal in which it intends to publish, measured by means of the “impact factor” (IF). We expect the work carried out in the TWEETHER project to lead to results with a very high impact, which we aim to publish in high-IF journals. Therefore, we will also consider this factor when selecting the journal in which to publish the TWEETHER project results.
Here we provide a list of the journals initially considered for the
publications to be generated in the TWEETHER project with information about
the open access policy of each journal.
<table>
<tr>
<th>**Publisher**</th>
<th>**Journal**</th>
<th>**Impact factor (2013)**</th>
<th>**Author charges (for OA)**</th>
<th>**Comments about open access**</th>
</tr>
<tr>
<td rowspan="8">Institute of Electrical and Electronics Engineers (IEEE)</td>
<td>IEEE Wireless Communications</td>
<td>6.524</td>
<td>$1,750</td>
<td>A paid open access option is available for this journal.</td>
</tr>
<tr>
<td>IEEE Communications Magazine</td>
<td>4.460</td>
<td></td>
<td>If funding rules apply, authors may post the author's post-print version in the funder's designated repository. Publisher's version/PDF cannot be used.</td>
</tr>
<tr>
<td>IEEE Journal on Terahertz Technology</td>
<td>4.342</td>
<td></td>
<td></td>
</tr>
<tr>
<td>IEEE Electron Device Letters</td>
<td>3.023</td>
<td></td>
<td></td>
</tr>
<tr>
<td>IEEE Transactions on Microwave Theory and Techniques</td>
<td>2.943</td>
<td></td>
<td></td>
</tr>
<tr>
<td>IEEE Transactions on Electron Devices</td>
<td>2.358</td>
<td></td>
<td></td>
</tr>
<tr>
<td>IEEE Transactions on Components, Packaging, and Manufacturing Technology</td>
<td>1.236</td>
<td></td>
<td></td>
</tr>
<tr>
<td>IEEE Journal of the Electron Devices Society</td>
<td>Started 2013</td>
<td>$1,350</td>
<td>It is a fully open-access publication. Publisher's version/PDF can be archived on the author's personal website, employer's website or funder's designated website. A Creative Commons Attribution License is available if required by the funding agency.</td>
</tr>
<tr>
<td>Springer</td>
<td>Journal of Infrared, Millimeter, and Terahertz Waves</td>
<td>1.891</td>
<td>2,200€</td>
<td>Springer’s Open Choice eligible journals publish open access articles under the liberal Creative Commons Attribution 4.0 International (CC BY) license. Otherwise, the author's post-print can be posted on any open access repository 12 months after publication (publisher's version/PDF cannot be used).</td>
</tr>
<tr>
<td>AIP</td>
<td>Applied Physics Letters</td>
<td>3.515</td>
<td>$2,200</td>
<td>A paid open access option is available for this journal. If funding rules apply, the publisher's version/PDF may be used on the author's personal website, institutional website or institutional repository.</td>
</tr>
</table>
From this list, we can see that the majority of the journals targeted by the TWEETHER project are IEEE journals, which offer an open access modality and allow the author’s post-print version to be deposited in a repository. This is in line with the Horizon 2020 requirements.
All the publications will acknowledge the project funding. This acknowledgment must also be included in the metadata of the generated information, since it helps maximise the discoverability of publications and ensures the acknowledgment of EU funding. The terms to be included in the metadata are:
* "European Union (EU)" and "Horizon 2020"
* the name of the action, acronym and the grant number
* the publication date, length of embargo period if applicable, and a persistent identifier (e.g. DOI, Handle)

Finally, in the Model Grant Agreement, “scientific publications” means primarily journal articles. Whenever possible, TWEETHER will provide access to other types of scientific publications such as conference papers, presentations, public deliverables, etc.
## Access to peer-reviewed scientific publication
An important objective of TWEETHER is the dissemination of its research results to the scientific community, targeting the scientific journals, conferences and workshops with the highest impact. Indeed, several peer-reviewed scientific papers have already been presented at relevant international conferences. These publications are or will be available online, as required by the EC:
* C. Paoloni, R. Letizia, F. Napoli, Q. Ni, A. Rennie, F. André, K. Pham, F. Magne, I. Burciu, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, A. Ramirez, R. Vilar, "Horizon 2020 TWEETHER project for W-band high data rate communications", 16th International Vacuum Electronics Conference (IVEC 2015), Beijing, China, April 2015.
Available through OpenAIRE and UPV’s RiuNet repository:
_http://hdl.handle.net/10251/62240_
* C. Paoloni, R. Letizia, Q. Ni, F. André, I. Burciu, F. Magne, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, A. Ramirez, R. Vilar, “Scenarios and Use Cases in Tweether: W-band for Internet Everywhere”, 24th European Conference on Networks and Communications, Paris, France, June 2015.
Available through OpenAIRE and UPV’s RiuNet repository:
_http://hdl.handle.net/10251/62274_
* C. Paoloni, R. Letizia, F. André, S. Kohler, F. Magne, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, G. Ulisse, A. Ramirez, R. Vilar, "W-band TWTs for New Generation High Capacity Wireless Networks", 17th International Vacuum Electronics Conference (IVEC 2016), Monterey, US, April 2016.
The access to this publication will be available shortly through OpenAIRE.
* Claudio Paoloni, François Magne, Frédéric André, Viktor Krozer, Rosa Letizia, Marc Marilier, Antonio Ramirez, Marc Rocchi, Ruth Vilar, Ralph Zimmerman, “Millimeter Wave Wireless System based on Point to Multipoint Transmissions”, 25 th European Conference on Networks and Communications (EUCNC2016).
To be published. We will provide access upon publication.
* Claudio Paoloni, François Magne, Frédéric André, Viktor Krozer, Marc Marilier, Antonio Ramirez, Ruth Vilar, Ralph Zimmerman, “W-band point to multipoint system for small cells backhaul”, 25 th European Conference on Networks and Communications (EUCNC2016).
To be published. We will provide access upon publication.
* C. Paoloni, F. André, V. Krozer, R. Zimmerman, S. Koeller, Q. T. Le, R. Letizia, A. Sabaawi, G. Ulisse, “A Traveling Wave Tube for 92 – 95 GHz band wireless applications”, 41st International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz 2016), Copenhagen, Denmark, 2016.
To be published. We will provide access upon publication.
Apart from the open access to the scientific papers detailed above, TWEETHER has provided access to other types of documents, such as public deliverables and presentations given in scientific and industrial workshops, through the project website and the ZENODO repository.
In addition, a workshop on Millimetre-wave Technologies for High-Speed Broadband Wireless Networks was organized in the frame of TWEETHER. The presentations of this workshop are available for download on the project website.
# RESEARCH DATA
The scientific and technical results of the TWEETHER project are expected to be of great interest to the scientific community. Throughout the duration of the project, once the relevant protections (e.g. IPR) are secured, the TWEETHER partners may disseminate (subject to their legitimate interests) the obtained results and knowledge to the relevant scientific communities through contributions to journals and international conferences in the field of wireless communications and millimetre-wave technology.
Apart from the open access to publication explained in the previous section,
the Open Research Data Pilot also applies to two types of data 4 :
* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Other data, including associated metadata, as specified and within the deadlines laid down in a data management plan, to be developed by the project. In other words, beneficiaries will be able to choose which data, additionally to the data underlying publications, they make available in open access mode.
According to this requirement, the underlying data related to the scientific publications will be made publicly available (see Section 8). This will allow other researchers to use that information to validate the results, making it a starting point for their own investigations, as intended by the EC through its open access policy. However, in order to be aligned with the protection policy and strategy described, the data sets will be analysed on a case-by-case basis before making them open, with the objective of not jeopardizing exploitation or commercialization. As a result, the publication of research data will mainly be carried out by those partners involved in the scientific development of the project (i.e., academic and research partners), while those partners focused on the “development” of the technology will limit the publication of information due to strategic/organizational reasons (commercial exploitation).
4 _EC document: “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020” – version 1.0 – 11 December, 2013_
In the first version of the DMP the project consortium provided an explanation
of the different types of data sets to be generated in TWEETHER. Examples of
these data are the specifications of the TWEETHER system and the services it
supports, the datasheets and performances of the technological developments of
the project, the field trial results with the KPIs (Key Performance
Indicators) used to evaluate the system performances, among others.
As the nature and extent of these data sets can evolve during the project, the objective of this deliverable is to review the data sets identified so far to determine whether they should be modified/updated or whether new data sets should be included. In particular, a data set related to the measurements on the W-band chipsets has been included (see Section 8). The rest of the data sets are still relevant.
## Access to research data
According to the requirement of providing access to the data needed to
validate the results presented in the scientific publications (i.e.,
underlying data), some research results will be publicly available:
* Results of the W-band TWT gain and output power simulated by using MAGIC 3D Particle in Cell Simulators. These results were presented in the IVEC paper.
* The underlying data corresponding to the paper “Millimeter Wave Wireless System based on Point to Multipoint Transmissions” to be presented at the EUCNC 2016 will be made open upon publication.
# METADATA
Metadata refers to “data about data”, i.e., the information that describes the published data with sufficient context or instructions to be intelligible to other users. Metadata must allow proper organization, search and access to the generated information and can be used to identify and locate the data via a web browser or web-based catalogue.
Two types of metadata will be considered within the frame of the TWEETHER
project: that corresponding to the project publications, and that
corresponding to the published research data.
With respect to the metadata related to scientific publications, as described in Section 4, it includes the title, the authors, the publication date, the funding institution (EU H2020), the grant number, a persistent identifier (e.g. DOI, Handle), etc. Figure 2 shows an example of metadata used for the scientific paper presented at the EuCNC2015.
**Figure 2. Metadata used for the scientific paper presented at the
EuCNC2015**
In the context of data management, metadata will form a subset of data
documentation that will explain the purpose, origin, description, time
reference, creator, access conditions and terms of use of a data collection.
The metadata that best describe the data depend on the nature of the data. For the research data generated in TWEETHER it is difficult to establish a single criterion, since the natures of the initially considered data sets differ; the metadata will therefore be based on a generalised metadata schema such as the one used in ZENODO 4 , which includes elements such as:
* Title: free text
* Creator: Last name, first name
* Date
* Contributor: It can provide information referred to the EU funding and to the TWEETHER project itself; mainly, the terms "European Union (EU)" and "Horizon 2020", as well as the name of the action, acronym and the grant number
* Subject: Choice of keywords and classifications
* Description: Text explaining the content of the data set and other contextual information needed for the correct interpretation of the data.
* Format: Details of the file format
* Resource Type: data set, image, audio, etc.
* Identifier: DOI
* Access rights: closed access, embargoed access, restricted access, open access.
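As an illustration of these elements, a record could be assembled as in the following sketch; all values are placeholders rather than an actual TWEETHER data set.

```python
# Hypothetical sketch: a metadata record covering the elements listed above.
record = {
    "title": "W-band TWT S-parameter measurements (example)",
    "creator": "Surname, First name",
    "date": "2016-06-30",
    "contributor": "European Union (EU), Horizon 2020, "
                   "TWEETHER project, grant agreement No 644678",
    "subject": ["W-band", "TWT", "S-parameters"],  # keywords/classifications
    "description": "Placeholder description of the data set content.",
    "format": "Touchstone",
    "resource_type": "dataset",
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "access_rights": "open access",
}
```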
Additionally, a readme.txt file could be used as an established way of accounting for all the files and folders comprising the data set, explaining how the files relate to each other, what format they are in, and whether particular files are intended to replace others.
Based on the comments presented above, Figure 3 shows an example of metadata
used in ZENODO for the data uploaded to this platform.
**Figure 3. Metadata used in ZENODO for data uploaded to this platform**
# DATA SHARING, ARCHIVING AND PRESERVATION
A repository is the mechanism to be used by the project consortium to make the
project results (i.e., publications and scientific data) publicly available
and free of charge for any user. According to this, several options are
considered/suggested by the EC in the frame of the Horizon 2020 programme to
this aim:
For depositing scientific publications:
* Institutional repository of the research institutions (e.g., RiuNet at UPV)
* Subject-based/thematic repository
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)

For depositing generated research data:
* A research data repository which allows third parties to access, mine, exploit, reproduce and disseminate the data free of charge
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)
The academic institutions participating in TWEETHER have appropriate repositories available, which are in fact linked to OpenAIRE (https://www.openaire.eu/participate/deposit/idrepos):
# Lancaster University - Lancaster E-Prints
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations,
Books, chapters and sections, Other special item types
Website URL: http://eprints.lancs.ac.uk/
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: http://eprints.lancs.ac.uk/cgi/oai2
# Hochschulschriftenserver - Universität Frankfurt am Main
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations,
Unpublished reports and working papers
Website URL: http://publikationen.ub.uni-frankfurt.de/
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: http://publikationen.ub.uni-frankfurt.de/oai
# Universitat Politècnica de Valencia (UPV) – RiuNet
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations,
Learning Objects, Multimedia and audio, visual materials, Other special item
types
Website URL: http://riunet.upv.es/
Compatibility: OpenAIRE 2.0+ (DRIVER OA, EC funding)
OAI-PMH URL: https://riunet.upv.es/oai/driver,
_https://riunet.upv.es/oai/openaire_
Note that all these repositories make use of the OAI-PMH protocol (Open Archives Initiative Protocol for Metadata Harvesting), which allows the content to be properly found by means of the defined metadata.
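For illustration, metadata exposed through OAI-PMH can be harvested with a few lines of code; the sketch below uses the protocol's standard ListRecords verb and the oai_dc metadata prefix against the RiuNet endpoint listed above, and simply prints the Dublin Core titles of the first page of records.

```python
# Sketch of harvesting Dublin Core metadata via OAI-PMH.
import requests
import xml.etree.ElementTree as ET

BASE = "https://riunet.upv.es/oai/openaire"  # endpoint from the list above
resp = requests.get(BASE, params={"verb": "ListRecords",
                                  "metadataPrefix": "oai_dc"})
root = ET.fromstring(resp.content)

# Dublin Core element namespace (standard for oai_dc records).
DC = "{http://purl.org/dc/elements/1.1/}"
for title in root.iter(f"{DC}title"):
    print(title.text)
```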
These institutional repositories will be used to deposit the publications
generated by the institutions detailed above. Indeed, as commented in Section
4.1, the scientific papers published so far are available in the RiuNet
repository and in OpenAIRE through the following link:
_https://www.openaire.eu/search/project?projectId=corda__h2020::546a6950975d78f06a46bc53f2bf_
_c9ef_
Apart from these repositories, the TWEETHER project will also use the
centralised repository
ZENODO to ensure the maximum dissemination of the information generated in the
project
(research publications and data), as this repository is the one mainly
recommended by the EC’s OpenAIRE initiative in order to unite all the research
results arising from EC funded projects.
Indeed, ZENODO 5 is an easy-to-use and innovative service that enables
researchers, EU projects and research institutions to share and showcase
multidisciplinary research results (data and publications) that are not part
of existing institutional or subject-based repositories. Namely, ZENODO
enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly available uploads, in order to make content easily and uniquely citable, and this repository also makes use of the OAI-PMH protocol (Open Archives Initiative Protocol for Metadata Harvesting) to facilitate content search through the defined metadata. This metadata follows the schema defined in INVENIO 6 (a free software suite for running one's own digital library or document repository on the web) and is exported in several standard formats such as MARCXML, Dublin Core and DataCite Metadata Schema, according to the OpenAIRE Guidelines.
On the other hand, with ZENODO as the repository, the short- and long-term storage of the research data is secured, since the data are stored safely in the same cloud infrastructure as research data from CERN's Large Hadron Collider. Furthermore, ZENODO uses digital preservation strategies, storing multiple online replicas and backing up the files (data files and metadata are backed up on a nightly basis).
Therefore, this repository fulfils the main requirements imposed by the EC for
data sharing, archiving and preservation of the data generated in TWEETHER.
For this reason, a ZENODO community for TWEETHER documents has been created,
and can be accessed through the following link:
_https://zenodo.org/collection/user-tweether-project_
# DESCRIPTION OF DATA SETS TO BE GENERATED OR COLLECTED
This section provides an explanation of the different types of data sets to be
produced in TWEETHER, which has been identified at this stage of the project.
As the nature and extent of these data sets can be evolved during the project,
in this deliverable a new data set associated with the S-parameters of the
W-band chipsets has been identified and included in this section together with
the rest of the data sets described in the previous data management plan.
The descriptions of the different data sets, including their reference, file
format, the level of access, and metadata and repository to be used
(considerations described in Section 6 and 7), are given below.
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SP_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_SP_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the measured or simulated S-parameter results for
the TWT structure.
It will mainly consist of small-signal calculations of the cold simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Touchstone format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse these data, CST Software or MAGIC Software is necessary.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
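For readers unfamiliar with the Touchstone format named in the table above, the excerpt below is a minimal, made-up illustration of a 2-port (.s2p) file: the option line declares the frequency unit, parameter type (S), data format (magnitude/angle) and reference impedance, and is followed by one line per frequency point. The values are invented and do not represent TWEETHER measurements.

```
! Illustrative 2-port Touchstone (.s2p) excerpt -- values are made up.
! Option line: frequency in GHz, S-parameters, magnitude/angle, 50 ohm ref.
# GHz S MA R 50
! freq   S11mag S11ang  S21mag S21ang  S12mag S12ang  S22mag S22ang
92.0     0.050  -35.0   0.70   110.0   0.001   80.0   0.080  -60.0
95.0     0.060  -48.0   0.65    95.0   0.001   70.0   0.090  -75.0
```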
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_PS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_PS_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise results of the power levels at the relevant ports
of the TWT structure. They will include the DC bias conditions together with
the input and output power at all ports. The results will be either based on
measured values or obtained from simulations.
It will mainly consist of small-signal calculations of the hot simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
MDIF or XPA format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse these data, CST Software or MAGIC Software is necessary.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_CHIPSET_DS
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Semi-conductor Radio Chipset Datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set contains the datasheets of the III-V semiconductor products used by the two radios of the TWEETHER project.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_CHIPSET_SP
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
CHIPSET_SP_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the measured or simulated S-parameter results for
the OMMIC chipsets.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Touchstone format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository provided that this does not jeopardise future exploitation.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Whenever possible, this data set will be archived and preserved in ZENODO (See
Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System general architecture, network interfaces, system data sheet, sub-
assemblies datasheets, range diagrams, photos of equipment. General
information useful for potential users.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System Deployments
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System coverage capabilities. Deployment methods to optimize coverage,
frequency re-use process. Scenario graph. General information useful for
potential users.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_MM-A_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
W-band Millimetre Antennas
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Adaptation S parameters, bandwidth, radiating diagrams: co-polar & cross-
polar. Antennas datasheet: graphs and tables.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial description
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise a description of the wireless network architecture
including the hardware, interfaces and services that will be deployed at the
UPV campus and used for the field trial. In addition, it will provide
information about sites (number of sites and its location), the expected
objectives to be achieved and the envisaged scenarios for the system.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial long term KPI measurements
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the results of the measurement campaign carried out to evaluate the performance of the field trial deployed at the UPV campus, integrating the technology developed in TWEETHER.
It will include data obtained from the Network Monitoring System (PRTG software or similar), which collects KPIs from the network elements. Some examples of KPIs are throughput, RSSI (received signal strength indicator) and dropped packets. Those data will be publicly accessible through a URL.
This information will be interesting for potential users such as telecom operators.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_3
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial bandwidth tests
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise descriptive information of the bandwidth tests
used to evaluate the network at specific times. Those tests will employ a
traffic generator software allowing to send and receive traffic between hosts
comprising the network and providing a measurement of the maximum available
bandwidth and also latency and jitter values.
It will mainly consist of a doc-type document with details of the steps to be followed in this test and the results obtained, as well as examples of the scripts (or their description) used to obtain those results.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Word or PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To perform this test, the iperf tool (or similar) is required.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
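As a sketch of how such a bandwidth test might be scripted, the snippet below invokes the iperf3 client against a placeholder server and reads the received throughput from iperf3's JSON report; it is illustrative only, not the project's actual test procedure.

```python
# Sketch of one bandwidth test run, assuming iperf3 is installed on both
# hosts; -c selects the server to test against, -J requests JSON output.
import json
import subprocess

result = subprocess.run(["iperf3", "-c", "server.example.org", "-J"],
                        capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

# End-to-end throughput in bits per second on the receiving side.
print(report["end"]["sum_received"]["bits_per_second"])
```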
Apart from the data sets specified above, which will be made open, other data generated in TWEETHER, such as the detailed circuit specifications and realisation, and the terminal integration, will be kept confidential to avoid jeopardising future exploitation.
0806_TWEETHER_644678.md
# INTRODUCTION
In December 2013, the European Commission announced their commitment to open
data through the Pilot on Open Research Data, as part of the Horizon 2020
Research and Innovation Programme. The Pilot’s aim is to “improve and maximise
access to and re-use of research data generated by projects for the benefit of
society and the economy”.
In the frame of this Pilot on Open Research Data, results of publicly-funded
research should be disseminated more broadly and faster, for the benefit of
researchers, innovative industry and citizens.
On the one hand, Open Access not only accelerates the discovery process and eases the transfer of research results to the market (thus providing a return on public investment), but also avoids duplication of research efforts, leading to a better use of public resources and higher throughput. On the other hand, this Open Access policy also benefits the researchers themselves. Making research publicly available increases the visibility and scientific impact of the performed research, which translates into a significantly higher number of citations 1 as well as an increase in the potential for collaboration with other institutions in new projects, among others. Additionally, Open Access offers small and medium-sized enterprises (SMEs) access to the latest research for utilisation.
Under H2020, each beneficiary must ensure open access to all peer-reviewed
scientific publications relating to its results. These open access
requirements are based on a balanced support to both 'Green open access'
(immediate or delayed open access that is provided through self-archiving) and
'Gold open access' (immediate open access that is provided by a publisher).
Apart from open access to publications, projects must also aim to deposit the
research data needed to validate the results presented in the deposited
scientific publications, known as "underlying data". In order to effectively
supply this data, projects need to consider at an early stage how they are
going to manage and share the data they create or generate.
During the first months of the project, TWEETHER elaborated the first version of the Data Management Plan (DMP), reported in Deliverable D7.3, “Data management plan (version 1)”, which described how the scientific publications and research data generated during the project were going to be stored and made public. In particular, this DMP version addressed the following issues:
* What data will be collected / generated in the course of the project?
* What data will be exploited? What data will be shared/made open?
* What standards will be used / how will metadata be generated?
* How will data be curated / preserved, including after project completion?
Since the DMP is expected to mature during the project, an updated version was
reported in Deliverable D7.5 “Data management plan (version 2)”, where a
review of the data sets to be collected, processed or generated inside the
project was reported, including more information about the mechanisms used to
share or make the publications and the data open.
This deliverable, D7.11 “Final Data Management Plan”, describes the final DMP used in the project, including complete information on the format and expected data items to be collected from the demonstration and functional evaluation phase of the project: the field trial carried out at the Universitat Politècnica de València, Spain, in September 2018.
# TWEETHER PROJECT
The TWEETHER project targeted high capacity everywhere through the realisation of a W-band wireless system with a capacity and coverage of 10 Gbps/km² for the backhaul and access markets, considered by operators a key investment opportunity. Such a system, combined with the development of beyond state-of-the-art affordable millimetre wave devices, permits overcoming the economic obstacle that causes the digital divide and will pave the way towards the full deployment of small cells.
This approach merged, for the first time, novel approaches in vacuum electron devices, monolithic millimetre wave integrated circuits and networking paradigms to implement a novel transmitter to foster future wireless communication networks.
In particular, the TWEETHER project has developed a novel, compact, low-cost and high-yield Traveling Wave Tube (TWT) power amplifier producing 40 W output power. This TWT is capable of providing wideband operation and enough output power to distribute the millimetre wave frequency signal over a wide sector with a radius longer than 1 km.
In addition, an advanced, high-performance W-band transceiver chipset, enabling the low-power operation of the system, has been fabricated. More specifically, this chipset includes various GaAs-based monolithic microwave integrated circuits (MMICs) comprising elements such as power amplifiers, down- and up-converters, a low noise amplifier and an 8x frequency multiplier.
These novel W-band elements have been integrated using advanced micro-electronics and micromechanics to achieve compact front-end modules, assembled and packaged with interfaces and antennas for a field test deployed at the campus of the _Universitat Politecnica de Valencia_ to demonstrate adequate operation of the breakthrough TWEETHER system in the millimetre wave wireless network field.
Therefore, TWEETHER addresses a highly innovative approach, its most relevant audience being the scientific community working in millimetre wave technology and wireless systems. In addition, due to the strong impact of the system, other expected audiences are the industrial community, standardization bodies working on the W-band and on the definition of Multimedia Wireless Systems (MWS), and potential users such as telecom operators. Defining an appropriate open data strategy will, on one hand, help increase the visibility of the performed research inside the scientific community and the industrial ecosystem and, on the other hand, ensure proper management of the intellectual property.
# CONSIDERATIONS FOR PUBLIC INFORMATION
The H2020 open access policy seeks to ensure that the information generated by
the projects participating in the programme is made publicly available.
However, as stated in the EC guidelines on Data Management in H2020 2 , “ _As
an exception, the beneficiaries do not have to ensure open access to specific
parts of their research data if the achievement of the action's main
objective, as described in Annex I, would be jeopardised by making those
specific parts of the research data openly accessible. In this case, the data
management plan must contain the reasons for not giving access_ .”
In agreement with this, the TWEETHER consortium can decide what information is
made public according to aspects such as potential conflicts with
commercialization, IPR protection of the generated knowledge (by patents or
other forms of protection), market position risk for the companies in the
consortium, or any other risk that would impede achieving the project
objectives and expected outcome.
The TWEETHER project is pioneering research of key importance to the
electronic and telecommunication industry. Effective exploitation of the
research results depends on the proper management of intellectual property.
Therefore, the TWEETHER consortium follows the following strategy (Figure 1):
if the research findings result in a ground-breaking innovation, the members
of the consortium will consider two forms of protection: withholding the data
for internal use, or applying for a patent in order to commercially exploit
the invention and obtain financial gain in return. In the latter case,
publications will be delayed until the patent filing. On the contrary, if the
technology developments are not going to be withheld or patented, the results
will be published for knowledge-sharing purposes.
[Figure: flowchart in which research results pass through a selection step and
are either protected (withheld for internal use, or patented, with scientific
publication after the patent filing) or disseminated and shared through open
access publication and deposit in a repository of publications and research
data, as governed by the Dissemination Plan and the Data Management Plan.]
**Figure 1. Process for determining which information is to be made public
(from EC’s document “Guidelines on Open Access to Scientific Publications and
Research Data in Horizon 2020 – v1.0 – 11 December 2013”)**
# OPEN ACCESS TO PUBLICATIONS
The first aspect to be considered in the DMP is related to the open access
(OA) to the publications generated within the TWEETHER project, meaning that
any peer-reviewed scientific publication made within the context of the
project will be available online to any user at no charge. This aspect is
mandatory for new projects in the Horizon 2020 programme (article 29.2 of the
Model Grant Agreement).
The two ways considered by the EC to comply with this requirement are:
* Self-archiving / ‘green’ OA: In this option, the beneficiaries deposit the final peer-reviewed manuscript in a repository of their choice. In this case, they must ensure open access to the publication within a maximum of six months (twelve months for publications in the social sciences and humanities).
* Open access publishing / ‘gold’ OA: In this option, researchers publish their results in open access journals, or in journals that sell subscriptions and also offer the possibility of making individual articles openly accessible via the payment of article processing charges (APCs) (hybrid journals). Again, open access via the chosen repository must be ensured upon publication.
Publications arising from the TWEETHER project will be deposited in a
repository (‘green’ OA) and, whenever possible, the ‘gold’ OA option will be
used in order to provide the widest dissemination of the published results.
With respect to the ‘green’ OA option, it should be mentioned that most
publishers allow authors to deposit a copy of the article in a repository,
sometimes with a period of restricted access (embargo) 3 .
In Horizon 2020, the embargo period imposed by the publisher must not exceed
six months (twelve months for the social sciences and humanities). This
embargo period will therefore be taken into account by the TWEETHER consortium
when choosing the open access modality for fulfilling the open access
obligations established by the EC.
Additionally, according to the EC recommendation, whenever possible the
TWEETHER consortium will retain the ownership of the copyright for its work
through the use of a ‘License to Publish’, which is a publishing agreement
between author and publisher. With this agreement, authors can retain
copyright and the right to deposit the article in an Open Access repository,
while providing the publisher with the rights necessary to publish the
article. Additionally, to ensure that others can be granted further rights to
use and reuse the work, the TWEETHER consortium may ask the publisher to
release the work under a Creative Commons license, preferably CC0 or CC-BY.
Besides these two aspects (retaining the ownership of the publication and the
embargo period), the TWEETHER consortium has considered the relevance of the
journals where results are published, measured by means of the “impact factor”
(IF). Table 1 below provides a list of the journals targeted by TWEETHER
partners and relevant information about the open access policy of IEEE.
**Table 1. Publications from TWEETHER consortium and publisher OA policy.**
<table>
<tr>
<th>
**Publisher**
</th>
<th>
**Journal**
</th>
<th>
**Impact factor**
</th>
<th>
**Author charges**
**(for OA)**
</th>
<th>
**Comments about open access**
</th> </tr>
<tr>
<td rowspan="4">
Institute of
Electrical and
Electronics
Engineers
(IEEE)
</td>
<td>
IEEE Transactions on Vehicular Technology
</td>
<td>
4.32
</td>
<td rowspan="4">
$1,950
</td>
<td rowspan="4">
A paid open access option is available for these journals. If funding rules
apply, authors may post the author's post-print version in the funder's
designated repository. The publisher's version/PDF cannot be used.
</td> </tr>
<tr>
<td>
IEEE Transactions on
Wireless Communications
</td>
<td>
5.88
</td> </tr>
<tr>
<td>
IEEE
Electron Device Letters
</td>
<td>
2.528
</td> </tr>
<tr>
<td>
IEEE Transactions on
Microwave Theory and
Techniques
</td>
<td>
3.176
</td> </tr> </table>
As shown in Table 1, IEEE journals allow an open access modality, and the
author’s post-print version can be deposited in a repository. This is in line
with the Horizon 2020 requirements.
IEEE policy on Open Access establishes that, upon submission to the
corresponding IEEE publication, authors may share or post their submitted
version of the article (also known as the preprint or author version) in the
following ways:
* On the author’s personal website or their employer’s website
* On institutional or funder websites if required
* For the author’s own classroom use
* On Scholarly Collaboration Networks (SCNs) that are signatories to the International Association of Scientific, Technical, and Medical Publishers’ Sharing Principles
In this case, the following text should be included on the first page of the
submitted article, posted in any of the above outlets: “This work has been
submitted to the IEEE for possible publication. Copyright may be transferred
without notice, after which this version may no longer be accessible.”
Once the article is accepted by IEEE, if the paper was previously posted
(preprint) on the **author’s personal website, the author’s employer’s
website, arXiv.org,** or the funder’s repository, the submitted version should
be replaced with the accepted version, adding the IEEE copyright notice
(© 20XX IEEE). When the article is published, the posted version should be
updated with a full citation to the original IEEE publication, including the
DOI. For the funder’s repository, a 24-month embargo must be enforced. The
posted article must be removed from any other third-party site.
If the article is not published under an open access license (OA fee) and uses
the standard IEEE Copyright Form, the author may not post the final published
article online, but may:
* Share copies of the final published article for individual personal use
* Use the final published article in their own classroom with permission from IEEE
* Use it in their own thesis or dissertation, provided that certain requirements are met
Note that any third-party reuse requires permission from the publisher, IEEE.
For articles that are published open access under the IEEE Open Access
Publishing Agreement (OAPA), the author may post the final published article
on:
* Their personal website and their employer’s website
* Institutional or funder websites as required
Third-party reuse still requires permission from IEEE.
In any case, all publications will acknowledge the project funding. This
acknowledgment must also be included in the metadata of the generated
information, since this maximises the discoverability of publications and
ensures the acknowledgment of EU funding. The terms to be included in the
metadata are:
* "European Union (EU)" and "Horizon 2020"
* the name of the action, acronym and the grant number
* the publication date, length of embargo period if applicable, and a persistent identifier (e.g DOI, Handle)
Finally, in the Model Grant Agreement, “scientific publications” mean
primarily journal articles. Whenever possible, TWEETHER will provide access to
other types of scientific publications such as conference papers,
presentations, public deliverables, etc.
## **Access to peer-reviewed scientific publications**
An important objective of TWEETHER is the dissemination of its research
results to the scientific community, targeting the scientific journals,
conferences and workshops with the highest impact. Indeed, several
peer-reviewed scientific papers have been presented so far at relevant
international conferences. These publications are or will be made available
online, as required by the EC:
## Journal papers
* C. Paoloni, F. Magne, F. André, J. Willebois, Q.T. Le, X. Begaud, G. Ulisse, V. Krozer, R. Letizia, R. Llorente, M. Marilier, A. Ramirez, R. Zimmerman, “W-band Point to Multipoint Transmission Hub and Terminals for High Capacity Wireless Networks”, submitted on 15th October to IEEE Transactions on Microwave Theory and Techniques, special issue on 5G Hardware and System Technologies. In review, to be published, if accepted, in June 2019.
In review. IEEE Open Access fee will be paid upon acceptance.
* G. Ulisse and V. Krozer, "W-Band Traveling Wave Tube Amplifier Based on Planar Slow Wave Structure", IEEE Electron Device Letters, vol. 38, no. 1, January 2017.
Open Access: _https://ieeexplore.ieee.org/document/7742417_
* J. Shi, L. L. Yang, Q. Ni, "Novel Intercell Interference Mitigation Algorithms for Multicell OFDMA Systems with Limited Base Station Cooperation," in publication in IEEE Transactions on Vehicular Technology, vol. PP, no.99, pp.1-16, 2016.
Open Access: _https://eprints.soton.ac.uk/391331/1/tvt-yang-2542182-proof.pdf_
* J. Shi, Lu Lv, Q. Ni, H. Pervaiz, and C. Paoloni., “Modeling and Analysis of Point-toMultipoint Millimeter-Wave Backhaul Networks” under final revision round in IEEE Transactions on Wireless Communications.
Open access: http://eprints.lancs.ac.uk/128927/1/FINAL_VERSION.pdf.
## Conference papers
1. Shrestha, J. Moll, A. Raemer, M. Hrobak, V. Krozer, "20 GHz Clock Frequency ROM-Less Direct Digital Synthesizer Comprising Unique Phase Control Unit in 0.25 μm SiGe Technology", European Microwave Conference (EuMC), Madrid, Spain, September 2018.
2. C. Paoloni, F. Magne, F. Andre, J. Willebois, Q.T. Le, X. Begaud, G. Ulisse, V. Krozer, R. Letizia, M. Marilier, A. Ramirez, R. Zimmerman, "Transmission Hub and Terminals for Point to Multipoint W-band TWEETHER System", European Conference on Networks and Communications 2018 (EUCNC 2018), Ljubljana, Slovenia, June 2018.
Open Access: _http://eprints.lancs.ac.uk/126591/1/Trasmisson_Hub_.pdf_
3. M. Mbeutcha, G. Ulisse, V. Krozer "Millimeter-Wave Imaging Radar System Design Based on Detailed System Radar Simulation Tool ", 22nd International Microwave and Radar Conference (MIKON), Poznan, Poland, May 2018.
4. F. Andre, T. L. Quang, G. Ulisse, V. Krozer, R. Letizia, R. Zimmerman, C. Paoloni, "W-band TWT for High Capacity Transmission Hub for Small Cell Backhaul", 2018 IEEE International Vacuum Electronics Conference (IVEC), Monterey, USA, April 2018.
5. S. Mathisen, R. Basu, L.R.Billa, J. Gates, N.P. Rennison, R. Letizia, C. Paoloni, “Low Cost Fabrication for W-band Slow Wave Structures for Wireless Communication Travelling Wave Tubes”, IVEC2018, Monterey, USA, April 2018.
Open Access:
_http://eprints.lancs.ac.uk/125214/1/IVEC2018_W_band_SWS_Paper_Final.pdf_
6. F. Magne, A. Ramirez, C. Paoloni, "Millimeter Wave Point to Multipoint for Affordable High Capacity Backhaul of Dense Cell Networks", Workshop on Economics and Adoption of Millimeter Wave Technology in Future Networks of the IEEE Wireless Communications and Networking Conference (IEEE WCNC), Barcelona, Spain, April 2018.
Open Access: Link will be available
7. G. Ulisse, V. Krozer, "Planar slow wave structures for millimeter-wave vacuum electron devices", 47th European Microwave Conference (EuMC), Nuremberg, Germany, October 2017.
8. C. Paoloni, F. Magne, F. André, X. Begaud, V. Krozer, M. Marilier, A. Ramírez, J.R. Ruiz, R. Vilar, R. Zimmerman, "TWEETHER Future Generation W-band Backhaul and Access Network Technology", 26th European Conference on Networks and Communications (EuCNC 2017), Oulu, Finland, June 2017.
Open Access: _http://eprints.lancs.ac.uk/86088/1/TWEETHER_Future_Generation_W_band_Backhaul_and_Access_NetworkTechnology.pdf_
9. G. Ulisse, V. Krozer, "Investigation of a Planar Metamaterial Slow Wave Structure for Traveling Wave Tube Applications", 18th International Vacuum Electronics Conference (IVEC 2017), London, United Kingdom, April 2017.
10. F. André, S. Kohler, V. Krozer, Q.T. Le, R. Letizia, C. Paoloni, A. Sabaawi, G. Ulisse, R. Zimmerman, "Fabrication of W-band TWT for 5G small cells backhaul", 18th International Vacuum Electronics Conference (IVEC 2017), London, United Kingdom, April 2017.
Open Access: _http://eprints.lancs.ac.uk/86085/1/Fabrication_of_W_band_TWT_for_5g_small_cells_backhaul.pdf_
11. C. Paoloni, F. André, V. Krozer, R. Zimmermann, Q.T. Le, R. Letizia, S. Kohler, A. Sabaawi, G. Ulisse, “Folded wave guide TWT for 92 – 95 GHz band outdoor wireless frontend”, Workshop on Microwave Technology and Techniques (MTT), ESA/ESTEC, The Netherlands, April 2017.
Open Access: _http://eprints.lancs.ac.uk/89688/1/Draft_ESA_final.pdf_
12. J.E. González, X. Begaud, B. Huyart, Q. T. Le, R. Zimmermann, F. Magne ‘Millimeter Wave Antennas for Backhaul Networks’, 11th European Conference on Antennas and Propagation (EuCAP 2017), Paris, France, March 2017.
13. C. Paoloni, F. Magne, F. André, X. Begaud, J. da Silva, V. Krozer, M. Marilier, A. Ramírez, R. Vilar, R. Zimmerman, “TWEETHER project for W-band wireless networks”, 9th IEEE UK-Europe-China Workshop on mm-Waves and THz Technologies (UCMMT2016), Qingdao, China, September 2016.
Open Access: _http://eprints.lancs.ac.uk/81351/4/TWEETHER_UCMMT2016_new.pdf_
14. Jia Shi, Qiang Ni, C. Paoloni and F. Magne, “Efficient Interference Mitigation in mmWave Backhaul Network for High Data Rate 5G Wireless Communications”, 12th International Conference on Wireless Communications, Networking and Mobile Computing (WICOM'2016), Xi'an, China, September 2016.
Open Access: _http://eprints.lancs.ac.uk/83549/1/WiCOM_paper.pdf_
15. C. Paoloni, F. André, S. Kohler, V. Krozer, Q.T. Le, R. Letizia, A. Sabaawi, G. Ulisse, R. Zimmerman, "A Traveling Wave Tube for 92 – 95 GHz band wireless applications", 41st International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz 2016), Copenhagen, Denmark, September 2016.
Open Access: Link will be available
16. C. Paoloni, F. Magne, F. André, V. Krozer, M. Marilier, A. Ramírez, R. Vilar, R. Zimmerman, “W-band point to multipoint system for small cells backhaul”, in the Special Session “Millimeter-waves as a key enabling technology for 5G: Status of the pre-development activities and way forward”, 25th European Conference on Networks and Communications (EuCNC 2016), Athens, Greece, June 2016.
Open Access: Link will be available
17. C. Paoloni, F. Magne, F. André, V. Krozer, R. Letizia, M. Marilier, A. Ramírez, M. Rocchi, R. Vilar, R. Zimmerman, “Millimeter Wave Wireless System Based on Point to Multipoint Transmissions”, 25th European Conference on Networks and Communications (EuCNC 2016), Athens, Greece, June 2016.
Open Access: _http://eprints.lancs.ac.uk/85850/1/07561014.pdf_
18. C. Paoloni, R. Letizia, F. André, S. Kohler, F. Magne, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, G. Ulisse, A. Ramirez, R. Vilar, "W-band TWTs for New Generation High Capacity Wireless Networks", 17th International Vacuum Electronics Conference (IVEC 2016), Monterey, US, April 2016.
Open Access: _http://eprints.lancs.ac.uk/84542/1/p_521.pdf_
19. C. Paoloni, “W-band access and backhaul for high capacity wireless networks”, Layer 123 Packet Microwave & Mobile Backhaul 2015, London, United Kingdom, September 2015.
20. C. Paoloni, R. Letizia, Q. Ni, F. André, I. Burciu, F. Magne, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, A. Ramirez, R. Vilar, “Scenarios and Use Cases in Tweether: W-band for Internet Everywhere”, 24th European Conference on Networks and Communications, Paris, France, June 2015.
Open Access: _https://riunet.upv.es/bitstream/handle/10251/62274/Vilar%20Mateo,%20R.%20-%20Scenario%20and%20use%20cases%20in%20.pdf?sequence=4_
21. C. Paoloni, R. Letizia, F. Napoli, Q. Ni, A. Rennie, F. André, K. Pham, F. Magne, I. Burciu, M. Rocchi, M. Marilier, R. Zimmerman, V. Krozer, A. Ramirez, R. Vilar, "Horizon 2020 TWEETHER project for W-band high data rate communications", 16th International Vacuum Electronics Conference (IVEC 2015), Beijing, China, April 2015.
Open Access: _https://doi.org/10.1109/IVEC.2015.7223770_
Apart from the open access to the scientific papers detailed above, TWEETHER
has provided access to other types of documents, such as public deliverables
and presentations given at scientific and industrial workshops, through the
project website ( _https://tweether.eu/public-deliverables_ ), where the full
text is available for the publications marked as “Public” in the Grant
Agreement. Moreover, all public information and the associated datasets have
been made available in the ZENODO repository set up at the project start
( _https://zenodo.org/search?page=1&size=20&q=tweether_ ).
In addition, a workshop on Millimetre-wave Technologies for High-Speed
Broadband Wireless Networks was organized in the frame of TWEETHER. The
presentations of this workshop are available on the project website:
_https://tweether.eu/workshop/agenda.php_
# RESEARCH DATA
The scientific and technical results of the TWEETHER project are expected to
be of maximum interest to the scientific community. Throughout the duration of
the project, once the relevant protections (e.g., IPR) are secured, TWEETHER
partners may disseminate (subject to their legitimate interests) the obtained
results and knowledge to the relevant scientific communities through
contributions to journals and international conferences in the field of
wireless communications and millimetre-wave technology.
Apart from the open access to publication explained in the previous section,
the Open Research Data Pilot also applies to two types of data 4 :
* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Other data, including associated metadata, as specified and within the deadlines laid down in a data management plan, to be developed by the project. In other words, beneficiaries will be able to choose which data, additionally to the data underlying publications, they make available in open access mode.
According to this requirement, the underlying data related to the scientific
publications will be made publicly available (see Section 8). This will allow
other researchers to use that information to validate the results, making it a
starting point for their own investigations, as expected by the EC through its
open access policy. However, in order to remain aligned with the protection
policy and strategy described above, the data sets will be analysed on a
case-by-case basis before being made open, so as not to jeopardize
exploitation or commercialization purposes. As a result, the publication of
research data will mainly be carried out by the partners involved in the
scientific development of the project (i.e., academic and research partners),
while the partners focused on the development of the technology will limit the
publication of information for strategic/organizational reasons (commercial
exploitation).
In the first version of the DMP, the project consortium provided an
explanation of the different types of data sets to be generated in TWEETHER.
Examples of these data are the specifications of the TWEETHER system and the
services it supports, the datasheets and performance of the technological
developments of the project, and the field trial results with the KPIs (Key
Performance Indicators) used to evaluate system performance, among others.
As the nature and extent of these data sets can evolve during the project, the
objective of this deliverable is to review the data sets identified so far to
determine whether they should be modified/updated or whether new data sets
should be included. In particular, a data set related to the measurements on
the W-band chipsets has been included (see Section 8). The rest of the data
sets are still relevant.
## **Access to research data**
According to the requirement of providing access to the data needed to
validate the results presented in the scientific publications (i.e.,
underlying data), key research results have been made available through the
Zenodo portal ( _www.zenodo.org_ ) and, additionally, through the Lancaster
University institutional repository
(http://www.research.lancs.ac.uk/portal/) in some cases.
Zenodo is a result of the OpenAIRE project, commissioned by the EC to support
its Open Data policy by providing a catch-all repository for EC-funded
research. It was launched in May 2013.
The following key research data has been made publicly available in the
repository:
• Results of the W-band TWT gain and output power simulated using the MAGIC 3D
particle-in-cell simulator. These results were presented in the IVEC 2016
paper (Deliverable D7.10).
4 _EC document: “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020” – version 1.0 – 11 December 2013_
<table>
<tr>
<th>
**Gain and output power of Traveling Wave Tube at W-band**
</th> </tr>
<tr>
<td>
_Claudio Paoloni, Rosa Letizia, Frédéric André, Sophie Kohler, François Magne, Marc Rocchi, Marc Marilier, Ralph Zimmerman, Viktor Krozer, Giacomo Ulisse, Antonio Ramírez, Ruth Vilar_
</td> </tr>
<tr>
<td>
Results of the W-band TWT gain and output power simulated using the MAGIC 3D
particle-in-cell simulator. These results correspond to Figure 2 in the paper
"W-band TWTs for New Generation High Capacity Wireless Networks", 17th
International Vacuum Electronics Conference.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/57266#.W9SqKqcrx0s_
</td> </tr>
<tr>
<td>
DOI link: _http://doi.org/10.5281/zenodo.57266_
</td> </tr> </table>
* Underlying data corresponding to the paper “TWEETHER Future Generation W-band Backhaul and Access Network Technology” presented in the EUCNC 2017.
<table>
<tr>
<th>
**TWEETHER Future Generation W-band Backhaul and Access Network Technology**
</th> </tr>
<tr>
<td>
_Claudio Paoloni, François Magne, Frédéric André, Xavier Begaud, Viktor
Krozer, Marc Marilier, Antonio Ramirez, José Raimundo Ruiz Carrasco, Ruth
Vilar, Ralph Zimmerman_
</td> </tr>
<tr>
<td>
Datasets from Figures 6(a) and 6(b), showing the lens antenna simulated with a
3D simulator in the paper presented at EUCNC 2017. Data from Figure 8 are also
included; these data were measured with a Vector Network Analyser on chips on
wafer.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1042528#.W9SqCqcrx0s_
</td> </tr>
<tr>
<td>
DOI link: _http://doi.org/10.5281/zenodo.1042528_
</td> </tr> </table>
* Results of the W-band TWT datasets reporting the dispersion of the folded waveguide, beam line and output power of the paper "Fabrication of W-band TWT for 5G small cells backhaul" presented in IVEC 2017.
<table>
<tr>
<th>
**Fabrication of W-band TWT for 5G small cells backhaul**
</th> </tr>
<tr>
<td>
_Frédéric André, Sophie Kohler, Viktor Krozer, Quang Trung Le, Rosa Letizia, Claudio Paoloni, Ahmed Sabaawi, Giacomo Ulisse and Ralph Zimmerman_
</td> </tr>
<tr>
<td>
Datasets of the dispersion of the folded waveguide and beam line (Fig. 1) and
the output power (Fig. 2) of the paper "Fabrication of W-band TWT for 5G small
cells backhaul" in IVEC 2017.
</td> </tr>
<tr>
<td>
Both MAGIC 3D and CST Particle Studio were used for particle-in-cell
simulations of the whole amplifier. Both simulators confirmed more than 40 W
over the full 92 – 95 GHz band. The simulations included the couplers and the
RF windows. Specific simulations for the design of the electron optics, the
windows and the collector were performed.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1623601#.W_6emi1Dl5Y_
</td> </tr>
<tr>
<td>
DOI link: _https://doi.org/10.5281/zenodo.1623601_
</td> </tr> </table>
* Underlying data corresponding to the paper: "Folded wave guide TWT for 92 – 95 GHz band outdoor wireless frontend”, ESA/ESTEC, April 2017.
<table>
<tr>
<th>
**Folded wave guide TWT for 92 – 95 GHz band outdoor wireless frontend**
</th> </tr>
<tr>
<td>
_Claudio Paoloni, Frédéric André, Sophie Kohler, Viktor Krozer, Quang Trung
Le, Rosa Letizia, Ahmed Sabaawi, Giacomo Ulisse, Ralph Zimmerman_
</td> </tr>
<tr>
<td>
The dataset includes the data shown in Figure 3 of the conference paper
“Folded wave guide TWT for 92 – 95 GHz band outdoor wireless frontend”,
presented at the Workshop on Microwave Technology and Techniques (MTT),
ESA/ESTEC, The Netherlands, April 2017.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1628635#.W_6weC1Dl5Y_
</td> </tr>
<tr>
<td>
DOI link: _https://doi.org/10.5281/zenodo.1628635_
</td> </tr> </table>
* Underlying data corresponding to the paper: "W-Band Traveling Wave Tube Amplifier Based on Planar Slow Wave Structure", IEEE Electron Device Letters, January 2017.
<table>
<tr>
<th>
**W-Band Traveling Wave Tube Amplifier Based on Planar Slow Wave Structure**
</th> </tr>
<tr>
<td>
_Giacomo Ulisse, Viktor Krozer_
</td> </tr>
<tr>
<td>
Underlying data corresponding to Figures 2a, 2b, and Figure 5 in the journal
paper "W-Band Traveling Wave Tube Amplifier Based on Planar Slow Wave
Structure", IEEE Electron Device Letters, vol. 38, no. 1, January 2017.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1631397#.W_6x8C1Dl5Y_
</td> </tr>
<tr>
<td>
DOI link: _https://doi.org/10.5281/zenodo.1631397_
</td> </tr> </table>
* Underlying data corresponding to the paper: "Millimeter Wave Point to Multipoint for Affordable High Capacity Backhaul of Dense Cell Networks" (IEEE WCNC 2018).
<table>
<tr>
<th>
**Millimeter Wave Point to Multipoint for Affordable High Capacity Backhaul of
Dense Cell Networks**
</th> </tr>
<tr>
<td>
_Francois Magne, Antonio Ramirez, Claudio Paoloni_
</td> </tr>
<tr>
<td>
Datasets corresponding to underlying data shown in Figure 3, Figure 6, Figure
7, Figure 8 and Figure 9 in the paper “Millimeter Wave Point to Multipoint for
Affordable High Capacity Backhaul of Dense Cell Networks", Workshop on
Economics and Adoption of Millimeter Wave Technology in Future Networks of the
IEEE Wireless Communications and Networking Conference (IEEE WCNC), Barcelona,
Spain, April 2018.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1635593#.W_6ymi1Dl5Y_
</td> </tr>
<tr>
<td>
DOI link: _https://doi.org/10.5281/zenodo.1635593_
</td> </tr> </table>
* Underlying data corresponding to the paper: "Planar slow wave structures for millimeter-wave vacuum electron devices", 47th European Microwave Conference (EuMC).
<table>
<tr>
<th>
**Planar slow wave structures for millimeter-wave vacuum electron devices**
</th> </tr>
<tr>
<td>
_Giacomo Ulisse, Viktor Krozer_
</td> </tr>
<tr>
<td>
Datasets corresponding to Figure 2, Figure 3, Figure 5 and Figure 6 from the
conference paper "Planar slow wave structures for millimeter-wave vacuum
electron devices", 47th European Microwave Conference (EuMC), Nuremberg,
Germany, October 2017.
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1630703#.W_6gLi1Dl5Y_
</td> </tr>
<tr>
<td>
DOI link: _https://doi.org/10.5281/zenodo.1630703_
</td> </tr> </table>
* Results from the final field trial of September 2018, implemented on the campus of the Universitat Politècnica de València. These published results include detailed performance data that **is published with restrictions** , i.e., each data access request must be granted by the TWEETHER Coordinator. The published information includes the data collected from the following IP addresses (from Deliverable D6.6, “Performance evaluation in the small-scale field trial”), gathered during the period from September 29th to October 3rd:
<table>
<tr>
<th>
**IP address**
</th>
<th>
**Device**
</th> </tr>
<tr>
<td>
10.128.4.211
</td>
<td>
MK-MASTER-1
</td> </tr>
<tr>
<td>
10.128.4.212
</td>
<td>
MK-SLAVE-1
</td> </tr>
<tr>
<td>
10.128.4.221
</td>
<td>
MK-MASTER-2
</td> </tr>
<tr>
<td>
10.128.4.222
</td>
<td>
MK-SLAVE-2
</td> </tr>
<tr>
<td>
10.128.4.231
</td>
<td>
MK-MASTER-3
</td> </tr>
<tr>
<td>
10.128.4.232
</td>
<td>
MK-SLAVE-3
</td> </tr> </table>
Each file includes the daily records of one or several parameters collected
every 60 seconds.
<table>
<tr>
<th>
**Parameter in the filename**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
RSSI60
</td>
<td>
Every minute, the 60 values for the RSSI of the previous minute are registered
in this file in order to be processed.
</td> </tr>
<tr>
<td>
SNR60
</td>
<td>
Every minute, the 60 values for the SNR of the previous minute are registered
in this file in order to be processed.
</td> </tr>
<tr>
<td>
RSSIandSNR
</td>
<td>
Minimum, Maximum and Mean values for RSSI and SNR calculated from RSSI60 and
SNR60
</td> </tr>
<tr>
<td>
RXCCQ
</td>
<td>
Client Connection Quality of the WLAN interface. An indicator of the
efficiency of the wireless transmission; 100% would indicate that no frames
are lost.
</td> </tr> </table>
Example: file _2018-09-30_10.128.4.211_RSSIandSNRgood.csv_
time;RSSImean;RSSImin;RSSImax;SNRmean;SNRmin;SNRmax;
00:00:04;-63.96;-64;-63;41.04;41;42;
00:01:03;-63.26;-64;-63;41.72;41;42;
00:02:04;-58.05;-64;-55;46.95;41;50;
00:03:04;-63.94;-64;-63;41.06;41;42;
00:04:04;-63.93;-64;-63;41.07;41;42;
00:05:04;-63.98;-64;-63;41.02;41;42;
00:06:04;-63.96;-64;-63;41.04;41;42;
00:07:04;-63.94;-64;-63;41.06;41;42;
00:08:04;-63.96;-64;-63;41.06;41;42;
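For illustration, the following is a minimal Python sketch for parsing one of
these daily files (assuming the semicolon-separated layout shown above; the
file name is taken from the example, and error handling is omitted):

```python
import csv

# Minimal sketch: parse one daily RSSI/SNR file from the field trial.
# Assumes the semicolon-separated layout shown above; the trailing ';'
# on each row simply yields an empty last column, which is ignored here.
def summarise_snr(path):
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter=";"):
            values.append(float(row["SNRmean"]))
    return min(values), max(values), sum(values) / len(values)

snr_min, snr_max, snr_mean = summarise_snr(
    "2018-09-30_10.128.4.211_RSSIandSNRgood.csv")
print(f"SNR mean over the day: {snr_mean:.2f} dB "
      f"(min {snr_min}, max {snr_max})")
```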
<table>
<tr>
<th>
**H2020 TWEETHER Field Trial Results (Data files)**
</th> </tr>
<tr>
<td>
_Antonio Ramirez_
</td> </tr>
<tr>
<td>
W-band transmission performance data collected from September 29th to
October 3rd, 2018.
</td> </tr>
<tr>
<td>
ZIP file containing the CSV files of the different parameters captured during
the TWEETHER Field Trial (21/09/2018 to 04/10/2018).
</td> </tr>
<tr>
<td>
Zenodo link: _https://zenodo.org/record/1478518#.W-FtG9VKiM8_ (October 2018)
</td> </tr>
<tr>
<td>
DOI link: _http://doi.org/10.5281/zenodo.1478518_
</td> </tr> </table>
# METADATA
Metadata refers to “data about data”, i.e., the information that describes the
published data with sufficient context or instructions to be intelligible to
other users. Metadata must allow the generated information to be properly
organized, searched and accessed, and can be used to identify and locate the
data via a web browser or web-based catalogue.
Two types of metadata will be considered within the frame of the TWEETHER
project: that corresponding to the project publications, and that
corresponding to the published research data.
With respect to the metadata related to scientific publications, as described
in Section 4, it includes the title, the authors, the publication date, the
funding institution (EU H2020), the grant number, a persistent identifier
(e.g., DOI, Handle), etc. Figure 2 shows an example of the metadata used for
the scientific paper presented at EuCNC2015.
**Figure 2. Metadata used for the scientific paper presented at the
EuCNC2015**
In the context of data management, metadata form a subset of the data
documentation, explaining the purpose, origin, description, time reference,
creator, access conditions and terms of use of a data collection. The metadata
that best describe the data depend on their nature. For the research data
generated in TWEETHER, it is difficult to establish a global criterion for all
data, since the nature of the initially considered data sets differs; the
metadata will therefore be based on a generalised metadata schema such as the
one used in ZENODO 4 , which includes elements such as:
* Title: free text
* Creator: Last name, first name
* Date
* Contributor: It can provide information referred to the EU funding and to the TWEETHER project itself; mainly, the terms "European Union (EU)" and "Horizon 2020", as well as the name of the action, acronym and the grant number
* Subject: Choice of keywords and classifications
* Description: Text explaining the content of the data set and other contextual information needed for the correct interpretation of the data.
* Format: Details of the file format
* Resource Type: data set, image, audio, etc.
* Identifier: DOI
* Access rights: closed access, embargoed access, restricted access, open access.
Additionally, a readme.txt file can be used as an established way of
accounting for all the files and folders comprising the data set and of
explaining how the files relate to each other, what format they are in,
whether particular files are intended to replace other files, etc. A minimal
illustrative metadata record is sketched after Figure 3.
Based on the comments presented above, Figure 3 shows an example of metadata
used in ZENODO for the data uploaded to this platform.
**Figure 3. Metadata used in ZENODO for data uploaded to this platform**
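To make the schema above concrete, the following is a purely illustrative
sketch of such a record, expressed as a Python dictionary serialised to JSON.
The field names follow the generalised schema listed above; all values,
including the grant number placeholder and the DOI, are hypothetical:

```python
import json

# Hypothetical metadata record following the generalised ZENODO-style
# schema listed above; every value below is an example, not actual
# project metadata (grant number and DOI are placeholders).
record = {
    "title": "W-band TWT gain and output power (example data set)",
    "creator": ["Paoloni, Claudio", "Letizia, Rosa"],
    "date": "2016-07-12",
    "contributor": "European Union (EU), Horizon 2020, TWEETHER project, "
                   "grant agreement No XXXXXX (placeholder)",
    "subject": ["millimetre wave", "traveling wave tube", "W-band"],
    "description": "Simulated gain and output power of the W-band TWT.",
    "format": "CSV",
    "resource_type": "dataset",
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "access_rights": "open access",
}
print(json.dumps(record, indent=2))
```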
# DATA SHARING, ARCHIVING AND PRESERVATION
A repository is the mechanism used by the project consortium to make the
project results (i.e., publications and scientific data) publicly available
and free of charge for any user. Accordingly, several options are
considered/suggested by the EC in the frame of the Horizon 2020 programme for
this purpose:
* For depositing scientific publications:
* Institutional repositories of the research institutions (e.g., RiuNet at UPV)
* Subject-based/thematic repositories
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)
* For depositing generated research data:
* A research data repository which allows third parties to access, mine, exploit, reproduce and disseminate free of charge
* Centralised repository (e.g., Zenodo repository set up by the OpenAIRE project)
The academic institutions participating in TWEETHER have available appropriate
repositories which in fact are linked to OpenAIRE
(https://www.openaire.eu/participate/deposit/idrepos):
# • Lancaster University - Lancaster E-Prints
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations, Books, chapters and sections, Other special item types
Website URL: _http://eprints.lancs.ac.uk/_
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: _http://eprints.lancs.ac.uk/cgi/oai2_
# • Hochschulschriftenserver - Universität Frankfurt am Main
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations,
Unpublished reports and working papers
Website URL: _http://publikationen.ub.uni-frankfurt.de/_
Compatibility: OpenAIRE Basic (DRIVER OA)
OAI-PMH URL: _http://publikationen.ub.uni-frankfurt.de/oai_
# • Universitat Politècnica de Valencia (UPV) – RiuNet
Type: Publication Repository
Contents: Journal articles, Conference and workshop papers, Theses and
dissertations,
Learning Objects, Multimedia and audio, visual materials, Other special item
types
Website URL: _http://riunet.upv.es/_
Compatibility: OpenAIRE 2.0+ (DRIVER OA, EC funding)
OAI-PMH URL: https://riunet.upv.es/oai/driver,
_https://riunet.upv.es/oai/openaire_
The institutional repositories are used to deposit the publications generated
by the academic institutions participating in TWEETHER. Indeed, as commented
in Section 4.1, the scientific papers published so far are available in the
RiuNet repository and in OpenAIRE through the following link:
_https://www.openaire.eu/search/project?projectId=corda__h2020::546a6950975d78f06a46bc53f2bfc9ef_
Note that all these repositories make use of the OAI-PMH protocol (Open
Archives Initiative Protocol for Metadata Harvesting), which allows the
content to be properly found by means of the defined metadata. OAI-PMH is a
mechanism for the interoperability of repositories: Data Providers are
repositories that expose structured metadata via OAI-PMH, while Service
Providers make OAI-PMH service requests to harvest that metadata. OAI-PMH is
invoked through HTTP, as sketched below.
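As an illustration of how a Service Provider harvests such metadata, the
following minimal Python sketch issues a standard OAI-PMH ListRecords request
over HTTP against one of the endpoints listed above (error handling and
resumption tokens are omitted):

```python
from urllib.request import urlopen
from urllib.parse import urlencode

# Minimal OAI-PMH harvesting sketch: request Dublin Core records from a
# repository endpoint using the standard ListRecords verb. The response
# is an XML document containing <record> elements with the metadata.
endpoint = "https://riunet.upv.es/oai/openaire"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
with urlopen(endpoint + "?" + urlencode(params)) as response:
    xml = response.read().decode("utf-8")
print(xml[:500])  # first part of the harvested XML payload
```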
Apart from these repositories, TWEETHER project also uses the centralised
repository ZENODO to ensure the maximum dissemination of the information
generated in the project (research publications and data), as this repository
is the one recommended by the EC’s OpenAIRE initiative in order to unite all
the research results arising from EC funded projects.
Indeed, ZENODO 5 is an easy-to-use and innovative service that enables
researchers, EU projects and research institutions to share and showcase
multidisciplinary research results (data and publications) that are not part
of existing institutional or subject-based repositories. Namely, ZENODO
enables users to:
* easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* easily access and reuse shared research results
* define the different licenses and access levels that will be provided
Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly
available uploads in order to make content easily and uniquely citable. This
repository also makes use of the OAI-PMH protocol (Open Archives Initiative
Protocol for Metadata Harvesting) to facilitate content search through the
defined metadata. The metadata follows the schema defined in INVENIO 6 (a free
software suite for running a digital library or document repository on the
web) and is exported in several standard formats such as MARCXML, Dublin Core
and the DataCite Metadata Schema, according to the OpenAIRE Guidelines.
On the other hand, with ZENODO as the repository, the short- and long-term
storage of the research data is secured, since the data are stored safely in
the same cloud infrastructure as research data from CERN's Large Hadron
Collider. Furthermore, ZENODO applies digital preservation strategies, keeping
multiple online replicas and backing up the files (data files and metadata are
backed up nightly).
Therefore, this repository fulfils the main requirements imposed by the EC for
data sharing, archiving and preservation of the data generated in TWEETHER.
For this reason, a ZENODO community for TWEETHER documents has been created,
and can be accessed through the following link:
_https://zenodo.org/collection/user-tweether-project_
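As a usage sketch, the project's public uploads can also be retrieved
programmatically. The snippet below assumes ZENODO's public REST search API at
/api/records with 'q' and 'size' query parameters (mirroring the search link
given in Section 4) and its documented JSON response layout:

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Sketch: query ZENODO's public search API for TWEETHER uploads and
# print the title and DOI of each hit. Assumes the /api/records
# endpoint and its JSON response layout ("hits" -> "hits" list).
params = urlencode({"q": "tweether", "size": 20})
with urlopen("https://zenodo.org/api/records?" + params) as response:
    hits = json.load(response)["hits"]["hits"]
for record in hits:
    print(record["metadata"]["title"], "->", record.get("doi", "n/a"))
```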
# DESCRIPTION OF DATA SETS GENERATED OR COLLECTED
This section provides an explanation of the different types of data sets
produced in TWEETHER, as identified at this stage of the project. Since the
nature and extent of these data sets can evolve during the project, a new data
set associated with the S-parameters of the W-band chipsets has been
identified in this deliverable and is included in this section together with
the rest of the data sets described in the previous data management plan.
The descriptions of the different data sets, including their reference, file
format, level of access, and the metadata and repository to be used
(considerations described in Sections 6 and 7), are given below.
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SP_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_SP_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the measured or simulated S-parameter results for
the TWT structure.
It will mainly consist of small-signal calculations of the cold simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Touchstone format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse these data, the CST or MAGIC software is necessary (a reading
sketch for the Touchstone files is given after this table).
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
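As an illustration of reusing such Touchstone files, the following minimal
sketch uses the open-source scikit-rf package (an assumption: the package is
installed and the data set contains a two-port file; the file name
'twt_cold.s2p' is hypothetical):

```python
import skrf  # scikit-rf, a common library for reading Touchstone files

# Sketch: load a two-port Touchstone file and inspect the S21 trace,
# i.e. the small-signal transmission of the cold TWT structure.
# 'twt_cold.s2p' is a hypothetical file name for one of these data sets.
network = skrf.Network("twt_cold.s2p")
print(network.frequency)        # frequency axis of the data
print(network.s_db[:, 1, 0])    # |S21| in dB at each frequency point
```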
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_PS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
TWT_PS_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise results of the power levels at the relevant ports
of the TWT structure. They will include the DC bias conditions together with
the input and output power at all ports. The results will be either based on
measured values or obtained from simulations.
It will mainly consist of small-signal calculations of the hot simulations or
measurements of the TWT at the respective ports.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
MDIF or XPA format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To analyse these data, the CST or MAGIC software is necessary.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7)
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_CHIPSET_DS
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Semi-conductor Radio Chipset Datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This dataset contain the datasheet of the III-V semi conductor products used
by the 2 radios of the TWEETHER project
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_CHIPSET_SP
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
CHIPSET_SP_X
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the measured or simulated S-parameter results for
the OMMIC chipsets.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Touchstone format
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository provided that this does not jeopardise future exploitation.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
Whenever possible, this data set will be archived and preserved in ZENODO (See
Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System datasheet
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System general architecture, network interfaces, system data sheet, sub-
assemblies datasheets, range diagrams, photos of equipment. General
information useful for potential users. This data set will be suitable for
publications in scientific and industrial conferences.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_SYS_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
System Deployments
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
System coverage capabilities. Deployment methods to optimize coverage,
frequency re-use process. Scenario graph. General information useful for
potential users.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_MM-A_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
W-band Millimetre Antennas
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
Adaptation S parameters, bandwidth, radiating diagrams: co-polar & cross-
polar. Antennas datasheet: graphs and tables.
This data set will be suitable for publications in scientific and industrial
conferences.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_1
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial description
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise a description of the wireless network architecture
including the hardware, interfaces and services that will be deployed at the
UPV campus and used for the field trial. In addition, it will provide
information about sites (number of sites and its location), the expected
objectives to be achieved and the envisaged scenarios for the system.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File Format**
</td>
<td>
PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_2
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial long term KPI measurements
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise the results of the measurement campaign carried
out to evaluate the performance of the field trial deployed at the UPV campus
integrating the technology developed in TWEETHER.
It will include data obtained from the Network Monitoring System (PRTG
software or similar), which collects KPIs from the network elements. Some
examples of KPIs are throughput, RSSI (received signal strength indicator) and
dropped packets. Those data will be publicly accessible through a URL.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open (URL access) and a summary of these data
will be deposited in the ZENODO repository.
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
<table>
<tr>
<th>
**Data set reference**
</th>
<th>
DS_FT_3
</th> </tr>
<tr>
<td>
**Data set name**
</td>
<td>
Field trial bandwidth tests
</td> </tr>
<tr>
<td>
**Data set description**
</td>
<td>
This data set will comprise descriptive information on the bandwidth tests
used to evaluate the network at specific times. These tests will employ
traffic generator software that sends and receives traffic between the hosts
comprising the network, providing a measurement of the maximum available
bandwidth as well as latency and jitter values.
It will mainly consist of a doc-type document with details of the steps to be
followed in this test and the results obtained, as well as examples of the
scripts (or their description) used to obtain those results.
This information will be interesting for potential users such as telecom
operators.
</td> </tr>
<tr>
<td>
**File format**
</td>
<td>
Word or PDF
</td> </tr>
<tr>
<td>
**Standards and metadata**
</td>
<td>
The metadata is based on ZENODO’s metadata, including the title, creator,
date, contributor, description, keywords, format, resource type, etc. (See
Section 6)
</td> </tr>
<tr>
<td>
**Data sharing**
</td>
<td>
This data set will be widely open and will be deposited in the ZENODO
repository.
To perform this test, the iperf tool (or similar) is required (see the sketch
after this table).
</td> </tr>
<tr>
<td>
**Archiving and preservation**
</td>
<td>
This data set will be archived and preserved in ZENODO (See Section 7).
</td> </tr> </table>
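For illustration, the following minimal sketch performs such a bandwidth test
with the iperf3 Python wrapper (an assumption: the 'iperf3' package is
installed and an iperf3 server is already listening on the peer host, e.g.
started with 'iperf3 -s'; the address shown is only an example taken from the
field-trial table above):

```python
import iperf3  # Python wrapper around the iperf3 traffic generator

# Sketch: measure the maximum available bandwidth towards a peer host,
# in the spirit of the field-trial bandwidth tests described above.
client = iperf3.Client()
client.server_hostname = "10.128.4.211"  # example address from the trial
client.port = 5201                       # default iperf3 port
client.duration = 10                     # seconds of generated traffic
result = client.run()
print(f"Sent: {result.sent_Mbps:.1f} Mbps, "
      f"received: {result.received_Mbps:.1f} Mbps")
```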
Apart from the data sets specified above, which will be made open, other data
generated in TWEETHER, such as the detailed circuit specifications and
realisation and the terminal integration, are kept confidential to avoid
jeopardising future exploitation.
End of Deliverable D7.11
**1\. Executive Summary**
This document describes the Data Management Plan (DMP) adopted within CRACKER
and provides information on CRACKER’s data management policy and key
information on all datasets that have been produced within CRACKER, as well as
resources developed by the Cracking the Language Barrier federation of
projects (also known as the “ICT-17 group of projects”) and other projects who
wish to follow a common line of action, as provisioned in the CRACKER
Description of Action.
This final version includes the principles according to which the plan is
structured, the standard practices for data management that are being
implemented, and the description of the actual datasets produced within
CRACKER.
The document is structured as follows:
* Background and rationale of a DMP within H2020 (Section 2)
* Implementation of the CRACKER DMP (Section 3)
* Collaboration of CRACKER with other projects and initiatives (Section 4)
* Recommendations for a harmonized approach and structure for a DMP to be optionally adopted by the Cracking the Language Barrier federation of projects (Section 5).
2. **Background**
The use of a Data Management Plan (DMP) is required for projects participating
in the Open Research Data Pilot, which aims to improve and maximise access to
and re-use of research data generated by projects. The elaboration of DMPs in
Horizon 2020 projects is specified in a set of guidelines applied to any
project that collects or produces data. These guidelines explain how projects
participating in the Pilot should provide their DMP, i.e., detail the types of
data that will be generated or gathered during the project and after its
completion, the metadata and standards which will be used, the ways in which
these data will be exploited and shared for verification or reuse, and how
they will be preserved.
In principle, projects participating in the Pilot are required to deposit the
research data described above, preferably in a research data repository.
Projects must then take measures, to the extent possible, to enable third
parties to access, mine, exploit, reproduce and disseminate this research data
free of charge.
The guidance for DMPs calls for clarifications and analysis regarding the main
elements of the data management policy within a project. The respective
template identifies in brief the following five coarse categories 1 :
1. **Data set reference and name** : an identifier for the data set; use of a standard identification mechanism to make the data and the associated software easily discoverable, readily located and identifiable.
2. **Data set description** : details describing the produced and/or collected data and associated software and accounting for their usability, documentation, reuse, assessment and integration (i.e., origin, nature, volume, usefulness, documentation/publications, similar data, etc.).
3. **Standards and metadata** : related standards employed or metadata prepared, including information about interoperability that allows for data exchange and compliance with related software or applications.
4. **Data sharing** : procedures and mechanisms enabling data access and sharing, including details about the type or repositories, modalities in which data are accessible, scope and licensing framework.
5. **Archiving and preservation (including storage and backup)** : procedures for long-term preservation of the data including details about storage, backup, potential associated costs, related metadata and documentation, etc.
1
See details _here_ .
3. **The CRACKER DMP**
**3.1 Introduction and Scope**
For its own datasets, CRACKER follows _META-SHARE_ ’s best practices for data
documentation, verification and distribution, as well as for curation and
preservation, ensuring the availability of the data throughout and beyond the
runtime of CRACKER and enabling access, exploitation and dissemination,
thereby also complying with the standards of the _Open Research Data Pilot_ .
META-SHARE is a pan-European infrastructure bringing online together providers
and consumers of language data, tools and services. It is organized as a
network of repositories that store language resources (data, tools and
processing services) documented with high-quality metadata, aggregated in
central inventories allowing for uniform search and access. It serves as a
component of a language resource marketplace for researchers, developers,
professionals and industrial players, catering for the full development cycle
of language resources and technology, from research through to innovative
products and services [Piperidis, 2012].
Language resources in META-SHARE span the whole spectrum, from monolingual
and multilingual data sets, both structured (e.g., lexica, terminological
databases, thesauri) and unstructured (e.g., raw text corpora), to language
processing tools (e.g., part-of-speech taggers, chunkers, dependency parsers,
named entity recognisers, parallel text aligners, etc.). Resources are
described according to the META-SHARE metadata schema [Gavrilidou et al.
2012], catering in particular for the needs of the HLT community, while the
META-SHARE model's licensing scheme has a firm orientation towards the
creation of an openness culture, respecting, however, legacy and less open,
or less permissive, licensing options.
META-SHARE has been in operation since 2012 and is currently at version
3.1.1, released in December 2016. It features 28 repositories set up and
maintained by 37 organisations in 25 EU countries. The observed usage, as
well as the numbers of nodes, resources, users, queries, views and downloads,
are all encouraging and considered supportive of the choices made so far
[Piperidis et al., 2014]. Resource sharing in CRACKER has built upon and
extended the existing META-SHARE resource infrastructure, its specific
_MT-dedicated repository_ , as well as editing and annotation tools in
support of translation evaluation and translation quality scoring (e.g.,
_http://www.translate5.net/_ ).
This infrastructure, together with its bridges, provides support mechanisms
for the identification, acquisition, documentation and sharing of MT-related
data sets and language processing tools.
## 3.2 Dataset Reference and Name
CRACKER opts for a standard identification mechanism to be employed for each
data set, in addition to the identifier used internally by META-SHARE itself.
Reference to a dataset ID can optionally be made with the use of an ISLRN
( _International Standard Language Resource Number_ ), the most recent
universal identification schema for LRs, which provides LRs with unique
identifiers using a standardized nomenclature, ensuring that LRs are
identified and consequently recognized with proper references (cf. Figures 1
and 2; an illustrative format check follows the figures).
**Figure 1. An _example_ resource entry from the ISLRN website indicating the
resource metadata, including the ISLRN. **
**Figure 2. Examples of resources with the ISLRN indicated, from the ELRA
(left) and the LDC (right) catalogues.**
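As a purely illustrative aside, an ISLRN is displayed in public catalogues as
thirteen digits grouped 3-3-3-3-1. The short Python sketch below, whose helper
name and sample value are hypothetical, checks only that a string follows this
display layout; it does not verify that a number is actually registered with
the ISLRN service.

```python
import re

# Hypothetical helper: validates only the public ISLRN display layout
# (thirteen digits grouped as XXX-XXX-XXX-XXX-X); it does not check
# registration against the ISLRN service.
ISLRN_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{3}-\d{3}-\d$")

def looks_like_islrn(identifier: str) -> bool:
    return bool(ISLRN_PATTERN.match(identifier.strip()))

print(looks_like_islrn("235-866-695-853-4"))  # True: well-formed layout
print(looks_like_islrn("not-an-islrn"))       # False
```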
## 3.3 Dataset Description
In accordance with META-SHARE ontology, CRACKER has been addressing the
following resource and media types:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources.
Several datasets that have been produced (test data, training data) by the
WMT, IWSLT and QT Marathon events and extended with information on the results
of their respective evaluation and benchmarking campaigns (documentation,
performance of the systems etc.) are documented and made available through
META-SHARE.
A brief description of all the resources generated by CRACKER, or with the
support of CRACKER, and in coordination with project QT21, is provided below.
#### 3.3.1 R#1 WMT 2015 Test Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2015 Test Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The core languages are German-English and Czech-English; additional guest
language pairs are introduced each year.
For 2015 the guest language was Romanian. We also included Russian, Turkish
and Finnish, with funding from other sources.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3000 sentences per language pair, per year.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the test sets for the WMT shared translation task. They are small
parallel data sets used for testing MT systems, and are typically created by
translating a selection of crawled articles from online news sites.
WMT15 test sets are available at http://www.statmt.org/wmt15/
</td> </tr> </table>
#### 3.3.2 R#2 WMT 2016 Test Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 Test Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
CRACKER has contributed to the German-English and Czech-English test sets
from 2015 to 2018 2 , as well as to a different guest language pair in each
of these years.
The guest language pair for 2016 was Romanian-English.
We also included Russian, Turkish, Chinese, Estonian and Kazakh with funding
from other sources, as well as Finnish in 2016.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3000 sentences per language pair, per year.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the test sets for the WMT shared translation task. They are small
parallel data sets used for testing MT systems, and are typically created by
translating a selection of crawled articles from online news sites.
WMT16 test sets are available at
_http://data.statmt.org/wmt16/translation-task/test.tgz_
</td> </tr> </table>
#### 3.3.3 R#3 WMT 2017 Test Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 Test Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
CRACKER has contributed to the German-English and Czech-English test sets
from 2015 to 2018 3 , as well as to a different guest language pair in each
of these years. The guest language pair for 2017 was Latvian-English.
We also included Russian, Turkish, Chinese, Estonian and Kazakh with funding
from other sources, as well as Finnish in 2017.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3000 sentences per language pair, per year.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the test sets for the WMT shared translation task. They are small
parallel data sets used for testing MT systems, and are typically created by
translating a selection of crawled articles from online news sites.
WMT17 test sets are available at
_http://data.statmt.org/wmt17/translation-task/test.tgz_
</td> </tr> </table>
2 The 2018 test sets have not yet been made available.
3 The 2018 test sets have not yet been made available.
#### 3.3.4 R#4 WMT 2015 Translation Task Submissions
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2015 Translation Task Submissions
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
They match the languages of the test sets.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Research into MT evaluation. MT error analysis.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
25M (compressed text)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the submissions to the WMT translation task from all teams. We
create a tarball for use in the metrics task, but it is available for future
research in MT evaluation.
The WMT15 version is available at
_http://www.statmt.org/wmt15/wmt15-submitted-data.tgz_
</td> </tr> </table>
#### 3.3.5 R#5 WMT 2016 Translation Task Submissions
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 Translation Task Submissions
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
They match the languages of the test sets.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Research into MT evaluation. MT error analysis.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
44M (compressed text)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the submissions to the WMT translation task from all teams. We
create a tarball for use in the metrics task, but it is available for future
research in MT evaluation. The WMT16 version is available at
_http://data.statmt.org/wmt16/translation-task/wmt16-submitted-datav2.tgz_
</td> </tr> </table>
#### 3.3.6 R#6 WMT 2017 Translation Task Submissions
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 Translation Task Submissions
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
They match the languages of the test sets.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Research into MT evaluation. MT error analysis.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
46M (compressed text)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the submissions to the WMT translation task from all teams. We
create a tarball for use in the metrics task, but it is available for future
research in MT evaluation.
The WMT17 version is at
_http://data.statmt.org/wmt17/translation-task/wmt17-submitted-data-v1.0.tgz_
</td> </tr> </table>
#### 3.3.7 R#7 WMT 2015 Human Evaluations
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2015 Human Evaluations
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Pairwise rankings of MT output (2015-2016), and direct assessments (i.e.,
adequacy and fluency) (2016-2017)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Numerical data (in csv).
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
N/a
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
In conjunction with the WMT Translation Task Submissions, this can be used for
research into MT evaluation.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
50M
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Data available here:
2015 – _http://www.statmt.org/wmt15/translation-judgements.zip_
</td> </tr> </table>
#### 3.3.8 R#8 WMT 2016 Human Evaluations
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 Human Evaluations
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Pairwise rankings of MT output (2015-2016), and direct assessments (i.e.,
adequacy and fluency) (2016-2017)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Numerical data (in csv)
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
N/a
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
In conjunction with the WMT Translation Task Submissions, this can be used for
research into MT evaluation.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
50M (gzipped).
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Data available here:
2016 – _http://data.statmt.org/wmt16/translation-task/wmt16-translation-judgements.zip_
2016 – _http://computing.dcu.ie/~ygraham/da-human-judgments.tar.gz_
</td> </tr> </table>
#### 3.3.9 R#9 WMT 2017 Human Evaluations
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 Human Evaluations
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Pairwise rankings of MT output (2015-2016), and direct assessments (i.e.,
adequacy and fluency) (2016-2017)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Numerical data (in csv); 2017 with full output (texts).
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
N/a
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Preferably CC BY 4.0
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
In conjunction with the WMT Translation Task Submissions, this can be used for
research into MT evaluation.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
60MB (gzipped).
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Data available here:
_http://computing.dcu.ie/~ygraham/newstest2017-system-level-human.tar.gz_
_http://www.statmt.org/wmt17/results.html_
</td> </tr> </table>
#### 3.3.10 R#10 WMT 2015 News Crawl
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2015 News Crawl
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English, German, Czech plus variable guest languages.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Building MT systems
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
5.2Gb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
This data set consists of text crawled from online news, with the HTML
stripped out and sentences shuffled.
2015 – _http://www.statmt.org/wmt15/training-monolingual-news-2014.v2.tgz_
</td> </tr> </table>
#### 3.3.11 R#11 WMT 2016 News Crawl
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 News Crawl
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English, German, Czech plus variable guest languages.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Building MT systems
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
4.8Gb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
This data set consists of text crawled from online news, with the HTML
stripped out and sentences shuffled.
2016 – _http://data.statmt.org/wmt16/translation-task/training-monolingual-news-crawl.tgz_
</td> </tr> </table>
#### 3.3.12 R#12 WMT 2017 News Crawl
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 News Crawl
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English, German, Czech plus variable guest languages.
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The source data are crawled from online news sites and carry the respective
licensing conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Building MT systems
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3.7Gb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
This data set consists of text crawled from online news, with the HTML
stripped out and sentences shuffled.
2017 – _http://data.statmt.org/wmt17/translation-task/training-monolingual-news-crawl.tgz_
</td> </tr> </table>
#### 3.3.13 R#13 Quality Estimation Datasets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 Quality Estimation Datasets – phrase-level
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Bilingual corpora labelled for quality at phrase-level
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
German-English
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
TAUS Terms of Use
( _https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21_ ). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Other researchers working on quality estimation or evaluation of machine
translation
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
7,500 machine translations annotated for quality with binary labels (good/bad)
at the phrase-level (67,817 phrases). To be used to train and test quality
estimation systems.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The corpus will consist of source segments in English, their machine
translation, a segmentation of these translations into phrases and a binary
score given by humans indicating the quality of these phrases (see the
reading sketch after this table).
</td> </tr> </table>
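A minimal reading sketch for such labels, assuming a hypothetical release
layout in which each line carries the space-separated binary labels
("GOOD"/"BAD") for the phrases of one machine-translated segment; the actual
WMT package may organise the source, translations, segmentation and labels
differently.

```python
from collections import Counter

# Assumed layout: one line per segment, each line holding the space-separated
# binary labels ("GOOD"/"BAD") of its phrases. The file name is hypothetical.
def label_statistics(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            counts.update(label.upper() for label in line.split())
    return counts

stats = label_statistics("train.phrase_labels")  # hypothetical file name
total = sum(stats.values())
if total:
    print(f"{stats['BAD'] / total:.1%} of phrases labelled BAD")
```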
#### 3.3.14 R#14 WMT 2016 Automatic Post-editing data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2016 Automatic Post-editing data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
_TAUS Terms of Use_
TAUS grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
1294 kb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Training, development and test data consist of English-German triplets (
_source_ , _target_ and _post-edit_ ) belonging to the Information Technology
domain and already tokenized. Training and development respectively contain
12,000 and 1,000 triplets, while the test set contains 2,000 instances.
Target sentences are machine-translated with the KIT system. Post-edits are
collected by Text & Form from professional translators. All data is provided
by the EU project QT21 ( _http://www.qt21.eu/_ ). An illustrative HTER-style
computation over such triplets is sketched after this table.
</td> </tr> </table>
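The sketch below illustrates one common way such (source, target, post-edit)
triplets are exploited: computing a word-level edit distance between the MT
output and its human post-edit, normalised by the post-edit length (an
HTER-like score). It is a sketch only, not the official WMT/QT21 evaluation
implementation.

```python
# Illustrative HTER-style score over one (target, post-edit) pair.
def word_edit_distance(hyp: list[str], ref: list[str]) -> int:
    # Classic dynamic-programming Levenshtein distance over tokens.
    previous = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        current = [i]
        for j, r in enumerate(ref, start=1):
            cost = 0 if h == r else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def hter(target: str, post_edit: str) -> float:
    hyp, ref = target.split(), post_edit.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

print(hter("das ist ein Test", "dies ist ein kleiner Test"))  # 0.4
```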
#### 3.3.15 R#15 WMT 2017 Automatic Post-editing data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2017 Automatic Post-editing data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
1294 kb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
For WMT 2017, 11,000 segments have been added to the WMT16 training set
(En-De), together with a new test set for 2017 made of 2,000 segments
(En-De). Also in 2017, a new language pair has been added: De-En, with 25k
segments for training, 1k segments for dev and 2k segments for test. Adding
the 2016 and 2017 APE data together, we obtain a total of 28k segments for
each language pair, split as follows: En-De: training set = 23k, dev set =
1k, test-set16 = 2k, test-set17 = 2k; De-En: training set = 25k, dev set =
1k, test-set17 = 2k.
Training, development and test data consist of English-German triplets
(source, target and post-edit) belonging to the Information Technology domain
and already tokenized. Training and development respectively contain 12,000
and 1,000 triplets, while the test set contains 2,000 instances. Target
sentences are machine-translated with the KIT system. Post-edits are
collected by Text & Form from professional translators. All data is provided
by the EU project QT21 ( _http://www.qt21.eu/_ ).
</td> </tr> </table>
#### 3.3.16 R#16 WMT 2018 Automatic Post-editing data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
WMT 2018 Automatic Post-editing data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
1294 kb
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
For WMT 2018, a new test set of 2,000 segments has been added for each of the
2 language pairs from 2017 (En-De and De-En). Each language pair now covers
30k segments. The split is: En-De: training set = 23k, dev set = 1k,
test-set16 = 2k, test-set17 = 2k, test-set18 = 2k; De-En: training set = 25k,
dev set = 1k, test-set17 = 2k, test-set18 = 2k.
Training, development and test data consist of English-German triplets
(source, target and post-edit) belonging to the Information Technology domain
and already tokenized. Training and development respectively contain 12,000
and 1,000 triplets, while the test set contains 2,000 instances. Target
sentences are machine-translated with the KIT system. Post-edits are
collected by Text & Form from professional translators. All data is provided
by the EU project QT21 ( _http://www.qt21.eu/_ ). A quick arithmetic check of
these splits follows this table.
</td> </tr> </table>
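The quoted split sizes can be checked with a line of arithmetic; the snippet
below is purely illustrative, with counts in thousands of segments.

```python
# APE split sizes quoted above, in thousands of segments per language pair.
splits = {
    "En-De": {"train": 23, "dev": 1, "test16": 2, "test17": 2, "test18": 2},
    "De-En": {"train": 25, "dev": 1, "test17": 2, "test18": 2},
}
for pair, parts in splits.items():
    total = sum(parts.values())
    assert total == 30, (pair, total)  # "each language pair covers 30k segments"
    print(f"{pair}: {total}k segments in total")
```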
#### 3.3.17 R#17 QT21 Domain Specific Human Post-Edited data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
QT21 Domain Specific Human Post-Edited data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German, English to Czech, English to Latvian, German to English
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
QT21-TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components / Quality
Estimation / Error Analysis
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
70 MB
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Set of 165,000 domain-specific Human Post Edited (HPE) triplets for 4
language pairs and 6 translation engines. Each triplet consists of (source,
reference, HPE). The domain for En-De and En-Cz is IT; the domain for En-Lv
and De-En is Pharma. A total of 6 translation engines have been used to
produce the targets that have been post-edited: PBMT and NMT from KIT for
En-De, PBMT from KIT for De-En, PBMT from CUNI for En-Cz and both PBMT and
NMT systems from Tilde for En-Lv. For each language pair, one unique set of
source segments has been used as input to the different translation engines.
Each translation engine has provided 30,000 target segments, except for the
two En-Lv engines, which have provided 22,500 target segments each. En-De and
De-En HPEs have been collected by professional translators from Text&Form,
En-Lv HPEs by professional translators from Tilde, and En-Cz HPEs by
professional translators from Traductera. All data is provided by the EU
project QT21 ( _http://www.qt21.eu/_ ). A one-line check of the stated total
follows this table.
</td> </tr> </table>
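The stated total follows from the per-engine counts above: four engines
contributed 30,000 post-edited segments each and the two En-Lv engines 22,500
each. A purely illustrative check:

```python
# Four engines at 30,000 post-edited segments each, two En-Lv engines at 22,500.
segments_per_engine = [30_000] * 4 + [22_500] * 2
assert sum(segments_per_engine) == 165_000
print(sum(segments_per_engine))  # 165000
```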
#### 3.3.18 R#18 QT21 Domain Specific Human Error-Annotated data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
QT21 Domain Specific Human Error Annotated data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German, English to Czech, English to Latvian, German to English
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
QT21-TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components / Quality
Estimation / Error Analysis
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
39 MB
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Set of 14,000 domain-specific Human Error Annotated (HEA) quadruplets for 4
language pairs and 6 translation engines. Each quadruplet consists of
(source, reference, HPE, HEA). The domain for En-De and En-Cz is IT; the
domain for En-Lv and De-En is Pharma. This HEA data set is based on the HPE
data set in Section 3.3.17. A total of 6 translation engines have been used
to produce the targets that have been post-edited: PBMT and NMT from KIT for
En-De, PBMT from KIT for De-En, PBMT from CUNI for En-Cz and both PBMT and
NMT systems from Tilde for En-Lv. For each language pair, one unique set of
source segments has been used as input to the different translation engines.
From each translation engine, 2,000 target segments have been
error-annotated. From each subset of 2,000 HEA segments, 200 are annotated by
2 different professional translators. En-De and De-En HEAs have been
collected by professional translators from Text & Form, En-Lv HEAs by
professional translators from Tilde, and En-Cz HEAs by professional
translators from Aspena. All data is provided by the EU project QT21
( _http://www.qt21.eu/_ ).
</td> </tr> </table>
#### 3.3.19 R#19 QT21 WMT17 Human Post-Edited data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
QT21 WMT Human Post-Edited data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German, English to Czech, English to Latvian
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
QT21-TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components / Quality
Estimation / Error Analysis
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
10,800 Human Post Edited (HPE) triplets (for 3 language pairs)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Set of 10,800 Human Post Edited (HPE) triplets for 3 language pairs on WMT17
news task data. Each triplet consists of (source, reference, HPE). For each
language pair, the target segments have been produced on the WMT17 news task
by the 3 best WMT17 systems in the respective language pair. Each translation
engine has provided 1,200 segments. Translations (targets) have been
generated using “1 62.0 0.308 uedin-nmt”, “3 55.9 0.111 limsi-factored-norm”
and “54.1 0.050 CU-Chimera” for En-Cz; “69.8 0.139 uedin-nmt”, “66.7 0.022
KIT” and “66.0 0.003 RWTH-nmt-ensemb” for En-De; and “54.4 0.196
tilde-nc-nmt-smt”, “50.8 0.075 limsi-factored-norm” and “50.0 0.058
usfd-cons-qt21” for En-Lv. HPEs for En-De have been collected by professional
translators from Text&Form, En-Lv HPEs by professional translators from
Tilde, and En-Cz HPEs by professional translators from Traductera. All data
is provided by the EU project QT21 ( _http://www.qt21.eu/_ ).
</td> </tr> </table>
#### 3.3.20 R#20 QT21 WMT17 Human Error Annotated data set
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
QT21 WMT Human Error Annotated data set
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German, English to Czech, English to Latvian
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
QT21-TAUS Terms of Use
(https://lindat.mff.cuni.cz/repository/xmlui/page/licence-TAUS_QT21). TAUS
grants to QT21 User access to the WMT Data Set with the following rights:
i) the right to use the target side of the translation units into
a commercial product, provided that QT21 User may not resell the
WMT Data Set as if it is its own new translation; ii) the right to make
Derivative Works; and iii) the right to use or resell such Derivative Works
commercially and for the following goals:
i) research and benchmarking; ii) piloting new solutions; and iii) testing of
new commercial services.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Training of Automatic Post-editing and Quality Estimation components / Quality
Estimation / Error Analysis
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
3,600 quadruplets (for 3 language pairs)
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
Set of 3,600 WMT17 Human Error Annotated (HEA) quadruplets for 3 language
pairs and 9 translation engines. Each quadruplet consists of (source,
reference, HPE, HEA). The source data comes from the WMT17 news task. A total
of 9 translation engines have been used to produce the targets that have been
post-edited. Translations (targets) have been generated using “1 62.0 0.308
uedin-nmt”, “3 55.9 0.111 limsi-factored-norm” and “54.1 0.050 CU-Chimera”
for En-Cz; “69.8 0.139 uedin-nmt”, “66.7 0.022 KIT” and “66.0 0.003
RWTH-nmt-ensemb” for En-De; and “54.4 0.196 tilde-nc-nmt-smt”, “50.8 0.075
limsi-factored-norm” and “50.0 0.058 usfd-cons-qt21” for En-Lv. From each
translation engine, 200 target segments have been post-edited, and these have
further been error-annotated by 2 different professional translators. En-De
HEAs have been collected by professional translators from Text&Form, En-Lv
HEAs by professional translators from Tilde, and En-Cz HEAs by professional
translators from Aspena. All data is provided by the EU project QT21
( _http://www.qt21.eu/_ ).
</td> </tr> </table>
#### 3.3.21 R#21 IWSLT 2015 Data Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2015 Data Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
IWSLT 2015: from/to English to/from French, German, Chinese, Thai, Vietnamese,
Czech
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Data are crawled from the TED website and carry the respective licensing
conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For training, tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Approximately, for each language pair, training sets include 2,000 talks,
200K sentences and 4M tokens per side, while each dev and test set includes
10-15 talks, 1.0K-1.5K sentences and 20K-30K tokens per side. In each
edition, the training sets of previous editions are re-used and updated with
new talks added to the TED repository in the meantime.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the data sets for the MT tasks of the evaluation campaigns of IWSLT.
They are parallel data sets used for building and testing MT systems. They are
publicly available through the WIT3 website _http://wit3.fbk.eu_ , see
release: 2015-01
</td> </tr> </table>
#### 3.3.22 R#22 IWSLT 2016 Data Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2016 Data Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
IWSLT 2016: from/to English to/from Arabic, Czech, French, German
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Data are crawled from the TED website and carry the respective licensing
conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For training, tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Approximately, for each language pair, training sets include 2,000 talks,
200K sentences and 4M tokens per side, while each dev and test set includes
10-15 talks, 1.0K-1.5K sentences and 20K-30K tokens per side. In each
edition, the training sets of previous editions are re-used and updated with
new talks added to the TED repository in the meantime.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the data sets for the MT tasks of the evaluation campaigns of IWSLT.
They are parallel data sets used for building and testing MT systems. They are
publicly available through the WIT3 website _http://wit3.fbk.eu_ , see
release: 2016-01
</td> </tr> </table>
#### 3.3.23 R#23 IWSLT 2017 Data Sets
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2017 Data Sets
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
Text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
IWSLT 2017:
* multilingual: German, English, Italian, Dutch, Romanian
* bilingual: from/to English to/from Arabic, German, French, Japanese, Korean, Chinese
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Data are crawled from the TED website and carry the respective licensing
conditions.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
Downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
For training, tuning and testing MT systems.
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Approximately, for each language pair, training sets include 2,000 talks,
200K sentences and 4M tokens per side, while each dev and test set includes
10-15 talks, 1.0K-1.5K sentences and 20K-30K tokens per side. In each
edition, the training sets of previous editions are re-used and updated with
new talks added to the TED repository in the meantime.
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
These are the data sets for the MT tasks of the evaluation campaigns of IWSLT.
They are parallel data sets used for building and testing MT systems. They are
publicly available through the WIT3 website _http://wit3.fbk.eu_ , see
release: 2017-01
</td> </tr> </table>
#### 3.3.24 R#24 IWSLT 2015 Human Post-Editing data
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2015 Human Post-Editing data
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German (EnDe) and Vietnamese to English (ViEn)
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Post-edits are released under a Creative Commons Attribution (CCBY) 4.0
International License.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Analysis of MT quality and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
600 segments for EnDe and 500 segments for ViEn (10K tokens each). 5 different
automatic translations post-edited by professional translators
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The human evaluation (HE) dataset created for the EnDe and ViEn MT tasks was
a subset of the official test set of the IWSLT 2015 evaluation campaign. The
resulting HE sets are composed of 600 segments for EnDe and 500 segments for
ViEn, each corresponding to around 10,000 words. Human evaluation was based
on Post-Editing, i.e., the manual correction of the MT system output, which
was carried out by professional translators. Five primary runs submitted to
the evaluation campaign were post-edited for each of the two tasks.
Data are publicly available through the WIT3 website _http://wit3.fbk.eu_ ,
at _this_ page.
</td> </tr> </table>
#### 3.3.25 R#25 IWSLT 2016 Human Post-Editing data
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2016 Human Post-Editing data
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
English to German (EnDe) and English to French (EnFr)
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Post-edits are released under a Creative Commons Attribution (CCBY) 4.0
International License.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Analysis of MT quality and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
600 segments for both EnDe and EnFr (10K tokens each).
Respectively, 9 and 5 different automatic translations post-edited by
professional translators
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The human evaluation (HE) dataset created for EnDe and EnFr MT tasks was a
subset of one of the official test sets of the IWSLT 2016 evaluation campaign.
The resulting HE sets are composed of 600 segments for both EnDe and EnFr,
each corresponding to around 10,000 words. Human evaluation was based on Post-
Editing, i.e., the manual correction of the MT system output, which was
carried out by professional translators. Nine and five primary runs submitted
to the evaluation campaign were post-edited for the two tasks, respectively.
Data are publicly available through the WIT3 website _http://wit3.fbk.eu_ , at
_this_ page.
</td> </tr> </table>
#### 3.3.26 R#26 IWSLT 2017 Human Post-Editing data
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
IWSLT 2017 Human Post-Editing data
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
corpus
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
text
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
Dutch to German (NlDe) and Romanian to Italian (RoIt)
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
Post-edits will be released under a Creative Commons Attribution (CCBY) 4.0
International License.
</td> </tr>
<tr>
<td>
**Distribution**
**Medium**
</td>
<td>
will be downloadable
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Analysis of MT quality and Quality Estimation components
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
603 segments for both NlDe and RoIt (10K tokens each). For each direction, 9
different automatic translations post-edited by professional translators
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
The human evaluation (HE) dataset created for NlDe and RoIt MT tasks was a
subset of the official test set of the IWSLT 2017 evaluation campaign. The
resulting HE sets are composed of 603 segments for both NlDe and RoIt, each
corresponding to around 10,000 words. Human evaluation was based on Post-
Editing, i.e., the manual correction of the MT system output, which was
carried out by professional translators. Nine primary runs submitted to the
evaluation campaign, with engines trained under constrained data conditions
and in bilingual/multilingual/zero-shot mode, were post-edited for each of
the two tasks.
Data will be publicly available through the WIT3 website _http://wit3.fbk.eu_ .
</td> </tr> </table>
## 3.4 Standards and Metadata
CRACKER follows META-SHARE’s best practices for data documentation. The basic
design principles of the META-SHARE model have been formulated according to
specific needs identified, namely: (a) a typology for language resources (LR)
identifying and defining all types of LRs and the relations between them; (b)
a common terminology with as clear semantics as possible; (c) minimal schema
with simple structures (for ease of use) but also extensive, detailed schema
(for exhaustive description of LRs); (d) interoperability between descriptions
of LRs and associated software across repositories.
In answer to these needs, the following design principles were formulated:
* expressiveness, i.e., cover any type of resource;
* extensibility, allowing for future extensions and catering for combinations of LR types for the creation of complex resources;
* semantic clarity, through a bundle of information accompanying each schema element;
* flexibility, by employing both exhaustive and minimal descriptions;
* interoperability, through mappings to widely used schemas (DC, Clarin Concept Registry, which has taken over the ISOcat DCR).
The central entity of the META-SHARE ontology is the Language Resource. In
parallel, LRs are linked to other satellite entities through relations,
represented as basic elements. The interconnection between the LR and these
satellite entities depicts the LR’s life cycle from production to use:
reference documents related to the LR (papers, reports, manuals, etc.),
persons/organizations involved in its creation and use (creators, distributors
etc.), related projects and activities (funding projects, activities of usage
etc.), accompanying licenses, etc. CRACKER has followed these standard
practices for data documentation, in line with their design principles of
expressiveness, extensibility, semantic clarity, flexibility and
interoperability.
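As an aside, the entity-and-satellites structure described above can be
pictured with a minimal sketch; the class and field names below are
illustrative assumptions, not the actual META-SHARE implementation.

```python
from dataclasses import dataclass, field

# Illustrative model of the central Language Resource entity and the
# satellite entities named above (documents, actors, projects, licences).
@dataclass
class LanguageResource:
    name: str
    resource_type: str                                   # e.g., "corpus"
    documents: list[str] = field(default_factory=list)   # papers, reports, manuals
    actors: list[str] = field(default_factory=list)      # creators, distributors
    projects: list[str] = field(default_factory=list)    # funding projects, usage
    licences: list[str] = field(default_factory=list)    # accompanying licences

lr = LanguageResource(
    name="WMT 2016 Test Sets",
    resource_type="corpus",
    documents=["WMT16 shared task overview paper"],   # illustrative values
    actors=["CRACKER consortium"],
    projects=["CRACKER", "QT21"],
    licences=["source-site licensing conditions"],
)
print(lr.name, "->", ", ".join(lr.projects))
```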
The META-SHARE metadata can also be represented as linked data following the
work being done in Task 3.3 of the CRACKER project, the _LD4LT group_ and the
LIDER project, which has produced an _OWL version_ of the META-SHARE metadata
schema. Such representation can be generated by the mapping process initiated
by the above tasks and initiatives.
As an example, a subset of the META-SHARE metadata records has been converted
to Linked Data and is accessible via the _Linghub_ portal.
Included in the conversion process to OWL was the _legal rights_ module of
the META-SHARE schema, taking into account the _ODRL_ model & vocabulary
v.2.1.
## 3.5 Data Sharing
As noted above, resource sharing has been built upon META-SHARE. CRACKER has
maintained and released an improved version of the META-SHARE software.
For its own data sets, CRACKER has applied, whenever possible, the permissive
licensing and open sharing culture which has been one of the key components of
META-SHARE for handling research data in the digital age.
Consequently, for the MT/LT research and user communities, sharing of all
CRACKER data sets has been organised through META-SHARE. The metadata schema
provides components and elements that address copyright and Intellectual
Property Rights (IPR) issues, restrictions imposed on data sharing, and IPR
holders. These, together with an existing licensing toolkit, have served as
guidance for the selection of the appropriate licensing solution and the
creation of the respective metadata. In parallel, ELRA/ELDA has implemented a
_licensing wizard_ , helping rights holders define and select the appropriate
license under which they can distribute their resources.
## 3.6 Archiving and Preservation
All datasets produced are provided and made sustainable through the existing
META-SHARE repositories, or new repositories that partners may choose to set
up and link to the META-SHARE network. Datasets are locally stored in the
repositories’ storage layer in compressed format.
# Collaboration with Other Projects and Initiatives
CRACKER created an umbrella initiative that included all running and recently
completed EU-supported projects working on technologies for a multilingual
Europe, namely the Cracking the Language Barrier federation, which is set up
around a short multi-lateral Memorandum of Understanding (MoU).
The MoU contains a non-exhaustive list of general areas of collaboration, and
all projects and organisations that sign this document are invited to
participate in these collaborative activities.
At the time of writing (December 2017), the MoU has been signed by 12
organisations and 25 projects (including service contracts):
* _Organisations:_ CITIA, CLARIN, ECSPM, EFNIL, ELEN, ELRA, GALA, LTInnovate, META-NET, NPLD, TAUS, W3C.
* _Projects:_ ABUMATRAN, CRACKER, DLDP, ELRC, EUMSSI, EXPERT, Falcon, FREME, HimL, iHEARu KConnect, KRISTINA, LIDER, LT_Observatory, MixedEmotions, MLi, MMT, MultiJEDI, MultiSensor, Pheme, QT21, QTLeap, ROCKIT, SUMMA, XLiMe
Additional organisations and projects have been approached for participation
in the initiative. The group of members is constantly growing.
# Recommendations for Harmonised DMPs for the ICT-17 Federation of Projects
One of the areas of collaboration included in the CRACKER MoU refers to the
data management and repositories for data, tools and technologies; thus, all
projects and organisations participating in the initiative are invited to join
forces and to collaborate on harmonising data management plans (metadata, best
practices etc.) as well as data, tools and technologies distribution through
open repositories.
At the kick-off meeting of the ICT-17 group of projects on April 28, 2015,
CRACKER offered support to the Cracking the Language Barrier federation of
projects by proposing a Data Management Plan template with shared key
principles that can be applied, if deemed helpful, by all projects, again,
advocating an open sharing approach whenever possible (also see Deliverable
D1.2). This plan has been included in the overall communication plan and it
will inform the working group that will maintain and update the roadmap for
European MT research.
In future face-to-face or virtual meetings of the federation, we propose to
discuss the details about metadata standards, licenses, or publication types.
Our goal has been to prepare a list of planned tangible outcomes of all
projects, i.e., all datasets, publications, software packages and any other
results, including technical aspects such as data formats. We would like to
stress that the intention is not to provide the primary distribution channel
for all projects’ data sets but to provide, in addition to the channels
foreseen in the projects’ respective Descriptions of Actions, one additional,
alternative common distribution platform and approach for metadata description
for all data sets produced by the Cracking the Language Barrier federation.
<table>
<tr>
<th>
**In this respect, the activities that the participating projects may
optionally undertake in the future are the following:**
1. Participating projects may consider using META-SHARE as an additional, alternative distribution channel for their tools or data sets, using one of the following options:
1. projects may set up a project or partner specific META-SHARE repository, and use either open or even restrictive licences;
2. projects may join forces and set up one dedicated Cracking the Language Barrier META-SHARE repository to host the resources developed by all participating projects, and use either open or even restrictive licences (as appropriate).
2. Participating projects may wish to use the _META-SHARE repository_ _software_ for documenting their resources, even if they do not wish to link to the network.
</th> </tr> </table>
As mentioned above, the collaboration in terms of harmonizing data management
plans and recommending distribution through open repositories forms one of the
six areas of collaboration indicated in the Cracking the Language Barrier MoU.
Participation in one or more of the potential areas of collaboration in this
joint community activity is optional.
An example of a harmonised DMP is that of the _FREME_ project. FREME signed the
corresponding Memorandum of Understanding and is participating in this
initiative. As part of the effort, FREME will make available its metadata from
existing datasets that are used by FREME, using a combined metadata scheme:
this covers both the META-SHARE template provided by CRACKER, as well as the
_DataID schema_ . FREME will follow both META-SHARE and DataID practices for
data documentation, verification and distribution, as well as for curation and
preservation, ensuring the availability of the data and enabling access,
exploitation and dissemination. Further details as well as the actual dataset
descriptions have been documented in the _FREME Data Management Plan_ . See
Section 3.1.2 of that plan for an example of the combined approach.
## Recommended Template of a DMP
As pointed out already, the collaboration in terms of harmonizing DMPs is
considered an important aspect of convergence within the groups of projects.
In this respect, any project that is interested in and intends to collaborate
towards a joint approach for a DMP may follow the proposed structure of a DMP
template. The following Section describes a recommended template, while
Section 3 has provided a concrete example of such an implementation, i.e., the
CRACKER DMP. It is, of course, expected that any participating project may
accommodate its DMP content according to project-specific aspects and scope.
These DMPs are also expected to be gradually completed as the project(s)
progress into their implementation.
<table>
<tr>
<th>
**I. The ABC Project DMP**
i. **Introduction/Scope**
ii. **Data description**
iii. **Identification mechanism**
iv. **Standards and Metadata**
v. **Data Sharing**
vi. **Archiving and preservation**
</th> </tr> </table>
**Figure 3. The recommended template for the implementation and structuring of
a DMP.**
### Introduction and Scope
An overview of, and the approach to, the resource-sharing activities
underpinning the language technology and machine translation research and
development within each participating project and as part of the Cracking the
Language Barrier initiative.
### Dataset Reference and Name
It is recommended that a standard identification mechanism should be employed
for each data set, e.g., (a) a PID (Persistent Identifier as a long-lasting
reference to a dataset) or (b) _ISLRN_ (International Standard Language
Resource Number).
### Dataset Description
It is recommended that the following resource and media types are addressed:
* **corpora** (text, audio, video, multimodal/multimedia corpora, n-gram resources),
* **lexical/conceptual resources** (e.g., computational lexicons, ontologies, machine-readable dictionaries, terminological resources, thesauri, multimodal/ multimedia lexicons and dictionaries, etc.)
* **language descriptions** (e.g., computational grammars)
* **technologies** (tools/services) that can be used for the processing of data resources
To support the identification of resources within the Cracking the Language
Barrier initiative, and to obtain a first rough estimate of their number,
coverage and other core characteristics, CRACKER has circulated two
templates, dedicated to datasets and to associated tools and services
respectively. Projects that decided to participate in this uniform
cataloguing were invited to fill in these templates with brief descriptions
of the resources they expect to produce and/or collect. The templates are as
follows (also in the Appendix):
<table>
<tr>
<th>
**Resource Name**
</th>
<th>
Complete title of the resource
</th> </tr>
<tr>
<td>
**Resource Type**
</td>
<td>
Choose one of the following values: Lexical/conceptual resource, corpus,
language description (missing values can be discussed and agreed upon with
CRACKER)
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) of the resource content
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the LR can be used
</td> </tr>
<tr>
<td>
**Distribution Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
resource, e.g., accessible through interface, downloadable, CD/DVD, hard copy
etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the resource for which it has been produced
</td> </tr>
<tr>
<td>
**Size**
</td>
<td>
Size of the resource with regard to a specific size unit measurement in form
of a number
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the resource (including URL, if
any)
</td> </tr> </table>
**Table 1. Template for datasets description**
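A minimal sketch of how a project might capture the Table 1 fields for one of
its datasets before submitting them to the cataloguing exercise; the
dictionary keys mirror the template, and all values are illustrative only.

```python
# Illustrative Table 1 record; every value below is a made-up example.
dataset_record = {
    "Resource Name": "Example parallel corpus",
    "Resource Type": "corpus",
    "Media Type": "text",
    "Language(s)": "German-English",
    "License": "CC BY 4.0",
    "Distribution Medium": "downloadable",
    "Usage": "training and testing MT systems",
    "Size": "3,000 sentences",
    "Description": "Illustrative entry only; see Table 1 for field semantics.",
}

missing = [key for key, value in dataset_record.items() if not value]
print("record complete" if not missing else f"missing fields: {missing}")
```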
<table>
<tr>
<th>
**Technology Name**
</th>
<th>
Complete title of the tool/service/technology
</th> </tr>
<tr>
<td>
**Technology Type**
</td>
<td>
Tool, service, infrastructure, platform, etc.
</td> </tr>
<tr>
<td>
**Function**
</td>
<td>
The function of the tool or service, e.g., parser, tagger, annotator, corpus
workbench etc.
</td> </tr>
<tr>
<td>
**Media Type**
</td>
<td>
The physical medium of the content representation, e.g., video, image, text,
numerical data, n-grams, etc.
</td> </tr>
<tr>
<td>
**Language(s)**
</td>
<td>
The language(s) that the tool/service operates on
</td> </tr>
<tr>
<td>
**License**
</td>
<td>
The licensing terms and conditions under which the tool/service can be used
</td> </tr>
<tr>
<td>
**Distribution Medium**
</td>
<td>
The medium, i.e., the channel used for delivery or providing access to the
tool/service, e.g., accessible through interface, downloadable, CD/DVD, etc.
</td> </tr>
<tr>
<td>
**Usage**
</td>
<td>
Foreseen use of the tool/service for which it has been produced
</td> </tr>
<tr>
<td>
**Description**
</td>
<td>
A brief description of the main features of the tool/service
</td> </tr> </table>
**Table 2. Template for technologies description**
### Standards and Metadata
Participating projects have been recommended to deploy the META-SHARE metadata
schema for the description of their resources and provide all details
regarding their name, identification, format, etc.
Providers of resources wishing to participate in the initiative will be able
to request and get assistance through dedicated helpdesks on questions
concerning (a) the metadata-based LR documentation at helpdesk-metadata@meta-
share.eu (b) the use of licences, rights of use, IPR issues, etc. at helpdesk-
[email protected] and (c) the repository installation and use at helpdesk-
[email protected].
### Data Sharing
It was recommended that all datasets (including all relevant metadata
records) produced by the participating projects be made available under
licenses that are as open and as standardised as possible, in line with
established best practices. Any interested provider can consult the
META-SHARE licensing options and pose related questions to the respective
helpdesk.
### Archiving and Preservation
As regards long-term preservation, two options may be considered:
1. As part of the further development and maintenance of the META-SHARE infrastructure, a project that participates in the Cracking the Language Barrier initiative may opt to set up its own project or partner specific META-SHARE repository and link to the META-SHARE network, with CRACKER providing all support necessary in the installation, configuration and set up process.
2. Alternatively, one dedicated Cracking the Language Barrier META-SHARE repository can be set up to host the resources developed by all participating projects, with CRACKER catering for procedures and mechanisms enabling long-term preservation of the datasets.
It should be repeated at this point that following the META-SHARE principles,
the curation and preservation of the datasets, together with the rights of
their use and possible restrictions, are under the sole control and
responsibility of the data providers.
# D.7.1: Data management plan commensurate with Pilot on Open Research Data,
initial version, July 2015
**Project Name:** Gap Analysis for Integrated Atmospheric ECV Climate
Monitoring (GAIA-CLIM)
**Funder:** European Commission (Horizon 2020)
**Grant Title:** No 640276
# Project description
The Gap Analysis for Integrated Atmospheric ECV Climate Monitoring
(GAIA-CLIM) Project will establish sound methods for the characterisation of
satellite-based Earth Observation (EO) data by surface-based and sub-orbital
measurement platforms, spanning Atmosphere, Ocean and Land observations.
GAIA-CLIM shall add value by:
* Improving traceability and uncertainty quantification on sub-orbital measurements;
* Quantifying co-location uncertainties between sub-orbital and satellite data;
* Use of traceable measurements in data assimilation; and
* Provision of co-location match-up data, metadata, and uncertainty estimates via a ‘virtual observatory’ facility.
The project is not envisaged to directly collect primary data, i.e., to make
measurements for the sole purpose of the project. Rather, it will provide
added value to existing and forthcoming measurements, taken both by
consortium members under separate funding support and by third-party
institutions participating in various national and international measurement
programs. GAIA-CLIM shall primarily use metrological reference-quality
measurements that are traceable and have well-quantified uncertainty
estimates. At the global scale, currently envisaged potential contributing
networks include the Global Climate Observing System (GCOS) Reference
Upper-Air Network, the Network for Detection of Atmospheric Composition
Change (NDACC) and the Total Carbon Column Observing Network (TCCON). At the
European level, these include networks such as MWRNET and ACTRIS. A full
listing of contributing observations will become apparent upon completion of
task 1.2, envisaged in year 2 of the project. Importantly, GAIA-CLIM will
only make use of those primary observations to which no academic restrictions
on use, re-use, and re-distribution any longer apply. The providers of
primary data from these networks shall implicitly or explicitly agree to
release their data according to this data management plan and the ‘virtual
observatory’ data policy. At the time of writing, the ‘virtual observatory’
and the respective data policy do not yet exist. However, this data policy
will be in compliance with the H2020 Pilot on Open Research Data (see next
section). The usage of satellite data has to follow the data policies
prescribed by the satellite operators, although GAIA-CLIM will only use those
data where the rights for re-use and re-distribution in the ‘virtual
observatory’ can be attained. In reality this constitutes the vast majority
of satellite data. Furthermore, re-analysis and Numerical Weather Prediction
(NWP) data may also become part of the forthcoming ‘virtual observatory’.
Such data will generally arise from within the consortium (ECMWF and MO
partners under WP4) and no restrictions are envisaged. Project parts dealing
with enhancing existing primary data streams are:
* Preparation and assessment of reference-quality sub-orbital data (including in global assimilation systems) and characterisation of key satellite datasets
  1. Assessment of several new satellite missions, using data assimilation of reference-quality sub-orbital measurements, targeting temperature and humidity (under work package 4).
  2. Development of infrastructure to deliver data dissemination for reference data co-locations with satellite measurements (under work packages 3 and 5).
  3. Development of a software infrastructure for preparation, monitoring, analysis and evaluation of reference data (under work packages 2 and 5).
  4. Development of a general methodology for using reference-quality sub-orbital data for the characterisation of EO data (under work packages 4 and 5).
* Creation and population of a ‘virtual observatory’
  1. Creation of a collocation database between EO measures and reference-quality measurements (a simplified sketch of this matching step is given after this list).
  2. Preparation of data to enable comparisons, including relevant uncertainty information and metadata for users to understand and make appropriate use of the data for various applications.
  3. Creation of data interrogation and visualization tools, building upon existing European and global infrastructure capabilities.
  4. Planning for the potential transition of the resulting ‘virtual observatory’ from research to operational status in support of the Copernicus Climate Change Service and Copernicus Atmospheric Service.
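As a rough illustration of the collocation matching referred to in the first item of the second list above, the following minimal Python sketch pairs satellite observations with reference measurements under a time window and a horizontal distance threshold; the three-hour window, the 50 km threshold and the tuple layout are assumptions made for illustration, not values defined by the project.

```python
import math
from datetime import timedelta

MAX_DT = timedelta(hours=3)  # assumed co-location time window
MAX_KM = 50.0                # assumed horizontal mismatch threshold

def dist_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (spherical law of cosines)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dl)
    return 6371.0 * math.acos(min(1.0, max(-1.0, c)))  # clamp rounding error

def colocate(sat_obs, ref_obs):
    """Yield pairs of (time, lat, lon) tuples meeting both criteria."""
    for st, slat, slon in sat_obs:
        for rt, rlat, rlon in ref_obs:
            if abs(st - rt) <= MAX_DT and dist_km(slat, slon, rlat, rlon) <= MAX_KM:
                yield (st, slat, slon), (rt, rlat, rlon)
```

A production implementation would index observations by time and space rather than use this naive nested loop, but the pairing criteria are the essential idea.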
# Pilot on Open Research Data
GAIA-CLIM participates in the H2020 Pilot on Open Research Data. Knowledge
generated during the project will be shared openly. Any milestones,
deliverables or technical documents produced will, following appropriate
internal-to-project review procedures involving at least an expert and a
management-based review, be published online and made discoverable.
Peer-reviewed publications will by policy be to journals that are either open
access or allow the authors to pay for the articles to be made open access
(for such instances, the additional charges will be paid).
# Dissemination and Exploitation of Results
A core facet of GAIA-CLIM is the ‘virtual observatory’ of visualization,
subsetting, and analysis tools, which will constitute the primary means by
which end-users will be able to access, visualize and utilize the outputs of
the project. The ‘virtual observatory’ will be built upon and extend a number
of existing facilities operated by project partners, which already undertake
subsets of the desired functionality, such as the Network of Remote Sensing
Ground-Based Observations in support of the Copernicus Atmospheric Service
(NORS), the Cloud-Aerosol-Water-Radiation Interactions (ICARE) Project and
the US National Oceanic and Atmospheric Administration (NOAA) Products
Validation System (NPROVS). The resulting ‘virtual observatory’ facility will
be entirely open and available to use for any application area. Significant
efforts will be made to build an interface that is easy to use and which
makes data discovery, visualization and analysis effortless. The ‘virtual
observatory’ work package includes a specific task dedicated to documenting
the steps required to transition this facility from a research to an
operations framework with a view to constituting a long-term infrastructure.
# Primary source datasets envisaged to be used within GAIA-CLIM
For the initial version of this data management plan, a number of datasets
that are envisaged to contribute primary data streams to be used in GAIA-CLIM
are documented here. Upon completion of Task 1.2 in year 2, some further
datasets will likely be added. Where networks have data policies that place
restrictions on near-real-time use, GAIA-CLIM shall only use the open
delayed-mode data. Note that GAIA-CLIM will respect the data policy of the
data originators and that the documentation herein should not be taken to
imply advocacy for changing existing policies. Rather, it is important to
understand and document the policies and practices that pertain to the source
data.
## 1\. GRUAN
**Data set reference and name**
GCOS Reference Upper Air Network (GRUAN)
### Data set description
A group of stations coordinated by the GRUAN Lead Centre, hosted by the German
Meteorological Service, DWD. Data products that meet necessary conditions of
traceability and uncertainty quantification, documentation and publication are
served via the US National Oceanic and Atmospheric Administration’s National
Centers for Environmental Information (NOAA NCEI) in Asheville, North
Carolina, USA.
### Standards and metadata
Data and comprehensive metadata must be produced according to stated
requirements (documented through a technical document), shared with a central
processing facility, and be traceable to either SI or community-accepted
standards. The processing is open and transparent.
**Data sharing**
Data are shared without restriction or delay via NOAA NCEI.
### Archiving and preservation (including storage and backup)
The archive is on a secure backed-up service and a copy is retained at the
GRUAN Lead Centre. Entire data streams are periodically reprocessed when new
insights on instruments accrue. Such reprocessing always incurs a change in
version number and associated documentation.
## 2\. NDACC
**Data set reference and name**
Network for the Detection of Atmospheric Composition Change (NDACC)
### Data set description
The NDACC is composed of more than 70 high-quality, remote-sensing research
stations 1 for observing and understanding the physical and chemical state
of the stratosphere and upper troposphere and for assessing the impact of
stratospheric changes on the underlying troposphere and on global climate.
While the NDACC remains committed to monitoring changes in the stratosphere
with an emphasis on the long-term evolution of the ozone layer, its
priorities have broadened considerably to encompass issues such as the
detection of trends in overall atmospheric composition and understanding
their impacts on the stratosphere and troposphere, and establishing links
between climate change and atmospheric composition. A wide variety of trace
gases is measured 2.
### Standards and metadata
NDACC is organized in several working groups, which are predominantly based
on the applied measurement techniques, i.e., Brewer & Dobson, FTIR, Lidar,
Microwave, Satellite, Sondes, Spectral UV, Theory, UV/Vis and Water Vapor. To
ensure quality and consistency of NDACC operations and products, a number of
protocols have been formulated covering topics such as measurement and
analysis procedures, data submission, instrument inter-comparisons, theory
and analysis, validation, and Cooperating Networks 3. Regular working group
meetings and instrument inter-comparisons are held to safeguard a continued
high standard of the network’s products.
### Data sharing
All NDACC data over two years old is publicly available 4. However, many
NDACC investigators have agreed to make their data publicly available
immediately upon archiving. The public record is available through anonymous
ftp 5. The use of NDACC data prior to its being made publicly available
(i.e., for field campaigns, satellite validation, etc.) is possible via
collaborative arrangement with the appropriate PI(s). Rapid delivery data,
which will likely be revised before entry in the full database, is also
available for some instruments 6.
In all cases when NDACC data is used in a publication, the authors agree to
acknowledge both the NDACC data center and the data provider. Whenever
substantial use is made of NDACC data in a publication, an offer of
co-authorship will be made through personal contact with the data providers
and/or owners. Users of NDACC data are also expected to consult the online
documentation and reference articles to fully understand the scope and
limitations of the instruments and resulting data, and are encouraged to
contact the appropriate NDACC PI (listed in the data documentation on the web
page) to ensure the proper use of specific data sets. Those using NDACC data
in a talk or paper are asked to acknowledge its use, and to inform the
‘Theory and Analysis Working Group’ PIs of any relevant publications.
### Archiving and preservation (including storage and backup)
All data are released to the public and available on the anonymous ftp site
no more than two years after the measurement date. Data and comprehensive
metadata are accessible via the NDACC data table 7; clicking on the station
name takes the user to the associated public data site.
## 3\. TCCON
**Data set reference and name**
Total Carbon Column Observing Network (TCCON)
### Data set description
TCCON is a network of ground-based Fourier Transform Spectrometers that takes
direct solar absorption spectra at about 20 sites around the globe. From
these, column-averaged mole fractions of trace gases (CO2, CH4, N2O, HF, CO,
H2O, and HDO) are inferred with a retrieval software. The HF and HDO
retrievals are uncalibrated and hence preliminary. Each site contributes its
dataset as an extending series for the current version of the retrieval
software. Data are updated monthly and are publicly available no later than
one year after the measurement; however, many sites choose to release their
data much sooner.
### Standards and metadata
TCCON products are calibrated against in-situ WMO values 8. In this way, the
long-term stability is checked continuously. All data are delivered with an
extensive metadata overhead.
### Data sharing
Data is openly accessible and hosted at the Carbon Dioxide Information
Analysis Center (CDIAC) 9 at Oak Ridge National Laboratory, USA. The data is
made freely available to everyone. Acknowledgement and/or co-authorship in
cases of heavy use is expected. The data are stored in NetCDF format and each
file has a DOI assigned to it (one per site and retrieval version). It is
envisaged that each dataset will be described in a data publication paper.
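Since the data are distributed as NetCDF files, they can be inspected with standard tooling; the following minimal Python sketch uses the netCDF4 library, with a file name and variable name that are assumptions for illustration (actual names depend on the site and retrieval version).

```python
from netCDF4 import Dataset  # pip install netCDF4

# "tccon_site_example.nc" and the variable name "xco2" are placeholders;
# real names depend on the site and retrieval version.
with Dataset("tccon_site_example.nc") as nc:
    print(list(nc.variables))                      # list available variables
    xco2 = nc.variables["xco2"][:]                 # column-averaged CO2 values
    print(getattr(nc, "doi", "no DOI attribute"))  # per-file DOI, if present
```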
### Archiving and preservation (including storage and backup)
Archiving and preservation are ensured by the World Data Center (WDC) for
Atmospheric Trace Gases’ 10 standard implemented by CDIAC. In the near
future, the data will be mirrored at the PANGAEA 11 data center, hosted by
the Alfred-Wegener-Institute in Bremerhaven, Germany.
## 4\. ACTRIS
**Data set reference and name**
ACTRIS (Aerosols, Clouds, and Trace gases Research InfraStructure Network) 12
### Data set description
**ACTRIS** is a European Project aiming at integrating European ground-based
stations, equipped with advanced atmospheric probing instrumentation for
aerosols, clouds, and short-lived gas-phase species. ACTRIS will have the
essential role to support building of new knowledge as well as policy issues
on climate change, air quality, and long-range transport of pollutants. The
networks provide consistent datasets of observations, which are made using
state-of-the-art measurement technology and data processing. Many of the
stations from the different networks are co-located with or close to
remote-sensing and in-situ instrumentation. The data is available through the
ACTRIS data portal 13.
### Standards and metadata
At the time of writing, there is no unified standard for all measurements,
and no metadata are made available yet.
### Data sharing
The ACTRIS Data Centre web portal allows users to search and analyse
atmospheric composition data from a multitude of data archives through a
single user interface. For some of the databases, the interface also allows
users to download data.
ACTRIS data is freely available for non-commercial use. Use of this data
implies an agreement to reciprocate. 14
### Archiving and preservation (including storage and backup)
The ACTRIS database is maintained by the Norsk Institutt for Luftforskning
(NILU). The ACTRIS-2 project runs until 2019. Attempts are being made to
achieve long-term preservation by making the network a European Research
Infrastructure.
## 5\. MWRNET
**Data set reference and name**
An International Network of Microwave Radiometers (MWRnet)
### Data set description
MWRnet links together a group of stations operated by independent
institutions and running Microwave Radiometers (MWR) operationally. MWRnet
activities are coordinated by the MWRnet chairs. Data products from the
independent member institutions are collected and harmonized occasionally to
foster participation in international experiments and projects.
### Standards and metadata
Data products from MWRnet members are collected and harmonized for providing
uniform datasets to large-scale international experiments and projects. The
resulting data and metadata have been tailored case by case according to the
needs.
For the MWR data assimilation experiment performed within the HYdrological
cycle in Mediterranean EXperiment (HyMeX) 15 preparation phase, the OBSOUL
ASCII format was used to comply with the Météo France ARPEGE/ALADIN/AROME
system.
For the contribution to the HyMeX Special Observing Period 1 (SOP1), data and
associated metadata were provided in NetCDF format 16.
For the contribution to the TOPROF 17 Observation minus Background (O-B)
experiment, the observation data product standard defined for the
High-Definition Clouds and Precipitation for advancing Climate Prediction
(HD(CP)2) project was adopted, which follows to the extent possible the
principles given in the NetCDF Climate and Forecast Metadata Conventions 1.6
18.
### Data sharing
The policy for data sharing is agreed with the MWRnet members case by case.
For the HyMeX preparation phase and SOP1 field experiment, the MWR data have
been released according to the HyMeX Data and Publication Policy 19. For
GAIA-CLIM, the MWRnet members shall agree to release their MWR data according
to this data management plan and the ‘virtual observatory’ data policy.
### Archiving and preservation (including storage and backup)
The policy for data archiving and preservation is decided by the MWRnet
chairs case by case. For the HyMeX SOP1 field experiment, the MWRnet data
have been gathered on the HyMeX common backed-up database for secured,
facilitated, and enhanced availability. The entire data streams are
periodically reprocessed when new insights on instruments accrue. Such
reprocessing always incurs a change in version number and associated
documentation.
For GAIA-CLIM, the MWRnet data archiving and preservation policy is still to
be decided.
**Scientific research data should be easily:**
## 1\. Discoverable
**Are the data and associated software produced and/or used in the project
discoverable (and readily located), identifiable by means of a standard
identification mechanism (e.g. Digital Object Identifier, DOI)?**
Data and metadata will mainly be made available through the ‘virtual
observatory’ facility. This online tool will make the data discoverable and
also provide mapping, comparison and visualization functions. Data versioning,
source locations, and any DOIs from the primary data sources will be retained.
The possibility of creating data and software DOIs for the ‘virtual
observatory’ shall be investigated, but it is not yet decided. For instance,
DOI-registration works well for static data sets but remains mostly
unexplored for regularly updated (changed) data. Thus, a decision for or
against usage of DOIs depends very much on the final operation mode of the
‘virtual observatory’, which needs to be developed during the project. The
‘virtual observatory’ facility will be hosted by EUMETSAT and made
discoverable.
## 2\. Accessible
**Are the data and associated software produced and/or used in the project
accessible and in what modalities, scope, licenses?**
As GAIA-CLIM participates in the Pilot on Open Research Data, knowledge
generated during the project is shared openly. Any milestones, deliverables
or technical documents produced are, following appropriate
internal-to-project review procedures, published online and made
discoverable. Commensurate with the Pilot on Open Research Data, all work
explicitly produced by GAIA-CLIM will be open. However, GAIA-CLIM work in
many cases will build upon pre-existing capabilities of the partners. In a
restricted subset of these cases, Intellectual Property Right (IPR)
restrictions relate to these background materials. Such background material
IPR is covered within the consortium agreement (cf. Annex 1). The policing of
this aspect is the responsibility of the Technical Coordination Group.
The ‘virtual observatory’ facility will be entirely open and available to use
for any application area. However, following the results of the user survey,
the ‘virtual observatory’ will contain online applications. The underlying
software will be openly shared to the extent useful, but GAIA-CLIM will not
provide software usage support for users; this is beyond the scope and
resources of the project.
Peer-reviewed publications will by policy be to journals that are either open
access or allow the authors to pay for the articles to be made open access.
## 3\. Assessable and intelligible
**Are the data and associated software produced and/or used in the project
assessable for and intelligible to third parties in contexts such as
scientific scrutiny and peer review?**
Research is undertaken within GAIA-CLIM to improve observational traceability
for a number of broadly used methods of observation and the quantification of
the co-location mismatch uncertainties. The software resulting from GAIA-CLIM
that shall constitute input to the ‘virtual observatory’ shall be shared
openly and without restriction and shall be well documented.
The novel approach of GAIA-CLIM is to demonstrate comprehensive, traceable,
EO Cal/Val for a number of metrologically mature ECVs, in the domains of
atmospheric state and composition, that will guarantee that products are
assessable and intelligible to third-party users.
## 4\. Usable beyond the original purpose for which it was collected
**Are the data and associated software produced and/or used in the project
useable by third parties even long time after the collection of the data?**
Data served will be available for any use regardless of whether it is within
the currently envisaged end-uses or otherwise. Significant efforts will be
made to build an interface that is easy to use and which makes data
discovery, visualization and analysis effortless. All software that underlies
the ‘virtual observatory’ and is created using GAIA-CLIM resources shall be
made available. The ‘virtual observatory’ work package includes a specific
task dedicated to documenting the steps required to transition this facility
from a research to an operations framework in support of Copernicus services.
Once the project is completed, the ‘virtual observatory’ and its underlying
software will remain available, but in a "frozen state", with the aim of
being further developed and integrated into the emerging Copernicus Climate
Change Service and Copernicus Atmospheric Service. If continued in this way,
Copernicus data and software distribution policies will be applied in the
long term.
## 5\. Interoperable to specific quality standards
**Are the data and associated software produced and/or used in the project
interoperable allowing data exchange between researchers, institutions,
organisations, countries, etc.?**
The project will only deal with EO and sub-orbital (including in-situ and
ground-based remote-sensing) data that are available for academic use without
restriction, to simplify issues over dissemination of added-value products
derived by the project. These added-value products will be made available
without restriction immediately after they are produced and quality
controlled. Data are accompanied by conversion tools that enable output in
formats that are in broad use within the recognised primary stakeholder
communities, e.g., CF-compliant NetCDF. The data will be made available along
with reading routines and visualisation tools through the ‘virtual
observatory’ facility, which will allow data discovery and data usage for
calibration and validation of level 1 and level 2/3 EO observations. The
expectation is that newly written software will use open-source software to
the extent possible and useful, and that existing software shall be chosen
with a preference for programming languages that are open source or have
open-source compilers available, e.g., C++, Fortran or Python.
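For illustration of the CF-compliant NetCDF option mentioned above, the following minimal Python sketch writes a single variable carrying CF-style attributes; the file name, variable name and synthetic values are assumptions made for illustration.

```python
import numpy as np
from netCDF4 import Dataset

# "example_profile.nc" and the synthetic temperature profile are placeholders.
with Dataset("example_profile.nc", "w") as nc:
    nc.Conventions = "CF-1.6"               # global convention attribute
    nc.createDimension("level", 10)
    temp = nc.createVariable("ta", "f4", ("level",))
    temp.standard_name = "air_temperature"  # CF standard name
    temp.units = "K"                        # units attribute required by CF
    temp[:] = np.linspace(288.0, 220.0, 10) # synthetic profile values
```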
# Executive Summary
This document gathers in a single report the essential information about the
Data Management Plan of the TRACE project (Task 1.4, WP1). This task consists
of carrying out the activities necessary to adhere to the Guidelines on Open
Access to Scientific Publications and Research Data in Horizon 2020, and it
produces this deliverable, D1.4.
This document is structured as follows. The next two sections provide a short
description of the TRACE project and the main aspects of Work Package 1
(WP1), respectively. Section 4 presents a summary of the data made available
and the applications that were used to collect users’ data, and focuses on
which data was effectively collected and how it is provided to others.
Section 5 addresses the ease of accessing the data and the format used for
the open data set. Sections 6, 7, and 8 address the cost of providing the
TRACE data, its privacy concerns, and related ethical issues, respectively.
Finally, Section 9 wraps up the document.
# Introduction
This project explored the potential of walking and cycling tracking services
to promote walking and cycling mobility. TRACE focused on established walking
and cycling promotion measures and thoroughly assessed the potential of
ICT-based tracking services to overcome barriers to implementation and to
find new factors driving the effectiveness of those measures.
Through specific research, the related ICT challenges, such as scheme
dynamics, privacy, trust, low cost, interoperability and flexibility, were
tackled for each type of measure. Measures were established to promote
walking and cycling travel to the workplace, shopping, school and leisure.
Both the ability of tracking tools to address the traditional challenges of
these measures and their potential to bring new features in the fields of
awareness raising, financial/tax incentives, infrastructure planning and
service concepts were assessed.
A common, flexible and open-access tool was developed to provide an ICT input
and output platform that addresses the related ICT challenges. Over this
platform it will be easy for anyone to build products based on tracking
services tailored to the requirements of the specific measures. The project
developed and tested a representative set of such products in real measures
underway.
These test cases at the same time validated and provided additional inputs
for the project’s research issues, and they trigger the widespread adoption
of tracking services to support walking and cycling measures in Europe.
Policy makers, walking and cycling practitioners and final users were deeply
involved in all stages of the project.
TRACE’s identity comes from the realization that the emergence of
tracking-enabling technologies and their market uptake opens a window of
potential for tracking-based solutions to increase cycling and walking. New
as they are in the market, the possible uses of these technologies still
depend on further developments that realize that potential. There are still
theoretical and practical knowledge limitations of various types that
constrain a higher uptake.
TRACE aims to lead the progress on this knowledge and to quickly and widely
spread it to the relevant players (cities, national/regional authorities,
local stakeholders), potentially benefiting any relevant business players.
TRACE will do so in several ways:
* By providing an open knowledge base on cycling and walking tracking potential, challenges, solutions and benefits that can be consulted and applied by stakeholders;
* By providing (open access) tools addressing fundamental ICT challenges which can be used by market-oriented application developers;
* By developing market oriented tools that will be used by the TRACE sites and could be used anywhere else;
* By running a set of 8 pilot cases which will become (successful) examples for other sites to follow;
* By using the consortium’s network of cities and stakeholders, including the project followers, as well as umbrella organizations (besides the participant POLIS) of relevant stakeholders (like CIVINET), to convey TRACE’s messages and tools
* By setting up web-based communication channels and using related information platforms (e.g., ELTIS) to spread news and project outputs;
* By directly involving partners that are commercially interested in developing top-notch tools and in maximizing their application to cycling and walking promotion.
# Objectives of WP1
Work package WP1 (in which this document is included) coordinates and manages
the project, both technically and administratively, and oversees the
relationships and the communication between project contractors and the EU
(European Commission). Namely, the main issues addressed are:
* Support the objectives of the H2020 program as pursued by this project.
* Ensure the quality and timely production of the project's deliverables.
* Define and oversee the logistics of the project.
* Setup and manage a repository of software and reports, as well as the web site of the project.
* Define and supervise the data management plan.
One of the key components of the TRACE project is its effective management,
which includes both technical and administrative management. Strategic
technical issues were discussed in periodic project meetings. In addition,
task leaders and WP leaders arranged meetings much more frequently, as
required by the specifics of the corresponding task and/or WP, so that their
progress was ensured. The project management activities also involved a focus
on ensuring ongoing successful collaboration between the partners as well as
with other EU-related projects and with the community. This work package
spans the full lifecycle of the project, from month 1 to month 36.
# Data Summary
In this project there are three types of output that are relevant to mention
(in addition to the resulting deliverables and software):
1. Documents used for disseminating the TRACE project results (both intermediate and final);
2. Data collected with the software that was developed and used in pilots during the TRACE project; and
3. Software developed.
The goals of the above types are clear: i) disseminate the project results,
ii) provide the community with an important asset that resulted from the
project, and iii) provide software to be freely used in other applications.
Note that the development of software in TRACE has always followed an
open-source approach; thus, free open-source software has always been used
whenever possible.
## Documentation
Regarding the first item, the list of such documents is provided in Appendix
1; note that this file is available in the TRACE repository (Google Drive).
Most of the documents are in PDF format. The documents produced are useful
not only for the purpose indicated (e.g., publication in a conference,
presentation, etc.) but also to anyone who may be interested in the topic.
## Data Collected
With respect to the second item, the data that was collected resulted from
the three tools used:
* Biklio mobile application (Biklio),
* Positive Drive mobile application (PD), and
* Traffic Snake Game (TSG).
We now provide a very short description of each tool mentioned above (for more
details, please see the corresponding deliverables D5.4, D5.2, and D5.3,
respectively).
### Biklio
The aim of Biklio is to create an application that generates a network of
recognition and benefits to urban bicycle users, linking them to local
businesses and forming a cycling community. The Biklio application encourages
citizens to ride their bikes for more time or for longer distances. Appealing
benefits offered by the shop, museum, etc. can persuade participants to opt
for the bicycle instead of the car more often. At the same time, local shops
will benefit from new customers.
Biklio is intrinsically innovative from two perspectives. Firstly, Biklio
follows an original concept to link bicycle users to local businesses in their
community; although inspired by other behavior-change applications, to the best
of our knowledge no previous application has tried to combine bicycle usage
tracking with benefits from local businesses. Secondly, Biklio has been
developed with a high priority for easy and lightweight operation. This
implied a significant ICT effort on putting together a set of techniques that
allowed most of the features of Biklio to operate correctly even when GPS
tracking or internet connectivity are not available. The research conducted in
WP4 of TRACE played a decisive role in accomplishing this challenging goal.
In contrast to other tools in TRACE, the development of Biklio also included
developing its own branding. The Biklio brand was created based on a desire to
put an emphasis on the rewarding and recognition dimension of bicycle users,
as suggested in D2.3. Defining this brand was achieved through an iterative
process that involved feedback from all the partners in TRACE, as well as
feedback from future users via focus groups and online surveys.
Next, we present the basic concepts in Biklio.
#### Eligibility criteria
Biklio supports different types of benefits with distinct eligibility
criteria. Common to all benefits is that, after fulfilling the conditions
required by a benefit, the user can claim the benefit when he/she makes a
purchase in the shop. There are two basic types of eligibility criteria,
which are envisioned to be used by most shops:
* The customer is eligible for a benefit if he/she arrives at the vicinity of a shop by bicycle;
* The customer is eligible for a benefit if he/she is a proven regular bicycle user or arrived in the area by bicycle.
The choice of the two options above was determined by the user requirements
received from WP2 (deliverable D2.3). As documented in D2.3, shops are mainly
interested in attracting new clients and, in this perspective, they want
criteria that are sufficiently flexible to enlarge their customer base. In
this sense, the preference of shopkeepers tends towards the first option
above, which is less restrictive on eligible users (for example, opening the
chance to gain benefits to people who live or work within walking distance of
the shop). Also according to D2.3, an additional desirable scheme is to also
give out benefits to regular bicycle users. This explains the second
eligibility criterion.
Complementarily, Biklio will support more diverse criteria to be added in the
future. These will allow introducing continuous novelty to the application and
to cover relevant objectives to the campaign manager, shopkeepers,
municipalities, among other stakeholders. These richer criteria may include a
combination of items such as time cycled, area, path, schedule (happy hour),
number of check-ins, days cycled per week, target group segmentation (age,
regular/occasional cyclist, residential or work location, frequent/occasional
customer), among others. Advanced eligibility criteria are, however, not
supported in the current version of the tool that is considered in this
deliverable.
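As a rough illustration of how the first criterion could be evaluated from position data, consider the following Python sketch; the vicinity radius and all names are assumptions made for illustration and do not reflect the actual Biklio implementation.

```python
import math

VICINITY_RADIUS_M = 100.0  # assumed threshold; the real value is campaign-defined

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def eligible_by_arrival(user_pos, shop_pos, arrived_by_bicycle):
    """First criterion: the user arrived in the vicinity of the shop by bicycle."""
    return (arrived_by_bicycle
            and haversine_m(*user_pos, *shop_pos) <= VICINITY_RADIUS_M)

# Example: a user who arrived by bicycle about 50 m from the shop is eligible.
print(eligible_by_arrival((45.4460, 9.1240), (45.4464, 9.1242), True))
```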
#### Definition of benefits
According to D2.2 and D2.3, the benefits will normally be discounts on the
purchase of products, but they may be other types of benefits, like products
offered with the purchase of other products. The benefit and the associated
eligibility criteria are chosen by the shopkeeper.
There is a suggested guideline that the value of the benefit should be at
least 5% of the value of the purchased items or services. This rule is not
validated a priori (i.e., when the shopkeeper first defines and publishes the
benefit). Instead, we rely on a posteriori validation, by having consumers
report complaints about non-compliant benefits; in that event, the campaign
manager may inquire with the shopkeeper and, if the report is justified,
withdraw the benefit from the system and, ultimately, ban the shop from the
Biklio network.
### PD
Positive Drive is the first gamification tracking platform and app of its
kind that only positively rewards good/preferred behaviour in traffic. With
fact-based, accurate information combined with state-of-the-art algorithms,
PD gives users the right nudges to contribute to solving present and future
problems such as congestion, increasing CO2 emissions and road safety, in the
case of TRACE by encouraging cycling, walking, better route choices, etc.
Why Positive Drive? Because mobility policy should be more positive! Common
knowledge tells us that rewarding the good works much better than punishing
the wrong. But for some reason there are still too many mobility regulations
put in place from a negative point of view, such as the urge to install speed
bumps, to use traffic lights and to enforce with fines. Positive Drive proves
it can be more positive, by stimulating users to make the right choices. With
small nudges, PD tries to push users in a more sustainable direction. All is
based on and developed in cooperation with well-known behavioural scientists.
Positive Drive registers travelled routes and rewards users who show the
desired behaviour with points: (s)miles. These (s)miles can be used in our
game room, a playful lottery-like game filled with (local) prizes and
interesting discounts. The prizes can be local (offered by local retail) or
can be financial incentives (for example from a government), or a mix. The
platform is extremely flexible and can easily (and cost-effectively) be
customized to the local situation and standards, and it can support a
sustainable collaboration between municipalities, local businesses, employers
and travellers.
Within the TRACE project, Positive Drive made huge improvements. The back
office and app were modernized, the app is smarter, battery life is improved,
and the system is more flexible, customizable and attractive to different
types of target groups. Furthermore, it now tracks all modalities, which
makes Positive Drive extremely adaptable to all types of behaviour-change
campaigns.
### TSG
The concept of the Traffic Snake Game is to encourage sustainable home-school
transport amongst primary school children, their parents and teachers.
The Traffic Snake Game campaign is traditionally a paper-based campaign.
Schools that sign up to participate in the Traffic Snake Game receive a large
five-meter-long snake banner, large green stickers and smaller dots that
depict a sustainable mode of travel. Each class is given five green stickers
that each represent one day of the week. Pupils then have to select a dot
that represents the mode of transport used to get to school, and the dot is
put onto the green sticker along with their peers’ dots and then placed onto
the snake banner. A reward scheme incentivizes the kids to complete the snake
as soon as they can. Rewards consist of gadgets, extra playing time, an
excursion, an apple, no homework for a day, etc. During the game, the
percentage of sustainable trips increases by 10% to 20%. Three weeks after
the game, the percentage of sustainable trips is still higher than before the
game (e.g., around 7% higher in the worst cases). Baseline and “before” and
“after” data were obtained by simple hands-up surveys conducted by teachers
in classrooms.
In 2015, a web-based version (TSG 2.0) was developed that can be played on a
computer (schools often use a SMART board to play this version of the
campaign). The web-based version can run without any physical materials, but
schools tend to use the materials (i.e., banner, stickers, and posters) of the
paper-based campaign aside the web-based version.
Within TRACE, mobility tracking was added to the original TSG campaign. More
specifically, within the TRACE project, tracking hardware suitable for
tracking primary school children and an online platform to display the
tracking data and to handle the administrative steps of setting up the
tracking campaign were developed. Tracking may offer relevant data to schools
that aim to increase traffic safety around the school. For example, the
school can learn where it might be useful to ask someone to help children
safely cross the street by learning about the routes of children who cycle
and walk to school. In addition, tracking can support the TSG campaign in
several aspects.
### Data
The data collected from each type of software tool differs slightly; however,
the resulting data set follows the same structure:
* Biklio – this mobile application collects the following data:
  * Identity: for example, name, email address
  * Individual characteristics: for example, age, city
  * Activity data: mode of transport used in movements made by the user
  * Position data: coordinates of the user in space and time
  * Use of the application: quantification of the utilization of different features of the application
* PD – this mobile application collects the following data:
  * first name
  * last name
  * gender
  * date of birth
  * street address
  * house number
  * addition
  * zip code
  * city
  * e-mail
  * phone number
The above data is collected in addition to the GPS data. Note that the name
and email address are needed for the registration process. If a user wins a
prize, he or she can simply ‘claim’ the prize by filling out the profile
page. This page asks for personal information, which can be used for the
shipment of the prizes that were won.
Positive Drive does not collect data that is not directly related to the
mentioned goal of the project. The collected data is exclusively used for the
listed goals; this means third parties do not have access to this data.
Anonymized data was made available to the pilot site for the analysis
purposes of the project.
* TSG – this hardware/software collects the following data:
  * Hardware (trackers) collects:
    * GPS coordinates
    * timestamp
    * tracker ID
  * Third-party server (data is destroyed after the modality recognition and map-matching process):
    * modality, attributed in a follow-up process (based on speeds derived from GPS coordinates)
    * map-matched routes
  * School’s user interface (password-protected webpages):
    * school’s name
    * school’s address
    * e-mail of contact person
    * class ID
    * kid’s name
    * tracker ID
    * parent’s email
No data was used other than that collected with the three tools mentioned
above, which were used in the following pilots:
<table>
<tr>
<th>
**Pilot Site**
</th>
<th>
**Partner**
</th>
<th>
**PD**
</th>
<th>
**TSG**
</th>
<th>
**Biklio**
</th> </tr>
<tr>
<td>
Belgium
</td>
<td>
M21
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
</td> </tr>
<tr>
<td>
Belgrade
</td>
<td>
FTTE
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
</td> </tr>
<tr>
<td>
Esch
</td>
<td>
LuxM
</td>
<td>
Y
</td>
<td>
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Breda
</td>
<td>
Breda
</td>
<td>
Y
</td>
<td>
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Agueda
</td>
<td>
Agueda
</td>
<td>
Y
</td>
<td>
Y
</td>
<td>
</td> </tr>
<tr>
<td>
Plovdiv
</td>
<td>
EAP
</td>
<td>
</td>
<td>
Y
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Bologna
</td>
<td>
SRM
</td>
<td>
</td>
<td>
Y
</td>
<td>
Y
</td> </tr>
<tr>
<td>
Southend on Sea
</td>
<td>
SSSBC
</td>
<td>
</td>
<td>
Y
</td>
<td>
Y
</td> </tr> </table>
## Software
With respect to the third type of results (software/hardware resulting from
TRACE), we have the following:
* Biklio mobile application (Biklio),
* Positive Drive mobile application (PD),
* Traffic Snake Game (TSG), TAToo (Tracking Analysis Tool), and mobile software modules.
The first three items above (Biklio, PD, and TSG) were already mentioned, as
these were used in TRACE pilots to collect the data previously described.
Note that Biklio was developed from scratch by INESC ID (for both Android and
iOS) and is available for free. Regarding the PD mobile application, the
software is not freely available, as it is being sold by a private company.
With respect to TSG, this includes a hardware box that was developed from
scratch by a private company; thus, naturally, this is also not freely
available. TAToo is a software tool that was developed by TIS and is freely
available; however, it is worth noting that this tool uses some software from
a private company that imposes some restrictions on its use. Finally, some
software components that were developed by INESC ID are freely available and
correspond to basic functionalities identified as needed for mobile
applications; these modules were developed and freely provided to help the
community build tracking-based applications and campaigns for behavior change
(see more details at http://h2020-trace.eu/outputs/open-source-software/).
We believe that the data made public may be useful to anyone interested in
the topic, in particular to those interested in knowing how users behave
(mobility-wise) and in obtaining key indicators such as volume of users,
number of trips, average speed, etc. For example, in the TRACE project we
developed yet another software tool (in addition to those mentioned above),
called TAToo, that, based on the data mentioned above, provides maps with the
key indicators previously mentioned (more details in D5.5). Thus, TAToo –
Tracking Analysis Tool – translates GPS or other georeferenced trajectory
data into information that characterizes the observed flows over the mobility
network, through indicators that reveal the demand for cycling and walking,
its behaviour and the performance of the existing mobility infrastructure.
### Anonymized Data Set
An important aspect considered regarding the data set was its format and the
possibility of exchanging information between partners, organizations, etc.
Thus, all the data is written to a CSV file; each line contains a trajectory
ID, a timestamp, latitude and longitude, and the mode of transport. Here is a
brief example of the CSV file:
4261353,2015-11-30 22:43:58,45.445988,9.124048,bycicle
4261353,2015-11-30 22:44:57,45.445496,9.121952,bycicle
4261353,2015-11-30 22:45:57,45.444817,9.119162,bycicle
4261353,2015-11-30 22:46:57,45.444828,9.119143,bycicle
4261353,2015-11-30 22:47:57,45.444832,9.119166,bycicle
4261353,2015-11-30 22:48:57,45.444782,9.119164,bycicle
4261353,2015-11-30 22:49:57,45.444794,9.119179,bycicle
4261353,2015-11-30 22:50:57,45.444767,9.119217,bycicle
4261354,2015-11-30 22:43:58,45.445988,9.124048,bycicle
4261354,2015-11-30 22:44:57,45.445496,9.121952,bycicle
4261354,2015-11-30 22:45:57,45.444817,9.119162,bycicle
4261354,2015-11-30 22:46:57,45.444828,9.119143,bycicle
4261354,2015-11-30 22:47:57,45.444832,9.119166,bycicle
4261354,2015-11-30 22:48:57,45.444782,9.119164,bycicle
4261354,2015-11-30 22:49:57,45.444794,9.119179,bycicle
4261354,2015-11-30 22:50:57,45.444767,9.119217,bycicle
4261355,2015-11-30 22:43:58,45.445988,9.124048,bycicle
4261355,2015-11-30 22:44:57,45.445496,9.121952,bycicle
4261355,2015-11-30 22:45:57,45.444817,9.119162,bycicle
4261355,2015-11-30 22:46:57,45.444828,9.119143,bycicle
4261355,2015-11-30 22:47:57,45.444832,9.119166,bycicle
4261355,2015-11-30 22:48:57,45.444782,9.119164,bycicle
4261355,2015-11-30 22:49:57,45.444794,9.119179,bycicle
4261355,2015-11-30 22:50:57,45.444767,9.119217,bycicle
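Because the format is so simple, the data can be processed with the standard library alone; the following Python sketch (the file name is an assumption) derives per-trajectory indicators in the spirit of those reported by TAToo, such as distance, duration and average speed.

```python
import csv
import math
from collections import defaultdict
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

# Group GPS fixes by trajectory ID.
trips = defaultdict(list)
with open("trace_anonymized.csv", newline="") as f:  # assumed file name
    for traj_id, ts, lat, lon, mode in csv.reader(f):
        trips[traj_id].append((datetime.fromisoformat(ts), float(lat), float(lon)))

# Per-trajectory indicators: distance, duration and average speed.
for traj_id, points in sorted(trips.items()):
    points.sort()  # order fixes by timestamp
    dist = sum(haversine_m(a[1], a[2], b[1], b[2])
               for a, b in zip(points, points[1:]))
    seconds = (points[-1][0] - points[0][0]).total_seconds()
    speed = dist / seconds if seconds else 0.0
    print(f"{traj_id}: {dist:.0f} m in {seconds:.0f} s -> {speed:.2f} m/s")
```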
# FAIR data
## Making data findable, including provisions for metadata
Most of the data that resulted from the TRACE project does not have a unique
identifier such as a Digital Object Identifier (DOI), given that such
identification is not adequate (as is the case for the software modules made
available). However, when such an identifier is adequate, it is used; for
example, that is the case for the scientific publication entitled “Termite:
Emulation Testbed for Encounter Networks”.
For the documents, it is very easy to obtain a global view of them all as
well as to locate any specific one. This is made particularly easy by the
list already mentioned (provided in Appendix 1). In addition, the metadata
that is used fully describes the most relevant aspects of the documents.
As mentioned above, the data resulting from the TRACE project is of three
types: i) documents used for disseminating the TRACE project results (both
intermediate and final), ii) data collected with the software that was
developed and used in pilots during the TRACE project, and iii)
software/hardware developed. The documents of the first type are freely
accessible; the data of the second type is made freely accessible only after
being anonymized, for privacy reasons. Finally, software/hardware
availability has already been addressed in Section 4.
The data that is freely accessible is made available either from the TRACE
web site or from the TRACE Google repository. Given that all data is provided
in traditional open formats (e.g., PDF, CSV), any adequate free tool can be
used. In particular, for those who want to use any of the software modules
made freely available, there are easy-to-follow instructions. It is worth
noting that such software modules can be accessed (with any browser) given
that they are stored on GitHub (a well-known and widely used repository for
software).
## Making data interoperable
The open data set that is provided (which resulted from the collected data
and was later anonymized) follows a very simple format (i.e., plain text in a
CSV file), allowing data exchange and re-use between researchers,
institutions, organisations, countries, etc. (see Section 4.3.1). As a matter
of fact, the data has been used as input to the TRACE tool called TAToo
(already mentioned) and was made as simple as possible so that no unnecessary
restrictions were imposed.
## Increase data re-use (through clarifying licences)
The data that is provided by TRACE is open, i.e., it is free for use by
others. This data is already available either from the TRACE web site or from
the Google repository, it can be used after TRACE has finished, and it
remains accessible as it is.
# Allocation of resources
Making the data publicly available had no associated cost. As a matter of
fact, the only somewhat related cost is the travelling/materials used for the
presentations that were given. So, with respect to the specifics of making
the data available once it has been produced, there is no cost.
Regarding the future (i.e., after the TRACE project has finished), the data
remains available to others. The cost of such hosting was taken into account,
as the services used are free; for example, servers hosted by INESC ID,
GitHub servers, etc. Such hosting is free and will be provided as long as the
institutions keep such a policy (which is not expected to change in the near
future).
# Data security
The data that is provided raises no security/privacy concerns. This is
obviously valid for the documents and software. The issue that required some
care was the data collected from users with the tools already mentioned
(Biklio, PD, and TSG). The raw data that was collected no longer exists, and
what has been made available to the community is an anonymized data set with
the structure described in Section 4.3.1.
In addition, given that the data being considered is stored in data centres,
its availability is guaranteed.
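As a rough illustration of the kind of anonymization step described above, the following Python sketch drops identity fields and replaces user identifiers with opaque trajectory IDs; the raw-file layout and all names are assumptions for illustration and do not reflect the actual TRACE pipeline.

```python
import csv

pseudonyms = {}  # original user ID -> opaque trajectory ID

with open("raw_tracks.csv", newline="") as src, \
     open("anonymized.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    # Assumed raw layout: user, name, email, timestamp, lat, lon, mode.
    for user, name, email, ts, lat, lon, mode in csv.reader(src):
        # Assign a stable opaque ID the first time a user is seen.
        traj_id = pseudonyms.setdefault(user, len(pseudonyms) + 1)
        writer.writerow([traj_id, ts, lat, lon, mode])  # name/email are dropped
```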
# Ethical aspects
The data that was collected (with Biklio, PD and TSG), as well as the
software/hardware that was developed and used, complied with all national
security/privacy requirements (i.e., those of the pilot sites) as well as
with the fundamental principle of privacy by design. That is why all the data
provided in the open data set previously mentioned was carefully anonymized
before being made public. In addition, all users were informed, prior to the
data collection, about the purpose of the data being collected and all other
related aspects.
# Conclusion
This document presents essential information about the Data Management Plan
of the TRACE project. It describes the main outputs of the TRACE project
regarding documentation, data collected, and software/hardware produced, and
which of them can be accessed under what circumstances. A major concern of
the consortium was to always provide open data / open software to the
community so that anyone interested could use it freely for research
purposes, with no associated cost.
**Appendix 1.** Output from TRACE
## Monitoring of Output from TRACE
As already mentioned, the TRACE project keeps in its Google repository an
Excel file with all the output that has resulted from the project during its
three years of duration. This file has several sheets and is rather long. For
this reason, we do not show it here; the file is available at the URL
_https://docs.google.com/spreadsheets/d/1XqtwT8eziJuBD8Met23iIIdTmevlRpFPTTjJ60v0Kk/edit#gid=603467175_
This file includes the following sheets:
* TRACE events
* External events
* Publications
* Online
* Press Releases
* Press – Media
* Other dissemination activities
## TRACE events
The sheet “TRACE Events” is used to register UPCOMING events (relative to the
moment of insertion) directly organized by TRACE partners to promote the
TRACE project. The table was updated continuously with information on the
events, the target groups addressed and any need for TRACE promotion
materials. Polis distributed the corresponding materials to maximize the
visibility of TRACE.
Thus, the “TRACE events” sheet also allows us to easily report which events
the project partners have been involved in.
After an event has taken place, the table was updated with additional
information (e.g., the event agenda, the presentations given, an abstract and
pictures from the event, all uploaded to the Google TRACE repository in the
sub-folder “TRACE events”).
Thus, there is a subfolder, within the events folder, for each event, named
with the date, place and name of the event that was organized (e.g.,
20160218_Agueda_TRACE local focus group) to make later recall easier.
## External events
The sheet “External Events” was used to register UPCOMING events in which
partners participated to promote the TRACE project. The table was updated
continuously with information on the events, the target groups addressed and
any need for TRACE promotion materials. Polis distributed the materials to
maximize the visibility of TRACE.
The “External events” sheet also allows us to easily report which events the
project partners have been involved in.
After an event has taken place, the table was updated with additional
information (e.g., the event agenda, the presentation if one was given, an
abstract and pictures from the event, all uploaded to the Google TRACE
repository in the sub-folder “External events”).
Thus, there is a subfolder, within the events folder, for each event, named
with the date, place and name of the event attended (e.g.,
20160218_Athens_ECOMM 2016).
## Publications
The “Publications” sheet is used for monitoring any publication made by
project partners during the project. The website where the publication is
available is indicated; the publication’s PDF is also uploaded to the Google
TRACE repository subfolder “Publications”. The file names are composed of the
first author’s last name, the venue acronym and the year.
## Online
When a local electronic newsletter, Facebook account or other social media
channel for disseminating the project is created or used, it is indicated on
this page. The number of visitors to the website is updated, as well as the
number of “likers” on Facebook and followers on Twitter.
## Press releases
When a press release is written, the corresponding information about it is
inserted in the sheet “Press releases”. The title is written in English so
that everybody knows what it is about. It is also indicated whether the press
actually used the press release as the source for an article, etc.
The press release itself is uploaded to the Google TRACE repository subfolder
named “Press Releases”.
## Press media
This sheet monitors press activities in the media. It could also be called
“TRACE in the media”. This sheet reports all media activity about TRACE that
a partner is aware of, e.g., an article in the local newspaper, a feature
about the TRACE project on national television, etc.
If a copy of the media activity is available, it is uploaded to the Google
TRACE repository subfolder named “Press-Media”. The copyright of the articles
or TV features is also checked before uploading them; if uploading is not
possible, only the link to the online content is provided.
## Other Dissemination Activities

If the activity to be described is not covered by any of the other categories,
it is registered in this sheet and the corresponding material is uploaded to
the TRACE repository sub-folder named "Other".
---

0814_PHA-ST-TRAIN-VAC_675370.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
The data generated for this project are being recorded in Word and Excel
files, which are proprietary but widely used and easily convertible to rtf/csv
if needed. This means that the data can easily be opened and used by others
without the need for specialist software.
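As a minimal sketch of such a conversion, assuming the pandas and openpyxl
Python libraries are available (the file names are illustrative, not the
project's actual files):

```python
import pandas as pd

# Convert each sheet of an Excel workbook to a standalone CSV file.
workbook = pd.read_excel("project_data.xlsx", sheet_name=None)  # dict of DataFrames
for sheet_name, frame in workbook.items():
    frame.to_csv(f"project_data_{sheet_name}.csv", index=False)
```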
There is no agreed standard for recording data and metadata in this area of
research, but the researchers will follow the processes outlined in this plan
to make the data as easy as possible for others to understand. The
dataset-level metadata made available via Pure and/or DataCite via the DOI is
searchable and standardised, and so should facilitate automated searching and
assessment, although not automated combining of the underlying data.
Where researchers use shorthand vocabulary to describe data, for example in
file names, protocols or column headings, these terms will be described in the
Excel spreadsheet, or linked to from the spreadsheet and held with the project
files. This should enable others to understand the work that has been
undertaken.
# 2.3. Making data interoperable
# 2.4. Increase data re-use (through clarifying licences)
As it is likely that data will be subject to extreme confidentiality
restrictions, it is not possible to specify any licences for data sharing at
this stage. This plan will be updated with further detail should it become
possible to make any dataset available. The Pure system currently includes
Creative Commons and Public Data Commons licences as standard, and USTRATH has
the option to expand these classifications if a more appropriate licence is
identified for a dataset.
Data are subject to a 4-year non-disclosure agreement, but wherever this is
found not to apply, data will be shared upon completion of the dataset. At the
expiration of the non-disclosure agreement, any data that can be shared will
be made available; the only foreseeable impediment to this sharing is
commercialisation.
The current practice at USTRATH is to retain data for 10 years unless there is
a stipulation not to for legal reasons. At the review point, the researchers
will be consulted and the statistics related to data access considered before
a decision is taken about whether to retain or destroy the data.

Data created are sampled and spot-checked by the students' supervisor, using
the spreadsheet record as the starting point. This ensures that the prescribed
systems are being followed.
**3\. Allocation of resources**

The costs related to research data management & FAIR fall into three
categories:

1\. The opportunity cost of time taken by researchers during research to
record and annotate the outputs of their research effectively: this is good
practice and excellent training for our students' future careers.

2\. Costs for data storage during and beyond the project: data will be stored
in various areas & systems on the USTRATH infrastructure, which is a robust
and resilient storage network designed to meet the needs of our researchers
throughout the research lifecycle. This commitment by USTRATH is made as part
of our commitment to excellence in research, with the knowledge that costs are
only partially recuperated via FEC.

3\. Costs for data preparation and curation: the cost of supporting
researchers in data management planning, mediated data deposit and maintenance
of files over time is again undertaken by USTRATH as part of its commitment to
excellence in research.

Quantifying these costs numerically in relation to one project is not
feasible.

# Data security

What provisions are in place for data security (including data recovery as
well as secure storage and transfer of sensitive data)?

Is the data safely stored in certified repositories for long-term preservation
and curation?

Data generated by the project will be stored at GSK and at USTRATH.

At USTRATH: During research, data will be stored on Dropbox and backed up on
Strathcloud in folders accessible only to the student and their supervisor.
Strathcloud sits upon USTRATH's storage, which is dual-sited and replicated
between two data centres physically separated by several hundred metres. Data
links between the datacentres are provided by dual disparate fabrics,
providing added resilience. Additionally, the central I.T. service provides
tape-based backup to a third and fourth site. Completed data will be deposited
in USTRATH's Pure system, which is also based on the storage detailed above.
Data can be recovered via Strathcloud or Pure at the system level, or by using
the replication/backup options if needed. Data retention processes for Pure
have been detailed in an earlier section of this DMP.

At GSK: GSK Vaccines has validated systems and SOPs to ensure that data are
collected, processed, transmitted and archived in a way that guarantees data
confidentiality and integrity, and applies industry standards when available
(e.g. CDISC). These systems and processes are regularly audited by the GSK
Vaccines Quality department and have successfully undergone a significant
number of inspections by EMEA, FDA, PMDA and other national authorities.

Should transfer of data between sites be required at any point, Strathcloud
will be used to enable the transfer as it facilitates encrypted transfer of
files.
# Ethical aspects
Are there any ethical or legal issues that can have an impact on data sharing?
These can also be discussed in the context of the ethics review. If relevant,
include references to ethics deliverables and ethics chapter in the
Description of the Action (DoA).
Is informed consent for data sharing and long term preservation included in
questionnaires dealing with personal data?
There is a legally binding non-disclosure agreement in place that restricts
the project's ability to make data openly available, as discussed earlier in
the plan.

Ethical approval has been obtained at both sites, as is required for the data
that the project will use. The ethics approval documents are stored with the
data and will be used to direct the research in conjunction with this data
management plan.
---

0815_BALANCE_731224.md | Horizon 2020 | https://phaidra.univie.ac.at/o:1140797
1. INTRODUCTION
BALANCE intends to develop a Data Management Plan, to be updated during the
project lifetime. This document also aims to pool results generated in the
project that may lead to intellectual property (IP). This data management plan
(DMP) will thus contain all forms of knowledge generated by the project.
Whenever significant changes arise in the project, such as
* new data sets
* changes in consortium policies
* new, exploitable results
* external factors

a new version of the DMP shall be uploaded taking into account the major
developments. In any case, the DMP shall be updated as part of the mid-term
and final project reviews of BALANCE.
2. OBJECTIVE
The objective of the DMP is to provide a structured form of repository for the
consultation of data, measurements, facts and know-how gathered during the
project, for the benefit of more systematic progress in science. Where the
knowledge developed in the EU-funded project is not governed by intellectual
property for the purpose of commercial exploitation and business development,
it is important to valorize the results of project activities by facilitating
take-up of key data and information for further elaboration and progress by
other projects and players in Europe.
3. STRUCTURE OF THE DMP
The DMP will give an outline of knowledge that stands at the basis of BALANCE
(“Background”) in the form of Data sets and Patents that are employed in the
project. It is then necessary to define the data sets to be gathered within
the project lifetime, both through indexing and description of data origin,
nature, scale and purpose. To facilitate referencing and reuse of data,
appropriate meta-data (data about the data) shall be provided. This implies
also a policy on the ways data can or will be shared. Finally, plans on how
the data will be stored long-term need to be expressed. A similar structure is
maintained for IP generated in the project, but tabled separately. The DMP
shall be elaborated on behalf of each BALANCE partner to begin with, and may
be redesigned to represent the data and IP repository for BALANCE as a whole
if deemed necessary or more coherent.
In detail, the following information will be requested from each partner in
the form of 3 distinct tables for previous knowledge, data generated and
results (exploitable outcome) generated:
# Background data
Identifiers for the know-how/data sets that are utilized within the project,
based on previous assets. These should have a univocal reference, that can
trace to the set of data leading to the background knowledge utilized in the
project.
# Data set reference and name, and approximate size
Identifier for the data set to be produced. This should be a univocal
reference, ultimately possibly a DOI (digital object identifier). The scale of
the data set should be indicated (number and bytes size of files or of data
points).
# Data set or result description
Description of the data that will be generated or collected, its origin (in
case it is collected), nature (in case it is result of original work or
elaboration) and whether it underpins a scientific publication. For results,
the nature/form of the outcome should be defined. A description of the
technical purpose of the data/results will be given. The target end user and
the existence (or not) of similar data/results and the possibilities for
integration and reuse may be indicated.
# Standards, metadata and data storage
Reference to existing suitable standards, codes, regulations, guidelines or
best practices the data have complied to and/or are akin to. If these do not
exist, an outline on methodology and how metadata can/will be created should
be given. For results, also planned restrictions on IP sharing should be
indicated. There should be a description of the procedures that will be put in
place for long-term preservation of the data: how long the data should be
preserved, what is approximated end volume, what the associated costs are and
how these are to be covered.
# Data sharing and channels for exploitation
Description of how exploitable outcome will be brought forward and developed.
For data, how these will be shared, including access procedures, embargo
periods (if any), outlines of technical mechanisms for dissemination and
necessary software and other tools for enabling re-use, and definition of
whether access will be widely open or restricted to specific groups.
Identification of the repository where data will be stored, if already
existing and identified, indicating in particular the type of repository
(institutional, standard repository for the discipline, etc.).
In case the dataset cannot be shared, the reasons for this should be mentioned
(e.g. ethical, IP).
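Taken together, the fields requested above can be captured per data set in a
simple machine-readable record. The following Python sketch is our own
illustration of such a record, not a schema prescribed by BALANCE; all field
names and values are examples:

```python
from dataclasses import dataclass, field

@dataclass
class DataSetRecord:
    """One row of the per-partner data-set table, as described above."""
    reference: str              # univocal identifier, ultimately possibly a DOI
    size: str                   # scale: number and byte size of files or data points
    description: str            # origin, nature and technical purpose
    standards: list[str] = field(default_factory=list)  # standards/metadata complied to
    sharing: str = "restricted" # access procedure: open, restricted, embargoed, ...
    repository: str = ""        # where the data will be stored long-term
    restrictions: str = ""      # reasons the data cannot be shared, e.g. ethical, IP

record = DataSetRecord(
    reference="doi:10.xxxx/example",  # placeholder, not a real DOI
    size="12 files, 3.4 GB",
    description="Stack characterisation measurements (experiments)",
    standards=["IEC TC105, 62282-8-101"],
    sharing="open after embargo",
    repository="institutional repository",
)
```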
4. BALANCE DATA SETS IDENTIFIER – GENERAL
Call Topic: H2020: LCE-33-2016
Type of action: ECRIA
Proposal number: 731224
Start of project: 01.12.2016; end of project: 30.11.2019
Project focus: To gather leading research centres in Europe in the domain of
Solid Oxide Electrolysis (SOE) and Solid Oxide Fuel Cells (SOFC) to
collaborate and accelerate the development of European Reversible Solid Oxide
Cell (ReSOC) technology, through targeted research activities as well as
through alignment of national programmes and projects on ReSOC and energy
storage.
5. PARTNER-SPECIFIC DATA SETS
5.1.VTT
<table>
<tr>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
Reversible stack data and setup
</td>
<td>
Kotisaari, M., et al. "Evaluation of a SOE Stack for Hydrogen and Syngas
Production: a Performance and Durability Analysis." Fuel Cells (2016).
</td> </tr>
<tr>
<td>
Stack component degradation analysis
</td>
<td>
Thomann, O., et al. "Post-experimental analysis of a solid oxide fuel cell
stack using hybrid seals." Journal of Power Sources 274 (2015): 1009-1015.
</td> </tr>
<tr>
<td>
SOFC system modelling and experimental characterisation
</td>
<td>
Halinen, M., et al. "Performance of a 10 kW SOFC demonstration unit." ECS
Transactions 35.1 (2011): 113-120. Halinen, M. et al.. "Experimental study of
SOFC system heatup without safety gases." International Journal of Hydrogen
Energy 39.1 (2014): 552-561.
</td> </tr>
</table>

**Knowledge produced and shared by partner during the project**

| Data set identifier and scale (amount of data) | Origin & nature (literature, experiments, analysis, modelling, etc.) | Purpose (technical description) | Metadata (standards, references) | Restrictions (patents, IP, other) | Data storage means | Peer-reviewed scientific articles (green/gold diff.) | Other publications (leaflets, reports, …) | Other tools (website, newsletter, press releases) | Events (seminars, workshops, conferences, fairs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Stack characterisation after design optimisation | Experiments | Connect design innovation with improved performance (efficiency, durability, cost-effectiveness) | | None | Digital, on VTT and BALANCE supports | TBD | Deliverable 3.5 | Website | Conference |
| ReSOC system (modelling and experimental) | Experiments & modelling | Optimisation of system efficiency, flexibility and cost-effectiveness | | None | Digital, on VTT and BALANCE supports | TBD | Deliverable 4.1 | Website | Conference and workshop |

**Results produced during the project for exploitation, and tools and channels for their exploitation**

| Result identifier and nature (dataset, prototype, app, design, publication, etc.) | Function and purpose (technical description) | Restrictions (patents, IP, other) | Metadata (standards, references) | Target end user | In-house exploitation | Events (brokerage, conferences, fairs) | Marketing | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| System operation strategy | To optimise reaction speed of the system to energy demand in order to maximise operation profit | Possible generation of IP | | ReSOC system integrators | N.A. | Conferences and fairs | Integrate in VTT offering / marketing material | |
| Stack design innovation | Improvement of stack design in terms of performance and cost-effectiveness | Possible generation of IP | | ReSOC stack developers | N.A. | Conferences and fairs | Integrate in VTT offering / marketing material | |
| Stack characterisation expertise | Stack performance needs to be assessed by an independent organisation | N.A. | | ReSOC developers (system and stack) | Yes, for internal stack development | Conferences and fairs | Integrate in VTT offering / marketing material | |
5.2.CEA
<table>
<tr>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
SOEC stack design
</td>
<td>
\- M. Reytier, S. Di Iorio, A. Chatroux, M. Petitjean, J. Cren, M. De Saint
Jean, J. Aicart, J. Mougin, « Stack performances in high temperature steam
electrolysis and co-electrolysis », Int. Journal Hydrogen Energy 40/35 (2015)
11370–11377 - Article to be published in ECS Transactions in 2017
</td> </tr>
<tr>
<td>
Test procedures for ReSOC stacks
</td>
<td>
IEC TC105 documents, restricted to IEC use
SOCTESQA project (FCH JU, Grant Agreement 621245) for the definition of SOFC,
SOEC and rSOC stack test procedures
</td> </tr>
<tr>
<td>
Oxidation tests for interconnects/coatings
</td>
<td>
* M. Stange, C. Denonville, Y. Larring, A. Brevet, A. Montani, O. Sicardy, J. Mougin, P.O. Larsson, “Improvement of corrosion properties of porous alloy supports for solid oxide fuel cells, Int. Journal of hydrogen energy (2017) 1-11, http://dx.doi.org/10.1016/j.ijhydene.2017.03.170
* M. Stange, C. Denonville, Y.Larring, C. Haavik, A. Brevet, A. Montani, O. Sicardy J. Mougin, P.O. Larsson, “Coating developments for Metal-supported Solid Oxide Fuel Cells”, 11th European SOFC Forum 1-4 July 2014, Luzern, A1406 (2014).
* M. Stange, C. Denonville, Y. Larring, C. Haavik, A. Brevet, A. Montani, O. Sicardy, J. Mougin, P.O. Larsson, “Coating Developments for Metal-Supported Solid Oxide Fuel Cells”, ECS Transactions 57 (1) (2013) 511-520
* P.-O.Santacreu, P. Girardon, M. Zahid, J. Van Herle, A. Hessler-Wyser, J. Mougin, V. Shemet, “On Potential Application of
Coated Ferritic Stainless Steel Grades K41X and K44X in SOFC/HTE
Interconnects”, ECS Transactions, 35 (1) (2011) 24812488
</td> </tr>
<tr>
<td>
SOEC and rSOC system design and operation
</td>
<td>
* A. Chatroux, M. Reytier, S. Di Iorio, C. Bernard, G. Roux, M. Petitjean, J. Mougin, “A Packaged and Efficient SOEC System Demonstrator”, ECS Transactions, 68 (1) (2015) 3519-3526
* A. Chatroux, S. Di Iorio, G. Roux, C. Bernard, J. Mougin, M. Petitjean, M. Reytier, “Power to Power efficiencies based on a SOFC/SOEC reversible system”, 12th European SOFC&SOE Forum 5-8 July 2016, Luzern, B1104 (2016).
</td> </tr>
</table>

**Knowledge produced and shared by partner during the project, and tools for the diffusion of knowledge created by the project**

| Data set identifier and scale (amount of data) | Origin & nature (literature, experiments, analysis, modelling, etc.) | Purpose (technical description) | Metadata (standards, references) | Restrictions (patents, IP, other) | Data storage means | Peer-reviewed scientific articles (green/gold diff.) | Other publications (leaflets, reports, …) | Other tools (website, newsletter, press releases) | Events (seminars, workshops, conferences, fairs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Performance and durability results with reSOC modified stack design | Comparison of experimental results of Balance with available literature | Improvement of performance, durability and flexibility | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | Use of public data about stack design | Balance deliverables, private section of website | Publications | Deliverables, leaflets | Website | TBD |
| Test procedures for BALANCE ReSOC stacks | Literature (international standards) and other projects (SOCTESQA) | Harmonized test conditions and results presentation | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | IEC standards are not for free: confidentially shared within consortium | Balance deliverables, private section of website | TBD | TBD | Possible feedback to IEC and SOCTESQA | TBD |
| Results of oxidation tests on interconnects/coatings | Comparison of experimental results of Balance with available literature | Improvement of performance, durability and flexibility | / | None | Balance deliverables, private section of website | Publications | Deliverables, leaflets | Website | TBD |
| Performance of rSOC system | Comparison of experimental results of Balance with available literature | Improvement of performance, efficiency and flexibility | / | Use of public data about system design | Balance deliverables, private section of website | Publications | Deliverables, leaflets | Website | TBD |

**Results produced during the project for exploitation, and tools and channels for their exploitation**

| Result identifier and nature (dataset, prototype, app, design, publication, etc.) | Function and purpose (technical description) | Restrictions (patents, IP, other) | Metadata (standards, references) | Target end user | In-house exploitation | Events (brokerage, conferences, fairs) | Marketing | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Modified ReSOC stack design | Improvement of performance, durability and flexibility | Use of public data about stack design | / | Stack or system manufacturer | For own R&D programs + technology transfer | Conferences, fairs | / | / |
| Test procedures for BALANCE ReSOC stacks | Harmonized test conditions and results presentation | IEC standards are not for free: confidentially shared within consortium | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | Other R&D partners, stack and system manufacturers | For own R&D programs | Conferences | / | / |
| Results of oxidation tests on interconnects/coatings | Improvement of performance, durability and flexibility | None | / | Stack components or stack manufacturers | For own R&D programs | Conferences | / | / |
| Performance of rSOC system | Improvement of performance, efficiency and flexibility | Use of public data about system design | / | ReSOC system manufacturers or integrators | For own R&D programs + technology transfer | Conferences, fairs | / | / |
5.3.DTU
<table>
<tr>
<th>
</th>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
Test procedures for ReSOC stacks
</td>
<td>
IEC TC105 documents, restricted to IEC use
SOCTESQA project (FCH JU, Grant Agreement 621245) for the definition of SOFC,
SOEC and rSOC stack test procedures
</td> </tr>
<tr>
<td>
ReSOC test methodology
</td>
<td>
M.Chen et al., Final project report for ForskEL 2011-1-10609 Development of
SOEC Cells and Stacks,
_http://www.energinet.dk/SiteCollectionDocuments/Danske dokumenter/Forskning -
PSO-projekter/10609 ForskEL 2011 Final Report.pdf_
M. Chen et al., Final project report for ForskEL 2013-1-12013 Solid Oxide
Electrolysis for Grid Balancing,
https://energiforskning.dk/sites/energiteknologi.dk/files/slutrapporter/final_report_12013.pdf
</td> </tr>
<tr>
<td>
State-of-the-art SOC cells developed at DTU
</td>
<td>
A. Hauch, K. Brodersen, M. Chen, and M. B. Mogensen, Ni/YSZ electrodes
structures optimized for increased electrolysis performance and durability.
Solid State Ionics, Vol. 293, 2016, p. 27-36.
K. Brodersen, A. Hauch, M. Chen, and J. Hjelm, “Durable Fuel Electrode”,
European Patent Application no. 15181381.3 - 1360, submitted in August, 2015.
</td> </tr>
<tr>
<td>
SOC performance characterization and interpretation
</td>
<td>
Søren Koch, Christopher Graves and Karin Vels Hansen, Elchemea Analytical
software, _https://www.elchemea.dk/_ Christopher Graves, 2012, RAVDAV Data
Analysis Software.
</td> </tr>
<tr>
<td>
SEM postmortem analysis
</td>
<td>
K. Thyden, Y. L. Liu , and J. B. Bilde-Sørensen, Microstructural
Characterization of SOFC Ni–YSZ Anode Composites by LowVoltage Scanning
Electron Microscopy,” Solid State Ionics, 178(39–40), 2008, pp. 1984–1989.
</td> </tr>
<tr>
<td>
Interconnect coatings and oxidation testing
</td>
<td>
S. Molin, P. Jasinski, L. Mikkelsen, W. Zhang, M. Chen, and P. V. Hendriksen,
Low Temperature Processed MnCo 2 O 4 and MnCo 1.8 Fe 0.2 O 4 as
Effective Protective Coatings for Solid Oxide Fuel Cell Interconnects at
750°C, _Journal of Power Sources,_ **336** 408-418 (2016).
D. Szymczewska, S. Molin, M. Chen, P. Jasiski, and P. V. Hendriksen, Corrosion
Study of Ceria Protective Layer Deposited by Spray Pyrolysis on Steel
Interconnects, _Advances in Solid Oxide Fuel Cells and Electronic Ceramics II:
Ceramic Engineering and Science Proceedings Volume 37,_ [3] 79 (2016).
</td> </tr>
<tr>
<td>
High pressure testing set-up and methodology
</td>
<td>
X. Sun, A. D. Bonaccorso, C.R. Graves, S.D. Ebbesen, S. H. Jensen, A. Hagen et
al. Performance Characterization of Solid Oxide Cells Under High Pressure.
Fuel Cells. 2015;15(5):697-702.
S.H. Jensen, X. Sun, S.D. Ebbesen, R. Knibbe, M. Mogensen, Hydrogen and
synthetic fuel production using pressurized solid oxide electrolysis cells,
International Journal of Hydrogen Energy. 35 (2010) 9544–9549.
S. H. Jensen, X. Sun, S. D. Ebbesen, M. Chen. Pressurized Operation of a
Planar Solid Oxide Cell Stack. Fuel Cells. 2016;16(2):205–218
</td> </tr> </table>

**Knowledge produced and shared by partner during the project, and tools for the diffusion of knowledge created by the project**

| Data set identifier and scale (amount of data) | Origin & nature (literature, experiments, analysis, modelling, etc.) | Purpose (technical description) | Metadata (standards, references) | Restrictions (patents, IP, other) | Data storage means | Peer-reviewed scientific articles (green/gold diff.) | Other publications (leaflets, reports, …) | Other tools (website, newsletter, press releases) | Events (seminars, workshops, conferences, fairs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Performance and durability of SOC in reversible operation | Comparison of experimental results achieved in Balance with available literature data | Improved performance and efficiency | / | None | Balance deliverables, private section of website / Participant Portal H2020 | Publications | Deliverables, leaflets | Website | TBD |
| Post-mortem analysis results | Comparison of experimental results of Balance with available literature | Understanding the degradation mechanisms | / | None | Balance deliverables, private section of website / Participant Portal H2020 | Publications | Deliverables, leaflets | Website | TBD |
| Test procedure for ReSOC testing in BALANCE project | Literature (international standards) and other projects (SOCTESQA) | Harmonized test conditions and results presentation | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | IEC standards are not for free: confidentially shared within consortium | Digital, on ENEA and BALANCE supports | TBD | TBD | Possible feedback to IEC and SOCTESQA | TBD |
| Results on IC coating and oxidation testing | Comparison of experimental results of Balance with available literature | Improved corrosion resistance and lifetime | / | None | Balance deliverables, private section of website / Participant Portal H2020 | Publications | Deliverables, leaflets | Website | TBD |

**Results produced during the project for exploitation, and tools and channels for their exploitation**

| Result identifier and nature (dataset, prototype, app, design, publication, etc.) | Function and purpose (technical description) | Restrictions (patents, IP, other) | Metadata (standards, references) | Target end user | In-house exploitation | Events (brokerage, conferences, fairs) | Marketing | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test procedures for BALANCE ReSOC cells/stacks | Harmonized test conditions and results presentation | IEC standards are not for free: confidentially shared within consortium | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | Other R&D partners, stack and system manufacturers | For own R&D programs | Conferences | / | / |
| Results of oxidation tests on interconnects/coatings | Improved corrosion resistance and lifetime | None | / | Stack components or stack manufacturers | For own R&D programs | Conferences | / | / |
| SOCs with improved performance and durability for low temperature ReSOC application | Improved cell component materials and production methods | Patents | / | Other R&D partners and stack manufacturers | For own R&D programs | Conference | / | / |
5.4.ENEA
<table>
<tr>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
DRT analysis methods for EIS
analysis
</td>
<td>
C. Boigues-Muñoz et al. _Journal of Power Sources_ 286: 321329 (2015)
C. Boigues-Muñoz et al. _Journal of Power Sources_ 294: 658668 (2015)
</td> </tr>
<tr>
<td>
Test procedures for ReSOC stacks
</td>
<td>
IEC TC105 documents, restricted to IEC use
SOCTESQA project (FCH JU, Grant Agreement 621245) for the definition of SOFC
stack test procedures
</td> </tr>
<tr>
<td>
Innovative cell mapping set-up
</td>
<td>
Article under publication
</td> </tr>
</table>

**Knowledge produced and shared by partner during the project**

| Data set identifier and scale (amount of data) | Origin & nature (literature, experiments, analysis, modelling, etc.) | Purpose (technical description) | Metadata (standards, references) | Restrictions (patents, IP, other) | Data storage means | Peer-reviewed scientific articles (green/gold diff.) | Other publications (leaflets, reports, …) | Other tools (website, newsletter, press releases) | Events (seminars, workshops, conferences, fairs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test procedures for BALANCE stacks (1 reference document and 9 protocol documents) | Literature (international standards) and other projects (SOCTESQA) | Harmonized test conditions and results presentation | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | IEC standards are not for free: confidentially shared within consortium | Digital, on ENEA and BALANCE supports | | | Possible feedback to IEC and SOCTESQA | |
| Platforms and database for inventory and mapping of national ReSOC programmes | Dedicated questionnaire, FCH JU databases, national associations | Generating a common research agenda for EU on ReSOC | EERA repositories, others TBD | None (public information to be gathered) | Digital, on ENEA and BALANCE supports | No | Project promotion sheets, flyers, position paper | EERA channels | TBD |

**Results produced during the project for exploitation, and tools and channels for their exploitation**

| Result identifier and nature (dataset, prototype, app, design, publication, etc.) | Function and purpose (technical description) | Restrictions (patents, IP, other) | Metadata (standards, references) | Target end user | In-house exploitation | Events (brokerage, conferences, fairs) | Marketing | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Validated ReSOC cells for technology benchmarking | Test criteria for validation according to IEC standard 62282-8-101 | IEC standard is proprietary | IEC standard 62282-8-101 | ReSOC developers and integrators | Test bench reliability improvements | Conferences | n.a. | |
| Innovative cell performance validation | Simultaneous measurement of electrochemical performance and process identification, and locally resolved gas composition and temperature | Internal know-how, but no formal restrictions | See references | ReSOC developers and integrators | Simultaneous analysis procedure optimization and tool development | Scientific conferences | n.a. | |
5.5.UoB
<table>
<tr>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
rSOC testing procedures
</td>
<td>
PhD thesis James Watton, 2016, FCH JU project reports, FCTESTQA, etc.
protocols
</td> </tr>
<tr>
<td>
SOFC/SOE test rigs
</td>
<td>
own development
</td> </tr>
<tr>
<td>
in-house fabricated SOC cells
</td>
<td>
PhD thesis Nikkia McDonald (2016), ongoing PhD thesis Anisa Nor Arifin (exp.
2018)
</td> </tr>
<tr>
<td>
in-house developed method to modify SOC anodes
</td>
<td>
PhD thesis Lina Troskialina (2016)
</td> </tr>
</table>

**Knowledge produced and shared by partners during the project**

| Data set identifier and scale (amount of data) | Origin & nature (literature, experiments, analysis, modelling, etc.) | Purpose (technical description) | Metadata (standards, references) | Restrictions (patents, IP, other) | Data storage means | Peer-reviewed scientific articles (green/gold diff.) | Other publications (leaflets, reports, …) | Other tools (website, newsletter, press releases) | Events (seminars, workshops, conferences, fairs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Performance and durability of SOC in reversible operation | Comparison of experimental results achieved in Balance with available literature data | Improved performance and efficiency | / | None | Balance deliverables, private section of website / Participant Portal H2020 | Publications | Deliverables, leaflets | Website | TBD |
| Post-mortem analysis results | Comparison of experimental results of Balance with available literature | Understanding the degradation mechanisms | / | None | Balance deliverables, private section of website / Participant Portal H2020 | Publications | Deliverables, leaflets | Website | TBD |
| Test procedure for ReSOC testing in BALANCE project | Literature (international standards) and other projects (SOCTESQA) | Harmonized test conditions and results presentation | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | IEC standards are not for free: confidentially shared within consortium | Digital, on ENEA and BALANCE supports | TBD | TBD | Possible feedback to IEC and SOCTESQA | TBD |

**Results produced during the project for exploitation, and tools and channels for their exploitation**

| Result identifier and nature (dataset, prototype, app, design, publication, etc.) | Function and purpose (technical description) | Restrictions (patents, IP, other) | Metadata (standards, references) | Target end user | In-house exploitation | Events (brokerage, conferences, fairs) | Marketing | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test procedures for BALANCE ReSOC cells/stacks | Harmonized test conditions and results presentation | IEC standards are not for free: confidentially shared within consortium | IEC TC105, 62282-8-101 and SOCTESQA protocols under development | Other R&D partners, stack and system manufacturers | For own R&D programs | Conferences | / | / |
| Results of reversible coelectrolysis | Proof-of-concept | Patents | / | / | For own R&D programs | Conferences | Marketing of IP | / |
| SOCs with improved performance and durability for low temperature ReSOC application | Improved cell component materials and production methods | Patents | / | Other R&D partners and stack manufacturers | For own R&D programs | Conference | Marketing of IP | / |
5.6.TUD
<table>
<tr>
<th>
</th>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
Thermodynamic modelling software
</td>
<td>
Cycle tempo – An in house thermodynamic modelling software developed by TU
Delft
</td> </tr>
<tr>
<td>
Single cell experimental set up
</td>
<td>
Facility for i-V curve and impedance measurements.
1. Kinetics of internal methane steam reforming in SOFCs and its influence on cell performance, _ECS Transactions_, 57 (1) 2741-2751 (2013)
2. Influence of operation conditions on carbon deposition in SOFCs fuelled by tar containing biosyngas, _ECS Transactions_, 35 (1) 2701-2712 (2011)
</td> </tr>
<tr>
<td>
Expertise in thermodynamic modelling of power plants
</td>
<td>
1. Thermodynamic analysis of coupling a SOEC in co-electrolysis mode with dimethyl ether synthesis, _Fuel cells Wiley_, DOI 10.1002/fuce.201500016
2. Thermodynamic analysis of Solid Oxide Fuel Cell Gas Turbine systems operating with various biofuels, _Fuel cells Wiley_, DOI 10.1002/fuce.201200062
3. Thermodynamic evaluation and experimental validation of 253 MW integrated coal gasification combined cycle power plant in Buggenum, Netherlands, _Applied Energy_, 155, page 181
4. Thermodynamic system studies for a NGCC plant with CO2 capture and hydrogen storage with metal hydrides, _Energy Procedia_, 63, page 1996
</td> </tr> </table>
<table>
<tr>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & Nature (literature, experiments, analysis, modelling, etc.) _
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peerreviewed_
_scientific articles_
_(green/gold_
_diff.)_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
Identification of different process chains for ReSOC system
integration with the grid – ( _1 technical document)_
</td>
<td>
Literature study (including published work from different countries)
</td>
<td>
To have a complete understanding of which process
chains fit with different system configurations
</td>
<td>
NA
</td>
<td>
None
</td>
<td>
Internal website of Process &
Energy (TU
Delft), digital project
platform of
BALANCE
</td>
<td>
To be mutually agreed between respective partners
</td>
<td>
To be decided
</td>
<td>
Project website
</td>
<td>
To be discussed
</td> </tr>
<tr>
<td>
Thermodynamic modelling of the entire system – steady state and dynamic (
_models, technical description of the models_ )
</td>
<td>
Modelling and analysis
</td>
<td>
For integrated system
development,
to identify process
inefficiencies, aid in
individual
component development
</td>
<td>
NA
</td>
<td>
None
</td>
<td>
Internal website of Process &
Energy (TU
Delft), digital project
platform of
BALANCE
</td>
<td>
To be mutually agreed between respective partners
</td>
<td>
Project leaflets
</td>
<td>
Project website
</td>
<td>
To be discussed
</td> </tr> </table>
<table>
<tr>
<th>
**Results produced during the project for exploitation**
</th>
<th>
**Tools and channels for the exploitation of results created by the project**
</th> </tr>
<tr>
<td>
_Result identifier and nature (dataset, prototype, app, design, publication,
etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage,_
_conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
A complete steady state working model of the ReSOC
grid integrated
system under different scenarios
</td>
<td>
To gain a complete understanding of the behaviour of the system
</td>
<td>
Possible generation of
IP
</td>
<td>
</td>
<td>
ReSOC
stack developers, power companies
</td>
<td>
Feed data for system
development
and testing at a real scale
</td>
<td>
International, European &
national level conferences, Workshops, summer schools
</td>
<td>
TBD
</td>
<td>
</td> </tr>
<tr>
<td>
Dynamic
model of the
ReSOC
system for grid stabilization
</td>
<td>
To provide insights as to what can possibly happen during transient operation
</td>
<td>
Possible generation of
IP
</td>
<td>
</td>
<td>
ReSOC
stack developers, power companies
</td>
<td>
Feed data for system
development
and testing at a real scale
</td>
<td>
International, European &
national level conferences, Workshops, summer schools
</td>
<td>
TBD
</td>
<td>
</td> </tr>
<tr>
<td>
Technoeconomic and LCA of system
</td>
<td>
Economic assessment for real scale implementation
</td>
<td>
</td>
<td>
</td>
<td>
Government agencies, EU commission, possible technology investors
</td>
<td>
</td>
<td>
Extension to other possible projects
</td>
<td>
</td>
<td>
</td> </tr> </table>
5.7.EPFL
<table>
<tr>
<th>
</th>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
Test procedures for SO stacks and cells
</td>
<td>
Internal know-how and test protocols, FCH-JU-Design reports
Load cycling and reversible SOE/SOFC operation in intermediate temperature
steam electrolysis, Montinaro D,Dellai A,Modena S,Ghigliazza F,Bertoldi
M,Diethelm S,Pofahl S,Bucheli O,Van herle J. _Proc 5th European Fuel Cell
Piero Lunghi Conference_ , 2013, p 151-152
</td> </tr>
<tr>
<td>
Oxidation tests for interconnects/coatings
</td>
<td>
Ferritic (18% Cr) with and without ceramic coating for interconnect
application in SOFC, J Van herle, Aïcha Hessler-Wyser, Philippe Buffat, Max
Aeberhard, Thomas
Nelis, Michele Molinelli, Pierre-Olivier Santacreu, Thomas Kiefer, Frank Tietz
, Proc 2nd Eur. Fuel Cell Technology & Applications Conference EFC2007, Dec
11-14,
2007, Rome, Italy, Paper EFC2007-39199
Potential application of coated ferritic stainless steel grades K41X and K44X
in SOFC/HTE interconnects, Santacreu P.-O., Girardon P., Zahid M., Van herle
J., HesslerWyser A., Mougin J., Shemet V., _ECS Transactions_ 35 (PART 3),
2011, pp. 2481-2488.
Evaluation of protective coatings for SOFC interconnects, Tallgren, J.,
Bianco, M., Himanen, O., Thomann, O., Kiviaho, J., Van herle, J. , _ECS
Transactions_ 68 (1), pp.
1597-1608 (2015)
Properties of spinel protective coatings prepared using wet powder spraying
for SOFC interconnects, Hong, J., Bianco, M., Van herle, J., Steinberger-
Wilckens, R,
_ECS Transactions_ **68** (1), pp. 1581-1587 (2015)
</td> </tr>
<tr>
<td>
Interconnect testing set-up description
</td>
<td>
Technical drawings, internal report O. Cornes, FCH-JU-SOFCLife reports, FCH-
JU-Scored reports
</td> </tr>
<tr>
<td>
Single cell experimental set-up description
</td>
<td>
Anode supported SOFC with screen-printed cathodes, J. Van herle, R. Ihringer,
R. Vasquez, L. Constantin, O. Bucheli, _J. Eur. Ceram. Soc._ 21 (10-1)
1855-1859 (2001) Solid Oxide Fuel Cell Anode Degradation by the Effect of
Siloxanes, Hossein Madi, Andrea Lanzini, Stefan Diethelm, Davide Papurello,
Jan Van herle, Matteo Lualdi, Jørgen Gutzon Larsen and Massimo Santarelli,
_Journal of Power Sources (2015)_ 279, 460-471. J. Sfeir, PhD thesis (2002)
</td> </tr>
<tr>
<td>
Stack experimental set-up description
</td>
<td>
Current collection, stacking of anode-support cells with metal interconnects
to compact repeating units , M. Molinelli, D. Larrain, R. Ihringer, L.
Constantin, N. Autissier, O. Bucheli, D. Favrat, J. Van herle,
_Electrochemical Society Proceedings_ Vol 2003-07, Pennington, NJ, p. 905-913
(2003)
Performance of solid oxide fuel cell stacks under partial internal reforming
of methane, Stefan Diethelm and Jan Van herle, _Proc Eur Fuel Cell Forum_ ,
Lucerne (CH), June 2011, EFCF, Obgardihalde 2, CH-6043 Adligenswil, paper B902
Electrolysis and Co-electrolysis performance of SOE short stacks, Diethelm,
S., Van herle, J., Montinaro, D., Bucheli, O., _Fuel Cells_ , 13 (4), pp.
631-637
</td> </tr>
<tr>
<td>
SRU Segmented cell set-up description
</td>
<td>
Local current measurement in a solid oxide fuel cell repeat element Frédéric
Ravussin, Jan Van herle, Nordahl Autissier, Michele Molinelli, Diego Larrain,
Daniel Favrat, _J. Eur. Ceram. Soc_ . 27 (2-3), 1035-1040 (2007)
Investigation of Local Electrochemical Performance and Local Degradation in an
Operating SOFC, Z. Wuillemin, A. Müller, A. Nakajo, N. Autissier, S. Diethelm,
M.
Molinelli, J. Van herle, D. Favrat, _Proc. 8 th Eur. SOFC Forum _ , Lucerne
(CH), July 2008, EFCF, Morgenacherstr. 2F, CH-5452 Oberrohrdorf (CH), paper
B1009, 20 p.
Locally-resolved study of degradation in a SOFC repeat-element, Wuillemin, Z.,
Nakajo, A., Müller, A., Schüler, A.J., Diethelm, S., Van herle, J., Favrat,
D., _ECS Transactions_ Volume 25, Issue 2 PART 1, 2009, Pages 457-466, 11th
International Symposium on SOFC (SOFC-XI), 216th ECS Meeting; Vienna; 4-9 Oct
2009
</td> </tr>
<tr>
<td>
Cell/stack data analysis tools
</td>
<td>
Steam and co-electrolysis sensitivity analysis on Ni-YSZ supported cells,
Rinaldi, G., Diethelm, S., Van herle, J. _ECS Transactions_ 68 (1), 2015, pp.
3395-3406. Investigation of 2R-cell degradation under thermal cycling and
RedOx cycling conditions by electrochemical impedance spectroscopy, Diethelm,
S., Singh, V., Van herle, J. _ECS Transactions_ 68 (1), 2015, pp. 2285-2293.
H. Madi, PhD thesis (2016), P. Caliandro Ph D thesis (2017 – in preparation)
</td> </tr>
<tr>
<td>
Post test microscopy characterisation tools
</td>
<td>
Rapid preparation and SEM microstructural characterization of Nickel-YSZ
cermets, Christian Monachon, Aïcha Hessler-Wyser, Antonin Faes, Jan Van herle,
Enrico
Tagliaferri, _J. Amer. Ceram. Soc._ (2008) 91 (10) 3405-07
Ni-zirconia anode degradation and triple phase boundary quantification from
microstructural analysis , Faes, A., Hessler-Wyser, A., Presvytes, D.,
Vayenas, C.G., Vanherle, J. , (2009) Fuel Cells, 9 (6), pp. 841-851. DOI
10.1002/fuce.200800147).
PhD thesis A Faes (2011), PhD thesis A Schuler (2012), PhD thesis Q Jeangros
(2014)
TEM investigation on zirconate formation and chromium poisoning in LSM/YSZ
cathode , Hessler-Wyser, A., Wuillemin, Z., Schuler, J.A., Faes, A., Van
herle, J. ,
(2011) _Journal of Materials Science_ 46 (13), 4532-4539
Comparison of SOFC Cathode Microstructure Quantified using Xray Nanotomography
and Focused Ion Beam Scanning Electron Microscopy, George J. Nelson,
William M. Harris, Jeffrey J. Lombardo, John R. Izzo, W.K.S. Chiu, P.
Tanasini, M. Cantoni, J. Van herle, C. Comninellis, Joy C. Andrews, Yijin Liu,
Piero Pianetta, and Yong S. Chu, _Electrochemistry Communications_ 13(6),
586-589 (2011)
Accessible Triple-Phase Boundary Length: A Performance Metric to Account for
Transport Pathways in Heterogeneous Electrochemical Materials, A.Nakajo,
A.P.Cocco, M.B.DeGostin, A.A.Peracchio, B.N.Cassenti, M.Cantoni, J.Van herle,
W.K.S. Chiu, _J Power Sources_ 325, 786-800
Post-test Analysis on a Solid Oxide Cell Stack Operated for 10,700 Hours in
Steam Electrolysis Mode, Rinaldi, G., Diethelm, S., Oveisi, E., Burdet, P.,
Van herle, J., Montinaro, D., Fu, Q., Brisse, A., Fuel Cells, Article in
Press.
</td> </tr>
<tr>
<td>
Methanation experimental set-up description
</td>
<td>
Technical drawings and design calculations H. Madi
</td> </tr>
<tr>
<td>
SOFC cell/SRU/stack models
</td>
<td>
Generalized model of a planar SOFC repeat element for design optimization, D.
Larrain, N. Autissier, F. Maréchal, J. Van herle, D. Favrat, _J. Power
Sources_ , 131, 304312 (2004)
Simulation of stack and repeat elements including interconnect degradation and
anode oxidation risk D. Larrain, J. Van herle, D. Favrat, _J. Power Sources_
161 (2006)
392-403
Electrochemical model of SOFC for simulation at the stack scale II.
Implementation of degradation processes, Nakajo A., Tanasini, P. Diethelm, S.,
Van herle, J., Favrat, D., _Journal of the Electrochemical Society_ 158,
B1102-B1118 (2011)
Mechanical reliability and durability of SOFC stacks. Part I: Modelling of the
effect of operating conditions and design alternatives on the reliability ,
Nakajo A., Mueller F., Brouwer J., Van herle J., Favrat D., (2012)
_International Journal of Hydrogen Energy_ 37 (11), pp. 9249-9268
</td> </tr>
<tr>
<th>
SOEC models
</th>
<th>
FCH-JU SOPHIA reports
</th> </tr>
<tr>
<td>
OSMOSE energy system optimisation platform
</td>
<td>
http://ipese.epfl.ch
</td> </tr>
<tr>
<td>
Optimisation system models on SOFC/SOEC, with H2/CH4 production pathways
</td>
<td>
_http://ipese.epfl.ch_ publications
PhD thesis N Autissier (2008), PhD thesis E Facchinetti (2011)
Energy balance model of a SOFC cogenerator operated with biogas. J. Van herle,
F. Maréchal, S.Leuenberger, D. Favrat, _J. Power Sources_ 118 (2003), 375-383
Process flow model of SOFC system supplied with sewage biogas, J. Van herle,
F. Maréchal, S. Leuenberger, Y. Membrez, O. Bucheli, D. Favrat , _J. Power
Sources_ ,
131, 127-141 (2004)
A methodology for thermo-economic modeling and optimization of sofc systems F
Palazzi, N Autissier, F Maréchal, J Van herle, Chem. Eng. Trans, 7, 13-18
(2005)
Thermo-economic optimisation of a solid oxide fuel cell - gasturbine hybrid
system, N. Autissier, F. Palazzi, J. Van herle, F. Maréchal, D. Favrat ,
_Journal of Fuel Cell Science & Technology _ 4, May 2007, 123-129
</td> </tr>
<tr>
<td>
</td>
<td>
**Knowledge produced and shared by partner during the project**
</td>
<td>
</td>
<td>
**Tools for the diffusion of knowledge created by the project**
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & _
_Nature_
_(literature, experiments, analysis, modelling, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peerreviewed_
_scientific articles_
_(green/gold_
_diff.)_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Performance and durability results with reSOC cells and stacks
</td>
<td>
Experimental results and their analysis
</td>
<td>
Improvement of performance, durability and flexibility
</td>
<td>
</td>
<td>
None
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
tbd
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
MIC/coating oxidation results
</td>
<td>
Experimental results and their analysis
</td>
<td>
Improvement of performance,
</td>
<td>
</td>
<td>
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
tbd
</td>
<td>
</td>
<td>
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
</th>
<th>
durability and flexibility
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th>
<th>
</th> </tr>
<tr>
<td>
Results on segmented reSOC SRU
</td>
<td>
Experimental results and their analysis
</td>
<td>
Understanding of operation, degradation
</td>
<td>
</td>
<td>
In house design
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
tbd
</td> </tr>
<tr>
<td>
Post test analysis on MIC/coatings, cells, stacks
</td>
<td>
Experimental observations and their analysis
</td>
<td>
Understanding degradation, quantification of
microstructures
</td>
<td>
</td>
<td>
In house techniques
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
Tbd
</td> </tr>
<tr>
<td>
Results on methanator testing
</td>
<td>
Experimental results and their analysis
</td>
<td>
Dynamic operation
</td>
<td>
</td>
<td>
In house design
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
Tbd
</td> </tr>
<tr>
<td>
ReSOC systems performance analysis and process chains
</td>
<td>
Literature,
Flowsheets, Modeling results
</td>
<td>
Identification of ReSOC operation routes and integration
</td>
<td>
</td>
<td>
OSMOSE platform
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
Tbd
</td> </tr>
<tr>
<td>
Technoeconomic and LC analysis with ReSOC
</td>
<td>
Flowsheets, Modeling results
</td>
<td>
System optimisation
</td>
<td>
</td>
<td>
OSMOSE platform
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
Publications
</td>
<td>
Deliverable reports
</td>
<td>
Website
</td>
<td>
Tbd
</td> </tr>
<tr>
<td>
Mapping of national ReSOC programmes (PEM, batteries) and industry partners
</td>
<td>
Databases, national associations, websites
</td>
<td>
Generating the
common
research agenda for EU on reSOC
</td>
<td>
</td>
<td>
Confidential data
</td>
<td>
EPFL HDs,
Balance
Deliverables
</td>
<td>
</td>
<td>
Report
</td>
<td>
Website
</td>
<td>
Tbd
</td> </tr> </table>
<table>
<tr>
<th>
**Results produced during the project for exploitation**
</th>
<th>
**Tools and channels for the exploitation of results created by the project**
</th> </tr>
<tr>
<td>
_Result identifier and nature (dataset, prototype, app, design, publication,
etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage,_
_conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
Test procedures for reSOC cells/stacks
</td>
<td>
Harmonized test conditions and results presentation
</td>
<td>
</td>
<td>
</td>
<td>
R&D partners, stack & system manufacturers
</td>
<td>
Own R&D, test
rig
improvements
</td>
<td>
Conferences
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Results on reSOC
cells/stacks operation
</td>
<td>
ReSOC
capability
</td>
<td>
Material supplier
</td>
<td>
</td>
<td>
Tbd
</td>
<td>
Knowledge on reSOC
capability
</td>
<td>
tbd
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Results on MIC/coating oxidation
</td>
<td>
Behaviour under ReSOC conditions
</td>
<td>
Material
supplier
</td>
<td>
</td>
<td>
Tbd
</td>
<td>
Knowledge on MICs under reSOC cond., test rig
improvements
</td>
<td>
tbd
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Methanator
test rig and results
</td>
<td>
Dynamic methanator operation
</td>
<td>
</td>
<td>
</td>
<td>
R&D partners, system manufacturers
</td>
<td>
Knowledge on methanation
</td>
<td>
Conferences
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Technoeconomic model of integrated
ReSOC system
</td>
<td>
Optimisation
</td>
<td>
</td>
<td>
</td>
<td>
Gov agencies,
EU
</td>
<td>
Model improvement
</td>
<td>
Conferences
</td>
<td>
</td>
<td>
</td> </tr> </table>
## 5.8 IEN
<table>
<tr>
<th>
</th>
<th>
**Knowledge owned by Partner before the project used for the project**
</th> </tr>
<tr>
<td>
_**Data sets** _
</td>
<td>
_**Patents/References** _
</td> </tr>
<tr>
<td>
ASR express-test setup
</td>
<td>
Know-how and Book Golec T. et al. Selected aspects of the design and operation
of the first Polish residential micro-CHP unit based on solid oxide fuel
cells, ISBN: 978-83-7789-394-4 (Kicinski J., Cenian A., Lampart P., ed.), 2015
[in Polish]
</td> </tr>
<tr>
<td>
Modelling (steady-state and dynamic), system optimization and numerical
simulations of
SOFC/SOEC/SOFEC
</td>
<td>
* Kupecki J., Milewski J., Szczesniak A., Bernat R., Motylinski K., Dynamic numerical analysis of cross, co-, and counter-current flow configurations of a 1 kW-class solid oxide fuel cell stack, International Journal of Hydrogen Energy 2015;40(45):15834–15844
* Kupecki J., Off-design analysis of a micro-CHP unit with solid oxide fuel cells fed by DME, International Journal of Hydrogen Energy 2015;40(35):12009 -12022
* Kupecki J. Modelling of physical, chemical and material properties of solid oxide fuel cells, Journal of Chemistry, Vol.1, 414950, 2015
* Kupecki J., Jewulski J., Milewski J., Multi-Level Mathematical Modeling of Solid Oxide Fuel Cells [in] Clean Energy for Better Environment, ISBN: 978-953-51-0822-1, pp. 53-85, Intech, Rijeka, 2012
* Kupecki J., Milewski J., Jewulski J., Investigation of SOFC material properties for plant-level modeling, Central European Journal of Chemistry 2013;11(5):664-671
* Kupecki J. Modeling platform for a micro-CHP system with SOFC operating under load changes, Applied Mechanics and Materials 2014;607:205-208
* Kupecki J., Błesznowski M. Multi-parametric model of a solid oxide fuel cell stack for plant-level simulations [in] Book of abstracts (ModVal 11) ISBN 978-80-263-0576-7, pp. 86, 2014
* Kupecki J., Integrated Gasification SOFC Hybrid Power System Modeling: Novel numerical approach to modeling of advanced power systems,. ISBN: 978-3639286144 VDM Verlag Dr. Müller, Saarbrucken, 2010
</td> </tr>
<tr>
<td>
Experimental
characterization of SOFC/SOEC/SOFEC
</td>
<td>
Know-how, several internal reports \+
Kupecki J., Mich D., Motylinski K., Computational fluid dynamics analysis of
an innovative start-up method of high temperature fuel cells using dynamic 3D
model, Polish Journal of Chemical Technology
2017;19(1):67-73
</td> </tr>
<tr>
<td>
Experimental techniques of SOC stacks
</td>
<td>
Know-how and Book Golec T. et al. Selected aspects of the design and operation
of the first Polish residential micro-CHP unit based on solid oxide fuel
cells, ISBN: 978-83-7789-394-4 (Kicinski J., Cenian A., Lampart P., ed.), 2015
[in Polish]
</td> </tr>
<tr>
<td>
Control strategies for
SOFC/SOEC/SOFEC during fault modes and regular operation
</td>
<td>
* Motylinski K., Kupecki J., Milewski J., Stefanski M., Bonja M., Control-oriented dynamic model of a
1 kW-class SOFC stack for simulation of failure modes, Proceedings of XXI
World Hydrogen Energy Conference (WHEC 2016), Zaragoza, Spain, 13-16 VI 2016,
pp. 357
* Kupecki J., Motylinski K., Thermal management of a SOFC stack during the reformer failure – a numerical study using dynamic model [in] Energy and Fuels 2016 (Dudek M., Olkuski T., Suwala
W., Lis B., Pluta M. eds), ISBN: 978-83-911589-9-9, pp. 33
* Motylinski K., Kupecki J., Naumovich Y., Numerical model for evaluation of the effects of carbon deposition on the performance of a 1 kW SOFC stack – a proposal [in] Energy and Fuels 2016
(Dudek M., Olkuski T., Suwala W., Lis B., Pluta M. eds), ISBN:
978-83-911589-9-9, pp. 82
</td> </tr> </table>
<table>
<tr>
<th>
</th>
<th>
**Knowledge produced and shared by partners during the project**
</th>
<th>
</th>
<th>
**Tools for the diffusion of knowledge created by the project**
</th> </tr>
<tr>
<td>
_Data set identifier and scale (amount of data)_
</td>
<td>
_Origin & _
_Nature_
_(literature, experiments, analysis, modelling, etc.)_
</td>
<td>
_Purpose (technical description)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Data storage means_
</td>
<td>
_Peer-reviewed scientific articles (green/gold diff.)_
</td>
<td>
_Other publications (leaflets, reports, …)_
</td>
<td>
_Other tools (website, newsletter, press releases)_
</td>
<td>
_Events (seminars, workshops, Conferences, fairs)_
</td> </tr>
<tr>
<td>
Data of interface resistivity for interconnect Cr-barrier –
CCM
</td>
<td>
Experiment
</td>
<td>
Effective selection of the CCMs for cathode
</td>
<td>
</td>
<td>
no
</td>
<td>
digital project platform of BALANCE,
internal IEn facilities
</td>
<td>
To be mutually agreed between
respective partners
</td>
<td>
TBD (unlikely)
</td>
<td>
TBD
</td>
<td>
To be mutually agreed between
respective partners
</td> </tr>
<tr>
<td>
Short stack performance
</td>
<td>
Experiment
</td>
<td>
Confirmation of the expected performance of the ReSOC short stacks
</td>
<td>
</td>
<td>
no
</td>
<td>
digital project platform of BALANCE,
internal IEn facilities
</td>
<td>
To be mutually agreed between
respective partners
</td>
<td>
TBD (must be agreed)
</td>
<td>
TBD (must be agreed)
</td>
<td>
To be mutually agreed between
respective partners
</td> </tr>
</table>
<table>
<tr>
<th>
**Results produced during the project for exploitation**
</th>
<th>
**Tools and channels for the exploitation of results created by the project**
</th> </tr>
<tr>
<td>
_Result identifier and nature (dataset, prototype, app, design, publication,
etc.)_
</td>
<td>
_Function and purpose (technical description)_
</td>
<td>
_Restrictions (Patents, IP, other)_
</td>
<td>
_Metadata (Standards, references)_
</td>
<td>
_Target end user_
</td>
<td>
_In-house exploitation_
</td>
<td>
_Events (Brokerage,_
_conferences, fairs)_
</td>
<td>
_Marketing_
</td>
<td>
_Other_
</td> </tr>
<tr>
<td>
ASR data for cathode CCM results – data set
</td>
<td>
Data for selection of the appropriate materials and technology to build stack
</td>
<td>
Uncertain
</td>
<td>
</td>
<td>
Producer of the ReSOC stack and components
</td>
<td>
Knowledge for ReSOC stack design
</td>
<td>
Presentation on SOFC-targeted conferences, papers
</td>
<td>
Unlikely
</td>
<td>
</td> </tr> </table>
0816_SWITCH_643963.md
# Executive summary
The data generated in the SWITCH project can be roughly classified into two
main groups: 1) documentation including internal documents (design documents,
quality reports or technical reports), deliverables, multimedia content and
other documents; and 2) technical solutions, including ontologies, knowledge
base instance data, time series obtained by monitoring, and software source
code. All data and metadata generated will be compliant with industry standards
or certain initiatives. Documentation will be stored on internal shared cloud
storage (currently, Google drive is used), while technical solutions will be
available through knowledge base, web page or public open source repositories
(currently, Github is being considered). All the data will be licensed –
documents will include Creative Commons Attribution (CC BY) license and SWITCH
technical solutions will be protected under Apache v2.0 license. The SWITCH
technical solutions do not include pilot applications used to demonstrate
project results. These are subject to their individual licensing which is
further regulated in the SWITCH consortium agreement. During the project
lifetime, project partner BEIA will curate all datasets except for developed
software for which project partner WT will be responsible. After the end of
the project all the data will be archived and preserved for 5 years by the
Project coordinator, UvA. Further attempts will be made to store information
for longer periods through standardization and by joining the Open Research
Data Pilot. The Data Management Plan is a living document, which will evolve
through the lifespan of the project. Future updates will be agreed upon with
the partners.
# Introduction
## Objectives
The SWITCH Data Management Plan is based on the “Guidelines for Data
Management in Horizon 2020” provided by the European Commission. The objective
of the SWITCH Data Management Plan is to answer the following questions:
* What types of data will the project generate/collect?
* What standards will be used for data and metadata?
* How will this data be exploited and/or shared/made accessible for verification and reuse?
* How will this data be curated and preserved?
* Who are the responsible partners for curating these data and metadata?
## Related tasks
The SWITCH Data Management Plan (D2.2) relates to all work packages, as each
work package will generate some kind of data. Project partner UL is organising
this activity, while all the partners are contributing. D2.2 is part of WP2:
SWITCH Interactive Development Environment (SIDE). D2.2 directly relates to
Task 2.3: Data management plan and the SWITCH knowledge base.
# A Data Management Plan for SWITCH
## Background information
The overall objective of the SWITCH project is to address the entire life-
cycle of time-critical, self-adaptive Cloud applications by developing new
middleware, front-end tools and services to enable users to specify their
time-critical requirements for an application interactively using a direct
manipulation user interface, deploy their applications and adapt the
infrastructure to changing requirements either automatically (using the
specified requirements) or by human intervention, if desired.
During the project's lifetime many different sets of data will be generated,
including documents, source code, metadata and ontologies. Thus, appropriate
data management methodologies and procedures should be introduced. The SWITCH
Data Management Plan will ensure that all data that the project generates will
meet certain standards, will be stored appropriately, allowing access to
authorised parties, and will be accessible and available after the project
ends.
We anticipate that the results of the SWITCH project will be of great interest
to different organisations and individuals. This initial version of the Data
Management Plan takes this into consideration as follows:
1. Software industry and individual software engineers: SWITCH will generate data and metadata related to Cloud application monitoring, flexibility and efficiency of various middleware solutions in given context, infrastructure topology, Quality of Service (QoS) and Quality of Experience (QoE) metrics, etc., which may be of interest to software engineering companies. For example, software companies that are dealing with video streaming, teleconferencing or sensor based Cloud applications.
2. Consultancy companies: various research papers produced by the SWITCH project as well as other data and metadata generated by the project may be of interest to consultancy companies. Such companies might need to better understand the future
of the Cloud, and how time-critical applications will be designed, deployed
and managed in federated Clouds. This in turn may contribute to higher quality
of their provided services.
3. Cloud service providers: data centres that provide Service level agreements (SLAs) for critical services might deploy SWITCH as Software as a Service (SaaS); the SLAs specified by the SWITCH project as part of DRIP (WP3) may be appreciated by Cloud service providers that would like to provide guaranteed Quality of Service related to the networking part to their customers.
4. Small and medium-sized enterprises (SMEs) and entrepreneurs: enterprises that operate time critical services, which are built, deployed or controlled using SWITCH, or that develop new applications with time critical requirements. In all these cases, the metadata and data generated by the SWITCH project could prove useful.
5. Service consumers: consumers will want to understand the business model and technologies in developing, deploying and operating time critical applications in Clouds.
6. Research and education organizations, such as universities: SWITCH data and metadata could be used for education/training purposes (e.g. UL has a 3rd-degree Bologna course in “Development of Distributed Engineering Applications”), or by administrators of universities’ computing centres, or other research-oriented infrastructures, such as the European Grid Initiative (EGI).
7. Time critical application specialists in specific domains: from the analysis of the use cases, SWITCH data and metadata could be useful to a wide collection of domains which require time critical services in collaborative business environments, video & entertainment, disaster warning and others.
8. Non-specialists: Currently the datasets that will be produced during the project will not offer significant value to non-specialists, as the datasets will not be adjusted for their use. If significant interest is shown in the datasets by non-specialist groups, further effort will be made to accommodate their needs.
9. Regulatory bodies, public administrations, and investors: the documents or publications generated by SWITCH will provide input for regulatory bodies and public administrations to make decisions on setting regulations in specific application domains, such as disaster early warning, or to understand the technical details when investing in companies related to time critical cloud applications.
All parties interested in available data will be permitted access to data of
their interest in accordance with the definitions in this document. This will
foster the impact of the project and allow more people to benefit from project
outcomes.
## Dataset description
The main datasets generated by the SWITCH project are summarized in Table 3-1.
**Table 3-1: Dataset description**
<table>
<tr>
<th>
**Output**
</th>
<th>
**Tag**
</th>
<th>
**Instances**
</th>
<th>
**Origin**
</th> </tr>
<tr>
<td>
Documentation
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Information about partners
</td>
<td>
INF
</td>
<td>
• Documents
</td>
<td>
Produced during the project.
</td> </tr>
<tr>
<td>
Deliverables
</td>
<td>
D
</td>
<td>
• Documents
</td>
<td>
Produced during the project.
</td> </tr>
<tr>
<td>
Multimedia
</td>
<td>
MM
</td>
<td>
* Video
* Sound
* Pictures
</td>
<td>
Produced during the project for the means of dissemination and exploitation.
</td> </tr>
<tr>
<td>
Other documents
</td>
<td>
OD
</td>
<td>
* Documents
* Spreadsheets
* Presentations
* Internal documents (design docs, quality reports, or technical reports)
</td>
<td>
Produced during the project for the means of project
coordination, dissemination and exploitation.
</td> </tr>
<tr>
<td>
Research papers
</td>
<td>
RP
</td>
<td>
• Documents
</td>
<td>
Produced during the project as a result of a research work.
</td> </tr>
<tr>
<td>
Technical solutions
</td>
<td>
</td>
<td>
</td>
<td>
</td> </tr>
<tr>
<td>
Ontologies
</td>
<td>
ONT
</td>
<td>
* QoS ontologies
* QoE ontologies
* Monitoring ontologies
* Elasticity ontologies
</td>
<td>
Produced as a result of the work during the project.
</td> </tr>
<tr>
<td>
Knowledge base instance data
</td>
<td>
KBI
</td>
<td>
• RDF
</td>
<td>
Produced as a result of the work during the project.
</td> </tr>
<tr>
<td>
Time series obtained by monitoring
</td>
<td>
TSM
</td>
<td>
• Database files
</td>
<td>
Produced as a result of the work during the project.
</td> </tr>
<tr>
<td>
Software source code
(developed in the project) 1
</td>
<td>
SSC
</td>
<td>
• Source code files
</td>
<td>
Produced as a result of the work during the project.
</td> </tr> </table>
## Data and Metadata Standards
To ensure the widest possible use the data will be stored in widely adopted
file formats and will be compliant with industry standards or initiatives like
RDA 2 , ISO 3 , OGF 4 , OMG 5 , OASIS 6 , IETF 7 , IEEE 8 and
W3C 9 . Metadata will use Dublin Core 10 standard set where possible. This
will lead to an open solution for future data access and harmonisation
enabling interoperability to allow data exchange between researchers,
organisations and other interested parties.
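As a purely illustrative sketch (not a format mandated by SWITCH), a minimal Dublin Core record for one of the datasets of Table 3-1 can be produced with Python's standard library; all field values below are hypothetical examples.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

# Hypothetical Dublin Core fields for a monitoring time-series dataset (TSM).
fields = {
    "title": "SWITCH monitoring time series (example)",
    "creator": "SWITCH consortium",
    "type": "Dataset",
    "format": "text/csv",
    "identifier": "doi:10.xxxx/example",  # placeholder, not a real DOI
    "rights": "Apache License 2.0",
}

record = ET.Element("metadata")
for name, value in fields.items():
    ET.SubElement(record, "{%s}%s" % (DC_NS, name)).text = value

print(ET.tostring(record, encoding="unicode"))
```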
## Policies for Access and Storage
The datasets have different levels of access and different means of storage
described in the Table 3-2. Datasets, that are publicly available, will be
uniquely identifiable and discoverable by using a standard identification
mechanism, such as a Digital Object Identifier (DOI). Moreover software and
data produced will be stored and equipped with the context necessary to
reproduce findings in the project, making datasets intelligible and
assessable. This way we will ensure that all parties that have the ambition to
exploit and/or review project results will have the proper means to find
relevant information and will be equipped with minimal necessary software,
data, and information to reproduce these results and findings.
**Table 3-2: Policies for access and storage**
<table>
<tr>
<th>
**Tag**
</th>
<th>
**Storage**
</th>
<th>
**Access**
</th> </tr>
<tr>
<td>
Documentation
</td>
<td>
</td> </tr>
<tr>
<td>
INF
</td>
<td>
Shared cloud storage 11
</td>
<td>
Restricted
</td> </tr>
<tr>
<td>
D
</td>
<td>
Shared cloud storage
</td>
<td>
Restricted
</td> </tr>
<tr>
<td>
MM
</td>
<td>
Shared cloud storage
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
OD
</td>
<td>
Shared cloud storage
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
RP
</td>
<td>
Shared cloud storage
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
Technical solutions
</td>
<td>
</td> </tr>
<tr>
<td>
ONT
</td>
<td>
Knowledge base
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
KBI
</td>
<td>
Knowledge base
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
TSM
</td>
<td>
Knowledge base
</td>
<td>
Publicly available
</td> </tr>
<tr>
<td>
SSC
</td>
<td>
Public open source repository 12
</td>
<td>
Publicly available
</td> </tr> </table>
2. Research Data Alliance www.rd-alliance.org/
3. International Organisation for Standardisation, www.iso.org
4. Open Grid Forum, www.ogf.org
5. Object management group, www.omg.org
6. Advanced Open Standards for the Information Society, www.oasis-open.org
7. International engineering task force, www.ietf.org
8. IEEE, www.ieee.org
9. World Wide Web Consortium (W3C) www.w3.org
10. Dublin Core Metadata Initiative http://dublincore.org
11. Currently, Google Drive is used as shared cloud storage among partners.
12. Github will be used.
## Policies for re-use, distribution
Access to databases and associated software tools generated under the project
will be available under the Apache v2.0 license. Such access will be
provided using web-based applications, as appropriate.
Similarly, the Creative Commons Attribution (CC BY) licence will be used
for all publicly available documents. Licensees may copy, distribute, display
and perform the work and make derivative works based on it only if they give
the author or licensor credit.
## Plans for Archiving and Preservation
### Short term
During the project’s lifetime project partner BEIA will take care of data
storage and employ appropriate preservation techniques; project partner WT
will curate the software produced by the SWITCH project. All partners will
contribute their prepared data sets. After the project's conclusion all the
data will be archived and maintained on an internally accessible server and
made available on request at no charge to the user.
### Long term
After the conclusion of the project, partner UvA will curate the data
originally curated both by partner WT and partner BEIA along with all other
data needed for it to be useful. UvA will store this data for 5 years after
the end of the project including managing the webpage, where datasets, papers,
software, and other data will be made available to general public. Further
efforts will be made to preserve data after that period, such as joining the
Open Research Data Pilot. The sustainability of the project results, such as
software and ontology, will also be achieved by nurturing open source
communities of developers and users, and by industrial exploitation after the
project.
### Standardization
Standards will be used, aiming to make usage of the data as wide and
interoperable as possible while ensuring that the archiving and preservation
of the dataset is made possible beyond the project lifetime. Activities
related to uptake and advancement of standards will be of benefit to the
project and will be led by the Scientific Coordinator. The goal is to strongly
support the dissemination and upgrading of project results, widen the
exploitation potential of project output, and provide the project with access
to a large pool of external expertise. Participating in standardization
processes may bring to the project higher international recognition and new
opportunities for collaboration.
Specific target standardization bodies will be identified. The initial list
will include RDA, OGF, OMG, OASIS, IETF, IEEE and W3C. The initial list can be
refined depending on project achievements to also include other
standardization organisations. Special attention shall be given to Semantic
Web related standards and proposals for standards.
Influence on the different standardization activities depends on: (a) high
quality technical work; and (b) adequate participation of Consortium partners
within the standardization committees. The project will contribute to the
European and worldwide standardization bodies in order to ensure and increase
Europe’s participation and contribution to the international standardization
processes, today largely dominated by countries from other continents. The
Innovation and exploitation Coordinator (IEC) will be in charge to monitor and
detect possible important developments that should be taken into account by
the Consortium, as well as possible technical contributions to standards from
the project.
# Abbreviations
CC BY – Creative Commons Attribution License
DOI – Digital Object Identifier
DRIP – Dynamic Real-time Infrastructure Planner
EGI – European Grid Initiative
IEC – Innovation and exploitation Coordinator
IEEE – Institute of Electrical and Electronic Engineers
IETF – International Engineering Task Force
ISO – International Organisation for Standardization
OASIS – Advanced Open Standards for the Information Society
OGF – Open Grid Forum
OMG – Object Management Group
RDA – Research Data Alliance
SaaS – Software as a Service
SIDE – SWITCH Interactive Development Environment
SLA – Service Level Agreement
SME – Small and medium-sized enterprise
W3C – World Wide Web Consortium
0821_MIR-BOSE_737017.md
# WHO IS IN CHARGE OF THE DMP AND OF THE DATA
_Person in charge of DMP_ Raffaele Colombelli ([email protected]). The
DMP will be updated in collaboration with the partners of the project.
_Data ownership_ Results are owned by the Beneficiaries that generate them.
Joint ownership:
* each of the joint owners shall be entitled to use their jointly owned Results for non-commercial research activities on a royaltyfree basis, and without requiring the prior consent of the other joint owner(s), and
* each of the joint owners shall be entitled to otherwise Exploit the jointly owned Results and to grant non-exclusive licenses to third parties (without any right to sub-license), if the other joint owners are given:
1. at least 45 calendar days advance notice;
2. Fair and Reasonable conditions;
3. the possibility to discuss/modify such conditions.
# RESOURCES REQUIRED FOR MANAGING THE DMP
<table>
<tr>
<th>
_Hardware_
</th>
<th>
Hard drive and server space will be employed.
</th> </tr>
<tr>
<td>
_Staff effort_
</td>
<td>
Each beneficiary is responsible for the conservation of their generated data.
Each beneficiary is also responsible for placing the relevant data on an open
access platform
</td> </tr>
<tr>
<td>
_Costs_
</td>
<td>
The data generated are essentially ASCII files, with their accurate
description; therefore the cost is expected to be marginal.
</td> </tr> </table>
# WHAT ARE THE DATASETS
_Datasets_ Datasets will be defined as the data generated by the proposed
project. In the case of this project, we expect most of the datasets to be
ASCII files.
## DATA DESCRIPTION
_Data type_ **Raw data**. Most experiments performed within the consortium
involve recording properties of semiconductor active and passive devices, such
as reflectivity, transmission, photoluminescence, and time-resolved versions
of these data.
The primary (i.e., raw) forms of data will be:
* mostly ASCII files (.dat or .txt files); this will be the primary form of data generated during the project;
* photos (SEM or optical) of the investigated samples in standard image formats (.jpg, .bmp);
* electromagnetic and/or electronic simulations, performed with numerical or analytic methods; in both cases, the output data will be mostly in the form of ASCII files.
_Reuse of data_ It is not expected to reuse any previously generated data in
the current proposal.
_Data acquisition_ Data will be acquired through standard laboratory tools
(such as LabVIEW) and software that runs specific equipment (OPUS for Bruker
spectrometers; OMNIC for Nicolet spectrometers), and saved in ASCII format as
well.
_Data archival_ All data will be stored in digital form, either in the format
in which it was originally generated (.dat ASCII files; OPUS / OMNIC files;
jpg files). If required, the data will be converted from specialized software
formats in order to permit use of the data without resorting to proprietary
software. Descriptions of the files will be provided as PDF.
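As an illustration of how such ASCII files remain usable without proprietary software, the following minimal Python sketch reads a hypothetical two-column .dat file (e.g., wavenumber versus transmission); the actual column layout of each file would be given in its accompanying PDF description.

```python
def read_ascii_spectrum(path):
    """Read a two-column ASCII .dat file; the layout is a hypothetical example."""
    x, y = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            a, b = line.split()[:2]
            x.append(float(a))
            y.append(float(b))
    return x, y

if __name__ == "__main__":
    wavenumbers, transmission = read_ascii_spectrum("sample_transmission.dat")
    print(len(wavenumbers), "points read")
```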
## ACTIONS DURING THE PROJECT: STORAGE, ACCESS AND SECURITY
<table>
<tr>
<th>
_Data support_
</th>
<th>
As well as electronic conservation of the data, some information on the data
collection will be noted as hardcopies in Lab-books
</th> </tr>
<tr>
<td>
_Data Hosting_
</td>
<td>
The data will be conserved on local hard drives and backed-up according to
each Beneficiary procedures.
</td> </tr>
<tr>
<td>
_Data Privacy:_
</td>
<td>
Special accreditation will be given to all persons likely to access the data.
</td> </tr>
<tr>
<td>
_Data integrity and traceability:_
</td>
<td>
Laboratory books will be used.
</td> </tr>
<tr>
<td>
_Data reading_
</td>
<td>
As much as possible, standard formats will be used. Therefore it is expected
that no special/proprietary software will be required to access the data.
</td> </tr>
<tr>
<td>
_Data sharing_
</td>
<td>
Relevant data will be shared through email or through secure ftp servers (for
instance the coordinating institution, University Paris Sud, has such secure
ftp server and it can be used by all
Beneficiaries).
</td> </tr> </table>
## DESCRIPTION ASSOCIATED TO EACH DATASET (METADATA)
<table>
<tr>
<th>
_Standards and metadata_
</th>
<th>
No particular standard, except for data collection (see above), will be used.
</th> </tr>
<tr>
<td>
_Method of production and metadata_
_responsibility_
</td>
<td>
It will be the responsibility of each researcher to annotate their data with
metadata. The PI will be responsible for reminding participants, during the
periodic project meetings, that data must be properly processed, documented,
and stored.
</td> </tr>
<tr>
<td>
_Other Information_
</td>
<td>
The naming of the data sets will be adapted depending on the type of sample or
measurement undertaken. The format will be described in the pdf file when
uploaded for dissemination.
</td> </tr> </table>
## DISSEMINATION
<table>
<tr>
<td>
_General principle_
</td>
<td>
In accordance with the grant agreement, the beneficiaries will deposit in a
research data repository (e.g. ZENODO, among others) and take measures to make
it possible for third parties to access, mine, exploit, reproduce and
disseminate the data needed to validate the results presented in scientific
publications
</td> </tr>
<tr>
<td>
_Potential for reuse_
</td>
<td>
The scientific community is targeted for reuse of the data
</td> </tr>
<tr>
<td>
_Data repository and access_
</td>
<td>
Research data from this project will be deposited in ZENODO to ensure their
long-term access by the scientific community. There are no ethical or privacy
issues involved in sharing of the type of data generated by MIR-BOSE. Data
will not require specific/proprietary software to be processed, and a pdf file
will be generated to describe the data and, if necessary, how it can be
analyzed.
</td> </tr>
<tr>
<td>
_Exceptions_
</td>
<td>
Exception to the diffusion of data will be related Intellectual Property
protection (e.g. Patent, Licensing etc.)
</td> </tr>
<tr>
<td>
_Embargo_
</td>
<td>
The data will be released after a maximum embargo period of 6 months,
depending on the embargo period for the related publication.
</td> </tr> </table>
# AFTER THE PROJECT: DATA SELECTION AND PRESERVATION
<table>
<tr>
<th>
_Data at the end of the project_
</th>
<th>
The data generated through publications will be kept on ZENODO. Each partner
will store a copy of their generated data on a hard disk.
</th> </tr>
<tr>
<td>
_Data selection_
</td>
<td>
Data related to diffusion related events (publications, conferences, patents)
will be conserved. There is no plan to destroy any collected data as the
archive is not burdensome in cost or space.
</td> </tr>
<tr>
<td>
_Potential for reuse_
</td>
<td>
The scientific community is targeted for reuse of the data
</td> </tr> </table>
_Final data volume_ To be determined
_Data repository and access_ Research data from this project will be deposited
in ZENODO to ensure their long-term access by the scientific community.
_Lifetime_ At least five years
0824_AutoPost_644629.md
# Executive summary
This deliverable is the initial version of the plan for disseminating the
activities and the generated knowledge and results of the AUTOPOST project.
The AUTOPOST dissemination plan is designed to:
* Build an active community of interest around the project results.
* Disseminate information about the technical progress and results to the media industry and research communities, through conferences, fairs and scholarly publications.
* Provide opportunities for feedback from potential users of the AutoPost tools, through precommercial demonstrations and workshop sessions.
This document also covers the information the project intends to disseminate,
the identified target audience, and the list of the dissemination activities,
including: project brand development, development and publishing of a web
site, promotion of the publication of scientific communications and
presentations in conferences, design and publishing of printed materials,
generation of briefings and reports, courses and other teaching and
demonstration activities, establishment of relations with other research
projects, one-to-one relationships and promotion of the active participation
in conferences and fairs.
The initial schedule of activities and the introduction to the assessment plan
are included. This plan will be updated as the project progresses.
\--------------------------------------------------------------------------------
This document reflects only the author's views and the European Community is
not liable for any use that may be made of the information contained herein.
All **logos, trademarks, imagines,** and **brand names** used herein are the
property of their respective owners. Images used are for illustration purposes
only.
This work is licensed under the Creative Commons License “BY-NC-SA”.
# 1\. Dissemination and communication strategy
## 1.1 Objectives and contents
BM will be responsible for designing and implementing the AutoPost
dissemination strategy. The AutoPost project will adopt and implement a
proactive dissemination strategy designed to:
1. Build an active community of interest around the project results.
2. Disseminate information about the technical progress and results to the media industry and research communities, through conferences, fairs and scholarly publications.
3. Provide opportunities for feedback from potential users of the AutoPost tools, through precommercial demonstrations and workshop sessions.
The AutoPost dissemination and communication strategy will focus on the
following contents:
* **AutoPost project** : aims and objectives of the project and the final benefits to the end users.
* **AutoPost results** : the innovative products and technologies derived from the research activities and their applications in the world, as well as the scientific achievements resulting from the project.
* **AutoPost activities** : all events and activities carried out by partners, including publications, seminars, workshops, presentations, performances, etc.
## 1.2 Target audiences
The AutoPost dissemination strategy will target the following potential
user/stakeholder groups:
* Professional end users: Post-production professionals and companies are the main target audiences as they represent AutoPost’s potential clients.
* R+D communities including professional researchers and academia.
* Schools and vocational training: The AutoPost Consortium wishes to carry out workshops, keynotes, presentations and tutorials intended for audiovisual, film, digital media schools and training centres.
* Specialized media and government bodies will help to spread the aims and results of the project.
* Standardization bodies: If useful, AutoPost will actively contribute to related standardization activities.
* General public.
The first three constitute the main target groups for AutoPost. The
professional users, and the students from audiovisual and digital media
schools and training centres (future professionals) are the potential client
base for the project results once in commercial phase. Attracting their
interest, and opening ways for interaction and feedback since the early stages
of the project, is critical to ensure the success of the AutoPost exploitation
strategy. The R+D community, on the other hand, will contribute to the
validation and exploitation of the project results during the project
implementation, and beyond. The R+D community is the one in the best position
to build on top of the AutoPost scientific results and continue progressing
the state-of the-art with regard to innovative technologies for the creative
industries.
## 1.3 Overview of the means and activities
**Figure 1. Overview of AutoPost dissemination and communication activities.** The figure groups the means and tools (logo, website, media & social media), publications (open research data, academic publications, press releases, materials such as flyers and fact sheets), events (conferences, trade shows, exhibitions, training) and collaboration (post-production platforms, similar projects, DG CONNECT).
# 2\. AutoPost communication means and tools
## 2.1 Project brand
The communication strategy includes the design of a logo and the establishment
of design and communication directives for all the different supports to be
used.
Two different logos will be used. A version displaying the full name of the
project will be used in all project documents, including dissemination and
communication materials (Figure 2).
**Figure 2. AutoPost logo.**
An icon-like version (Figure 3) is to be used for materials in small formats
or overlayed in video applications.
**Figure 3. AutoPost icon.**
Following the provisions set in D1.1 Project handbook and Quality plan for
dissemination and external communication, the European Union emblem as well as
the necessary funding visibility statements and disclaimer will be included in
any dissemination and communication material produced by AutoPost.
## 2.2 AutoPost website
A web site for the project with specific areas targeted to different levels of
interest has been designed and developed. A working version of the project web
site is already available at _www.autopostproject.eu_. Both the contents
and images will be updated, as will the news section and outcomes.
The AutoPost website is meant to be the anchor for communication activities of
the project. It will contain public information about project overview
information, activities, partners, news and events, outcomes, dissemination
agenda and contact area. It will be regularly updated with the project public
deliverables and documents, publishable abstracts of confidential
documentation, communication materials and related news. As the website is
intended as a means of general communication, the Consortium will ensure that
– whenever possible – contents are produced in a plain language, accessible to
non-specialists. The website will be functional for 4 years after the end of
the project. On the other hand, all videos produced by AutoPost (demos,
tutorials, promotional clips) will be shared publicly through Vimeo. Links to
the videos will be available in the project website and also in partner’s own
dissemination and communication means.
The website development is detailed in the deliverable D6.1 Project website.
## 2.3 Media communication & social media
The AutoPost project will strive to maximize its impact by using on-line and
social media to communicate its achievements and events. An on-line
communication campaign will accompany all of the project dissemination
activities (participation in exhibitions, final showcase, workshops, etc.).
General and specialized media will be sent in the form of AutoPost press
releases in relation to such events.
# 3\. Dissemination activities
The AutoPost project will carry out specific activities in order to attain the
dissemination and communication objectives. These are meant to disseminate
information about the project as well as to share the generated knowledge with
the different audiences.
Figure 4 below shows the AutoPost list of activities per target group. This
list may be modified according to the specific needs or possibilities of the
partners during the project implementation:
* **Professional end-users**: collaboration with post-production platforms worldwide; conferences, exhibitions and trade fairs.
* **R+D communities**: open research data; scientific publications (green open access whenever possible); conferences, exhibitions and trade fairs; liaison with related projects and the Creativity Unit of DG-CONNECT.
* **School and vocational training**: workshops, keynotes and presentations in digital media schools and training centres.
* **Specialized media and agencies**: specific communication material and press releases (issued as appropriate along with project results and/or events).
* **General public**: press releases, flyers (issued as appropriate along with project results and/or events).

**Figure 4. AutoPost specific dissemination and communication activities per target groups**
The AutoPost project is a rather short, fast-paced project. It is expected
that most of the dissemination activities take place during the final year of
the project (M6-M18), with a peak in the final months.
## 3.1 AutoPost events
In order to reach the professional end-users and the R+D communities, the
AutoPost consortium plans to submit dissemination material (i.e. posters,
papers, workshops, etc.) to conferences and exhibitions addressing these
target audiences.
Given the budgetary and timing restrictions, the AutoPost project has selected
the following events as the most suitable for the project's dissemination and
communication strategy (Figure 5). Through these events, a balance between
industrial and R+D events is achieved and fair access to all of our target
groups will be granted. The project will strive to place the project’s final
workshop (D6.5) at the FMX 2016:
* IBC 2015 – Amsterdam (M9)
* ICT 2015 – Lisbon (M10)
* CVMP 2015 – London (M11)
* NAB SHOW – Las Vegas (M16)
* FMX 2016 – Stuttgart (M17)
**Figure 5. AutoPost calendar of events**
## 3.2 Publications
### 3.2.1 Scientific publications
With regard to scientific publications, the AutoPost project will aim to
publish in journals such as IEEE Transactions on Broadcasting, SMPTE Motion
Imaging Journal, and conferences such as CVMP, ICME, SIGGRAPH. The selection
of journals and conferences will depend on the availability of publishable
results. It is very likely that most of the publications are submitted during
the final months of the project, and thus assessed for publication after the
project end date.
The AutoPost project will aim, whenever possible, to publish in journals with
green open access policies. A small budgetary allocation has been planned to
support gold open access publication if necessary.
### 3.2.2 Open research data
The consortium is aware of the mandate for open access of publications in the
H2020 projects and the participation of the project in the Open Research Data
Pilot. The consortium has chosen ZENODO (http://zenodo.org/) as the scientific
publication and data repository for the project outcomes. The Consortium,
through WP6, will ensure that scientific results that will not be protected
will be duly and timely deposited in the scientific results repository ZENODO
1 , free of charge to any user. These might be:
1. Machine-readable electronic copies of the final version or final peer-reviewed manuscript accepted for publication; made available immediately with open access publishing (gold open access) or with a certain delay to get past the embargo period of green open access.
2. Research datasets needed to validate the results presented in the publications.
3. Other data, including associated metadata, as laid out in the Data Management Plan (D6.3).
4. The software tools and libraries (or information about) necessary for validating the results.
AutoPost will deliver on M6 (June 2015) D6.3 Data Management plan.
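For illustration only, such deposits can be scripted against ZENODO's public REST API. The sketch below is a minimal example using the third-party `requests` package, following the API's documented create-and-upload flow; the token and file name are placeholders, not project artefacts.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "..."  # personal access token (placeholder)

# Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep_id = r.json()["id"]

# Attach a (hypothetical) dataset archive to the deposition.
with open("autopost_dataset.zip", "rb") as fp:
    r = requests.post("%s/%s/files" % (ZENODO_API, dep_id),
                      params={"access_token": TOKEN},
                      data={"name": "autopost_dataset.zip"},
                      files={"file": fp})
r.raise_for_status()
```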
### 3.2.3 Communication materials
In order to gain traction among practitioners and media, introductory
documentation and tutorials will be prepared by product specialists. These
will be hosted on the AutoPost website and the Consortium members’ websites
and promoted to the user base with marketing communications.
Producing the necessary communication materials to effectively communicate the
objectives and results of the project: press releases, flyers and brochures to
complement public presentations and video demonstrations. This will also
include the creation of technical documentation and tutorials for training
purposes, and scientific and industrial posters. All this technical and
training material will also be available to download from the project website
and accessible on the project’s video channel. All AutoPost communication
materials will explicitly acknowledge the name of the project and the fact
that it is funded by the European Commission.
## 3.3 Training
With regard to activities in relation with schools and vocational training the
AutoPost consortium will reach out to the following organisations in order to
plan and organise dedicated AutoPost sessions:
**Master in digital arts – Universidad Pompeu Fabra**
_http://www.idec.upf.edu/university-master-in-digital-arts_
The programme is aimed at first degree or university graduates in subjects
such as Audiovisual Communication, Fine Arts, cinema schools, etc. Most of the
students are familiar with post-production
techniques, and also platforms such as After Effects.
AutoPost will organise these training sessions, in agreement with the training
entity, once the MS3 First version of tracking and matting SDKs is achieved.
This will allow the training sessions to serve as well for gathering user’s
feedback on the preliminary plugins.
## 3.4 Collaborations
The AutoPost project partners, due to their trajectories in their respective
areas of activity, have plenty of knowledge and contacts within their sectors.
This will allow AutoPost to put together a sound strategy for reaching out to
similar ongoing funded projects, and participants in closed projects. Apart
for the obvious benefit in dissemination terms, liaisons with similar projects
have the potential to enrich the collaborating projects.
Moreover, since the AutoPost solutions will be distributed as plugins, close
collaboration with worldwide post-production platforms such as The Foundry,
SGO, Adobe, Imagineer Systems, Quantel or Assimilate will be sought,
particularly to help disseminate AutoPost tools among their user bases and
provide visibility at international events. In relation to that, some of the
AutoPost partners are already in close relationship with some of these
companies such as SGO, Imagineer Systems and the Foundry due to past and
current collaborations in other R&D and commercial projects.
Details of these activities and its progress will be reported in the
confidential management report.
## 3.5 Summary table of responsibilities
<table>
<tr>
<th>
**Type of activity**
</th>
<th>
**Target audience**
</th>
<th>
**Responsible**
</th> </tr>
<tr>
<td>
**Project brand**
</td>
<td>
Professional and scientific community, general
</td>
<td>
BM, ALL
</td> </tr>
<tr>
<td>
**Project website**
</td>
<td>
Professional and scientific community, general
</td>
<td>
BM, ALL
</td> </tr>
<tr>
<td>
**Communication materials**
</td>
<td>
Professional and scientific community, general
</td>
<td>
BM, ALL
</td> </tr>
<tr>
<td>
**Professional communication**
</td>
<td>
Professional community
</td>
<td>
MOTO, DG, IL, (BM, HHI)
</td> </tr>
<tr>
<td>
**Scientific publication**
</td>
<td>
Scientific community
</td>
<td>
BM, HHI, IL, (MOTO, DG)
</td> </tr>
<tr>
<td>
**Event participation**
</td>
<td>
Professional and scientific community
</td>
<td>
BM, ALL
</td> </tr>
<tr>
<td>
**Workshop organization**
</td>
<td>
Professional and scientific community
</td>
<td>
BM, ALL
</td> </tr>
<tr>
<td>
**Media communication**
</td>
<td>
Professional and scientific community, general
</td>
<td>
BM, ALL
</td> </tr> </table>
# 4\. Impact assessment
## 4.1 Target audiences’ feedback
The AutoPost project, as part of task WP6.T1, will keep a log to collect and
process all feedback received from dissemination activities. When applicable,
these external inputs will be directed to the appropriate WP leaders to ensure
that they are taken into account for improving the outcomes of the project.
The intended feedback will be received primarily through the comments enabled
on the project website and by periodically searching the on-line media through
relevant keywords for the project.
## 4.2 Metrics
In order to have an accountable assessment of the impact of the communication
efforts, the project will establish measures such as analytics of the usage of
the project’s website (ie. Google analytics) and registers of the leads
acquired in public presentations.
Details of the dissemination and communication assessment metrics and its
progress will be reported in the confidential management report.
# 5\. AutoPost IPR management
## 5.1 IPR main principles
With regard to the management of IPR issues, the main arrangements and
regulations made between the participants are formalized in the Consortium
Agreement (CA). The CA, a confidential document, addresses the topics of
ownership, access rights, and communication of knowledge, confidentiality,
among others. IPR management is part of the Exploitation planning task in WP6
led by imcube labs (IL), and in which the entire consortium has
responsibility. IPR issues will be referred to the project’s Supervisory Board
(SB) for decisions, as appropriate.
A set of rules regarding the use and dissemination of knowledge will be set
forth in the Consortium Agreement in order to a) control the disclosure of
ideas while giving an appropriate level of dissemination to the project and b)
comply with the Open Access mandate, applicable to those scientific results
(publications or data) that are not deemed to be protected for exploitation.
The most important principles that govern the IPR issues in the Consortium
Agreement, in accordance with the H2020 guidelines for intellectual property
rules, are related to ownership and access rights of background and foreground
knowledge.
All necessary IPR arrangements will be confidentially discussed and made among
members of the consortium, in compliance of the GA and the CA provisions.
These arrangements will necessarily be made in a way that the future
exploitation of the AutoPost results is granted and fostered.
Details of the IPR arrangements and related activities will be reported in the
confidential management report.
0826_UMOBILE_645124.md
# 1 Executive Summary
The Open Access Model guarantees free access for users and free dissemination of
knowledge. UMOBILE participates in the "Pilot on Open Research in HORIZON
2020": participating projects are required to develop a Data Management Plan
(DMP), in which they specify what data will be open.
This Data Management Plan explains which of the research data generated in
UMOBILE will be made open,
how data will be shared and which procedures will be put in place for long-
term preservation of the data.
Following "Guidelines on Data Management in Horizon 2020", the DMP clarifies
that the generated scientific research data will be easily:
1. Discoverable
2. Accessible
3. Assessable and intelligible
4. Useable beyond the original purpose for which it was collected
5. Interoperable to specific quality standards
# 2 Open access to scientific publications
Open access to scientific publications refers to free of charge online access
for users. Open access will be achieved
through the following steps:
1. Any paper presenting the project results will acknowledge the project ("The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 645124-UMOBILE") and display the EU emblem.
2. Any paper presenting the project results will be deposited, at the latest by the time of publication, in a formal repository for scientific papers. If the organization does not have a formal repository (https://www.openaire.eu/participate/deposit/idrepos), the paper can be uploaded to the European sponsored repository for scientific papers: http://zenodo.org/.
3. Authors will ensure that the publisher accepts open access via self-archiving in their department's formal repository or via http://zenodo.org/. Publishers usually accept; if not, the authors will try to negotiate with them. In case of no success, they will not publish via self-archiving.
4. Authors can choose to pay “author processing charges” to ensure open access publishing, but still they have
to deposit the paper in a formal repository for scientific papers (step 2).
5. Authors will ensure open access via the repository to the bibliographic metadata identifying the deposited
publication. More specifically, the following will be included:
* The terms “European Union (EU)” and “Horizon 2020”;
* “Universal, mobile-centric and opportunistic communications architecture-UMOBILE”, Grant agreement number 645124;
* Publication data, length of embargo period if applicable; and
* A persistent identifier.
6. Each case will be examined separately in order to decide between self-archiving and paying for open access publishing.
# 3 Open access to research data
Open access to research data refers to the right to access and re-use digital
research data generated by projects.
EU expects funded researchers to manage and share research data in a manner
that maximizes opportunities
for future research and complies with best practice in the relevant subject
domain, that is:
* The dataset has clear scope for wider research use
* The dataset is likely to have long-term value for research or other purposes
* The dataset have broad utility for reference and use by research communities
* The dataset represents a significant output of the research project
Openly accessible research data, generated during UMOBILE project, will be
accessed, mined, exploited, reproduced and disseminated free of charge for the
user. Specifically, the "Guidelines on Data Management in Horizon 2020"
clarifies that the beneficiaries must:
* _(a) deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_
* _(i) the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
* _(ii) other data, including associated metadata._
It is useful to categorize the data as in the following table (which also
provides examples of the datasets).
<table>
<tr>
<th>
**Category**
</th>
<th>
**Description**
</th>
<th>
**Examples**
</th> </tr>
<tr>
<td>
Raw Collected Data
</td>
<td>
Obtained data that has not been
subjected to any quality assurance or
control
</td>
<td>
Measurements collected from devices
(Hotspots, Smartphones, UAVs,
Videocameras, . . . )
</td> </tr>
<tr>
<td>
Validated Collected Data
</td>
<td>
These are the raw data that have been
evaluated for completeness,
correctness, and
conformance/compliance of a specific
data set against the standard operating
procedure (verified), as well as
reviewed for specific analytic quality
(validated)
</td>
<td>
Images and videos collected with
UAVs, which are verified (content
verification) and filtered (quality
enhancement)
</td> </tr>
<tr>
<td>
Analyzed Collected Data
</td>
<td>
Validated data are then analyzed, through statistical operations, based on a
specific target or application scenario
</td>
<td>
Patterns of smoke or fire found in the
video collected from UAV
</td> </tr>
<tr>
<td>
Generated Data
</td>
<td>
The data needed to validate the results
presented in scientific publications
(pseudo-code, libraries, workflow,
naming schemes, . . . )
</td>
<td>
Naming scheme associated to the
analyzed data (i.e: geolocalization, fire
dimension, . . . )
</td> </tr> </table>
The following sections describe some sample datasets that we are planning to
collect and generate in UMOBILE. The provided datasets are, at this early
stage of the project, possible examples which are probably subject to
change with the evolution of the project.
For each of the dataset that we are going to share in the project lifetime,
policies for access and sharing as well as policies for re-use and
distribution, will be defined and applied. A generic guideline is provided in
sections "Data sharing" and "Archiving and preservation".
# 4 Dataset 1: Message delay
**4.1 Data set reference and name**
UMOBILE.MES_DELAY
## 4.2 Data set description
Message delay is a Key Performance Indicator in computer networks. Data
produced by simulation tools and/or by real life trials will be used as a
means to quantify the performance advantages the UMOBILE architecture offers
compared to current practices. Message delay is measured in seconds and it may
range from milliseconds to
minutes or even hours in scenarios involving disruptive communication
environments. Scientific publications related to the UMOBILE project may
include Message delay data.
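As a minimal sketch (the record layout below is a hypothetical example, not a format fixed by UMOBILE), message-delay statistics can be computed directly from pairs of send and receive timestamps extracted from a simulator trace:

```python
import statistics

# Each record: (message id, send time [s], receive time [s]); values invented.
events = [
    ("msg-1", 0.00, 0.42),
    ("msg-2", 1.00, 35.70),
    ("msg-3", 2.00, 1800.05),  # delays may reach minutes or hours in DTNs
]

delays = [recv - sent for _, sent, recv in events]
print("mean delay:   %.2f s" % statistics.mean(delays))
print("median delay: %.2f s" % statistics.median(delays))
print("max delay:    %.2f s" % max(delays))
```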
**4.3 Standards and metadata**
Metadata will include the simulation tool used to create the message delay
data and the configuration parameters.
# 5 Dataset 2: AAA logs
**5.1 Data set reference and name**
UMOBILE.AAA_LOGS
## 5.2 Data set description
AAA logs are written by the AAA server in order to record all the events that
happen while the server is running. They contain information about the
authentication requests and they are very useful in order to detect
problems in the testing phase or even to extract information about users'
behavior.
These logs contain private information about the users that must be handled
with care. Even if the information has been collected in a testing phase, user
rights have to be respected. Therefore, and because of the open nature of the
data managed in this project, the information in the logs must be anonymized
before releasing it.
## 5.3 Standards and metadata
There are no standards for these logs. A possible solution is to use RADIUS
servers as AAA servers; in this case, the logs would include the attributes
defined by RADIUS.
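The sketch below illustrates the anonymisation step described above, assuming
a key=value log layout and a salted-hash pseudonymisation scheme; neither is
prescribed by the project, and the actual attribute set would depend on the
AAA server used.

```python
import hashlib
import re

SALT = b"per-release-secret"  # assumed: kept secret to resist dictionary attacks

def pseudonymise(value: str) -> str:
    """Replace a user identifier with a salted, truncated hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def anonymise_line(line: str) -> str:
    # Assumed layout: "User-Name=alice NAS-IP-Address=10.0.0.7 ..."
    line = re.sub(r"User-Name=(\S+)",
                  lambda m: "User-Name=" + pseudonymise(m.group(1)), line)
    # Mask the host part of any IPv4 address, keeping the /24 prefix.
    return re.sub(r"\b(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}\b", r"\1.*", line)

print(anonymise_line("User-Name=alice NAS-IP-Address=10.0.0.7 Auth=Accept"))
```

The salt prevents simple dictionary attacks on the hashed identifiers, while
masking only the last octet of the IP keeps coarse network-level information
usable for analysis.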
# 6 Dataset 3: Social Network Reports
**6.1 Data set reference and name**
UMOBILE.SOCIAL_REPORTS
## 6.2 Data set description
These reports contain personal information about the users and information
about their behaviour. This information can be used for statistical purposes,
which is especially valuable in some use cases of UMOBILE.
These reports contain private information about the users that must be handled
with care. Even if the information has been collected in a testing phase, user
rights have to be respected. Therefore, and because of the open nature of the
data managed by this project, the information in the reports must be
anonymized before releasing it.
## 6.3 Standards and metadata
There are no standards for this type of dataset. The kind of information
provided in these reports depends on the information needed in each situation
and on what each social network makes available.
# 7 Dataset 4: Affinity Networking
**7.1 Data set reference and name**
UMOBILE.AFFINITY_SETS
## 7.2 Data set description
These traces shall contain contact data related to: visits of devices to
UMOBILE hotspots; direct contact between devices (Bluetooth and Wi-Fi).
The stored aspects include average visit/contact time; social strength
computed from the association and exchange of data between devices; whether or
not the owners of devices were previously acquainted; etc. The data shall be
provided both in SQL format and in plain text. No private user data is kept:
the MACs are hashed, and the IPs are hidden. This data is useful to better
understand the evolution of affinity networks based on short-range wireless
technology, over time and with different time granularities (e.g., days,
weeks, months).
## 7.3 Standards and metadata
The data is expected to be provided in ANSI SQL, XML, or text (ASCII) format.
For this data set, data citation and metadata practices derived from CRAWDAD
shall be considered
(http://www.dlib.org/dlib/january15/henderson/01henderson.html).
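As an illustration of the SQL form of such a contact trace, the sketch below
uses SQLite for portability; the table and column names are assumptions made
for this example, not the actual UMOBILE schema.

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contacts (
        device_a   TEXT,     -- hashed MAC address, no raw identifier stored
        device_b   TEXT,     -- hashed MAC address
        start_ts   INTEGER,  -- contact start time (Unix time)
        duration_s INTEGER,  -- contact duration in seconds
        technology TEXT      -- 'bluetooth' or 'wifi'
    )""")

def hash_mac(mac: str) -> str:
    """Hash a MAC address so no raw device identifier is kept."""
    return hashlib.sha256(mac.encode()).hexdigest()[:16]

conn.execute("INSERT INTO contacts VALUES (?, ?, ?, ?, ?)",
             (hash_mac("aa:bb:cc:dd:ee:01"), hash_mac("aa:bb:cc:dd:ee:02"),
              1464000000, 420, "bluetooth"))

# Average contact time per device pair: one ingredient of a social-strength
# estimate as described above.
for row in conn.execute("""SELECT device_a, device_b, AVG(duration_s)
                           FROM contacts GROUP BY device_a, device_b"""):
    print(row)
```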
# 8 Dataset 5: Social Context
**8.1 Data set reference and name**
UMOBILE.SOCIAL_CONTEXT_SETS
## 8.2 Data set description
These traces shall contain contact data related to: UMOBILE users' physical
activity (walking, running, standing, driving); the surrounding environment
(noisy, calm, number of talking events); relative distance among UMOBILE
devices; and social interaction among UMOBILE devices (strength of social
ties). Traces can also include information about the overall social context of
a UMOBILE user, such as social isolation. The data shall be provided both in
SQL format and in plain text. No private user data is kept, since the identity
of the user is neither collected nor stored. This data is useful to better
understand the context of each UMOBILE user in different scenarios. For
instance, such traces will help to understand how to improve social daily
routines (e.g., with the goal of reducing social isolation), and will allow us
to consider information about the users' context with the aim of improving
efficiency when reacting to emergency situations, civil protection cases, or
even the dissemination of micro-blogs.
## 8.3 Standards and metadata
This data set may help to better understand the semantics and the
mandatory/optional fields that should be considered in a data dissemination
protocol, related for instance to draft-irtf-icnrg-ccnxsemantics-00 and
draft-irtf-icnrg-ccnxmessages-00.
# 9 Data sharing
Open access to research data will be achieved in UMOBILE through the following
steps:
1. Write, and update as needed, the "Data Management Plan" (current document)
2. Select what data we’ll need to retain to support validation of the project findings (the datasets described in the sections above)
3. Deposit the research data into an **online research data repository**. In deciding where to store project data,
the following options will be considered, in order of priority:
* An institutional research data repository, if available
* An external data archive or repository already established in the UMOBILE research domain (to preserve the data according to recognised standards)
* The European sponsored repository: http://zenodo.org/
* Other data repositories (searchable at http://www.re3data.org), if the previous ones are ineligible
4. License the data for reuse (Horizon 2020 recommendation is to use CC0 or CC BY)
5. Provide info on the tools needed for validation: everything that could help third parties in validating the data (workflow, code, ...)
Independently of the choice, the authors will ensure that the repository:
* Gives the submitted dataset a persistent and unique identifier, to make sure that research outputs in disparate repositories can be linked back to particular researchers and grants
* Provides a landing page for each dataset, with metadata
* Helps to track if the data has been used by providing access and download statistics
* Keeps the data available in the long term, if desired
* Provides guidance on how to cite the data that has been deposited
Even following the previously described steps, each case will be examined
separately in order to decide which online repository to choose.
## 9.1 Policies for Access and Sharing
As suggested by the European Commission, the partners will deposit, **at the
same time as the publications, the research data needed to validate the
results presented in the deposited scientific publications**. This timescale
applies to data underpinning the published results: research papers written
and published during the funding period will be made available with a subset
of the data necessary to verify the research findings. The consortium will
then make a newer, complete version of the data available within 6 months of
project completion. This embargo period is requested to allow time for
additional analysis and further publication of research findings.
Other data (not underpinning publications) will be shared during the project
life following a granular approach to data sharing, releasing subsets of data
at distinct periods rather than waiting until the end of the project, in order
to obtain feedback from the user community and refine the data as necessary.
An important aspect to take into account is **who is allowed to access the
data**. Some of the datasets may not be suitable for public access. In such
cases, control mechanisms will be established.
These include:
* Authentication systems that limit read access to authorized users only
* Procedures to monitor and evaluate, one by one, access requests: users must complete a request form stating the purpose for which they intend to use the data.
* Adoption of a Data Transfer Agreement that outlines conditions for access and use of the data
Each time a new dataset is deposited, the consortium will decide who is
allowed to access the data. Generally speaking, anonymised and aggregate data
will be made freely available to everyone, whereas sensitive and confidential
data will only be accessible to specific authorized users.
## 9.2 Policies for Re-use, Distribution
A key aspect will be **how users will learn of the existence of the data** and
of their content. People will not be interested in a set of unlabelled files
published on a website. To attract interest, partners will accurately describe
the content of each published dataset and, each time a new dataset is
deposited, the information will be disseminated using the appropriate means
(e.g., mailing list, press release, Facebook, website), based on the type of
data and on the target audience.
Research data will be made available in a way that can be shared and easily
reused by others. That means:
1. sharing data using **open file formats** (whenever possible), so that they can be implemented by both proprietary and open-source software;
2. using formats based on an underlying open standard;
3. using formats which are interoperable among diverse internal and external platforms and applications;
4. using formats which do not contain proprietary extensions (whenever possible).
Documenting datasets, data sources, and the methodology by which the data were
acquired establishes the basis for interpreting and appropriately using data.
Each generated or collected and then deposited dataset will include
documentation to help users re-use it.
As recommended, the license that will be applied to the data is CC0 or CC BY.
If any restrictions apply to the generated data, they will be clearly
described and justified.
Potential issues that could affect how data can be shared and used may
include the need to: protect participant confidentiality, comply with informed
consent agreement, protect Intellectual Property Rights, submit patent
applications, protect commercial confidentiality. Possible measures that may
be applied to address these issues include: encryption of data during storage
and transfer, anonymisation of personal information, development of Data
Transfer Agreements that specify how data may be used by an end user,
specification of embargo periods, and development of procedures and systems to
limit access to authorized users only (as already explained).
# 10 Archiving and preservation
Datasets will be maintained for 5 years following project completion.
To ensure high-quality long-term management and maintenance of the datasets,
the consortium will implement **procedures to protect information over time**.
These procedures will permit a broad range of users to easily obtain, share,
and properly interpret both active and archived information, and they will
ensure that information is:
* kept up-to-date in content and format so they remain easily accessible and usable;
* protected from catastrophic events (e.g., fire and flood), user error, hardware failure, software failure or corruption, security breaches, and vandalism.
Regarding the second aspect, solutions dealing with disaster risk management
and recovery, as well as with regular backups of data and off-site storage of
backup sets, are always integrated when using the official data repositories
(e.g., http://zenodo.org/); the partners will ensure the adoption of similar
solutions when choosing an institutional research data repository.
Partners are encouraged to claim costs for the resources necessary to manage
and share data; these will be clearly described and justified. Arrangements
for post-project data management and sharing must be made during the life of
the project. Services for long-term curation and preservation, such as POSF
(Pay Once, Store Forever) storage, will be purchased before the close of the
project grant.
# 11 Conclusion
The purpose of the Data Management Plan is to support the data management life
cycle for all data that will be collected, processed or generated by the
UMOBILE project. The DMP is not a fixed document, but evolves during the
lifespan of the project. This document is expected to mature during the
project; more developed versions of the plan could be included as additional
deliverables at later stages. The DMP will be updated at least by the mid-term
and final review to fine-tune it to the data generated and the uses identified
by the consortium since not all data or potential uses are clear at this stage
of the project.
---

0828_KConnect_644753.md
# 1 Introduction
This deliverable is the initial version of the data management plan. In this
document, the data generated by the KConnect project is identified and the
current status of the data management, archiving, preservation and licensing
plans are given. In particular, this initial analysis of the data indicates
where further efforts are required to clearly specify these aspects of the
data management plan. The final version of the data management plan is D6.3
due in July 2016.
Each section of this deliverable describes a data resource identified in the
KConnect project. The format followed in each section corresponds to the
structure proposed in the European Commission Guidelines on Data Management in
Horizon 2020 [1].
In summary, Sections 2 to 5 deal with data for which no privacy issues exist
(knowledge base, machine translation training data, and annotations and
indices), while Sections 6 to 9 deal with data in which care needs to be taken
to ensure that privacy is preserved (search logs and medical records).
# 2 Knowledge Base
**2.1 Name**
Knowledge Base
## 2.2 Description
The knowledge base is a warehouse of semantically integrated data sets
published originally by third parties. It includes information on drugs, drug
targets, drug interactions, diseases, symptoms, adverse events, anatomies and
imaging modalities. In addition to the data sets it includes link sets that
map data between the different data sets and/or provide semantic
relationships. The data is available as RDF and is loaded into a GraphDB [2]
repository.
Original data sets:
* Drugbank
* UMLS
* RadLex
* DBPedia (medical subset)
* GeoNames
## 2.3 Standards and metadata
The data is available in different RDF formats: RDF-XML, NTriple, Turtle,
TriG, TriX and RDF-JSON. It can be queried via SPARQL and the KB exposes the
OpenRDF REST API.
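For illustration, the repository could be queried over HTTP roughly as
sketched below; the endpoint URL and the class IRI are placeholders, since the
actual names depend on the GraphDB deployment and the vocabularies of the
loaded data sets.

```python
import requests

ENDPOINT = "http://localhost:7200/repositories/kconnect"  # assumed endpoint URL

query = """
SELECT ?drug ?label WHERE {
    ?drug a <http://example.org/Drug> ;  # placeholder class IRI
          <http://www.w3.org/2000/01/rdf-schema#label> ?label .
} LIMIT 10
"""

resp = requests.get(ENDPOINT,
                    params={"query": query},
                    headers={"Accept": "application/sparql-results+json"})
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["label"]["value"])
```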
**2.4 Data sharing conditions**
Data sharing varies according to the sharing conditions associated with the
original data sets.
## 2.5 Archiving and preservation
Archiving and preservation varies according to the Archiving and preservation
arrangements associated with the original data sets. Ontotext stores backups
of the data sets converted to RDF and the corresponding link sets on its
servers.
**2.6 Licensing information**
Licensing varies according to the licensing of the original data sets.
# 3 Summary Translation Test Data
**3.1 Name**
Khresmoi Summary Translation Test Data 1.1
## 3.2 Description
This dataset contains data for development and testing of machine translation
of sentences from summaries of medical articles between Czech, English,
French, and German. The original sentences are sampled from summaries of
English medical documents crawled from the web in 2012 and identified to be
relevant to 50 medical topics.
The original sentences in English were randomly selected from automatically
generated summaries of documents from the CLEF 2013 eHealth Task 3 collection
[1] which were found to be relevant to 50 test topics provided for the same
task. Out-of-domain and ungrammatical sentences were manually removed. The
sentences are provided with information on document ID and topic ID. The topic
descriptions are provided as well. The sentences were translated by medical
experts into Czech, French, and German and reviewed. The data sets can be
used, for example, for the development and testing of machine translation in
the medical domain.
## 3.3 Standards and metadata
The data is provided in two formats: plain text and SGML. The files are split
according to the section (dev/test) and language (CS – Czech, DE – German, FR
– French, EN – English). All the files use the UTF-8 encoding. The plain text
files contain one sentence per line and translations are identified by line
numbers. The SGML format suits the NIST MT scoring tool. Topic description
format is based on XML, each topic description (<query>) contains the
following tags:
<table>
<tr>
<th>
**Tag**
</th>
<th>
**Description**
</th> </tr>
<tr>
<td>
<id>
</td>
<td>
topic ID
</td> </tr>
<tr>
<td>
<discharge_summary>
</td>
<td>
reference to discharge summary
</td> </tr>
<tr>
<td>
<title>
</td>
<td>
text of the query
</td> </tr>
<tr>
<td>
<desc>
</td>
<td>
longer description of what the query means
</td> </tr>
<tr>
<td>
<narr>
</td>
<td>
expected content of the relevant documents
</td> </tr>
<tr>
<td>
<profile>
</td>
<td>
profile of the user
</td> </tr> </table>
**3.4 Data sharing conditions**
Access to this data set is widely open under the license specified below.
## 3.5 Archiving and preservation
The data set is distributed by the LINDAT/Clarin project of the Ministry of
Education, Youth and Sports of the Czech Republic and is available here:
_http://hdl.handle.net/11858/00-097C-0000-0023-866E-1_
## 3.6 Licensing information
The data set is made available under the terms of the Creative Commons
Attribution-Noncommercial (CC-BY-NC) license, version 3.0 unported. A full
description and explanation of the licensing terms is available here:
_http://creativecommons.org/licenses/by-nc/3.0/_
# 4 Query Translation Test Data
**4.1 Name**
Khresmoi Query Translation Test Data 1.0
## 4.2 Description
This data set contains data for development and testing of machine
translation of medical queries between Czech, English, French, and German. The
queries come from the general public and from medical experts.
The original queries in English were randomly selected from real user query
logs provided by Health on the Net foundation (750 queries by general public)
and from the Trip database query log (758 queries by medical professionals)
and translated to Czech, German, and French by medical experts. The test sets
can be used, for example, for the development and testing of machine
translation of search queries in the medical domain.
## 4.3 Standards and metadata
The data is split into 8 files, according to the section (dev/test) and
language (CS - Czech, DE - German, FR - French, EN – English). The files are
in plain text using the UTF-8 encoding. Each line contains a single query.
Translations are identified by line numbers.
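Because translations are aligned purely by line number, pairing queries across
languages is straightforward; a minimal sketch, with file names assumed from
the description above:

```python
# Assumed file names following the section/language split described above.
files = {lang: f"test.{lang}.txt" for lang in ("en", "cs", "de", "fr")}

texts = {}
for lang, path in files.items():
    # Files are plain text, UTF-8, one query per line.
    with open(path, encoding="utf-8") as f:
        texts[lang] = [line.rstrip("\n") for line in f]

# Line i in every file is a translation of line i in every other file.
for i, en_query in enumerate(texts["en"][:5]):
    print(en_query, "->", texts["de"][i])
```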
**4.4 Data sharing conditions**
Access to this data set is widely open under the license specified below.
## 4.5 Archiving and preservation
The data set is distributed by the LINDAT/Clarin project of the Ministry of
Education, Youth and Sports of the Czech Republic and is available here:
_http://hdl.handle.net/11858/00-097C-0000-0022-D9BF-5_
## 4.6 Licensing information
The data set is made available under the terms of the Creative Commons
Attribution-Noncommercial (CC-BY-NC) license, version 3.0 unported. A full
description and explanation of the licensing terms is available here:
_http://creativecommons.org/licenses/by-nc/3.0/_
# 5 Annotated Text Data
**5.1 Name**
Text and annotation indices
## 5.2 Description
The dataset comprises texts annotated and indexed by the KConnect semantic
annotation pipeline, in order to create a searchable index with links to the
KConnect knowledge base.
There are several datasets, each held by the KConnect partner that is
responsible for the underlying texts. In the next version of this deliverable,
these datasets will be individually identified and described, as the plan for
the progress of the KConnect work does not allow this to be done at this
stage.
## 5.3 Standards and metadata
Texts are annotated using a Text Encoding Initiative (TEI) compliant
framework, GATE [3, 4], to create documents encoded with UTF-8, in GATE XML
format.
Annotations are linked to the knowledge base using URIs, and are searchable
using SPARQL.
**5.4 Data sharing conditions**
Data sharing varies according to the sharing conditions associated with the
underlying text collection.
## 5.5 Archiving and preservation
Archiving and preservation varies according to the archiving and preservation
arrangements associated with the underlying text collection.
**5.6 Licensing information**
Licensing varies according to the licensing of the underlying text collection.
# 6 HON Search Logs
**6.1 Name**
HONSearchLogs
## 6.2 Description
Search Engine Logs provided by the Health On the Net Foundation (HON). This
data set contains the query logs collected from various search engines
maintained by HON. The search engine logs have been collected over a period of
more than 3 years (since November 2011) and continue to be collected.
The search engine logs contain the following information:
* query term
* users’ IP address – which enables determining the geographical distribution of the search
* exact date and time of the query
* language
* information on the search engine used to perform the search (honSearch, honSelect, …)
* information on the link followed
## 6.3 Standards and metadata
The search logs will be provided in XML format, for which metadata will be
provided. An illustration of the format draft is given in Figure 1.
**Figure 1. Search Log format draft**
## 6.4 Data sharing conditions
This data set is provided by HON for the project partners. This data can be
used for analysis of users’ behaviour linked to the search engine usage.
To protect the users' personal data, the original content of the search logs
is modified by HON. This modification consists of masking part of the users'
IP address, while keeping the parts of the IP that enable analysis of the
users' approximate geographical location. In the format draft shown above, the
alterations of the original query logs are marked with “*”.
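A minimal sketch of such masking follows; keeping the first three octets is an
assumption made for illustration, and HON's actual masking scheme may differ.

```python
def mask_ip(ip: str) -> str:
    """Mask the last octet of an IPv4 address, keeping coarse geolocation."""
    octets = ip.split(".")
    octets[-1] = "*"
    return ".".join(octets)

print(mask_ip("192.0.2.147"))  # -> 192.0.2.*
```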
## 6.5 Archiving and preservation
The original search logs are archived and kept on HON premises for a period of
5 years. These archives consist of the original, untreated search logs.
Investigation is underway into the possibility of longer-term preservation of
the anonymised logs.
## 6.6 Licensing information
The HONSearchLogs will be made available on demand by the partners. The data
are distributed under the terms of the Creative Commons Attribution-ShareAlike
(CC-BY-SA), version 3.0 unported. A full description and explanation of the
licensing terms is available here:
_https://creativecommons.org/licenses/by-sa/3.0/_
# 7 TRIP Database Search Logs
**7.1 Name**
Trip Database search logs
## 7.2 Description
As users interact with the Trip Database (_https://www.tripdatabase.com_),
the site captures the user’s activity. It records search terms and articles
viewed. In addition, this data is linked to the user, so that information
about profession, geography, professional interests, etc. can be considered.
This may be useful in helping understand the search process, important
documents, linked concepts, etc.
There is considerable data going back multiple years, and it is constantly
being collected.
**7.3 Standards and metadata**
There are no official standards.
## 7.4 Data sharing conditions
The data can be shared with the KConnect consortia with prior permission.
Outside of KConnect the sharing of the data will be by negotiation.
Currently, the data needs to be requested from the Trip Database and
downloaded, but an API is being considered.
## 7.5 Archiving and preservation
The data is stored on the Trip servers and these are backed up and saved on a
daily basis. The production of the search logs is independent of the KConnect
project and is increasingly core to the development of the Trip Database. As
such the costs are seen as core to Trip.
**7.6 Licensing information**
There is currently no formal licensing information.
# 8 KCL Patient Records
**8.1 Name**
The South London and Maudsley NHS Foundation Trust (SLAM) Hospital Records
## 8.2 Description
The South London and Maudsley NHS Foundation Trust (SLAM) is the largest
provider of mental health services in Europe. The hospital electronic health
record (EHR), implemented in 2007, contains records for 250,000 patients in a
mixture of structured fields and over 18 million free-text fields.
At the NIHR Biomedical Research Centre for Mental Health and Unit for Dementia
at the Institute of Psychiatry, Psychology and Neuroscience (IOPPN), King’s
College London we have developed the Clinical Record Interactive Search
application (CRIS, _http://www.slam.nhs.uk/about/corefacilities/cris_ ) ,
which allows research use of the pseudonymised mental health electronic
records data (with ethics approval since 2008).
## 8.3 Standards and metadata
Through this model we will be able to provide access to a regular snapshot of
the complete set of pseudonymised records in XHTML format.
**8.4 Data sharing conditions**
Records can be accessed by collaborators either onsite or through a remote
secure connection.
**8.5 Archiving and preservation**
The record system is maintained by hospital IT services.
**8.6 Licensing information**
Data access is governed through a patient led oversight committee.
# 9 Qulturum Patient Records
**9.1 Name**
Region Jönköping County Patient Records
## 9.2 Description
Region Jönköping County (RJC) provides test data consisting of 50 fictitious
patient records with the same data structure and content that a real patient
record in Region Jönköping County’s electronic health record system (Cosmic)
would have.
RJC will also provide real patient records if needed. In order to do so, the
anonymisation process must be secured. The number of patient records that will
be provided depends on how much data is needed in order to develop and test
the KConnect solution. The first step of using test data instead of real
patient records is necessary in order to secure that the tool is correctly
adapted and implemented before real EHR content is used.
## 9.3 Standards and metadata
The use case(s) decided on will determine how data will be manipulated, e.g.
addition of metadata/annotations.
**9.4 Data sharing conditions**
Records data will be created, stored/archived, accessed, used, and kept secure
by Jönköping staff on site.
**9.5 Archiving and preservation**
An archiving and preservation plan is under development.
**9.6 Licensing information**
There is currently no formal licensing information.
# 10 Conclusion
This deliverable presents the initial version of the Data Management Plan for
the KConnect project. It identifies data that is being collected in the
KConnect project and allows missing information to be identified. An updated
version of the deliverable that will include the information currently not
available will be released as D6.3 in July 2016.
---

0832_GAIA-CLIM_640276.md
# Open Research Data, third (final) version, March 2018
**Project Name:** Gap Analysis for Integrated Atmospheric ECV Climate
Monitoring (GAIA-CLIM)
**Funder:** European Commission (Horizon 2020)
**Grant Title:** No 640276
# 1. Project brief description
The Gap Analysis for Integrated Atmospheric ECV Climate Monitoring (GAIA-CLIM)
Project endeavoured to establish improved methods for the characterisation of
satellite-based Earth Observation (EO) data by surface-based and sub-orbital
measurement platforms for six of the GCOS atmospheric Essential Climate
Variables (ECVs), namely, Temperature, Water Vapour, Ozone, Carbon Dioxide,
Methane, and Aerosols. GAIA-CLIM added value by:
* Objectively assessing and mapping existing measurement capabilities
* Improving traceability and uncertainty quantification on sub-orbital measurements;
* Quantifying co-location uncertainties between sub-orbital and satellite data;
* Using traceable measurements in data assimilation; and
* Providing co-location match-up data, metadata, and uncertainty estimates via a ‘virtual observatory’ facility.
The novel approach of GAIA-CLIM was to demonstrate comprehensive, traceable,
EO Cal/Val for a number of metrologically mature ECVs, in the domains of
atmospheric state and composition, that will guarantee that products are
assessable and intelligible to third-party users.
Further details on GAIA-CLIM’s project outcomes can be found at
_www.gaia-clim.eu_.
# 2. Outline of GAIA-CLIM’s policy for data management
GAIA-CLIM has been a member of the Open Data Pilot under H2020. The project
promoted the processing and sharing of data openly in support of project aims
of enhancing the long-term value of EO data for the scientific community. The
purpose of this Data Management Plan (DMP) is to document the collection and
use of data records that were managed within the GAIA-CLIM project. This third
and final version of the DMP, updated from previous versions D7.1 and D7.2,
reflects the final status of the project in relation to data produced and/or
collected. It provides a final agreed record of the data management policies
of GAIA-CLIM in respect of data dissemination.
This DMP ensured that:
* There has been a coherent and evolving approach as to what, specifically, is required on management of data by the consortium throughout the lifetime of the project, and after its completion.
* Project findings are publicly available both during and after the project (including the virtual observatory, which will continue to be accessible on EUMETSAT servers after the end of the project). In addition, that they represent a lasting legacy to the contributing observing networks, leading to improvements in data traceability and comparability of EO measurement systems.
* Data preservation strategies are in place in support of long-term use of project outcomes.
* Data usage by GAIA-CLIM respects conditions of use, policy on access, and intellectual property rights of the primary data collectors, including authorship and acknowledgement that accurately reflects the contributions of those involved.
It should be stressed that GAIA-CLIM constituted a rather particular case in
terms of data management as covered by the guidance pertaining to the
preparation of DMPs under the H2020 Pilot on Open Research Data. The project
did not directly collect primary data, i.e., make measurements for the sole
purpose of the project. Rather, it provided added value and additional
metadata to existing measurements (by optimizing the value of multiple sources
of primary data to enable traceable characterization of EO data) taken by both
consortium members under separate funding support and by third party
institutions. Therefore, and in line with the project objectives, as used in
this document, the term ‘project data’ refers to metadata and / or value-added
products, i.e. secondary data products arising from primary data created,
hosted, and managed by existing networks and stations, in order to improve
global capabilities to use non-satellite data to characterise space-borne
satellite measurement systems.
In this context, it is important to stress that GAIA-CLIM has retained only
that primary data used in its mission, which solely constitutes a small subset
of the primary data made available by the contributing observing networks. It
was never the intention of GAIA-CLIM to create or curate a comprehensive
archive from underlying primary observational networks, nor would it have been
practical to do so. Contributing networks retain primary Intellectual Property
Rights (IPR) and may well, in future, revise their data, data formats,
metadata etc. We note in particular that the C3S 311a Lot 3 activity 1 ,
instigated in 2017 (and led by GAIA-CLIM participants), constitutes an
operational service for accessing the baseline and reference in-situ data
holdings. This is a more appropriate mechanism to address the issue, providing
a sustainable long-term resource for multiple use cases.
Sharing of value added products derived from GAIA-CLIM is important to the
long-term study of EO sensor performance, validation of satellite-derived data
products, and to maximize their value in climate applications. In this regard,
this DMP focusses primarily upon supporting the scientific findings of the
project, which has been actively working to create appropriate linkages, and
maximize the availability and utility of the data and tools produced after the
project. The release of the data and associated products into the public
domain has been the _de facto_ policy, ensuring usability and long-term
availability of data to scientific and public audiences alike. Collaboration
with internal and external project partners will remain ongoing to ensure this
takes place on a best-endeavours basis post-project cessation.
The main way of actually serving data from the GAIA-CLIM project is through
its ‘virtual observatory’ tool 2 , developed and hosted by TUT and EUMETSAT.
The virtual observatory is intended to provide the user with access to both
metadata and observational data from different ground-based reference networks
with co-located satellite data. GAIA-CLIM has used measurements of
metrological “reference quality” (from the GRUAN and NDACC networks), which
are traceable and have well-quantified uncertainty estimates, as well as
measurements (from AERONET) for which sufficient evidence to assess
reference-quality status has not yet been provided, but which are close to
that status. A
full listing of contributing observations is available at the end of this
document under Annex 2. Importantly, GAIA-CLIM only made use of those primary
observations to which no academic restrictions to use, reuse, and re-
distribution currently apply. The providers of primary data from these
networks have thereby either implicitly or explicitly agreed to release that
portion of their data which we have utilised according to this DMP. Similarly,
GAIA-CLIM has only used satellite data available for re-use and
redistribution. Furthermore, re-analysis and Numerical Weather Prediction
(NWP) data are also part of the virtual observatory. Such data arose from
within the consortium (ECMWF and MO partners under WP4) without restriction.
However, GAIA-CLIM work in many cases built upon pre-existing capabilities of
the partners. In a restricted subset of these cases, IPR restrictions relate
to these background materials as articulated in the Consortium Agreement (cf.
Annex 1). A list of co-located datasets available in the virtual observatory
is given in Annex 3.
The virtual observatory data policy is made explicit and is in compliance with
the H2020 Pilot on Open Research Data (s. next section) and this DMP.
Project parts that dealt with enhancing existing primary data streams were:
* Preparation and assessment of reference-quality non-satellite data (including in global assimilation systems) and characterisation of key satellite datasets:
1. Assessment of several new satellite missions, using data assimilation of reference-quality non-satellite measurements, targeting temperature and humidity (under work package 4).
2. Development of infrastructure to deliver quantified uncertainties for reference-data colocations with satellite measurements (under work packages 3 and 5).
3. Development of capabilities for preparation, monitoring, analysis, and evaluation of reference-quality data (under work packages 2 and 5).
4. Development of a general methodology for using reference-quality non-satellite data for the characterisation of EO data (under work packages 4 and 5).
* Creation and population of a virtual observatory:
1. Creation of a collocation database between EO measures and reference-quality measurements.
2. Adoption of ISO/WIGOS and ESA-CCI standards for observational metadata in the virtual observatory.
3. Preparation of data to enable comparisons, including relevant uncertainty information and metadata for users to understand and make appropriate use of the data for various applications.
4. Creation of data interrogation and visualization tools for both non-satellite and satellite observing capabilities, building upon existing European and global infrastructure capabilities offered by partners and in-kind collaborators.
5. Planning for the potential transition of the resulting virtual observatory from research to operational status in support of the Copernicus Climate Change Service (C3S) and Copernicus Atmospheric Monitoring Service (CAMS).
# 3. Pilot on Open Research Data
GAIA-CLIM participated in the H2020 Pilot on Open Research Data. Knowledge
generated during the project has been shared openly. Any milestones,
deliverables, or technical documents produced, which were deemed public in the
Grant Agreement, have all been published online and made discoverable. Peer-
reviewed publications have all been submitted to journals that are either open
access or allow the authors to pay for the articles to be made open access
(for such instances, the additional charges have been paid).
# 4. Dissemination and Exploitation of Results
In order to maximize the benefit and usability of project findings, GAIA-CLIM
incorporated a strong focus on user interaction throughout the life cycle of
the project. The virtual observatory, as a key outcome of the project, is the
primary means of dissemination of data and associated project findings through
which end-users are able to access, visualize, and utilize the outputs of the
project. The virtual observatory built upon and extended a number of existing
facilities operated by project partners, which already undertook subsets of
the desired functionality, specifically:
* the Network of Remote Sensing Ground-Based Observations in support of the Copernicus Atmospheric Service (NORS);
* the Cloud-Aerosol-Water-Radiation Interactions (ICARE) Project;
* the US National Oceanic and Atmospheric Administration (NOAA) Products Validation System (NPROVS).
The resulting virtual observatory facility is entirely open and available to
use for any application area. All downloaded data are provided in NetCDF-4
format and use the Climate and Forecast (CF) metadata convention. Significant
efforts have been made to build an interface that is easy to use and which
makes data discovery, visualization, and analysis user-friendly. The virtual
observatory work package included a specific task dedicated to documenting the
steps required to transition this facility from a research to an operational
framework with a view to constituting a long-term infrastructure (see
deliverable D5.8 Transition roadmap for the virtual observatory 3 ).
The GAIA-CLIM website 4 represents the public interface of the project and
shall remain available for at least five years after the end of GAIA-CLIM (on
a best endeavours maintenance basis from NERSC). It provides an overview of
the main results, activities by work package, as well as an open portal to
disseminate information on project outcomes such as publications, peer-
reviewed journal articles, and project deliverables. This particularly
includes access to:
* The _Library of (1) smoothing/sampling error estimates for key atmospheric composition measurement systems, and (2) smoothing/sampling error estimates for key data comparisons_,
* The _Product Traceability and Uncertainty_ (PTU) documents developed by WP2.
* The full _list of gaps_ in knowledge and observing capability identified within the scope of GAIA-CLIM;
* The _virtual observatory_ , and
* The ‘ _GRUAN Processor_ ’ tool.
# 5. Preservation of value added data products
The value-added tools and data products GAIA-CLIM has retained and made
available consist of:
* Metadata collected under work package 1 relating to networks and their measurement maturity. This included:
o Station location metadata to ISO standard 19115;
o Measurement system metadata;
o Visualisation capabilities;
o 3D-tool design for the visualisation of existing measurements online;
o Measurement maturity assessment metadata;
o Observational metadata following WIGOS and ESA-CCI standards.
* Selected primary data that meets co-location criteria and is deemed reference quality or almost reference quality from underlying networks. (work package 2)
* Co-location uncertainty information arising from a variety of approaches, including statistically based and dynamically based estimation (work packages WP3 and WP4).
* The “GRUAN Processor” tools to convert from geophysical to Top of the Atmosphere (TOA) radiance (WP4).
* Capabilities to visualize, subset, and analyse the co-location database (work package WP5).
Most of the above products and capabilities have been hosted and preserved at
the _www.gaia-clim.eu_ domain. Those tools and capabilities preserved
elsewhere are described in subsequent sub-sections.
Were follow-on support to become available, new data and services could be
appended and the facilities made operational. These new data and capabilities
would be subject to the data policies of the provider and funder at that time.
GAIA-CLIM undertakes solely to preserve and make available the data and
functionalities created during the project lifetime.
### 5.1 Virtual observatory
The virtual observatory constitutes the primary means of dissemination of
project results. The virtual observatory facility is entirely open and
available to use for any application area. Data versioning, source locations,
and any DOIs from the primary data sources are retained. Significant efforts
were undertaken to collaborate with existing European and international
programs with similar aims in order to produce a facility which makes data
discovery, visualization, and analysis easy and useful for the end users,
while optimizing the use of reference data. The objective was to develop an
interface that uses software tools to deliver products in standard formats,
such as NetCDF that are compliant with CF conventions. Such formats are self-
describing and provide a definitive description of what the data in each
variable represents, as well as the spatial and temporal properties of the
data. This enables users of data from different sources to decide which
quantities are comparable, and facilitates building applications with powerful
extraction and display capabilities. In turn, this may support the relevance
and sustainability of the facility, or aspects thereof, into the future, as
well as effective data preservation.
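For example, a downloaded co-location file could be inspected with standard
NetCDF tooling as sketched below; the file name is a placeholder, and the
variables and attributes printed are whatever the CF-compliant files served by
the virtual observatory actually define.

```python
from netCDF4 import Dataset  # requires the netCDF4 package

with Dataset("colocation.nc") as nc:  # assumed file name
    print(getattr(nc, "Conventions", "unknown"))  # e.g. "CF-1.6"
    for name, var in nc.variables.items():
        # CF attributes make each variable self-describing.
        units = getattr(var, "units", "n/a")
        long_name = getattr(var, "long_name", name)
        print(f"{name}: {long_name} [{units}]")
```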
The final data products will be kept beyond the lifetime of the project
through the virtual observatory, which shall be hosted by EUMETSAT. It is
important to stress that the virtual observatory is solely a demonstrator
project and therefore the GAIA-CLIM data, documentation, and functionalities
will be retained in “frozen” mode in the state they existed in at the end of
the project with the aim of becoming further developed and integrated into the
emerging Copernicus or any other service. If continued in this way, data and
software distribution policies of the respective service will be applied in
the long-term.
The virtual observatory has been designed and developed as a traditional
client-server application. Visible to the user is the graphical user interface
(GUI) that allows the user to interact with the components of the virtual
observatory. The GUI is used to send queries to retrieve data that result in
graphical displays. The data themselves are stored in a non-relational
database that holds all the data including the uncertainties of the
measurements and the co-locations available in the virtual observatory. The
non-relational database is very versatile, allowing easy addition of new data
of various types to the virtual observatory and makes extensions in the future
relatively simple, should the virtual observatory be further developed and
made operational.
### 5.2 Metadata discovery tool
The metadata discovery and visualization tool is planned to be maintained by
CNR over the long term, including annual updates of the metadata and the GUI.
The discovery and observational metadata was realised in PostgreSQL and then
linked to a MongoDB via the GeoServer platform, all of which are open source.
The 3D tool uses the Cesium JavaScript library, which is distributed under the
Apache 2.0 license agreement. To comply with these conditions of use, the
Cesium logo and a link to its website are present in our 3D tool. The source
code of the 3D tool itself is openly available on GitHub 5, including
documentation 6.
### 5.3 Library of smoothing/sampling uncertainties
Research was undertaken within GAIA-CLIM to improve quantification of the co-
location mismatch uncertainties. Several methods have been developed for - and
applied to - the quantification of smoothing and sampling issues in a range of
atmospheric ground-based measurement techniques, and to estimate the
uncertainties that need to be taken into account when comparing non-perfectly
co-located ground-based and satellite measurements with different spatio-
temporal smoothing and sampling properties. The resulting software and Look-up
tables that constitute input to the virtual observatory have been documented
and shared openly without restriction. The actual guiding material and data
files are hosted by BIRA and available for download by anonymous/guest ftp at:
_ftp://ftp-ae.oma.be/dist/GAIA-CLIM/D3_6/_
### 5.4 GRUAN processor
A stand-alone ‘GRUAN Processor’ module has been developed based on a core
radiative transfer modelling capability built around two existing open-source
software packages (EUMETSAT’s NWP SAF 7 RTTOV fast radiative transfer model
and Radiance Simulator). This software, referred to as the GRUAN Processor,
enables the comparison of collocated geophysical fields and simulated
brightness temperature between radiosonde and model fields.
The Processor will allow improved NWP-based calibration, recalibration, and
validation of satellite instruments thanks to robust channel-by-channel
uncertainty estimates. In addition, it is expected to serve as a long-term,
semi-automatic, monitoring tool for the Met Office NWP global model. The
integration and automation of the Processor in the Met Office system is
expected to take place during the fiscal year 2018/2019. It is at the
discretion of ECMWF to use their copy of the Processor (installed and used
within the scope of the GAIA-CLIM project), and/or its future versions, as an
additional monitoring tool in their system.
This work is in line with the Copernicus CAMS and C3S streams and has the
potential to become an operational Copernicus service with data and graphic-
based monitoring available from the Copernicus portal. The Processor post-
processed outputs are publicly available on a demonstrator web-page 7 hosted
by NWP SAF.
# 6. Primary source datasets used within GAIA-CLIM
The final contributing networks to the virtual observatory were as follows:
1. GRUAN,
2. NDACC, and
3. AERONET.
Further details on these networks, their governance, and their data policies
are given in Annex 2. A list of co-located datasets available in the virtual
observatory is given in Annex 3. As primary data collectors, these networks
have assessed data quality, integrity, originality, and content prior to
publishing. Whilst GAIA-CLIM activities under work package 2 led to changes
in how a subset of the data are processed by the underlying networks,
GAIA-CLIM was a user, not a provider, of these primary data products.
As described previously, GAIA-CLIM has respected the data policies and
practices of the data originators/custodians, and the documentation herein
should not be taken to imply advocacy for changing their existing policies.
Rather, it is important to note that GAIA-CLIM activities and this DMP work
alongside and document existing practices that pertain to the source data.
Where networks have data policies that place restrictions on near-real-time
use, GAIA-CLIM has only used the open delayed-mode data.
# 7. Summary
This Data Management Plan presents the data management policy that has been
used by the GAIA-CLIM partners in their collection and use of data records
during the lifespan of the project. This third and final DMP version has
evolved to reflect the conclusion of the project. Since there was no primary
data produced under the GAIA-CLIM project, the data management policy relates
to metadata and added-value products produced in GAIA-CLIM for existing and
future measurements. This is in keeping with the over-arching objectives of
GAIA-CLIM to provide the necessary methodologies for the characterization of
satellite-based EO data using surface and sub-orbital measurements.