0226_3D-MUSE_780548.md
# Fair Data

## Making data findable

Efforts will be made to make metadata available even where the data itself will not be. For example, properties of the technology and IP that are relevant to potential users can, with the consent of the IP holder, be advertised through our commercial partners and/or spin-offs from the project. Datasets related to scientific publications shall be referenced in those publications, made available in an open-access online data repository, and linked from the project home page (http://www.3dmuse.eu).

* Identification: yet to be determined
* Naming: yet to be determined
* Keywords: yet to be determined
* Versioning: yet to be determined
* Metadata standards: field-specific standards will be used

## Making data openly accessible

High-level descriptions of IC design files and the 3D sequential integration process (circuit properties, circuit characterization by measurements) may be made openly available, pending an evaluation of (a) the quality of the data and (b) IP issues. No other data will be released to the public. The method of data access has not yet been determined, nor has its location. The question of licensing has not yet been settled.

## Making data openly interoperable

The integrated circuit design files and PDKs shall be in data formats compatible with generally used design tools that are available to all project partners and commonly used in industry and academia (e.g. Cadence, Synopsys, Mentor). This ensures easy re-use internally, as well as the possibility of licensing this IP to external customers or partners. Measurement data will most likely be stored in a format that can be loaded into Matlab, a widespread tool for data analysis (see the illustrative sketch at the end of this plan).

## Increase data re-use (through clarifying licenses)

The re-use of data outside of the consortium, including licensing models, will be determined by the project at a later date.

**Allocation of resources**

No major costs related to data management are foreseen.

# Data Security

All project partners already operate with a high level of data security. No person-sensitive data is collected or generated, while all proprietary, business-sensitive data is handled according to strict internal rules and regulations.

**Ethical Aspects**

No ethical aspects have been identified.

# Other

The project is participating in the EC Open Data pilot and will adhere to the stipulated regulations.
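To illustrate the Matlab-compatible route mentioned above, the following is a minimal sketch only: the array names, values and file name are hypothetical, and the final format has yet to be determined. Measurement data held as numeric arrays in Python can be exported to a Matlab-loadable `.mat` file with SciPy:

```python
# Minimal sketch: exporting measurement data to a Matlab-loadable .mat file.
# Assumes NumPy/SciPy are available; all names and values are illustrative.
import numpy as np
from scipy.io import savemat

# Hypothetical circuit characterization measurements
bias_voltage = np.linspace(0.0, 1.2, 121)          # V
drain_current = 1e-6 * np.exp(5.0 * bias_voltage)  # A, placeholder model

# Matlab sees each dict key as a workspace variable after load('iv_sweep.mat')
savemat("iv_sweep.mat", {
    "bias_voltage": bias_voltage,
    "drain_current": drain_current,
})
```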
0227_HERCULES_688860.md
# 1\. EXECUTIVE SUMMARY

The aim of this deliverable is to set forth the project management plan: to describe the role of each project organization, identify the responsibilities of each project member, schedule the deliverables and the project reporting periods, and define the work of the quality control and risk mitigation committee. The deliverable will serve as a consultation guideline for the partners involved in the execution of the project under G.A. N° 688860, ensuring the fulfilment of the management plan and work organization and a sound management and execution of the project activities. The main scope of this deliverable is to provide:

* Key information on the consortium structure, responsibilities and decision mechanisms
* Easy-to-use instructions on administration procedures and interim reporting to the European Commission
* A brief description of the main Intellectual Property Rights (IPR) and exploitation terms

The present document is complemented by D7.5 "Collaboration and Communication Tools", which describes all the tools and communication channels that have been set up to promote and support cooperation within the project and to assure a common understanding of the project's progress and of the challenges the project will face.

# 2\. INTRODUCTION

## 2.1. HERCULES main project data

The following table sums up the main HERCULES project data that all partners may draw on, whenever requested, to provide the project factsheet:

Table 1. HERCULES main data

<table>
<tr> <th> Acronym </th> <th> HERCULES </th> </tr>
<tr> <td> Grant agreement number </td> <td> 688860 </td> </tr>
<tr> <td> Starting date </td> <td> 01/01/2016 </td> </tr>
<tr> <td> Duration </td> <td> 36 m </td> </tr>
<tr> <td> Type of Action </td> <td> IA </td> </tr>
<tr> <td> Budget </td> <td> 3.261.443,47 € </td> </tr>
<tr> <td> EC Grant </td> <td> 2.072.300,00 € </td> </tr>
<tr> <td> PO </td> <td> Mr. Sandro D’Elia </td> </tr>
</table>

## 2.2. HERCULES objectives

HERCULES intends to obtain an order-of-magnitude improvement in the cost and power consumption of next-generation real-time applications by making use of state-of-the-art multi-core scheduling techniques recently proposed in the real-time research community. Such an ambitious goal will be achieved through a multi-layered optimization involving different components of the architectural stack:

1. Selection of the most suitable COTS heterogeneous hardware platforms available on the market, considering important factors such as performance/cost, power consumption, programmability and predictability. [ARM’s big.LITTLE architecture, Nvidia Tegra X1, …]
2. Introduction of a multi-core RTOS for flexible and predictable resource management, covering not only computing cores but also, most notably, memories and buses, from the perspectives of both timing and power. Indeed, as the number of computing cores rapidly increases to hundreds and thousands of units, the “scarce resources” of interest to the scheduler are not the cores but the bandwidth of memories, buses and networks. This imposes new challenges on the RTOS, which needs to manage these resources efficiently to achieve the required performance in terms of timing and power consumption. [ERIKA Enterprise RTOS, Linux sched_deadline, …]
3. Refactoring and parallelization of the target applications using parallel programming models suitable for embedded systems.
In particular, the focus will be on models that can exploit lightweight runtimes for the management of parallelism and task synchronization, without requiring complex routines that would negatively affect memory occupation, bus bandwidth and power. [OpenCL, CUDA, pthreads, lightweight OpenMP, …]

The project selected two target applications characterized by both _high-performance and real-time requirements_, one from the automotive and one from the avionics domain.

_Automotive application_: an autonomous driving system jointly developed by Pitom snc and Magneti Marelli.

_Avionic application_: a visual-awareness system for monitoring activities of an airplane, developed by Airbus.

## 2.3. HERCULES partners

## Table 3. GANTT

<table>
<tr> <th> **Year** </th> <th> **Month** </th> </tr>
<tr> <td> 2016 </td> <td> Jan </td> <td> Feb </td> <td> Mar </td> <td> Apr </td> <td> May </td> <td> Jun </td> <td> Jul </td> <td> Aug </td> <td> Sep </td> <td> Oct </td> <td> Nov </td> <td> Dec </td> </tr>
<tr> <td> </td> <td> M1 </td> <td> M2 </td> <td> M3 </td> <td> M4 </td> <td> M5 </td> <td> M6 </td> <td> M7 </td> <td> M8 </td> <td> M9 </td> <td> M10 </td> <td> M11 </td> <td> M12 </td> </tr>
<tr> <td> 2017 </td> <td> Jan </td> <td> Feb </td> <td> Mar </td> <td> Apr </td> <td> May </td> <td> Jun </td> <td> Jul </td> <td> Aug </td> <td> Sep </td> <td> Oct </td> <td> Nov </td> <td> Dec </td> </tr>
<tr> <td> </td> <td> M13 </td> <td> M14 </td> <td> M15 </td> <td> M16 </td> <td> M17 </td> <td> M18 </td> <td> M19 </td> <td> M20 </td> <td> M21 </td> <td> M22 </td> <td> M23 </td> <td> M24 </td> </tr>
<tr> <td> 2018 </td> <td> Jan </td> <td> Feb </td> <td> Mar </td> <td> Apr </td> <td> May </td> <td> Jun </td> <td> Jul </td> <td> Aug </td> <td> Sep </td> <td> Oct </td> <td> Nov </td> <td> Dec </td> </tr>
<tr> <td> </td> <td> M25 </td> <td> M26 </td> <td> M27 </td> <td> M28 </td> <td> M29 </td> <td> M30 </td> <td> M31 </td> <td> M32 </td> <td> M33 </td> <td> M34 </td> <td> M35 </td> <td> M36 </td> </tr>
</table>

***Note:** Official deadline for submission: when a deadline (e.g. for a deliverable or progress report) is expressed as M X, it falls on the last working day of the numbered month.

# 3\. PROJECT LEGAL DOCUMENTS

The documents most relevant for daily project management are the project’s legally binding documents, which are of three types:

1\. _Grant Agreement_ N° 688860: the agreement signed by each project partner with the European Commission via the Participant Portal; it is subject to the H2020 set of rules described in the AGA (Annotated Model Grant Agreement), available online at: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/amga/h2020-amga_en.pdf_

The Grant Agreement is composed of six annexes:

* Annex 1: Description of the action (DoA)
* Annex 2: Estimated budget for the action
* Annex 3: Accession forms (required for any amendment)
* Annex 4: Model for the financial statements (Form C)
* Annex 5: Model for the certificate on the financial statements (required by the auditor for the financial control)
* Annex 6: Model for the certificate on the methodology (required by the auditor for the financial control)

In case of amendment, the process will be the following:

* Formal agreement of the consortium, with two options: 1. virtual agreement (email, online poll) of the EB (Executive Board) members, or 2.
consensus during an EB meeting (recorded in the minutes, which will be approved by the whole consortium).
* UNIMORE, as project coordinator, will prepare the amendment, request the prior approval of the PO and send it to the EC via the Participant Portal.
* After approval, the changes will be automatically incorporated into the new DoA, and the new version will be downloadable from the Participant Portal.

2. _Consortium Agreement_: a contract complementary to the G.A., signed among the G.A. beneficiaries. It is subject to the G.A. and is mainly oriented to regulating the day-to-day running of the consortium and preventing conflicts or contingency situations.

3. _Industrial Advisory Board Non-Disclosure Agreement_: a confidentiality agreement between the project beneficiaries and the members of the IAB. For simplicity, the project beneficiaries have agreed to give UNIMORE (PC) a mandate to sign a Collaboration Agreement, which includes an NDA, with each IAB member on behalf of all the project partners.

# 4\. PROJECT ORGANIZATION

The HERCULES project involves 7 organizations from 4 different countries. The consortium includes 3 universities (University of Modena and Reggio Emilia, ETH Zurich, CTU Prague), 2 SMEs (Pitom snc, Evidence srl) and 2 large industrial companies (Magneti Marelli, Airbus). The consortium has been selected to provide the set of expertise required to achieve the intended results; each project partner has been clearly identified for a required role:

* _Application providers:_ Pitom snc, Magneti Marelli, Airbus
* _OEM/Tier 1/system integrators:_ Magneti Marelli, Airbus
* _Programming model and runtime:_ ETH Zurich
* _RTOS support:_ Evidence srl
* _Real-time scheduling and schedulability analysis:_ UNIMORE
* _Data transfers and memory management optimization:_ CTU Prague

**Figure 1. HERCULES Partner Composition**

## 4.1. HERCULES management bodies

The management and governance structure and procedures are described both in Annex I of the G.A. (section 3.2, Management structure and procedures) and in the Consortium Agreement (section 6, Governance Structure). The Executive Board members listed in section 3.2 of the G.A. have already been confirmed. The following sections only summarize the different bodies, as the main description is already included in the above-mentioned documents:

**Figure 2. HERCULES governance structure**

## 4.2. Project Coordinator (PC)

The Project Coordinator and Technical Manager (TM) is Prof. Marko Bertogna (UNIMORE), responsible for both scientific and managerial tasks and for all communication with the European Project Officer. The main tasks of the TM are, among others:

* To ensure that all goals and strategic objectives are met and, most importantly, that they are in line with the ICT programme and its strategic objectives.
* To define and supervise technical strategies and mid/long-term goals, and to identify short-term sub-projects for single partners, if needed.
* To chair both the Executive Board and the Industrial Advisory Board, as described below.
* To work in close contact with the Project Manager (see below).

The Project Coordinator will be supported by an external Project Manager (PM), Dr. Francesco Guaraldi, who will be the interface between the consortium and the EU officers. He is responsible for the timely completion of deliverables and project objectives, and is in charge of monitoring the progress of each task in each work package.
He will work in very close contact with the Work Package Leaders to monitor their work, identify potential risks that could harm the project, and immediately propose corrective actions; for this reason, he will also be in direct contact with the Executive Board. If parts of the project are at risk, he will identify and propose corrective actions to return HERCULES to a safe state. The PM is also in charge of producing documentation, both for internal use (e.g. minutes of meetings) and for external use, such as the periodic reports and the financial statements. He will interact with the Financial Department of UNIMORE for the timely distribution of the allocated budget to each partner.

Contact details

<table>
<tr> <th> **Role** </th> <th> **Name** </th> <th> **Email address** </th> </tr>
<tr> <td> Project Coordinator </td> <td> Marko Bertogna </td> <td> [email protected] </td> </tr>
<tr> <td> Project Manager </td> <td> Francesco Guaraldi </td> <td> [email protected] </td> </tr>
<tr> <td> Dissemination Manager </td> <td> Michal Sojka </td> <td> [email protected] </td> </tr>
<tr> <td> Exploitation Manager </td> <td> Roberto Mati </td> <td> [email protected] </td> </tr>
</table>

## 4.3. Executive Board

The Executive Board is the decision-making body in charge of directing the project; it takes all strategic decisions by a qualified majority of two-thirds (2/3). Each partner appoints a member, who may delegate another member of the same organization if needed. Each member of the EB has one vote; in the case of a tie, the PC holds the casting vote.

## Table 4. EB members list

<table>
<tr> <th> **N°** </th> <th> **Partner** </th> <th> **Name** </th> <th> **Surname** </th> </tr>
<tr> <td> PP1 </td> <td> UNIMORE </td> <td> Marko </td> <td> Bertogna </td> </tr>
<tr> <td> PP2 </td> <td> CTU </td> <td> Zdenek </td> <td> Hanzalek </td> </tr>
<tr> <td> PP3 </td> <td> ETHZ </td> <td> Luca </td> <td> Benini </td> </tr>
<tr> <td> PP4 </td> <td> EVI </td> <td> Paolo </td> <td> Gai </td> </tr>
<tr> <td> PP5 </td> <td> PITOM </td> <td> Roberto </td> <td> Mati </td> </tr>
<tr> <td> PP6 </td> <td> AB </td> <td> Klaus </td> <td> Schertler </td> </tr>
<tr> <td> PP7 </td> <td> MM </td> <td> Valerio </td> <td> Giorgetta </td> </tr>
</table>

### 4.4. General Assembly

All project partners are members of the General Assembly. The main task of the GA is to discuss with the WP leaders the technical and scientific progress of every work package. The assembly will discuss short- and middle-term action plans to reach deliverables/milestones on time or, in the case of unexpected risks or delays, propose to the EB any contingency measures to be adopted.

### 4.5. WP leaders

Each WP leader will coordinate the work within the assigned tasks, controlling the quality of the deliverables, ensuring correct interaction among partners and keeping the PC and PM informed of project updates and progress. WP leaders will also be responsible for the peer review and quality control of the deliverables produced by other work packages, according to the tables below. Work package leaders will continuously monitor the progress of their WP and identify critical issues to report to the Executive Board. They will actively participate in the project meetings, preparing presentations of the technical advances and financial status of their WP, if needed. The WPLs may nominate separate task leaders when necessary.
The main role of WP leaders is to distribute the workload among the partners participating in the WP (including themselves) and to supervise it to ensure timely and qualitatively sufficient delivery of the related tasks.

### 4.6. Industrial Advisory Board

The project is driven by a clear industrial demand in the embedded computing domains. To understand and capitalize on this demand, the project will take advantage of an Industrial Advisory Board (IAB) made up of senior members of key industrial/manufacturing companies, such as: BMW, Porsche, Continental Automotive, Autoliv, Finmeccanica, Selex ES, Honeywell, MBDA, Nvidia, ARM, Tom’s Hardware, Codeplay, Volkswagen, IMA, SACMI, Yanmar, Topcon. The role of the IAB is to monitor HERCULES with respect to the industrial/market domain and to the needs of each IAB member throughout the lifetime of the project. In more detail:

* In the first stage of the project, IAB members will review and discuss the requirement analysis of the project members, identifying and prioritizing the most important requirements. They can also propose (non-binding) modifications to the requirements, if needed.
* After the first half of the project (M10, M30), they will ensure that the project remains on focus, and they will constantly monitor the requirement priorities identified in the earlier stage to check whether they still apply to the individual market segment of each IAB member.
* In the last stages of the project (the last 6 months), IAB members will validate the results and achievements of HERCULES and check that they are in line with the requirements and priorities previously defined.

New members may join the IAB during the project’s timespan. The acceptance of a new IAB member must be approved by the EB and may not increase the budget assigned to the project. One of the main goals of HERCULES is to let industry effectively exploit the technology and the scientific research produced by the consortium. IAB members are highly valuable and cost-effective assets of the project, and their feedback is extremely beneficial in encouraging tight cooperation and the exchange of ideas between the HERCULES consortium and industry.

### 4.7. Conflict resolution mechanism

Conflict resolution is described in both the G.A. and the C.A. In addition, to prevent conflicts, the following measures have been put in place:

1. The structure of the consortium has been clearly defined from the beginning. Each WPL is responsible for ensuring the timely completion of all the tasks and deliverables of the corresponding work package, within the corresponding budget and to the required quality.
2. To avoid conflicts and overlapping responsibilities among the partners, WP leaders have been chosen to ensure the best match between each partner’s main expertise and the corresponding WP goals, so that each work package is led by the most suitable partner.
3. Each leader is responsible for his work package, maximizing productivity and removing potentially time-consuming inter-partner communication from the management process.
4. Decisions are taken by consensus. If two or more partners cannot reach a decision, the EB will step in and decide. If necessary, the EC’s authorization will be sought.

# 5\. QUALITY MANAGEMENT

The consortium has set up a two-stage peer-review cross-control procedure to assess the quality of the results and the progress of the HERCULES technical deliverables (see Table 5).
This process involves all partners, as each of them has the responsibility to monitor the progress and deliverables of two other WPs, so that each WP leader receives peer reviews and quality control from two other partners. Once a deliverable has been amended by the original author and cleared by the two reviewers, it passes to the PC for a final review before submission. This process will enable the partners to detect risks and potential deviations from the work plan at an early stage, allowing corrective actions and a contingency plan to be applied in due time. Moreover, this system promotes a redistribution of responsibilities among all partners, giving each partner flexibility and clear responsibilities. It will also facilitate exchange and interaction among work packages and strengthen each WP leader’s commitment. This cross-control will reduce the need for contingency actions, reducing the bureaucratic overhead that the EB might otherwise face in the absence of quality control.

Table 5. Monitor and Review Committee

<table>
<tr> <th> WP N° </th> <th> WP Leader </th> <th> Reviewer 1 </th> <th> Reviewer 2 </th> </tr>
<tr> <td> 1_Application </td> <td> AGI </td> <td> EVI </td> <td> UNIMORE </td> </tr>
<tr> <td> 2_Architecture </td> <td> MM </td> <td> ETHZ </td> <td> EVI </td> </tr>
<tr> <td> 3_Programming Model </td> <td> ETHZ </td> <td> CTU </td> <td> PITOM </td> </tr>
<tr> <td> 4_RTOS </td> <td> EVI </td> <td> CTU </td> <td> PITOM </td> </tr>
<tr> <td> 5_Scheduling </td> <td> CTU </td> <td> ETHZ </td> <td> MM </td> </tr>
<tr> <td> 6_Diss&Expl </td> <td> PIT </td> <td> AGI </td> <td> UNIMORE </td> </tr>
<tr> <td> 7_MGT </td> <td> UNIMORE </td> <td> MM </td> <td> AGI </td> </tr>
</table>

# 6\. CONFIDENTIALITY AND NON-DISCLOSURE OF INFORMATION

All information generated by the project, and all information exchanged among partners in connection with project activities, is subject to confidentiality obligations. It is therefore crucial to keep in mind the confidentiality legal framework: Art. 36 of the G.A. and Section 10 of the C.A. To sum up the main points:

* The confidentiality and non-disclosure obligations apply for the whole period of project implementation and for four years after the end of the project (Art. 10.2 of the C.A.).
* Each partner is responsible for compliance with the confidentiality obligations by its employees or third parties involved in the project, and shall ensure that they remain so obliged, as far as legally possible, during and after the end of the project and/or after the end of the contractual relationship with the employee or third party (Art. 10.3 of the C.A.).
* Non-compliance with the confidentiality obligations may constitute a breach; in severe cases the General Assembly may declare the party a defaulting party and decide to terminate its participation and reduce its assigned grant.

Given the strong interest of IAB members in the exploitation of the project results, it is agreed that no confidential information will be disclosed to them without prior written agreement among the partners. Moreover, each IAB member, under Art. 3 of the signed Collaboration Agreement, agrees to treat any disclosed information with at least the same degree of protection with which it treats its own confidential information, for a period of five (5) years from the date it is communicated.
Project partners must therefore apply this confidentiality condition to project communications, including the project deliverables. For this reason, the deliverables of the HERCULES project have been classified according to the EC’s classification in the DoA:

* Public
* Restricted to members of the consortium and the European Commission services

# 7\. INTERNAL, INTERIM AND FINAL REPORTS

Partners are required to submit different types of progress reports. Internal management reports are to be drafted every 6 months from the beginning of the project, as set out in the DoA and C.A. They are part of the project management plan for internal use, allowing the progress of the technical work and the adequacy of the use of planned resources to be checked against the financial progress. The documents should be provided by email to the project coordinator and PM, following the technical-progress and financial-progress templates. See Table 6 below.

## Table 6. Internal management reports

<table>
<tr> <th> Description </th> <th> Verify the technical and financial progress against the planned resources and the Gantt chart </th> </tr>
<tr> <td> Objective </td> <td> Monitor the adequate scientific and financial management of the project, to detect risks and avoid major deviations. </td> </tr>
<tr> <td> Scheduling </td> <td> Every six months </td> </tr>
<tr> <td> Required by </td> <td> PC and PM (Marko Bertogna and Francesco Guaraldi) </td> </tr>
<tr> <td> Submission </td> <td> Within the month following each six-month period </td> </tr>
<tr> <td> Content </td> <td> * Overview of the technical progress of each partner’s activities * Overview of the WP progress * Overview of the financial progress </td> </tr>
</table>

Interim and final reports are requested by the EC under Art. 20 of the G.A.:

_Interim report_: from month 1 (1 Jan. 2016) to month 18 (30 Jun. 2017)

_Final report_: from month 19 (1 Jul. 2017) to month 36 (31 Dec. 2018)

All partners must send the financial and technical information for the official periodic report to the PC by the last working day of the closing month. After receiving and analysing the reports, the coordinator must submit a periodic report within 60 days of the end of each reporting period.

## Table 7. Interim and final report

<table>
<tr> <th> Description </th> <th> The scientific report contains: an overview of the progress towards the objectives of the action, including milestones and deliverables; a summary for publication by the Commission; and an updated “plan for the exploitation and dissemination of the results”. The financial report contains: individual financial statements, an explanation of the use of resources, and a periodic summary financial statement together with the request for interim payment. </th> </tr>
<tr> <td> Objective </td> <td> Provide the EC with all the scientific and financial details regarding the progress and cost of the project during each period. </td> </tr>
<tr> <td> Scheduling </td> <td> Reporting periods identified in Art. 20 of the G.A., at M18 and M36 </td> </tr>
<tr> <td> Required by </td> <td> PC and PM (Marko Bertogna and Francesco Guaraldi) and PO </td> </tr>
<tr> <td> Submission </td> <td> All partners, after prior approval by the PC, will submit their parts via the Participant Portal. The coordinator will submit the overall report within 60 days of the end of the period.
</td> </tr>
<tr> <td> Content </td> <td> * Overview of the technical progress of each partner’s activities * Overview of the WP progress * Overview of the financial progress </td> </tr>
</table>

### 7.1. Structure of the interim and final reports

The periodic report must be submitted by the coordinator within 60 days after the end of each reporting period. The periodic reports contain both the technical and the financial report. A template is available in the project repository and has been prepared following the model available at: _http://ec.europa.eu/research/participants/data/ref/h2020/gm/reporting/h2020-tmpl-periodic-rep_en.pdf_

The interim and final reports are composed of two parts:

**Part A**: an online module comprising a summary and answers to a project implementation questionnaire, covering issues related to the project implementation and its economic and social impact, in the context of the H2020 performance and monitoring requirements.

**Part B**: the narrative part, containing explanations of the work carried out by the beneficiaries during the reporting period. It contains the following sections:

1. Progress overview and explanation of the work carried out by the beneficiaries (objectives, explanation by WP, impact, …)
2. Dissemination and exploitation plan and the results achieved
3. Follow-up of recommendations and comments from previous reviews
4. Deviations from Annex I (covering tasks, use of resources and unforeseen subcontracting)

### 7.2. The financial reports (Form C)

The template of the financial reports is included in Annex I. Every six months, the PC will request from each partner the financial report for internal control of the costs against the planned execution and timing. Each partner will be requested to upload the official financial report to the Participant Portal. Within each organization, the LEAR should share this information with the financial department and the authorized financial and legal signatories. The costs are to be described as indicated in the following table.

## Table 8. Budget chapter costs

<table>
<tr> <th> Personnel costs </th> <th> Person-months should be reported (decimals allowed) for each WP (e.g. Luca Rossi, researcher, effort on WP5 = 1.2 PM) </th> </tr>
<tr> <td> Subcontracting </td> <td> Must be specified in Annex I or approved by an amendment or by written consent of the PO. It should include the depreciation methods (if required) and the cost. </td> </tr>
<tr> <td> Other direct costs </td> <td> Explanation of the major cost items; if the amount exceeds 15% of personnel costs, a short description is required </td> </tr>
</table>

Costs not foreseen in Annex I must be justified and described in detail; Table 8 shows how to describe each budget chapter. Once the PC receives the financial reports, he and the PM will make a first check of the costs declared by each partner and verify their connection with the reported activity and the supporting explanations. The narrative and financial report is collective, so any faulty report may generate delays interfering with project execution
(e.g. a request from the Officers for further explanation of one partner’s use of resources will affect the overall evaluation, the approval of the submission and the time-to-payment of the interim reimbursement).

Table 9 details the HERCULES participant contacts.

Table 9. HERCULES Participant Contacts

<table>
<tr> <th> Partner </th> <th> Role </th> <th> Full Name </th> <th> Contact </th> </tr>
<tr> <td> UNIMORE </td> <td> Primary Coordinator Contact </td> <td> Marko BERTOGNA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Team Member </td> <td> Barbara REBECCHI </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Financial Signatory </td> <td> Andrea SACCHETTI </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Legal Signatory </td> <td> Angelo Oreste ANDRISANO </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Coordinator Contact </td> <td> Giulia SCATASTA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Coordinator Contact </td> <td> Francesco GUARALDI </td> <td> [email protected] </td> </tr>
<tr> <td> CTU </td> <td> Project Legal Signatory </td> <td> Petr KONVALINKA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Financial Signatory </td> <td> Milan POLIVKA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Financial Signatory </td> <td> Pavel RIPKA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Zdenek HANZALEK </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Michal SOJKA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Benny ÅKESSON </td> <td> [email protected] </td> </tr>
<tr> <td> ETH Zurich </td> <td> Project Legal Signatory </td> <td> Sofia KARAKOSTAS </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Legal Signatory </td> <td> Agatha KELLER </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Financial Signatory </td> <td> Pasquale NIGRO </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Andrea MARONGIU </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Luca BENINI </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Finanzabteilung ETHZ </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Agatha KELLER </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Angela CAVAZZINI </td> <td> [email protected] </td> </tr>
<tr> <td> EVI </td> <td> Project Legal Signatory / Project Financial Signatory </td> <td> Paolo GAI </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Claudio SCORDINO </td> <td> [email protected] </td> </tr>
<tr> <td> PIT </td> <td> Project Legal Signatory / Project Financial Signatory / Participant Contact </td> <td> Roberto MATI </td> <td> [email protected] </td> </tr>
<tr> <td> AGI </td> <td> Project Financial Signatory </td> <td> Peter LILISCHKIS </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Legal Signatory </td> <td> Dieter HOFMANN </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Klaus SCHERTLER </td> <td> [email protected] </td> </tr>
<tr> <td> MM </td>
<td> Project Legal Signatory </td> <td> Giuseppe ROSSO </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Project Legal Signatory </td> <td> Francesco VECCHIA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Daniela KERN </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Agnieszka FURMAN </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Giulio MERCANDO </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Valerio GIORGETTA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Gaetano FIACCOLA </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Participant Contact </td> <td> Fulvio TAGLIABO </td> <td> [email protected] </td> </tr>
</table>

Concerning the modification of the roles above: a Participant Contact (PACO) can be added for each organization by any of the organization’s contacts during the project, without any official notification. However, the Project Legal Signatory (PLSIGN) and the Project Financial Signatory (PFSIGN) can only be updated by the organization’s LEAR. As for the contractual reports (interim and final), the role of the PFSIGN is crucial, as they are in charge of submitting the financial statements.

### 8\. ATTACHMENT I – Gantt Chart

### 9\. ATTACHMENT II – List of Work Packages

### 10\. ATTACHMENT III – List of Deliverables

### 11\. ATTACHMENT IV – List of Milestones

<table>
<tr> <th> **Milestone number** </th> <th> **Milestone name** </th> <th> **Related work package(s)** </th> <th> **Estimated date (month)** </th> <th> **Means of verification** </th> </tr>
<tr> <td> MS1 </td> <td> Requirement specification complete </td> <td> WP1-WP3 </td> <td> 6 </td> <td> * Complete specification requirements and characterization of the addressed applications are provided ( **WP1** ); * target hardware architectures have been identified ( **WP2** ); * the target programming model to adopt in the project has been identified ( **WP3** ). </td> </tr>
<tr> <td> IM2 </td> <td> Internal milestone with intermediate releases </td> <td> WP1-WP4 </td> <td> 18 </td> <td> First (alpha) versions of key software blocks are provided, namely: * the code of the parallelized applications (from both the automotive and avionics domains), written on top of the programming model identified at **MS1** ( **WP1** ); * a working prototype of the lightweight runtime support for predictability in the host+accelerator programming environment for the adopted HW platforms ( **WP3** ); * a working prototype of the Linux + sched_deadline patch to run on the big.LITTLE host cores of the adopted HW platforms ( **WP4** ); * a working prototype of the lightweight RTOS (ERIKA Enterprise) for the DSP and/or many-core accelerator of the adopted HW platforms.
</td> </tr>
<tr> <td> MS3 </td> <td> Integrated HERCULES hardware/software stack and toolchain </td> <td> WP1-WP5 </td> <td> 30 </td> <td> A final integrated version of the HERCULES toolchain and software stack featuring: * the code of the parallelized applications on the selected HW platforms ( **WP1** ); * the final version of the host-side RTOS ( **WP4-5** ); * the final version of the RTOS (ERIKA Enterprise) for the DSP-based platforms ( **WP4-5** ); * the final version of the lightweight runtimes for GPU-based platforms ( **WP3** ); * the final version of the offloading runtime for host-to-accelerator lightweight communication ( **WP3** ); * the integrated schedulability analysis ( **WP5** ). </td> </tr>
<tr> <td> MS4 </td> <td> Validation of the HERCULES approach </td> <td> WP1-WP5 </td> <td> 36 </td> <td> Final benchmarking and evaluation of the integrated toolchain, for both the automotive and avionics domain platforms and applications. </td> </tr>
</table>

### 12\. ATTACHMENT V – Overall Budget

<table>
<tr> <th> </th> <th> </th> <th> **UNIMORE** </th> <th> **CTU** </th> <th> **ETHZ** </th> <th> **EVI** </th> <th> **PIT** </th> <th> **AGI** </th> <th> **MM** </th> <th> **TOT** </th> </tr>
<tr> <td> Personnel </td> <td> </td> <td> 346500 </td> <td> 239400 </td> <td> 530400 </td> <td> 374000 </td> <td> 178982,6 </td> <td> 239400 </td> <td> 319000 </td> <td> **2227682,6** </td> </tr>
<tr> <td> Subcontracting </td> <td> </td> <td> 58715,22 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> **58715,22** </td> </tr>
<tr> <td> Other direct costs </td> <td> Travel </td> <td> 55500 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> **55500** </td> </tr>
<tr> <td> </td> <td> Consumables </td> <td> 6000 </td> <td> 36500 </td> <td> 36500 </td> <td> 46000 </td> <td> 28000 </td> <td> 38250 </td> <td> 57750 </td> <td> **249000** </td> </tr>
<tr> <td> </td> <td> Durable equipment </td> <td> 30000 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> **30000** </td> </tr>
<tr> <td> Indirect costs </td> <td> </td> <td> 109500 </td> <td> 68975 </td> <td> 141725 </td> <td> 105000 </td> <td> 51745,65 </td> <td> 69412,5 </td> <td> 94187,5 </td> <td> **640545,65** </td> </tr>
<tr> <td> **OVERALL BUDGET** </td> <td> </td> <td> **606215,22** </td> <td> **344875** </td> <td> **708625** </td> <td> **525000** </td> <td> **258728,25** </td> <td> **347062,5** </td> <td> **470937,5** </td> <td> **3261443,47** </td> </tr>
<tr> <td> **TOTAL PROJECT COST** </td> <td> </td> <td> 606215,22 </td> <td> 344875 </td> <td> 708625 </td> <td> 525000 </td> <td> 258728,25 </td> <td> 347062,5 </td> <td> 470937,5 </td> <td> **3261443,47** </td> </tr>
<tr> <td> **TOTAL FUNDING REQUIRED** </td> <td> </td> <td> 606215,22 </td> <td> 344875 </td> <td> </td> <td> 367500 </td> <td> 181109,775 </td> <td> 242943,75 </td> <td> 329656,25 </td> <td> **2072300** </td> </tr>
</table>
0229_VIMMP_760907.md
# DATA MANAGEMENT PLAN

<table> <tr> <th> **1** </th> <th> **DATA SUMMARY** </th> </tr> </table>

The purpose of data collection and generation for this project is to validate results given in scientific publications about the virtual marketplace, its underlying technologies (materials modelling metadata schema, ontology, etc.) and non-confidential simulation results produced in the framework of WP5 (“_End user cases: requirements, validation and demonstration_”).

The main data types originating from the end-user case studies generated during the project will be computational workflows, simulation inputs, trajectories, post-processed simulation data and associated documents. Workflows, simulation parameters, simulation trajectories and post-processed outputs will originate from molecular dynamics (MD), dissipative particle dynamics (DPD) or computational fluid dynamics (CFD) approaches using a variety of computational codes. Documents include journal articles and technical reports produced during the project that provide information on setting up the simulations used to determine material properties. Both of these data types will have descriptive metadata attached to them.

No previously generated simulation data will be re-used for the project. Associated documents will be used to produce inputs for simulations: these can be determined either manually or automatically using machine-learning techniques. The simulation data will be generated from series of simulations produced by a number of software packages (e.g. DL_POLY, DL_MESO, LAMMPS, OpenFOAM) originating from example industrial use cases for the marketplace. Associated documents will be acquired from a variety of sources, including openly available repositories, and used as part of a knowledge base to devise the required inputs for simulations.

Each simulation dataset is likely to be very large (up to hundreds of gigabytes), but only a limited number of these will need to be stored to enable the reproducibility of scientific publications. Each associated document is likely to be fairly small (a few megabytes). Documentation and software tools required to access and re-use the simulation data will also be deposited. The software tools will include metadata readers and analysis programs to obtain material properties from simulation data and to generate inputs for additional post-processing steps. The data will be of use to researchers intending to reproduce and verify simulation results, as well as being a demonstration of the marketplace’s capabilities and flexibility.

<table> <tr> <th> **2** </th> <th> **FAIR DATA** </th> </tr> </table>

<table> <tr> <th> 2.1 </th> <th> MAKING DATA FINDABLE </th> </tr> </table>

Simulation data will either be produced directly with discoverable metadata (by means of APIs for the various simulation codes) or have the metadata attached to it using a post-processing wrapper. For data generated for storage and curation, a Digital Object Identifier (DOI) can be generated and included in the attached metadata. Documents associated with simulations will have metadata attached using a wrapper program before being included in openly accessible repositories. For research articles and other technical documents that are available online, DOIs are typically already supplied and can be included in the attached metadata.
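As a minimal sketch of such a post-processing wrapper (the file names, metadata fields and DOI below are illustrative assumptions; the real schema will follow the EMMO/MODA-based standard described below), descriptive metadata could be attached as a JSON sidecar next to a simulation output:

```python
# Sketch of a post-processing wrapper attaching descriptive metadata to a
# simulation output file as a JSON sidecar. All names/values are placeholders.
import json
from datetime import datetime, timezone
from pathlib import Path

def attach_metadata(data_file: str, metadata: dict) -> Path:
    """Write <data_file>.meta.json next to the simulation output."""
    sidecar = Path(data_file + ".meta.json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

attach_metadata("dpd_run.trj", {
    "doi": "10.5281/zenodo.0000000",          # placeholder DOI
    "code": "DL_MESO",
    "method": "DPD",
    "created": datetime.now(timezone.utc).isoformat(),
    "keywords": ["dissipative particle dynamics", "viscosity"],
})
```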
A naming convention for simulation data files and their associated data (articles and reports) will be devised; it will include a date and time stamp to help identify particular simulations, and this information will also be included in the searchable metadata associated with the files. The metadata for simulations and associated data will include searchable keywords that allow simulations to be identified by e.g. method or material. Re-use of at least sections of previous simulation workflows will be optimized if they can be searched for as part of the translation process for new problems. The metadata used for simulation data will identify the kind of software used to generate the data, as well as the specific codes and version numbers.

A metadata standard will be created in the course of the project, in collaboration with ongoing EMMC activities, to uniquely identify the user case, the generic physics and the specific computational details addressed in the simulations, based on the European Materials Modelling Ontology (EMMO). At least some of the taxonomies required for the marketplace can already be identified along the lines of MODA (e.g. modelling entities, physical equations), while others will need to be devised to provide details of the required computational solvers and software. Wide agreement on this metadata standard, endorsed by many practitioners in the field of computational modelling, will greatly aid the findability and accessibility of data produced in this project.

<table> <tr> <th> 2.2 </th> <th> MAKING DATA OPENLY ACCESSIBLE </th> </tr> </table>

Comparatively little data generated during the lifetime of the project will be made openly available by default, as some simulation results may have potential for commercial or industrial protection. The data to be stored for open access will be selected to ensure that no intellectual property rights belonging to any project partner are infringed while still allowing results to be validated. Data to be made openly accessible will be prepared for deposition and notice given to all project partners, who will have 30 days to veto the data release (either in its entirety or in part) and, if possible, suggest remedies to ensure that their interests in the results or background are safeguarded. A minimum embargo period of 45 days prior to release will be applied; the eventual release date for the data will be supplied in its associated metadata. This process is in accordance with Article 29, Paragraph 29.1 of the VIMMP Grant Agreement.

In cases where documents specifying simulation input information are commercially sensitive, access to the documents may be restricted, and links to them in the metadata supplied with simulation output data may be removed to allow the latter to become openly accessible. (Reverse-engineering simulation trajectories to obtain input parameters is considered unfeasible for material systems of the complexity considered in the industrial use cases.)

The selected data, with its associated metadata, documentation and software tools for access, will be deposited in an openly accessible repository for long-term storage and curation. The data will be supplied in a file format based on the Allotrope Data Format; software tools will be developed during the project to allow the data to be accessed and metadata to be attached and read. Documentation on the proposed data format and the software tools required to access the contained data will be supplied in the repository.
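Since the Allotrope Data Format builds on HDF5, a sketch of what the envisaged access tools might look like can use h5py to attach and read metadata as HDF5 attributes. The file layout, dataset path and attribute names here are assumptions for illustration, not the project's final specification:

```python
# Sketch: attaching and reading metadata as HDF5 attributes, in the spirit of
# an Allotrope-Data-Format-style container. Layout and names are illustrative.
import h5py
import numpy as np

with h5py.File("md_run.adf.h5", "w") as f:
    # Intermediate groups ("trajectory") are created automatically
    traj = f.create_dataset("trajectory/positions",
                            data=np.zeros((100, 3)))  # placeholder data
    traj.attrs["code"] = "LAMMPS"
    traj.attrs["method"] = "MD"
    traj.attrs["doi"] = "10.5281/zenodo.0000000"      # placeholder DOI

with h5py.File("md_run.adf.h5", "r") as f:
    meta = dict(f["trajectory/positions"].attrs)
    print(meta["code"], meta["method"])
```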
The software tools for accessing deposited data and attaching and reading metadata will be included in the repository as open-source code. The openly accessible repository for the data, metadata, documentation and code has yet to be decided upon, but three options are under consideration: (1) Zenodo, (2) EUDAT, and (3) a section of the data and model store of the VIMMP marketplace platform itself (due to be set up by the end of the project). Each option has benefits and drawbacks; e.g. Zenodo permits free deposition of EC-funded research for datasets of up to 50 GB, which may not be enough for some datasets. Association of the selected repository with OpenAIRE is desirable for enabling access to data. No restrictions on the use of deposited data will be applied.

<table> <tr> <th> 2.3 </th> <th> MAKING DATA INTEROPERABLE </th> </tr> </table>

The simulation data produced in the project will be interoperable by design: standardized open data formats will be created to simplify the exchange of data between different codes and modelling methodologies. The metadata vocabularies created for the project and used for deposited data will be devised along the lines of the MODA framework (see above). Extensions required to fulfil the needs of the marketplace will be put to the wider materials modelling community for consultation and contributed back to MODA for proposed inclusion in future versions.

<table> <tr> <th> 2.4 </th> <th> INCREASE DATA RE-USE </th> </tr> </table>

Data to be made widely available will be licensed under an open-source licence (to be decided). Selection of the data to avoid IP conflicts should allow its perpetual re-use by third parties after its embargo period of at least 45 days from notice of intended release. The data selected for release will be usable by third parties, both during the lifetime of the project and afterwards on a permanent basis. Mechanisms to ensure the integrity of stored data will be implemented: these will either be based upon existing practices, in the case of available open research data repositories, or devised along similar lines for the marketplace platform data store. Analytics, cognitive methods and fast uncertainty quantification techniques will be developed during the lifetime of the project to supply quality measures for simulations.

<table> <tr> <th> **3** </th> <th> **ALLOCATION OF RESOURCES** </th> </tr> </table>

The costs of making data FAIR include those for the acquisition of existing data (i.e. documents with information on simulation inputs) and the manual application of descriptive metadata, data back-up, storage and security. Over the lifetime of the project, these are estimated at approximately €30,000, two-thirds of which is likely to be used for applying descriptive metadata to acquired documents. Some savings may be achieved by using established data repositories (e.g. Zenodo, EUDAT) that offer free deposition of research data. €67,000 was requested in the project bid to dedicate a section of a machine at the STFC Hartree Centre as the computing architecture for the marketplace (including related support costs); if this machine hosts the marketplace platform data store, the infrastructural costs of making data FAIR will already be covered. Each individual project partner will be responsible for the management of any data they create and/or annotate with metadata. The resources required for long-term preservation (beyond the end of the project) have yet to be discussed, but may be along similar lines to those for the project itself.
<table> <tr> <th> **4** </th> <th> **DATA SECURITY** </th> </tr> </table>

As a long-term solution, data will be archived in the marketplace platform, which will securely store, manage and curate the data, as well as preserve it after project completion. Data that is commercially sensitive for any project partner will not be made openly available, and secure methods to transfer data to or from the platform (e.g. secure FTP) will be used.

<table> <tr> <th> **5** </th> <th> **ETHICAL ASPECTS** </th> </tr> </table>

No ethical issues are expected in sharing the data in general. In cases where personal data may be involved in publications (i.e. publications based on personalized surveys or questionnaires), the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) will be followed by the consortium, as described in the related deliverables D9.1 (POPD) and D9.2 (NEC).

_This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 760907._

_This document and all information contained herein is the sole property of the VIMMP Consortium. It may contain information subject to intellectual property rights. No intellectual property rights are granted by the delivery of this document or the disclosure of its content. Reproduction or circulation of this document to any third party is prohibited without the consent of the author(s). The statements made herein do not necessarily have the consent or agreement of the VIMMP consortium and represent the opinion and findings of the author(s). All rights reserved._
0233_DyVirt_764547.md
## Introduction to the DyVirt project

The research carried out through this network will go beyond the now-ubiquitous process of creating computer-based simulation models of structural dynamics. Obtaining a valuable virtual model is no longer a question of computing power; it now rests on the more difficult problem of developing trust in the model through the process of verification and validation (V&V). In the DyVirt research programme, a ‘virtualisation’ will be a computer model validated to previously unreachable levels of trust in its capability. Through the invention of new V&V techniques and the development of new understanding of how substructures and components assemble into full structures, it will be possible, for the first time, to trust computer models for design-cycle decision-making in the absence of more complete test data from prototypes or full-scale testing. This research will substantially extend the reach of current technologies, which can only guarantee trust in models in more restricted circumstances.

For large engineering structures that operate in (often extreme) dynamic environments, such as wind turbines, aircraft, gas turbines and bridges, the process of virtualisation presents particularly difficult challenges. This is because the dynamic behaviour during operation needs to be fully captured by the computational model but is highly sensitive to very small changes in (or disturbances to) the structure. For example, small differences in manufacturing tolerances, mechanical joints, or the operating environment (temperature, humidity) can all lead to apparently large changes in dynamic performance. Although the virtualisation process has been attempted in some domains, it has never been properly applied to engineering applications operating in highly dynamic environments; our research aim is therefore to create a dynamic virtualisation capability that will, for the first time, address this limitation. The methodology for DyVirt will be to focus on structural dynamics and to bring in new knowledge from the fields of sensor technology, data mining, decision theory, machine learning, optimisation, signal processing, statistics, aerodynamics, fracture mechanics, materials science and computational mechanics.

## Overview of data management

Virtualisation requires extensive use of datasets for V&V procedures. Our research will include data collected from material, component, substructure and system tests, optimally collected at different levels of assembly using strategically placed sensors/actuators and/or optimal excitation features. Some of these datasets are subject to confidentiality where they include data from industry. Records of experiments and outputs of simulations and models will be collected, including (but not limited to) graphical software package files, other analysis files, reports, analyses and communications, including emails. These will be produced by the universities and, in some cases, also by the industrial partners. The data will be collected or created by secure transfer, by agreement between the university and industrial partners. Our data management plan is to collect and preserve well-documented and well-organised datasets from all work packages in the network. Within WP1, a centralised, shared knowledge base will be created and linked to raw data, metrics, models, methods, requirements and model specifications.
To achieve this, a data repository specifically dedicated to structural dynamics and virtualisation will be established using the USFD Library resources, which already have the required e-infrastructures in place. It will be informed by developers and end-users and will exploit intranet design-sharing protocols in order to achieve a secure and reliable data management plan. Ontologies will be explored as a semantic-web approach for encoding large, complex and heterogeneous domain knowledge. The idea is to develop a novel V&V ontology using the Web Ontology Language (OWL) and to achieve knowledge storage based on WWW protocols and the Resource Description Framework (RDF). Confidentiality/security of data will need to be assured with appropriate agreements (NDAs, sharing access, etc.).

We note that in earthquake engineering, ground-motion records have been publicly available for many years; we will use our website as a portal to give researchers access to these and other relevant sources of data. Where possible, and as appropriate, we will make this data available for sharing via OpenAIRE. We will also include “open code” to share algorithms and computer code developed as part of the project.

# FAIR DATA

_3.1 Making data findable, including provisions for metadata:_

Metadata refers to “data about data”, i.e. the information that describes the data being published, with sufficient context or instructions to be intelligible to other users. Metadata must allow proper organization of, searching for and access to the generated information, and can be used to identify and locate the data via a web browser or web-based catalogue. For reports and communications, the documentation itself will be produced such that its purpose is explained within the document. For graphical software package and analysis files, the metadata will be included in all available metadata tags provided by the software package used or developed. In the context of data management, metadata will form a subset of data documentation that explains the purpose, origin, description, time reference, creator, access conditions and terms of use of a data collection.

The metadata that best describes the data depends on the nature of the data. For research data generated in DyVirt, the metadata will be based on a generalised metadata schema, which includes elements such as:

* Title: free text
* Creator: last name, first name
* Date:
* Contributor: information on the EU funding and the DyVirt project itself; mainly the terms “European Union (EU)” and “Horizon 2020”, as well as the name of the action, the acronym and the grant number
* Subject: choice of keywords and classifications
* Description: text explaining the content of the data set and other contextual information needed for its correct interpretation
* Format: details of the file format
* Resource type: data set, image, audio, etc.
* Identifier: DOI
* Access rights: closed access, embargoed access, restricted access, open access

Additionally, a readme.txt file could be used as an established way of accounting for all the files and folders comprising the project and explaining how the files that make up the data set relate to each other, what format they are in, and whether particular files are intended to replace other files. A minimal example of such a record is sketched below.
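The sketch below shows one such record, serialized as JSON, following the generalised schema above. All field values are illustrative placeholders; the project's final schema may differ in detail:

```python
# Sketch: one DyVirt-style metadata record following the generalised schema
# listed above. All field values are illustrative placeholders.
import json

record = {
    "title": "Modal test of a wind turbine blade substructure",
    "creator": "Doe, Jane",
    "date": "2019-06-01",
    "contributor": "European Union (EU), Horizon 2020, DyVirt, grant 764547",
    "subject": ["structural dynamics", "verification and validation"],
    "description": "Acceleration time series from a shaker test.",
    "format": "text/csv",
    "resource_type": "data set",
    "identifier": "10.5281/zenodo.0000000",   # placeholder DOI
    "access_rights": "open access",
}

print(json.dumps(record, indent=2))
```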
_3.2 Making data openly accessible:_

The H2020 open access policy aims to ensure that the information generated by the projects participating in the programme is made publicly available. However, as stated in the EC guidelines on Data Management in H2020: "As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex I, would be jeopardised by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access."

In line with this, the DyVirt consortium will follow the strategy in Figure 1 to decide which information is made public, according to aspects such as potential conflicts with commercialisation, IPR protection of the knowledge generated (by patents or other forms of protection), or risks to achieving the project objectives/outcomes. Restrictions on data sharing may be required where experimental data or datasets are produced by, or in collaboration with, the Industrial Partners.

**Fig 1.** Process for determining which information is to be made public (from the EC's document "Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 – v1.0 – 11 December 2013")

Institutional repositories will be used by the project consortium to make the project results (i.e. publications and scientific data) publicly available and free of charge for any user. Accordingly, several options are considered/suggested by the EC in the frame of the Horizon 2020 programme to this aim:

**For depositing scientific publications:**

* Institutional repository of the research institutions (e.g. Open Access at The University of Sheffield uses the White Rose repository https://eprints.whiterose.ac.uk/)
* Subject-based/thematic repository
* Centralised repository (e.g. ZENODO)

**For depositing generated research data:**

* A research data repository which allows third parties to access, mine, exploit, reproduce and disseminate free of charge (e.g. ORDA at The University of Sheffield https://www.sheffield.ac.uk/library/rdm/orda)
* Centralised repository (e.g. ZENODO)

The academic institutions participating in DyVirt have appropriate repositories available. Data will be stored during the research according to the best practice of the host university of the researchers involved. Outputs will be stored in the form of analysis files or incorporated into reports, and shared with the Academic and Industrial Partners by saving them on a secure project data repository. Access to primary data will be managed according to the host university of the researchers involved. Access to data shared under the Open Data Policy will be managed according to the recommended best practice and capabilities of the repository of the host university producing the data. Data will be shared amongst the University and Industrial Partners by means of the project Google site and, for larger datasets, via the data repository of the host university producing the data. Data selected for long-term preservation and sharing will be stored on centrally provisioned University of Sheffield virtual servers and research storage infrastructure (https://www.sheffield.ac.uk/cics/research) for at least ten years. Records of these data will be published in ORDA, a registry of research data produced at the University of Sheffield. The ESRs will have the opportunity to publish results arising from their individual research projects (IRPs) as publications in the highest-ranking journals in this subject area (e.g.
Journal of Sound and Vibration; Mechanical Systems and Signal Processing; Structural Control & Health Monitoring; Journal of Wind Energy; Renewable Energy; IEEE/ASME/ASCE journals). The majority of these journals allow an open access modality, and the author's post-print version can be deposited in a repository. This is in line with the Horizon 2020 requirements.

The DyVirt project website will act as a portal for (i) an ESR and supervisor intranet that will enable sharing of information and project documentation, (ii) the data management platform for the project, and (iii) open access publications from the project, via a link to the OpenAIRE portal. Some of the academic institutions participating in DyVirt have appropriate repositories for scientific publications which are linked to OpenAIRE. Institutions/organisations that do not have appropriate repositories, or whose repositories do not link to OpenAIRE, will ensure that publications are deposited directly in OpenAIRE (https://www.openaire.eu):

# The University of Sheffield

Type: Publication repository
Website URL: _https://www.sheffield.ac.uk/library/openaccess_

# The University of Liverpool

Type: Publication repository
Website URL: https://livrepository.liverpool.ac.uk/

# Eidgenoessische Technische Hochschule Zurich

Type: Publication repository
Website URL: https://www.research-collection.ethz.ch/

# Gottfried Wilhelm Leibniz Universitaet Hannover

Type: Publication repository
Website URL: https://www.repo.uni-hannover.de

# Liege Universite

Type: Publication repository
Website URL: https://orbi.uliege.be/

# Panepistimio Thessalias

Type: Publication repository
Website URL: _http://ir.lib.uth.gr_

# Akademia Gorniczo-Hutnicza IM. Stanislawa Staszica W Krakowie

Type: Publication repository
Website URL: _http://www.bg.agh.edu.pl/en/node/1376_

Apart from these repositories, the DyVirt project will also use the centralised repository ZENODO to ensure the maximum dissemination of the information generated in the project (research publications and data), as this repository is the one mainly recommended by the EC's OpenAIRE initiative in order to unite all the research results arising from EC-funded projects. ZENODO is an easy-to-use and innovative service that enables researchers, EU projects and research institutions to share and showcase multidisciplinary research results (data and publications) that are not part of existing institutional or subject-based repositories. ZENODO enables users to:

* Easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science
* Display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the European Commission
* Easily access and reuse shared research results
* Define the different licenses and access levels that will be provided

ZENODO also assigns a Digital Object Identifier (DOI) to all publicly available uploads, in order to make content easily and uniquely citable, and the repository makes use of the OAI-PMH protocol (Open Archives Initiative Protocol for Metadata Harvesting) to facilitate content search through defined metadata, with a metadata schema according to the OpenAIRE Guidelines.
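In practice, deposits of this kind can also be scripted against ZENODO's documented REST deposit API. The sketch below shows the basic workflow (create a deposition, upload a file, attach metadata, publish); the access token, file name and metadata values are placeholders, and the endpoint usage follows ZENODO's developer documentation at the time of writing.

```python
# A minimal sketch of depositing a dataset on ZENODO via its REST API;
# TOKEN, the file name and all metadata values are illustrative placeholders.
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"

# 1) Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2) Upload a data file into the deposition's file bucket.
with open("dyvirt_dataset.zip", "rb") as fp:
    requests.put(f"{deposition['links']['bucket']}/dyvirt_dataset.zip",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3) Attach descriptive metadata.
metadata = {"metadata": {
    "title": "DyVirt example dataset",
    "upload_type": "dataset",
    "description": "Vibration test data collected for V&V purposes.",
    "creators": [{"name": "Surname, First name", "affiliation": "USFD"}],
}}
requests.put(f"{ZENODO_API}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4) Publish the record; ZENODO then mints a DOI for it.
requests.post(f"{ZENODO_API}/{deposition['id']}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```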
The short- and long-term storage of research data in ZENODO is secure, and ZENODO uses digital preservation strategies to store multiple online replicas and to back up the files (data files and metadata are backed up on a nightly basis). This therefore fulfils the requirements of the EC for data sharing, archiving and preservation of the data generated in DyVirt.

_3.3 Making data interoperable:_

Records of datasets will be published in _ORDA_, the University of Sheffield's registry of research data produced at the University, which will issue DataCite DOIs for registered datasets and promote discovery.

_3.4 Increase data re-use (through clarifying licenses):_

Apart from the earthquake ground motion records mentioned above, there has until now been only limited publicly available data on the dynamics of infrastructure. We anticipate that the DyVirt data portal will radically improve the digital science activity in this field, specifically by encouraging more open collaborations amongst researchers, giving SMEs and other industrial organisations wider access to research data, and allowing policy makers and governmental bodies more access to well-documented and archived data in this important area of policy.

The University of Sheffield's Good Research and Innovation Practice (GRIP) Policy follows UKRI principles for data sharing (_https://www.ukri.org/funding/information-for-awardholders/data-policy/_). All DyVirt team members will ensure that a data access statement is put in the Acknowledgements section of all their research publications. Examples of data access statements:

"All data created during this research are openly available from the University of X data archive at http://XXXXXXX"

"All data supporting this work are provided as supplementary information accompanying this paper."

"All data are provided in full in the results section of this paper."

"This publication is supported by multiple datasets, which are openly available at locations cited in the references"

"No new data were created during this study"

"This study was a re-analysis of existing data that are publicly available from http://XXXXXXX"

Commercial restrictions:

"Supporting data will be available from the University of X data archive at http://XXXX after a Y month embargo from the date of publication to allow for commercialisation of research findings"

"Due to confidentiality agreements with research collaborators, supporting data can only be made available to researchers subject to a non-disclosure agreement. Details of the data and how to request access are available at the University of X data archive: _http://XXXXX_."

The use of journals that allow supplementary files to be uploaded with the publication is also encouraged, in order to link the published data files more directly to the publication itself. Data should be retained, shared and/or preserved where they may be used to reproduce conclusions from publications arising from the research programme, where they may provide additional information, or where they may be considered useful. The long-term preservation plan for a dataset is for it to be preserved for as long as it is considered useful and economic within the data policy of the host institution that originally produced the data.

# ALLOCATION OF RESOURCES

The resources required to deliver the plan are available within the budget of the DyVirt project, in conjunction with the facilities provided by the project Partners.
The University of Sheffield research data storage facility allocates 10TB of storage free to research groups during the lifetime of a project. If a larger quota is required, this will involve charges. Long-term archiving of data may also involve charges. ORDA (https://www.sheffield.ac.uk/library/rdm/orda), the University of Sheffield research data repository for managing and sharing research data, is free to use.

Contacts for project data:
Project Manager – Dr Victoria Hand (University of Sheffield)
Project Administrator – Grace Stokes (University of Sheffield)

# DATA SECURITY

Data and definitive project documentation will be stored on centrally provisioned University of Sheffield virtual servers and research data storage infrastructure throughout the lifetime of the project. Both Windows and Linux virtual servers with up to 10TB of storage are made available to research projects. Access control is by authorised University computer account username and password. Off-site access is facilitated by a secure VPN connection authenticated by University username and remote password. By default, two copies of data are kept across two physical plant rooms, with 28-day snapshots of the data backed up securely offsite at least daily. This service is maintained by the University's Corporate Information and Computing Services. Google Drive is used for more flexible collaborative working, but only where non-personal, non-sensitive information is involved. Where Google Drive is used, copies of complete and definitive documents will be transferred to the main project repository on the University research storage infrastructure.

# ETHICAL ASPECTS

Copyright and IPR issues shall be managed as noted in the Project Collaboration Agreement. The Ethics Committee will be asked to oversee the data management process to ensure that data are collected and preserved in accordance with all ethical considerations, including any potential confidentiality and copyright issues.

# OTHER

**Data Management Policy & Procedures; Data Security Policies and Data Sharing Policies.**

**University of Sheffield:**
**Data Management Policy & Procedures:** _https://www.sheffield.ac.uk/govern/dataprotection_
**Data Security Policies:** _https://www.sheffield.ac.uk/cics/policies/infosec_
**Data Sharing Policy:** _https://www.sheffield.ac.uk/library/rdm/expectations_

**University of Hannover:**
_https://www.uni-hannover.de/en/datenschutzerklaerung/_

**University of Liverpool:**
**Data Management Policy & Procedures:** _https://www.liverpool.ac.uk/library/research-data-management/_ **AND** _https://www.liverpool.ac.uk/media/livacuk/computingservices/research-data-management/researchdatamanagementpolicy.pdf_
**Data Security Policies:** _https://www.liverpool.ac.uk/csd/security/informationsecurity/_ **AND** _http://www.liverpool.ac.uk/media/livacuk/computingservices/regulations/informationsecuritypolicy.pdf_
**Data Sharing Policy:** _https://www.liverpool.ac.uk/media/livacuk/computingservices/research-data-management/researchdatamanagementpolicy.pdf_

**Eidgenoessische Technische Hochschule Zurich:**
**Data Management Policy & Procedures:** _https://www.library.ethz.ch/en/ms/Digital-Curation-at-ETH-Zurich/Research-data/Research-data-management_
**Data Security Policies:** _https://www.ethz.ch/en/footer/data-protection.html_
**Data Sharing Policy:** _https://www.library.ethz.ch/en/ms/Digital-Curation-at-ETH-Zurich/Research-data/Publishing-research-data_
0234_ColoFast_767419.md
# 1 Introduction

### 1.1 Audience

This document is addressed to the general society, Commission services and subcontracted parties.

### 1.2 Definitions / Glossary

**Metadata**: Information about datasets stored in a repository/database template; all the different elements that define the dataset (what, where, when, why and how the data were collected, processed and interpreted, the length of the data, the resolution of the pictures, the units included in each table or picture, the author, the date of elaboration, etc.) are considered metadata. Metadata also usually includes descriptions of how data and files are named, physically structured and stored, as well as details about the experiments, analytical methods and research context.

**Dataset**: Digital information created in the course of the project which is not published. Administrative records are excluded. The most relevant research data are those related to a research output. Publications, articles, lectures and presentations are not included either.

**Data Management Plan**: Detailed document which summarises how datasets will be managed during the active research phase as well as once the project is completed.

**Secondary data**: Sources that contain commentary on or a discussion about a primary source.

**Repository**: Generally speaking, the mechanism by which the digital content of a project is finally managed and stored.

### 1.3 Abbreviations

**DMP**: Data Management Plan

### 1.4 Structure

The document is structured as shown in the following sections.

# 2\. Data Summary

The main purpose of data collection/generation is to have available all the data related to the patients participating in the study. Accordingly, clinical, molecular and biochemistry data will be collected and, once the right time is reached, analyzed. Generated data will be available in different formats. No pre-existing data will be used during the development of the project; all the data involved in achieving the objectives will be collected within the project itself. Clinical data will come from the patients recruited during the clinical study; the principal investigators (PIs) from the hospitals selected for that purpose will be responsible for collecting this information. Biochemical and molecular data will be generated in the central laboratories that have been selected for that purpose.

For the time being, we do not have clear information about the expected size of all the datasets that will be generated; this is due to the fact that a number of repetitions of the experimental assays may be performed, affecting the global size of the data to be collected and stored.

The clinical data provided, as well as the molecular information that will be generated along the project, are strictly necessary to reach the main goal of the project, which is to validate and commercialize a novel, non-invasive, simple-to-use kit (ColoFast™), regulatory approved for the diagnosis of CRC in blood.

## 3\. FAIR Data

### 3.1 Making data findable, including provisions for metadata

A standard identification mechanism has not yet been established for all the metadata that will be generated along the development of the project. In any case, generally speaking, metadata naming rules are based on CDISC SDTM (global standards). A PDF file with CDISC annotations will be shared with relevant study team members. At any moment, any metadata already in use can be copied, re-used and adapted to the current project. Clear version numbers will be provided, e.g. for the versioning of the electronic case report form (eCRF), following the semantic versioning structure (_https://semver.org/_).
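As an illustration of that convention, a semantic version string encodes the release history as MAJOR.MINOR.PATCH. The following minimal sketch shows how such version numbers could be incremented; the mapping of eCRF change types onto version parts is an illustrative assumption, not a project rule.

```python
# A minimal sketch of MAJOR.MINOR.PATCH versioning per https://semver.org/;
# the mapping of eCRF change types onto version parts is illustrative only.

def bump(version: str, part: str) -> str:
    """Increment one part of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":   # e.g. an incompatible change to the eCRF structure
        return f"{major + 1}.0.0"
    if part == "minor":   # e.g. a backwards-compatible addition such as a field
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # e.g. a correction, no new structure

print(bump("1.4.2", "minor"))  # -> 1.5.0
```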
The types of metadata expected to be created are listed below:

* Generated by Metronomia: annotated CRF, Schema Report (database structure in XML format, including table names, variable names and labels, variable types and codelists).
* Molecular analysis data generated in the central analytical labs will be provided in XLS format, or in PDF format upon request.
* In general, data to be stored will be saved in PDF format.
* By contrast, all the intermediate data that will be used for statistical analysis purposes will be managed in XLS files.

### 3.2 Making data openly accessible

Research data collected from patients recruited within WP4 and generated along the project by means of different molecular techniques will not be openly available until the data are published in a scientific journal (always following what is stated in Articles 27, 28 and 29 of the agreement). During research activities, only members that must be in direct contact with the metadata in order to perform the activities described in the memorandum will have access to the data, under the following conditions:

* Regularly:
  * The CRO Project Leader and Amadix staff will have access to relevant protocol data within the EDC system.
  * Principal investigators (PIs): every site will have restricted access to the data collected at that site within the EDC system.
  * External data (e.g. lab data) will be available only to selected staff in Data Management and at sites. They have either edit access (e.g. study sites, Clinical Research Associates (CRAs), Data Management staff) or read-only access (sponsor, auditors).
* Upon request: people involved in data management and analysis (statisticians and bioinformaticians; according to Art. 25 of the agreement).

After the closure of the database, the data are downloaded as PDF files and provided to the sponsor and study sites on DVDs. Notably, once research data reach accepted status in an international journal, the data will then become open access. Results from the statistical analysis are highly confidential and will be shared only in a restricted manner, mostly between the CRO statistical department and the sponsor. The same is valid for unblinded data, which can be handled and seen only by a dedicated unblinded team. Authorization to examine, analyze, verify and reproduce any records and reports that are important to the evaluation of a clinical trial is restricted to the sponsor. Any party (e.g. domestic and foreign regulatory authorities, the sponsor's monitors and auditors) with direct access should take all reasonable precautions, within the constraints of the applicable regulatory requirement(s), to maintain the confidentiality of subjects' identities and the sponsor's proprietary information.

Data accessibility depends on the type of data:

* Access to clinical data from subjects participating in the clinical trial will be restricted to authorized personnel and granted through login credentials (username and password).
  * It is worth mentioning that all patient data will be kept completely anonymous; only at the recruiting location will information about the participants be available, by means of the corresponding informed consent. For the purposes of the project, participants will be identified exclusively by alphanumeric or numeric codes.
* Access to systems containing patient data will be granted only to trained persons, and the process for granting access is strictly defined.
* Other confidential data will be shared via username- and password-protected platforms (e.g. SharePoint).
* Data sent via email: the file will be protected with a password or the email will be encrypted (an illustrative sketch is given after this list). Internally, the data will be protected by user management; only dedicated people can access the data.
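Purely as an illustration of the encrypted file exchange mentioned in the list above, the following sketch uses the Python `cryptography` package; the file names and the separate channel for key exchange are assumptions, not the project's mandated tooling.

```python
# A minimal sketch of encrypting a file before sending it by email; the key
# must travel over a separate channel (e.g. phone). File names are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # share this key with the recipient separately
fernet = Fernet(key)

with open("lab_results.xls", "rb") as f:      # hypothetical attachment
    token = fernet.encrypt(f.read())          # authenticated encryption

with open("lab_results.xls.enc", "wb") as f:  # this file goes into the email
    f.write(token)

# Recipient side: decrypt with the same key.
plain = Fernet(key).decrypt(token)
```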
With regard to the tools or software to be used for viewing or accessing the data:

* For monitoring the presence of clinical data, the CLINCASE software from Metronomia will be used (_https://app.clincase.com/met-amadix1/app_).
* For general clinical trial management (documentation managed by the CRO): https://sis.cro-sss.de/sis
* In addition, some tools will be needed for data viewing, such as: a PDF viewer, an XML viewer, a SAS viewer, and eCRF access via a standard browser.

A data access committee was not considered necessary, since all the sites involved in the RDM plan have already established their own management rules according to the standards. In the event of data access, the identity of the person will be ascertained by means of their signature (username and password).

The final version of metadata and/or processed data will be deposited in an on-cloud infrastructure under the sponsor's responsibility. Clinical data will be provided by Metronomia on one side, and molecular data by the laboratories selected for that purpose (CEA and FOBT sample analyses and miRNA determination) on the other. A general scheme of how the on-cloud system is set up is shown in the following picture:

**Exhibit 1: On-cloud infrastructure**

The main characteristics of the infrastructure are:

* Infrastructure installed in an external facility, outside the premises.
* Physical server that hosts and executes the virtual system.
* A virtual machine (VDI) for each user, which is accessed remotely through the internet.
* File server and domain controller.
* Gextor application server.
* Backup management server.
* Algorithm server.
* Two external hard drives, where the backups of the IT system are stored.

### 3.3 Making data interoperable

Generally speaking, research data will be produced using CDISC standards, which are global standards by definition. This makes pooling of different study data more effective; accordingly, the data produced in the project can be considered interoperable, meaning that data exchange and re-use between researchers, institutions, organisations and countries is possible if the aforementioned access rules are fulfilled. CDISC standards (SDTM, ADaM, etc.) will be applied in order to facilitate the interoperability of research data between members of the project, according to the conditions mentioned in the previous section. No project-specific vocabularies that might hinder the interoperability of data are expected to be generated; consequently, no further information or specific elements will be needed to overcome this issue.

### 3.4 Increase data re-use

All the research data, even at the end of the project, will be of the highest quality and have long-term validity, and access will be guaranteed for 25 years according to regulatory requirements; this responsibility lies with the sponsor. If datasets are updated, the owner of the data is in charge of managing the different versions and of making sure that the latest version is available in the case of publicly available data.
In any case, quality control of the data is the responsibility of the partner generating the data. No limitations have been established for the re-use of the data; access to the data has already been described.

##### 3.4.1 Data quality assurance

In all situations, data quality is assured; information on this issue is described in several documents shared among all project participants: the Standard Operating Procedures, the Data Management Plan, the Data Validation Plan, the eCRF Test Plan, and SOPs coming from the CRO.

## 4\. Allocation of resources

Each COLOFAST partner (including subcontractors) must respect the policies set out in this DMP. Notably, datasets have to be created, managed and stored appropriately and in line with European Commission and local legislation. Validated datasets, registered metadata and data backups will be preserved indefinitely according to law; there are currently no costs for archiving data in this repository. Resources for long-term preservation are mainly discussed/fixed on a project-by-project basis; a complete roadmap has yet to be established.

As has been stated, the CRO SSS, by means of Metronomia, a German enterprise located in Munich, will take care of the initial management of research data; if any data are transferred to the sponsor or to any of the sites (data access is only available for the specific information collected at each site), the recipient of the information then becomes responsible for its management. Detailed information about the sponsor's infrastructure for this purpose is given in Section 3.2.

**5\. Data security**

#### A) Sponsor's infrastructure

##### Repository access control

Both physical and logical access control systems are installed in the global data repository. The physical equipment that supports the sponsor's central information systems is housed at Acens, in a data centre located in Madrid; VSistemas takes care of all these activities on behalf of the sponsor. Access to the facilities is restricted to authorized personnel only. In the case of personnel external to Acens, access can only be granted upon request and explicit authorization. The premises are under surveillance 24x7, 365 days per year, with closed-circuit video surveillance. They also have redundant power systems to ensure the continuous availability of the power supply.

With regard to logical access, to establish a connection with the computers of the IT system it is necessary to authenticate with a username and password, managed in the file server and domain controller. Different user profiles have been established in the information system to provide the authorized accesses for each of them. These profiles establish their privileges and accesses to system resources. All users must have a profile assigned to access the system. The user identifiers and passwords are for individual and non-transferable use and therefore cannot be shared. Passwords are stored on the server in encrypted form.

##### Information backups

The procedure for backing up the system is as follows:

* The physical server has two external hard drives connected via USB, to which the backup server writes the copies.
* Daily, at the end of the working day, incremental security backups are made of the Files, Gextor and Algorithm servers.
* Weekly, full backup copies of the servers are made, including both data and internal security and operation configurations.
* An automatic check of the backups is carried out weekly to verify that they are correct and valid to be used in case of failure (an illustrative sketch of such a check follows this list).
* A complete copy of all virtual machines (VDI) is made weekly.
* An automatic check of the backups of the virtual desktops is carried out monthly.
* Veeam Backup & Replication is used as the backup software.

All backup operations, whether successful or failed, are automatically reported by e-mail to VSistemas personnel for follow-up and review.
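Purely as an illustration of what such an automated backup check involves (the project itself relies on Veeam Backup & Replication for this), the following minimal sketch compares SHA-256 checksums of backup copies against a manifest; the paths and manifest format are hypothetical.

```python
# A minimal, hypothetical sketch of verifying backup copies against a manifest
# of expected SHA-256 digests; Veeam performs this role in the real setup.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

manifest = json.loads(Path("backup_manifest.json").read_text())  # {name: digest}
for name, expected in manifest.items():
    copy = Path("/mnt/backup_drive") / name
    ok = copy.exists() and sha256(copy) == expected
    print(f"{name}: {'OK' if ok else 'FAILED'}")  # failures get e-mailed out
```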
##### Emergency plan within the repository environment

When a user of the system detects an incident, it is reported to VSistemas personnel by e-mail or telephone. The notification is registered with the following data:

* Incident number
* Date and time at which it occurred (or, failing that, at which it was detected)
* Person who reports it
* Description of the incident
* Incident category
* Urgency or criticality

Incidents will be registered through the GLPI tool, based on an internal VSistemas deployment, for monitoring and control. The technician assigned to the incident is responsible for resolving it and recording the actions carried out. If it cannot be resolved, it will be escalated to the appropriate technical level, documenting this event in the IT tool while responsibility for tracking the incident is maintained.

#### B) Clincase database information

During the development of the trial, data storage is secured by means of the following measures (Clincase hosting):

* Fully qualified hosting environment.
* All hardware owned and controlled by Quadratek Data Solution.
* First-class data hosting centre with uncompromising security.
* Second data centre for real-time off-site mirroring of all data, daily full backups and disaster recovery.
* Automated failover of all hosting platforms (n+1 redundancy).

Specifications for the second data centre are as follows (location: Hosting Center eShelter Berlin):

* Server located in the collocation area.
* All hardware owned and controlled by Quadratek.
* Vendor audit performed.
* Fully redundant hardware.
* 24x7 manned security, CCTV, electronic access system.
* Redundant power supply, internet connection and air conditioning system.
* Redundant data links.

##### Information backups

* Backups located at the second data centre in Berlin (Level 3).
* All databases mirrored in nearly real time.
* Daily full backup stored on redundant encrypted hard drives.
* Retention period of daily backups up to 365 days.
* For true 24x7 operations: independent full backups (not blocking the productive database instance).
* All backups are tested every 6 months.

##### Security

* Firewall protection of the data hosting environment.
* Access to study servers only through an HTTPS proxy.
* Use of the Debian Linux operating system for maximum access security and virus protection.
* Regular implementation of security patches (scheduled or urgent patches).
* Remote access secured by SSH, with role-based access restriction.
* Monthly back-up of all data to tape and daily back-up of all data to the cloud.

The Business Continuity Plan describes all the details. In general, all processes have relevant standard operating procedures (SOPs).

**6\. Ethical aspects**

The COLOFAST project will follow the ethical principles set out in Article 34 of the Grant Agreement, which states that all activities must be carried out in compliance with:

1. ethical principles (including the highest standards of research integrity), and
2. applicable international, EU and national law.
In this line, all the institutions involved also take care to respect the highest standards of research integrity, fulfilling the terms and conditions described in Article 34 of the agreement. It is worth mentioning that all ethical requirements will be complied with wherever the activities put in place are linked to ethical issues. As has been reported, the clinical trial protocol has been submitted in order to obtain the corresponding approval in all the countries in which recruitment sites are located; the type of molecular data that will be generated is described in the corresponding informed consent form that each participant must sign in order to take part in the study.
0235_REFLEX_691685.md
**Funder:** European Commission (Horizon 2020).

**1 DATA SUMMARY**

At the core of the REFLEX project is the development of the comprehensive "Energy Models System" (EMS), coupling different models and tools from the REFLEX partners. A good understanding of data types and data flows is therefore indispensable. In what follows, we discuss the purpose of data collection and generation, the types and formats of data, data utility for third parties, as well as the REFLEX data protection and exploitation strategy.

## 1.1 PURPOSE OF DATA COLLECTION AND GENERATION

As briefly introduced above, the main objective of the REFLEX project is to analyse and evaluate the development towards a low-carbon energy system in the EU up to the year 2050. The focus lies on the evaluation of several flexibility options to support the system integration of increasing generation from intermittent RES (see Deliverable 4.1: Overview of techno-economic characteristics of different options for system flexibility provision; and Deliverable 4.3: Report on cost optimal energy technology portfolios for system flexibility in the sectors heat, electricity and mobility). The analysis and assessment are based on a modelling environment that considers the full extent to which current and future energy technologies and policies interact and how they affect the environment and society, while considering the technological learning of low-carbon and flexible technologies. For analysing and answering the given research questions (see Administrative details), the different models and approaches are coupled to the EMS (see Deliverable 2.3: Report on modelling coupling framework).

The _purpose of data collection_ and preparation within REFLEX is to provide the needed input data for the applied mathematical energy system models. The _purpose of data generation_ is to provide the quantitative background which will help to understand and investigate the complex links, interactions and interdependencies between the different actors and technologies within the energy system, as well as their impact on society and environment. The following sections give an overview on the data flows in the REFLEX project and introduce the project's data structure and different data types.

### 1.1.1 DATA FLOWS IN THE REFLEX PROJECT

The **model pool** of the REFLEX partners contains bottom-up simulation tools and fundamental system optimisation models on national and European level, as well as approaches for Life Cycle Assessment (LCA). Typically, one model cannot cover all aspects of an energy system or the implications of specific policies. Each of these different models focuses on a specific sector or aspect (e.g. heat, electricity, mobility, environmental/social impacts, etc.) of the energy system. The models applied in REFLEX can be grouped into four fields:

* Energy supply (ELTRAMOD, TIMES-Heat),
* Energy demand/usage (TE3, ASTRA, FORECAST, eLOAD),
* Energy market design (PowerACE), and
* Impacts on environment, society and economy (eLCA, sLCA, ESA).

For analysing and answering the given research questions, the different models and approaches are **coupled to the integrated Energy Models System (EMS)**. Applying the EMS makes it possible to perform an in-depth and at the same time holistic assessment of the system transformation and shall contribute to the scientific underpinning of the EU's SET-Plan.
Final modelling results shall help to understand and investigate the complex links, interactions and interdependencies between the different actors and technologies within the energy system as well as their impact on society and environment. All models used within the project have already been used as stand-alone applications. Thus, each model has its own database with already existing data. In the course of the project, **common input data** as well as **model-specific input data** have been defined. Moreover, through model coupling, essential exogenous parameters of the models become endogenous variables of the EMS, i.e. relevant output data of one model serve as input data of another model. It is necessary that collective input variables are harmonized. Therefore, a **common REFLEX database** with a scenario storyline has been developed. This allows for alternative assumptions regarding e.g. the development of macroeconomic parameters, or the impact of fuel prices. Figure 1 illustrates the REFLEX data flows.

**Figure 1: REFLEX data flows**

### 1.1.2 DATA STRUCTURE IN THE REFLEX PROJECT

The results of a model-based analysis depend not only on the chosen methodology, but also on the quality of the data used. For a consistent analysis within the EMS in REFLEX, a common database with harmonised datasets was implemented in a Data Warehouse (DWH). The database of the REFLEX project contains **four groups of data**, i.e. (1) existing model input data, or so-called background data, (2) collected and generated new model input data, or so-called foreground data, (3) generated intermediate model output data for exchange between the models during the iteration process, and (4) generated final result data of the EMS. Table 1 gives a brief overview on these four groups, which will be described in more detail in the following sections.

**Table 1: Groups of data within the database of the REFLEX project**

<table>
<tr> <th> Data group </th> <th> Description / content </th> </tr>
<tr> <td> **Existing model input data** </td> <td> * Data collected, generated or purchased from commercial providers by a project partner before the start of the REFLEX project * Input data collected, generated or purchased from commercial providers by a project partner in the context of projects on behalf of other clients run in parallel to the REFLEX project </td> </tr>
<tr> <td> **Collected and generated new model input data** </td> <td> * Data collected from publicly available sources or purchased from commercial providers by a project partner in the context of REFLEX * Data collected through surveys conducted by a project partner in the context of REFLEX * Data generated based on existing, newly collected or newly purchased data by a project partner in the context of REFLEX </td> </tr>
<tr> <td> **Generated intermediate model output data** </td> <td> * Intermediate results of the model applications for data exchange between the models or for further assessments within the project </td> </tr>
<tr> <td> **Generated final result data of the EMS** </td> <td> * Final results of REFLEX generated by model applications, e.g. CO₂ emissions, energy demand, technology impact evaluation, etc.
</td> </tr> </table>

Regarding model input data, we can thus differentiate between existing and new data, as well as among input data collected from publicly available sources, input data generated by project partners in the context of REFLEX, purchased data coming from commercial providers, and confidential data that have been provided to REFLEX project partners after the signature of non-disclosure agreements (see Figure 2). These different types of data sources will also have an impact on our ability to (re-)publish the respective datasets in an openly accessible research data repository (see below).

**Figure 2: REFLEX data structure**

## 1.2 TYPES AND FORMATS OF DATA COLLECTED AND GENERATED

The following sections describe the four groups of REFLEX data regarding the specific types and formats of the datasets collected and generated.

### 1.2.1 EXISTING MODEL INPUT DATA

Each of the models applied in REFLEX has already been used as a stand-alone application, and thus each model has its own database with already existing input data. These datasets originate from previous work and own assumptions of the project partners as well as from the literature, and have been developed over many years. Most of these input data are rather model-specific, and an unconditional application across several models is limited. Some of the existing data, however, will be reused in the EMS, given that they are up to date, or that no better data are available. The **REFLEX data warehouse** is thereby the central element of common data storage, use and exchange. It includes:

* all data needed for more than one model within the EMS, which therefore have to be harmonized; and/or
* all data needed to validate the results presented in project reports and publications (so-called "underlying data").

The harmonization of input data is necessary to ensure a consistent analysis within the EMS. For the same information (e.g. power plants' start-up costs, or efficiency factors), the same dataset (values) has to be used in all models. The consortium has decided which of the existing datasets are applied. These are included in the project's database and provided to all models before initializing the EMS runs. Table 2 gives an overview on the essential existing model input datasets reused in REFLEX.
# Table 2: Essential existing model input datasets reused in REFLEX

<table>
<tr> <th> Dataset </th> <th> Time period covered </th> <th> Spatial scope </th> <th> Sources </th> </tr>
<tr> <td> Power plants' availability </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013 </td> </tr>
<tr> <td> Power plants' efficiency </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013 </td> </tr>
<tr> <td> Power plants' emission factor </td> <td> 2010-2050 </td> <td> EU28 </td> <td> UBA 2014 </td> </tr>
<tr> <td> Power plants' interest rate </td> <td> 2010-2050 </td> <td> EU28 </td> <td> IEA 2010 </td> </tr>
<tr> <td> Power plants' lifetime of investment </td> <td> 2010-2050 </td> <td> EU28 </td> <td> IEA 2010 </td> </tr>
<tr> <td> Power plants' load change costs (depreciation) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011, own assumptions </td> </tr>
<tr> <td> Power plants' load change costs (fuel factor) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011 </td> </tr>
<tr> <td> Power plants' operation management costs (fixed) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013, VGB 2011a </td> </tr>
<tr> <td> Power plants' operation management costs (variable) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013, Traber & Kemfert 2011 </td> </tr>
<tr> <td> Power plants' specific investment </td> <td> 2010-2050 </td> <td> EU28 </td> <td> DIW 2013 </td> </tr>
<tr> <td> Power plants' start-up costs (depreciation) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> Traber & Kemfert 2011 </td> </tr>
<tr> <td> Power plants' start-up costs (fuel factor) </td> <td> 2010-2050 </td> <td> EU28 </td> <td> Traber & Kemfert 2011 </td> </tr>
<tr> <td> Vehicles CO₂ standard </td> <td> 2010-2050 </td> <td> EU28 </td> <td> EU regulation, own assumptions </td> </tr>
<tr> <td> Vehicles fuel consumption factors </td> <td> 2010-2050 </td> <td> EU28 </td> <td> GHG-TransPoRD project, ASSIST project </td> </tr>
</table>

### 1.2.2 COLLECTED AND GENERATED NEW MODEL INPUT DATA

Some of the needed input data for the EMS have been updated or newly defined according to the research questions and the focus of analysis within the REFLEX project. Furthermore, publicly and commercially available data are used. Additionally, required input data not available in the literature, in existing data repositories or on the market have been generated by the project partners via empirical surveys and/or appropriate assumptions. These data are included in the project's database, too, and in that way provided as harmonized datasets for all models within the EMS. The group of collected and generated new model input data includes:

* data for the REFLEX scenario framework;
* data for demand side management as one source of flexibility; and
* data for experience curves to allow for an endogenous modelling of technological learning.

## _DATA FOR THE REFLEX SCENARIO FRAMEWORK_

Data for the REFLEX scenario storylines describe the overall framework for the model-based analysis and include the main macro-economic and societal drivers as well as techno-economic parameters and regulatory conditions of the political environment. Two main scenarios are distinguished (for a detailed description see Deliverable 1.1: Scenario Description). First, a reference scenario ("Mod-RES") based on observed trends and most recent projections is in line with the PRIMES 2016 Reference Case (Capros et al., 2016).
Second, for a more ambitious policy scenario ("High-RES"), the framework conditions are similar to those of Mod-RES in terms of population and economic growth, while both fuel and CO₂ prices are assumed to be higher. More ambitious climate policies are considered. In order to capture the different possible stances on a future energy system, a centralized and a decentralized version are distinguished. The defined scenario storylines are translated into quantitative model input parameters until the year 2050, which is the defined horizon for the analysis. The data may be aggregated for the whole of Europe, or disaggregated on a national, sectoral or technological level. The macro-economic trends and the societal drivers are based upon official projections provided by the European Commission. All political assumptions have been elaborated considering current and past policy implementations and have been discussed with the European Commission and various stakeholders. Table 3 gives an overview on the data for the REFLEX scenario framework.

# Table 3: Data for the REFLEX scenario framework

<table>
<tr> <th> Dataset </th> <th> Time period covered </th> <th> Spatial scope </th> <th> Sources </th> </tr>
<tr> <td> **Gross domestic product** </td> <td> 2000-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> Capros et al. 2016 (assumptions for NO+CH based on other Horizon 2020 projects) </td> </tr>
<tr> <td> **Population** </td> <td> 2000-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> Capros et al. 2016 (assumptions for NO+CH based on other Horizon 2020 projects) </td> </tr>
<tr> <td> **Price electricity** (initial average costs of gross electricity generation) </td> <td> 2000-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> Capros et al. 2016 </td> </tr>
<tr> <td> **Price fossil energy carriers** </td> <td> 2015-2050 </td> <td> EU28 </td> <td> Capros et al. 2016 </td> </tr>
<tr> <td> **CO₂ price ETS** </td> <td> 2015-2050 </td> <td> EU28 </td> <td> Capros et al. 2016, own assumptions </td> </tr>
</table>

## _DATA FOR DEMAND SIDE MANAGEMENT_

Relevant data for investigating system flexibility in the form of demand side management (DSM) are rarely available from public or commercial sources. In particular, the available database for the tertiary sector with regard to DSM is highly incomplete. Therefore, an empirical survey on DSM in the tertiary sector has been conducted with the aim of improving the model input data and filling data gaps (see Deliverable 2.2: Report on Survey Findings). Based on the analysis of the collected specific empirical data, existing datasets have been extended and new datasets generated. The design of the survey was established by the REFLEX partners. The survey itself was conducted for four European countries by an international market research institute. Empirical data for one further country (Germany) was provided in-kind by REFLEX partner Fraunhofer ISI. DSM data for further countries are deduced from the survey results. The relevant model input parameters for modelling DSM options to be deduced from the empirically ascertained data are given in Table 4 (an illustrative sketch of their use follows the table).

# Table 4: Data for modelling DSM options

<table>
<tr> <th> **Dataset** </th> <th> **Time period covered** </th> <th> **Spatial scope** </th> <th> **Sources** </th> </tr>
<tr> <td> **DSM potential** (share of flexible load per energy usage process) </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> empirical survey, public and commercial sources </td> </tr>
<tr> <td> **DSM costs** (activation costs per energy usage process) </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> empirical survey, public and commercial sources </td> </tr>
<tr> <td> **DSM time of interference** (maximum load reduction time) </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> empirical survey, public and commercial sources </td> </tr>
<tr> <td> **DSM number of interventions** (frequency of DSM measures) </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> empirical survey, public and commercial sources </td> </tr>
<tr> <td> **DSM shifting time** (allowed points of time or time periods/frames for DSM) </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> <td> empirical survey, public and commercial sources </td> </tr>
</table>
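As a small illustration of how parameters of this kind could constrain a load-shifting decision in a model, consider the following sketch; the numbers and the admissibility rule are illustrative assumptions, not the implementation used in the REFLEX models.

```python
# A minimal sketch of checking one proposed DSM intervention against
# survey-based limits of the kind listed in Table 4; values are illustrative.

def dsm_shift_admissible(flexible_share, reduction_hours, interventions_used,
                         max_share=0.15,         # DSM potential (share of load)
                         max_hours=2,            # DSM time of interference
                         max_interventions=100   # DSM interventions per year
                         ) -> bool:
    """Is the proposed load shift within the DSM parameter limits?"""
    return (flexible_share <= max_share
            and reduction_hours <= max_hours
            and interventions_used < max_interventions)

print(dsm_shift_admissible(0.10, 2, 42))  # -> True
print(dsm_shift_admissible(0.20, 1, 42))  # -> False: exceeds DSM potential
```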
## _DATA FOR EXPERIENCE CURVES_

In order to enable an endogenous modelling of technological developments and the resulting production cost reductions, experience curves for the most relevant technologies of each sector (electricity including storage and power-to-X, industry, mobility, heating/cooling, energy end-use) have been developed in a first step (see Deliverable D3.2: Comprehensive Report on Experience Curves). In a second step, they have been implemented in the sectoral models of the EMS. Special attention is given to the determination of uncertainty ranges of progress ratios (i.e. the slopes of the experience curves; see the formulation below), as these can have a major impact on modelling results, especially for long-term modelling up to the year 2050. Especially for technologies that depend strongly on either the available geographical potential (e.g. wind onshore, offshore) or on raw material prices, a decomposition of the experience curve using a multi-level experience curve is performed. This allows the determination of the most important factors behind cost development, such as variations in steel or oil prices, as well as scale effects. The empirical data needed for defining the experience curves have been collected by means of interviewing industry experts, conducting specific survey methods and analysing detailed statistics (e.g. construction, production and consumer price indices as well as installed capacities and cost developments in the electricity, heat and mobility sectors).
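For reference, the single-factor experience curve behind this notion of a progress ratio can be written as follows (a standard formulation from learning-curve theory; the symbols are generic rather than REFLEX-specific):

```latex
C(X) = C_0 \left( \frac{X}{X_0} \right)^{-b},
\qquad
PR = 2^{-b},
\qquad
LR = 1 - PR
```

Here $C(X)$ is the unit cost at cumulative production $X$, $C_0$ the cost at the reference production volume $X_0$, and $b$ the learning exponent. Each doubling of cumulative production multiplies the unit cost by the progress ratio $PR$; for example, $PR = 0.8$ (i.e. a learning rate $LR$ of 20%) means a 20% cost reduction per doubling.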
Table 5 gives an overview on the technologies for which experience curves are developed.

# Table 5: Technologies by sector for which experience curves are developed

<table>
<tr> <th> Category </th> <th> Technology </th> </tr>
<tr> <td> **Electricity generation** </td> <td> CCS (membrane, oxyfuel, pre-combustion, post-combustion) </td> </tr>
<tr> <td> Photovoltaics: modules (mono/poly, CdTe), BOS, systems </td> </tr>
<tr> <td> Wind onshore </td> </tr>
<tr> <td> Wind offshore </td> </tr>
<tr> <td> Fuel cell micro-CHP </td> </tr>
<tr> <td> **Electricity storage** </td> <td> Battery: Lithium-ion (utility, residential) </td> </tr>
<tr> <td> Battery: Redox-flow </td> </tr>
<tr> <td> **Heating/cooling** </td> <td> Heat pump (air/water) </td> </tr>
<tr> <td> **Industry** </td> <td> Industrial heat pumps (large scale) </td> </tr>
<tr> <td> Industrial heat/steam (Industrial CCS) </td> </tr>
<tr> <td> **Mobility** </td> <td> Battery electric vehicles (EV Lithium-ion battery) </td> </tr>
<tr> <td> Fuel cell vehicles (EV fuel cell stack) </td> </tr>
<tr> <td> Hybrid electric car (HEV NiMH battery) </td> </tr>
<tr> <td> **Power-to-X** </td> <td> Power-to-hydrogen (alkaline electrolysis) </td> </tr>
</table>

To estimate the potential of alternative fuel technologies in Europe, the _global_ automotive market (especially including North America and Asia) is considered. The analysis is focused on major driving patterns. The reason for analysing global passenger car markets is to identify the global market penetration of electric vehicles, which in turn will influence the demand for Li-ion batteries substantially and thus will have a crucial impact on technological learning. This information helps to assess the future prices of batteries and fuel cells based on learning curve theory.

### 1.2.3 GENERATED INTERMEDIATE MODEL OUTPUT DATA

By coupling the different approaches of the REFLEX partners, the system boundaries of each stand-alone model are partly dissolved and most **exogenous parameters of each model become endogenous variables of the EMS**. This is done by using the relevant output data of one model as input data of another model. The REFLEX database, which was developed in the course of the project, facilitates the data exchange between the models and stores the relevant input and output datasets. Technical routines are thereby able to overcome several compatibility issues: models may use different levels of detail or aggregation, time intervals of the same variables may differ, and, finally, different models use different identifiers for the same datasets. Therefore, when importing the intermediate results into the database, a mapping and any necessary value aggregations or data splits take place at the same time. For this purpose, each data table contains the identifier structures used both by the model from which the intermediate results originate and by the other models using these intermediate data as their own input data. To achieve a stable final state of the EMS within each REFLEX scenario storyline, several iterations with all models are performed (a generic sketch of this iteration loop is given after Table 6). Table 6 gives an overview on the main input and output variables of the different models coupled to the EMS. It shows which input is exogenous if the respective model is used as a stand-alone application but becomes an endogenous input dataset in the EMS.

# Table 6: Main inputs and outputs of the different models

<table>
<tr> <th> Model </th> <th> Needed input i. a. (exogenous if stand-alone application) </th> <th> Endogenous in EMS? (If yes, provided by) </th> <th> Provided output i. a. </th> </tr>
<tr> <td> ELTRAMOD </td> <td> Electricity demand </td> <td> Yes (eLOAD) </td> <td> Electricity prices; Capacity and operation of power plants; RES curtailment </td> </tr>
<tr> <td> Techno-economic data for power plants </td> <td> No </td> </tr>
<tr> <td> H2-demand for mobility </td> <td> Yes (ASTRA) </td> </tr>
<tr> <td> Fuel prices </td> <td> No </td> </tr>
<tr> <td> Heat demand </td> <td> Yes (FORECAST) </td> </tr>
<tr> <td> TIMES-Heat </td> <td> Heat demand </td> <td> Yes (FORECAST) </td> <td> Capacity and operation of heat plants </td> </tr>
<tr> <td> Capacity and operation of power plants </td> <td> Yes (ELTRAMOD) </td> </tr>
<tr> <td> Electricity wholesale prices </td> <td> Yes (ELTRAMOD) </td> </tr>
<tr> <td> Techno-economic data for power plants </td> <td> No </td> </tr>
<tr> <td> TE3 </td> <td> Macro-economic framework data </td> <td> No </td> <td> Used transport technologies in global key markets </td> </tr>
<tr> <td> Techno-economic data for vehicles </td> <td> No </td> </tr>
<tr> <td> Fuel prices </td> <td> No </td> </tr>
<tr> <td> ASTRA </td> <td> Electricity prices </td> <td> Yes (ELTRAMOD) </td> <td> Used transport technologies and energy demand for mobility in EU; H2-demand for mobility </td> </tr>
<tr> <td> Macro-economic framework data </td> <td> No </td> </tr>
<tr> <td> Techno-economic data for vehicles </td> <td> No </td> </tr>
<tr> <td> Fuel prices </td> <td> No </td> </tr>
<tr> <td> FORECAST </td> <td> Electricity wholesale prices </td> <td> Yes (ELTRAMOD) </td> <td> Yearly energy demand by sector </td> </tr>
<tr> <td> Electricity demand mobility </td> <td> Yes (ASTRA) </td> </tr>
<tr> <td> Macro-economic framework data </td> <td> No </td> </tr>
<tr> <td> Techno-economic data for demand side technologies </td> <td> No </td> </tr>
<tr> <td> Fuel prices </td> <td> No </td> </tr>
<tr> <td> eLOAD </td> <td> Electricity wholesale prices </td> <td> Yes (ELTRAMOD) </td> <td> Electricity demand structure (load profiles) </td> </tr>
<tr> <td> Electricity demand (yearly) </td> <td> Yes (FORECAST) </td> </tr>
<tr> <td> PowerACE </td> <td> Electricity demand </td> <td> Yes (eLOAD) </td> <td> Capacity and operation of power plants under different market designs </td> </tr>
<tr> <td> Framework data for energy markets </td> <td> No </td> </tr>
<tr> <td> Techno-economic data for power plants </td> <td> No </td> </tr>
<tr> <td> Fuel prices </td> <td> No </td> </tr>
<tr> <td> eLCA / sLCA / ESA </td> <td> Capacity, operation and emissions of power plants </td> <td> Yes (ELTRAMOD) </td> <td> Emissions; Impacts on humans and environment </td> </tr>
<tr> <td> Capacity and operation of heat plants </td> <td> Yes (TIMES-Heat) </td> </tr>
<tr> <td> Energy / electricity demand </td> <td> Yes (FORECAST, eLOAD, ASTRA, TE3) </td> </tr>
</table>
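As an illustration of the iterative coupling described above, the following sketch shows a generic fixed-point loop in which each model's relevant outputs are written to a shared store and read back as inputs by the other models until the exchanged values stabilise. The model list matches Table 6, but the `run()` interface, the scalar exchange values and the convergence tolerance are hypothetical simplifications, not the actual EMS code.

```python
# A generic sketch of the EMS coupling loop: intermediate outputs (cf. Table 7)
# are exchanged through a shared store until a stable state is reached.
# run_model() and the convergence tolerance are hypothetical placeholders;
# in the real EMS, identifier mapping and value aggregation happen on import.

def coupled_run(models, store, max_iterations=20, tol=1e-3):
    for iteration in range(max_iterations):
        previous = dict(store)
        for model in models:            # e.g. ELTRAMOD, eLOAD, FORECAST, ...
            outputs = model.run(store)  # read harmonised inputs from the store
            store.update(outputs)       # write outputs under common identifiers
        # Converged when no exchanged quantity changed by more than `tol`.
        if all(abs(store[k] - previous.get(k, float("inf"))) <= tol
               for k in store):
            return iteration + 1        # number of iterations until stability
    raise RuntimeError("EMS did not stabilise within the iteration budget")
```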
# Table 7: Generated intermediate model output data for data exchange within the EMS

<table>
<tr> <th> Dataset </th> <th> Time period covered (10-year steps) </th> <th> Spatial scope </th> </tr>
<tr> <td> **Price electricity (hourly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Demand electricity (hourly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Demand electricity for mobility (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Demand district heating (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Power plants installed capacity and operation (yearly and hourly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Power plants emissions (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Power plants demand energy (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Mobility demand energy (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
<tr> <td> **Mobility emissions (yearly)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr>
</table>

### 1.2.4 GENERATED FINAL RESULT DATA OF THE EMS

After achieving a stable state based on several iterations with all models within the EMS for each REFLEX scenario storyline, the result data of the different models are collected and combined within the project's database into the final result data of the EMS. These data are analysed to derive the key findings on the impacts of technological development and innovation on the energy system and on the environment, society and economy, and they form the basis for answering the research questions of the REFLEX project (see Administrative details). Table 8 gives an overview of the major generated final result data of the EMS.
# Table 8: Major generated final result data of the EMS

<table> <tr> <th> Dataset </th> <th> Time period covered (10-year steps) </th> <th> Spatial scope </th> </tr> <tr> <td> **Price electricity average yearly** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Demand electricity** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Demand district heating** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Power plants installed capacity** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Power plants operation** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Power plants emissions** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Mobility demand energy** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Mobility emissions** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Life cycle environmental and resource impacts** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Life cycle human health (damage / toxicity)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Life cycle societal impacts (risk level)** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> <tr> <td> **Costs external** </td> <td> 2015-2050 </td> <td> EU28+NO+CH (NUTS 0 level) </td> </tr> </table>

### 1.3 DATA UTILITY

As elaborated above, REFLEX collects and generates a considerable amount of research data. On the one hand, these data are necessary to meet the objectives of the project and to answer the research questions. On the other hand, most of the collected and generated data will be useful for further research by the project partners themselves or by third parties, as well as for stakeholders in the energy industry and for policy makers.

#### 1.3.1 EXISTING MODEL INPUT DATA

During the last years, comprehensive databases have been created by the project partners in order to answer a wide range of research questions based on their individual models in stand-alone applications. These data have been collected from a variety of sources, including institutional reports (e.g. from the EC or IEA), scientific publications (e.g. published by Elsevier) and data documentation from third-party modelling works (e.g. from the DIW Berlin). Moreover, part of the data has been generated in the partners’ own earlier modelling exercises carried out in previous research projects, complemented by own assumptions. As these existing model input data are essential underlying data for the REFLEX model runs, their publication ensures the transparency of the generated results and allows third parties to compare project outcomes to other studies with a similar scope. Moreover, re-publishing comprehensive datasets (e.g. on the variety of power plant parameters) in one single place and in one single re-usable format facilitates future research, as the data no longer need to be searched for across the various sources and formats (pdf, data files, etc.) in which they were originally published.

#### 1.3.2 COLLECTED AND GENERATED NEW MODEL INPUT DATA

In what follows, we distinguish among the three categories of collected and generated model input data introduced above, i.e. data for the REFLEX scenario framework, data for DSM and data for experience curves.
## _DATA FOR THE REFLEX SCENARIO FRAMEWORK_

The collected and prepared data for the scenario framework regarding macro-economic and societal drivers, techno-economic parameters and regulatory conditions are tailored to the aim and scope as well as the specific research questions of the REFLEX project, and to the models applied to answer them. They are thus primarily useful as underlying data to ensure the transparency of the generated results and the comparability of the project outcomes to other (existing and future) studies with a similar scope of analysis. Their publication as open research data will also allow a critical discussion of the impact of scenario framework assumptions on model outcomes. Whereas the forecasts of the development of GDP and population have been relatively stable over time, this has not been the case for fuel prices, and even less for the level of costs related to CO2 emissions. The data will be used for updating the existing databases of the different models and for further model-based research of the project partners outside of REFLEX.

## _DATA FOR DEMAND SIDE MANAGEMENT_

Parameters related to different DSM measures have been generated in the course of the REFLEX project based on elaborate empirical surveys. With the collected specific empirical data, the database for investigating system flexibility by DSM has been improved substantially, because relevant data – especially for the tertiary sector – have rarely been available from public and commercial sources. Thus, existing, highly restricted datasets have been extended and, at the same time, new datasets have been created. These data have a high potential for re-use after the end of the REFLEX project. The survey data will not only be a valuable resource for future research activities in general, but will also allow the identification of promising energy applications and DSM potentials in the selected sector in different European countries. This is a highly topical issue, as European electricity systems are evolving towards a generation mix that is more decentralized and less predictable, with additional flexibility expected to be provided by the demand side. This implies that consumers – including small-scale ones – must shift from their current ‘passive’ role to providing ‘active’ demand response; new business models valorizing flexibility provided by the tertiary sector, as well as adaptations in market design and regulation, are therefore required. REFLEX research data on DSM are thus useful not only for the research community, but also for stakeholders from industry and policy making.

## _DATA FOR EXPERIENCE CURVES_

Endogenizing technological learning through experience curves allows for an enhanced assessment of the impacts of policy measures or alternative incentive schemes on realizable future cost reductions. In addition, in view of the current rapid and necessary changes in energy systems (driven partially by policies and partially by markets) and the ensuing need for flexibility, the endogenous modelling of the cost development of existing and new innovative energy-related technologies in bottom-up models will become even more important. However, the data and experience curves required to do so were not available when the REFLEX project started. A review of certain energy supply and demand technologies has been published by Junginger et al. (2010), but these required updating. More recent studies have been published for some individual technologies (e.g.
Bolinger and Wiser 2012; Candelise et al. 2013 or Chen et al. 2012). However, a comprehensive and up-to-date overview was missing. Especially with regard to technologies needed for increasing the flexibility of energy systems (such as storage technologies or DSM devices), few or no experience curves had been published. Thus, to advance the energy models included in REFLEX beyond the state of the art, data collection was required to devise or update experience curves for existing technologies and to estimate experience curves for new ones. Furthermore, comprehensively assessing the effects of technological learning and the demand for increased flexibility in energy systems will require the smart and innovative incorporation and interlinkage of these experience curves in the various sectoral energy models. The collected data have a high potential for re-use after the end of the REFLEX project. The outcome – a comprehensive, state-of-the-art and up-to-date overview of experience curves and underlying databases – will be of high value for other energy models outside the project (developed in the EU as well as worldwide) in order to meet the challenges of modelling our changing energy systems with increasing penetrations of innovative (improved and new) technological solutions over the coming decades.

#### 1.3.3 GENERATED INTERMEDIATE MODEL OUTPUT DATA

These data are only intermediate results of the EMS and are transferred between the models during the iteration processes. A relevant benefit of these data for further applications outside the framework of REFLEX is not expected.

#### 1.3.4 GENERATED FINAL RESULT DATA OF THE EMS

The overall objective of REFLEX is to support the EU’s SET-Plan by strengthening the knowledge base for transition paths towards a low-carbon energy system, based on a cross-sectoral analysis of the entire energy system of the EU. Due to the complexity of this system, the implementation of the SET-Plan clearly requires in-depth knowledge of the interrelationships between the different sectors (electricity, heat and mobility) and energy technologies, but also of the interdependencies between energy and non-energy industries, the environment (beyond greenhouse gas emissions) and society. The result data of the EMS within REFLEX help to understand and investigate the complex links, interactions and interdependencies between the different actors and technologies within the energy system, as well as their impact on society and the environment. Based on the EMS result data, recommendations for effective strategies for a transition of the European energy system to a low-carbon system can be derived. Policy makers at EU level as well as at regional level can use these findings when developing policy measures. Furthermore, the generated final result data of the REFLEX project can be used as a reference or starting point for further research work by the project partners or third parties on the future design of the energy system of the European Union.

### 1.4 DATA PROTECTION AND EXPLOITATION STRATEGY

In order to ensure efficient dissemination and exploitation activities, free of any legal conflicts, the REFLEX project partners signed a Consortium Agreement (CA). 2 The CA deals, among other things, with the details of the partners’ background data and with the rights to, the protection of and the exploitation of pre-existing datasets and results generated solely and/or jointly during the lifetime of the project.
Moreover, the CA sets up specific rules on how to deal with dissemination activities (see also Deliverables D7.3 and D7.4 Dissemination and Communication Plan) and on how to ensure open access to peer-reviewed scientific publications. The following basic CA rules regarding data protection and exploitation apply:

* All partners define their individual existing background data required for their successful participation in the project. Background data are own and/or commercial model input data generated/purchased before the start of the project. The rights to these data remain with the respective partner, but royalty-free access for the others is granted if not restricted by third parties and if it is required to enable the research activities in the context of the project.
* Data that are acquired by individual partners during the project without using REFLEX funds, e.g. in the context of projects on behalf of other clients run in parallel, will also be treated as pre-existing background data.
* The property rights to data collected and datasets/results generated during the project by using REFLEX funds (foreground data) belong to those partners involved in the collection and generation processes. When more than one consortium member is involved, the dataset is jointly owned by the respective consortium members.

Dissemination and exploitation of data and results are executed in accordance with EU laws and with respect to specific laws in the participating countries. Before any dissemination activity takes place, the respective legal aspects are examined and clarified. This is particularly the case for data from the DSM survey, for experience curve data and for data purchased from commercial providers. The possibility of protecting results generated within the project (consortium) is also examined before publication. All participants have departments specifically devoted to managing intellectual property; these departments manage the relevant protection processes. Within REFLEX, the dissemination and exploitation of results – not only in terms of knowledge and insights, but also in terms of data – is coordinated by the Exploitation and Innovation Manager (EIM), who is in charge of knowledge management and innovation activities. Thus, and as specified in the Grant Agreement, the EIM is responsible for:

* maintaining a registry of relevant background data,
* maintaining a registry of foreground data gathered and generated in the work packages during the project,
* assessing the opportunities for exploitation, for example by following political events in the energy sector or searching other scientific databases for similar developments, and
* proposing specific exploitation measures, e.g. policy briefs and events.

During the project, periodic analyses of transfer opportunities take place in order to adjust the exploitation strategies. Thereby, the EIM can identify synergies to ensure the best and most suitable use and exploitation of results. All consortium partners contribute to the exploitation plan of the project throughout its life span. The EIM is in close contact with, and regularly informed about, the exploitation plans of the partners, and regularly advises the consortium and individual partners about possible strategies. The exploitation strategy related to REFLEX research data is outlined below:

* First, it is decided whether to (re-)publish a dataset in the REFLEX data repository or not, and if yes, in what way (data format, metadata provided, open or restricted access).
* Participants inform the EIM and other consortium members if they wish to publish or disseminate any datasets, whether directly or indirectly.
* Before any dissemination activity takes place, the participants examine the possibility of protecting generated results, for instance with regard to potential re-use for commercial purposes.
* In the case of collected input data: it is examined whether the rights of third parties are affected and, if necessary, their consent to the re-publication of these data is obtained.
* Upon an (affirmative) dissemination decision, the dataset is made available (regarding the different dissemination types, see Sections 2.2 and 2.4).

**2 FAIR DATA**

Collected and generated data relevant for EMS model runs are implemented in the developed REFLEX database in a standardized way. The database is implemented in PostgreSQL. For managing the database, the management tool pgAdmin and a proprietarily developed interface tool are used, the latter providing several data-preparation and identifier-mapping functions. A selection of existing data as well as data collected and generated during the project will be made available to interested research groups and stakeholders from policy and industry. The following sections outline how the data are exploited and made accessible for verification and re-use, and how data will be curated and preserved upon closure of the project.

## 2.1 MAKING DATA FINDABLE – REFLEX DATA REPOSITORY

For making data findable, a data catalogue – the so-called **REFLEX data repository** – has been prepared. The consortium will continue to provide the data via the REFLEX project website for a limited period of time after the end of the REFLEX project (< 18 months). The project website will be maintained during this period (only ensuring online accessibility). **For long-term data provision,** the following approaches (or a combination of them) are currently being evaluated:

1. The data remain in the _DWH of the project partner ESA²_. The data provision is transferred to the website of the ESA² company ( _http://www.esa2.eu_ ). There, a reference to the project will be created and access to the data will be enabled.
2. The data are published in the _ZENODO repository_ ( _https://zenodo.org/_ ). This is a free-of-charge online repository created through the European Commission’s OpenAIREplus project and hosted at CERN, Switzerland. It encourages open-access deposition of any data format, but also allows deposits of content under restricted or embargoed access.
3. The data are published in the inter-disciplinary _OpARA repository_ ( _https://tudresden.de/zih/forschung/projekte/opara_ ). This is an online, long-term repository for research data, hosted by Technische Universität Dresden.
4. The data are published on the _“OpenEnergy Platform”_ ( _https://wiki.openmodinitiative.org/wiki/Proposal_for_the_OpenEnergy_Platform_ ). This platform is still under development and aims to expand the existing “OpenMod” online presence by, amongst other things, offering a place to store and exchange data (raw data and processed data) needed for modelling work.

Approach i) is preferred. Should permanent provision via the ESA² website no longer be possible at any time, option iv) is preferred; the structure of the REFLEX data has already been designed for this option. Additionally, this platform offers the possibility to present information about the REFLEX project.
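Should option ii) be pursued, deposits could also be scripted, as Zenodo exposes a public REST API for creating depositions, uploading files and attaching metadata. The following Python sketch is purely illustrative and not part of the agreed REFLEX workflow: the access token, the file name (borrowed from the example in Table 10 below) and all metadata values are placeholders, and identifiers such as the license string should be checked against the current Zenodo API documentation before use.

```python
import requests

ZENODO_TOKEN = "..."  # placeholder personal access token

# Create an empty deposition (draft record)
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params={"access_token": ZENODO_TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# Upload a dataset file into the deposition's file bucket
with open("CAP_installed.csv", "rb") as fp:  # example file name only
    requests.put(deposition["links"]["bucket"] + "/CAP_installed.csv",
                 data=fp, params={"access_token": ZENODO_TOKEN}).raise_for_status()

# Attach minimal descriptive metadata (placeholder values)
metadata = {"metadata": {
    "title": "REFLEX example dataset: installed power plant capacity",
    "upload_type": "dataset",
    "description": "Illustrative deposit only.",
    "creators": [{"name": "REFLEX consortium"}],
    "license": "cc-by-4.0",  # license identifier as used by Zenodo
}}
requests.put(deposition["links"]["self"],
             params={"access_token": ZENODO_TOKEN}, json=metadata).raise_for_status()
# Publishing would be a final POST to deposition["links"]["publish"].
```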
In any case, the data will be available to third parties after the end of the project, with free access to the metadata and to the open-access data contents. The length of time for which the data will remain re-usable is not restricted. The **REFLEX data catalogue** is currently implemented on the REFLEX project website ( _http://reflex-project.eu/public/data-publication/_ ) in a simplified preliminary frontend version with a format similar to Table 9 below. The catalogue thus gives a comprehensive overview of all datasets available in the data repository and at the same time allows users to directly access metadata sheets with detailed information on the content and scope of a specific dataset. Moreover, a short link for downloading the individual dataset in a user-friendly format is provided.

# Table 9: Data catalogue in the REFLEX data repository

<table> <tr> <th> Dataset </th> <th> ID </th> <th> Version </th> <th> Description </th> <th> Download </th> </tr> <tr> <td> [dataset name and _link_ _to metadata sheet_ ] </td> <td> [dataset identifier] </td> <td> [version number, i.e. year_month_day] </td> <td> [brief description] </td> <td> [direct _link for_ _download_ ] </td> </tr> <tr> <td> Dataset 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Dataset 2 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Dataset 3 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Dataset 4 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> … </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table>

The frontend concept of the final version is shown in Figure 3. It is currently under construction and will give a user-friendly overview of the available data in a content-related hierarchical (tree) structure. By clicking on a dataset, the associated metadata and a data preview are displayed, and the dataset can be downloaded. The scope and design of the metadata have been oriented towards the metadata structure of the “Open Power System Data” platform ( _http://www.data.open-power-system-data.org/_ ). Table 10 shows the **REFLEX metadata sheet**. It includes not only general descriptive items (e.g. name, description, version number, etc.), but also detailed information on the dataset’s scope (temporal, sectoral, spatial) as well as administrative matters (e.g. recommended text for attribution or the licensing of the dataset). Each dataset can be unambiguously identified via the combination of dataset name and version label; both are included in the unique dataset ID.

**Figure 3: Concept of the frontend for the REFLEX data repository**

# Table 10: Scope and contents of the metadata for a provided dataset

<table> <tr> <th> Category </th> <th> Content </th> </tr> <tr> <td> **Name** </td> <td> Name of the dataset (a concise one, short but informative) </td> </tr> <tr> <td> **ID** </td> <td> Dataset identifier (unique dataset ID including the dataset name and version number) </td> </tr> <tr> <td> **Version** </td> <td> Year_month_day (e.g. 2015_04_21) </td> </tr> <tr> <td> **Keywords** </td> <td> List of keywords </td> </tr> <tr> <td> **Description** </td> <td> Short description of the dataset </td> </tr> <tr> <td> **Remarks** </td> <td> Specific remarks (e.g. restrictions, data gaps) </td> </tr> <tr> <td> **Timescale** </td> <td> Time period covered (e.g. 2005 to 2030) and timesteps (e.g.
hourly, yearly, 5-year steps) </td> </tr> <tr> <td> **Spatial scope** </td> <td> Countries/regions covered, with scope and level of differentiation or aggregation (e.g. EU-28, national data, list of countries) </td> </tr> <tr> <td> **Sectoral scope** </td> <td> Sectors covered (e.g. households, industry, traffic) as well as sub-categories if available (e.g. road traffic, rail traffic, aviation) </td> </tr> <tr> <td> **Sources and input data** </td> <td> Sources used to prepare/provide the dataset, if possible with links to the primary data </td> </tr> <tr> <td> **Attribution** </td> <td> Recommended text for attribution </td> </tr> <tr> <td> **Contact** </td> <td> Contact information for questions/remarks </td> </tr> <tr> <td> **Access** </td> <td> Terms of data access/usage, ideally a standard license </td> </tr> <tr> <td> **Download** </td> <td> Link to the dataset for download, e.g. _CAP_installed.xls_ </td> </tr> <tr> <td> **Field documentation** </td> <td> _Link_ to extra file/page [see Table 11] </td> </tr> </table>

The field documentation, the last item of the metadata sheet, is displayed in full on a separate page/window and contains the complete list of variables within the dataset, with the sub-categories field name, field type, field unit and field description (see Table 11).

# Table 11: Field documentation

<table> <tr> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Field name </td> <td> e.g. CAP_inst </td> <td> … </td> <td> … </td> <td> … </td> <td> … </td> </tr> <tr> <td> Field type </td> <td> e.g. Number </td> <td> … </td> <td> … </td> <td> … </td> <td> … </td> </tr> <tr> <td> Field unit </td> <td> e.g. MW </td> <td> … </td> <td> … </td> <td> … </td> <td> … </td> </tr> <tr> <td> Field description </td> <td> e.g. Installed capacity end of the year </td> <td> … </td> <td> … </td> <td> … </td> <td> … </td> </tr> </table>

With regard to the publication and transparency of the results of the project work (e.g. in scientific journals, book chapters, conference proceedings or policy briefs), the preparation of tailored dataset packages for the different publications is planned. Such a package contains, together with a short content description, a compilation of (1) the metadata of the published results, and (2) the metadata of the relevant datasets required to verify the results, as long as these can be made available.
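To make the structure of Tables 10 and 11 concrete, the sketch below renders one such metadata sheet as a machine-readable record (here a Python dictionary). All values are illustrative only, borrowing the CAP_installed example used in Tables 10 and 11; the contact address is a placeholder, not an actual project address.

```python
# Illustrative REFLEX metadata record mirroring Table 10; all values are examples.
metadata_sheet = {
    "name": "Power plants installed capacity",
    "id": "CAP_installed_2018_05_17",          # unique ID: dataset name + version
    "version": "2018_05_17",                   # year_month_day
    "keywords": ["power plants", "installed capacity", "EU28"],
    "description": "Installed capacity of power plants per country and year.",
    "remarks": "No known data gaps.",
    "timescale": {"period": "2015-2050", "steps": "10-year steps"},
    "spatial_scope": "EU28+NO+CH (NUTS 0 level)",
    "sectoral_scope": ["electricity supply"],
    "sources": ["ELTRAMOD model results"],
    "attribution": "REFLEX project, dataset CAP_installed_2018_05_17",
    "contact": "data@reflex-project.eu",       # placeholder address
    "access": "CC-BY 4.0",
    "download": "CAP_installed.csv",
    # Field documentation as in Table 11: one entry per variable
    "fields": [
        {"name": "CAP_inst", "type": "Number", "unit": "MW",
         "description": "Installed capacity end of the year"},
    ],
}
```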
## 2.2 MAKING DATA OPENLY ACCESSIBLE – REFLEX OPEN DATA

At the beginning of the REFLEX project, three different possibilities for data dissemination were considered by the project partners, i.e.

1. _Open access data publication_: In this case, data owners grant royalty-free access to a meaningful selection of generated results to other participants and to the public, possibly restricted by appropriate embargo periods and/or respecting restrictions from editors of scientific journals and organizers of conferences.
2. _Commercial data exploitation_: In this case, data suitable for commercial exploitation (e.g. for commercial re-use by consulting companies) will be managed by the project partner ESA², with the explicit purpose of exploiting research results (including research data) related to (coupled) energy systems modelling.
3. _Indirect data publication_: In this case, parts of the generated data are disseminated only indirectly, as part of intermediate or final results of models and/or as qualitative outcomes based on post-analysis of results.

The following discussion concentrates on direct data publication. Both input data collected from publicly available sources (existing and new model input data) and data generated via the project partners’ application of the different simulation models are made available as open data as far as possible, in line with the guidelines of the EU. Thereby, any legal conflicts have to be avoided (see Section 2.5 for details). The same holds for the generated final results of the EMS (see Table 12). The project partners are discussing making the results of the DSM survey and the various generated DSM datasets openly available for research purposes with added value for society; commercial use by third parties, however, shall be subject to explicit permission and an adequate royalty payment. A similar discussion is taking place for data on experience curves. The data required for technological learning curves have been collected by interviewing industry experts, conducting specific surveys and analysing detailed statistics (e.g. construction, production and consumer price indices as well as installed capacities and cost developments in the electricity, heat and mobility sectors). Model input data purchased by the project partners, in contrast, cannot be re-published as open data due to the data usage conditions of the commercial providers. Finally, the same holds for confidential data on fuel price developments provided by the European Commission (Capros et al. 2016), for which non-disclosure agreements have been signed.

# Table 12: REFLEX data published in the data repository (the right to make changes is reserved)

<table> <tr> <th> Data group </th> <th> Data source </th> <th> (Re-)publication in the REFLEX data repository? </th> </tr> <tr> <td> **Existing model input data** </td> <td> Data collected from publicly available sources </td> <td> **YES** – if no legal conflicts </td> </tr> <tr> <td> </td> <td> Data generated by project partners </td> <td> Model outcomes: **YES** </td> </tr> <tr> <td> </td> <td> Purchased data </td> <td> **NO** </td> </tr> <tr> <td> **Collected and generated new model input data** </td> <td> Data collected from publicly available sources </td> <td> **YES** – if no legal conflicts </td> </tr> <tr> <td> </td> <td> Data generated by project partners </td> <td> Model outcomes: **YES** Data for DSM: **YES, but** restricted access 1 Data for experience curves: **YES, but** restricted access 1 </td> </tr> <tr> <td> </td> <td> Purchased data </td> <td> **NO** </td> </tr> <tr> <td> </td> <td> Confidential data </td> <td> **NO** – NDAs have been signed </td> </tr> <tr> <td> **Generated intermediate model output data** </td> <td> EMS models </td> <td> **NO** – transfer between models during iteration process, no relevant benefit for further applications </td> </tr> <tr> <td> **Generated final result data of the EMS** </td> <td> EMS models </td> <td> **YES** </td> </tr> <tr> <td> 1 Open and royalty-free access restricted to non-commercial use </td> <td> </td> <td> </td> </tr> </table>

In the case of an _Open Access Data Publication_, the dataset can be easily downloaded in a standard format. The planned standard format for data (re-)publication is “.csv”. The download links for the different formats are given within the metadata sheet in the category “Download”. In the case of _Commercial Exploitation_ of a dataset, a registration procedure for all those interested in such datasets is an option. This includes the opportunity to differentiate the conditions for access depending on the type of inquirer or the planned re-use (e.g.
dataset being free of charge for public scientific institutions, but subject to a charge in the case of commercial re-use by a company). After registration of a request for a dataset, a time-limited download link is provided via e-mail to the registered contact, together with the terms of usage and, where applicable, the invoice. The request procedure could be implemented in the respective metadata sheets – which are available free of charge in any case – in the category “Access”. The datasets are made available to third parties as soon as they are generated, prepared and reviewed for publication/commercial exploitation, and once the conditions of dissemination have been decided and possible protections of datasets have been clarified within the consortium. However, additional restrictions, by setting appropriate embargo periods and/or respecting restrictions from editors of scientific journals and organizers of conferences, are also possible. A generally valid statement regarding embargo periods is not possible at the moment; they can differ from case to case.

## 2.3 MAKING DATA INTEROPERABLE

To increase the interoperability of the provided data, commonly used vocabularies will be applied for the metadata contents as well as for the identifiers and their contents within the datasets. These include standardized naming conventions and codes used in official statistics (e.g. for countries, regions, etc.). Furthermore, naming conventions for specific energy system topics will be oriented towards the “Open Power System Data” platform ( _http://www.data.open-power-system-data.org/_ ). An additional mapping procedure or the provision of mapping tools for data users is not envisaged.

## 2.4 INCREASE DATA REUSE – LICENSING OF REFLEX DATA

Intellectual property rights on databases are subject to different pieces of legislation. First, copyright applies to a wide range of creative, intellectual and artistic works – including data. Second, in 1996, the EU adopted the Database Directive (96/9/EG), which had to be implemented into national law by the Member States. Moreover, rights and duties resulting from data privacy law or from existing licenses on datasets have to be respected. In order to increase the re-use of REFLEX research data and avoid any legal ambiguities for data users, the (re-)publication of REFLEX data goes hand in hand with a clear licensing procedure. In what follows, we discuss the different types of licenses available (i.e. standard vs. custom; open vs. restricted), give a more detailed overview of standardized licenses, describe how the data are licensed in order to permit the widest re-use possible, and elaborate on data quality assurance.

### 2.4.1 TYPES OF LICENSES

A license is a contract by which the licensor (in our case the data owner) allows the licensee (the data user) to use otherwise protected material. Thus, it clarifies the conditions under which data can be re-used. In the absence of a license, by contrast, the data owner retains full proprietary copyright (see also Section 2.5 for further elaborations thereon). A first distinction can be made between customized and standardized licenses. Drafting a new **customized license** allows tailoring it to the individual needs and desires of the partners. However, it might leave issues open – and thus might in the end leave data users with default copyright (i.e. no open data). Moreover, it might be unclear or ambiguous on certain issues.
An established **standardized license**, in contrast, has the advantages that its content and scope are well understood and that its legal text is rigorously written and suitable for different legal systems. The REFLEX project partners therefore agreed on the use of standardized licenses for datasets published in the REFLEX data repository. A second distinction can be made between open data and non-open data licenses. Thereby, “ **open data** […] can be freely used, modified, and shared by anyone for any purpose” (see also _https://opendefinition.org/_ ). 3 More specifically, data to be classified as open data must be “available under an open license; available as a whole and at no more than a reasonable one-time reproduction cost, preferably downloadable via the internet; and it must be provided in a convenient and modifiable form, machine-readable, available in bulk and provided in an open format” (see also _https://open-power-system-data.org/legal_ ). 4 Three main types of **open data licenses** – with increasing obligations for users – exist. These are _public domain licenses_, under which users are completely free to use, modify and share the data; _attribution licenses_, which require users to appropriately credit the dataset creator; and finally _share-alike licenses_, which in addition to the above also stipulate that any derivative work be made available under the same license. **Licenses for non-open data** – with increasing restrictions for users – include _no-derivatives licenses_, where it is only allowed to copy, distribute and display original copies of the work and any modification is subject to explicit permission, as well as _non-commercial use only licenses_ and different combinations thereof. Figure 4 illustrates the different types of licenses, distinguishing between open and non-open data.

**Figure 4: Different types of licenses – open vs. non-open data**

Several initiatives have developed sets of standardized licenses for specific works, such as artwork in general (e.g. _Creative Commons_ ), software (e.g. _MIT_ ), or explicitly for data publication ( _Open Data Commons_ ). In what follows, we give an overview of the different standardized licenses available, including examples.

* **Public domain licenses** – ZERO licenses (i.e. no copyright): Users are completely free to use, modify and share the data, even for commercial purposes and all without asking permission. Examples: Creative Commons Zero ( _CC0 1.0_ ); Open Data Commons Public Domain Dedication and Licence ( _PDDL 1.0_ ); Data licence Germany – Zero 2.0 ( _dl-de-2-0_ ).
* **Attribution licenses** – BY licenses (i.e. name the author): Users are completely free to use, modify and share the data, even for commercial purposes and all without asking permission. The only requirement is adequate attribution, i.e. the name of the data provider and references to the license and dataset. Examples: Creative Commons Attribution ( _CC-BY 4.0_ ); Open Data Commons Attribution ( _ODC-BY 1.0_ ); Data licence Germany – attribution – 2.0 ( _dl-de/by-2-0_ ).
* **Share-alike licenses** – SA licenses: Users are completely free to use, modify and share the data, even for commercial purposes and all without asking permission. The only requirements are adequate attribution (see above) plus that, in case a database is amended or modified, it shall be published under the same license as the original one. Examples: Creative Commons Attribution-ShareAlike 4.0 ( _CC BY-SA 4.0_ ); Open Data Commons Open Database License ( _ODbL 1.0_ ).
* **No-derivatives licenses** – ND licenses: It is allowed to copy, distribute and display only original copies of the work. Any modification is subject to explicit permission. Example: Creative Commons Attribution-NoDerivs ( _CC BY-ND 4.0_ ).
* **Non-commercial use only licenses** – NC licenses: It is allowed to use, copy, distribute, modify and display the work for non-commercial purposes only. Any commercial use is subject to explicit permission. Example: Creative Commons Attribution-NonCommercial ( _CC BY-NC 4.0_ ).
* **Different combinations**: Creative Commons Attribution-NonCommercial-NoDerivs ( _CC BY-NC-ND 4.0_ ); Creative Commons Attribution-NonCommercial-ShareAlike ( _CC BY-NC-SA 4.0_ ).

### 2.4.2 LICENSING REFLEX DATA

The REFLEX project partners decided to apply the widely used **Creative Commons licenses** for the (re-)publication of REFLEX datasets (see Table 13). For data collected from publicly available sources (existing and new) as well as for input data generated by the project partners’ modelling works, the open CC-BY license has been chosen, providing open access to any user for any use on the single condition of adequate attribution to the original dataset. The same holds for the generated final project result data of the EMS. For the generated data for demand side management as well as for experience curves, the CC BY-NC license has been chosen. It restricts open access to non-commercial use only; any commercial use will be subject to explicit permission and the payment of royalty fees, to be specified in the respective agreement between the REFLEX representative and the commercial party. The REFLEX project partners refrained from using share-alike licenses, as their application to a certain extent restricts future users of a dataset in the re-publication of processed data. In particular, a user relying on several share-alike licensed inputs might find these licenses incompatible, which would leave him/her with a potential conflict.
# Table 13: Draft of REFLEX data licensing (the right to make changes is reserved)

<table> <tr> <th> Data group </th> <th> Data source </th> <th> General license type </th> <th> Standard license chosen </th> </tr> <tr> <td> **Existing model input data** </td> <td> Data collected from publicly available sources </td> <td> **Open data** with attribution </td> <td> Creative Commons Attribution **(CC-BY 4.0)** </td> </tr> <tr> <td> </td> <td> Data generated by project partners </td> <td> Model outcomes: **Open data** with attribution </td> <td> Creative Commons Attribution **(CC-BY 4.0)** </td> </tr> <tr> <td> </td> <td> Purchased data </td> <td> **-** </td> <td> **-** </td> </tr> <tr> <td> **Collected and generated new model input data** </td> <td> Data collected from publicly available sources </td> <td> **Open data** with attribution </td> <td> Creative Commons Attribution **(CC-BY 4.0)** </td> </tr> <tr> <td> </td> <td> Data generated by project partners </td> <td> _Model outcomes:_ **Open data** with attribution _Data for DSM:_ **Restricted access** for non-commercial use only 1 and with attribution _Data for experience curves:_ **Restricted access** for non-commercial use only 1 and with attribution </td> <td> _Model outcomes:_ Creative Commons Attribution **(CC-BY 4.0)** _Data for DSM:_ Creative Commons Attribution-NonCommercial ( **CC BY-NC 4.0)** _Data for experience curves:_ Creative Commons Attribution-NonCommercial **(CC BY-NC 4.0)** </td> </tr> <tr> <td> </td> <td> Purchased data </td> <td> **-** </td> <td> **-** </td> </tr> <tr> <td> </td> <td> Confidential data </td> <td> **-** </td> <td> **-** </td> </tr> <tr> <td> **Generated intermediate model output data** </td> <td> EMS models </td> <td> **-** </td> <td> **-** </td> </tr> <tr> <td> **Generated final result data of the EMS** </td> <td> EMS models </td> <td> **Open data** with attribution </td> <td> Creative Commons Attribution **(CC-BY 4.0)** </td> </tr> <tr> <td> 1 Any commercial use subject to explicit permission. </td> <td> </td> <td> </td> <td> </td> </tr> </table>

In theory, the **compatibility of different licenses** could be an issue (see e.g. _www.github.com_ ). When processing data and publishing results, the license(s) of the input data need(s) to be respected. Typical questions that might arise include: Can data published under license A be merged with other data published under license B? What license could be applied to such a derived or aggregated dataset? Are there any provisions associated with the license of an input dataset that constrain the creation and publication of derivations? More precisely, a data user might face the conflict of two input datasets being licensed under two different, incompatible share-alike licenses. For the REFLEX project, no issues related to the incompatibility of licenses occurred. The REFLEX project partner **ESA² is responsible for data management and (re-)publication.** For datasets collected from project-external sources, an agreement on the use and re-publication of the respective data will be signed between the data owner and ESA² (see Section 2.5 and Figure 6 below for a detailed discussion of the procedure).
For data generated by REFLEX project partners themselves – whether outside (background data) or inside the project (foreground data) – the CA states that “results shall be vested in the party that has generated them” and that “where results are generated from work carried out jointly by two or more parties […], they shall have joint ownership.” To allow ESA² to take care of data publication and licensing, and also to manage commercial exploitation where envisaged, a contract between the concerned partner(s) and ESA² will be signed (see Figure 5). Thereby, data ownership remains with the party who has collected/generated the dataset.

**Figure 5: Procedure for giving ESA² the right to republish data and manage licensing**

### 2.4.3 DATA QUALITY ASSURANCE

Open data are obviously only of use for future research or for stakeholders from industry and policy if they are up to date, as complete as possible and free of any human-made mistakes in data collection, storage and reporting. Moreover, coupled models – as is the case for the EMS – should use harmonized input data and be based on common scenario storylines. Within the REFLEX project, the quality assurance processes described in Table 14 have therefore been implemented for the different data groups.

# Table 14: Processes of data quality assurance

<table> <tr> <th> **Data group** </th> <th> **Processes** </th> </tr> <tr> <td> **Existing model input data** </td> <td> * Harmonization of model input data to ensure a consistent analysis within the EMS and regarding the defined REFLEX scenario storylines. For the same information, the same dataset (values) has been used in all models (consortium decision). * Harmonized data being provided to all models before initializing the EMS runs. </td> </tr> <tr> <td> **Collected and generated new model input data** </td> <td> * A minimum of two internal reviews of the generated new model input data took place. * Additionally, external peer reviews in the case of publication of selected modelling results and/or quantitative research in scientific journals. * Harmonization of model input data to ensure a consistent analysis within the EMS and regarding the defined REFLEX scenario storylines. For the same information, the same dataset (values) has to be used in all models (consortium decision). * Harmonized data being provided to all models before initializing the EMS runs. </td> </tr> <tr> <td> **Generated intermediate model output data** </td> <td> \- Plausibility check for intermediate output data of a model during EMS runs by the responsible modeller before implementing the data in the DWH for data transfer to another model. </td> </tr> <tr> <td> **Generated final** **result data of the** **EMS** </td> <td> * A minimum of two internal reviews of the generated final result data took place. * Additionally, external peer reviews in the case of publication of selected modelling results and/or quantitative research in scientific journals. </td> </tr> </table>

## 2.5 REFLEX ANSWERS TO POTENTIAL LEGAL CONFLICTS

Data management in the REFLEX project on the one hand refers to the management of the data flows of the different models forming the EMS. Issues related to data harmonization, data storage, data exchange and data up-/download have been dealt with by implementing the REFLEX data warehouse (see also Deliverable 2.3: Report on model coupling framework).
Legal aspects associated with the ownership of input and result data, the rights to use them and possible confidentiality restrictions are defined in the Consortium Agreement. It provides details on the partners’ background data and on the rights to, the protection of and the exploitation of datasets/results generated solely and/or jointly during the lifetime of the project. Thus, no legal conflicts arose here. On the other hand, data management in the REFLEX project refers to the **(re-)publication** of input data in order to support a transparent research environment, and to the publication of final project modelling results in a format that makes it possible for third parties to access, mine and exploit the data. In general, potential legal conflicts might relate to the following questions:

* How to avoid any copyright infringements when using input data from project-external sources for modelling works and re-publishing those in the REFLEX data repository?
* How to avoid any legal conflicts when using these data for modelling works and publishing EMS modelling results in an open data format?
* How to avoid any legal ambiguities for third parties who wish to use REFLEX final result data?

Copyright may apply to a wide range of creative, intellectual or artistic works, including data. One thereby has to be precise in the use of terminology. Whereas a **single datum is not protected** under copyright, structured or organized data which have been collated into a database via a substantial investment (e.g. of time or manpower) might be. In this context, the EC’s Database Directive (96/9/EG, Art. 1.2) defines a **database** as “a collection of independent works, data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means.” Besides copyright law and the EU Database Directive, various other legal concepts may govern data ownership and the rights to data use and publication. These include, amongst others, data privacy laws, national legislation, licenses and terms-of-use clauses. When searching for power system data, one will find that many data collections are available online, free of any charge. However, this does not imply that one is allowed to use such data freely. In the absence of any license agreement, the default is that the copyright holder reserves, or holds for his/her own use, all the rights provided by copyright and related law. This means that even the mere use of such data – without the explicit consent of the owner – can be a copyright infringement. Discussions with operators of energy data platforms, who have been dealing with these issues for several years, confirmed the concerns of the REFLEX partners. It has come to light that long and comprehensive debates are ongoing regarding the question of whether datasets can be (re-)published and what implications such a publication has on the rules for access, usage, etc. This still seems to be a grey area in which many researchers and platform operators work. With these issues in mind, the potential legal conflicts that arose in the course of the REFLEX project related to

_(i) the use of datasets – i.e._

* Are we allowed to use and process the dataset for our modelling work? If “reuse is authorized provided the source is acknowledged”, what is meant by “reuse”?
* Are we allowed to copy, present, share, print, or transmit the dataset?
* Are we allowed to modify the dataset, or to merge it with our own data and with the data of others to form new datasets?
_(ii) the (re-)publication of datasets – i.e._

* Are we allowed to re-publish the dataset in public and non-public electronic networks, and in our project data repository?

and finally _(iii) implications of non-open-access input data on output data publication – i.e._

* Are there any legal conflicts when using data with unclear licensing (or purchased/confidential data) as input and then publishing our modelling results as open data?

Point (iii) could be neglected, as EMS modelling results are completely newly created datasets that do not mirror the input data; no conclusions about the values of the input datasets can be drawn from them. Points (i) and (ii), in contrast, are more complex and will be discussed in depth below. Table 15 summarizes potential legal conflicts for REFLEX input data, which are provided under a number of different legal regimes, i.e.:

* © Public institutions & “All rights reserved.”;
* © European Union & “Reuse is authorised provided the source is acknowledged.”;
* © Publisher of academic journal;
* data published in online reports without any specification on licensing/reuse;
* data purchased;
* confidential data coming from the EC; or
* data generated by project partners (model outcomes, data from surveys, etc.)

with respect to the use of a dataset in the project context and its (re-)publication in the REFLEX data repository.

# Table 15: Potential legal conflicts for REFLEX data related to use and (re-)publication

<table> <tr> <th> Legal rules on REFLEX input data </th> <th> Potential legal conflicts regarding the use of the datasets? </th> <th> Potential legal conflicts regarding the (re-) publication of the dataset? </th> </tr> <tr> <td> **© Public institutions & “All rights reserved.” ** </td> <td> **YES** – Full copyright with the author </td> <td> **YES** – Full copyright with the author </td> </tr> <tr> <td> **© European Union & “Reuse is authorised provided the source is acknowledged.” ** </td> <td> **Maybe YES** – What is meant by “reuse”? Only processing? Or also modification, combination with another database? Etc. </td> <td> **Maybe YES** – No specification regarding re-publication </td> </tr> <tr> <td> **© Publisher of academic journal** </td> <td> **Maybe YES** – Depends on specific rules </td> <td> **Maybe YES** – Depends on specific rules; possibly re-publication not allowed for certain time period </td> </tr> <tr> <td> **Data published in online reports without any** **specification on licensing/reuse** </td> <td> **YES** – Default is full copyright with the author </td> <td> **YES** – Default is full copyright with the author </td> </tr> <tr> <td> **Data purchased** </td> <td> **NO** – explicitly purchased for use in the modelling works </td> <td> Not relevant here, re-publication forbidden </td> </tr> <tr> <td> **Confidential data coming from EC** </td> <td> **NO** – explicitly provided for use in the modelling works </td> <td> Not relevant here, re-publication forbidden </td> </tr> <tr> <td> **Data generated by project partners (model outcomes, data from surveys, etc.)** </td> <td> **NO** </td> <td> **NO** </td> </tr> </table>

Copyright law only becomes relevant in the case of the use of “substantial parts” of a database. As a rule of thumb, the use of less than 5% will not cause any legal conflicts, whereas for the use of more than 15% full copyright law applies. As REFLEX partners use parts of the respective databases to an extent above this latter threshold, copyright law cannot resolve the conflicts identified in Table 15.
Thus, well-defined contracts are needed for the different datasets at hand in order to avoid any copyright infringements with the use and (re-)publication of REFLEX data. Within REFLEX, the project partner ESA², being responsible for data management and the research data repository, therefore prepared an **“Agreement on the use and republication of research data in the REFLEX project data repository”**, including an annex with the respective metadata. One thereby had to consider that the **author is not always the copyright holder**, and that only the copyright holder her-/himself has the right to assign a license. Copyright ownership is determined by a number of (national) laws depending on the type of employment (see RUI (2018) for further details). For instance, under private employment – which is the case for a researcher at a research institute – in Germany all works created during working hours belong to the employer according to §43 UrhG and §69b UrhG. This, for instance, is the case for all REFLEX input datasets coming from DIW (2013). In the first step, consequently, the copyright holder was identified. Second, a contact person endowed with the right to sign such an agreement letter had to be found. Third, the agreement letter, including the annex and an e-mail explaining the context, was sent out, asking the respective contact person for completion and signature. Signed agreement letters were finally archived by ESA².

**Figure 6: REFLEX process for gathering agreements on use and re-publication of data**

Besides copyright law, severe legal conflicts can originate from **data privacy law**, which becomes relevant if data refer to particular individuals. The publication of anonymous data, in contrast, is unproblematic from the data privacy law perspective. In the case of REFLEX research data, no related issue could be identified:

* _DSM parameters_ (flexible load, shifting time, etc.) – absence of any reference to particular households or individuals in the respective datasets summarizing the DSM parameters. Representative and average values resulting from the empirical survey conducted for ten EU Member States have been used; DSM data for further countries have been deduced from the survey results.
* _Power plant parameters_ (availability, efficiency, cost factors, etc.) – absence of any reference to particular installations. Partly, average values for the different generation technologies have been used.
* _Electricity demand_ – absence of any reference to particular consumers. Data aggregated over regions and/or sectors have been used.
* _Heating demand_ – absence of any reference to particular consumers. Data aggregated over regions and time have been used.

3. **ALLOCATION OF RESOURCES**

The EIM is responsible for data management within the REFLEX project (see Section 1.4). The estimated costs for making REFLEX data FAIR are 150,000 EUR (over all partners). These costs include:

* the clarification of data protection issues and available licenses,
* the final preparation of data by each project partner for publishing (without the effort/costs for data collection/purchasing/generation, etc.),
* the processes for assuring data quality,
* the development and implementation of the data catalogue on the project website,
* the implementation of the registration procedure for access to commercially exploited datasets,
* the data hosting and backup for security,
* the updating and maintenance of the data and of the data provision during the project lifespan.
These costs are covered by the project funds, mainly by the budgeted personnel costs. The costs for long-term preservation after the end of the project are difficult to estimate. They depend mainly on the chosen platform and the effort/costs for maintenance, but also on the scope and size of the collected and generated datasets. The permanent costs of preserving datasets in the ESA² DWH, provided via the ESA² website, are estimated at 800 EUR per year (under current conditions for the v-server and excluding personnel costs for maintenance). The OpenEnergy Platform and the ZENODO repository would be free of charge as long as a single dataset does not exceed the maximum of 2 GB. The permanent preservation of datasets in the OpARA repository is planned to be free of charge for TUD members, but the final decision on costs has not yet been taken. The costs for long-term preservation shall be covered by the charges collected from the commercial exploitation of datasets during the project lifespan and thereafter.

4. **DATA SECURITY**

Most of the data handled in the REFLEX project are not sensitive with regard to the laws governing data protection and data security. An exception is the data from the DSM survey; provision/publication of these data is only possible in anonymized form. The DWH as well as the data provision via websites will be implemented on servers with regular backup and data recovery procedures.

5. **ETHICAL ASPECTS**

Data collection, data storage, data usage, data generation and data dissemination in this project do not raise ethical issues.

6. **OTHER**

No other national, funder, sectorial or departmental procedures for data management will be used.
**Introduction**

This Data Management Plan (DMP) describes the data management life cycle for all datasets to be collected, processed or generated during the UNEXMIN project. It covers:

* the handling of research data during and after the project,
* what data will be collected, processed or generated,
* what methodology and standards will be applied,
* whether data will be shared / made open access and how,
* how data will be curated / maintained and preserved.

Reviews, updates and adjustments of the DMP will take place during the execution of the project at meetings/workshops organized within the project (months 12-45). The DMP will be updated as required. This document is prepared according to the requirements for FAIR data (making data findable, accessible, interoperable and re-usable) under the ORDP provisions, thus following the principle "as open as possible, as closed as necessary". In the case of UNEXMIN, the project will lead to a successor commercial entity, "NewCo", which will use the hardware and software designs produced by the UNEXMIN project, disclosure of which would compromise its commercial competitiveness. Therefore, critical aspects of the hardware and software will need to remain confidential. Some aspects of the design may be patented, but others that are not patentable, or that would be prohibitively expensive to patent, will be regarded as "Trade Secrets". However, the largest volume of data generated by the project will be scientific data obtained during the four planned pilot studies. It is intended that these data will form the basis for a series of scientific publications by UNEXMIN consortium partners, and it is intended to make them available as soon as possible, and in any case by the end of the project (M45) at the latest.

**2 Data Summary**

The purpose of this DMP is to support the management, by the project partners, of the data which will be collected, generated and/or processed during the execution of the UNEXMIN project, and to enhance the re-use, availability and survivability of these data.

# 2.1 Purpose of the data and its relation to the project objectives

There are four main categories of data which will be collected. First is the data to be collected specific to the design and development of the robot and its instrumentation, which will be documented in a series of Deliverables. These data are key to the success of the entire project, as they will be used in the production of the final robot design. Also included in this category is the new software to be developed for the robot itself and for post-processing. Second is the actual navigation, physical and chemical data to be collected during missions of the submersible robot. These data are of many types and forms, as described in chapter 2.2 below, and will demonstrate how the autonomous submersible robot can yield information that can be obtained in no other way, which is the main purpose of the project. They will allow demonstration of the use of the autonomous submersible robot by producing new scientific results. Third is data on flooded abandoned mines around Europe, collected within WP5 (the “Flooded Mine Inventory”), as an indication of potential requirements for using the UNEXMIN project submersible robots for mineral exploration and geo-scientific research applications. This will form a relatively small and self-contained database, and will be of direct benefit to NewCo, the company to be formed to continue and exploit the work of the UNEXMIN project.
Fourth is the data, recorded both in the set of deliverable documents and in published and unpublished papers, on the scientific interpretation and results from the pilot surveys.

# 2.2 Types and formats of data that the project will generate/collect

The UNEXMIN project will generate many different types of data in large quantities. These can be divided into four main groups:

a) Data relevant to the UX-1 robot design and performance, ranging from stakeholder requirement evaluations, simple conceptual descriptions and general drawings to elaborated blueprints of all the parts, together with their test and performance reports, from simulations to real hardware tests in relevant environments. These data will consist of textual and graphical documents, with some numerical data from test results. Software to be used on the robot and for post-processing will be retained on the platforms on which it is developed, in source and binary forms.

b) Information produced about the test sites, which can also vary widely, from previously known information to high-quality datasets and their visualizations produced from the UX-1 robotic measurements. These cover many different data types: x, y, z coordinates, distances, point clouds, pictures, temperature, conductivity, pH data, gamma-ray counts, spectral information of points/areas, and mineral information of points/areas. They could also include databases produced by scientific instruments in a laboratory environment on reference samples, to help the proper evaluation of the field data. Type (a) data will generally be produced during the planning, production and validation of the UX-1 robots (WP1–5), whereas type (b) data will be produced during the test dives and the post-processing of the delivered data related to WP2, 6 and 7. Type (b) data will in general include large to extremely large sets of numerical data and large numbers of monochrome and colour images from on-board cameras, and will form the basis of the main scientific results of the project.

c) The "Flooded Mines Inventory" to be generated by WP5, which will contain an extensive list of European flooded mines with their accessible information. This will be held electronically in a set of database tables.

d) Other data generated as deliverables (written reports) of the project, conference posters, brochures, talks (in ppt and/or pdf format) and publications (general and scientific). These data will be largely produced in WP8 and/or in cooperation with WP8, as well as directly from WP7.

# 2.3 Re-use of existing data

Where relevant, existing data on the robot components and scientific instruments to be used will be compiled and used in the design/development process. Similarly, for software, existing products will be used where appropriate as the basis and starting point for development of on-board and post-processing systems. Where possible and appropriate, open-source solutions will be used: thus the Robot Operating System (ROS), based on the open-source Linux operating system, will be used, and the main post-processing database selected is SQLite, which is both open-source and freely redistributable.

For the "Flooded Mines Inventory", all data to be included will be abstracted from pre-existing sources, collected by project participants and linked third parties, and re-formatted by the UNEXMIN project to compile the inventory database.
# 2.4 Origin of the data

The data on robot components and instruments is derived mostly from manufacturers and suppliers. The majority of the pre-existing software to be used is either publicly available open-source products or the property of UNEXMIN project participants, made available for use by the project. In addition, new software will be developed within the project for use on the robot itself, on mission-control computers, and for the wide range of post-processing applications. Data on flooded mines is derived from a variety of sources such as national mines databases maintained by the Linked Third Parties themselves, government mines departments, geological surveys, etc. Data from the pilot surveys is generated within the project by use of the UX-1 robots.

# 2.5 Expected size of the data

For data of categories (a), (c) and (d) identified in chapter 2.2 above, the expected volumes will be relatively small, measured in several megabytes to a few gigabytes at most. Data of category (b), collected during robot missions, will consist of medium to extremely large data sets. In particular, the navigation sub-system will yield point-cloud data sets anticipated to be many gigabytes up to terabytes in total size. The on-board cameras will also produce many thousands of high-resolution images in each mission, requiring gigabytes of storage. Precise estimates of the total data volumes are difficult to obtain until mission parameters are defined in detail.

# 2.6 Data utility

It is anticipated that the data from UX-1 missions will be useful directly to a range of scientists within the project and more widely, as well as indirectly to mineral exploration and mining companies. Scientific studies will include the use of geophysical data from on-board instruments as well as imaging data, and are expected to yield new data of local and regional geological significance, as well as information of use to industrial archaeologists. Data acquired during development of UX-1 will be of use to NewCo in exploitation and further development of the technology. The flooded mines inventory will be of use to NewCo in identification of potential clients and users of its technology, and to national and EC planners in development of strategies for exploration for critical mineral raw materials.

# 3 FAIR Data

# 3.1 Making data findable, including provisions for metadata

By far the most significant data produced by the UNEXMIN project will be the data from the pilot surveys. It is primarily these data that must be made **findable**, **accessible**, **interoperable** and **re-usable**.

## 3.1.1 Data discovery

Data of the different types identified in chapter 2.2 above will be handled in different ways, as follows:

1. **Technical design and development data for the UX-1 robots.** Data that cannot be patented (or would be too expensive to patent) should by default be considered "trade secrets", unless agreed by all the owners of the data and/or the Steering Committee to be eligible to be placed in the public domain. It should automatically be passed to NewCo at the end of the UNEXMIN project, and it will then be a decision for NewCo to make, whether and when to release such data.

2. **Data generated by pilot site missions.** There will be multiple tables of data from each mission at each pilot site, from all of the sensors, cameras, and from the navigation system. For each table there will also be a metadata file containing information specific to each instrument or camera.
The anticipated volume of data will be extremely large, with individual mission databases ranging from many gigabytes up to terabytes in size. This dictates that the data cannot practically be made available online. However, a simple online index to the databases will be set up to enable online data discovery. This index will identify the mission sites and dates/times, the set of sensors and navigation instruments for which data are available, and additional descriptive information which may be relevant (such as whether the mission was aborted or otherwise incomplete, or significant discoveries were made). The quality of the data will be discussed between partners, who will decide which data are of sufficiently high quality to be used for publication of the research done in the project. Only these data will be kept in an open data repository to be made publicly available.

3. **Flooded mines inventory.** The flooded mines inventory will be held in a standard database and made freely available online through the UNEXMIN project website and subsequently through the NewCo website.

4. **Project Deliverables, Published Papers, Posters and Presentations.** A central list of these will be maintained on the UNEXMIN project website and subsequently by NewCo, and, as defined in chapter 3.2.2 below, they will be held on the web server and downloadable in PDF and other formats as appropriate.

Currently it is planned to add standard identification mechanisms or permanent metadata only through the use of the project website www.unexmin.eu and any successor website to be managed by NewCo, in which all online databases (flooded mines inventory, index of mission databases, list of reports and publications) will be held. Digital Object Identifiers may be set up at a later stage.

## 3.1.2 Naming conventions

File naming conventions for the pilot missions have been defined in D6.1 - Database Specifications Manual and are summarised here:

_There will be a very large number of separate data and metadata files generated by UNEXMIN missions. A file naming convention is therefore absolutely essential, in order to assist organisation and retrieval of data. The following standard naming convention is proposed for use both as file names and in naming relational database tables (see notation conventions in section 3 of D6.1):_

* _{M}rrrr.UXM for metadata_
* _{M}rrrr.UXD for scalar data_
* _{M}rrrr.VVV for video data, where VVV is the standard extension for the video format used_
* _{M}rrrr-ffffff.PPP for image data, where PPP is the standard extension for the image format used_

| Element | Meaning |
| --- | --- |
| _{M}_ | _mission/sensor identifier in the form **XXXyyyymmddnnSSS**_ |
| _**XXX**_ | _three-character location code (such as IDR = Idrija)_ |
| _**yyyymmdd**_ | _the date (e.g. 20160801 for 1st August 2016)_ |
| _**nn**_ | _sequential number of the mission on that date (in range 01 to 99)_ |
| _**SSS**_ | _a code identifying the sensor unit data stream_ |
| _**rrrr**_ | _sequential number of the recording session within the mission (in range 0001-9999). A recording session is the interval between turning a sensor on and turning it off again_ |
| _**ffffff**_ | _a six-digit sequential identifier (in the range 000001-999999) of the frame or image within a recording interval_ |

When data files have been validated and are to be archived in databases for release, standardised database names will be used. All databases produced in the project will include in their names the name of the project (UNEXMIN) and short information on the type of data (e.g., "UNEXMIN_Ecton_Mission_12").
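To make the convention concrete, the following minimal Python sketch composes and parses the {M}rrrr stem. It is an illustrative aid only, not part of the D6.1 specification; the sensor code "CAM" and all example values are hypothetical.

```python
import re
from datetime import date

# Pattern for {M}rrrr file stems: XXX yyyymmdd nn SSS rrrr
# (location, date, mission number, sensor code, recording session).
STEM_RE = re.compile(
    r"^(?P<loc>[A-Z]{3})"        # three-character location code, e.g. IDR
    r"(?P<date>\d{8})"           # date as yyyymmdd
    r"(?P<mission>\d{2})"        # mission number on that date, 01-99
    r"(?P<sensor>[A-Z0-9]{3})"   # sensor unit data stream code
    r"(?P<session>\d{4})$"       # recording session, 0001-9999
)

def build_stem(loc: str, day: date, mission: int, sensor: str, session: int) -> str:
    """Compose a {M}rrrr stem from its five fields."""
    return f"{loc}{day:%Y%m%d}{mission:02d}{sensor}{session:04d}"

def parse_stem(stem: str) -> dict:
    """Split a stem back into named fields; raises ValueError if malformed."""
    m = STEM_RE.match(stem)
    if not m:
        raise ValueError(f"not a valid UNEXMIN file stem: {stem!r}")
    return m.groupdict()

# Example: metadata file for the first recording session of mission 01
# at Idrija on 1st August 2016, for a (hypothetical) sensor coded 'CAM'.
stem = build_stem("IDR", date(2016, 8, 1), 1, "CAM", 1)
print(stem + ".UXM")       # IDR2016080101CAM0001.UXM
print(parse_stem(stem))    # {'loc': 'IDR', 'date': '20160801', ...}
```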
## 3.1.3 Search keywords

The submersible robot mission data are predominantly numeric, so keyword searching is not of great relevance to this project.

## 3.1.4 Version numbers

Version numbering is of importance for documents which are updated and for software developed by the project. In both cases version numbering will be simple and clear. For post-processing applications software (data conversion, database access, data analysis and visualisation), the project uses **git**, **SourceTree** and **bitbucket** for version control.

## 3.1.5 Metadata

Metadata are defined for each sensor, camera and subsystem as required. No external standards are defined for such metadata, but each file will be specified to provide the full set of information required for correct space- and time-registration of data. This is of particular importance for on-board cameras, where angle of view, focal length, aperture, lens and window distortion parameters and other such factors are crucial to correct registration of images on the point cloud obtained from the navigation system.

Where compatible with the themes of the INSPIRE Directive, the data and metadata added in the project will follow the format recommended by the INSPIRE Directive. Metadata will be created following the model of the INSPIRE Directive (they will offer the same types of information). As the provisions of the INSPIRE Directive are compulsory for EU members, this will ensure a high degree of compatibility with other data sets in the EU.

# 3.2 Making data openly accessible

An important issue is to monitor the progress of the novel technology development, carefully distinguishing between confidential data (such as trade secrets, e.g. confidential know-how) and Open Research Data in accordance with the H2020 directives, and, if relevant, to seek patent protection for proprietary technology and methods, should such new discoveries evolve through the research in the project. The responsibility to initiate patenting will rest with the respective WP leader participants, with ownership proportions agreed within the actively participating research team members.

In addition to the various scientific methodologies and concepts, UNEXMIN will develop technologies that will be suitable for commercial exploitation in the future. This will require very careful management of Intellectual Property Rights (IPR). The principles governing IPR are outlined in the Consortium Agreement, and further details will be developed and agreed in the NewCo business plan.

As a general rule, data representing syntheses of the knowledge from previously published research will be open data from the moment they are put in the repository. Unpublished geological data (produced in the project, together with older unpublished data provided by partners) will be confidential for a 3-year period, for the purpose of ensuring the novelty of the data used in scientific articles published by the partners.
The unpublished technological data will remain confidential over a period to be decided by their owners. For components of the project that are sufficiently unique and innovative, the developing WP shall give early warning to the Steering Committee about the confidentiality of the component, so that all consortium members know not to disclose details until protection is in place. Consideration will be given to patent protection before publication of details of these components.

## 3.2.1 Data to be made openly available as the default

The very large volume of scientific data to be obtained by the pilot survey missions will be openly available. Although it cannot practically be supplied online for downloading, because of the sheer volume, an index of the data will be available and searchable online, and data sets will be provided on request on suitable media (such as large-capacity portable hard disk drives), by the UNEXMIN project consortium and subsequently by NewCo. Where not patented, detailed design and development data for both hardware and software will be considered as Trade Secrets, as these will be key elements of the exploitation plan for the project's successor business NewCo.

## 3.2.2 Data access

All the metadata and data of types (a), (c) and (d) (see chapter 2.2 above) will be stored in a repository accessible from the UNEXMIN project website. The publicly available Open Research Data will be accessible with any simple web browser. Searching, browsing, displaying and downloading the data will be possible freely, without any registration. Data to be released in relatively small quantities (such as the Flooded Mines Inventory) will be made accessible and freely downloadable from the UNEXMIN website and from the website of its successor NewCo. For larger volumes of data, access will be by request, on suitable media, as explained in chapter 3.1 above.

In accordance with ORDP requirements, each beneficiary will ensure open access (free of charge, online access for any user) to all peer-reviewed scientific publications relating to its results. In particular, all beneficiaries will:

(a) as soon as possible, and at the latest on publication, deposit a machine-readable electronic copy of the published version, or the final peer-reviewed manuscript accepted for publication, in a repository for scientific publications. Where practicable, the beneficiary will also deposit at the same time the research data needed to validate the results presented in the deposited scientific publications (if these data are derived from the pilot surveys, they will be stored and made available as described in chapter 3.2.5 below);

(b) ensure open access to the deposited publication — via the repository — at the latest: (i) on publication, if an electronic version is available for free via the publisher, or (ii) within six months of publication in any other case;

(c) ensure open access, via the UNEXMIN repository (website), to the bibliographic metadata that identify the deposited publication. The bibliographic metadata will be in a standard format and must include all of the following:

* the terms "European Union (EU)" and "Horizon 2020";
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.
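For illustration, such a bibliographic metadata record could be serialized as a simple key-value structure like the one below. This is a hypothetical example, not a prescribed schema; the dates and the DOI are invented placeholders.

```python
# Hypothetical bibliographic metadata record satisfying the four
# requirements above; field names and values are illustrative only.
publication_metadata = {
    "funding": ["European Union (EU)", "Horizon 2020"],
    "action": {
        "name": "<action name as in the Grant Agreement>",
        "acronym": "UNEXMIN",
        "grant_number": "690008",
    },
    "publication_date": "2018-06-01",   # invented example date
    "embargo_months": 6,                # if applicable, else omit
    "persistent_identifier": "doi:10.0000/example.0001",  # placeholder DOI
}
```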
For data of type (b) (as defined in chapter 2.2 above) derived from the pilot surveys, the following will apply. In accordance with ORDP requirements, the beneficiaries responsible for each pilot survey will:

(a) deposit in the UNEXMIN research data repository, and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:

1. the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in chapter 3.2.8 below;

(b) provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).

Because of the anticipated very high data volumes, it is not appropriate to use online data repositories. The UNEXMIN research data repository will consist of a set of high-capacity disk drives to be maintained by the UNEXMIN consortium and subsequently by NewCo. Third-party access to the data will be upon request, and upon supply by the third party of suitable media for transcription of the requested data.

## 3.2.3 Methods or software tools needed to access the data

The submersible mission data will be held as tables in SQLite databases and as image files linked from such tables. SQLite is globally the most widely used database management system, and open-source tools are available to access it. For Windows computers, the programs _sqlite3.exe_ and _DB Browser for SQLite.exe_ are available free of charge and provide the necessary functionality to extract data into standard CSV format files (a minimal scripted example is sketched below). Full descriptions of the data analysis and visualisation tools used in the preparation of scientific publications will be included in those publications, and they will be listed in the metadata related to each publication.

## 3.2.4 Documentation about the software needed to access the data

Full documentation for SQLite is available online at www.sqlite.org. SQLite itself is an open-source product and has been implemented on a wide variety of operating systems. It has also been embedded in a large number of applications software systems.
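As a sketch of such an extraction, the snippet below uses Python's standard-library sqlite3 module to dump one table of a mission database to CSV. The database file name follows the naming scheme given in 3.1.2, but the table name is a hypothetical placeholder.

```python
import csv
import sqlite3

# Export one table of a mission database to CSV. The table name below
# is a hypothetical placeholder, not a name prescribed by the project.
DB_FILE = "UNEXMIN_Ecton_Mission_12.sqlite"
TABLE = "sensor_readings"

con = sqlite3.connect(DB_FILE)
cur = con.execute(f"SELECT * FROM {TABLE}")

with open(f"{TABLE}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(col[0] for col in cur.description)  # header row
    writer.writerows(cur)                               # data rows

con.close()
```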
## 3.2.5 Depositing data, metadata, documentation and code

It is impractical to make the anticipated large volumes of data of type (b) available online. Such data will be held in offline storage, with multi-site backup for security, and made available to users on request. Part of the type (b) data will be put into online accessible storage, to represent the kinds of data that are accessible upon request. These data will be carefully selected by the consortium to be of good quality and to contain relevant information, but not to contain any confidential or ethically questionable information. Otherwise, all master copies of the pilot mission data/metadata will be held by the UNEXMIN consortium, with additional copies held by relevant participants (in multiple locations) to ensure long-term data security. Design data and post-processing software developed for the project will be deposited at bitbucket (www.bitbucket.org), which provides industry-standard open access capabilities. The bitbucket system is already in use by the project's hardware and software developers.

## 3.2.6 Data access committee

Data access policy has been agreed across the consortium, and no data access committee is required. Questions concerning data management policy will be discussed and agreed by the project's main Steering Committee.

## 3.2.7 Access conditions

Mission data will be made available openly. Licensing is not required beyond the standard EC requirements for acknowledgment of EC funding of the project. Licensing of software will be decided individually by the software developers, in line with their own standard software licensing policies.

Open source access conditions do not require the identity of the person accessing the data to be disclosed. There is therefore no need to ascertain this identity. However, if the software licensing terms of individual consortium members require this, their own standard procedures will be followed.

Type (a) data that cannot be patented (or would be too expensive to patent) should by default be considered "trade secrets", unless agreed by all the owners of the data and/or the Steering Committee to be eligible to be placed in the public domain. It should automatically be passed to NewCo at the end of the UNEXMIN project, and it will then be a decision for NewCo to make, whether and when to release such data.

## 3.2.8 Deadlines

Data held on the UNEXMIN website will be downloadable immediately, free of charge and without any need for user registration. Data requested by third parties which is held offline on high-capacity hard disks will be supplied to third parties who provide suitable media for transcription of the data, within a period of not more than 90 days from supply of the transcription media. The transcription will be carried out free of charge.

# 3.3 Making data interoperable

As noted in chapter 3.1.5 above, metadata will be created following the model of the INSPIRE Directive, whose provisions are compulsory for EU members; this will ensure a high degree of compatibility with other data sets in the EU.

## 3.3.1 Interoperability of project data

The data are held in open-source SQLite databases and can easily be exported in CSV format, which is readable by a very wide range of standard software applications. The SQLite tables themselves are linked through key fields, so that any required subset of data can easily be extracted using standard SQL commands. Location data are stored using standard survey grids to allow ease of combination with data sets from other sources, such as surface geological maps.

## 3.3.2 Data and metadata vocabularies

The mission data are primarily numeric, so do not require any vocabularies. Where there may be uncertainty over the recording units used, these are fully documented in the pilot deliverable reports (D7.2 Pilot report from Kaatiala mine; D7.3 Pilot report from Urgeirica mine; D7.4 Pilot report from Idrija mine; D7.5 Pilot report from Ecton mine).

## 3.3.3 Increase data re-use (through clarifying licences)

Access and licensing conditions are as described in chapter 3.2.7 above. Data will be freely available, without licensing restrictions beyond a requirement to acknowledge its source as the UNEXMIN project and the standard statement on funding by the EC.

## 3.3.4 When will the data be made available for re-use?

The data will be made available for re-use not later than 6 months after the end of the project. By agreement with consortium partners, it could be made available earlier. Data will not be made available until it has been thoroughly validated for correctness and consistency.

## 3.3.5 Restrictions on data use by third parties

Some of the technical data on hardware and software will be defined as Trade Secrets, and its use restricted to participants in NewCo.
The scope of such data will be kept as limited as possible and will be defined in the NewCo commercial exploitation plan (Deliverable D8.12).

## 3.3.6 Data quality assurance processes

No formal processes are defined for data quality assurance, but validation will be carried out during the construction and population of the post-processing database. This will include such processes as outlier (rogue point) detection and elimination, and other statistical verification of data sets.

# Allocation of resources

## Costs for making data FAIR

No additional costs for making data FAIR have been identified. The processes required are an integral part of the project itself. Third parties who request mission data which (because of its volume) requires the use of special media will be expected to supply their own media for transcription of the data. Alternatively, the UNEXMIN project, or subsequently NewCo, can supply the media for a small charge to cover the costs.

## Responsibility for data management

The project coordinator is ultimately responsible for data management, but this responsibility will be devolved to the participant managing each phase of the project (design, development, testing, pilot surveys).

## Long term preservation

As the Open Research Data will be kept in multiple copies by NewCo and the relevant project participants, archiving and preservation will follow the general procedures used by the project participants. The partners will assess the archiving and preservation procedures and decide whether special procedures are needed for the data produced in the project, and also for how long the data are to be kept in the repository. After the end of the project, copies of all data will be maintained by NewCo under conditions to be determined within the NewCo business plan, with off-site secure storage of full backup copies. The online open research data will be available to the public for a minimum of five years after the end of the project, which is a responsibility of NewCo. As a contingency plan, if NewCo does not continue for any reason, all the data, and the responsibility for them, will be transferred to a permanent organization in the UNEXMIN consortium, such as the Miskolc University, or to EuroGeoSurveys or relevant national geoscience/mining authorities for safe keeping.

# Data security

Responsibility for security of design and development data rests with the relevant participants, all of whom follow standard industry practice. For software under development, the version control systems (git / SourceTree / bitbucket) also provide a high level of security. Additionally, the post-processing software developers use normal grandfather / father / son backup systems, with backups held in separate locations.

Pilot mission data will be transferred from the robot at the end of each mission to a hard disk which is removed to a safe location. Following conversion to the SQLite database, a further hard disk copy is retained in a safe location, and additional copies are made as required and held by different project participants. The physical security of UNEXMIN data may be summarised as:

* Secure backups of data before any processing
* Very large data volumes (terabytes), thus 'cloud' storage impractical; storage on external hard disks for most efficient transfer
* Audit trail to be maintained – log of database operations
* Need to establish a standard "chain of custody" for data security
* Data locking: read-only access to any _copies_ of the primary database

For long-term security, all data will be held centrally by UNEXMIN (and subsequently NewCo) with at least two further copies at different safe locations. Because of the risks of data loss through long-term decay of media, a particular requirement for NewCo will be regular transcription of data to new media. These transcriptions themselves will need to be validated for accuracy. Procedures will be defined as a part of the NewCo commercial exploitation plan.
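As an illustrative sketch of how such transcription validation might be automated (this is not a procedure defined by the project, and the example paths are hypothetical), per-file checksums can be compared between the source and the new medium:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 checksum of one file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transcription(source_dir: str, copy_dir: str) -> list[str]:
    """Return relative paths whose copies are missing or differ from the source."""
    src, dst = Path(source_dir), Path(copy_dir)
    bad = []
    for f in src.rglob("*"):
        if f.is_file():
            rel = f.relative_to(src)
            target = dst / rel
            if not target.is_file() or sha256_of(f) != sha256_of(target):
                bad.append(str(rel))
    return bad

# Example with hypothetical mount points: report files that failed transcription.
print(verify_transcription("/media/old_disk", "/media/new_disk"))
```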
# Ethical aspects

There are no ethical or legal issues which have an impact on the sharing of pilot mission data. For technical design data, the only legal issues are those defined above, related to the possible patenting, or definition as Trade Secrets, of some parts of the design of hardware and software components. The project does not handle or store personal data.

# Other issues

The UNEXMIN project does not make use of other national/funder/sectorial/departmental procedures for data management.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0237_ImageInLife_721537.md
# 1. Introduction

The ImageInLife project sprang from European scientists in both academia and industry who identified a common challenge: setting up a training frame to educate the next generation of imagers in complex biological systems (healthy and pathological), so that they are able to master all major aspects of this competitive field and bring important innovations to universities and companies. The long-term goal of any initiative to image biological processes is reaching cellular or subcellular resolution in a complete organism. This is now possible using vertebrate embryos as models and the most recent technological advances as tools. Young researchers will be trained by addressing the following scientific bottlenecks and challenges:

* Preparing vertebrate embryos (rodent & zebrafish) for optimal imaging;
* Fine-tuning sensors, reporters and actuators to track cell types, cellular processes and behaviours in living organisms;
* Developing and implementing new imaging instruments;
* Analysing complex sets of big-data images to extract relevant information;
* Using processed images to design computational and mathematical models of development and pathologies;
* Comparing these models with experimental data and creating a feedback loop improving the whole work chain, from sample preparation to instrumentation and analysis.

This interdisciplinary training is based on an intersectoral organisation of the consortium, with partners from academia and companies that need these future experts to develop new instruments, screen drugs and chemicals in living systems, and develop software to analyse and model medical images. The full training programme is based on an optimal balance between training through research and many network-wide training events, including conferences with physical presence, digital conferences and monthly videolink events. Consortium members are keen to implement both classical and original outreach activities (e.g. MOOCs, serious games, Lego designs) to bring state-of-the-art microscopy to the classroom.

## 2. Data summary

**The following issues will be presented in this section:**

* **the purpose of the data collection/generation**
* **the relation of the data collection/generation to the objectives of the project**
* **the types and formats of data that will be generated/collected**
* **the origin of the data**
* **the expected size of the data (if known)**
* **the data utility**

The main objective of the ImageInLife project is to set up a training frame to educate the next generation of imagers in complex biological systems (healthy and pathological), so that they are able to master all major aspects of this competitive field and bring important innovations to universities and companies. In this context, it is very important for the ImageInLife consortium to establish a suitable Data Management Plan to help all the partners of the project adopt the right policy concerning the collection, generation and sharing of data. ImageInLife is a large consortium (it includes 11 partners), and this is why it is necessary for all partners to agree on a common treatment of the project data. Thus, the objective is to manage the data under the FAIR principles: data that is Findable, Accessible, Interoperable and Reusable.
The purpose of data collection/generation in the framework of the ImageInLife project is to store and share project data that might be useful to the project partners and that would be reusable by researchers in the field (while respecting the ImageInLife data sharing policy). This goes hand in hand with the main objective of the project: to educate the next generation of imagers in complex biological systems, who will be able to select, classify and store useful data in order to share it with the scientific community.

With regard to the types of generated/collected data: owing to the scientific objective of ImageInLife, the project will mainly generate and collect **images**. Another important part of the generated data will be **software**. In smaller part, the project will also generate **biological samples** and **simulation movies**. The formats of the data depend on the type of data collected/generated:

For _image files_, the formats will be the following:

* TIFF
* Proprietary formats from microscope companies (Nikon/Zeiss/Leica)
* DICOM
* HDF5
* Uncompressed AVI movies
* Compressed image formats (e.g., JPEG, MPEG4, AVI)

For _software packages_, ImageInLife expects to work with these formats:

* Text (for source code)
* Binary (executable software)
* Fiji macros
* LabVIEW files
* Text and binary data (measurements and scores)

For _biological samples_, the materials will be of the following types:

* Plasmids
* Transgenic or mutant zebrafish
* Recombinant viruses

In the long term, the policy will be to store these samples in external repository facilities for distribution (Addgene, EZRC). At first, samples will be distributed by the partners themselves upon request.

While no external data will be reused for the project, some partners may reuse some of their own pre-existing data. We expect that the total volume of data generated by the consortium over the course of the project could reach around 200 terabytes. As far as data utility is concerned, the ImageInLife data will be useful first of all to the consortium and to the project partners. However, the objective is also to share data that might be useful to the international imaging scientific community.

## 3. FAIR Data

**3.1 Making data findable, including provisions for metadata:**

* **Discoverability of data (metadata provision)**

Metadata will be associated with all data files archived during the ImageInLife project. Because of the diversity of data types (images generated by microscopes from different companies, software produced in different environments), it will be difficult to define a universal metadata format. However, we will strive to define a common metadata standard, particularly for image files.

* **Identifiability of data and standard identification mechanism**

All archived publications will be associated with a DOI.

* **Naming conventions**

A common naming convention will be adopted for the data files. The name will be as self-explanatory as possible, but will have to be partially coded to keep the name length within limits manageable by different computer operating systems (255 characters, but less would be more convenient). Names will begin with "ImageInLife" followed by: date of file generation (yyyymmdd), a letter for data type (I for image, S for software, R for report, etc.), source partner number in the consortium (P01 to P11, P00 for the management team, P99 for an external source), the source person's family name, and a 32-letter summary of the content of the file. The exact naming scheme will be discussed at the first consortium meeting to achieve a consensus; a sketch of how such names might be composed is given below.
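The following minimal Python sketch illustrates one possible reading of the draft convention. Since the scheme is still to be finalized, the underscore separator, the exact field order and the example values (partner, name, summary) are assumptions.

```python
from datetime import date

def imageinlife_name(day: date, data_type: str, partner: str,
                     family_name: str, summary: str) -> str:
    """Compose a file name per the draft convention.
    The underscore separator is an assumption; the scheme is still to be agreed.
    data_type: one letter, e.g. I = image, S = software, R = report.
    """
    summary = summary.replace(" ", "-")[:32]   # 32-letter content summary
    return f"ImageInLife_{day:%Y%m%d}_{data_type}_{partner}_{family_name}_{summary}"

# Hypothetical example: an image file generated by partner P03.
print(imageinlife_name(date(2017, 5, 4), "I", "P03",
                       "Dupont", "zebrafish heart lightsheet stack"))
# -> ImageInLife_20170504_I_P03_Dupont_zebrafish-heart-lightsheet-stack
```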
* **Approach towards search keywords**

A "keywords" category will be systematically included in the metadata file.

* **Approach for clear versioning**

For archived objects that may exist in different versions (typically software and models), all versions will be archived, with the metadata of each newer version indicating the location of the previous one and containing a summary of the changes. Versions will be numbered according to standard conventions (e.g. with a suffix "v" followed by a number, and possibly a letter for minor changes).

* **Standards for metadata creation**

A standardized metadata format will be defined for images. A web-based interface will be created to facilitate generation of PDF-format metadata for image files in a common format for the whole ImageInLife consortium. This will be used for all image files, although some files will also be associated with another metadata file generated by the acquisition software. The standard metadata will include: keywords, lab of origin, user(s) who generated the file, microscope specifications (brand, type, objective magnification and NA), organism name, strain and developmental stage, experimental manipulations (injection of tracers, infections, and the like), and imaging conditions (temperature, duration, time between frames, size of imaged area in X-Y-Z, Z-step, excitation methods, recorded wavelengths); other information will be included as needed. The interface will evolve according to the users' needs.
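As an illustration of the fields listed above, an image metadata record might look as follows. The field names and all values are hypothetical, since the common standard is still to be defined.

```python
# Hypothetical image metadata record with the fields listed above;
# all names and values are illustrative, pending the common standard.
image_metadata = {
    "keywords": ["zebrafish", "heart", "light-sheet"],
    "lab_of_origin": "P03",
    "users": ["Dupont"],
    "microscope": {"brand": "Zeiss", "type": "light-sheet",
                   "objective_magnification": "20x", "objective_NA": 1.0},
    "organism": {"name": "Danio rerio", "strain": "AB",
                 "developmental_stage": "48 hpf"},
    "experimental_manipulations": ["tracer injection"],
    "imaging_conditions": {
        "temperature_C": 28.5,
        "duration_min": 120,
        "time_between_frames_s": 30,
        "imaged_area_um": {"x": 500, "y": 500, "z": 200},
        "z_step_um": 2.0,
        "excitation": "488 nm laser",
        "recorded_wavelengths_nm": [525],
    },
}
```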
**3.2 Making data openly accessible:**

* **Data openly available**

Whether the data will be made fully open immediately will be decided case by case. In any case, all data will be made fully accessible after publication of the corresponding article. In addition, the data will be openly shared among partners of the consortium, except in some specific cases (confidential medical imaging, projects involving third parties with specific agreements). The DOI corresponding to the data will be provided in publications. For data made open prior to publication, the corresponding links will be found on the partners' institutional websites.

* **Methods or software tools needed to access the data**

Whenever possible, the data will be converted into formats that can be opened using public domain software (e.g. ImageJ for imaging data). If not, the specific tools required will be indicated in the metadata file (itself a PDF file, thus openly accessible). When specific software is generated by consortium members to access/visualize the data, this software and its source code will be made available like any other data file.

* **Emplacement of data, associated metadata, documentation and code**

A specific ImageInLife archive will be created within the open Zenodo archive repository. Data will be deposited there together with their metadata and any software required to access them. In addition to the Zenodo archive, each partner will archive their own data on their institutional servers and/or external storage media, and may allow direct access via their laboratory homepage.

* **Access in case of restrictions**

For restricted datasets (e.g. medical image files), access will be managed by the PI responsible for the data generation. In general, efforts will be made to curate the data files in order to provide the relevant source data for other researchers who wish to reproduce the image analysis, without compromising private information of patients.

**3.3 Making data interoperable:**

* **Interoperability of data (the use of data and metadata vocabularies, standards or methodologies in order to facilitate interoperability)**

Because of the diverse types of data and tools, it will not be possible (nor necessarily desirable) to attain full interdisciplinary interoperability within the project. However, as specified above, efforts will be made, notably regarding imaging data, to convert all files into a common format (readable with public domain software, e.g. ImageJ) and to have a common PDF-based metadata format.

**3.4 Data re-use (through clarifying licenses):**

* **Licensing of data in order to permit the widest reuse possible**

In general, data will be licensed under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly attributed. The only exceptions will be cases where the data contain confidential medical information. In that case the files will be curated, in order to make available for reuse only data which cannot be traced back to a specific patient.

* **Availability of data for re-use**

See the section "Data openly available" above. Data will be fully available for reuse after publication of the corresponding article. Decisions on making the data available immediately will be made by PIs on a case-by-case basis.

* **Data quality assurance processes**

Data quality will be the responsibility of each partner. However, a collegial evaluation of the archived data will be made during consortium meetings to ensure a shared minimal quality standard.

* **The length of time for which the data will remain re-usable**

We will not impose limits on the length of reuse of the data. The main issue will be the continuity of data archival on the Zenodo platform, which is so far impossible to predict.

## 4. Allocation of resources

**In this section, the allocation of resources will be exposed by examining the following issues:**

* **Estimation of costs for making the data FAIR**
* **Responsibilities for data management in your project**
* **Costs and potential value of long term preservation**

The estimated costs for making the ImageInLife data FAIR are mainly those of the time (estimated at about one week per year for each partner) that the researchers will dedicate to the activities connected with this issue. These costs are thus covered by the project funds. Every collaborator involved in the project will be responsible for the management of the data that he/she collects/generates. The Data Manager and the Project Manager will make sure that the commonly agreed principles of project data management are respected. Both of them will be contact persons in case any of the project partners has a request concerning the management of the data. Regarding the costs of preserving the digital data, ImageInLife will use a free data repository, Zenodo. The costs of long-term preservation of data using a chargeable repository will be discussed later in the course of the project. Preservation of biological samples will be the responsibility of the partners that generated them. To ensure long-term preservation, valuable samples will be transferred to third-party repositories that do not charge for such archiving (Addgene for plasmids, EZRC for zebrafish).
## 5. Data security

* **Data recovery, secure storage and transfer of sensitive data**

ImageInLife will make sure that all the data collected/generated are safely stored for long-term preservation. As stated above, the general principle will be central archival on the Zenodo platform for the whole consortium, plus storage of each partner's data on their institutional servers. This issue will be discussed later in the project, and the proper measures and actions will be taken toward this objective. Some ImageInLife partners will collect sensitive data (e.g. medical images and patient data). Ensuring their secure storage and restricting access will be their responsibility. Data that are transferred will be the same as the data made publicly open, i.e. data curated to remove information allowing patient identification.

## 6. Ethical aspects

One of the partners will collect personal data from patients and exploit images obtained from medical imaging devices. Consent from participants will be obtained by the partner. However, the data that will be shared with other members of the consortium or with the research community will include only curated data, with any possibility of identifying the patients removed.

## 7. Use of other national/funder/sectorial/departmental procedures for data management

A few partner institutions require that articles be deposited into their own open archives (HAL). This will be one way to ensure fully open access in cases where publications do not appear in Open Access journals. In general, Open Access journals will be preferred for publications; however, this will have to be balanced with the need for high impact.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0238_GoodBerry_679303.md
# Data Summary

_**What is the purpose of the data collection/generation and its relation to the objectives of the project?**_

For GoodBerry, data collection and integration are absolutely necessary. It is therefore of utmost importance that data is not only well generated but also well annotated using open standards and metadata, as laid out in the following sections. As GoodBerry aims at uncovering new principles across diverse berry species and across diverse experiments, good documentation, data keeping and integration are necessary.

_**What types and formats of data will the project generate/collect?**_

We foresee that at least the following data will be collected and generated: firstly, phenotypic data about berry plants; secondly, environmental and management data for plant growth; thirdly, omics data, including metabolomics and transcriptomics data sets stemming from different approaches such as next generation sequencing (NGS) and/or qRT-PCR. In addition, derived data from the original raw data sets will also be collected. This is important, as different analytical pipelines might yield different results or include ad-hoc data analysis parts. Specific care therefore needs to be taken to document and archive these resources (including the analytic pipelines) as well. In addition, we will store correction values that arise from a ring exchange experiment, to transparently correct for individual laboratory influences.

_**Will you re-use any existing data and how?**_

The project builds on existing data sets and relies on them. For instance, without a genomic reference it would not be possible to analyze NGS data sets. However, it is of course also important to include existing data sets on the expression and metabolic behavior of berry crops. Whilst genomic references can simply be gathered from reference databases like NCBI/EBI, expression data that is not available via the SRA/ENA might have to be gathered from supplementary tables. A current analysis, however, suggests that most strawberry data at least is available in standardized format and annotated with metadata from the SRA and/or ENA. Unfortunately, this is not the case for metabolite accumulation data, which will thus be gathered from publication resources. Similarly, as experimental metadata descriptions for NGS data are somewhat lax where plants are concerned (due to the one-size-fits-all approach of minimal information standards), augmented metadata will be generated from the publications accompanying submitted data sets.

_**What is the origin of the data?**_

Public data will be extracted as described in the previous paragraph. For GoodBerry, specific data sets will be generated by the consortium partners. In detail, we expect RNASeq and metabolite data as well as phenotypic and environmental/management data.

_**What is the expected size of the data?**_

Metabolite data will comprise mostly evaluated data and can therefore be stored in less than one GB. For RNASeq data it is currently necessary to keep the read data; given the project sample sizes and the targeted read depth, we expect up to 8 TB of data. In addition, data on the environment and phenotype can usually be strongly compressed and will comprise only several GB. Within the project it will be determined whether image data shall also be stored and shared via the EMPHASIS EU infrastructure for phenotyping facilities (→ to be determined and further developed within the next DMP versions).
_**To whom might it be useful ('data utility')?**_

The data will be useful to the GoodBerry community, as it is immediately necessary to analyze the data produced in GoodBerry and to have data that can be used for prediction, computation and biological interpretation. In addition, the omics data at the very least will be interesting and useful for the whole berry crop community. Given the large amount of RNASeq data, it will even be important for an updated genome of the strawberry crops, because the RNASeq data will be helpful in defining or even finding gene models in the strawberry genome. Also, some data sets, particularly those concerning health-related metabolites, will be of interest to the general public as well.

# FAIR data

## Making data findable, including provisions for metadata

_**Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?**_

All data sets will receive persistent unique identifiers. In addition, to make the data sets useful, they will be annotated with, at the very least, metadata fulfilling DOI and OpenAIRE standards. As in many cases more metadata is necessary to allow true reusability, GoodBerry will provide more metadata than stipulated by DOI/OpenAIRE. The use of DOIs is still under discussion (→ to be included in later DMP versions).

_**What naming conventions do you follow?**_

Data variables will use standard names. This is the case, for example, for genes and metabolites, which will also be linked to open biomedical ontologies. In the case of datasets, the dataset names will also encode the provenance.

_**Will search keywords be provided that optimize possibilities for re-use?**_

Keywords about the experiment and the general consortium will be included, as well as an abstract about the data where useful.

_**Do you provide clear version numbers?**_

To maintain data integrity and to be able to re-analyze data, data sets will be given version numbers.

_**What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.**_

We foresee using, e.g., MinSEQe for sequencing data, MIAMET for metabolites and MIAPE for phenotyping-like data, but we will also rely on specific SOPs established in the EUBerry project. The latter will allow integrating data across projects and safeguards the reuse of established and tested protocols.

## Making data openly accessible

_**Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions.**_

By default, all data sets from GoodBerry will be shared and made openly available. This will however usually be after a grace period that allows partners to iterate through the data, potentially clean it up in the process, and exert their publishing and patenting rights prior to unlimited sharing.

_**Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out.**_

Does not apply.
_**How will the data be made accessible (e.g. by deposition in a repository)?**_

Data will be made available via the project-specific website (www.goodberry-eu.eu), linked into the German plant primary database (plabipd.de). It will be ensured that data which can be stored in international specialized repositories (NCBI, EBI, SRA, ENA, etc.) will be stored and processed there as well.

_**What methods or software tools are needed to access the data?**_

No specialized software will be needed to access the data. Access will be possible via web interfaces once the data are publicly available. For data processing after obtaining the raw data, typical open source software can be used (in particular R/BioConductor, Trimmomatic, Bowtie/BWA, etc.); a minimal retrieval example is sketched at the end of this subsection.

_**Is documentation about the software needed to access the data included?**_

As no software is needed for access, no documentation needs to be provided.

_**Is it possible to include the relevant software (e.g. in open source code)?**_

As stated above, software is only needed AFTER data has been obtained by a user, in order to process and/or analyze the data. Here we use publicly available, open-source certified software.

_**Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible.**_

As noted above, specialized repositories like SRA/ENA are very likely the most common ones.

_**Have you explored appropriate arrangements with the identified repository?**_

Submission is free of charge, and it is the goal (at least of ENA) to obtain as much data as possible. Therefore, special arrangements are neither necessary nor useful.

_**If there are restrictions on use, how will access be provided?**_

There are no restrictions.

_**Is there a need for a data access committee?**_

Consequently, there is no need for a committee.

_**Are there well described conditions for access (i.e. a machine readable license)?**_

Yes; where possible, e.g., CC REL will be used.

_**How will the identity of the person accessing the data be ascertained?**_

Where data are only shared within the consortium, due to data cleanup tasks and/or publication preparations, a user-specific login is necessary. Where data are publicly available, these data have to be available without siphoning out personal information, i.e. access is anonymous, which is the only way to truly open data and which is in accordance with data protection laws in Germany.
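To illustrate how a third party could locate reads deposited at ENA, the sketch below queries the ENA Portal API file report for a study accession. The accession is a placeholder, and the endpoint and field names are assumed to match the ENA Portal API as commonly documented.

```python
import urllib.request

# Hypothetical study accession; replace with the accession assigned
# by ENA once GoodBerry reads are deposited there.
ACCESSION = "PRJEB00000"

# ENA Portal API file report: one row per sequencing run, with the
# FTP locations of the FASTQ files (assumed endpoint and field names).
url = (
    "https://www.ebi.ac.uk/ena/portal/api/filereport"
    f"?accession={ACCESSION}&result=read_run"
    "&fields=run_accession,fastq_ftp&format=tsv"
)

with urllib.request.urlopen(url) as response:
    report = response.read().decode("utf-8")

for line in report.splitlines()[1:]:     # skip the TSV header row
    run, fastq_ftp = line.split("\t")
    print(run, "->", fastq_ftp)
```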
## Making data interoperable

_**Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?**_

At all times, data will be stored in common and openly defined formats. By default, no proprietary formats will be used.

_**What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?**_

As mentioned above, we foresee using, e.g., MinSEQe for sequencing data, MIAMET for metabolites and MIAPE for phenotyping-like data, but we will also rely on specific SOPs established in the EUBerry project. The latter will allow integrating data across projects and safeguards the reuse of established and tested protocols. Additionally, we will use ontology terms to enrich the data sets, relying on free and open ontologies.

_**Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability?**_

Indeed, open biomedical ontologies will be used where they are mature. In certain cases, such as the environment, RWTH AACHEN is involved in extending open ontologies for environmental characterisations so that these can be used as well.

_**In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?**_

Common and open ontologies will be used, so this question does not apply.

## Increase data re-use (through clarifying licences)

_**How will the data be licensed to permit the widest re-use possible?**_

Open licenses, such as CC, will be used.

_**When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.**_

In general, due time will be given to the consortium partners to exploit the data for publication and/or IP purposes first. This is not least due to the fact that data will often see an increase in quality through additional clean-up for publications. All consortium partners will be encouraged to make data available prior to publication under pre-publication agreements such as those started in Fort Lauderdale and set forth by the Toronto International Data Release Workshop.

_**Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.**_

There will be no restrictions once data is made public.

_**How long is it intended that the data remains re-usable?**_

Data will be made available for many years, and potentially indefinitely, after the end of the project.

_**Are data quality assurance processes described?**_

Data will be analyzed using automatic procedures as well as by manual curation. This will be based on tools developed by RWTH AACHEN for other projects.

# Allocation of resources

_**What are the costs for making data FAIR in your project?**_

The costs comprise data curation, the setup of databases, and long-term sustenance and storage, including electricity. The costs will amount to approximately 250,000 €, most of which can be covered by eligible costs that will be claimed.

_**How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).**_

A large part of the cost is covered in WP5 – Data analysis and integration. As this work package is underfunded, additional resources are taken from the partner's own core funding.

_**Who will be responsible for data management in your project?**_

Partner 7 – RWTH AACHEN.

_**Are the resources for long term preservation discussed (costs and potential value, who decides and how/what data will be kept and for how long)?**_

The partner RWTH AACHEN decides on preservation. The partner has pledged long-term (potentially for decades) support at its own cost. RNA data will also be available through ENA/SRA (so data will be available from redundant resources); here, EBI and NCBI will decide how to proceed in the future. At the moment, data will be kept indefinitely, but of course one has to take recent political developments, especially in the EU, into account.
# Data security _**What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?** _ Once data is transferred to the GoodBerry database, the data security standards entailed by the German Plant Primary Data Database will be imposed. With regard to secure storage, this comprises the use of encrypted connections to the data (https and sftp), where passwords and usernames are generally transferred via separate secure media. Data blobs are also tagged with access rights and groups. Of course, necessary security updates will always be applied, in addition to in-house server hardening procedures. All infrastructure will be based on open source, so that no backdoors introduced by hostile third parties or industrial espionage will be present. In terms of data recovery, data is stored on a zfs-based file system employing RAID-like redundancy (with a potential migration to btrfs in the future). In addition, regular backups are made to LTO tape libraries. It will be considered whether to include GoodBerry data in a high-availability data service where all data sets are mirrored and served from two redundant locations (30 km apart) with a high-bandwidth connection (a later DMP decision point). _**Is the data safely stored in certified repositories for long term preservation and curation?** _ Transcriptomics data will also be made available upon publication via the standard repositories ENA/SRA. In addition, the national resource will maintain safekeeping of the data after the project end. # Ethical aspects _**Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA).** _ At the moment, we do not foresee ethical or legal issues with data sharing. In terms of ethics, as this is plant data there is no need for an ethics committee. The consortium makes its best efforts to ensure data sharing, at the very latest after exploitation through papers and patents. _**Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?** _ The only personal data that will potentially be stored is the submitter name and affiliation in the metadata. As lengthy questionnaires tend to stifle careful answering and deposition, this will be highlighted again to the submitters, and they can opt out, in which case only their institution will be mentioned. This is, however, a very unlikely case, as data evaluations will be published in scientific journals anyway, providing the names of the authors. # Other issues _**Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones?** _ GoodBerry will make use of the German Plant Primary Data Database, which is a national resource. In addition, it needs to be evaluated whether the EMPHASIS infrastructure will be useful for phenotypic data.
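As a concrete complement to the recovery provisions described under Data security above, the following is a minimal sketch, not part of the project's actual tooling, of how file integrity could be verified after a transfer or a restore from the LTO backups. Only Python's standard library is used; the file paths are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 checksum of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def restore_matches_original(original: Path, restored: Path) -> bool:
    """Return True if a restored copy matches the original byte for byte."""
    return sha256sum(original) == sha256sum(restored)

# Hypothetical paths for illustration only.
# print(restore_matches_original(Path("data/run1.fastq.gz"), Path("restore/run1.fastq.gz")))
```

Recording such checksums alongside the metadata would also let a downloader confirm that a publicly served data set arrived intact.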
# Abbreviations

* btrfs: B-Tree File System
* CC: Creative Commons
* CC REL: Creative Commons Rights Expression Language
* DMP: Data Management Plan
* DOI: Digital Object Identifier
* EBI: European Bioinformatics Institute
* EMPHASIS: European Multi-Environment Plant Phenotyping And Simulation Infrastructure
* ENA: European Nucleotide Archive
* https: HyperText Transfer Protocol Secure
* IP: Intellectual Property
* LTO: Linear Tape Open
* MIAMET: Minimal Information about Metabolite experiment
* MIAPE: Minimal Information about Plant Phenotyping Experiment
* MinSEQe: Minimum Information about a high-throughput Sequencing Experiment
* NCBI: National Center for Biotechnology Information
* NGS: Next Generation Sequencing
* OpenAIRE: Open Access Infrastructure for Research in Europe
* qRT PCR: quantitative Real time reverse transcriptase coupled PCR
* RAID: Redundant Array of Inexpensive Disks
* RNASeq: RNA Sequencing
* sftp: Secure File Transfer Protocol
* SOP: Standard Operating Procedures
* SRA: Short Read Archive
* WP: Work Package
* zfs: Zettabyte File System
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0240_LLR_737033.md
# _Executive summary_ This first version of the data management plan is a core deliverable of the LLR project. It briefly describes 7 categories of data the LLR project may collect or generate for each work package and details the means of storage, preservation and sharing foreseen by the LLR consortium. For this first version of the DMP, grids will be filled in as precisely as possible. Nevertheless, since the project has only recently begun and the first semester will mostly be dedicated to the laser construction, the beneficiaries might not be able to provide a detailed description of the datasets. An updated version will be delivered at month 18. # Introduction **LLR project and objectives** Laser Lightning Rod is a European project funded by the EC under the H2020 Programme. Gathering 7 leaders in the domains of nonlinear propagation of intense lasers in the atmosphere, laser control of electric discharges, lightning physics, aeronautics and high-power laser development, the LLR project aims to investigate and develop a new type of lightning protection based on the use of upward lightning discharges initiated by a high-repetition-rate multi-terawatt laser. As a H2020 and a Future and Emerging Technologies project, the LLR project commits to fostering sharing, dissemination and reuse of research data and results, so as to ensure their take-up by research organizations, PhD students or the private sector. This broadened access to research data mainly intends to 1) allow any third party to follow and validate the results reached by the LLR team; 2) ensure data access and reuse by any researcher, student or industrial partner as much as possible; 3) maximize the impact of LLR research. Thus, the LLR consortium fully supports the ORD policy recently implemented by the European Commission and delivers hereinafter a data management plan aiming to handle the research data that might be collected or generated during the project. # LLR Data management plan ## Purposes According to the obligation to disseminate results and ensure open access to data, the Consortium delivers this first version of the DMP for the LLR project based on the FAIR data principles. With 4 years of data, generated by 7 partners across Europe, the LLR project undoubtedly requires a detailed but nonetheless flexible data management plan. This document sets the strategy and tools the LLR consortium wishes to adopt regarding the management, storage and dissemination of all relevant data the project might generate or the researchers might collect. By defining the data management process from the project's beginning, the Consortium aims to ease and ensure data access and dissemination. ## Method and tools Based on the EC template and guidelines, the LLR data management plan follows the FAIR data principles and adopts a management grid composed of 5 sections: 1. The data summary section gives an overview of the dataset's content; 2. The storage and access section specifies the hosting, storage, access and security details of the dataset, in order for the data to be findable, accessible and interoperable; 3. The third section defines the conditions of dissemination during the project and long-term preservation after the project's lifespan; 4. The metadata section describes the standards used to document the dataset and who is responsible for producing the metadata; 5. The last section allows the beneficiaries to specify whether there is any other issue, e.g. legal or ethical, regarding the dataset.
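To make this grid concrete, here is a minimal machine-readable sketch of one grid entry, populated with values taken from Dataset 2.1 described later in this document. The field names are illustrative condensations of the template, not an agreed project schema:

```python
# Sketch of one management grid entry (field names are illustrative only).
dataset_grid = {
    "summary": {
        "reference": "DS2.1",
        "related_wp": "WP2",
        "responsible_partner": "UNIGE",
        "nature": ["image", "binary", "text", "video"],
    },
    "storage_and_access": {
        "hosting": "local server at UNIGE and distant server",
        "formats": [".mat", ".py", "spreadsheet", "text", "binary"],
        "projected_volume_gb": 100,
        "sensitive": False,
    },
    "dissemination_and_preservation": {
        "dissemination_level": "public",
        "embargo_months": 12,
        "recommended_lifetime_years": 5,
    },
    "metadata": {"format": "ASCII"},
    "other_issues": {"legal_or_ethical": None},
}
```

Keeping the grid in such a structured form would make it straightforward to validate that every dataset answers every section of the template.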
The DMP will be updated on a regular basis, following the project’s milestones: <table> <tr> <th> **Ref.** </th> <th> **DMP Version** </th> <th> **Description** </th> <th> **Approx. date** </th> </tr> <tr> <td> _DMP 0_ </td> <td> DMP First version </td> <td> In the first version of the DMP, the beneficiaries define a list of datasets as well as a management strategy and its corresponding tools. </td> <td> June, 2017 </td> </tr> <tr> <td> _DMP 1_ </td> <td> DMP updated 1 </td> <td> At month 18, the LLR team should already have gathered and processed data of numerous tests, among them the 200mJ laser test. At this stage, the beneficiaries should get more visibility on the research data the project might generate. </td> <td> June, 2018 </td> </tr> <tr> <td> _DMP 2_ </td> <td> DMP updated 2 </td> <td> At month 36, many tests and data should be available on permanent media. The beneficiaries should be able to design the final DMP. </td> <td> January, 2020 </td> </tr> <tr> <td> _DMP 3_ </td> <td> Final DMP </td> <td> At the end of the project, the beneficiaries will be able to deliver a detailed and precise DMP, in particular regarding data access and preservation after the project’s lifespan. </td> <td> November, 2020 </td> </tr> </table> The Consortium has chosen the Green open access method: data and results will be made available on the _Zenodo_ repository, in accordance with the dissemination methods and plan detailed below. ## Responsibility for the data <table> <tr> <th> _Work Package_ </th> <th> _Dataset reference_ </th> <th> _Responsible partner_ </th> </tr> <tr> <td> WP1 </td> <td> None </td> <td> TRUMPF </td> </tr> <tr> <td> WP2 </td> <td> DS2.1 </td> <td> UNIGE </td> </tr> <tr> <td> DS2.2 </td> <td> UNIGE </td> </tr> <tr> <td> DS2.3 </td> <td> UNIGE </td> </tr> <tr> <td> DS2.4 </td> <td> CNRS </td> </tr> <tr> <td> WP3 </td> <td> DS3.1 </td> <td> AMC </td> </tr> <tr> <td> WP4 </td> <td> DS4.1 </td> <td> EPFL </td> </tr> <tr> <td> DS4.2 </td> <td> EPFL </td> </tr> <tr> <td> WP5 </td> <td> None </td> <td> CNRS </td> </tr> </table> In order to ensure the implementation of the DMP and the quality of the ORD system, the Consortium chose to define a responsibility chain: the coordinator stays at the head of data management and coordination as regards the timeframe, storage and accessibility, while each WP lead will be in charge of the corresponding WP datasets regarding their quality, general coherence, storage and accessibility. At last, each beneficiary will be responsible for the content, storage and protection of the data they directly produce. * Project coordinator: overall compatibility; respect of the terms of the DMP (timetable, formats, etc.); accessibility and preservation. * WP lead: accessibility of the WP datasets; compatibility and coherence. * Beneficiary: registration of the data directly generated; storage of the data directly generated; accuracy and quality of the data directly generated; protection of the data directly produced. ## Allocation of resources <table> <tr> <th> _ALLOCATION OF RESOURCES_ </th> <th> </th> </tr> <tr> <td> **Costs** </td> <td> _Assess the additional costs for implementing the DMP_ </td> </tr> <tr> <td> **Staff** </td> <td> _Assess the additional costs for implementing the DMP_ </td> </tr> <tr> <td> **Hardware** </td> <td> _Any hardware necessary for data management_ Allocated local server at LOA (2 TB) for data storage </td> </tr> </table> # Datasets description The DMP describes 7 datasets, corresponding to WP 2, 3 and 4.
<table> <tr> <th> _Work Package_ </th> <th> _Dataset reference_ </th> <th> _Dataset description_ </th> </tr> <tr> <td> WP1 </td> <td> None </td> <td> The first work package consists of the construction of the laser system. No research data is expected to be generated. </td> </tr> <tr> <td> WP2 </td> <td> DS2.1 </td> <td> Results from tests with the 200mJ system </td> </tr> <tr> <td> DS2.2 </td> <td> Results from tests with the 1J system </td> </tr> <tr> <td> DS2.3 </td> <td> Development of the telescope </td> </tr> <tr> <td> DS2.4 </td> <td> Results of preliminary campaign experiments </td> </tr> <tr> <td> WP3 </td> <td> DS3.1 </td> <td> Data collected during outdoor horizontal measurements </td> </tr> <tr> <td> WP4 </td> <td> DS4.1 </td> <td> Collection of environmental data </td> </tr> <tr> <td> DS4.2 </td> <td> Results from the lightning campaign </td> </tr> <tr> <td> WP5 </td> <td> None </td> <td> The 5th work package mainly consists of project coordination and dissemination of the results. No research data will be generated. </td> </tr> </table> ## Dataset Template <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS1, 2, 3… </td> </tr> <tr> <td> **Related WPs** </td> <td> What are the WPs related to this dataset? </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> Name of the responsible partner </td> </tr> <tr> <td> **Dataset property** </td> <td> Name of the beneficiary, or “Consortium” if the data is jointly generated </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Type of data: images, text, survey data, etc. </td> </tr> <tr> <td> **General description** </td> <td> Content and objectives of the dataset regarding the project’s purposes. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Will this dataset use any existing data? </td> </tr> <tr> <td> **Method of production** </td> <td> How will the data be obtained/generated (observation, simulation, analysis, reuse of data, survey, etc.)? </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> Responsibility or participation of other partners in this DS </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and/or hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> How will the data be stored? (local or distant server, external hard drive, …) </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Under what format? </td> </tr> <tr> <td> **Projected volume** </td> <td> Estimated storage volume (e.g. in MB) </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Is any tool (software, etc.) needed in order to access the data? </td> </tr> <tr> <td> **Access procedures** </td> <td> Through what medium will the other partners access the data during the project? </td> </tr> <tr> <td> **Data sharing** </td> <td> Will the data be shared with third parties during the project? </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> Is this a sensitive dataset? If yes, why and what preventive actions will be taken? </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Under what format will the data be disseminated?
</td> </tr> <tr> <td> **Data utility** </td> <td> Potential for reuse: to whom might the data be useful? Scientific community, general public, private sector, etc. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Confidential or public </td> </tr> <tr> <td> **Type of license** </td> <td> Is there a license protecting this data? </td> </tr> <tr> <td> **Embargo** </td> <td> Is there an embargo period the beneficiary or the consortium wishes to implement? (with prior justification) </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Will the DS be preserved? </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> How long should the data be preserved? </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> On what medium will the data be preserved after the project? </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> What documents or standards might be needed in order to use or interpret the data? </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> Who will be in charge of creating the metadata? What methodology should be used? </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> Has any legal or ethical issue been identified? </td> </tr> </table> ## Dataset 2.1 – [200mJ Lab test] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS2.1 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS2.1 will be generated within WP2 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> UNIGE </td> </tr> <tr> <td> **Dataset property** </td> <td> UNIGE, CNRS, TSL, AMCS </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Image, binary, text files, and video files. </td> </tr> <tr> <td> **General description** </td> <td> Scientific data used for the analysis and characterization of the filaments produced with the 200mJ laser. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Previous data on the characterization of filaments found in the literature might be used. </td> </tr> <tr> <td> **Method of production** </td> <td> Measurements, possible reuse of historical data, simulation, and analysis. </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> CNRS, TSL and AMCS will collaborate in the gathering, simulation and analysis of the data. </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at UNIGE and distant server. Any hard copies will be stored in binders. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Binary files. Spreadsheet files. Text files. Matlab .MAT files. Video and photographic images. Python .PY files. The exact description will be stored in headers or in separate text files. </td> </tr> <tr> <td> **Projected volume** </td> <td> Of the order of 100 GBytes </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Matlab, Python, standard word processing software. </td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means.
Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> All output can be shared within the LLR consortium, and is primarily located in the UNIGE archiving system. Data in this dataset is not expected to be classified as restricted as it will only have scientific value. The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> No </td> </tr> </table> <table> <tr> <th> </th> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> <td> </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> 1 year for analysis and drafting of publications; may be extended subject to difficulties in writing or publishing the results. Except regarding research data necessary for validation of results and peer-reviewed publications as fixed by art. 29.2 and 29.3 of the Grant Agreement </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> 5 years </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Metadata will be written in standard ASCII format </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 2.2 – [1J Lab test] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS2.2 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS2.2 will be generated within WP2 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> UNIGE </td> </tr> <tr> <td> **Dataset property** </td> <td> UNIGE, CNRS, TSL, AMCS </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Image, binary, text files, and video files. </td> </tr> <tr> <td> **General description** </td> <td> Scientific data used for the analysis and characterization of the filaments produced with the 1J laser. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Previous data on the characterization of filaments found in the literature and taken at UNIGE might be used. </td> </tr> <tr> <td> **Method of production** </td> <td> Measurements, possible reuse of historical data, simulation, and analysis. </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> CNRS, TSL and AMCS will collaborate in the gathering, simulation and analysis of the data.
</td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at UNIGE and distant server. Any hard copies will be stored in binders. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Binary files. Spreadsheet files. Text files. Matlab .MAT files. Video and photographic images. Python .PY files. The exact description will be stored in headers or in separate text files. </td> </tr> <tr> <td> **Projected volume** </td> <td> Of the order of 100 GBytes </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Matlab, Python, standard word processing software. </td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means. Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> All output can be shared within the LLR consortium, and is primarily located in the UNIGE archiving system. Data in this dataset is not expected to be classified as restricted as it will only have scientific value. The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> No </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> 1 year for analysis and drafting of publications; may be extended subject to difficulties in writing or publishing the results. Except regarding research data necessary for validation of results and peer-reviewed publications as fixed by art.
29.2 and 29.3 of the Grant Agreement </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> 5 years </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Metadata will be written in standard ASCII format </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 2.3 – [Development of the telescope] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS2.3 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS2.3 will be generated within WP2 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> UNIGE </td> </tr> <tr> <td> **Dataset property** </td> <td> UNIGE </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Report on the development of the telescope. Image, binary, text files and sketches. </td> </tr> <tr> <td> **General description** </td> <td> Design of the telescope used to expand the laser beam and to allow a vertical shoot. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Previous standard telescope designs might be used as a working basis </td> </tr> <tr> <td> **Method of production** </td> <td> Reflections and discussions. </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> LOA and TSL might collaborate in the analysis of the data. </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at UNIGE and distant server. Any hard copies will be stored in binders. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Binary files. Spreadsheet files. Text files. PDF files. Video and photographic images. </td> </tr> <tr> <td> **Projected volume** </td> <td> Of the order of 10 GBytes </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> PDF reader. Standard word processing software. </td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means. Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> All output can be shared within the LLR consortium, and is primarily located in the UNIGE archiving system. Data in this dataset is not expected to be classified as restricted as it will only have scientific value. The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> No </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for further research.
</td> </tr> <tr> <td> **Dissemination level** </td> <td> Confidential </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> 1 year for analysis and drafting of publications; may be extended subject to difficulties in writing or publishing the results. Except regarding research data necessary for validation of results and peer-reviewed publications as fixed by art. 29.2 and 29.3 of the Grant Agreement </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> 5 years </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Metadata will be written in standard ASCII format </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> As no large quantities of data will be produced, there are no requirements for long-term data management. The experiment output is stored on the UNIGE server, which is backed up regularly. Volumes and cost are negligible. </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 2.4 – [Development of the interferometer] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS2.4 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS2.4 will be generated within WP2 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> CNRS (A. Houard) </td> </tr> <tr> <td> **Dataset property** </td> <td> CNRS, AMCS </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Report on the development of the optical interferometer. Image, binary and text files, obtained from the tests carried out at LOA. </td> </tr> <tr> <td> **General description** </td> <td> Design and test report on the optical interferometer developed to characterize the energy deposition from the laser filament. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Previous data on the characterization of filament energy deposition in air obtained by LOA might be used. </td> </tr> <tr> <td> **Method of production** </td> <td> Measurements, possible reuse of historical data, simulation, and analysis </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> UNIGE and TSL might collaborate in the gathering, simulation and analysis of the data. </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at the LOA and distant server. Any hard copies will be stored in binders. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Binary files. Spreadsheet files. Text files. Matlab .MAT files. Video and photographic images. Origin files. The exact description will be stored in headers or in separate text files. </td> </tr> <tr> <td> **Projected volume** </td> <td> Of the order of 10 GBytes </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Matlab, Origin, standard word processing software.
</td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means. Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> All output can be shared within the LLR consortium, and is primarily located in the LOA archiving system. Data in this dataset is not expected to be classified as restricted as it will only have scientific value. The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> None </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> 6 months for analysis and drafting of a publication </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> Decades </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Metadata will be written in standard ASCII format </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> As no large quantities of data will be produced, there are no requirements for long-term data management. The experiment output is stored on the LOA server, which is backed up regularly. Volumes and cost are negligible.
</td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 3.1 – [Outdoor horizontal measurements] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS3.1 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS3.1 will be generated within WP3 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> AMCS </td> </tr> <tr> <td> **Dataset property** </td> <td> LLR consortium </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Data files, text files and images </td> </tr> <tr> <td> **General description** </td> <td> Interferometric data used to determine the existence and properties of a low-density channel created over long horizontal distances by filamentation in air </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Comparison with similar data over short distances from experiments at the TRUMPF site will be made </td> </tr> <tr> <td> **Method of production** </td> <td> Recording of data on the generated low-density channel through interferometric measurements over long distances (~100 m) </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> TRUMPF (laser), UNIGE (telescope) and CNRS (interferometer) </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Computer files and local server at LOA </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Text files, binary files, standard spreadsheet files, images </td> </tr> <tr> <td> **Projected volume** </td> <td> Current estimate: in the order of GBytes </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Standard word processing software; possibly specialized software </td> </tr> <tr> <td> **Access procedures** </td> <td> Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> Data for exclusive research purposes will be shared upon request </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> No </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research.
</td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> 6 months for analysis and drafting of a publication </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> Decades </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Metadata will be written in standard ASCII format </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> As no large quantities of data will be produced, there are no requirements for long-term data management. The experiment output is stored on the LOA server, which is backed up regularly. Volumes and cost are negligible. </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 4.1 – [Environmental data] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS4.1 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS4.1 will be generated within WP4 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> EPFL </td> </tr> <tr> <td> **Dataset property** </td> <td> MeteoSwiss (meteorological data), EPFL and HES-SO (radar data) </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Images, data files and text files </td> </tr> <tr> <td> **General description** </td> <td> Meteorological data used for the analysis of the conditions for the initiation of lightning discharges from the Säntis Tower. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Historical meteorological data may be used. </td> </tr> <tr> <td> **Method of production** </td> <td> Measurements, reuse of historical data, simulation, and analysis </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> The HES-SO will collaborate in the gathering, simulation and analysis of the data. </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at the EPFL. Backed up at the HES-SO. Any hard copies will be stored in binders. Scanning will be carried out whenever possible. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Text files. Binary files. Standard spreadsheet files. Images. </td> </tr> <tr> <td> **Projected volume** </td> <td> Current estimate: in the order of several GBytes. </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Matlab/Octave. Standard word processing software. Unknown at this time: possibly specialized software. </td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means. Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> Data in this dataset is not expected to be classified as restricted as it will only have scientific value.
The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> None </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> No </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> Decades </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Matlab MAT-File Format from mathworks.com (https://www.mathworks.com/help/pdf_doc/matlab/matfile_format.pdf). </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table> ## Dataset 4.2 – [Lightning campaign] <table> <tr> <th> </th> <th> **DATASET SUMMARY** </th> </tr> <tr> <td> **Dataset reference and name** </td> <td> DS4.2 </td> </tr> <tr> <td> **Related WPs** </td> <td> DS4.2 will be generated within WP4 </td> </tr> <tr> <td> **Responsibility for the dataset** </td> <td> EPFL </td> </tr> <tr> <td> **Dataset property** </td> <td> EPFL and HES-SO </td> </tr> <tr> <td> **Resource type/nature of data** </td> <td> Image, binary, text files, and video files. Current and field waveforms. High-speed optical lightning measurements, interferometric measurements, Lightning Mapping Array, video and still pictures. </td> </tr> <tr> <td> **General description** </td> <td> Field, interferometric and return-stroke current data used for the analysis of the conditions for the initiation of lightning discharges from the Säntis Tower. </td> </tr> <tr> <td> **Reuse of existing data** </td> <td> Historical current and field data might be used. </td> </tr> <tr> <td> **Method of production** </td> <td> Measurements, possible reuse of historical data, simulation, and analysis </td> </tr> <tr> <td> **Other partners’ activities and responsibilities** </td> <td> The HES-SO will collaborate in the gathering, simulation and analysis of the data. </td> </tr> </table> <table> <tr> <th> </th> <th> **STORAGE AND ACCESS** </th> </tr> <tr> <td> _STORAGE AND RECORDING_ </td> <td> </td> </tr> <tr> <td> **Medium of data** </td> <td> Digital and possibly hard copy </td> </tr> <tr> <td> **Data hosting** </td> <td> Local server at the EPFL. Backed up at the HES-SO. Any hard copies will be stored in binders. Scanning will be carried out whenever possible. </td> </tr> <tr> <td> **Data standards/formats** </td> <td> Binary files. Spreadsheet files. Text files. Matlab .MAT files. Video and photographic images. The exact description will be stored in headers or in separate text files.
</td> </tr> <tr> <td> **Projected volume** </td> <td> Of the order of 1 TByte </td> </tr> <tr> <td> _ACCESS_ </td> <td> </td> </tr> <tr> <td> **Data reading** </td> <td> Matlab/Octave. Standard word processing software. Unknown at this time: possibly specialized software for interferometric and LMA data. </td> </tr> <tr> <td> **Access procedures** </td> <td> Hardcopy data requested by the partners will be sent by regular mail, or scanned and sent by electronic means. Access to digital data stored on servers will be provided by way of remote data transfer using cloud services. </td> </tr> <tr> <td> **Data sharing** </td> <td> Data in this dataset is not expected to be classified as restricted as it will only have scientific value. The data will therefore be shared with third parties for exclusive research purposes upon request. </td> </tr> <tr> <td> _DATA SECURITY_ </td> <td> </td> </tr> <tr> <td> **Sensitive data** </td> <td> None </td> </tr> </table> <table> <tr> <th> **DISSEMINATION AND PRESERVATION** </th> </tr> <tr> <td> _DISSEMINATION_ </td> </tr> <tr> <td> **Distribution medium** </td> <td> Through publications in journals and presentations at conferences </td> </tr> <tr> <td> **Data utility** </td> <td> The scientific community could reuse the data for validation of the results of the project or for further research. </td> </tr> <tr> <td> **Dissemination level** </td> <td> Public </td> </tr> <tr> <td> **Type of license** </td> <td> Not applicable </td> </tr> <tr> <td> **Embargo** </td> <td> No </td> </tr> <tr> <td> _STORAGE AFTER THE PROJECT AND LONG-TERM PRESERVATION_ </td> </tr> <tr> <td> **Data selection** </td> <td> Yes </td> </tr> <tr> <td> **Recommended lifetime** </td> <td> Decades </td> </tr> <tr> <td> **Long-term preservation platform or medium** </td> <td> TBD </td> </tr> </table> <table> <tr> <th> </th> <th> **METADATA** </th> </tr> <tr> <td> **Standards and metadata** </td> <td> Matlab MAT-File Format from mathworks.com (https://www.mathworks.com/help/pdf_doc/matlab/matfile_format.pdf). </td> </tr> <tr> <td> **Method of production and responsibility** </td> <td> </td> </tr> </table> <table> <tr> <th> </th> <th> **OTHER ISSUES** </th> </tr> <tr> <td> **Legal or ethical issue** </td> <td> None </td> </tr> </table>
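Since several of the datasets above store data and metadata in Matlab .MAT files and list Matlab/Octave as the reading tool, here is a minimal sketch, offered only as an illustration and not as part of the DMP, of how a third party could inspect such a file with open-source tools. The file name is hypothetical:

```python
from scipy.io import loadmat  # open-source reader for Matlab MAT files

# Hypothetical file name for illustration only.
contents = loadmat("saentis_stroke_001.mat")

# loadmat returns a dict mapping variable names to NumPy arrays;
# keys starting with '__' are file-level metadata written by Matlab.
for name, value in contents.items():
    if not name.startswith("__"):
        print(name, getattr(value, "shape", type(value)))
```

Note that MAT files saved in the v7.3 format are HDF5 containers and would need an HDF5 reader such as h5py instead of scipy.io.loadmat.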
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0248_AfricanBioServices_641918.md
# 1 INTRODUCTION This is the third version of the Data Management Plan (DMP) of the _**AfricanBioServices** _ project. The document describes the life cycle of all datasets that are collected, processed and stored, how data shall be made available during the _**AfricanBioServices** _ project, and how the data will be preserved at the end of the project. Additionally, the DMP defines the roles in the management of all data within the project, and describes where and how (do-it-yourself) to easily access the datasets. All the project participants are obliged to read and understand the basic principles of the DMP and must follow the rules of data storing and sharing within the project. The DMP is not a static document, but a dynamic one that gains content as the database expands during the lifespan of the project. ## 1.1 The third version of the DMP This is the third version of the DMP (Deliverable 1.4), which will be delivered in M39 by NTNU in collaboration with TAWIRI. This DMP deliverable complies with the Guidelines on FAIR Data Management in Horizon 2020, Version 3.0, provided by the Commission. The DMP describes the procedures for both collated baseline data (input data) and newly collected (field) data (output data), according to the Grant Agreement (GA). Data and information collected for the project will be uploaded into the repository, and will subsequently be standardized and stored in the database. Explanations of important concepts in the plan are given in ANNEX I. Each task and sub-task leader (see ANNEX II: Participant list) shall prepare metadata for each data set (e.g. activity) by using the data repository template (ANNEX III). It is important that both input and output data are described in the data repository template. All output data files must be accompanied by the relevant research plan with a description of the methodology. This information is essential for good data management (e.g. coordinating fieldwork and data collection) across activities, sub-tasks, tasks and WPs. ## 1.2 Updating the DMP The final version of the DMP will be delivered at M51 (Deliverable 1.1). Updated versions of the DMP will be created if important changes regarding data management occur due to the inclusion of new data sets, changes in consortium policies or other changes (see Figures 1 and 2). # 2 ORGANISATION This document outlines the general procedures for collection, storage and preservation of data within the _**AfricanBioServices** _ project. Data management is organised at two different levels: ## 2.1 Project level within the different WPs The Work Breakdown Structure (WBS) is on four levels: WP, task, sub-task and ‘activity’. Within each task or sub-task there can be several activities. Each activity that includes data management must follow the DMP. This implies the creation of a metadata file, a research plan and Grant Agreement Activity Reports (GAAR, see ANNEX IV). All WP, task and sub-task leaders are responsible for following the DMP for each activity within their WP. The metadata files, research plans and GAARs are online (i.e. uploaded to the repository/e-room/Google docs) and digital. These documents are therefore searchable and identifiable at the activity level. ## 2.2 Consortium level WP1 is responsible for data management at the consortium level. Specifically, this includes preparing data for the database (in the Repository and Upload service) and forming the relational database (Deliverable D5.)
# 3 LIFE CYCLE OF DATA The data life cycle (Figure 1) is the process that begins with data collection (according to the research plan), followed by uploading of the data into the repository, standardization (by following the procedure in Internal Note 22, ANNEX V), quality assurance (QA) and quality control (QC) (by following the procedure described in Chapter 9.2), and finally uploading into the database (Figure 1). During validation of data quality (i.e. that the data management of a file follows the DMP), datasets that do not qualify to be uploaded into the relational database will remain in the repository. # 4 ROLES The following section assigns the different roles in data management. All roles will be continuously maintained in the participant list (ANNEX II). ## 4.1 Task leaders * Update the GAARs according to the task(s) they are responsible for. ## 4.2 Researchers * Communicate with the DBG * Upload data into the Repository * Make the metadata file * Make the research plan ## 4.3 Database group * Facilitate input and output data from all activities * Review data and metadata in the repository * Prepare and upload datasets into the upload service * Ensure that researchers follow the DMP (including QA and QC) ### 4.3.1 Members of the database group * Peter Sjolte Ranke (NTNU) Leader * Marc Daverdin (NTNU) Quality assurance * Devolent Mtui (TAWIRI) Quality assurance * Lucy Njino (DRSRS) Quality assurance **Figure 1.** The life cycle of datasets during the project ## 4.4 Data Management Group The Data Management Group (DMG) will check and approve that the quality of data uploaded to the relational database has been validated. ### 4.4.1 Members of the Data Management Group **Table 1.** Description of roles in the DMG <table> <tr> <th> Country </th> <th> Thematic area </th> <th> Name </th> <th> Role </th> </tr> <tr> <td> Tanzania </td> <td> Biodiversity </td> <td> Machoke Mwita (TAWIRI) </td> <td> Principal contact </td> </tr> <tr> <td> Socioeconomic </td> <td> Angela Mwakatobe (TAWIRI) </td> <td> Principal contact </td> </tr> <tr> <td> Environment </td> <td> Devolent Mtui (TAWIRI) </td> <td> Leader </td> </tr> <tr> <td> Kenya </td> <td> Biodiversity </td> <td> Joseph Mukeka (KWS) </td> <td> Principal contact </td> </tr> <tr> <td> Socioeconomic </td> <td> Francis Wanyoike </td> <td> Principal contact </td> </tr> <tr> <td> Environment </td> <td> Lucy Njino (DRSRS) </td> <td> Principal contact </td> </tr> </table> # 5 DATA REPOSITORY The _**AfricanBioServices** _ repository is an active platform for storage and exchange of all data collected and generated through the project. Data in the repository is organized by WP in eight categories. All datasets obtained or generated within the scope of the _**AfricanBioServices** _ project are documented and deposited in the Data Repository, where they are available to all consortium members. **Figure 2.** The flow of information about data in the repository ## 5.1 STEP-BY-STEP * When uploading a dataset, users are required to fill in a web form providing metadata of the dataset as specified in the metadata template (ANNEX III). * A research plan with a description of the data collection methodology must be provided for all data sets and uploaded together with the data file. * Research plans are important to assure data quality. For each activity, the research plan describes the methodology and planned work according to the Grant Agreement; thus each activity should have a corresponding research plan.
To date, some activities share the same research plan; this will be handled before the datasets are uploaded into the database. * Regarding data quality validation, many of the historical data lack a clear methodology, so their quality is difficult to assess. The methodology for each respective dataset should be acquired in the future in order to determine its quality. * Users can comment on, or ask questions about, datasets in the comment section of each uploaded data file. The repository has a forum where users can post a request for specific missing datasets, or for new data that is not yet planned for. * The metadata of each dataset indicates its status. This can be either ‘not started’, indicating that the data exists but actual collection has not yet begun; ‘in progress’, indicating that the dataset is not yet complete, outdated, not sufficiently documented, etc.; or ‘complete’, indicating that the dataset is ready for further use. * The to-do list is used to describe what needs to be done to complete a dataset with status ‘in progress’. The metadata further indicates what the expected date of completion is, and documents the actual date of completion. # 6 UPLOAD SERVICE All relevant datasets are also included in the _**AfricanBioServices** _ upload service. This is a searchable web-based database for further normalised and standardised data files. It will be the foundation of the database, which will be a valuable tool for regional management officers by the end of the project. Standardizing variables is the first stage of building a database. There are several common variables that _must appear_ in all datasets that are to be collected by the _**AfricanBioServices** _ consortium (see ANNEX V, Internal Note 22). It is mandatory for every researcher to collect these data in the field or to make sure to obtain the information from the data provider. These variables assist researchers in joining various datasets from different categories, for example when one wants to relate rainfall data to the movement of animals. ## 6.1 STEP-BY-STEP * Each researcher sharing a dataset will register and shall have an account on the site. For security reasons, the researcher must be accepted by the data management group before using their account. * Sign in at http://www.bio.ntnu.no/tanzania/ * When logged in, the users are able to search for datasets by defined searches, and download datasets that are public (users choose to upload datasets as public or private) together with the associated variable description files. * Furthermore, the users are able to upload datasets and corresponding descriptions, and separately define them as private or public (public in the sense of being visible to all users of the upload service). ## 6.2 ADVANTAGE The advantage of the upload service is to ensure that the datasets have reached the first level of quality before the database is built. Furthermore, users are able to download what is already prepared, and to create relations between the desired tables by joining two or more of them. The upload service will function as a relational database with data files acting as tables, structured for download. # 7 HOW TO GET PASSWORDS AND USER NAMES **The following persons provide passwords and user names:** * Therese Vangstad (NTNU) - Internal web page / e-room ([email protected]).
Send email to orakel with the message ‘Therese Vangstad / password/username to AfricanBioServices e-room’ in the subject field of the email * Peter S. Ranke (NTNU) - Database: ([email protected]) * Joke Baker (RUG) - Data Repository: ([email protected]) # 8 DATA MANAGEMENT Data management is organised at two different levels. At the project level, the WP, task and sub-task leaders are responsible for handling and documentation of data on each activity and will follow the DMP for each activity within their WP. Data management at the consortium level is the responsibility of WP1 and is outlined in this section. ## 8.1 Data description The project distinguishes between input data, i.e. pre-existing datasets collected elsewhere that are used by the consortium members, and output data, i.e. newly collected or compiled datasets within the scope of the project. ### 8.1.1 Input data Existing data may be available from research institutes, national park management or previous research projects focusing on the Serengeti-Mara region. All existing datasets are uploaded “as they are” into the data repository and documented according to the metadata template defined in ANNEX III. For example: * Data from the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) project, 2011-2013 * GIS layers on rainfall, soils, vegetation, and infrastructure collected in previous research projects * DRSRS aerial surveys of wildlife and livestock for the Mara ecosystem (1977-2016) * Satellite-derived land use change analyses (GIS layers) for the Serengeti-Mara ecosystem * Crop and forest cover mapping in Narok County, Kenya * Land cover mapping for Masai Mara National Park (2012) * Rainfall data in the Serengeti ecosystem and Mara region in Kenya * TAWIRI aerial surveys of wildlife in the Serengeti. ### 8.1.2 Quality Assurance of Input data (historical data) A procedure for QA of historical data used in AfricanBioServices will be developed to ensure the reliability of such data. ### 8.1.3 Output data Output data is all data collected or generated by researchers within the project duration. Output data can comprise data collected or measured in the field or in a lab, but can also be the result of a model, spatial data, aggregated data (means, trends, etc. of input data), or program code and scripts. All datasets will be uploaded into the repository shortly after collection, and subsequently documented according to the metadata template defined in ANNEX III. ## 8.2 Quality assurance and quality control Quality assurance and quality control (QA/QC) will follow a standard procedure, assessing, and if needed, improving the quality of the data. The procedure checks: * that the dataset successfully moves from the repository to the upload service (a basic check of structure and common variables); * that the dataset is sufficiently described in a metadata file following the template; * that a variable description file accompanies the metadata; * that a research plan covers the methodology for all data collection, including the variables described in the metadata and the variable description; * that the data are free of errors, which will be searched for by visualising the data in statistical software (R Development Core Team 2017) for any obvious outliers (temporal, spatial, etc.); a minimal example of such a screen is sketched below. Each researcher will be responsible for responding to any remarks that are found during QA and QC for each data set and for correcting (or explaining) these.
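As an illustration of the outlier check above: the plan cites R, and an equivalent screen is sketched here in Python. The file and column names are hypothetical, not the project's actual common variables:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names for illustration only.
data = pd.read_csv("rainfall_serengeti.csv")  # expected columns: station, date, rainfall_mm

# Basic range check: daily rainfall should be non-negative and below a plausible maximum.
suspect = data[(data["rainfall_mm"] < 0) | (data["rainfall_mm"] > 300)]
print(f"{len(suspect)} suspect records flagged for review")

# Visual screen for obvious outliers, analogous to the R-based check described in the plan.
plt.plot(data["rainfall_mm"].to_numpy(), ".")
plt.ylabel("rainfall (mm)")
plt.savefig("rainfall_screen.png")
```

Records flagged by such a screen would then be returned to the responsible researcher for correction or explanation, as required by the QA/QC procedure.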
## 8.3 Standardization and metadata

We distinguish metadata at two levels: 1) metadata describing the variables and parameters in datasets and databases, and 2) metadata describing, in a standard way, the datasets that are used and/or generated during the project. Metadata at the dataset level is standardized in the repository template (ANNEX III) and includes:

* A unique system-based identifier
* Dataset name and description
* Information on when and by whom the data was uploaded
* Information on the current status of the data and a to-do list to complete the dataset
* Information on the access level of the dataset and special conditions for use
* Relevant information on the source of the data, i.e. the original owner/institute

DBG will evaluate the process. Data files uploaded to the repository should only be stored in a preferred file format that conforms to international standards (based on the KNAW DANS Preferred Formats overview, November 2015) to ensure future compatibility:

* Document (.txt; .pdf; .doc; .docx; .odt)
* Spreadsheet (.csv; .xls; .xlsx; .ods)
* GIS shapefile (.shp + tables)
* GIS raster data (.geotif; .img)
* Database (.csv; .sql; .mdb; .accdb)
* Picture (.jpg; .tif; .png)
* Audio (.wav; .mp3)
* Video (.avi; .mp4; .mov)

The file formats accepted by the upload service are specified in the upload procedure. In order to prepare data for the relational database, each researcher must follow the 'Guideline for data collection and description of variables' (see ANNEX V, Internal Note 22).

## 8.4 Data archiving, preservation and documentation

All data should be stored «as is» in the data repository, without changing the names of variables in the existing datasets. The repository is a common platform for online storage and exchange of data and is a product of the project in line with the Grant Agreement. An unambiguous interpretation of the data and contents will be secured by adding a metadata description for all data files. In this way, data will be recognizable to the researchers that originally collected them. Data and information for the project are available online via the web-based database hosted at NTNU (http://www.bio.ntnu.no/africanbioservices) and the data repository hosted at _http://africanbioservices.webhosting.rug.nl/HomePage_.

Keeping backups is an important data management task that ensures data safety by avoiding the risks of accidental deletion or failure of hard drives. It is recommended to keep at least two backup copies on external drives. NTNU and RUG have fully automated backup systems that will continue to maintain the _**AfricanBioServices**_ database and repository; however, TAWIRI and DRSRS will put in place a similar dedicated system to be able to host the database and repository as the project comes to its conclusion.

## 8.5 Archiving and preservation

At the end of the project all the data and information from the database and repository will be made available to TAWIRI and DRSRS and will be accessible through _http://www.tawiri.or.tz/_ and _http://www.environment.go.ke_ respectively. A prerequisite, however, is that the mentioned institutions have the technical capacity for both hosting and backup. Nowadays it is good scientific practice to archive every paper/chapter/thesis together with all primary and secondary data files that were used to produce it. This is required in the original data management plan of the project, and is also required by many journals.
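As a minimal sketch of such an archive bundle (all file names hypothetical), a paper can be zipped together with its primary and secondary data files and a short metadata record, loosely following the template fields from section 8.3, before deposit in the repository:

```python
import json
import zipfile
from datetime import date

# Hypothetical files belonging to one publication.
paper = "mara_landuse_2018.pdf"
data_files = ["landuse_raw.csv", "landuse_aggregated.csv", "analysis_script.R"]

# Metadata record loosely following the repository template fields;
# the unique identifier is assigned by the repository on upload.
metadata = {
    "title": "Land use change in the Mara ecosystem",
    "uploaded_by": "Researcher 1",
    "upload_date": date.today().isoformat(),
    "status": "complete",
    "access_level": "open",
    "source": "AfricanBioServices output data",
}

# Bundle the paper, the data files and the metadata into one archive.
with zipfile.ZipFile("mara_landuse_2018_archive.zip", "w") as archive:
    archive.write(paper)
    for data_file in data_files:
        archive.write(data_file)
    archive.writestr("metadata.json", json.dumps(metadata, indent=2))
```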
Proper archiving at the end of each activity is the responsibility of all WP leaders. The archives should be deposited in the project data repository.

## 8.6 Data access and sharing

Data sharing is a prerequisite for achieving the objectives, milestones and deliverables of the _**AfricanBioServices**_ project, in accordance with the Grant Agreement (Article 29). The project consortium is keen to share its research findings, given the ease of online data storage, accessibility, and dissemination to users; many institutions share research data to increase the reproducibility and visibility of their research. Metadata will be openly shared online to show what data are available. Specific datasets will be made available upon request; however, third parties will only access such materials after the project lifetime, as provided in the Grant Agreement (Article 3). In addition, data will only be shared after they are published by AfricanBioServices. Each of the consortium partners is required to adhere to the rules of engagement in data sharing as itemized in the Grant Agreement, i.e. protection of results (Article 27), confidentiality of results (Article 36), and processing of personal information (Article 39). Researchers within the consortium are encouraged to submit their manuscripts to an open access peer-reviewed scientific journal/publisher in accordance with the Grant Agreement (Article 29.2).

## 8.7 Ethical guidelines

The consortium partners must carry out their activities in compliance with the management of intellectual property (Article 23a) and ethics principles (Article 34), as well as observe the confidentiality of data as itemized in the Grant Agreement (Article 36).

### 8.7.1 Human subjects

Consortium partners that engage in interviews of persons as a means of acquiring information on livelihoods must have a standard COSTECH research clearance. If the subjects include vulnerable groups such as children, or if information on health issues is collected, researchers must seek ethics clearance approval from the National Institute for Medical Research (NIMR) prior to commencement of their research in Tanzania, and from the National Commission for Science, Technology and Innovation (NACOSTI) in Kenya. The ethics clearance application should include:

* Type of personal data and how it will be collected, stored and processed
* Recruitment process, inclusion/exclusion criteria for participation
* Detailed information on privacy/confidentiality and procedures that will be implemented for data collection, storage, access, sharing policies, protection, retention and destruction during and after the project
* Detailed informed consent procedures to be implemented

### 8.7.2 Handling of animals

All research procedures on wildlife will be in compliance with the TAWIRI Research Guidelines of 2012. Handling of animals will be conducted after receiving relevant recommendations from the Joint Management Research Committee (JMRC) to the TAWIRI Board, and permits from the Commission for Science and Technology (COSTECH). Subsequently, animal handling permits will be sought from the respective Management Authorities, including Tanzania National Parks (TANAPA), Ngorongoro Conservation Area Authority (NCAA) and Tanzania Wildlife Management Authority (TAWA), depending on the protected area within the Serengeti ecosystem where the capture will take place. Handling of livestock will be done with the consent of the owner, in consultation with the respective District Veterinary Officer (DVO).
Animal safety and animal welfare issues will be handled in accordance with the Tanzania Veterinary Act No. 16 of 2003. To undertake the research work in Kenya, approval will be obtained from KWS and any other relevant government institutions, including but not limited to NACOSTI and the Director of Veterinary Services (DVS) in the Ministry of Livestock Development. Animal safety and welfare considerations will be observed in accordance with relevant laws and regulations, including but not limited to the Animal Diseases Act 1965 (GOK, 1965. Animal Diseases Act Chapter 364 Laws of Kenya. The Government Printer, Nairobi, Kenya), the Prevention of Cruelty to Animals Act Chapter 36 (GOK, Prevention of Cruelty to Animals Act Chapter 36 Laws of Kenya. The Government Printer, Nairobi, Kenya), the Veterinary Surgeons and Veterinary Para-professionals Act 2011 (GOK, 2011. The Veterinary Surgeons and Veterinary Para-professionals Act. The Government Printer, Nairobi, Kenya) and the Wildlife Conservation and Management Act 2013 (GOK, 2013. The Wildlife Conservation and Management Act. The Government Printer, Nairobi, Kenya).

In Tanzania, the capture of wild animals will be done by competent wildlife veterinarians who are registered by the Veterinary Council of Tanzania (VCT), as stipulated under section 3 of the Veterinary Act No. 16 of 2003. These vets are members of the Tanzania Veterinary Association (TVA), associate members of the Wildlife Disease Association (WDA), and members of the Wildlife Health Specialist Group (WHSG) of the IUCN, and have undergone extensive training in the capture and handling of wildlife. Registered wildlife vets from TAWIRI are Dr. Robert Fyumagwa, Dr. Ernest Mjingo and Dr. Zablon Bugwesa Katale.

To ensure humane capture and handling of wild animals in Kenya, experienced and competent personnel in wildlife veterinary practice will be involved. These will comprise veterinarians, veterinary assistants, as well as capture rangers and drivers. The veterinarians will have to be registered by the Kenya Veterinary Board (KVB) as per the Veterinary Surgeons and Veterinary Para-professionals Act, 2011 (GOK, 2011: The Veterinary Surgeons and Veterinary Para-professionals Act. The Government Printer, Nairobi, Kenya). Where these veterinarians are not from KWS, they will have to seek clearance from the Service to undertake any wildlife veterinary work, and might be required to work under the supervision of a KWS veterinary officer. To get clearance, they will have to demonstrate the prerequisite experience and competence in handling wild animals as well as dangerous immobilisation drugs.

Animals will be immobilised with recommended drugs at recommended dosage rates, delivered remotely by projectile darts using darting systems that are gentle and cause minimal pain and trauma to the animals. Darting sites will be areas of the body with well-covered muscles, such as the hindquarters and shoulders. Immediately after the animals go down, they will be placed in sternal recumbency to decrease the incidence of bloat and regurgitation, and to protect the airways by decreasing the pressure of the abdominal viscera on the diaphragm. They will also be blindfolded to minimise stress from visual stimulation. The head will be placed low to allow saliva or regurgitated ruminal contents to drain out. Breathing will be monitored throughout the collaring procedure to ensure respiratory sufficiency and avoid hypoxia.
The body temperature will be monitored with a thermometer throughout the collaring procedure for signs of hyperthermia, a common problem during immobilisation.

## 8.8 Authorship guidelines

Authorship of all publications coming out of the _**AfricanBioServices**_ project shall follow the guidance given in the Vancouver recommendations. Because there are various ways to interpret the Vancouver agreement (e.g. whether authorship is acquired by creative efforts or not), no strict rules for authorship are stated in the DMP. However, the senior author on any manuscript developed in the name of _**AfricanBioServices**_ is strongly encouraged to contact all persons that have contributed to the whole process, to ensure that all potential co-authors (with a significant contribution) are contacted and asked about their involvement in the manuscript (the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; drafting the work; revising it critically for important intellectual content; final approval of the version to be published; agreement to be accountable for all aspects of the work).

## 8.9 Publish scientific paper(s) on the data

AfricanBioServices aims to publish scientific paper(s) on the data/database.

# 9 APPROVAL OF THE DMP

The DMP must be approved by the General Assembly.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0250_TRANSrisk_642260.md
# 1 INTRODUCTION

# 1.1 Changes with respect to the DoA

There are no changes from the DoA.

# 1.2 Introduction to the Data Management Plan

This Data Management Plan outlines how research data will be handled during the lifetime of the TRANSrisk project, and after the project is completed. TRANSrisk participates in the Horizon 2020 Open Research Data Pilot, and this plan has, therefore, been prepared under the principle that open access will be provided to research data, unless there are ethical or confidentiality issues that prevent us doing so. The Data Management Plan is a living document, and will be reviewed at least twice over the lifetime of the TRANSrisk project. These reviews will reflect the scientific outputs and supporting datasets created as the project progresses. The planned reviews will take place / have taken place:

* After the mid-point of the project (when significant amounts of data have been generated). This is the current document.
* At the end of the project (in preparation for permanent archiving of project results).

General procedures for managing all data created during TRANSrisk are detailed in section 2 of this plan. Specific datasets that we anticipate will be created during TRANSrisk are detailed in section 3, alongside their associated data management procedures.

# 2 GENERAL PROCEDURES FOR DATA MANAGEMENT

The following general procedures will apply to all data created as part of TRANSrisk.

# 2.1 Internal Data Management

All project data (with some exceptions) will be stored and managed on a project-specific implementation of the Alfresco Community document management system during the lifetime of the TRANSrisk project. This system has been implemented and is managed by the National Technical University of Athens (NTUA). Exceptions are:

* Modelling input files (see 3.2). These will be stored on the internal IT systems of each partner engaged in modelling activities.
* Stakeholder engagement documents (see 3.1). Partners are able to store non-confidential stakeholder engagement documents on a business intelligence system developed for the project (see 3.1.1).

Although NTUA manages the Alfresco system, ownership of the data stored on the system remains with the partner(s) who produced it. Ownership and use of TRANSrisk results is detailed in the project's Consortium Agreement. The TRANSrisk Alfresco implementation will be maintained for two years after the end of the project or until all partners agree to shut down the system (whichever is shorter). At this point relevant data will have been submitted to a repository, and all TRANSrisk data stored on Alfresco will be deleted from NTUA's systems.

# 2.2 General Standards and Metadata

Common guidelines for document management in TRANSrisk have been drawn up as part of a TRANSrisk project manual, and are included as Appendix A of this Data Management Plan. These guidelines include:

* File types.
* File naming conventions.
* Version control.
* File and folder organisation guidelines (for the document management system).

# 2.3 Data Backup

All TRANSrisk data will initially be stored in a project-specific implementation of the Alfresco Community document management system. The Alfresco server used has full version control and a backup of previous versions (full backups rather than differential), as well as retaining a backup copy of every deleted or overwritten piece of content in its repository.
This allows instant recovery of documents either through the Alfresco interface or the administration interface. The TRANSrisk implementations of Alfresco and SpagoBI (see 3.1.1) are located on servers owned and operated on the EPU-NTUA premises in Athens. Data redundancy on a server-wide scale is achieved by the use of four SAS 600GB hard disk drives in a RAID 10 configuration - this ensures that no data is lost if one drive fails. Offsite backups of the server are taken on a weekly basis. More information about Alfresco can be found at the Alfresco website.

# 2.4 Distribution, Archiving and Preservation

All TRANSrisk data will be made freely available, with some exceptions as noted in section 3 of this Data Management Plan. Data will be released under a Creative Commons 'BY' license, which allows free use of the released data under the condition that the data is attributed to the original author. Data will initially be distributed through the TRANSrisk public website. The website is managed by the National Technical University of Athens, who have committed to maintaining the website for 2 years after the formal end of the project.

Data will be archived in the Zenodo data repository. Due to the multi-disciplinary nature of TRANSrisk's work we have not identified any discipline-specific repositories to deposit data in; however, this will be kept under review as the project progresses. In addition to the above, the University of Sussex, as project coordinator, has been in consultation with a number of other EU FP7 and Horizon 2020 projects about setting up a common platform for posting project outputs. These discussions have led to the creation of the _http://climatechangemitigation.eu/_ website by the CARISMA project, where short articles on TRANSrisk outputs are regularly posted.

# 3 DESCRIPTIONS AND PROCEDURES FOR TRANSRISK DATASETS

We have identified three broad categories of dataset that will be produced during TRANSrisk's research and dissemination activities.

# 3.1 Stakeholder Engagement Outputs

## 3.1.1 Data set description

Stakeholder engagement data will underpin the TRANSrisk country case studies, which in turn will inform scientific paper outputs. This dataset consists of outputs from stakeholder workshops and individual stakeholder interviews. The main data outputs will be:

* Interview / workshop audio recordings. Audio files, MB file sizes.
* Interview / workshop transcripts and minutes. Text files, KB file sizes.
* Interview / workshop images (flipcharts, whiteboards, posters). Image files, MB file sizes.
* Online and telephone survey data. Tabular data files, KB file sizes.
* Written summaries of stakeholder engagement. Text and image files, KB and MB file sizes.

Guidance on the electronic file types to be used is provided in the TRANSrisk project manual document 'Guidelines for Document Management in TRANSrisk', which is included as Appendix A of this Data Management Plan. Partners also have the option of using an enhanced stakeholder database for non-confidential stakeholder information. This has been created within the open source SpagoBI Business Intelligence platform, implemented to accommodate TRANSrisk's specific needs, as part of project tasks 2.3 and 6.1.
The system offers a wide variety of analytical tools for reporting, multidimensional analysis, charts, KPIs, interactive cockpits, ad-hoc reporting, location intelligence, free inquiry, data mining, network analysis, ETL, collaboration, office automation, master data management and external processes. Users can access the Microsoft SQL Server database that holds project-level data (e.g. that generated in Deliverables D5.1 or D6.1), or perform their own ad-hoc analyses on data they upload in various formats, e.g. Microsoft Excel files.

## 3.1.2 Standards and metadata

Common guidelines for standards and metadata in TRANSrisk are described in section 2.2. Metadata is 'data about data', or information that describes the content of a document. Metadata will be created both automatically and through user input as part of the system used for document management in TRANSrisk. Open access data from this dataset will include associated metadata (authors, dates, versions, tags, etc.), and a data catalogue will be created to aid discoverability.

## 3.1.3 Data sharing

Use of the raw data produced by stakeholder engagement is described in the TRANSrisk Ethics Requirements, document D1.2. This states that _'The data – including interview recordings, notes, survey responses and comments from stakeholder workshops – will be stored in accordance with UK data protection requirements and we will ensure that no identifiable data will be stored longer than required. After the completion of the research the data will be destroyed'_. This raw stakeholder data will be restricted access, available only to relevant TRANSrisk researchers. Anonymising this data is possible in principle, but unfeasible due to the volume of data that is likely to be produced. Data stored on the enhanced stakeholder database described in 3.1.1 will also be destroyed after completion of the research. Processed or compiled data (which does not name individuals) from stakeholder engagement will, however, be made publicly available. This will take place using the processes described in section 2.4.

Note that the relevant EU legislation, the Data Protection Directive (1995), was implemented into UK law by the Data Protection Act 1998 (DPA). UK legislation is therefore in line with data protection law across all EU member states. The UK's decision to leave the EU will not have any impact on this area of legislation before TRANSrisk's research work concludes.

## 3.1.4 Archiving and preservation

Common guidelines for archiving and preservation in TRANSrisk are described in section 2.4. It is anticipated that the volume of stakeholder engagement output data archived will be less than 1GB, consisting mainly of PDF and .docx documents.

# 3.2 Modelling Inputs and Outputs

## 3.2.1 Data set description

TRANSrisk will use a suite of energy system models, soft-coupled to macroeconomic models. The data relating to these models falls into two broad categories:

* Model inputs. Historical economic, social, demographic and environmental data used to baseline and calibrate the models. Tabular data and databases, GB file sizes.
* Model outputs. Documents describing the results of the modelling. Tabular data, text and image files, MB file sizes.

Note that the models themselves could also be considered research data. Some of the models used are open source and are free to download and use, whilst others are proprietary models that are commercially confidential.
## 3.2.2 Standards and metadata

Model inputs – model inputs are drawn from a wide variety of sources, for example Eurostat, OECD, National Statistics Offices, EU databases and IMF databases. Short summaries of the model and a data dictionary will be created for each model used in TRANSrisk, and these will be made publicly available. See Table 1 for a list of the datasets used in the models.

**Table 1: List of databases used in models for TRANSrisk**

Model outputs – model outputs will follow the common document management standards for TRANSrisk, as described in section 2.2 of this Data Management Plan. Where the models themselves are open source, the metadata will also include links to the model's download and documentation web pages.

## 3.2.3 Data sharing

Model inputs – model input files will not be publicly shared by TRANSrisk; however, a data dictionary will be created for each model to detail what data inputs each model uses, including links to the dataset where these are in the public domain. The reason for not (directly) sharing these files is that, in most cases, they are not outputs of the project (they are created by other bodies) and also that the file sizes can be very large (GBs).

Model outputs – model outputs will be made available initially through the TRANSrisk internal document management system, and later on the TRANSrisk public website via the "TRANSrisk Models" webpage.

## 3.2.4 Archiving and preservation

Model outputs will follow the common standards for archiving TRANSrisk data, as described in section 2.4 of this Data Management Plan. It is anticipated that the volume of data archived will be between 1 and 5GB, consisting mainly of PDF, .docx and tabular data documents.

# 3.3 Project Output Papers

## 3.3.1 Data set description

Output papers will be the main outputs of TRANSrisk's scientific and communications work. They include (but are not limited to):

* Project deliverables.
* Scientific publications.
* Conference and workshop presentations.
* Working papers.
* Commentaries.
* Policy briefs.
* Online articles.
* Press releases.
* Newsletters.
* Leaflets/flyers.
* Posters.
* Videos.
* Info-graphics.

## 3.3.2 Standards and metadata

Output papers will follow the common document management standards for TRANSrisk, as described in section 2.2 of this Data Management Plan. Scientific publications deposited in a repository(s) will include bibliographic metadata, which will follow the format provided by Horizon 2020 guidance, namely:

* The terms "European Union (EU)" and "Horizon 2020".
* The name of the action, acronym and grant number.
* The publication date, and length of embargo period if applicable.
* The authors.
* A persistent identifier.
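A minimal sketch of such a bibliographic metadata record (all values other than the action name, acronym and grant number are hypothetical placeholders):

```python
# Bibliographic metadata for one scientific publication, following the
# Horizon 2020 fields listed above. Dates, authors and identifier are
# hypothetical placeholders.
publication_metadata = {
    "funding": "European Union (EU), Horizon 2020",
    "action": "Transitions Pathways and Risk Analysis for Climate Change "
              "Mitigation and Adaptation Strategies",
    "acronym": "TRANSrisk",
    "grant_number": "642260",
    "publication_date": "2017-09-01",  # hypothetical
    "embargo_months": 6,               # the maximum allowed by this plan
    "authors": ["Author One", "Author Two"],
    "persistent_identifier": "doi:10.5281/zenodo.000000",  # placeholder
}
```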
## 3.3.3 Data sharing

Project output papers will be made available initially through the TRANSrisk internal document management system, and later on the TRANSrisk public website (see section 2.4). Planned arrangements include:

* Project deliverables with public access will be available via the "TRANSrisk results" webpage, which is included in the "Virtual Library" section of the website.
* Conference and workshop presentations will be uploaded to the "Events" section of the website.
* Dissemination material (newsletters, press releases, leaflets/flyers, videos etc.) distributed or presented at internal and external events will also be made available via the same webpage.

Scientific publications that are derived from the project will be offered to either open access journals or peer-reviewed journals which offer open access with delay. They will also be placed in a repository at the same time as publication (see section 2.4), with open access either offered immediately (if copyright allows) or after an embargo period of no longer than 6 months. Where copyright allows they will also be made available through the TRANSrisk public website.

## 3.3.4 Archiving and preservation

Project output papers will follow the common standards for archiving TRANSrisk data, as described in section 2.4 of this Data Management Plan. Where copyright allows, this will include scientific publications. It is anticipated that the volume of data archived will be between 1 and 5GB, consisting mainly of PDF, .docx, and tabular data documents.

**Appendix A**

# TRANSITIONS PATHWAYS AND RISK ANALYSIS FOR CLIMATE CHANGE MITIGATION AND ADAPTATION STRATEGIES

**Project Manual: Section 4**

**Guidelines for Document Management in TRANSrisk**

# 1 Introduction

## 1.1 Purpose of This Document

This document provides guidance to TRANSrisk partners on how project documents should be produced, named, organised and managed. It assumes readers are familiar with the Alfresco Enterprise Content Management System used for TRANSrisk – if you are not familiar with this, please read section 5 of the project manual (Alfresco User Guide) first. This document forms part of the TRANSrisk project manual.

## 1.2 Why is Document Management Important?

TRANSrisk will produce a large number of documents, for example deliverables, scientific papers, working documents, meeting notes, etc. Many of these documents will be worked on by a number of different project partners. It is therefore essential that all partners are able to find, identify and access the documents they need, and can collaborate with partners on document development in a structured way. TRANSrisk has a commitment to open data as part of the grant agreement, which includes a long-term commitment to data availability. Researchers from outside of the TRANSrisk partnership may therefore be accessing and using our documents for a number of years to come. This again means that documents need to be well organised, identifiable and accessible.

# 2 SUMMARY OF DOCUMENT MANAGEMENT IN TRANSRISK

When you begin to work on a document please follow the document management process outlined in figure 1:

**Figure 1 – The TRANSrisk Document Management Process**

Create:
* Use the templates provided in order to create a new document, if appropriate (3.1).
* Consider whether you need to implement version control (3.2).
* Add and format metadata, if necessary (3.3).

Save:
* Save your document using a common file type (4.1).
* Name your document using the TRANSrisk naming structure (4.2).

Upload:
* Check if an older version of your document already exists on Alfresco. If it does, overwrite it (5.1).
* Consider where (in what folder) your document needs to be saved in Alfresco. If necessary, create a new sub-folder for it (5.2).
* Correct the name, and add a description, author and appropriate tags to your document on Alfresco (5.3).

Edit:
* Lock documents on Alfresco whilst you are editing them (6.1).
* Do not leave files locked for longer than you need to edit them. If you have to stop editing, upload a draft version and unlock the file for others to use (6.1).

# 3 CREATE

## 3.1 Templates

A number of templates have been created for TRANSrisk documents, e.g. templates for deliverables, presentations, etc.
Please use these templates, as they ensure TRANSrisk documents share a common visual identity. The templates are available on Alfresco ( _link_ ).

## 3.2 Version Control

Important documents should use version control. This allows development of the document to be tracked and the current status of a document to be easily identified. A document should use version control if:

* It is a deliverable, or other key output of a work package.
* It is a key document for any part of TRANSrisk, for example project guidance.
* There will be significant collaboration on the final document, i.e. there will be several different versions before the final document is produced.

If a document is short, produced by a single author and/or unlikely to be revised, then it does not need to use version control. If you use version control, please do the following:

* Include the version number in the document file name (see section 4.2), and/or;
* Include a version control table in your document (see figure 2 below):
  * For text documents, place the table on the first page.
  * For Excel spreadsheets, place the table in a separate tab named 'About'.

For simple and/or short documents you may simply use version control in the file name, whilst longer documents should use a version control table. The version number should follow the following format:

* _Draft_ versions of the document should be numbered 0.1, 0.2, 0.3, etc.
* The _final_ version of the document should be numbered 1.0.
* _Minor revisions_ of the document should be numbered 1.1, 1.2, 1.3, etc.
* _Major revisions_ of the document should be numbered 2.0, 3.0, etc.

**Figure 2 – An Example of a Version Control Table**

<table> <tr> <th> **Version Number** </th> <th> **Author(s)** </th> <th> **Purpose/ Change** </th> <th> **Date** </th> </tr> <tr> <td> 0.1 </td> <td> Ed Dearnley </td> <td> Initial document produced </td> <td> 09/12/15 </td> </tr> <tr> <td> 0.2 </td> <td> Jenny Lieu </td> <td> Updated/ corrected content </td> <td> 12/12/15 </td> </tr> <tr> <td> 0.3 </td> <td> Ed Dearnley </td> <td> Final draft for consultation </td> <td> 15/12/15 </td> </tr> <tr> <td> 1.0 </td> <td> Ed Dearnley </td> <td> Final approved version </td> <td> 06/01/16 </td> </tr> <tr> <td> 1.1 </td> <td> Jenny Lieu </td> <td> Revised to reflect new project roles </td> <td> 25/03/16 </td> </tr> </table>

## 3.3 Metadata

Metadata is 'data about data', or information that describes the content of a document. Under the TRANSrisk grant agreement we have a commitment to open data, and all but the most basic documents should be created with the assumption that a researcher from outside of TRANSrisk may need to quickly understand the content and context of a document. Text documents created using TRANSrisk templates are unlikely to need any additional metadata, and any (other) text documents that have an executive summary and use version control are also likely to have sufficient metadata. Some other document types may, however, need additional metadata. Examples include:

* Tabular data (spreadsheet) files should have an additional 'notes' or 'readme' tab containing a brief description of the file contents, plus any version control information.
* Audio, video and image files should be accompanied by a 'readme' text file that briefly explains the context of the files (for example when, why and by whom the files were created); a minimal sketch of such a file follows.
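A minimal sketch of generating such a 'readme' companion file (the helper function and example values are hypothetical):

```python
from pathlib import Path

def write_readme(data_file: str, when: str, why: str, who: str) -> None:
    """Write a short 'readme' text file next to a data file, briefly
    explaining when, why and by whom the file was created."""
    readme_path = Path(data_file).with_suffix(".readme.txt")
    readme_path.write_text(
        f"File: {data_file}\n"
        f"Created: {when}\n"
        f"Created by: {who}\n"
        f"Context: {why}\n"
    )

# Hypothetical example for a workshop audio recording.
write_readme(
    "stakeholder_workshop_audio.wav",
    when="15/03/2016",
    why="Audio recording of a TRANSrisk stakeholder workshop.",
    who="Work package 2 team",
)
```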
Some scientific disciplines have their own recommended standards for metadata. Guidance on how these standards apply to TRANSrisk is being sought, and this guide will be updated as necessary.

# 4 SAVE

## 4.1 File Types

Files used in TRANSrisk documents should be commonly used or open file types. If the file uses compression (e.g. audio or visual files) this should ideally be lossless compression. Examples are:

* Text documents – Microsoft Word (.doc or .docx), Rich Text Format (.rtf) or PDF (.pdf).
* Tabular data – Microsoft Excel (.xls or .xlsx) or Comma Separated Values (.csv).
* Images – TIFF (.tif) or PNG (.png) with lossless compression (JPEG (.jpeg or .jpg) can be used if the file was created in this format, e.g. output from a digital camera).
* Audio – Free Lossless Audio Codec (FLAC) (.flac) or WAV (.wav) (MP3 can be used if the file was created in this format, e.g. output from a Dictaphone).
* Video – MPEG-4 (.mp4).

Using these file types helps to ensure that both current and future users of the data will be able to access documents produced by TRANSrisk. Some other file types may only be accessible using specialist software that may not be available in years to come. A full list of acceptable file types can be seen on the UK Data Archive website ( _link_ ).

## 4.2 File Names

It is important that documents are clearly named so partners can easily identify what the document is, what version it is and when it was produced. To do so, please use the following structure for naming documents:

* ( _Deliverables only_ ) Deliverable number. If a document is a deliverable please start with the number of the deliverable, e.g. 'D.1.2.'.
* Title of the document. Please use a concise, descriptive name and replace spaces with underscores.
* Status of the document. This can be:
  * (DRAFT) The document is still being worked upon.
  * (FINAL) This is the final version of the document.
  * (SUBMITTED) For _deliverables only_, this is the submitted version of the document.
* ( _Documents using version control only_ ) Version number.
* Date the document was produced. Please add the date in 'Day Month Year' format, e.g. 301215.

Examples of documents named using this structure could be:

* 'D.1.2.Ethics_Requirements(SUBMITTED)_v1.0_251115'.
* 'TRANSrisk_Management_Board_Meeting_Notes(FINAL)070215'.

Note that you can change the name of a file once it's been uploaded to Alfresco by clicking on 'Edit Properties'. Editing the name does not break links to the document that you (or anyone else) might have sent out. Also note that Alfresco adds its own version numbers to documents updated on the site, i.e. initially documents will be listed as version '1.0' and updated to '1.1' or '2.0' when a new version is uploaded. However, it is still important to add dates and (if necessary) version numbers, as documents need to be identifiable outside of the Alfresco system.

# 5 UPLOAD

## 5.1 Checking Whether a File Already Exists on Alfresco

Before you upload a document please check to make sure a previous version of the document is not already in the Document Library. If it is, overwrite the previous version - please do not upload multiple versions of the same document with different names. Alfresco keeps old versions of documents when a new version is uploaded, so it is possible to access the older versions even if they are overwritten.
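The naming structure from section 4.2 can also be checked mechanically at this point; a minimal sketch, where the pattern simply encodes the convention and examples above:

```python
import re

# Pattern for the section 4.2 naming structure: optional deliverable
# number, title, status, optional version number, and date (DDMMYY).
NAME_PATTERN = re.compile(
    r"^(?P<deliverable>D\.\d+\.\d+\.)?"     # e.g. 'D.1.2.'
    r"(?P<title>[A-Za-z0-9_]+)"             # concise title, underscores for spaces
    r"\((?P<status>DRAFT|FINAL|SUBMITTED)\)"
    r"(?:_v(?P<version>\d+\.\d+))?"         # e.g. '_v1.0' (version-controlled docs)
    r"_?(?P<date>\d{6})$"                   # e.g. '251115'
)

for name in [
    "D.1.2.Ethics_Requirements(SUBMITTED)_v1.0_251115",
    "TRANSrisk_Management_Board_Meeting_Notes(FINAL)070215",
]:
    match = NAME_PATTERN.match(name)
    print(name, "->", match.groupdict() if match else "does not follow the convention")
```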
## 5.2 Choosing a Folder

Documents need to be logically arranged within the Document Library. Please familiarise yourself with the sub-folder structure within your work package folder and:

* If an existing folder is suitable for your file, please use it.
* If a new folder is needed, please create it, giving it a concise, descriptive name.

Please do not add additional folders to the top level (work package folders) and second level (deliverables folders) of the Document Library without seeking the agreement of the Project Manager (top level) or Work Package Leader (second level).

## 5.3 Adding a Description, Author and Tags

Once your file has been uploaded to Alfresco please click on 'edit properties' and:

If replacing an existing file:

* Correct the file name. Alfresco keeps the original name of the file you have replaced, so please correct this to show the file has been updated.

If uploading a new file:

* Write a concise description of the document's contents.
* Ensure you are listed as the author.
* Add tags appropriate to the content of the document.

Tags are words or phrases that describe the content of the document. Tags serve two functions in Alfresco: firstly they aid the search function (e.g. if a document is tagged 'deliverable' it will come up when someone searches for 'deliverable') and secondly they are displayed alongside a document, therefore providing a quick overview of content. Clicking on the tags button brings up a list of tags that have been used in previous documents. You can also add your own (please look at the list of existing tags before adding your own). When adding tags please use the following guidelines:

* Does the document relate to a particular work package or deliverable? If so, please tag it as such (for example, 'd.1.1', 'wp2').
* Does the document relate to any TRANSrisk working groups (for example, does it involve 'work package leaders', the 'management board', the 'european commission', etc)?
* What kind of document is it (for example, is it a 'briefing', a 'deliverable', a 'presentation', 'working paper', etc)?
* What kind of activity does the document relate to (for example, a 'workshop', 'telco', 'stakeholder' event, 'model', etc)?
* Finally, what kind of product does the document cover (for example, is it a 'case study', the 'ethics requirements' or the project 'manual')?

A list of tags on Alfresco at the time of writing is shown in figure 3; however, please feel free to add your own tags if there are no suitable tags already. When adding your own tags please keep them short (no long phrases) and make sure they are spelt correctly.
**Figure 3 – Tags on Alfresco, February 2016**

<table> <tr> <th> **Work package** </th> <th> **Deliverable** </th> <th> **TRANSrisk Group** </th> <th> **Document Type** </th> <th> **Activity Type** </th> <th> **Product Type** </th> </tr>
<tr> <td> WP1 </td> <td> D1.1 </td> <td> consortium </td> <td> briefing </td> <td> call </td> <td> case study </td> </tr>
<tr> <td> WP2 </td> <td> D1.2 </td> <td> european commission </td> <td> data </td> <td> meeting </td> <td> document management </td> </tr>
<tr> <td> WP3 </td> <td> D2.1 </td> <td> management board </td> <td> deliverable </td> <td> milestone </td> <td> ethics requirements </td> </tr>
<tr> <td> WP4 </td> <td> D2.2 </td> <td> partnership </td> <td> email </td> <td> qualitative </td> <td> grant agreement </td> </tr>
<tr> <td> WP5 </td> <td> D2.3 </td> <td> scientific advisory board </td> <td> guidance </td> <td> quantitative </td> <td> manual </td> </tr>
<tr> <td> WP6 </td> <td> D2.4 </td> <td> work package leaders </td> <td> journal </td> <td> stakeholder </td> <td> model </td> </tr>
<tr> <td> WP7 </td> <td> D2.5 </td> <td> </td> <td> mailing list </td> <td> telco </td> <td> subcontract </td> </tr>
<tr> <td> WP8 </td> <td> D3.1 </td> <td> </td> <td> newsletter </td> <td> workshop </td> <td> </td> </tr>
<tr> <td> </td> <td> D3.2 </td> <td> </td> <td> note </td> <td> </td> <td> </td> </tr>
<tr> <td> </td> <td> D3.3 </td> <td> </td> <td> presentation </td> <td> </td> <td> </td> </tr>
<tr> <td> </td> <td> D4.1 </td> <td> </td> <td> template </td> <td> </td> <td> </td> </tr>
<tr> <td> </td> <td> etc. </td> <td> </td> <td> working paper </td> <td> </td> <td> </td> </tr> </table>

**Figure 4 – Adding a Description, Author and Tags to an Alfresco Document**

# 6 EDIT

## 6.1 Locking and Unlocking a Document

If two or more people are working on a document at the same time there is a risk that changes made by one person can be overwritten by another person. For this reason Alfresco has the ability to lock a document whilst you are working on it. Please always use the locking function if you intend to edit a document on Alfresco. To lock a document, open it on Alfresco and click 'edit offline' (see figure 5). The document will be downloaded to your computer and other Alfresco users accessing the document will see that it has been locked by you. Once you've finished your edits, upload the new version of the document.

**Figure 5 – Locking a Document Using 'Edit Offline'**

Please do not leave documents locked for longer than absolutely necessary. If you have not finished your edits by the time you leave at the end of the working day, leave for a meeting, etc., please upload a temporary 'work in progress' document to Alfresco. If you change your mind and do not need to make any edits to the document, return to the document on Alfresco and click 'cancel editing'.

# 7 ASSISTANCE

If you need assistance with this guidance please contact Ed Dearnley (TRANSrisk Project Manager) on:

* Email - [email protected]
* Phone – (+44) 1273 877983
* Skype - ed.dearnley.work
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0251_AgriDemo-F2F_728061.md
<table>
<tr> <th> </th> <th colspan="10"> **METADATA (adapted Dublin Core)** </th> <th colspan="3"> **ADDITIONAL INFO** </th> </tr>
<tr> <th> </th> <th> **ID (also naming of the file)** </th> <th> **Title** </th> <th> **Extra description** </th> <th> **Project** </th> <th> **Interviewer** </th> <th> **Translator** </th> <th> **Date** </th> <th> **Language** </th> <th> **Country** </th> <th> **Labels** </th> <th> **Storage location** </th> <th> **Access rights** </th> <th> **Sensitive / non-sensitive data** </th> </tr>
<tr> <td> info </td> <td> For raw or almost-raw data: Country code - Date (YYYYMMDD) - Type - Number. Country code info: https://upload.wikimedia.org/wikipedia/commons/8/86/Europe_ISO_3166-1.svg. For other cases: use CamelCase and leading zeros as a general rule. </td> <td> </td> <td> </td> <td> Agridemo / PLAID </td> <td> E.g. name of interviewer </td> <td> E.g. name of translator </td> <td> Date of interview (not translation) </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Researcher 1, Researcher 2 </td> <td> Choose between: non-coded personal data; coded personal data; de-identified data / non-personal data </td> </tr>
<tr> <td> _examples_ </td> <td> _BE20170125Interview001NLRaw_ </td> <td> _Interview of farmer 001 (audio)_ </td> <td> _dutch original_ </td> <td> _Agridemo_ </td> <td> _…_ </td> <td> _…_ </td> <td> _25/01/2017_ </td> <td> _Dutch_ </td> <td> _Belgium_ </td> <td> </td> <td> _B2DROP (personal)_ </td> <td> _Researcher 1_ </td> <td> _non-coded personal data_ </td> </tr>
<tr> <td> </td> <td> _BE20170125Interview001NL_ </td> <td> _Interview of farmer 001 (transcript)_ </td> <td> _dutch original, with ID instead of names_ </td> <td> _Agridemo_ </td> <td> _…_ </td> <td> _…_ </td> <td> _26/01/2017_ </td> <td> _Dutch_ </td> <td> _Belgium_ </td> <td> </td> <td> _B2DROP (shared)_ </td> <td> _Researcher 2_ </td> <td> _coded personal data_ </td> </tr>
<tr> <td> </td> <td> _BE20170125Interview001_ </td> <td> _Interview of farmer 001 (EN transcript)_ </td> <td> _english translation_ </td> <td> _Agridemo_ </td> <td> _…_ </td> <td> _…_ </td> <td> _25/01/2017_ </td> <td> _English_ </td> <td> _Belgium_ </td> <td> </td> <td> _B2DROP (shared)_ </td> <td> _Researcher 1, Researcher 2_ </td> <td> _coded personal data_ </td> </tr>
<tr> <td> </td> <td> _GB20170319FocusGroup001_ </td> <td> _Focus group transcript 001 (transcript)_ </td> <td> _…_ </td> <td> _Agridemo_ </td> <td> _…_ </td> <td> _…_ </td> <td> _19/03/2017_ </td> <td> _English_ </td> <td> _United Kingdom_ </td> <td> </td> <td> _B2DROP (shared)_ </td> <td> _Researcher 2, Researcher 3_ </td> <td> _non-coded personal data_ </td> </tr>
<tr> <td> </td> <td> _InterviewsGeneralConclusions_ </td> <td> _General conclusions of the interviews with farmers_ </td> <td> _…_ </td> <td> _Agridemo_ </td> <td> _…_ </td> <td> _…_ </td> <td> </td> <td> _English_ </td> <td> </td> <td> </td> <td> _B2DROP (shared)_ </td> <td> _everyone in Agridemo_ </td> <td> _de-identified data / non-personal data_ </td> </tr>
</table>

AgriDemo-F2F (n° 728061) DMP, version 2 – June 2019
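The ID convention in the table above (country code, date as YYYYMMDD, type, zero-padded number, optional language code and optional 'Raw' suffix) can be validated mechanically; a minimal sketch:

```python
import re

# Pattern for raw or almost-raw data IDs, following the convention and
# examples in the table above.
ID_PATTERN = re.compile(
    r"^(?P<country>[A-Z]{2})"      # ISO 3166-1 country code, e.g. 'BE'
    r"(?P<date>\d{8})"             # date as YYYYMMDD
    r"(?P<type>[A-Za-z]+?)"        # type in CamelCase, e.g. 'Interview'
    r"(?P<number>\d{3})"           # zero-padded number, e.g. '001'
    r"(?P<language>[A-Z]{2})?"     # optional language code, e.g. 'NL'
    r"(?P<raw>Raw)?$"              # optional suffix marking raw data
)

for file_id in ["BE20170125Interview001NLRaw", "GB20170319FocusGroup001"]:
    match = ID_PATTERN.match(file_id)
    print(file_id, "->", match.groupdict() if match else "invalid ID")
```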
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0254_EuTravel_636148.md
#### 1\. Executive Summary

The EuTravel vision is to contribute towards the realisation of a sustainable and open single European market for mobility services by:

1. enabling travel users (both businesses and private) to easily organise a door-to-door pan-European multimodal trip in accordance with their own set of criteria, including environmental performance;
2. providing multimodal travel service providers an easy and cost-effective way to deliver optimal customised services to cater for any type of specialised multimodal travel needs;
3. supporting policy decision making by contributing to the implementation of standards and regulations and facilitating fact-based EU policy making.

The present document refers to EuTravel Deliverable D4.4 - Communications programme and provides a detailed description of the communication and dissemination objectives, the strategy followed and the activities during the lifetime of the project that contribute to the realisation of the project vision and the maximisation of its impact and the exploitation of results. The present report has been developed in the framework of Task 4.4 Dissemination Strategy, Communication Plan and Evaluation Report. It consolidates the dissemination strategy and plan followed by the EuTravel consortium for the dissemination and communication of project developments and achieved results.

The EuTravel dissemination plan sets out the objectives, target groups, tools, materials, and channels to be used in order to effectively spread EuTravel achievements to an EU-wide audience. The aim of the plan has been to identify and organise the dissemination activities to promote and share the EuTravel project results, while at the same time providing project partners with guidelines for the execution of these activities. Many targeted activities have taken place during the course of the project, including:

* The set-up of the EuTravel website
* Dissemination through press releases, online articles and social media channels
* Publication of six scientific papers presented at related conferences and production of three white papers
* The organisation of a successful midterm Conference, involving EuTravel Forum members
* Dissemination through events, workshops and meetings - over twenty meetings and workshops were organised, including a Focus Group Meeting dedicated to people with mobility problems
* Promotion of the project at events involving market representatives and business stakeholders
* Liaison actions with other projects and initiatives, including the IT2Rail project
* Knowledge transfer through the online Knowledge Base – Observatory and e-Learning portal.

An integral part of D4.4 is also the data management plan of the project. The document is structured in an appropriate way so as to present clearly the EuTravel dissemination objectives, strategy and rollout plan, the tools, materials and channels used, the target audiences, links to similar projects and initiatives and, finally, the monitoring mechanisms. Deliverable D4.4 consists of the following parts, in alignment with the DoA:

* **Part A:** Project Website and Dissemination Materials (Chapter 3).
* **Part B:** Dissemination Strategy, Communication Plan and Evaluation Report. Includes all components of the dissemination strategy and the communication activities planning, and describes in detail all activities carried out. This part includes the description of clustering and liaising actions with other relevant research projects and the evaluation of the dissemination targets attainment (Chapter 4).
* **Part C:** Data Management Plan (Chapter 5).

The document concludes with the Annex, presenting the project dissemination materials.

#### 2\. Objectives of work-package and task

The overall objective of WP4 - EuTravel Sustainability and Communications Programme is to ensure the sustainable development and exploitation of results during and after the end of the project, through an inclusive stakeholder engagement strategy, communication activities and a set of tools supporting the work of the Exploitation Board and Advisory Committee to involve representative stakeholders in the EuTravel Forum and create a base of potential users. WP4 activities are linked to the work carried out and the outcomes of WP1 - Optimodality Framework and WP3 - EuTravel Living Labs (Figure 2-1).

_Figure 2-1: Workplan Rationale_

The specific implementation objectives of WP4 are:

1. Developing a Stakeholder Engagement Strategy for the EuTravel Forum and providing support measures, including an Online Learning Program.
2. Organising annual evaluations of project outputs and providing Policy, Standardization and Research Recommendations.
3. Providing an Impact Assessment - Exploitation Planning - Sustainable Development Roadmap, including:
   1. An Impact Assessment method and tools (to be available through the Ecosystem) to monetise the commercial impacts of the innovation concepts and related Value-added Services as applied in the Living Lab (pre- and post-measurements);
   2. Individual partner and collective exploitation strategies and plans, and a EuTravel sustainability roadmap;
   3. A Communications programme and the EuTravel website as the project repository, sustaining itself after the project lifetime.

Figure 2-2 presents the relationship of WP4 tasks and related deliverables. D4.4 defines the project dissemination objectives, target groups and the role of the EuTravel Forum driving the EuTravel Engagement Strategy (D4.1), and identifies the communication channels and tools to promote the results consolidated in deliverables D4.2 Policy, Standardisation and Research Recommendations and D4.3 Exploitation Plan - Sustainable Development Roadmap.

_Figure 2-2: WP4 deliverables relationship_

This report consolidates the description of activities and outcomes of T4.4 Dissemination Strategy, Communication Plan and Evaluation Report. The following section is extracted from the project's Description of Action (DoA) and demonstrates how each sub-task of T4.4 is addressed in this deliverable.

_Table 2-1: Deliverable's adherence to EuTravel Work Plan_

<table> <tr> <th> **Main sub-task activities as described in DoA** </th> <th> **Document Reference** </th> </tr> <tr> <td> ST4.4.1 Set dissemination objectives / goals, identify target groups, communication channels and tools and finalise dissemination matrix. Conduct initial and final version of data management plan. </td> <td> Chapter 4.1: Dissemination Strategy Section 4.1.2: Dissemination goal and objectives Section 4.1.3: Dissemination target groups Section 4.1.7: Revised Dissemination Matrix Chapter 5: Part C: Data Management Plan </td> </tr> <tr> <td> ST4.4.2 Set up project website, blog etc. </td> <td> Chapter 3: Part A: Project website and dissemination material </td> </tr> <tr> <td> ST4.4.3 Design plan for dissemination activities. </td> <td> Section 4.2: Dissemination plan – Timing of activities </td> </tr> <tr> <td> ST4.4.4 Implement dissemination plan, including material production, social media channels set-up and follow-up, event organisation etc.
The communications program will include a major Midterm Event Conference organised by FGC. </td> <td> Section 4.3: Dissemination plan rollout – Log of activities </td> </tr> <tr> <td> **Main sub-task activities as described in DoA** </td> <td> **Document Reference** </td> </tr> <tr> <td> ST4.4.5 Monitor, evaluate and refine dissemination activities. </td> <td> Section 4.4: Dissemination Monitoring and Evaluation </td> </tr> <tr> <td> ST4.4.6 Define the role and tasks of the EuTravel Forum. </td> <td> Section 4.1.6: Role and tasks of the EuTravel Forum </td> </tr> <tr> <td> ST4.4.7 Organise clustering and liaising actions with other relevant RDI projects </td> <td> Section 4.3.10: Liaison with other projects and initiatives </td> </tr> </table>

#### 3\. Part A: Project website and dissemination material

This chapter corresponds to D4.4a: "Project website and dissemination material" of the DoA and contains a detailed description of the methodology and rationale of the EuTravel project website, along with a description of the dissemination material produced. Furthermore, screenshots of the website and the produced dissemination material are provided. Publications and other material are included in the Annex.

##### 3.1 Project Website Description

###### 3.1.1 Introduction

This section describes in detail the EuTravel website that has been developed to serve as the public face of the project. It is a website that utilizes the latest technology in order to deliver content to the visitor of the site as well as to enable easy interaction with the site webmaster. The EuTravel website is part of the dissemination activities undertaken for this project. The project website can be accessed using the following internet address: _http://www.eutravelproject.eu_. The current accompanying document provides a detailed description and analysis of the project's website, social media accounts and other material used for dissemination, such as brochures and publications. The first version of the website was set up in M2 (June 2015) and has been continuously updated since then. A total redesign was carried out after the mid-term review (December 2016). The website and linked portals will remain active and updated even after the end of the project.

###### 3.1.2 Methodology for website construction and development

The EuTravel website uses the latest technology to achieve cross-browser and multi-device compatibility. Furthermore, the website interface is fully responsive, user friendly and delivers searchable information to the visitor. The user can access the EuTravel website from a smartphone, tablet, desktop PC or laptop and have easy access to the content. For the design and implementation of the EuTravel official website, the following user interface design and user experience principles have been taken into consideration. The following GUI Design Principles [1] were adapted in the EuTravel website interface design and implementation:

* **Clarity** The interface is visually, conceptually and linguistically clear.
* **Comprehensibility** The interface is easily understood and easy to learn.
* **Consistency** The interface looks, acts, and operates the same throughout.
* **Control** The user controls the interaction:
  * Actions result from explicit user requests
  * Actions are performed quickly
  * Actions are capable of interruption or termination
  * The user is never interrupted for errors
* **Efficiency**
  * Minimize the user's eye and hand movements.
  * Transitions between various system controls flow easily and freely.
  * Navigation paths are as short as possible.
  * Ensure that users never lose their work as a result of an error on their part.
* **Simplicity**
  * Provide as simple an interface as possible.
  * Make common actions simple at the expense of uncommon actions being made harder.
  * Provide uniformity and consistency.

The technologies HTML5 [2], CSS3 [3] and JavaScript [4] were adopted during the implementation of the EuTravel interface, in order to achieve the responsive result and cross-browser, multi-device compatibility.

###### 3.1.3 Website content and screenshots

3.1.3.1. Website structure

For producing the website structure, the mapping tool SlickPlan [5] was used. Figure 3-1 shows the mock-up of the initial working version of the EuTravel website.

_Figure 3-1: EuTravel Website Structure_

3.1.3.2. Home Page

The home page of the EuTravel website (shown in Figure 3-2 to Figure 3-4) is the page loaded first when the user enters the EuTravel website in a web browser. The home page includes a header that hosts links to all social media of the EuTravel project (LinkedIn and Twitter), a search function, and a Login button for entering the EuTravel Members area. The "Header" section has been designed in a way that guides the user in navigating the website, which is why it is accessible from all pages. Within the "Header" the project logo is displayed; when pressed, it redirects to the home page. Furthermore, the "Home" page includes a slide show with pictures relevant to the project, focusing on what the project is about. Slideshow images change periodically during project implementation. Under the slideshow, the "Home" page includes a brief description of the EuTravel project with a link that redirects the user to the Project page, where more information is available. Below that, the "Home" page includes separate box-style links that redirect the user to the Conference, Knowledge Base and eLearning pages. More information about these sections of the website can be found in Chapter 4 (Part B: Dissemination Strategy, Communication Plan and Evaluation Report). The "Home" page also includes the News section and a slide show section with the logos of each project partner. The purpose of placing the news section on the home page is to provide the website visitor with the latest updates and project information at a glance.

_Figure 3-2: EuTravel Website – Home Page 1/3_
_Figure 3-3: EuTravel Website – Home Page 2/3_
_Figure 3-4: EuTravel Website – Home Page 3/3_

3.1.3.3. Project page

The Project page, as shown in Figure 3-5 below, includes a detailed description of the EuTravel project, its objectives and the challenges it aims to address.

_Figure 3-5: EuTravel Website - Project Page_

3.1.3.4. The Partners Page

The "Partners" page includes a list of the consortium partners of the EuTravel project. For each partner, the logo and the official website link are listed in a styled box, as shown in Figure 3-6.

_Figure 3-6: EuTravel Website - Partners Page_

3.1.3.5. The Solutions Page

The "Solutions" page provides the user with a list of downloadable reusable artifacts (solutions) and their descriptions, such as the EuTravel Common Information Model in JSON, XML, OWL and UML formats and the Unified Travel Ontology.

3.1.3.6. The Downloads Page

Under the "Downloads" page, as shown in Figure 3-7, the user can find publicly available information about the EuTravel project, such as downloadable documents, approved deliverables, dissemination material and other useful links.
3.1.3.8. The Survey Page

The "Survey of stakeholder requirements for travel services in the EU", hosted at _https://www.surveymonkey.com/r/EuTravel_ , has been set up as part of Task 1.1 Stakeholder needs analysis - Research focus areas - EuTravel KPIs. The main page of the survey is shown in Figure 3-9.

_Figure 3-9: EuTravel Website – Survey_

3.1.3.9. ITS Cluster Projects

The ITS cluster is a group of projects in the ITS & connected vehicles domain of the H2020 programme, dealing with different aspects of ICT research and operation in multimodal traffic and transport. The common goal is to accelerate ITS deployment in Europe for safer, more efficient, comfortable and seamless traffic and transport. This page provides links to these projects (Figure 3-10).

_Figure 3-10: EuTravel Website – ITS Cluster Page_

3.1.3.10. Members Area

Through the EuTravel website, authorised users (members of the consortium) can access the member area by clicking the blue "Member Area" button in the top menu and providing their credentials on the "Login" page, as shown in Figure 3-11.

_Figure 3-11: EuTravel Website – Members Area Login_

After login, the user can find useful confidential information about the EuTravel project (Figure 3-12 to Figure 3-15), such as project files (e.g. submitted deliverables, Description of Work, Consortium Agreement, document templates), along with meeting agendas, presentations and other related files available for download.

_Figure 3-12: EuTravel Website – Members Area 1/4_

_Figure 3-13: EuTravel Website – Members Area 2/4_

_Figure 3-14: EuTravel Website – Members Area 3/4_

_Figure 3-15: EuTravel Website – Members Area 4/4_

3.1.3.11. Search

Another important functionality of the EuTravel website is the advanced search component at the top-right hand side of the website, as shown in Figure 3-16. Using the search component, users can search through the entire content of the website including, but not limited to, articles, categories, subcategories and content within uploaded documents.

_Figure 3-16: EuTravel Website – Search Component_

The search results are presented to the user in a clean, user-friendly format, as shown in Figure 3-17.

_Figure 3-17: EuTravel Website – Search Results_

###### 3.1.4 Google Analytics website traffic

Google Analytics [6] is a free web analytics service offered by Google that tracks website traffic and reports the results through Key Performance Indicators (KPIs). The EuTravel website is registered in Google Analytics, and a sample of the analytics results for a predefined period is shown below. Google Analytics offers a vast number of reports; only a few are listed here for illustration purposes. Figure 3-18 shows audience overview information such as:

* Number of sessions, users and page views
* Average session duration and percentage of new sessions
* Demographic statistics (e.g. sessions per country)

_Figure 3-18: Google Analytics – Audience Overview_

Figure 3-19 below shows the overview of the users that accessed the webpage using different devices (desktop, mobile and tablet), again for a predefined period.

_Figure 3-19: Google Analytics – Mobile and Devices Overview_

Figure 3-20 below shows another graphical audience demonstration with the active users of the website.

_Figure 3-20: Google Analytics – Active Users_
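For reference, registering a site with Google Analytics amounts to embedding Google's standard analytics.js tracking snippet on every page to be tracked, as sketched below; the tracking ID 'UA-XXXXX-Y' is a placeholder, not the project's actual ID.

```html
<!-- Standard Google Analytics (analytics.js) tracking snippet,
     placed in the <head> of every page to be tracked -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto'); // placeholder tracking ID
ga('send', 'pageview');             // records one page view per page load
</script>
```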
###### 3.1.5 EuTravel Website Administration

This section describes the admin site of the EuTravel website. The EuTravel website has been built using the Content Management System (CMS) of eBOS (WiseBOS). The CMS is a browser-based solution that empowers users to create and easily maintain their websites and build a strong online presence. It manages all workflow needs, while allowing website content management and customization without any previous IT knowledge. The EuTravel CMS database engine is powered by Microsoft SQL Server and is built on the .NET Framework, which allows considerable flexibility and expandability according to project needs. Figure 3-21 illustrates the welcome/main page that is displayed after logging in to the CMS. The Navigation Menu (at the top of the website) helps administrators navigate easily through the different categories of the EuTravel project website, while the Actions Menu on the left-hand side changes according to the administrator's selection in the Navigation Menu.

_Figure 3-21: CMS - Main Page_

Using the CMS, administrators can create a new page, rename or delete an existing page, or change the content of an existing page. Furthermore, administrators can perform other website management actions, such as changing the position of a page and checking the available space (Figure 3-22 to Figure 3-24).

_Figure 3-22: CMS - Create New Page_

_Figure 3-23: CMS - Edit Existing Page_

_Figure 3-24: CMS - Edit Content of Existing Page_

In order to perform any of the above actions, administrators need to navigate to the Site Management page through the Navigation Menu by choosing "Site Administration" and then "Manage Site Menu", as shown in Figure 3-25.

_Figure 3-25: CMS - Site Administration_

##### 3.2 Dissemination material

Please see the description of dissemination actions in Part B and the Dissemination Material in the Annex.

#### 4\. Part B: Dissemination Strategy, Communication Plan and Evaluation Report

##### 4.1 Dissemination Strategy

###### 4.1.1 Introduction

The main goal of the EuTravel communication and dissemination activities is to reach the largest possible number of travel and transport stakeholders and to generate an effective flow of information and publicity about:

* the project architecture and tools,
* results and lessons learned,
* the contribution made to European knowledge and scientific excellence,
* the benefits to EU citizens in general, and
* the EuTravel brand, so that it becomes synonymous with multimodal travelling.

Branding of the EuTravel project refers to the activities used to help distinguish the project and to make the target audiences recognize and appreciate the project and its research outcomes. Wide take-up activities have been used to extend the branding of the EuTravel project and to channel the desired promotion to the identified potential target groups of future users. To realize this goal, project partners defined and implemented a dissemination strategy from M2, to capture the project concept, approach and outputs and design a detailed communication plan (Figure 4-1).
_Figure 4-1: Project Dissemination Cycle_

Dissemination activities took place throughout the course of the project and are continuing, including:

* Definition of the dissemination objectives in alignment with the project objectives (M1), associated with quantifiable indicators and taking into account the tangible dissemination targets (Table 4-12).
* Identification of the different target groups, key messages, communication channels and activities, and scheduling of the rollout (M2-M4). The dissemination plan, presented in this report, blends different communication channels and tools.
* Implementation of the dissemination plan from May 2015 to the project end and beyond.
* Monitoring and continuous evaluation of results against the target values (indicators), making amendments where necessary.

###### 4.1.2 Dissemination goal and objectives

The EuTravel dissemination objectives were set as early as the grant agreement preparation and are briefly presented, as refined, in Table 4-1: EuTravel Dissemination Objectives. Before defining all aspects of a dissemination plan, it was important to agree on the strategic dissemination goals on all levels. In this way the planned activities were appropriately designed to reach the desired level of project visibility.

# Table 4-1: EuTravel Dissemination Objectives

<table> <tr> <th> **Level** </th> <th> **Objectives** </th> </tr> <tr> <td> **Understanding (Internal dissemination)** </td> <td> * Analyse the existing knowledge of project partners, identify barriers and change enablers and potential adopters of the project outcomes. * Articulate the value of the project outcomes. * Consolidate the knowledge base and understanding between project partners. </td> </tr> <tr> <td> **Awareness** </td> <td> • Raise awareness about the challenges and the potential solutions provided by the project and reach out to industry stakeholders, policy makers and authorities to gain broader insight. </td> </tr> <tr> <td> **Interest** </td> <td> * Spread understanding and acceptance of the benefits of the project's innovation. * Widely diffuse the project's concept and ideas at an early stage of the project, and the project's achievements and results at a mature stage of the project, to the public. </td> </tr> <tr> <td> **Information** </td> <td> • Provide a regular flow of information about the project and its results to the travel industry and the research community by publishing the project's results in scientific journals and conference proceedings. </td> </tr> <tr> <td> **Participation** </td> <td> • Prepare, establish and reinforce a network of potential users and prepare the groundwork for use and exploitation of results. </td> </tr> <tr> <td> **Networking Engagement** </td> <td> * Plan liaison activities, collaborate with research networks, initiatives and bodies, as well as ongoing EU and national projects, and consolidate institutional links and working relations. * Interact with targeted potential adopters on an ongoing basis, engage a few representative stakeholders in the EuTravel Forum and in solutions testing and validation (Living Lab sessions), and engage with communities in order to obtain feedback about results (see also Deliverable D4.1 EuTravel Engagement Strategy). * Engage with representative user groups (e.g. communities of people with mobility problems, the visually impaired, etc.)
at a mature stage of the project to consolidate their views in research and policy recommendations. </td> </tr> <tr> <td> **Knowledge Transfer and Exploitation** </td> <td> * Train partners in the use of new products, services and processes. * Create training material for the general public on their passenger rights related to multimodal travelling (see also Deliverable D4.1 EuTravel Engagement Strategy). * Create training material for people with mobility problems on their passenger rights related to multimodal travelling (see also Deliverable D4.1 EuTravel Engagement Strategy). * Promote the results, lessons learned and benefits to target audiences in industry, research and academic communities, regulatory and standards authorities and policy makers. * Engage and involve potential stakeholders in solving challenges and brainstorming on the creation of new value-added services, utilising the project outcomes. * Ensure sustainability by making research results and products available for the design of new solutions for the industry, further research, new projects, and exploitation by R&D communities. </td> </tr> </table>

###### 4.1.3 Dissemination target groups

The selected target audiences are parties potentially interested in EuTravel research outcomes and include:

1. Commercial industry stakeholders, including travel and transport service providers (potential adopters of the project outcomes to create new value-added services for the travel industry).
2. Commercial ITS / technology providers.
3. Research and scientific communities.
4. Travel users – the wider public, including representative communities such as people with mobility problems.
5. Authorities and policy makers.

###### 4.1.4 Messages

The EuTravel vision is to contribute towards the realisation of a sustainable and open single European market for mobility services by:

1. enabling travel users (both businesses and private individuals) to easily organise a door-to-door pan-European multimodal trip in accordance with their own set of criteria, including environmental performance;
2. providing multimodal travel service providers with an easy and cost-effective way to deliver optimal customised services to cater for any type of specialised multimodal travel needs;
3. supporting policy decision making by contributing to the implementation of standards and regulations and facilitating fact-based EU policy making.

EuTravel will deliver an Ecosystem promoting and supporting Optimodal travel, populated with tools that tap into existing mainstream IT travel reservation systems and sources of travel data. The key phrases promoted in EuTravel have been a) Optimodal and inclusive travel and b) the API of APIs innovation.

###### 4.1.5 Dissemination Principles

**Information on EU funding – Obligation and right to use the EU emblem:** The following text, followed by the EU emblem, must be included in any public announcement and/or dissemination material of the EuTravel project, for general purpose material and for specific results respectively:

_"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 636148"._

**Disclaimer excluding Agency responsibility:** Any dissemination of results must indicate that it reflects only the author's view and that INEA is not responsible for any use that may be made of the information it contains.
For all deliverables the following text is added after the cover page:

_The content of the publication herein is the sole responsibility of the publishers and it does not necessarily represent the views expressed by the European Commission or its services._

_While the information contained in the documents is believed to be accurate, the author(s) or any other participant in the **EuTravel** consortium make no warranty of any kind with regard to this material including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose._

_Neither the **EuTravel** Consortium nor any of its members, their officers, employees or agents shall be responsible or liable in negligence or otherwise howsoever in respect of any inaccuracy or omission herein._

_Without derogating from the generality of the foregoing, neither the **EuTravel** Consortium nor any of its members, their officers, employees or agents shall be liable for any direct or indirect or consequential loss or damage caused by or arising from any information, advice, inaccuracy or omission herein._

###### 4.1.6 Role and tasks of the EuTravel Forum

The EuTravel Forum includes all non-consortium technical and business stakeholders engaged in the project activities in different phases, and even after its end; these are mainly senior stakeholders and experts from the European travel industry with a deep understanding of market needs and currently available solutions. The EuTravel Forum's key objective has been to engage Forum members to:

1. identify industry challenges that could be addressed by the project,
2. gain consultancy and valuable input and feedback for the development of the solutions.

The ultimate goal has been to maximize the benefits of the project for the overall travel industry, foster Europe-wide co-operation and ensure the exploitation of the results after the project's completion. The dissemination plan defines how Forum members could be involved in the different project tasks, on an entirely voluntary basis.
# Table 4-2: EuTravel Forum engagement in Tasks

<table> <tr> <th> </th> <th> **Project Tasks** </th> <th> **Expected involvement of EuTravel Forum Members - external stakeholders** </th> </tr> <tr> <td> T1.1 </td> <td> Stakeholder needs analysis - Research focus areas - EuTravel KPIs </td> <td> * Participate in user requirements survey * Participate in user requirement workshops </td> </tr> <tr> <td> T1.4 </td> <td> EU Optimodality Framework and Impact Assessment </td> <td> • Review Unified Travel Ontology </td> </tr> <tr> <td> T2.2 </td> <td> Ecosystem Specification and Prototype Implementation </td> <td> • Provide access to data to be used in the EuTravel solutions </td> </tr> <tr> <td> T3.3 </td> <td> Living Lab operation learning, refinements and reporting </td> <td> * Evaluate prototypes through questionnaires / interviews * Test and validate prototypes in meetings and workshops (living lab sessions) * Integrate own services in EuTravel Ecosystem </td> </tr> <tr> <td> T4.1 </td> <td> Stakeholder Engagement Strategy and Online Learning Program </td> <td> • Get access to online training material </td> </tr> <tr> <td> T4.2 </td> <td> Policy, Standardisation and Research Recommendations </td> <td> • Provide feedback / contribute to recommendations </td> </tr> <tr> <td> T4.3 </td> <td> Impact Assessment - Exploitation Planning - Sustainable Development Roadmap </td> <td> • Exploit project outcomes </td> </tr> <tr> <td> T4.4 </td> <td> Dissemination Strategy, Communication Plan and Evaluation Report </td> <td> * Participate in the midterm project conference * Get access to all dissemination material and final report </td> </tr> <tr> <td> T5.2 </td> <td> Innovation Board - Advisory Committee - EuTravel Forum </td> <td> • Participate in Advisory Committee meetings </td> </tr> </table>

Forum members were invited by consortium partners and were engaged in several project activities. Further details on the Forum's composition and log of activities are provided in Deliverable D4.1 EuTravel Engagement Strategy.

###### 4.1.7 Revised Dissemination Matrix

The communications programme uses different dissemination channels to reach the project audiences. The choice of channel has a fundamental impact on the success and outcomes of a communication activity. The dissemination channels, tools, level of reach and success indicators are identified in the following dissemination matrix (Table 4-3).
# Table 4-3: Revised Dissemination Matrix - Level indicator: International (I); European (EU) <table> <tr> <th> **Communication Tool** </th> <th> **Type** </th> <th> **Level** </th> <th> **Success Indicator** </th> <th> **Value Target** </th> </tr> <tr> <td> Project Identity </td> <td> All </td> <td> I </td> <td> \- </td> <td> 1 </td> </tr> <tr> <td> Project Poster </td> <td> Documentation </td> <td> EU </td> <td> \- </td> <td> 1 </td> </tr> <tr> <td> Reference PPT </td> <td> Documentation </td> <td> EU </td> <td> \- </td> <td> 1 </td> </tr> <tr> <td> Project Leaflet, Newsletters, Factsheets, Infographics, Success Stories - Interviews </td> <td> Publications </td> <td> EU </td> <td> Number of publications </td> <td> 8 </td> </tr> <tr> <td> Articles - Whitepapers </td> <td> Publications </td> <td> EU </td> <td> Number of publications </td> <td> 5 </td> </tr> <tr> <td> Scientific Papers </td> <td> Publications </td> <td> I </td> <td> Number of publications </td> <td> 2-4 </td> </tr> <tr> <td> Deliverables </td> <td> Publications </td> <td> EU </td> <td> QA Standards </td> <td> 20 </td> </tr> <tr> <td> Press Releases </td> <td> Publications </td> <td> I </td> <td> Number of Press Releases </td> <td> 5 </td> </tr> <tr> <td> Policy Briefs </td> <td> Publications </td> <td> EU </td> <td> Number of publications </td> <td> 2 </td> </tr> <tr> <td> Website </td> <td> Online Presence </td> <td> I </td> <td> Number of users (SEO Metrics) </td> <td> 1500 </td> </tr> <tr> <td> Web 2.0. - Social Media </td> <td> Online Presence </td> <td> I </td> <td> Social media followers Number of posts </td> <td> 1000 50 </td> </tr> <tr> <td> Dedicated EC Portals </td> <td> Online Presence </td> <td> EU </td> <td> Number of entries </td> <td> 2-3 </td> </tr> <tr> <td> Video-slideshow, media </td> <td> Online Presence </td> <td> I </td> <td> Number of media elements </td> <td> 5 </td> </tr> <tr> <td> Project meetings, roundtables </td> <td> Events </td> <td> EU </td> <td> Number of events </td> <td> 4 </td> </tr> <tr> <td> Conferences, Workshops </td> <td> Events </td> <td> I </td> <td> Number of events Number and type of attendees </td> <td> 4 100 </td> </tr> <tr> <td> e-learning modules </td> <td> Engagement, Knowledge Transfer </td> <td> EU </td> <td> Number of enrolled users </td> <td> 150 </td> </tr> <tr> <td> Knowledge Base </td> <td> Engagement, Knowledge Transfer </td> <td> EU </td> <td> Number of entries Number of users </td> <td> 50 500 </td> </tr> <tr> <td> Project Liaison Activities </td> <td> Networking, Knowledge Transfer </td> <td> EU </td> <td> No of relevant projects </td> <td> 10 </td> </tr> <tr> <td> EuTravel Forum members actively involved </td> <td> Engagement, Knowledge Transfer </td> <td> EU </td> <td> Number of companies - organisations </td> <td> 10-20 </td> </tr> <tr> <td> Ecosystem Participant Brochure </td> <td> Engagement, Knowledge Transfer </td> <td> EU </td> <td> Number of service providers reached </td> <td> 100 </td> </tr> </table> Scientific papers and the EuTravel Forum members have been added to the initial matrix included in the GA. Policy Briefs reduced from 4 to 2\. Ecosystem Participant brochure reach increased from 20-50 to 100. Selected tools and channels per target group are shown in Table 4-4. 
# Table 4-4: Mapping communication channels to target stakeholder groups <table> <tr> <th> **Communication Tool** </th> <th> **Commercial Stakeholders** </th> <th> **Technology Providers** </th> <th> **Research & ** **Scientific** **Communities** </th> <th> **Travel Users Public** </th> <th> **Authorities** **Policy** **Makers** </th> </tr> <tr> <td> Project Identity </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Project Poster </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Reference PPT </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Project Leaflet, Newsletters, Factsheets, Infographics, Success Stories - Interviews </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Articles - Whitepapers </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Scientific Papers </td> <td> </td> <td> </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Deliverables </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Press Releases </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Policy Briefs </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Website </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Web 2.0. - Social Media </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Dedicated EC Portals </td> <td> </td> <td> </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Video-slideshow, media </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Project meetings, roundtables </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> Conferences, Workshops </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> ⏺ </td> </tr> <tr> <td> e-learning modules </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Knowledge Base </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> <td> ⏺ </td> </tr> <tr> <td> Project Liaison activities </td> <td> </td> <td> </td> <td> ⏺ </td> <td> </td> <td> </td> </tr> <tr> <td> EuTravel Forum members actively involved </td> <td> ⏺ </td> <td> ⏺ </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Ecosystem Participant Brochure </td> <td> ⏺ </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table> ##### 4.2 Dissemination plan – Timing of activities and expected progress The overall dissemination rollout plan is depicted in the following table. 
# Table 4-5: Timing of dissemination and communications activities <table> <tr> <th> **Action** </th> <th> **M2** </th> <th> **M6** </th> <th> **M12** </th> <th> **M18** </th> <th> **M24** </th> <th> **M30** </th> <th> **M36** </th> <th> **M36+3** </th> <th> **Value Target** </th> </tr> <tr> <td> Project Identity </td> <td> 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> </tr> <tr> <td> Project Poster </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> </tr> <tr> <td> Reference PPT </td> <td> 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> </tr> <tr> <td> Project Leaflet, Newsletters, Factsheets, Infographics, Success Stories - Interviews </td> <td> 1 </td> <td> 2 </td> <td> </td> <td> 4 </td> <td> </td> <td> 6 </td> <td> 8 </td> <td> </td> <td> 8 </td> </tr> <tr> <td> Articles - Whitepapers </td> <td> </td> <td> 1 </td> <td> </td> <td> 2 </td> <td> 3 </td> <td> 4 </td> <td> 5 </td> <td> </td> <td> 5 </td> </tr> <tr> <td> Scientific Papers </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> <td> 2 </td> <td> 3 </td> <td> 4 </td> <td> </td> <td> 2-4 </td> </tr> <tr> <td> Deliverables </td> <td> </td> <td> 2 </td> <td> </td> <td> 6 </td> <td> </td> <td> </td> <td> 20 </td> <td> </td> <td> 20 </td> </tr> <tr> <td> Press Releases </td> <td> </td> <td> </td> <td> 2 </td> <td> </td> <td> </td> <td> </td> <td> 5 </td> <td> </td> <td> 5 </td> </tr> <tr> <td> Policy Briefs </td> <td> </td> <td> </td> <td> 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 2 </td> <td> 2 </td> </tr> <tr> <td> Website </td> <td> </td> <td> </td> <td> </td> <td> 500 </td> <td> </td> <td> </td> <td> 1000 </td> <td> 1500 </td> <td> 1500 </td> </tr> <tr> <td> Web 2.0. - Social Media </td> <td> </td> <td> </td> <td> </td> <td> 500 20 </td> <td> </td> <td> </td> <td> 800 40 </td> <td> 1000 50 </td> <td> 1000 50 </td> </tr> <tr> <td> Dedicated EC Portals </td> <td> </td> <td> </td> <td> </td> <td> 1 </td> <td> </td> <td> </td> <td> 2 </td> <td> 3 </td> <td> 2-3 </td> </tr> <tr> <td> Video-slideshow, media </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 5 </td> <td> </td> <td> 5 </td> </tr> <tr> <td> Project meetings, roundtables </td> <td> 1 </td> <td> </td> <td> </td> <td> 3 </td> <td> </td> <td> </td> <td> 4 </td> <td> </td> <td> 4 </td> </tr> <tr> <td> Conferences, Workshops </td> <td> 2 </td> <td> </td> <td> </td> <td> 3 </td> <td> </td> <td> </td> <td> 4 </td> <td> </td> <td> 4 100 </td> </tr> <tr> <td> e-learning modules </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 80 </td> <td> 150 </td> <td> 150 </td> </tr> <tr> <td> Knowledge Base </td> <td> </td> <td> </td> <td> </td> <td> 30 200 </td> <td> </td> <td> </td> <td> 45 400 </td> <td> 50 500 </td> <td> 50 500 </td> </tr> <tr> <td> Project Liaison Activities </td> <td> </td> <td> </td> <td> </td> <td> 6 </td> <td> </td> <td> </td> <td> 10 </td> <td> </td> <td> 10 </td> </tr> <tr> <td> EuTravel Forum members actively involved </td> <td> </td> <td> </td> <td> </td> <td> 10 </td> <td> </td> <td> </td> <td> 20 </td> <td> </td> <td> 10-20 </td> </tr> <tr> <td> Ecosystem Participant Brochure </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> 100 </td> <td> </td> <td> 100 </td> </tr> </table> ##### 4.3 Dissemination plan rollout – Log of activities All partners took part in the dissemination activities at a different level. 
The consortium is composed of representatives from all modes of long-distance travelling. With partners that have the relevant knowledge in the field and come from different types of organizations and countries, several dissemination channels were exploited to reach representative service providers across all modes (air, ferry, rail and coach) as well as travel agents. All activities are grouped per category in the following paragraphs.

###### 4.3.1 Project identity

The project logo, presentation template and deliverables template were circulated by the coordinator to all partners at the beginning of the project (Figure 4-2).

_Figure 4-2: EuTravel presentation and deliverables template_

###### 4.3.2 Web presence and social media channels

Web channels are employed for the wider dissemination of the project. This includes the project website, up and running since project initiation (June 2015) and regularly updated with news on the project outcomes, as well as related events and news, available at _www.eutravelproject.eu_ (see also Chapter 3, Part A: Project website and dissemination material). A total redesign was carried out after the mid-term review (December 2016). The website will remain active even after the end of the project.

Additionally, social media channels are used to support the wide spread of information published on the project website, while helping to create groups of interest and connect with stakeholders from the EU and international transport and travel community. These include the professional group on LinkedIn and the Twitter account.

1\. LinkedIn group

The LinkedIn group is meant to serve as a place for exchanging ideas and discussing interesting related topics, and is available at _https://www.linkedin.com/groups/8327020_ (Figure 4-3).

_Figure 4-3: EuTravel LinkedIn group – Discussions_

2\. Twitter @EuTravel_H2020

The Twitter account, available at _https://twitter.com/EuTravel_H2020_ , is used for creating connections and staying informed on current trends, while communicating EuTravel project updates (Figure 4-4).

_Figure 4-4: EuTravel Twitter Channel_

3\. Partners Websites

The project has been promoted on consortium partners' websites (indicative screenshots below).

_Figure 4-5: EuTravel on Inlecom website_

_Figure 4-6: EuTravel on Travelport website_

_Figure 4-7: EuTravel on BMT website_

###### 4.3.3 Brochures and White Papers

A brochure was produced at the beginning of the project to raise awareness of the new tools. Three white papers were produced (see material in the Annex):

* Getting There Greener
* A Train of Thoughts
* Neutral Display

A new brochure will be produced after the end of the project, once all deliverables are accepted, and will be communicated to EuTravel Forum members and new stakeholders to present the research outcomes.

###### 4.3.4 Press releases, articles and media

Specific press releases about the project have been produced and distributed among general press media to spread the information to a broad audience (Table 4-6: Summary of press releases, online articles and media). Two press releases can be found in the Annex.
# Table 4-6: Summary of press releases, online articles and media

<table> <tr> <th> **Publication Channel** </th> <th> **Circulation** </th> <th> **Journalist** </th> <th> **Description** </th> <th> **Link** </th> </tr> <tr> <td> **2016** </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Travelport website </td> <td> 15,000 </td> <td> Travelport Marketing team </td> <td> Press release </td> <td> _https://www.travelport.com/company/media-center/press-releases/2016-05-16/travelport-memberconsortium-awarded-eu-funding_ </td> </tr> <tr> <td> Tnooz </td> <td> Followers: 9000 on LinkedIn, 2600 on Twitter </td> <td> Editorial team </td> <td> Press release story </td> <td> _https://www.tnooz.com/article/eutravel-multi-modal/_ </td> </tr> <tr> <td> 4-traders </td> <td> n/a </td> <td> Editorial team </td> <td> Press release story </td> <td> _http://www.4-traders.com/TRAVELPORTWORLDWIDE-LTD-18063135/news/TravelportWorldwide-is-member-of-aconsortium-awarded-EU-funding-tocreate-online-travel-planning22368135/_ </td> </tr> <tr> <td> Travel 360 </td> <td> 1000 followers on Twitter </td> <td> Editorial team </td> <td> Press release story </td> <td> _http://travel360benelux.com/fr/travelport-belgium/outil-de-planification/_ </td> </tr> <tr> <td> Bonvoyage project website </td> <td> n/a </td> <td> </td> <td> Online article on liaison action </td> <td> _http://bonvoyage2020.eu/bonvoyage-and-eutravel-creation-of-newsynergies/_ </td> </tr> <tr> <td> EC Cordis Wire </td> <td> n/a </td> <td> </td> <td> Pre-event announcement </td> <td> _http://cordis.europa.eu/event/rcn/147576_en.html_ </td> </tr> <tr> <td> Linked In </td> <td> 300 </td> <td> </td> <td> Online Article, post – event </td> <td> _https://www.linkedin.com/pulse/thank-you-making-eutravel-conferencegreat-event-ioanna-fergadioti/_ </td> </tr> <tr> <td> Inlecom website </td> <td> 1500 </td> <td> </td> <td> Online Article, post – news entry </td> <td> _http://www.inlecom.eu/2016/10/10/eutravel-conference/_ </td> </tr> <tr> <td> Pass Me project website </td> <td> n/a </td> <td> </td> <td> Online Article, post - event </td> <td> _https://www.passme.eu/relatednews/209-passme-and-the-future-ofmultimodal-travel_ </td> </tr> <tr> <td> Timon project website </td> <td> n/a </td> <td> </td> <td> Online Article, post - news entry </td> <td> _https://www.timonproject.eu/index.php/news-andevents/timon-news/137-timonproject-at-eu-travel-conference-inbarcelona.html_ </td> </tr> <tr> <td> **2017** </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> Travolution </td> <td> 400,000 </td> <td> Ben Ireland </td> <td> Press release story with EuTravel Forum member Accomable </td> <td> _http://www.travolution.com/articles/103867/accomable-and-eutravelpartner-to-provide-transport-fordisabled-travellers_ </td> </tr> <tr> <td> Travel Daily News </td> <td> 100,000+ </td> <td> Tatiana Rokou </td> <td> Press release story </td> <td> _https://www.traveldailynews.com/post/accomable-and-eutravel-projectunite-to-make-booking-accessibletransport-easy_ </td> </tr> <tr> <td> Pan European Networks </td> <td> n/a </td> <td> Editorial team </td> <td> Press release story </td> <td> _http://www.paneuropeannetworks.com/science-technology/76793/_ </td> </tr> <tr> <td> TTG </td> <td> 60,000 </td> <td> Matt Parsons </td> <td> Press release story </td> <td> _https://www.ttgmedia.com/news/technology/accomable-signs-up-toeutravel-project-11261_ </td> </tr>
<tr> <td> Access and Mobility Professional </td> <td> 55,000, plus 2,500 in the daily newsletter </td> <td> Joe Peskett </td> <td> Press release story </td> <td> _http://www.accessandmobilityprofessional.com/airbnb-disabled-peopleintegrate-adaptedtransportation/?utm_source=Email+Campaign&utm_medium=email&utm_campaign=42639-221531-Access+%26+Mobility+Professional+DNA+-+2017-08-17_ </td> </tr> <tr> <td> Horizon 2020 Project </td> <td> n/a </td> <td> Editorial team </td> <td> Press release story </td> <td> _http://horizon2020projects.com/prknowledge-innovation/project-assisttravel-disabled/_ </td> </tr> <tr> <td> International Airport Review </td> <td> 28,447 </td> <td> Editorial team </td> <td> Press release story </td> <td> _https://www.internationalairportreview.com/news/37819/travelstreamlined-disabled-passengerseutravel-project/_ </td> </tr> <tr> <td> Eurotransport </td> <td> 30,000 </td> <td> Editorial team </td> <td> Press release story </td> <td> _https://www.eurotransportmagazine.com/24806/news/industrynews/partnership-booking-accessibletransport/_ </td> </tr> <tr> <td> Intelligent Transport Magazine </td> <td> 30,000 </td> <td> Editorial team </td> <td> Byline article </td> <td> _https://www.intelligenttransport.com/transport-articles/25490/accessibletravel-eutravel-project/_ </td> </tr> </table>

###### 4.3.5 Scientific Publications

The scientific partners in the consortium presented results of the project at well-known scientific conferences and in widely read international scientific journals. The scientific paper 'A knowledge graph for travel mode recommendation and critiquing', presented below, was awarded best paper of the conference (Figure 4-8).

_Figure 4-8: EuTravel – IARIA Best Paper Award_

The details of the papers are summarised in the following tables:

# Table 4-7: Scientific Publications fully supported by EuTravel

<table> <tr> <th> Type of scientific publication </th> <th> Publication in conference proceedings **BEST PAPER AWARD** </th> </tr> <tr> <td> Title of the scientific publication </td> <td> **A knowledge graph for travel mode recommendation and critiquing** </td> </tr> <tr> <td> Authors </td> <td> Bill Karakostas (VLTN) and Dimitris Kardaras </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> Proceedings of the 9th International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA) </td> </tr> <tr> <td> Date </td> <td> 21-25 May 2017 </td> </tr> <tr> <td> Publisher </td> <td> IARIA, ISSN: 2308-4332, ISBN: 978-1-61208-558-6 </td> </tr> <tr> <td> Place of publication </td> <td> Barcelona, Spain </td> </tr> <tr> <td> Link </td> <td> _http://www.eutravelproject.eu/uploadfiles/DBKBDA_2017.pdf_ _https://www.iaria.org/conferences2017/awardsDBKDA17/dbkda2017_a5.pdf_ </td> </tr> </table>

<table> <tr> <th> Type of scientific publication </th> <th> Publication in conference proceedings </th> </tr> <tr> <td> Title of the scientific publication </td> <td> **API mashups: How well do they support the travellers' information needs?** </td> </tr> <tr> <td> Authors </td> <td> Bill Karakostas (VLTN) and Zannis Kalampoukis (CLMS) </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> Proceedings of the 8th International Conference on Ambient Systems, Networks and Technologies (ANT), Volume 109 </td> </tr> <tr> <td> Date </td> <td> 16-19 May 2017 </td> </tr> <tr> <td> Publisher </td> <td> Elsevier – Procedia Computer Science </td> </tr> <tr> <td> Place of publication </td> <td>
Madeira, Portugal </td> </tr> <tr> <td> Link </td> <td> _http://www.sciencedirect.com/science/article/pii/S1877050917309900_ </td> </tr> </table>

<table> <tr> <th> Type of scientific publication </th> <th> Publication in conference proceedings </th> </tr> <tr> <td> Title of the scientific publication </td> <td> **Service Availability Analysis of a Multimodal Travel Planner Using Stochastic Automata** </td> </tr> <tr> <td> Authors </td> <td> Spyridon Evangelatos (ILS), Zannis Kalampoukis (CLMS), Ioanna Fergadioti (ILS), Stelios Christofi (eBOS), Bill Karakostas (VLTN) and Yannis Zorgios (CLMS) </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> Proceedings of the 22nd IEEE Symposium on Computers and Communications (ISCC), International Workshop on Intelligent & Sustainable Urban Transportation </td> </tr> <tr> <td> Date </td> <td> 03-06 July 2017 </td> </tr> <tr> <td> Publisher </td> <td> IEEE Conference Publications (IEEE Computer & IEEE Communications Societies) </td> </tr> <tr> <td> Place of publication </td> <td> Heraklion, Crete, Greece </td> </tr> <tr> <td> Link </td> <td> _http://ieeexplore.ieee.org/document/8024520/_ </td> </tr> </table>

# Table 4-8: Scientific Publications partially supported by EuTravel

<table> <tr> <th> Type of scientific publication </th> <th> Publication in conference proceedings </th> </tr> <tr> <td> Title of the scientific publication </td> <td> **The Network Structure of Visited Locations According to Geotagged Social Media Photos** </td> </tr> <tr> <td> Authors </td> <td> Christian Junker (Fanlens.io), Zaenal Akbar (STI), Martí Cuquet (STI) </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> The 18th Working Conference on Virtual Enterprises (PRO-VE), http://pro-ve.org/ Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 506) </td> </tr> <tr> <td> Date </td> <td> 17-21 September 2017 </td> </tr> <tr> <td> Publisher </td> <td> Springer International Publishing AG </td> </tr> <tr> <td> Place of publication </td> <td> Vicenza, Italy </td> </tr> <tr> <td> Link </td> <td> _https://link.springer.com/chapter/10.1007/978-3-319-65151-4_26_ </td> </tr> </table>

<table> <tr> <th> Type of scientific publication </th> <th> Publication in conference proceedings </th> </tr> <tr> <td> Title of the scientific publication </td> <td> **Complete Semantics to Empower Touristic Service Providers** </td> </tr> <tr> <td> Authors </td> <td> Zaenal Akbar (STI), Elias Karle (STI), Oleksandra Panasiuk (STI), Umutcan Simsek (STI), Ioan Toma (STI), and Dieter Fensel (STI) </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> OTM Confederated International Conferences - OTM 2017: On the Move to Meaningful Internet Systems.
OTM 2017 Conferences, pp. 353-370 </td> </tr> <tr> <td> Date </td> <td> October 23-27, 2017 </td> </tr> <tr> <td> Publisher </td> <td> Springer International Publishing AG </td> </tr> <tr> <td> Place of publication </td> <td> Rhodes, Greece </td> </tr> <tr> <td> Link </td> <td> _https://link.springer.com/chapter/10.1007/978-3-319-69459-7_24_ </td> </tr> </table>

<table> <tr> <th> Type of scientific publication </th> <th> Accepted paper - to be published in conference proceedings </th> </tr> <tr> <td> Title of the scientific publication </td> <td> Enabling Analysis of User Engagements Across Multiple Online Communication Channels </td> </tr> <tr> <td> Authors </td> <td> Zaenal Akbar (STI), Anna Fensel (STI), and Dieter Fensel (STI) </td> </tr> <tr> <td> Title of the journal or equivalent </td> <td> Proceedings of the 11th International Conference on Metadata and Semantics Research </td> </tr> <tr> <td> Date </td> <td> November 28th – December 1st 2017 </td> </tr> <tr> <td> Publisher </td> <td> Springer International Publishing AG </td> </tr> <tr> <td> Place of publication </td> <td> Tallinn, Estonia </td> </tr> <tr> <td> Link </td> <td> Proceedings will be published by Springer in Vol. 755 of the Communications in Computer and Information Science (CCIS) book series. _http://www.mtsr-conf.org/_ _http://www.mtsr-conf.org/images/Accepted_Papers_Posters_Final.pdf?v=171025a_ </td> </tr> </table>

###### 4.3.6 EuTravel public deliverables

All EuTravel public deliverables will be downloadable from the website once accepted.

# Table 4-9: Public Deliverables

<table> <tr> <th> **Del. #** </th> <th> **Deliverable Name** </th> <th> **WP#** </th> <th> **Lead** </th> <th> **Type** </th> <th> **Diss level** </th> </tr> <tr> <td> D1.1 </td> <td> EuTravel Stakeholder Requirements Specification </td> <td> WP1 </td> <td> BMT </td> <td> R </td> <td> PU </td> </tr> <tr> <td> D1.2 </td> <td> Policy, Legal and Standardization Requirements Analysis Report </td> <td> WP1 </td> <td> HD </td> <td> R </td> <td> PU </td> </tr> <tr> <td> D1.3 </td> <td> Technology Knowledge Base and Observatory </td> <td> WP1 </td> <td> NCSRD </td> <td> DEM </td> <td> PU </td> </tr> <tr> <td> D1.4 </td> <td> EU Optimodality Framework </td> <td> WP1 </td> <td> ILS </td> <td> R </td> <td> PU </td> </tr> <tr> <td> D2.1 </td> <td> Technology Ecosystem Architecture </td> <td> WP2 </td> <td> VLTN </td> <td> R </td> <td> PU </td> </tr> <tr> <td> D2.2 </td> <td> Ecosystem Specification and Prototype Implementation </td> <td> WP2 </td> <td> CLMS </td> <td> OTH </td> <td> PU </td> </tr> <tr> <td> D2.3 </td> <td> One-stop, cross-device multilingual interface </td> <td> WP2 </td> <td> EBOS </td> <td> OTH </td> <td> PU </td> </tr> <tr> <td> D2.4 </td> <td> EuTravel Value Added Services </td> <td> WP2 </td> <td> CLMS </td> <td> DEM </td> <td> PU </td> </tr> <tr> <td> D3.1 </td> <td> Modelling and Experiment plans - Scenarios and Use Cases </td> <td> WP3 </td> <td> TRI </td> <td> DEM </td> <td> PU </td> </tr> <tr> <td> D3.2 </td> <td> Living Lab Setup </td> <td> WP3 </td> <td> EBOS </td> <td> DEM </td> <td> PU </td> </tr> <tr> <td> D3.3 </td> <td> Living Lab operation learning refinements and reporting </td> <td> WP3 </td> <td> ILS </td> <td> R </td> <td> PU </td> </tr> <tr> <td> D4.3 </td> <td> Exploitation Plan - Sustainable Development Roadmap </td> <td> WP4 </td> <td> CLMS </td> <td> DES </td> <td> PU </td> </tr> <tr> <td> D4.4 </td> <td> Communications Programme: a. Project Website and Dissemination Materials, b.
Dissemination Strategy Communication Plan and Evaluation Report, c. Data Management Plan </td> <td> WP4 </td> <td> ILS </td> <td> DES </td> <td> PU </td> </tr> </table>

###### 4.3.7 Midterm EuTravel Conference

4.3.7.1. Details

The EuTravel Conference, headlined "The Future of Multimodal Travel in Europe", took place in Barcelona on the 6th of October 2016, presenting evidence that there are plenty of intriguing and innovative developments in the travel and mobility sector across Europe (Figure 4-9 to Figure 4-12). The conference was organised by Inlecom Systems Ltd (Project Coordinator) and FGC and was highly successful. Real-time translation to Spanish was offered to the audience. The event focused on the advancements and challenges towards seamless door-to-door travelling and was organised around three topics:

* Travel Technology and Business Driven Innovations,
* Research Driven Innovations, with the presentation of 10 related EU funded projects,
* Trends and Challenges, Mobility as a Service, Added Value Services for Travellers.

_Figure 4-9: Conference Date_

_Figure 4-10: Conference Badges_

_Figure 4-11: Conference Audience_

_Figure 4-12: Conference Translation Facility_

4.3.7.2. Conference Speakers and Statistics

The following numbers summarise the conference statistics:

* 59 companies and organisations represented, including transport operators, travel service providers, technology providers, start-ups, research institutes and authorities;
* 87 delegates (out of 107 that initially registered) (see the list of participating companies and organisations in the Annex);
* 1 moderator (representing the organiser);
* 24 speakers from 11 European countries.

1. Nine (9) speakers represented EuTravel consortium partners, six (6) of whom presented EuTravel project developments:

* Welcome Messages from Organisers, _Albert Tortajada i Flores - FGC and EUTRAVEL Project Manager, Ioanna Fergadioti - Inlecom Systems_
* Multimodal Travel Systems for Passengers in Europe, _David Classey - Travelport_
* Multimodal Travel Planner, _Kyriakos Petrou - EBOS Technologies_
* Value Added Services for Travellers in the API Economy, _Zannis Kalamboukis - CLMS UK_
* Regulatory Challenges of Multimodal Travelling, _Laura Halfhide - Hill Dickinson_

and three (3) of whom presented other projects the consortium partners are involved in:

* AllWays Travelling Project, _Tom Jones - Amadeus_
* IT2Rail Project - Research and Engineering innovation in interoperability technology for distributed mobility applications, _Riccardo Santoro – Trenitalia_
* Mobility as a Service (MaaS) - Concept and Landscape, _Spyros Evangelatos - Inlecom Systems_

2. Seven (7) speakers were invited as members of the EuTravel Forum (non-consortium technical and business stakeholders and experts from the European travel industry with a deep understanding of market needs and currently available solutions). Four (4) speakers represented commercial industry stakeholders, including travel and transport service providers, and presented the following topics:

* Pioneering OTA's Services in the era of multimodal travel, Zdenek Komenda - Kiwi.com
* Social Trip Planning, Oscar Ferruz - Planedia
* Iterative product development based on MaaS platform, Marko Javornik - Comtrade
* Simplifying Intercity Bus Distribution, Pierre Becher - Distribusion.com

It should be noted that at that time, Distribusion was only involved as a Forum member, before becoming an official partner replacing Eurolines.
Three (3) speakers represented commercial technology providers and presented the following topics:

* The Waynaut Multimodal Platform, Simone Lini - Waynaut
* BigData4ATM Project, Ricardo Herranz - Nommon Solutions and Technologies
* CIGO! a platform for pushing mobility policies to the end user, Dr Josep Lluís Larriba Pey - Sparsity Technologies

3. Two (2) speakers represented authorities and presented the following topics:

* Innovations in the Barcelona Transport Network, Ricard Font - Mobility Secretary of Territory and Sustainability Department
* Contactless Card Project, Ramon Bacardit, ATM - BCN Area Transport Authority, T-Mobilitat

4. Six (6) speakers presented other projects relevant to the conference scope:

* DORA Project & Innovative Mobility Services, Patricia Bellver Muñoz - ETRA
* PASSME Project, Designing the Intermodal Hub of the Future, Dr Rebecca Price - TU Delft, Industrial Design Engineering
* OPTIMUM Project, Ruben Costa - Uninova
* TIMON Project, Leire Serrano - DeustoTech-Mobility, Universidad de Deusto
* Mobility4EU - Action Plan for the Future of Mobility in Europe, Dr Beate Müller - VDI/VDE Innovation und Technik GmbH
* TravelSpirit - the Engine Oil Behind 'Mobility as a Service' - The Simply Connect Project, Giles Bailey - TravelSpirit & Simply Connect
* EDITS - Transnational Journey Planning - Achievements and Future Challenges in Central Europe, Dr Bettina Neuhäuser - AustriaTech

5. One (1) speaker represented academia and presented the following topic:

* Smart Software 4.0: What is The Future of Travel? Prof. Juan Miguel Gómez Berbís - Universidad Carlos III de Madrid

The presentations were followed by a networking session to allow interaction with EuTravel Forum members.

4.3.7.3. Dedicated EuTravel conference website

A dedicated website was set up three months before the conference and was used to handle participant registration. The following figures (Figure 4-13 to Figure 4-16) present the content of the official conference website.

_Figure 4-13: EuTravel Conference 1/4_

_Figure 4-14: EuTravel Conference 2/4_

_Figure 4-15: EuTravel Conference 3/4_

_Figure 4-16: EuTravel Conference 4/4_

4.3.7.4. Pre-event announcements

The conference was promoted through all media channels and on the EU Cordis Wire.

_Figure 4-17: EuTravel Conference Announcement_

4.3.7.5. Post-event related articles

Project promotion also followed the event:

* _https://www.linkedin.com/pulse/thank-you-making-eutravel-conference-great-eventioanna-fergadioti/_
* _https://www.timon-project.eu/index.php/news-and-events/timon-news/137-timonproject-at-eu-travel-conference-in-barcelona.html_
* _http://research.mobility.deustotech.eu/news/view/timon-project-at-eutravelconference-in-barcelona/_

4.3.7.6. Program, Proceedings and presentations

The conference program, presentations and proceedings can be viewed and downloaded from the website (Figure 4-18, Figure 4-19). The conference dissemination material, including the conference pass for delegates and the list of participants, can be found in the Annex.

_Figure 4-18: EuTravel Conference Program_

_Figure 4-19: EuTravel Conference Downloadable Presentations_

###### 4.3.8 Cooperation with EuTravel Forum member Accomable - Focus Group Meeting

Accomable is a travel service provider and a member of the EuTravel Forum. It offers a platform for booking specially adapted and accessible hotel rooms and holiday rentals.
Built for disabled and older travellers, and anyone with a mobility issue, Accomable offers more than 1,100 quality adapted places to stay in over 60 countries worldwide. Following discussions with founder and CEO Srin Madipalli, held in the context of T3.3 Living Lab operation learning, refinements and reporting since 2016, Inlecom and Accomable organised a dedicated focus group meeting in London on the 14th of October, with the participation of twenty User Group Members, to contribute to the project's policy and research recommendations (task T4.2: Policy, Standardisation and Research Recommendations).

_Figure 4-20: Accomable Focus Group Meeting_

Related press releases and media coverage can be found in section 4.3.4 - Press releases, articles and media, above. Promotion of the event to disabled community groups through social media is presented in the following table.

# Table 4-10: Accomable Focus Group Meeting Social media coverage

<table> <tr> <th> **Channel** </th> <th> **Circulation** </th> <th> **Posted by** </th> <th> **Link** </th> </tr> <tr> <td> Disability Horizons - social media posts </td> <td> 22,000 Twitter followers 6,497 Facebook members </td> <td> Filipe Roldao </td> <td> _http://disabilityhorizons.com/_ _https://www.facebook.com/DisabilityHorizons/_ _https://twitter.com/DHorizons_ </td> </tr> <tr> <td> Troy Technologies (travel wheelchair manufacturer) community blog </td> <td> n/a </td> <td> Srin Madipalli </td> <td> _https://travelwheelchair.net/blog/limited-mobility-travel-tipsaccomable-ceo-co-founder-srinmadipalli/_ </td> </tr> <tr> <td> HelpHopeLive.org </td> <td> n/a </td> <td> Srin Madipalli </td> <td> _https://helphopelive.org/news/blog_ </td> </tr> <tr> <td> New Mobility </td> <td> 30,000 </td> <td> Srin Madipalli </td> <td> _http://www.newmobility.com/_ </td> </tr> <tr> <td> Leonard Cheshire blog </td> <td> 12,000 members </td> <td> Srin Madipalli </td> <td> _https://www.leonardcheshire.org/support-and-information/latestnews/news-and-blogs_ </td> </tr> <tr> <td> Scope about disability </td> <td> n/a </td> <td> Srin Madipalli </td> <td> _https://community.scope.org.uk/discussion/36614/the-eu-travelproject-are-you-free-on-saturdayto-talk-about-travel_ </td> </tr> </table>

The focus group discussion, results and recommendations are included in detail in deliverable D4.2 Policy, Standardisation and Research Recommendations.

###### 4.3.9 Dissemination through events, workshops and meetings

4.3.9.1. Meetings organised by EuTravel

The most important meetings organised by the coordinator or consortium partners are listed in the following table. These events are relevant to specific focus areas of the project. In some cases EuTravel Forum members and external stakeholders were involved (apart from the table below, see details in Deliverable D4.1 EuTravel Engagement Strategy).
# Table 4-11: List of Project Workshops and Face to Face Meetings

<table> <tr> <th> **a/a** </th> <th> **Description - Type of Meeting - Location** </th> <th> **Participating Partners** </th> <th> **EuTravel Forum Members, external experts** </th> <th> **Date** </th> </tr> <tr> <td> **1** </td> <td> Kick Off Meeting (London, Runnymede Hotel) </td> <td> ALL </td> <td> </td> <td> 25&26 May 2015 </td> </tr> <tr> <td> **2** </td> <td> Workshop - Working Group Meeting (London, SilverRail Offices) </td> <td> SR, CLMS, ILS </td> <td> </td> <td> 18 June 2015 </td> </tr> <tr> <td> **3** </td> <td> Workshop - Working Group Meeting (London, Eurolines Offices) </td> <td> EUL, CLMS, ILS </td> <td> </td> <td> 18 June 2015 </td> </tr> <tr> <td> **4** </td> <td> Workshop - Working Group Meeting (London, Travelport Offices) </td> <td> TRP, CLMS, ILS </td> <td> </td> <td> 19 June 2015 </td> </tr> <tr> <td> **5** </td> <td> Workshop - Working Group Meeting, Liaison with BonVoyage Project (Rome, Trenitalia Offices) </td> <td> TRI, CLMS, ILS, BSE </td> <td> Riccardo Santoro (representing Ferrovie dello Stato Italiane), Stefano Salsano (University of Rome, BonVoyage Project). See also Section 4.3.10 Liaison with other projects and initiatives </td> <td> 26 June 2015 </td> </tr> <tr> <td> **6** </td> <td> Workshop - Working Group Meeting (Barcelona, Ferrocarrils de la Generalitat de Catalunya Offices) </td> <td> FGC, CLMS, ILS </td> <td> </td> <td> 13 July 2015 </td> </tr> <tr> <td> **7** </td> <td> Workshop - Working Group Meeting (Madrid, Amadeus Offices) </td> <td> AMD, CLMS, ILS </td> <td> </td> <td> 14 July 2015 </td> </tr> <tr> <td> **8** </td> <td> Workshop - Working Group Meeting (Roscoff, France, Brittany Ferries Offices) </td> <td> BF, CLMS, ILS </td> <td> </td> <td> 15 July 2015 </td> </tr> <tr> <td> **9** </td> <td> Advisory Boards Meeting (Innovation & Exploitation) & Stakeholders Workshop (London, Travelport Offices) </td> <td> EUL, TRP, CLMS, ILS, TRI, AMD, BF, HD, BMT </td> <td> Association of British Travel Agents and Tour Operators (ABTA) participated in the meeting (representative: Mrs Susan Parsons) </td> <td> 28&29 July 2015 </td> </tr> <tr> <td> **10** </td> <td> Midterm Technical & Consortium Meeting (Athens, Caravel Hotel) </td> <td> ILS, BMT, CLMS, EBOS, EUL, FGC, HD, NCSRD, PD, SR, STI, TRI, TRP, VLTN </td> <td> Professor Theodoros Kalampoukis, Athens University of Economics </td> <td> 9&10 February 2016 </td> </tr> <tr> <td> **11** </td> <td> Exploitation Planning Meeting (Athens, CLMS Offices) </td> <td> CLMS, TRP, ILS </td> <td> </td> <td> 11 February 2016 </td> </tr> <tr> <td> **12** </td> <td> Innovation & Technical Meeting (Athens, SilverRail Offices) </td> <td> CLMS, SR, ILS </td> <td> </td> <td> 24 May 2016 </td> </tr> <tr> <td> **13** </td> <td> Data Exchange & Legal Issues Components Interfacing Meeting (Athens, CLMS Offices) </td> <td> ILS, CLMS, EBOS, NCSRD, TRP </td> <td> </td> <td> 27 May 2016 </td> </tr> <tr> <td> **14** </td> <td> Workshop - Working Group Meeting, Legal and Regulatory Issues affecting the development of an Optimodal Transport Ecosystem in the European Union (London, Hill Dickinson Offices) </td> <td> ILS, BMT, HD, PD, SR, TRI, TRP </td> <td> </td> <td> 13 June 2016 </td> </tr>
<tr> <td> **15** </td> <td> EuTravel Conference (Barcelona, NH Hesperia Tower Hotel) </td> <td> AMD, BMT, BSE, CLMS, EBOS, FGC, ILS, HD, PD, SR, STI, TRI, TRP, VLTN </td> <td> EuTravel Forum members - See 4.3.7.2 Conference Speakers and Statistics </td> <td> 6 October 2016 </td> </tr>
<tr> <td> **16** </td> <td> Advisory Boards Meeting (Innovation & Exploitation) (Barcelona, FGC Offices) </td> <td> AMD, ILS, BSE, BMT, CLMS, FGC, HD, SR, TRP </td> <td> Pierre Becher, Distribusion (became partner after 01.03.2017 in replacement of Eurolines) </td> <td> 7 October 2016 </td> </tr>
<tr> <td> **17** </td> <td> Innovation and Exploitation Planning Meeting, Liaison with IT2Rail Project (Athens, CLMS Offices) </td> <td> AMD, CLMS, ILS </td> <td> </td> <td> 9 to 11 November 2016 </td> </tr>
<tr> <td> **18** </td> <td> Midterm Review Meeting (Innovate UK Offices, Brussels) </td> <td> ALL PARTNERS </td> <td> </td> <td> 6&7 December 2016 </td> </tr>
<tr> <td> **19** </td> <td> Technical meeting with ForthCRS in the context of the Living Labs / Validation of solutions (ForthCRS offices, Athens) </td> <td> CLMS </td> <td> ForthCRS IT team members See also deliverable D4.1 EuTravel Engagement Strategy </td> <td> 15 December 2016 </td> </tr>
<tr> <td> **20** </td> <td> Exploitation Board Meeting, Commercialisation analysis for EuTravel Planner (Travelport Greece Offices, Athens) </td> <td> TRP, CLMS, ILS </td> <td> Travelport Greece Vasiliki Hatzikosta Travelport Support Services Yunus Konak </td> <td> 6&7 June 2017 </td> </tr>
<tr> <td> **21** </td> <td> Technical meeting to explore in detail liaison opportunities with the IT2Rail project (UNIFE Offices, Brussels) </td> <td> AMD, CLMS </td> <td> Stefanos Gogos representing the IT2Rail project and the European Rail Industry (UNIFE) See also Section 4.3.10 Liaison with other projects and initiatives. </td> <td> 9 January 2017 </td> </tr>
<tr> <td> **22** </td> <td> Technical Meeting & Exploitation Board Meeting, Minimum Connection Times Challenge (Travelport Greece Offices, Athens) </td> <td> NCSRD, CLMS, EBOS, SR, TRP, ILS </td> <td> </td> <td> 11&12 July 2017 </td> </tr>
<tr> <td> **23** </td> <td> Focus Group Meeting with user group with mobility problems to consolidate views in Policy and Research Recommendations (Accomable Offices, Leonard Cheshire Disability, London) </td> <td> ILS, TRR </td> <td> Accomable, Srin Madipalli Vicky Clayton User Group Members. See also section 4.3.8 Cooperation with EuTravel Forum member Accomable - Focus Group Meeting and deliverable D4.2 Policy, Standardisation and Research Recommendations. </td> <td> 14 October 2017 </td> </tr>
<tr> <td> **24** </td> <td> Final Review Meeting (planned) (Innovate UK Offices, Brussels) </td> <td> ALL PARTNERS </td> <td> </td> <td> 16&17 January 2018 </td> </tr>
</table>

4.3.9.2. Participation in Conferences and meetings to promote EuTravel

EuTravel participated in several events in order to present the project as a whole, focusing on the vision, the objectives and the impact of the results.
**Conferences related to EuTravel Research Papers**

* 16-19 May 2017: 8th International Conference on Ambient Systems, Networks and Technologies (ANT), Madeira, Portugal. Paper Title: API mashups: How well do they support the travellers' information needs? Partner: VLTN. Researcher: Bill Karakostas
* 21-25 May 2017: 9th International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA), Barcelona, Spain. Paper Title: A knowledge graph for travel mode recommendation and critiquing. Partner: VLTN. Researcher: Bill Karakostas
* 03-06 July 2017: 22nd IEEE Symposium on Computers and Communications (ISCC), International Workshop on Intelligent & Sustainable Urban Transportation, Crete, Greece. Paper Title: Service Availability Analysis of a Multimodal Travel Planner Using Stochastic Automata. Partner: CLMS. Researcher: Zannis Kalampoukis
* 17-21 September 2017: 18th Working Conference on Virtual Enterprises (PRO-VE), Vicenza, Italy. Paper Title: The Network Structure of Visited Locations According to Geotagged Social Media Photos. Partner: STI. Researcher: Zaenal Akbar
* 23-27 October 2017: OTM 2017: On the Move to Meaningful Internet Systems Conference, Rhodes, Greece. Paper Title: Complete Semantics to Empower Touristic Service Providers. Partner: STI. Researchers: Zaenal Akbar, Ioan Toma

**Other Conferences and Events**

29 May – 2 June 2016: EuTravel was promoted at the 13th International Conference, ESWC 2016, Heraklion, Crete, Greece. Participating partner representing EuTravel: STI. Proceedings 'The Semantic Web. Latest Advances and New Domains' available at Springer. Editors: Harald Sack, Eva Blomqvist, Mathieu d'Aquin, Chiara Ghidini, Simone Paolo Ponzetto, Christoph Lange, _ISBN: 978-3-319-34128-6 (Print) 978-3-319-34129-3 (Online)_. ESWC is the top conference on semantics in Europe and the second worldwide (estimated participation 250 researchers and academics). It is a major venue for discussing the latest scientific results and technology innovations around semantic technologies and their applications in various domains, including travel and many more. It is attended by both academia and industry. The EuTravel project sponsored the 13th edition of ESWC as a gold sponsor (Partner: STI). The aim was to raise awareness about the work done in the EuTravel project and about the artifacts and platform produced by the project. During the event, several researchers and industry practitioners showed interest in the ontology produced in the EuTravel project. STI had follow-up contacts and interaction with them. Some of these researchers attended the follow-up EuTravel Conference. 150 EuTravel brochures were handed out.

6 September 2016: Presentation of EuTravel at the DRV German Travel Association Expo/Annual Convention, Schauinsland-Reisen GmbH, Duisburg, Germany. Participating partner representing EuTravel: Travelport

5 October 2016: Presentation of EuTravel at the FerryGateway Association (FGWA) (EuTravel Forum member) briefing in London, where, apart from Brittany Ferries, key association members' representatives were present: Color Line, DFDS, Irish Ferries, P&O Ferries, Stena Line, Tallink and Viking Line. The discussion covered the presentation of the technical solution and the involvement of ferry companies in the Living Labs. Participating partners representing EuTravel: BF and CLMS

17 September 2017: Presentation of EuTravel at the IATA Travel Partners Standards Council (TPSC).
The Travel Partners Standards Council (TPSC) is a forum for airlines and surface transportation companies to develop standards that facilitate the exchange of passengers between different modes of transportation. TPSC looks after air/surface operator standards. Travelport participates in the EU Rail standards working group and the IATA Rail Partners Working Group. Travelport (David Classey) was chairing the event.

9 October 2017: Presentation of EuTravel at the InterFerry conference, Split, Croatia. EuTravel was presented as part of PANEL 2: Intermodal Travel - Planes, Trains, Automobiles and... Ferries (individual presentations followed by a panel discussion). Panel:

* Christophe Mathieu – Brittany Ferries, France
* David Rowan – WIRED UK, UK
* Andrew Steele – SilverRail, UK
* Alan Warburton – Pharos, UK

Topic description as presented on the conference website: _Transportation providers are waking up to the fact they can play an important part in the movement of people from origin to destination by several modes of transport, which has been greatly facilitated by digitalisation. Developments in the seamless exchange of information, known as interoperability, is facilitating the connection of more than one travel mode for customers to search, plan and book their journeys in ways that were thought impossible not so long ago. Travel modes such as air and rail have woken up to the fact it is now very possible to offer intermodal solutions. They are embracing the opportunities offered by connecting real-time data from transportation operators, global distribution systems (GDS) and online service providers through a larger choice of distribution channels and to new markets. Driven by political and environmental factors and combined with the need to grow sales, the ferry sector could strengthen its offer and be an integral part of the transportation network, alongside other modes of shipping, rail and air transport. With the rise of connectivity and big data, transportation organisations need to be ready for the impact of future technologies that improve the efficiency of intermodal transport. Those organisations that are able to embrace digitalisation will stand the best chances of weathering the coming storms in this arena. Three consortium members of the Horizon 2020 EuTravel project from Pharos, Brittany Ferries and SilverRail will share an overview of the EuTravel project, and how it could be used by Ferry operators to enhance their offering to enable the ferry segment to be purchased as part of a multi-modal travel booking._ _Our panellists from inside and outside the ferry market will share their insight and views on how transportation sectors are tackling this opportunity and what developments could be on the horizon._

###### 4.3.10 Liaison with other projects and initiatives

Towards establishing dialogue and collaboration with related projects, to identify commonalities and important outcomes that would provide input to several tasks, EuTravel consortium members participated in events and took specific actions as described in the following paragraphs:

**4.3.10.1. Participation in activities organized jointly with ITS Cluster H2020 projects**

6-9 June 2016, ITS European Congress, Glasgow: In the context of ongoing collaboration activities, EuTravel, as a member of the H2020 ITS and Connected Vehicle cluster, was demonstrated through the Travel Competition Game developed for the Glasgow congress, showcasing the added value of the services offered by each of the members of the cluster.
The Travel Competition Game was demonstrated at the EC stand (B40). Participating partner representing EuTravel: Inlecom. **Online Game:** _http://its.movenda.com/_ (joint activity under the H2020 ITS and Connected Vehicle Cluster)

**ITS and Connected Vehicles Domain Workshops, organised by INEA, Brussels**

EuTravel was represented in two workshops, on:

* November 2015
* 14 December 2016

Participating partner representing EuTravel: ILS

Apart from discussions that took place during the workshops, teleconference meetings took place with the following projects:

* **ETC:** The account-based ticketing framework suggested by ETC was initially considered by EuTravel, even if it was mostly focused on urban transits. The two projects covered different geographical areas (in terms of available services and data sources). The difficulty of integrating ETC services in EuTravel lies in the fact that ticketing can be realised only for the available booking services and transportation legs integrated in the Common Information Model (i.e. ticketing can be realised only for data/content providers that fully share their services in the EuTravel Ecosystem).
* **MASAI:** Both projects address the same challenges but with an entirely different approach. While MASAI followed a distributed architecture of services, EuTravel followed a centralised architectural approach, consolidating all services under the API of APIs and governed by the Common Information Model. The two projects could potentially 'link' if the services exposed by one could be discovered and consumed by the other. Nevertheless, due to commercially sensitive data managed by EuTravel, NDA restrictions and confidentiality issues (Chapter 5 - Part C: Data Management Plan), such services could not be exposed to any other party besides EuTravel consortium partners. On this issue, see further discussion in Deliverable D4.2: Policy, Standardisation and Research Recommendations.

EuTravel also participated in most of the joint teleconferences organised by the CODECS project regarding joint cluster activities and invited related projects to the EuTravel Conference.

**4.3.10.2. Liaison with ITS Observatory Project**

Participation in workshop: EuTravel participated in the ITS Observatory Project User Requirements Workshop on the 17th of June 2015 in Brussels at ERTICO premises. Participating partner representing EuTravel: Inlecom

Cross-fertilisation of activities: Following the discussions during the workshop and the ITS cluster meeting, and in the context of Task 1.3 - Technology assessment and EuTravel Knowledge Base and Observatory, EuTravel took into consideration the classification of ITS technologies provided by the ITS Observatory project coordinator on the 19th of April 2016 (Figure 4-21). See also the related Deliverable D1.3 Technology Knowledge Base and Observatory.

_Figure 4-21: ITS Technologies Taxonomy proposed by ITS Observatory project_

4.3.10.3. Liaison with BonVoyage Project

The project coordinators met on 26 June 2015 at Trenitalia Offices. Both projects aimed at addressing similar challenges related to multimodality. In fact, while EuTravel's focus has been on passenger transits, BonVoyage has been addressing both passenger travel and freight transport. The coordinators agreed to exchange use-case scenarios to be considered in implementation. Through the common partner Trenitalia, the two projects exchanged experiment plans, scenarios and use cases during the design phase.
4.3.10.4. Liaison with SHIFT2RAIL (IT2RAIL Project)

EuTravel and IT2Rail consortium partners (including AMADEUS and TRENITALIA, who are partners in both projects) discussed project integration and cross-fertilisation of results. Two meetings took place to further investigate this opportunity:

* On 9-11 November 2016 in Athens (AMD, CLMS, ILS)
* On 9 January 2017 in Brussels (AMD, CLMS, ILS, UNIFE)

IT2Rail had the verbal support of Carlo Borghini (executive director of the Shift2Rail Joint Undertaking) for an exercise of this type to be accommodated, because it could assist in the appeal of the technical frameworks of both projects vis-à-vis the market and thereby facilitate market take-up.

1. The objective of the meetings was exploratory: to establish what would be required technically and procedurally from both projects to achieve the linkage.
2. The linkage being discussed would enable both projects to 'prove' the advantage of semantics technology in terms of the speed and ease with which a new player (e.g. transport operator, distributor) can join a multimodal eco-system and establish multiple connectivity (not only with fellow eco-system members but also with members of other semantic eco-systems), versus the current situation where multiple connectivity comes at very high cost and very slow time-to-market.

On this issue, see further discussion in Deliverables D4.2: Policy, Standardisation and Research Recommendations and D5.2: Innovation Board - Advisory Committee Conclusions.

###### 4.3.11 Knowledge Transfer and Training

4.3.11.1. EuTravel Knowledge Base - Observatory

The EuTravel Observatory is hosted under _http://www.eutravelproject.eu/Observatory._ It consolidates content from deliverable D1.3 Technology Knowledge Base and Observatory, with special focus on the library of selected technologies to be monitored throughout the project, but also includes content from other deliverables, such as information on related standards from D1.2: Policy, Legal and Standardisation. Content has been periodically updated throughout the project by EBOS with the contribution of all partners and reviewed by the project coordinator. The observatory will be supported by ILS and EBOS for at least two years after the project and will be updated with related content from other projects such as MaaS4EU and other new initiatives. See more details in deliverable D1.3 Technology Knowledge Base and Observatory.

4.3.11.2. Knowledge base features

Based on research into several existing knowledge base portals, we identified the required features to include in the development of the new EuTravel knowledge base and observatory. The following section provides a summary description of the features of the knowledge base portal. The vision has been to implement a modern knowledge base which uses the latest technology to achieve cross-browser and multi-device compatibility. The knowledge base performs well under all major browsers like Safari, Chrome, Firefox, and Internet Explorer. Following the latest web design technologies, the knowledge base is also responsive and fully compatible with different devices like laptops, tablets and smartphones. In order to achieve easy access to content, the knowledge base has a two-level categorization that includes the Category and Sub-category. There are mainly three (3) types of pages: Home, Categories/Sub-Categories and Articles.
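To make the two-level categorization concrete, the following minimal sketch models Categories, Sub-Categories and Articles as a simple data structure, with articles listed in descending order of publication date as on the portal. The class and field names are illustrative assumptions only and do not reflect the actual EuTravel implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Article:
    title: str
    content: str
    published: str  # ISO date string, e.g. "2017-10-14"

@dataclass
class SubCategory:
    name: str
    description: str
    articles: List[Article] = field(default_factory=list)

@dataclass
class Category:
    name: str
    subcategories: List[SubCategory] = field(default_factory=list)

def articles_by_date(category: Category) -> List[Article]:
    """Collect all articles under a Category, sorted descending by published
    date, mirroring how the portal lists articles within Category sections."""
    articles = [a for sub in category.subcategories for a in sub.articles]
    return sorted(articles, key=lambda a: a.published, reverse=True)
```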
The Administrator can access and manage the Articles through the administration control panel, a user-friendly and easy-to-use interface, as described below and in further detail in D1.3 Technology Knowledge Base and Observatory. As shown in the figures below, the knowledge base contains separate sections for each "Category" related to the EuTravel project. The articles enclosed in "Category" sections are sorted in descending order by publication date. "Category" options are also available in an expanded/collapsed menu on the right side of the page, as shown in Figure 4-22 and Figure 4-23.

_Figure 4-22: EuTravel Knowledge Base – Categories_

_Figure 4-23: EuTravel Knowledge Base - Categories – Menu expanded_

When clicking on a Category name, the user is redirected to the corresponding Category page and all the Sub-Categories are populated, as shown in Figure 4-24. For the purpose of user-friendly navigation and design, each "Sub-Category" is presented in a separate section (Figure 4-25). Finally, each Category and Sub-Category section can be collapsed or expanded from the menu.

_Figure 4-24: EuTravel Knowledge Base – Subcategories_

_Figure 4-25: EuTravel Knowledge Base – Subcategory Selection_

When users enter a Sub-Category they can see detailed information about it, along with a list of related articles. By choosing an article, it is automatically loaded on the screen (Figure 4-26).

_Figure 4-26: EuTravel Knowledge Base – Article_

4.3.11.3. Knowledge base administration

As shown below (Figure 4-27 to Figure 4-29), through the control panel Administrators are able to create articles by choosing preferred fields (title, contents, documents and more). Administrators can also create and save drafts of incomplete articles so that they can complete and publish them at a later date. The control panel provides Administrators with the ability to configure automatic backups of the entire knowledge base content (categories, articles, news, configuration settings, attached files and documents).

_Figure 4-27: Knowledge Base Administration 1/3_

_Figure 4-28: Knowledge Base Administration 2/3_

_Figure 4-29: Knowledge Base Administration 3/3_

4.3.11.4. EuTravel e-Learning portal

Access to the Online Learning Program is provided from the home page of the EuTravel website. A detailed description of the courses and e-learning environment can be found in D4.1 EuTravel Engagement Strategy.

_Figure 4-30: e-learning courses_

##### 4.4 Dissemination Monitoring and Evaluation

The effectiveness of reaching the target audience groups and the impact of the communication activities have been monitored regularly, according to the success indicators and target values set from the inception of WP4. Project success will be measured not only by the actions which took place during the project but by the steps that will be taken after the project to ensure wide take-up by the industry. The dissemination materials produced made use of several dissemination channels to reach the project audiences. The choice of the channel used has a fundamental impact upon the success and outcomes of a communication activity. Different dissemination channels have different strengths and weaknesses, and different channels have been used according to the communication objective/goal and target group.
The dissemination channels, tools, level of reach and success indicators are identified in the following dissemination matrix (Table 4-12):

# Table 4-12: Measurable targets for dissemination activities and targets attainment

<table>
<tr> <th> **Communication Tool** </th> <th> **Type** </th> <th> **Success Indicator** </th> <th> **Value Target** </th> <th> **Actual** **Value** **M36** </th> </tr>
<tr> <td> Project Identity </td> <td> All </td> <td> \- </td> <td> 1 </td> <td> 1 </td> </tr>
<tr> <td> Project Poster </td> <td> Documentation </td> <td> \- </td> <td> 1 </td> <td> 1 </td> </tr>
<tr> <td> Reference PPT </td> <td> Documentation </td> <td> \- </td> <td> 1 </td> <td> 1 </td> </tr>
<tr> <td> Project Leaflet, Newsletters, Factsheets, Infographics, Success Stories - Interviews </td> <td> Publications </td> <td> Number of publications </td> <td> 8 </td> <td> 4 </td> </tr>
<tr> <td> Articles - Whitepapers </td> <td> Publications </td> <td> Number of publications </td> <td> 5 </td> <td> 5 </td> </tr>
<tr> <td> Scientific Papers </td> <td> Publications </td> <td> Number of publications </td> <td> 2-4 </td> <td> 6 </td> </tr>
<tr> <td> Deliverables </td> <td> Publications </td> <td> QA Standards </td> <td> 20 </td> <td> 20 </td> </tr>
<tr> <td> Press Releases </td> <td> Publications </td> <td> Number of Press Releases </td> <td> 5 </td> <td> 2 (+) </td> </tr>
<tr> <td> Policy Briefs </td> <td> Publications </td> <td> Number of publications </td> <td> 2 </td> <td> n/a </td> </tr>
<tr> <td> Website </td> <td> Online Presence </td> <td> Number of users (SEO Metrics) </td> <td> 1500 </td> <td> >5000 </td> </tr>
<tr> <td> Web 2.0 - Social Media </td> <td> Online Presence </td> <td> Social media followers Number of posts </td> <td> 1000 50 </td> <td> 200 60 </td> </tr>
<tr> <td> Dedicated EC Portals </td> <td> Online Presence </td> <td> Number of entries </td> <td> 2-3 </td> <td> 1 </td> </tr>
<tr> <td> Video-slideshow, media </td> <td> Online Presence </td> <td> Number of media elements </td> <td> 5 </td> <td> 2 </td> </tr>
<tr> <td> Project meetings, roundtables </td> <td> Events </td> <td> Number of events </td> <td> 4 </td> <td> 25 </td> </tr>
<tr> <td> Conferences, Workshops </td> <td> Events </td> <td> Number of events Number and type of attendees </td> <td> 4 100 </td> <td> 10 >100 </td> </tr>
<tr> <td> e-learning modules </td> <td> Engagement, Knowledge Transfer </td> <td> Number of enrolled users </td> <td> 150 </td> <td> 40 </td> </tr>
<tr> <td> Knowledge Base </td> <td> Engagement, Knowledge Transfer </td> <td> Number of entries Number of users </td> <td> 50 500 </td> <td> > 80 > 1000 </td> </tr>
<tr> <td> Project Liaison Activities </td> <td> Networking, Knowledge Transfer </td> <td> Number of relevant projects </td> <td> 10 </td> <td> 4 </td> </tr>
<tr> <td> EuTravel Forum members actively involved </td> <td> Engagement, Knowledge Transfer </td> <td> Number of companies - organisations </td> <td> 10-20 </td> <td> 14 </td> </tr>
<tr> <td> Ecosystem Participant Brochure </td> <td> Engagement, Knowledge Transfer </td> <td> Number of service providers reached </td> <td> 100 </td> <td> n/a </td> </tr>
</table>

(+) Published in several channels

###### **Website and social media monitoring**

The following figures present reports from the tool in use for measuring the number of visitors of the EuTravel website.
_Figure 4-31: EuTravel Website Google Analytics Audience Overview - M36_

_Figure 4-32: EuTravel Website Google Analytics Report 2 - M36_

_Figure 4-33: EuTravel Website Google Analytics Report 3 - M36_

_Figure 4-34: Website users' engagement flow diagram_

#### 5\. Part C: Data Management Plan

##### 5.1 Introduction

The success of the development of the EuTravel Ecosystem hinges on linking with existing open data sources and models. To support the Ecosystem development and increase adoption, the EuTravel consortium intends to publish data so that it can be accessed, mined, exploited, reproduced and disseminated, free of charge for the user. With reference to the Guidelines on Data Management in Horizon 2020 [7] and the EuTravel consortium agreement, and acknowledging the exploitation and protection of results, this document describes the Data Management Plan (DMP), which identifies the data that the project will generate, whether and how it will be exploited or made accessible for verification and re-use, and how it will be curated and preserved. This DMP aims to support the EuTravel data management life cycle for all data that will be collected, processed or generated by the project and will be updated during the project. The initial version of this chapter was produced and circulated in M6, describing the principles and procedures related to data management. The following paragraphs describe the final version of the DMP as revised during implementation, including details about the actual datasets used and stored. According to the "Guidelines on Data Management in Horizon 2020", the DMP aims to ensure that data are produced so that researchers may benefit from their use directly and/or apply their methods to data generated by research in Horizon 2020, with such data being:

* Discoverable,
* Accessible,
* Interoperable to specific quality standards.

##### 5.2 Data Management Plan Structure

The DMP provides:

* Data set description - Description of the data generated or collected, its origin, nature and to whom it could be useful.
* Data sharing - Description of how data will be shared, if applicable, including access procedures, outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use. In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. in the case of commercial data).
* Data storing and handling
* Archiving and preservation (including storage and backup) - Description of the procedures that will be put in place for long-term preservation of the data.

##### 5.3 Compliance with Legislation

The following principles define the EuTravel approach:

* Compliance with Legislation: Any real data collected for research and demonstration purposes will be handled in accordance with the Data Protection legislation in the concerned countries and each company handling the data will be registered to handle this type of information with their data protection authority.
* Use Limitation: All information leading to person identities will be encrypted and protected according to EU best practices, e.g. using reference numbers instead of actual names (a minimal sketch follows after this list).
* Security Safeguards: Personal data will be kept secure from potential abuse, during all required processing before storing, until personal identities are eliminated.
* Openness: All collection processes will be transparent on how data is collected, used, and shared.
* Accountability: EuTravel will be accountable for complying with all of the above principles.
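As an illustration of the Use Limitation principle above (reference numbers instead of actual names), the following minimal sketch replaces a traveller's name with an opaque, stable reference number before a record is stored or shared. The field names and the 'REF-' format are assumptions made for illustration only, not the project's actual implementation.

```python
import itertools

_ref_counter = itertools.count(1)
_ref_map: dict = {}  # name -> reference number; in practice this mapping is itself encrypted

def pseudonymise(record: dict) -> dict:
    """Replace the traveller's name with a stable, opaque reference number."""
    record = dict(record)  # work on a copy; the original record is not mutated
    name = record.pop("name")
    if name not in _ref_map:
        _ref_map[name] = f"REF-{next(_ref_counter):04d}"
    record["traveller_ref"] = _ref_map[name]
    return record

# The stored record carries no directly identifying name:
# {'origin': 'LON', 'destination': 'BCN', 'traveller_ref': 'REF-0001'}
print(pseudonymise({"name": "Jane Doe", "origin": "LON", "destination": "BCN"}))
```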
##### 5.4 Informed Consent Procedures

EuTravel will obtain informed consent from anyone participating in the user requirement collection and the testing and validation of solutions - Living Lab use case scenarios outside the consortium. The purpose(s) of the data collection will be clearly specified to the stakeholders, who will be notified each time the purpose is changed. Please also refer to Deliverable D5.1 Management and Progress Reports Project Handbook:

* Chapter 6 - EuTravel Ethics Considerations
* Annex II: Informed Consent Form

Additionally, EuTravel partners will:

* deny unauthorised persons access to the data-processing equipment used for processing the research datasets (equipment access control); prevent the unauthorised reading, copying, modification or removal of data media (data media control);
* ensure that persons authorised to use the data-processing system only have access to the data covered by their access authorisation (data access control).

##### 5.5 Privacy principles

Data confidentiality is an overriding concern throughout the EuTravel project and beyond, as the tools developed in EuTravel will continue to be used afterwards and even rolled out to future applications. EuTravel partners must take concrete technical and legal measures to prohibit, to the best extent possible, unauthorised access to their EuTravel-related data. This applies both to EuTravel partners that collected the data and to partners that were provided with access to, or processed, data on behalf of others. Project participants are already covered by a Consortium Agreement, which encompasses Non-Disclosure clauses, as well as by their contract with the EC, which itself includes clauses on the treatment of data. In addition, NDAs are implemented to cover future users, complemented by security measures in the partners' processing systems (such as the design of EuTravel nodes) aimed at a level of protection which, in line with the principle of proportionality, should be at least the same as that provided to their own confidential information.

###### 5.5.1 Commercial data

The exchange of information at the rate implied by the EuTravel project invites considerations of confidentiality and trade secrets. The EuTravel platform is fed with information that may be of critical importance to the partners that release it. If correlated, it may reveal business methods, pricing, payment terms or other business-sensitive information. While data exchange is important for the Project objectives to be accomplished, a lack of a duty to confidentiality would leave the information exchanged unprotected, something that by itself could challenge the Project's success. The issue of confidentiality has therefore been of critical importance. An obligation to confidentiality should cover in particular:

* Data exchanges undertaken in components interfacing and living labs testing;
* Data transmitted through or uploaded to the EuTravel platform components;
* Data exchanged (on a bilateral basis) between EuTravel partners.

It should be noted that the above duty to confidentiality does not cover only personal data. Quite the contrary: personal information is protected under basic data protection legislation, as analysed below. Here it is particularly technical and other proprietary information that is placed within the scope of protection.
This information may have intellectual property rights protection over it (for instance, in the event that it qualifies for copyright or patent protection), but it may well be the case that it is unprotected raw data that nevertheless still possesses business value for the disclosing party. The preferred means through which to protect confidential information exchanged during execution of the EuTravel project are non-disclosure agreements (NDAs), and these have been signed among project partners. For further details please also refer to Deliverable D5.1 Management and Progress Reports Project Handbook:

* Chapter 6 - EuTravel Ethics Considerations
* Annex I: EuTravel Partners Non-Disclosure Agreement

**Authorisation and Ethical Approvals**

Inlecom is registered with the ICO. However, after the end of the research project, when results are rolled out to organisations which are not partners in EuTravel and therefore not covered by the Consortium Agreement and NDAs in place, it is essential that comprehensive NDAs are signed prior to any disclosure of information. Such agreements should be drafted by the Project Coordinator and entered into whenever deemed important before confidential information exchanges.

###### 5.5.2 Personal data

Data collection forms part of personal data "processing", according to Article 2 of Directive 95/46 ("'processing' shall mean any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organization, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction"). [9] In the EuTravel project context, collection of personal data may take place through:

(a) interaction with individuals that may provide personal information (interviews or evaluation sessions), and
(b) collection of data regarding the solutions testing.

With respect to (a): because of the personal data collection involved, obtaining lawful consent from the individuals concerned, for it to constitute the relevant legal basis for the processing, is imperative. Participants in interviews or other activities must be asked to sign the relevant form, for their consent to be demonstrable in writing. With respect to (b): data may be shared among partners for EuTravel project purposes. According to the Project's Description of Action, while collection will be undertaken by certain project partners, the outcome of this process may be made available to others in order to execute the relevant processing. Consequently, we distinguish between:

1. EuTravel partners who will process personal data by way of collection,
2. EuTravel partners who will process personal data by way of data transfer,
3. EuTravel partners who will not undertake any personal data processing under the Project.

As regards personal data, these are secured or made inaccessible via transformations (e.g. encryption). All sensitive information, whether relating to individuals or to businesses (i.e. information whose disclosure might directly or indirectly cause financial and/or other damages), is made inaccessible so that the individual cannot be traced, and is transformed into aggregates for statistical and analytical purposes [10] [11]. Further, in this context, it is recommended that all EuTravel partners under points (a) and (b) above are instructed to undertake relevant data protection measures.
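A minimal sketch of the approach described above: sensitive fields are omitted from each record so that individuals cannot be traced, and the remaining data is aggregated for statistical and analytical purposes. The field names are assumptions for illustration and do not reflect the actual EuTravel data model.

```python
from collections import Counter

# Assumed field names, for illustration only.
SENSITIVE_FIELDS = {"name", "surname", "email", "phone", "passport_no"}

def strip_sensitive(record: dict) -> dict:
    """Omit all sensitive fields so that the individual can no longer be traced."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

def aggregate_routes(records: list) -> Counter:
    """Aggregate anonymised records into per-route counts usable for statistics."""
    return Counter((r["origin"], r["destination"]) for r in map(strip_sensitive, records))

# Example: only aggregate counts per (origin, destination) pair are retained.
trips = [{"name": "A. B.", "origin": "LON", "destination": "BCN"},
         {"name": "C. D.", "origin": "LON", "destination": "BCN"}]
print(aggregate_routes(trips))  # Counter({('LON', 'BCN'): 2})
```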
It is a key objective of EuTravel that systems are designed so as to limit actual or potential access to personal data to only those parties that need to access or process data, encrypting, securing or masking data so that it is not visible to parties which are not authorised or which do not strictly require it.

###### 5.5.3 Confidentiality of datasets

The following tables briefly summarise the general approach to confidentiality of different categories of data:

<table>
<tr> <th> **Category** </th> <th> **Sub-Category** </th> <th> **Examples** </th> </tr>
<tr> <td> **Commercial Data handled by EuTravel technical systems** Commercial Data handled by EuTravel technical systems constitutes valuable business information for the EuTravel partners concerned. Out of their processing, useful information may be derived on preferred routes, preferred partners and collaborators, client relationships, billing and collection etc. This information constitutes valuable business secrets and, if unauthorised access were granted to it, serious damage could be caused to EuTravel partners. Therefore, as a general principle: this data will not be shared outside the consortium and access within the consortium will be strictly limited to only those parties agreed by the data owner (protected with NDA agreements). Some information may also concern third parties such as customers or subcontractors and thus must be stored and processed in accordance with appropriate data protection rules. </td> <td> Static and Dynamic Datasets Access to Travel and Transport APIs </td> <td> * Shopping Services * Pricing Services </td> </tr>
<tr> <td rowspan="3"> **Performance related data produced by EuTravel** This data represents aggregated derived statistics about the overall performance of the solutions or components. * Technical data related to performance of innovative systems developed within EuTravel may be shared provided it does not compromise commercialisation prospects * Key performance indicators and aggregated results related to testing may be shared with the consent of the appropriate parties provided it is not commercially sensitive. * Key performance indicators and aggregated results related to third parties will not generally be shared without being anonymised and/or with consent to publish being given. </td> <td> Technical data related to performance of systems developed in EuTravel, considering all relevant restrictions with respect to legal considerations. </td> <td rowspan="3"> * Performance related data. * Data presented in KPIs dashboard. </td> </tr>
<tr> <td> Key performance indicators and aggregated results related to testing - living labs. Data in this category require the agreement of all related commercial parties, to eliminate sensitive information. </td> </tr>
<tr> <td> Key performance indicators and aggregated results related to third parties. To minimise the possibility of any damage caused to 3rd parties, all data are anonymised as appropriate, and consent of all affected parties is sought. </td> </tr>
<tr> <td> **Interviews and surveys** Participants in interviews or other activities must be asked to sign a relevant form, in order for their consent to be demonstrable in writing. The form must be composed in accordance with legal requirements, i.e. among others to describe how the information will be used, and how the person concerned may review or amend them.
</td> <td> Results of stakeholder consultations; their consent to record and/or publish is required before gathering information. </td> <td> * Responses to surveys. * Expert interviews. * Participation of individuals in trials or workshops. </td> </tr>
</table>

##### 5.6 EuTravel data sets description

The following tables summarise the data sets collected and used in the project.

# Table 5-1: Commercial/Confidential datasets provided by project Partners protected with NDAs

<table>
<tr> <th> **Data Provider** </th> <th> **Data Type** </th> <th> **Data Format** </th> <th> **Static Data** </th> <th> **Realtime Data** </th> <th> **Mode** </th> <th> **Type of service offered** </th> </tr>
<tr> <td> SilverRail </td> <td> SOAP API </td> <td> XML </td> <td> Operators, Stations, Amenities, Fare Qualifiers, Seating, Regions </td> <td> Shopping, Pricing, Booking, Paying, Retrieving, Modifying, Cancelling, Claiming Value documents, Refunding </td> <td> Rail </td> <td> Static + Realtime </td> </tr>
<tr> <td> Travelport </td> <td> REST API </td> <td> XML, JSON </td> <td> \- </td> <td> Shopping, Pricing, Booking, Paying, Retrieving, Modifying, Cancelling, Claiming Value documents, Refunding </td> <td> Air Rail </td> <td> Static + Realtime </td> </tr>
<tr> <td> National Express </td> <td> Transit Data </td> <td> GTFS </td> <td> Schedules, Timetables </td> <td> \- </td> <td> Coach </td> <td> Static </td> </tr>
<tr> <td> Eurolines </td> <td> Transit Data </td> <td> GTFS </td> <td> Schedules, Timetables </td> <td> \- </td> <td> Coach </td> <td> Static </td> </tr>
<tr> <td> Pharos </td> <td> REST API </td> <td> XML </td> <td> Schedules, Timetables, Routes </td> <td> Shopping, Pricing, Booking, Paying, Retrieving, Modifying, Cancelling, Claiming Value documents, Refunding </td> <td> Ferry </td> <td> Static </td> </tr>
<tr> <td> Trenitalia </td> <td> Transit Data </td> <td> GTFS, CSV </td> <td> Schedules, Timetables, Stations </td> <td> </td> <td> Rail </td> <td> Static </td> </tr>
<tr> <td> Distribusion </td> <td> </td> <td> </td> <td> Schedules, Timetables, Station Pairs </td> <td> Shopping, Pricing, Booking, Paying, Retrieving, Modifying, Cancelling, Claiming Value documents, Refunding </td> <td> Coach </td> <td> Static + Realtime </td> </tr>
</table>

# Table 5-2: Commercial datasets provided by EuTravel Forum members protected with NDAs

<table>
<tr> <th> **Data** **Provider** </th> <th> **Data Type** </th> <th> **Data Format** </th> <th> **Static Data** </th> <th> **Realtime Data** </th> <th> **Mode** </th> <th> **Type of service offered** </th> </tr>
<tr> <td> OAG </td> <td> SOAP, REST API </td> <td> XML, JSON </td> <td> Operators, Timetables, Yearly Schedules, Amenities </td> <td> Availability, Connections, Low Cost Carriers, Travel Planner </td> <td> Air </td> <td> Static + Realtime </td> </tr>
</table>

# Table 5-3: Open datasets provided by EuTravel Forum Members

<table>
<tr> <th> **Data Provider** </th> <th> **Data Type** </th> <th> **Data Format** </th> <th> **Static Data** </th> <th> **Realtime Data** </th> <th> **Mode** </th> <th> **Type of service offered** </th> </tr>
<tr> <td> TMB </td> <td> Transit Data </td> <td> GTFS </td> <td> Schedules, Timetables, Routes </td> <td> </td> <td> Regional Rail </td> <td> Static </td> </tr>
</table>

# Table 5-4: Other open datasets utilised in the project

<table>
<tr> <th> **Data Provider** </th> <th> **Data Type** </th> <th> **Data Format** </th> <th> **Static Data** </th> <th> **Realtime Data** </th> <th> **Mode** </th> <th> **Type of service offered** </th> </tr>
<tr> <td> OpenStreetMap </td> <td> Map, REST API, Geodata </td> <td> OSM, Shapefile </td> <td> Maps, Public Transport Routes, Railways </td> <td> \- </td> <td> Ferry Rail Metro </td> <td> Static </td> </tr>
<tr> <td> Google Places API </td> <td> REST API </td> <td> JSON, XML </td> <td> Businesses, Timetables, Nearby attractions </td> <td> </td> <td> </td> <td> Static </td> </tr>
</table>

# Table 5-5: Other datasets collected in the project

<table>
<tr> <th> **EuTravel Data set** </th> <th> **Title** </th> <th> **Description** </th> <th> **Origin** </th> <th> **Nature** </th> <th> **Stakeholder interest** </th> </tr>
<tr> <td> WP1 - T1.1 </td> <td> Requirements analysis survey data </td> <td> Data collected in questionnaire survey </td> <td> BMT </td> <td> Quantitative and qualitative feedback </td> <td> Related travel research </td> </tr>
<tr> <td> WP3 - T3.3 </td> <td> Testing of use case scenarios </td> <td> Data collected during solutions testing </td> <td> CLMS, EBOS </td> <td> Quantitative and qualitative feedback </td> <td> Related travel research </td> </tr>
<tr> <td> WP4 </td> <td> Engagement with user groups and EuTravel forum members </td> <td> Data collected during interviews, meetings, and solutions testing </td> <td> ILS </td> <td> Quantitative and qualitative feedback </td> <td> Related travel research </td> </tr>
</table>

# Table 5-6: Other datasets generated in the project

<table>
<tr> <th> **EuTravel Data set** </th> <th> **Title** </th> <th> **Description** </th> <th> **Origin** </th> <th> **Nature** </th> <th> **Stakeholder interest** </th> </tr>
<tr> <td> WP3 - T3.3 </td> <td> Dashboard KPIs </td> <td> Data collected during solutions testing </td> <td> CLMS, EBOS </td> <td> Dynamic quantitative measurements </td> <td> Related travel research </td> </tr>
</table>

##### 5.7 Data sharing

This section of the report provides a description of how data will be shared, including access procedures for dissemination and necessary software and other tools for enabling re-use.

###### 5.7.1 Software Components

The main reusable components built by the project are made available as open source to the industry through the project website under Solutions: http://www.eutravelproject.eu/Solutions. The key reusable outcomes include:

1. The EuTravel Common Information Model, which is the backbone of the technologies built as part of the EuTravel platform. The final version of the domain model unifies the various terminologies used in air, rail, ferry and coach transport modes under a single hierarchical structure. It is downloadable in JSON, XML, OWL and UML formats.
2. The Unified Travel Ontology schema, which is downloadable in OWL format.

###### 5.7.2 Scientific research publications

The intention of the EuTravel project is to publish scientific papers under Open Access. Two methods of publishing have been evaluated: Gold Open Access, where the version of record is made freely available via the publisher's platform, and Green Open Access, where access without payment is provided to a version of a publication through a repository [8].

Green Open Access

* The author makes the work available by archiving it in a repository. This may be an institutional repository or a subject-based or central repository.
* Usually this version should be the author's final pre-publication version: the peer-reviewed, accepted manuscript.
* No charges are payable.
* Access may be subject to an embargo, depending on the publisher's self-archiving policy.
Gold Open Access

* The work is made freely available to the end user via the publisher's website.
* An APC (article processing charge) is usually charged.
* The version made available is the final publisher's version.
* The work is available immediately, with no embargo periods.

Although it is preferable to publish in online publications free of charge, when this is not possible the partner concerned will cover the associated costs. The EuTravel Open Access strategy relates to the EU "Open" paradigm for publishing project results, which foresees the two ways mentioned above.

##### 5.8 Data storing and handling

###### 5.8.1 Front-end Travel Planner and personal data management (EBOS)

One of the most critical components of the EuTravel portal is the User (Traveller) Profile, which incorporates personal and travel preferences data. This applies only to users registered to the portal. Specifically, the user profile is divided into four sections:

1. Personal,
2. Family Members,
3. Account,
4. Preferences and Multimodal information.

As described in deliverable D2.3 "One-stop, cross-device multilingual interface", data related to these sections is stored independently from the Super API in a local UI database. Part of this data is shared via RESTful Web Services with the Super API for planning, booking and ticketing purposes. In the personal information section, the following fields are stored per traveller:

<table>
<tr> <th> • Title </th> <th> • Alt Number </th> </tr>
<tr> <td> • First Name </td> <td> • Address </td> </tr>
<tr> <td> • Middle Name </td> <td> • City </td> </tr>
<tr> <td> • Last Name </td> <td> • Zip Code </td> </tr>
<tr> <td> • Gender </td> <td> • Country </td> </tr>
<tr> <td> • Birthday </td> <td> • Passport No </td> </tr>
<tr> <td> • Phone Number </td> <td> </td> </tr>
</table>

Also, in the family members section, the traveller is able to include family members by adding their Name, Surname, Relationship, Birthday and Phone information. By combining information from the personal and family members' sections, the traveller can proceed to book the planned itinerary for the whole family. Therefore, the front-end shares with the Super API personal information such as the Name, Surname and Birthday of all travellers, along with the email and phone number of the lead passenger. Furthermore, information related to the user credentials can be changed based on user choice. Passwords are not shared with the Super API and are only related to the front-end (UI) processes. From a security point of view, passwords are encrypted using a 128-bit encryption algorithm. Finally, an important aspect of the user profile is the preferences and multimodal sections. The traveller has the capability to customise the related parameters of these sections in order to retrieve personalised results from the Super API. These parameters are the following:

**Preferences Section**

* Travel Duration
* Distance
* Special Assistance
* Carbon Emission
* Price Ordering
* Max Returned Solutions

**Multimodal Section**

* Inter-modal Waiting Time
* Intra-modal Waiting Time

###### 5.8.2 Back-end – API of APIs (CLMS)

The API of APIs (Super API) receives user preference information over the internet, as optional attachments in planning and shopping requests, during the first stage of communication with the front-end EuTravel Planner. Upon receipt, the API of APIs persists this information along with the remaining transport information, such as origin and destination locations, the respective dates and passenger count,
for future reference, non-repudiation guarantees and statistics extraction. During the typical workflow, beginning with trip planning and ending with ticketing, the API of APIs' orchestration layer associates passengers with trips by name, as required by the specified functionality, especially booking and ticketing. In scenarios such as KPI measurement or pattern extraction, the API of APIs complies with standard European regulation by performing aggregation strictly on anonymised data. **Anonymisation** is achieved by filtering and omitting all sensitive fields from every piece of information before performing any kind of processing. In this way, the requested functionality is fulfilled without jeopardising user privacy and personal information. Data is stored on the Microsoft Azure cloud, on a secure database server. User preference information is aggregated, anonymised and partially shared, as part of the KPI Dashboard, Business Intelligence Dashboard and Mobile Diary applications (Deliverable 2.4: EuTravel Value Added Services), with the consortium members and external users engaged in the Living Lab, following the Living Lab participation terms. Such interactions will be secure at all times, to avoid jeopardising user privacy. See also deliverable D3.2: Living Lab Setup Deployment of initial EuTravel Ecosystem, Section 5.5: EuTravel approach to data storing and handling.

##### 5.9 Archiving and preservation

Any raw, generated or meta-data from the EuTravel project should be preserved and archived by the project partners. Raw data collected from industrial partners in a predefined way (file format, fields, etc.) can be stored in a database using the existing schema. The entire storage data set will be archived for five years after the end of the project. The files containing the datasets will be versioned over time. Also, the datasets will be automatically backed up on a monthly basis.

#### 6\. Conclusions

The Communications programme approach within the EuTravel project framework has been proactive, including:

1. Dissemination Strategy and Plan

From an early stage, the dissemination objectives were linked to tangible dissemination goals, followed by the identification of target groups, key messages to be communicated through the appropriate communication channels, associated activities and scheduling of implementation.

2. Rollout of activities in alignment with Stakeholder Engagement and the EuTravel Forum, towards EU-wide adoption and exploitation of results

The EuTravel dissemination actions have been closely linked to knowledge transfer actions with a view to promoting EU-wide adoption of results in the industry, addressing and involving market representatives and business stakeholders whenever possible. A number of dissemination activities have taken place as early as the project start, such as the set-up of the EuTravel website and social media channels. The project has been promoted through press releases, online articles, white papers and the organisation of and participation in conferences and events.

3. Liaison actions with other projects and initiatives

The project has been promoted within the scientific community and liaison with other projects has been established, but within a limited scope. Fruitful collaboration with other projects could be established if the services exposed by one project could be discovered and consumed by another (as in the cases of MASAI and IT2RAIL, which were investigated).
To achieve this, apart from addressing problems like different projects' timelines and technical constraints due to different architectures, services developed within projects should be able to be exposed to other parties besides consortium partners. In EuTravel, dissemination activities have been implemented in line with Part C of the deliverable (Data Management Plan), which describes the overall management principles on data, including sensitive commercial and personal data, reflecting the state of the Consortium agreements and NDAs on data management. Being bound by agreements that protect commercially sensitive data did not allow actual integration with other projects' services, which could have led to interesting research results and more open, inclusive solutions for the industry.

4. Communications and dissemination activities monitoring mechanisms

Set dissemination targets have been monitored and evaluated regularly. The level of attainment has been overall satisfactory. Dissemination activities will continue for at least three years from the end of the project, with a stronger focus on commercialisation and with the key goal to attract more industry stakeholders.

Lastly, significant achievements within the scope of the work described in this deliverable include:

* The organisation of a very successful midterm Conference in Barcelona in 2016, involving EuTravel Forum members, market representatives and business stakeholders.
* Wide dissemination through events, workshops and meetings - over twenty meetings and workshops were organised.
* The organisation of a focus group meeting in London in 2017, to capture the views and feedback of a representative user group with mobility problems, consolidated in the project's research and policy recommendations.
* The publication of six scientific papers presented at related European Scientific Conferences.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0255_flora robotica_640959.md
# Introduction

We have done a survey of available data repositories but could not identify an appropriate repository that would fit the project's focus. This might be due to our nontraditional, innovative, and interdisciplinary approach of combining plant science, computer science, engineering, and architecture. For example, there are multiple data repositories that focus on plant science which are, however, very specific to genetic data, geographical data, etc. Therefore, we have decided to use a multipurpose data repository. We have chosen Zenodo 1 due to its assumed high reliability now and in the future. In addition, we expect to generate large amounts of data. The overhead of storing and handling all of that data seems not justifiable. Hence, we have decided on a hybrid approach: selected data is stored only for the lifetime of the project on our project's Cloud service hosted by OwnCube e.U. 2 and is shared based on individual requests. A more selective choice of data will be uploaded to Zenodo, allowing for preservation beyond the project's lifetime. Finally, we plan a strategy of storing almost all data with one service (Zenodo) for simplicity and as a holistic approach. At the same time, we allow members of our project consortium to continue their ongoing data sharing procedures, especially concerning self-archiving practices for publications (conference papers and journal papers) and open-source availability of developed software via software repositories such as GitHub 3 and sourceforge 4 .

# Specifications for each data set

## 2.1 DATA1 – publications

### 2.1.1. Data set reference and name

DATA1 – publications (conference papers, journal papers, monographs)

### 2.1.2. Data set description

Scientific and peer-reviewed papers that are published by members of the project consortium. Groups with interest in this type of data set are the readerships of the respective conferences and journals.

### 2.1.3. Standards and metadata

We will use Zenodo 5 . Hence, Zenodo also defines and handles the metadata. Citing from their policies 6 : "Metadata is licensed under CC0, except for email addresses. All metadata is exported via OAI-PMH and can be harvested. [...] All metadata is stored internally in MARC [...]. Metadata is exported in several standard formats such as MARCXML, Dublin Core, and DataCite Metadata Schema according to OpenAIRE Guidelines." Version control is not necessary. Furthermore, we keep for each publication in this data set a BibTeX entry and its DOI 7 .

### 2.1.4. Data sharing

The publications will be uploaded as PDF (Portable Document Format) to the servers of Zenodo. In addition, an incomplete selection of papers will be uploaded to http://arXiv.org and members of the project consortium will also continue their self-archiving practices.

### 2.1.5. Archiving and preservation (including storage and backup)

The data will be curated by Zenodo, who state about their retention period: "Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for the next 20 years at least. [...] In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject based repositories." 8

## 2.2 DATA2 – photo & video material of experiments

### 2.2.1. Data set reference and name

DATA2 – photo & video material of plant & robot experiments
## 2.2 DATA2 – photo & video material of experiments

### 2.2.1 Data set reference and name

DATA2 – photo & video material of plant & robot experiments

### 2.2.2 Data set description

Generated data, photos and videos taken during experiments with natural plants and robots. Data is of interest for plant scientists and roboticists. It partially underpins scientific publications. The data is probably too specific to allow for integration.

### 2.2.3 Standards and metadata

Data is labeled with an experiment ID. Photos are numbered in chronological order or alternatively labeled directly with the time in the experiment (e.g., minutes after start of the experiment). Videos are labeled with the experiment ID and numbered chronologically if there are multiple parts. Version control is not necessary. For each experiment dataset there is a short textual description of the experiment itself, when it was started, when it was ended, and who supervised it. For the data that is uploaded to Zenodo, Zenodo defines and handles the metadata (see above).

### 2.2.4 Data sharing

Given the amount of this kind of data that will be generated throughout the project, we do not plan a central data deposit for all of it. Individual photos and videos that are representative of the respective experiment and/or relevant for publications will be stored in the project's Cloud service (OwnCube e.U., An der Liesing 2-34/7, 1230 Wien, Austria) for the duration of the project and shared among project partners; data requests for that data from outside the project will be handled individually. A smaller selection of that data will be uploaded to the servers of Zenodo and hence allow for easy sharing.

### 2.2.5 Archiving and preservation (including storage and backup)

For the data stored on our cloud service we cannot guarantee preservation beyond the project's lifetime, but that is taken care of by the smaller selection of data that will be uploaded to Zenodo (lifetime of the host laboratory CERN is 20+ years, see above).
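As a concrete reading of the labeling scheme above, here is a minimal sketch of how an experiment's photo names and the accompanying description file could be generated; the directory layout, field names, and the `exp042` ID are illustrative assumptions, not a fixed project convention.

```python
from datetime import datetime
from pathlib import Path

def photo_name(experiment_id: str, index: int) -> str:
    """Chronologically numbered photo label, e.g. 'exp042_0013.jpg'."""
    return f"{experiment_id}_{index:04d}.jpg"  # naming pattern is an assumption

def write_description(experiment_id: str, started: datetime, ended: datetime,
                      supervisor: str, text: str, root: Path = Path(".")) -> Path:
    """Short textual description kept next to the data of one experiment."""
    path = root / f"{experiment_id}_description.txt"
    path.write_text(
        f"experiment: {experiment_id}\n"
        f"started: {started.isoformat()}\n"
        f"ended: {ended.isoformat()}\n"
        f"supervisor: {supervisor}\n"
        f"description: {text}\n"
    )
    return path

# Example: label the 13th photo of a hypothetical experiment 'exp042'.
print(photo_name("exp042", 13))  # -> exp042_0013.jpg
```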
## 2.3 DATA3 – developed software

### 2.3.1 Data set reference and name

DATA3 – developed software

### 2.3.2 Data set description

A variety of different software packages are going to be developed in this project, including software to run microprocessors (e.g., Raspberry Pi), sensors, and actuators, software to run complete robot platforms, and also simulation software frameworks. The developed software is mostly of interest for roboticists, while some of the simulation software might also be of interest for plant scientists. The software partially underpins scientific publications and is probably too specific to allow for integration.

### 2.3.3 Standards and metadata

For services such as GitHub and sourceforge the metadata is very specific and depends on the respective hosting service. For the data that is uploaded to Zenodo, Zenodo defines and handles the metadata (see above).

### 2.3.4 Data sharing

For software we plan a hybrid approach. For software that is under development (i.e., within the project's lifetime) we are going to use a project-internal software versioning and revision control tool based on Apache Subversion (SVN), which is hosted by the "Zentrum für Informations- und Medientechnologien (IMT)" of the University of Paderborn (UPB, project coordinator). For software releases we are going to use standard and well accepted hosting services for scientific open-source software such as GitHub and sourceforge, but also provide the software via Zenodo.

### 2.3.5 Archiving and preservation (including storage and backup)

For the software stored on our project-internal software versioning system we cannot guarantee preservation beyond the project's lifetime, but that is taken care of by the released software packages that will be uploaded to Zenodo (lifetime of the host laboratory CERN is 20+ years, see above).

## 2.4 DATA4 – sensor data of experiments

### 2.4.1 Data set reference and name

DATA4 – sensor data of plant & robot experiments

### 2.4.2 Data set description

Generated sensor data acquired during experiments with natural plants and robots. This includes data from sensors monitoring the plants (temperature, humidity, etc.) and from sensors of the robots (proximity sensors, force sensors, etc.). Data is of interest for plant scientists and roboticists. It partially underpins scientific publications. The data is probably too specific to allow for integration.

### 2.4.3 Standards and metadata

Data is labeled with an experiment ID and numbered in chronological order or alternatively labeled directly with the time in the experiment. For each experiment dataset there is a short textual description of the experiment itself, when it was started, when it was ended, and who supervised it. For the data that is uploaded to Zenodo, Zenodo defines and handles the metadata (see above). Version control is not necessary.

### 2.4.4 Data sharing

Given the amount of this kind of data that will be generated throughout the project, we do not plan a central data deposit for all of it. Individual data sets that are representative of the respective experiment and/or relevant for publications will be stored in the project's Cloud service (OwnCube e.U., An der Liesing 2-34/7, 1230 Wien, Austria) for the duration of the project and shared among project partners; data requests for that data from outside the project will be handled individually. A smaller selection of that data will be uploaded to the servers of Zenodo and hence allow for easy sharing.

### 2.4.5 Archiving and preservation (including storage and backup)

For the data stored on our cloud service we cannot guarantee preservation beyond the project's lifetime, but that is taken care of by the smaller selection of data that will be uploaded to Zenodo (lifetime of the host laboratory CERN is 20+ years, see above).
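For instance, a sensor log labeled with the experiment ID could be written as a simple CSV time series, as in the minimal sketch below; the column names, sampling scheme, and file layout are illustrative assumptions.

```python
import csv
import time

def log_sensors(experiment_id: str, read_sensors, n_samples: int,
                period_s: float = 1.0) -> str:
    """Append time-stamped sensor readings to '<experiment_id>_sensors.csv'.

    `read_sensors` is any callable returning a dict such as
    {'temperature_C': 21.3, 'humidity_pct': 55.0} (placeholder channels).
    """
    path = f"{experiment_id}_sensors.csv"
    with open(path, "w", newline="") as fp:
        writer = None
        for _ in range(n_samples):
            row = {"t_s": round(time.time(), 3), **read_sensors()}
            if writer is None:  # create header from the first sample
                writer = csv.DictWriter(fp, fieldnames=list(row))
                writer.writeheader()
            writer.writerow(row)
            time.sleep(period_s)
    return path

# Example with a dummy sensor source:
# log_sensors("exp042", lambda: {"temperature_C": 21.3, "humidity_pct": 55.0}, 3)
```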
## 2.5 DATA5 – logging & tracking data of experiments

### 2.5.1 Data set reference and name

DATA5 – logging & tracking data of plant & robot experiments

### 2.5.2 Data set description

Generated data acquired during experiments with natural plants and robots, including data generated by image processing of photos and logging data of our hardware systems. Data is of interest for plant scientists and roboticists. It partially underpins scientific publications. The data is probably too specific to allow for integration.

### 2.5.3 Standards and metadata

Data is labeled with an experiment ID and numbered in chronological order or alternatively labeled directly with the time in the experiment. For each experiment dataset there is a short textual description of the experiment itself, when it was started, when it was ended, and who supervised it. For the data that is uploaded to Zenodo, Zenodo defines and handles the metadata (see above). Version control is not necessary.

### 2.5.4 Data sharing

Given the amount of this kind of data that will be generated throughout the project, we do not plan a central data deposit for all of it. Individual data sets that are representative of the respective experiment and/or relevant for publications will be stored in the project's Cloud service (OwnCube e.U., An der Liesing 2-34/7, 1230 Wien, Austria) for the duration of the project and shared among project partners; data requests for that data from outside the project will be handled individually. A smaller selection of that data will be uploaded to the servers of Zenodo and hence allow for easy sharing.

### 2.5.5 Archiving and preservation (including storage and backup)

For the data stored on our cloud service we cannot guarantee preservation beyond the project's lifetime, but that is taken care of by the smaller selection of data that will be uploaded to Zenodo (lifetime of the host laboratory CERN is 20+ years, see above).

## 2.6 DATA6 – logging data & images of simulations

### 2.6.1 Data set reference and name

DATA6 – logging data & images of simulations

### 2.6.2 Data set description

Generated data acquired during simulations of natural plants and robots, including logging data of simulations, screenshots, and other visual representations of data. Data is of interest for plant scientists and roboticists. It partially underpins scientific publications. The data is probably too specific to allow for integration.

### 2.6.3 Standards and metadata

Data is labeled with an experiment ID and numbered in chronological order or alternatively labeled directly with the time in the experiment. For each experiment dataset there is a short textual description of the experiment itself, when it was started, when it was ended, and who supervised it. For the data that is uploaded to Zenodo, Zenodo defines and handles the metadata (see above). Version control is not necessary.

### 2.6.4 Data sharing

Given the amount of this kind of data that will be generated throughout the project, we do not plan a central data deposit for all of it. Individual data sets and images that are representative of the respective simulation and/or relevant for publications will be stored in the project's Cloud service (OwnCube e.U., An der Liesing 2-34/7, 1230 Wien, Austria) for the duration of the project and shared among project partners; data requests for that data from outside the project will be handled individually. A smaller selection of that data will be uploaded to the servers of Zenodo and hence allow for easy sharing.

### 2.6.5 Archiving and preservation (including storage and backup)

For the data stored on our cloud service we cannot guarantee preservation beyond the project's lifetime, but that is taken care of by the smaller selection of data that will be uploaded to Zenodo (lifetime of the host laboratory CERN is 20+ years, see above).

# Conclusion

A data management plan is not a fixed document; instead, it might be refined later during the project. The feasibility and utility of this data management plan will be monitored during the next months and years of the project. Also, the number of data requests from outside the consortium will be recorded. Depending on that experience we will consider appropriate actions and changes of this plan.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0256_NEWBITS_723974.md
# 1\. Introduction

The NEWBITS project participated in the Open Research Data Pilot (ORD pilot). The ORD pilot aimed to improve and maximise access to and re-use of research data generated by Horizon 2020 projects. A Data Management Plan (DMP) was required for all projects participating in the extended ORD pilot. The ORD pilot applied primarily to the data needed to validate the results presented in scientific publications. Other data could also be provided by the beneficiaries on a voluntary basis.

A DMP describes the data management life cycle for the data collected, processed and/or generated by a Horizon 2020 project. As part of making research data findable, accessible, interoperable and re-usable (FAIR), a DMP should include information on:

* the handling of research data during and after the end of the project,
* what data will be collected, processed and/or generated,
* which methodology and standards will be applied,
* whether data will be shared/made open access, and
* how data will be curated and preserved (including after the end of the project).

The NEWBITS Data Management Plan thus helped to:

* sensitize project partners for data management,
* agree on common data management rules,
* assure continuity in data usage if project staff leave and new staff join,
* easily find project data when partners need to use it,
* avoid unnecessary duplication, e.g. re-collecting or re-working data,
* update data,
* make project results more visible.

A first version of the DMP developed in M8 served as a basis to define the level of expertise of each partner in data management and to assess the corrective actions which needed to be implemented to secure research data. A DMP questionnaire was circulated to all project partners in order to sensitize all partners for data management and to get information about their individual data management strategy. The final version of the DMP submitted in M30 summarizes the partners' answers to the DMP questionnaire per Work Package and lists all NEWBITS datasets generated and their level of open accessibility.

# 2\. Summary

The following section elaborates on the overall aspects of the purpose of data collection, the types and formats of data generated and collected throughout the project, the re-use of existing data and data origin, the expected size of data, and data utility on WP level. A detailed list of all data and their respective formats to be made accessible via an open access repository is included in section 3.2. The following section concentrates on Work Packages 2 to 7, since Work Packages 1 and 8 only comprise confidential data.

### 2.1 Purpose of data collection

NEWBITS aimed to design and implement a holistic intelligence process that would map the (C-)ITS business ecosystem (initiatives, projects, actors), identify (C-)ITS enablers and barriers, investigate existing key performance indicators, and gather relevant information on products, market, demand, stakeholders' involvement and innovation diffusion for (C-)ITS. NEWBITS formalized an enhanced understanding of the potential system benefits and fundamental economics of new business models suited to (C-)ITS in the European context, and developed relevant outcomes to support policy measures towards (C-)ITS deployment.
To this purpose, NEWBITS collected and generated data for internal use and further processing by the NEWBITS project partners, such as analyses, reports, plans, case study and workshop documentation, as well as data that would be made accessible to external users, such as the project deliverables and the communication and dissemination material. Being a Coordination and Support Action, NEWBITS did not generate typical research data. Analyses generated during project runtime were made accessible on a voluntary basis.

##### 2.1.1 Data collection in WP2

The data generated in Work Package 2 supported the assessments made on (C-)ITS services applied in the EU, US and Australia, the barriers and enablers to the deployment of (C-)ITS services, and the key performance indicators applied for ITS services.

<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D2.1 Overview of ITS initiatives in EU and US
* D2.2 Report on barriers and KPIs for the implementation of (C-)ITS
* D2.3 Case study taxonomy </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td>

* Evidence from the literature (academic studies, grey literature, policy documents)
* Stakeholder interviews and an online stakeholder survey </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td> NEWBITS project </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> Primary data (interviews, online survey) and data from literature (previous Horizon 2020 projects, other EU/international projects, national projects). </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> The data sums up to several tens of MBs. </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> Data was useful for the NEWBITS partners. Some of the data (particularly primary data) may be useful for other researchers / consultants and policy makers as well. </td> </tr>
</table>

**Table 1: Data collection in WP2**

##### 2.1.2 Data collection in WP3

Work Package 3 generated a market research analysis, a stakeholder analysis and a user preferences analysis. All of them were based on the project's case studies and were used for further project implementation.

<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D3.1 Market Research Analysis
* D3.2 Benchmarking ITS innovation diffusion and ITS production processes EU vs. USA
* D3.3 Conjoint Analysis on case studies </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td>

* Stakeholder information (probably XLSX)
* Deliverables (DOCX and PDF)
* Presentations (PPTX and PDF) </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td> Some of the data presented were extracted from previous surveys (mainly in the market research). </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> Previous market surveys in the field of ITS. </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> The complete information is presented in several documents accumulating to several MB of data. </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> The information generated was useful to project partners and to some extent to market researchers. </td> </tr>
</table>

**Table 2: Data collection in WP3**

##### 2.1.3 Data collection in WP4

Within Work Package 4, two analyses were conducted:

* The Analysis of the Business Ecosystem was based on a sound literature review in order to validate the methodological framework and approach.
* The Value Network Analysis (VNA) of the case studies gathered and generated data in order to allow the shaping of business models. This core process involved the definition of the networks for all case studies and the definition and alignment of actors in line with the Stakeholder Analysis results; a quantitative analysis to identify value flows between actors and value flow scores; the mapping of the value flows (identifying interactions and major relations between actors and crafting value propositions); analysing the competitive environment (via cost-benefit analysis); and shaping business models.

<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D4.1 Formalization of NEWBITS modelling method and systemic business dynamics
* D4.2 Workshop documentation
* D4.3 Report on Value Network Analysis </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td> The VNA process acquired data to generate the value flows and scores via questionnaires. The data was validated via workshops and proxy data sources. In detail, the data provided information about the intensity of any stakeholder's specific need and the importance of any particular source to fulfil this specific need. The quantitative analysis was based on a questionnaire where the identified stakeholders were able to rank the intensity and importance of their interactions within the ITS network. The validation of the flow scores was based on the "proxy data" technique and the interviewing of representatives of stakeholder groups. Workshops supported the data acquisition, validation and generation processes. Formats of data:

* Text documents: Classic documents in formats like RTF, ODF, OOXML, or PDF were relevant to gather data and/or display certain kinds of documents, e.g., deliverables, reports, etc. Templates may be used whenever possible, so that displayed data can be re-used.
* Plain text (TXT): Plain text documents (.txt) were used where structural metadata had to be extracted.
* HTML: Nowadays much data is available in HTML format on various sites. This may well be sufficient if the data is very stable and limited in scope.
* Tabular data: Project partners and other actors have shared information in spreadsheets, for example Microsoft Excel. Tabular data with minimal metadata was acquired and shared as comma-separated values (.csv), tab-delimited files (.tab), or delimited text with SQL data definition statements. Other formats: delimited text (.txt) with characters not present in the data used as delimiters; widely used formats: MS Excel (.xls/.xlsx), MS Access (.mdb/.accdb), dBase (.dbf), OpenDocument Spreadsheet (.ods).
* The generated value flows, maps and other outputs of the VNA were displayed in visual formats, such as TIF 6.0 (.tif), JPEG (.jpeg, .jpg, .jp2), GIF (.gif), RAW image format (.raw), BMP (.bmp), PNG (.png), Adobe Portable Document Format (PDF/A, PDF). </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td> WP4 re-used data from WP3 plus relevant existing data on VNA and business ecosystem based research. </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> NEWBITS project. </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> Several MBs </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> The data generated was useful for the project partners, external stakeholders involved in the case studies, and the (C-)ITS community. </td> </tr>
</table>
**Table 3: Data collection in WP4**

##### 2.1.4 Data collection in WP5

Financial aspects of (C-)ITS deployment were treated in the Business Case Guidelines generated within Work Package 5. Building on Work Package 4, a Cost Benefit Analysis (CBA) was conducted, supported by the data gathered in previous stages of the VNA, in order to provide a better understanding of the competitive environment in which the project case studies were developed. Therein, a financial analysis of the NEWBITS case studies has been conducted and non-monetary benefits for the actors involved in the case studies have been analysed.

<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D5.1 Business case guidelines
* D5.2 Training curriculum
* D5.3 Policy recommendations e-book </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td> A cost-benefit analysis was performed collecting the following typologies of data: i) tangible costs and resource needs (financial investments / capital cost or operating capital/expenditure; time and materials; facilities and equipment; factories and data stores; projects at different stages of implementation; market barriers for implementation; commercial products and digital services); ii) intangible costs and resource needs (R&D activities; high-end skilled workforces; human skills and competence; business relationships; brand identity; media relations; lobbying activities; incentives to innovate (enablers); patents; licenses; projects under IPR legislation; advertising and management structure); iii) benefits at input level (increased tangible value / improved current capability / expanded intangible future capabilities).

* Revenues (format: DOCX, XLSX)
* Costs (format: DOCX, XLSX)
* Environmental impact (format: DOCX, PDF)
* Collaboration types (format: DOCX, PDF)
* Resource pooling (format: DOCX, PDF)
* Job creation (format: DOCX, PDF)
* Incentives, regulations, support schemes (format: DOCX, PDF) </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td> WP5 re-used data from WP3 and WP4 plus relevant existing data on CBA based research. </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> Previous EU and national projects. </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> Several GBs. </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> The data generated was used by the project partners, e.g. to derive the policy recommendations. </td> </tr>
</table>

**Table 4: Data collection in WP5**

##### 2.1.5 Data collection in WP6

Work Package 6 collected documents preparing the communication and dissemination of project results. It included a mapping of ITS stakeholders and the creation of a stakeholder community and network.
<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D6.1 CoI configuration synthesis report
* D6.2 Definition of NNP
* D6.3 NNP web
* D6.4 Network activities report </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td>

* Names and contact data of stakeholders (XLSX)
* Information about stakeholders, their activities and connections (XLSX)
* Stakeholder assessment by the partners (XLSX)
* Publications and reports (DOCX and PDF)
* Presentations (PPTX and PDF)
* CSV files </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td>

* Existing data about transport-related entities and ITS stakeholders and their analysis from previous projects
* Consortium partners' and other public data sources
* Additionally, data from WP2 (list of ITS stakeholders) and WP7 (community database) </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> Previous EU projects. The data has been identified via web search in websites / applications, mostly from open access sources. </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> Several GB </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> The data generated was used by the project partners, and (the part of the data that has been openly published) by ITS stakeholders. </td> </tr>
</table>

**Table 5: Data collection in WP6**

##### 2.1.6 Data collection in WP7

Work Package 7 delivered the formal structure and processes to enable effective communication and dissemination of project results. It thereby produced a wide range of data in the form of online and printed communication material such as the website, newsletters, social media contributions, press releases, a project flyer and a project poster. Dissemination activities were moreover documented in the Dissemination and Communication Plan and monitored in the Dissemination Activity Report; liaising activities were documented in the External Liaison Plan.

<table>
<tr> <th> **Purpose of data collection** </th> </tr>
<tr> <td> Elaboration of deliverables:

* D7.1 Dissemination and Communication Plan
* D7.2 Project Website
* D7.3 Data Management Plan
* D7.4 External Liaison Plan
* D7.5 Dissemination Activity Report
* D7.6 Roadmap Exploitation Plan </td> </tr>
<tr> <th> **Types and formats** </th> </tr>
<tr> <td>

* Project flyer, poster, newsletters, press releases, other publications (PDF)
* Project website (HTML)
* Names and contact data of stakeholders in the NEWBITS community database (format: XLSX)
* Reports (format: DOCX and PDF)
* Presentations (format: PPTX and PDF) </td> </tr>
<tr> <th> **Re-use of existing data** </th> </tr>
<tr> <td> Newsletters and press releases were circulated by all partners to reach a wider audience. A common PowerPoint presentation template has been utilized by all partners for presentations of the status of the WPs during project meetings and dissemination activities. The NEWBITS community database served as a basis for the NEWBITS stakeholders' database developed under WP6. </td> </tr>
<tr> <th> **Data origin** </th> </tr>
<tr> <td> NEWBITS project </td> </tr>
<tr> <th> **Expected size** </th> </tr>
<tr> <td> Several GB. </td> </tr>
<tr> <th> **Data utility** </th> </tr>
<tr> <td> The data generated was mainly used by the project partners. </td> </tr>
</table>

**Table 6: Data collection in WP7**
# 3\. FAIR Data

### 3.1 Making data findable, including provisions for metadata

NEWBITS data produced and shared with the public are identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers) and search keywords.

### 3.2 Making data openly accessible

##### 3.2.1 Deposition in open access repositories

NEWBITS deliverables are made accessible using the NEWBITS project website. In addition, the deliverables and some significant data were made openly accessible by deposition in the open access repository Zenodo. The following table summarizes NEWBITS deliverables and relevant data, indicating their level of open accessibility.

<table>
<tr> <th> **Dataset** </th> <th> **Dissemination level** </th> <th> **Format** </th> <th> **Repository** </th> <th> **Comments** </th> </tr>
<tr> <td> Work Package 1 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D1.1 Project Management Handbook </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> For internal project management. </td> </tr>
<tr> <td> D1.2 Quality Assurance and Risk Management Plan </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> For internal assurance and risk management. </td> </tr>
<tr> <td> D1.3 Reporting to European Commission </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> For project reporting to the European Commission. </td> </tr>
<tr> <td> Work Package 2 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D2.1 Overview of ITS initiatives in EU and US </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D2.2 Report on barriers and KPIs for the implementation of C-ITS </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D2.3 Case study taxonomy </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> Work Package 3 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D3.1 Market Research Analysis </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D3.2 Benchmarking ITS innovation diffusion and ITS production processes EU vs USA </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D3.3 Conjoint Analysis on case studies </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> Work Package 4 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D4.1 Formalization of NEWBITS modelling method and systemic business dynamics </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D4.2 Workshops materials </td> <td> Confidential </td> <td> Standard video and image formats </td> <td> Stored at the partner conducting the workshops </td> <td> Privacy-related issues apply. Datasets will be used only for dissemination purposes. </td> </tr>
<tr> <td> D4.3 Report on Value Network Analysis </td> <td> Public </td> <td> CSV / PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> Privacy-related issues apply. The datasets will be anonymised before being stored in a common repository. </td> </tr>
<tr> <td> Work Package 5 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D5.1 Business case guidelines </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D5.2 Training curriculum </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D5.3 Policy recommendations e-book </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> Work Package 6 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D6.1 CoI configuration synthesis report </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D6.2 Definition of NNP </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> D6.3 NNP platform </td> <td> Public </td> <td> Website </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D6.4 Network activities report </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> Work Package 7 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D7.1 Dissemination and Communication Plan </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D7.2 Project Website </td> <td> Public </td> <td> PDF </td> <td> Openly accessible via world wide web </td> <td> </td> </tr>
<tr> <td> D7.3 Data Management Plan </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D7.4 External liaison plan </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D7.5 Dissemination Activity Report </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> D7.6 Roadmap Exploitation Plan </td> <td> Public </td> <td> PDF </td> <td> Available upon request </td> <td> </td> </tr>
<tr> <td> NEWBITS flyer </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> NEWBITS poster </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> NEWBITS explanatory video </td> <td> Public </td> <td> MP4 </td> <td> Uploaded to Youtube and accessible via NEWBITS website and Zenodo </td> <td> </td> </tr>
<tr> <td> NEWBITS newsletters </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> NEWBITS press releases </td> <td> Public </td> <td> PDF </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> NEWBITS webinar recordings </td> <td> Public </td> <td> MP4 </td> <td> Shared via NEWBITS website </td> <td> </td> </tr>
<tr> <td> Work Package 8 </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> D8.1 H – Requirement No. 1 </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> </td> </tr>
<tr> <td> D8.2 POPD – Requirement No. 2 </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> </td> </tr>
<tr> <td> D8.3 NEC – Requirement No. 3 </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> </td> </tr>
<tr> <td> D8.4 OEI – Requirement No. 4 </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> </td> </tr>
<tr> <td> D8.5 OEI – Requirement No. 5 </td> <td> Confidential </td> <td> PDF </td> <td> Stored in NEWBITS ownCloud </td> <td> </td> </tr>
</table>

**Table 7: NEWBITS datasets and their level of open accessibility**

<table>
<tr> <th> **Dataset** </th> <th> **Digital Object Identifier – DOI** </th> </tr>
<tr> <td> NEWBITS explanatory video </td> <td> 10.5281/zenodo.1243169 </td> </tr>
<tr> <td> D2.1 Overview of ITS initiatives in EU and US </td> <td> 10.5281/zenodo.1243032 </td> </tr>
<tr> <td> D2.2 Report on barriers and KPIs for the implementation of C-ITS </td> <td> 10.5281/zenodo.1243056 </td> </tr>
<tr> <td> D2.3 Case study taxonomy </td> <td> 10.5281/zenodo.1243096 </td> </tr>
<tr> <td> D3.1 Market Research Analysis </td> <td> 10.5281/zenodo.1243130 </td> </tr>
<tr> <td> D3.2 Benchmarking ITS innovation diffusion and ITS production processes EU vs USA </td> <td> 10.5281/zenodo.1243158 </td> </tr>
<tr> <td> D3.3 Conjoint Analysis on case studies </td> <td> 10.5281/zenodo.2587773 </td> </tr>
<tr> <td> D4.1 Formalization of NEWBITS modelling method and systemic business dynamics </td> <td> 10.5281/zenodo.1243162 </td> </tr>
<tr> <td> D4.3 Report on Value Network Analysis </td> <td> 10.5281/zenodo.2591410 </td> </tr>
<tr> <td> D5.1 Business case guidelines </td> <td> 10.5281/zenodo.2591415 </td> </tr>
<tr> <td> D6.1 CoI configuration synthesis report </td> <td> 10.5281/zenodo.2591420 </td> </tr>
<tr> <td> D6.2 Definition of NNP </td> <td> 10.5281/zenodo.2591427 </td> </tr>
</table>

**Table 8: NEWBITS datasets available on open access repository Zenodo**
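Since each dataset above carries a persistent DOI, its metadata can also be retrieved programmatically. The following is a minimal sketch using Zenodo's public records API, with the record ID taken from the D2.1 DOI above; the response field names follow Zenodo's REST API at the time of writing.

```python
import requests

# The Zenodo record ID is the numeric suffix of the DOI,
# e.g. 10.5281/zenodo.1243032 -> record 1243032 (D2.1).
record_id = "1243032"
record = requests.get(f"https://zenodo.org/api/records/{record_id}").json()

print(record["metadata"]["title"])
for f in record.get("files", []):
    print(f["key"], f["links"]["self"])  # file name and download link
```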
Data containing private information about ITS stakeholders was considered confidential. This referred in particular to:

* The reporting on stakeholder interviews (in order to guarantee that no conclusions can be linked to a specific individual).
* Individual results of the online stakeholder survey (as it was guaranteed that the survey was anonymous and that the results were only used for the NEWBITS project).
* The identity of the stakeholders of the network of case study 3 (respecting a privacy agreement signed with this network).

Where necessary, data were anonymised before sharing. Consent has been requested from all external participants to allow data to be shared and re-used.

##### 3.2.2 Methods or software tools needed to access the data

No specific software tools are needed to access NEWBITS data. Common software such as Microsoft Word, Excel and Adobe Acrobat Reader, or alternative Open Office software, is sufficient to access NEWBITS data.

##### 3.2.3 Restrictions on use

NEWBITS data can be shared and re-used with the exception of personal data (which is treated according to the protection of personal data within the documents) and business-critical data. All NEWBITS deliverables that could be openly shared have been made available via Zenodo.

##### 3.2.4 Data Access Committee

When required, data access issues have been discussed with the entire consortium at any time throughout the implementation of the project.

##### 3.2.5 Ascertainment of the identity of the person accessing the data

There is no way of ascertaining the identity of the person accessing the data on either the project website or the Zenodo repository. For the NEWBITS Network Platform (NNP), each member is identified by a unique login name. All personal data related to each user comply with EU and national personal data protection laws. The project website and the NNP are monitored using Google Analytics. Google Analytics data cannot be related to identifiable unique persons but provides helpful data for the analysis of what visitors do and like on the project website and the NNP.

### 3.3 Making data interoperable

##### 3.3.1 Interoperability

Data produced in the NEWBITS project is interoperable. Interoperable means allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. NEWBITS data classified as public adheres to standards for formats, is as much as possible compliant with available (open) software applications, and in particular facilitates recombination with different datasets from different origins. NEWBITS data can be shared and re-used with the exception of personal data (which is treated according to the protection of personal data within the documents) and business-critical data.

##### 3.3.2 Standards and methodologies

NEWBITS does not follow any specific data and metadata vocabularies, standards or methodologies to make data interoperable. Data is stored in open software applications and explained in the NEWBITS deliverables. Mainstream software is used to generate the data. The language used is English. Metadata accompanying the datasets will be defined at a later stage of the project life cycle.

##### 3.3.3 Standard vocabularies

Standard vocabularies were used for all data types present in the NEWBITS data set, wherever possible, to allow inter-disciplinary interoperability.

##### 3.3.4 Mappings to more commonly used ontologies

In cases where it was unavoidable that NEWBITS used uncommon or project-specific ontologies or vocabularies, mappings to more commonly used ontologies were provided.

### 3.4 Increasing data re-use through clarifying licences

##### 3.4.1 Data licenses

Apart from confidential data, NEWBITS data will be licensed under the MIT licence, which allows open data to be used commercially as well as by the creator. Other datasets will be licensed using one of the Creative Commons licenses (CC 4.0) and will remain open.

##### 3.4.2 Date of data availability

NEWBITS deliverables are published on the project website and are thus made available for use by third parties once they have been approved by the European Commission. Any other data will be made available for re-use immediately after the end of the project, after careful evaluation of what should be kept confidential due to privacy concerns and what will be shared openly. The datasets produced in WP6 were examined to comply with personal data protection laws and business IPR protection laws. No embargo to give time to publish or seek patents is foreseen.

##### 3.4.3 Usability by third parties after project end

Apart from the data that has to be kept confidential due to privacy concerns or that has a commercial relevance for the partners, NEWBITS data produced and used in the project is usable by third parties, in particular after the end of the project. Hence, third parties will be free to repeat and re-use the research data.

##### 3.4.4 Duration of data re-usability

NEWBITS project partners are obliged to preserve the data formats and files for five years after the project end, i.e. until June 2024. However, making data available and re-usable indefinitely has been considered during project runtime.

##### 3.4.5 Data quality assurance processes

The overall NEWBITS project does not describe any data quality assurance processes. Data quality is asserted during the implementation of each task by the respective project partner.
# 4\. Allocation of resources, data security and ethical aspects

### 4.1 Costs for making NEWBITS data FAIR

The NEWBITS project partners have selected the Zenodo repository, which is free of charge. Resources for long-term preservation have not been considered.

### 4.2 Responsibility for data management

Each project partner has been responsible for reliable data management regarding his/her work within the NEWBITS project. Steinbeis 2i, leader of Work Package 7 – Communication and Dissemination, has been responsible for the overall data management at project level.

### 4.3 Data security

At the first level, each project partner was responsible for the security and preservation of his/her data and the consideration of deliverable D8.2 Protection of Personal Data Requirement nº2 signed by all NEWBITS partners. The project partners' servers were regularly and continuously backed up.

NEWBITS project data was saved in an online platform (ownCloud). In order to keep the data secure, access has been controlled by encryption of a long-term cloud-based backup. Access granted to project partners was secured by password control. Access to the Zenodo data repository was controlled by passwords. Project partners can request access to the entries on Zenodo via the WP7 leader.

### 4.4 Ethical aspects

As part of Work Package 8, the NEWBITS consortium has specified all relevant ethical aspects as identified and established by the NEWBITS Ethics Summary report. Deliverables describing the ethical and legal issues that could have an impact on data sharing include:

* **D8.1 Human Beings Requirement nº 1**
* **D8.2 Protection of Personal Data Requirement nº 2**

Specifically, D8.1 includes information on the procedures used to identify research participants to be involved in the diverse project activities (e.g. market and stakeholder analysis, telephone/online interviews and (web) surveys, "Communities of Interest") and on the consent procedures for the participation of humans. It also includes the consent form to be signed by "data subjects" before participating in an interview or validation workshop.

D8.2 defined the protection of personal data in NEWBITS in order to protect individuals' privacy. As established in the deliverable text, all personal data will be limited only to the purposes of the project and will be stored in appropriate files that will also be reported to the corresponding legal agency. Personal data (names, contact data, private data) of stakeholders will be kept anonymous using the necessary encryption methods. To do that, named entities (such as proper nouns or organisations) will be detected and encrypted. Access to personal data will be allowed only to members of the consortium and according to the existing Consortium Agreement. For the questionnaires and other material that were filled in by external stakeholders, an explicit statement on data sharing and long-term preservation was applied and displayed.

#### 4.4.1 Other Ethics Requirement nº4

Moreover, the elaboration of the first version of this Data Management Plan has taken into account the Other Ethics Requirement No. 4, which stated the need to detail the collection and sharing of potentially commercially sensitive information in the DMP (**D8.4 Other Ethics Requirement nº4**). In this respect, to ensure the consideration of D8.4, the following three steps have been implemented:

* The D7.3 leader (S2i) and the D8.4 leader (ORT) have agreed on the inclusion of the requirement information in a specific subsection of the draft version of the DMP.
* The D8.4 leader has contacted all Work Package leaders in NEWBITS in order to assess the requirement in their respective Work Packages.
* An assessment of the requirement has been performed by the Work Package leaders, which identified the Work Packages with the possibility of collecting and sharing potentially commercially sensitive information (namely: WP3, WP4 and WP6).

Though the term "commercially sensitive information" is broad and a potentially limitless amount of information could fall within it, the NEWBITS consortium has identified information sets that were collected and/or shared during the project implementation:

* Operational data, including expenditures
* Cost structure
* Identity of stakeholders
* Revenue models
* Other information generated in the framework of CoI operation

The result of the assessment in each relevant Work Package is summarized in the following section:

* **WP3 Holistic Intelligence Process**

Work Package 3 procedures involved potentially commercially sensitive data related to the deployment of the "Stakeholders Analysis" as part of Task 3.1, such as the **identity of stakeholders**. The information was used for analysis purposes within the project but under no circumstances made publicly available without an explicit authorization from the stakeholder.

* **WP4 Developing Innovative Business Models**

Work Package 4 procedures involved potentially commercially sensitive data related to the deployment of the "Value Network Analysis" in Task 4.2. It was not foreseen that the data collected and shared to perform both analyses **at the level required by the project** would by any means be commercially sensitive; therefore no disclosure of such information was expected to be required. For case study 3, however, a privacy agreement has been signed with the stakeholders of this network for not sharing any sensitive data from the VNA in WP4.

In the event that information related to **cost structure**, **revenue models** and/or **operational data** was classified as "commercially sensitive" by the holder of such data, an authorization from the data holder has been requested in order to access and use the data for analysis purposes within the project. Any potentially commercially sensitive data – particularly data related to case study 3 protected by a distinct privacy statement – were included in a confidential annex of the respective deliverables in order to guarantee that these data were accessible only to the project consortium and the EC.

* **WP6 NEWBITS Network Platform**

One of the foreseen scopes of the Communities of Interest (CoIs), developed in Work Package 6, was to foster information collection and provision among members so that they can help each other on issues they are facing or are interested to share. The nature of this information and content cannot be pre-determined by the project, with the exception of the processes that relate to WP3 and WP4 operation in which CoIs are to be involved. Thus, it is beyond the influence of the NEWBITS consortium to define what information could be "commercially sensitive" in the day-to-day operation of the CoIs. The underlying principle of CoI operation is based on the fact that CoIs manage content mainly provided by their members. In CoIs, the participants are supposed to have the ability to discuss anything that they consider of interest and are willing to share with others. In this sense, if a member wishes to share commercially sensitive information with the other members, this will be on their own initiative and responsibility.
Since participation in CoIs requires **registration**, this operative principle is clearly communicated to the interested parties in the registration process. A registration is only activated once each member acknowledges or declares their sole responsibility for the content to be shared in the context of the CoI.

### 4.5 Other Issues

The NEWBITS project does not make use of any other national/funder/sectorial/departmental procedures for data management.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0257_PAPA-ARTIS_733203.md
## 1\. General

In the PAPA-ARTiS trial, the Clinical Trial Centre (CTC-L) at the University of Leipzig will be responsible, on behalf of the legal trial sponsor, the University of Leipzig, for the implementation of procedures for data collection, storage, protection, retention and destruction. The CTC-L has implemented a data safety and security concept according to the requirements of the German Federal Office for Information Security (www.bsi.bund.de). All PAPA-ARTiS related procedures will be developed in cooperation with the data security engineer of the CTC-L and have to be approved by the official data protection officer of the University of Leipzig prior to implementation. Chapters 11-13 of the trial protocol list all detailed aspects of data collection, handling, storage, verification and protection.

## 2\. Data Collection

Three types of data will be collected in the PAPA-ARTiS trial:

1. _clinical data_
2. _information from patient questionnaires_ and
3. _imaging data_

Investigators in the recruiting trial centers will initially collect all data. Together with information on the trial, eligible patients will be informed about data capture, transmission and analysis processes. Once a patient is eligible and has given his/her informed consent to trial participation and data collection, the investigator will assign the patient a unique patient identification code. Patient identification code lists will be generated in advance by the CTC-L and forwarded to the recruiting centers. These lists are part of the investigator site file and remain at the recruiting site. Furthermore, these lists are the only documents that allow for re-identification of the patients.

The CTC-L will design CRFs and develop validated eCRFs (electronic case report forms) for data capture directly at the trial sites. Additionally, the CTC-L is responsible for the eCRF training of staff at all sites and of all monitors. The investigators (or their designated staff) will enter all clinical data into these eCRFs. Patient data will be recorded in pseudonymized form (i.e. without reference to the patient's name), using exclusively the patient's identification code. One member of the study team transfers information from the questionnaires to the eCRF. The staff at the trial site will document the data related to a patient's visit in due time after each visit. In order to facilitate the documentation as per protocol in case of malfunction of the electronic system or any of its components, a paper version of the CRF will also be provided. The transfer of the data entered into this paper version to the eCRF will be done as soon as the electronic system is available again.

The investigator or an authorised member of the study team will sign each eCRF page electronically. This confirms that all data on the eCRF are correct and have not been changed. If a value on the eCRF is changed later on, the electronic signature will be reset automatically and the corresponding page has to be signed again. This ensures that changes on the eCRF will be dated and signed as well. All entries and data changes will be tracked automatically, including date, time and the person who entered/changed the information (audit trail). Major corrections or major missing data have to be explained. However, the investigator has final responsibility for the accuracy and authenticity of all clinical and laboratory data entered in the CRF. All information required by the protocol, and therefore collected during the clinical trial, has to be verified by source data (e.g. patient's records). The investigator and the trial staff are responsible for adequate source data documentation.
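A minimal sketch of how such a patient identification code list could be generated centrally is shown below; the code format, the list length, and the site label `LEI01` are illustrative assumptions, not the trial's actual scheme.

```python
import csv
import secrets

def generate_id_list(site: str, n: int, path: str) -> None:
    """Generate a list of unique, non-identifying patient codes for one site.

    The code format '<site>-<6 hex chars>' is a placeholder; only the code
    list kept at the recruiting site can link codes to patients.
    """
    codes = set()
    while len(codes) < n:
        codes.add(f"{site}-{secrets.token_hex(3).upper()}")
    with open(path, "w", newline="") as fp:
        writer = csv.writer(fp)
        writer.writerow(["patient_code", "patient_name"])  # name column filled in on site only
        for code in sorted(codes):
            writer.writerow([code, ""])

# Example: 50 codes for a hypothetical site 'LEI01'.
generate_id_list("LEI01", 50, "LEI01_id_list.csv")
```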
## 3\. Data Storage

The EDC tool secuTrial® by InterActiveSystems GmbH will be used for database programming. SecuTrial® uses an underlying Oracle database. The study database will be developed and validated according to the Standard Operating Procedures (SOPs) of the CTC-L prior to data capture. All information entered into the eCRF by the investigator or an authorized member of the local study team will be systematically checked for completeness, consistency and plausibility by routines implemented in the database, running every night. Data management staff of the CTC-L will check error messages generated by these routines. In addition, the query management tool of secuTrial® will show discrepancies, errors or omissions to the investigator/data entry staff based on the pre-programmed checks. The CTC-L will supervise and aid in the resolution of queries, should the site have questions. Corrected data will be re-checked by automatic routines during the night after entry. If a query cannot be resolved, the data management staff of the CTC-L will ask the coordinating investigator and/or the biometrician whether they may close the query (e.g. if it is clear that these data cannot be/were not collected). The final analysis will be performed when the data of all enrolled patients have been collected, all queries have been resolved and the database has been closed.
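To make the nightly completeness and plausibility routines concrete, here is a minimal sketch of such checks over pseudonymized eCRF records; the field names and plausibility ranges are invented for illustration and are not taken from the actual trial database.

```python
# Illustrative nightly eCRF checks; field names and ranges are hypothetical.
RULES = {
    "systolic_bp_mmHg": (60, 260),   # plausibility range (assumed)
    "age_years": (18, 110),
}
REQUIRED = ["patient_code", "visit_date", "systolic_bp_mmHg", "age_years"]

def check_record(record: dict) -> list:
    """Return query messages for one eCRF record (empty list = clean)."""
    queries = []
    for field in REQUIRED:                      # completeness check
        if record.get(field) in (None, ""):
            queries.append(f"missing value: {field}")
    for field, (lo, hi) in RULES.items():       # plausibility check
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            queries.append(f"implausible {field}: {value} (expected {lo}-{hi})")
    return queries

# Example run over a small batch of pseudonymized records:
batch = [{"patient_code": "LEI01-0A1B2C", "visit_date": "2021-03-01",
          "systolic_bp_mmHg": 310, "age_years": 67}]
for rec in batch:
    for q in check_record(rec):
        print(rec["patient_code"], "->", q)
```

In the real system, messages like these would appear as queries in secuTrial®'s query management tool rather than on a console.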
## 4\. Data Protection

### 4.1 Access

During the whole course of the study, all data undergo a daily backup. An access concept for the trial database will be implemented, based on a strict hierarchy and role model. Thus, data access is limited to authorized persons only, and unauthorized access to pseudonymized patient data is prevented. Any change of data (e.g. error correction during query management) is recorded automatically via an audit trail within the database. At the end of the study, once the database has been declared complete and accurate, the database will be locked. Thereafter, any changes to the database are possible only by joint written agreement between the coordinating investigator, the biometrician and the data manager.

According to ICH-GCP, the investigator must permit all authorized third parties access to the trial site and the medical records of the trial subjects (source data). These include the clinical trial monitors, auditors and other authorized employees of the co-ordinating investigator, as well as members of the local or federal authorities. All these persons are sworn to secrecy.

### 4.2 Monitoring

Monitoring is mandatory for all participating study groups. Monitoring procedures will be country-specific and as agreed by the co-ordinating investigator and the individual regional offices in all involved countries. General principles for monitoring will be outlined in the monitoring plan of the trial, which will be written by the CTC-L in cooperation with the coordinating investigator and distributed to the regional offices. Central and statistical monitoring procedures combined with on-site monitoring visits will ensure high protocol compliance and data quality, as well as ensure patients' safety and rights. A risk-based monitoring strategy will be implemented as required by ICH E6 [1]. According to the risk analysis, treatment delivery parameters, adverse events, follow-up information, data transmission and protection, and informed consent documents comprise risk-bearing trial aspects and will be monitored.

Clinical monitors appointed by the CTC-L (for Germany) or ECRIN (for all other involved countries) will regularly visit the recruiting centres. The frequency of monitoring visits will depend on the trial site's recruitment rate as well as on potential problems detected during previous on-site visits or by central monitoring. During the visits, the monitor will verify the informed consent forms. Only after the monitors have confirmed that a patient has unambiguously given his or her consent for trial participation as well as for data capture, transmission and analysis will the data be used for analyses. Further tasks of the monitors are: source data verification of the key data in a random sample of patients, targeted source data verification for patients with possible deviations, discussion of open queries, checks of essential parts of the investigator site file, checks of source data for non-reported AEs or SAEs, and checks for GCP breaches and/or protocol violations. Prior to recruitment, each participating centre will receive a site initiation visit, during which the trial protocol and the eCRFs will be reviewed with centre staff and any necessary training will be provided.

### 4.3 Pseudonymization

Local investigators will be trained by the CTC-L prior to study start on the pertinent procedures for pseudonymization. There is no risk of pseudonymization failure as far as the eCRFs are concerned, because no identifying data will be entered in the eCRF. However, pseudonymization failures may arise with the imaging data. It may happen that investigators upload images labelled with the patient's name. An SOP will be developed by the CTC-L together with the imaging reference centres on how to deal with this situation (e.g. ask the responsible investigator to delete the non-pseudonymized record and upload a pseudonymized record instead, retrain investigators at the site concerned, inform the trial sponsor of the problem).

Additionally, human cells/tissue will be collected in a sub-group of patients for a scientific subproject. Labelling of the samples will be exclusively with the trial identification number of the trial participant. Samples will be processed, stored and analyzed only by using the trial identification number. Any scientific research making use of the data beyond what is spelled out in the protocol of the clinical trial will be conducted in accordance with the applicable law on data protection, and the patient will be asked explicitly to provide consent on participation in the scientific projects and on the pseudonymized storage and use of his/her samples.

Since in the course of the trial contact between the trial centre and the patients might be necessary, the patients' full name, address and telephone number will be ascertained and stored at the treating trial site after obtaining written permission to do so. This information will be stored separately from the trial data.
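As an illustration of what such an SOP could automate, the following is a minimal sketch that overwrites identifying DICOM header fields of an uploaded image with the trial identification number, using the third-party pydicom library; the selection of tags shown is deliberately reduced, and a real SOP would define the complete list of identifying attributes to clear or replace.

```python
import pydicom  # third-party DICOM library

def pseudonymize_image(path: str, trial_id: str, out_path: str) -> None:
    """Replace directly identifying DICOM header fields with the trial ID.

    Only a few standard patient tags are shown here for illustration.
    """
    ds = pydicom.dcmread(path)
    ds.PatientName = trial_id      # standard DICOM attribute (0010,0010)
    ds.PatientID = trial_id        # (0010,0020)
    ds.PatientBirthDate = ""       # (0010,0030)
    ds.save_as(out_path)

# Example: pseudonymize a (hypothetical) uploaded CT slice before
# it is shared with the imaging reference centre.
# pseudonymize_image("upload/ct_slice.dcm", "PAPA-0042", "clean/ct_slice.dcm")
```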
### 4.4 Withdrawal of Consent

Patients may withdraw their consent to participate at any time without giving reasons. Nevertheless, the patient should be asked for the reason for the premature termination, after being informed that he/she does not need to give one. Information as to when and why a patient was registered/randomized and when he/she withdrew consent must be retained in the documentation. In the event of withdrawal of consent, the necessity for storing data and/or samples will be evaluated. While Regulation (EC) No 45/2001 of the European Parliament and of the Council [2] strengthens personal data protection rights, encompassing the right to access, rectification and withdrawal of data, it also specifies the situations in which restrictions on those rights may be imposed. The withdrawal of informed consent should not affect the results of activities already carried out, such as the storage and use of data obtained on the basis of informed consent before its withdrawal. Data not needed will be deleted as requested, with full documentation of the reasons for deletion. Similarly, samples will be discarded upon request.

### 4.5 Data Exchange

An "ownCloud" instance will serve as the file-sharing platform in the PAPA-ARTiS trial. Hosting and maintenance of the "ownCloud" take place at the University of Leipzig, behind the firewall of the institution. The "ownCloud" file-hosting system will be used for the exchange of central trial documents as well as of imaging data for reference evaluation. Access to the "ownCloud" follows the same hierarchical role concept as the trial database. Data will be uploaded without personal information, using exclusively the trial identification number. Trial centres will only be able to upload data and see data concerning their own patients, while the reference organization may exclusively download the data essential for its evaluation. Using an eCRF as well as the "ownCloud" file-hosting system, both located on servers of the CTC-L and thus behind the firewall of the University of Leipzig, reduces the risk of unauthorized or unlawful access, disclosure, dissemination, alteration, destruction or accidental loss in comparison to data transmission over external networks. Access to the servers is secured via the HTTPS protocol and requires a user-specific login and password.
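For illustration, a pseudonymized file could be transferred over ownCloud's standard WebDAV interface roughly as in the sketch below. The host name and folder layout are placeholders, and credential handling would follow the hierarchical role concept described above; this is not the trial's validated procedure.

```python
import requests  # HTTPS upload via ownCloud's standard WebDAV endpoint

def upload_to_owncloud(local_file: str, trial_id: str, user: str, password: str) -> None:
    """Upload a pseudonymized file, labelled only with the trial identification number."""
    # Host and folder structure are hypothetical placeholders.
    url = f"https://owncloud.example.uni-leipzig.de/remote.php/webdav/imaging/{trial_id}.dcm"
    with open(local_file, "rb") as fh:
        response = requests.put(url, data=fh, auth=(user, password))
    response.raise_for_status()  # fail loudly if access is denied or the transfer fails
```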
### 4.6 Transfer of Personal Data

The coordinating investigator certifies herewith that the transfer of pseudonymized personal data will take place according to the documentation and communication regulations of the GCP guidelines (E6; 2.11) [1]. Moreover, the coordinating investigator certifies that trial participants who do not permit the transfer of data will not be admitted to the trial.

## 5\. Archiving

All relevant trial documentation (Trial Master File), the electronically stored data, the original CRFs and the final report will be stored for 30 years by the coordinating investigator's institution after the trial's completion. At the investigating sites, all completed study-related documents (e.g. investigators' files, patient identification lists, signed written consent forms, copies of all CRFs, initiation visit and monitoring visit reports, staff signature lists) and the patients' files will be stored for 30 years after the trial's completion. This time span satisfies all local rules and legal requirements regarding archiving at all sites.

## 6\. Data Sharing

Data sharing with the scientific community will be carried out according to the recommendations of the International Committee of Medical Journal Editors. Previously published data sets may be made available to the scientific community upon request (e.g. for meta-analyses, disease-related registers) after agreement by the trial consortium.

## 7\. Adherence to National and EU Legislation

We hereby confirm that all clinical trial information will be recorded, processed, handled and stored by the CTC-L on behalf of the coordinating investigator in such a way that it can be accurately reported, interpreted and verified, while the confidentiality of records and the personal data of the subjects remain protected. This is done in accordance with the applicable law on personal data protection, with Directive 95/46/EC [3] and Regulation (EC) No 45/2001 of the European Parliament and of the Council [2]. This also applies to the reference centres and the investigators at the trial sites. The clinical site contracts will also ensure that the clinical sites comply with national data protection laws.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0258_STORIES_731872.md
# Executive Summary

This Deliverable includes the Initial Data Management Plans from each Work Package. We thought it best to present a DMP from each WP because each WP has its own needs with regard to filing, organising and archiving: each WP looks at a different aspect of the STORIES project, handles different kinds of data and is managed by different partner organisations. The partner organisations were instructed on generating a DMP and given a template as an example. Everyone has worked with the online tool DMPonline of the Digital Curation Centre, UK ( _https://dmponline.dcc.ac.uk/_ ), which provides a template for HORIZON2020 projects. The DMPs are in their initial stage and will be updated in due course.

# DMP Work Package 1 – Pedagogical Framework

**Plan Name** Horizon 2020 DMP - STORIES of Tomorrow - students visions on the future of space exploration (WP1)
**Plan ID** H2020-ICT-2016-1
**Grant number** 731872
**Principal Investigator / Researcher** Angelos Lazoudis
**Plan Data Contact** [email protected]
**Plan Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways. The purpose of Work Package 1 (Pedagogical Framework): Develop a pedagogical framework that builds on the essential features of creative STEM learning, including exploration, dynamics of discovery, student-led activity, engagement in scientifically oriented questions, priority to evidence in responding to questions, formulation of evidence-based explanations, connection of explanations to scientific knowledge, and communication and justification of explanations. These elements support creativity as a generic element in the processual and communicative aspects of the pedagogy by integrating arts (virtual arts, performing arts, design, music) and by proposing innovative teaching strategies that will offer students high participation, enable them to generate highly imaginative possibilities and support students' deeper learning. Based on project- and inquiry-based approaches, students will be asked to create their own stories about the future missions to and on Mars. The proposed pedagogical framework will guide the teachers in these interventions, will provide the reference for the development of the assessment approach of the project and will provide the necessary requirements for the enabling technologies the proposed project will develop.

**1\. Data summary**

**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

The data generated under WP1 serves as evidence for designing the STORIES pedagogical approach. It links inquiry-based learning to creative STEM education and to storytelling.
Based on data collected from literature reviews and additional data to be generated by the WP1 pedagogical experts, the STORIES pedagogical framework will provide the reference for the development of the assessment approach of the project (WP6) and will offer the necessary requirements for the development and implementation of enabling technologies within science education. The expected size of the data is not known at this point; as the project evolves we will have an estimate of the data size, which will be reported in the final DMP. The data might be of use to those interested in better understanding how the arts and sciences can be integrated through digital storytelling in educational settings.

**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

All data will be shared with the consortium in the internal working space with those partners that need access to them (Fraunhofer - BSCW, http://www.bscw.de/english/). Folders will be organized in a hierarchical and clear structure. Files will be uniquely identifiable and versioned by using a systematic name convention. Moreover, each folder within the BSCW server that stores data will be characterized with keywords, so the data can be easily found by using BSCW's search mechanism.
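Since the exact convention still has to be agreed within the consortium, the following Python sketch only illustrates what such a systematic, versioned file name on the BSCW workspace could look like; the pattern and its fields are hypothetical.

```python
# Illustrative only: the consortium's final naming convention is still to be agreed.
def bscw_filename(wp: int, deliverable: str, title: str, version: int, ext: str) -> str:
    """Build a systematic, versioned file name for the BSCW workspace."""
    slug = title.lower().replace(" ", "-")
    return f"STORIES_WP{wp}_{deliverable}_{slug}_v{version:02d}.{ext}"

# Example: bscw_filename(1, "D1.1", "Pedagogical Framework", 2, "docx")
# -> "STORIES_WP1_D1.1_pedagogical-framework_v02.docx"
```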
**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

All generated data will be made openly available through the public reports (deliverables). Data will be provided in tables and spreadsheets as electronic office documents, and there is no need for special software to access the data.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

In WP1 we are mainly dealing with data generated by the consortium and/or educational professionals (e.g. teachers). Open access is given to the aforementioned data (e.g. through public reports). No standard vocabulary or methodology is foreseen to be used for WP1 data.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

The data generated within WP1 will be licenced under Creative Commons, even though we do not expect the data to be re-used within the project. Regarding the quality assurance process for WP1 data, we refer to the WP4 and WP6 leaders (who handle the project's most sensitive data) and are willing to follow their overall approach.

**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**

No costs are foreseen for data of WP1.

**Clearly identify responsibilities for data management in your project**

The people responsible for the creation and management of the WP1 STORIES data are the authors-contributors of the deliverables. Each team of authors will be led by the (predefined) task leader, who will be responsible for managing the group and the created data.

**Describe costs and potential value of long term preservation**

Not applicable.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

Data created within WP1 will be stored on the BSCW workspace server. Access to this data will be password-protected and given only to members of the STORIES consortium.

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

Ethical aspects will be addressed as part of WP9.

**6\. Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Further procedures for data management are not known yet.

# DMP Work Package 2 – Architecture Specification & Design

**Plan Name** Horizon 2020 DMP - STORIES of Tomorrow - students visions on the future of space exploration (WP2)
**Plan ID** H2020-ICT-2016-1
**Grant number** 731872
**Principal Investigator / Researcher** Constantine Abazis
**Plan Data Contact** [email protected]
**Plan Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in innovative ways. The purpose of Work Package 2 (Architecture Specification & Design) is to define the specifications of the STORIES system architecture, which will be the basis for the technical implementation and service integration, incorporating tasks like overall system architecture, functional components specification and design, data components design, and multi-modal user interfaces design. In this capacity, both system data (log records) and student action/teacher assessment data will be created and collected.
**Institution** Other

**1\. Data summary**
**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

Analytics is crucial for the project and a key feature; we intend to provide analytics for all student activities and learning outcomes. We intend to collect:

**System data (coming from the system software database)**

* unique student ID (this will be the master key in searching the database)
* Students' group ID
* Student data (age, gender)
* Number of participating students
* Number of implementation scenarios created per school
* Number of episodes created by each student group at each school
* Number of 3D objects, text blocks, images, videos and sounds used per story
* Number of interactions between students
* Number of interactions between students and experts / teachers
* Time spent per episode
* Number of interactions with the conversational agent
* Modifications done and times an episode was re-written.

**Classroom data:** coming from questionnaires for the students, which do not pass through the STORIES software system.

**Stories assessment data:** a template within the system for evaluation by the teacher – the teacher will be able to edit scores, marks and comments.

All the above data will be useful to the project in assessing the effects & prerequisites of deep learning.
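For illustration only, the system data listed above could map onto a record structure like the following Python sketch; every field name here is an assumption, not the final database schema.

```python
from dataclasses import dataclass, field

@dataclass
class StoryUsageRecord:
    """One pseudonymous usage record per student, as logged by the STORIES platform."""
    student_id: str                # unique student ID, the master key for searches
    group_id: str                  # students' group ID
    age: int                       # student data
    gender: str
    episodes_created: int = 0
    assets_used: dict = field(default_factory=lambda: {
        "3d_objects": 0, "text_blocks": 0, "images": 0, "videos": 0, "sounds": 0})
    student_interactions: int = 0  # interactions between students
    expert_interactions: int = 0   # interactions with experts / teachers
    agent_interactions: int = 0    # interactions with the conversational agent
    seconds_per_episode: list = field(default_factory=list)  # time spent per episode
    episode_rewrites: int = 0      # times an episode was re-written
```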
**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

No administrative, student or teacher data, nor data coming from these sources, will be retrievable without permission for certain uses. Concerning visual and design assets in Work Package 2, our database will support the submission of 3D, VR, AR and multimedia content and, thereby, the description of the necessary meta-models for content submission and event description.

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

Our team will use the Open Science Framework for making data openly accessible. The following deliverables from Work Package 2 will be accessible via the project's website:

* Overall Architecture Specification (1 and 2)
* Functional and Data Components Specification & Design
* User Interface Design analytic description and mock-ups

Other data sets will be generated through questionnaires and (a) Conversational Agent(s). No existing data is going to be reused. Our team uses Microsoft Office for working documents and saves them as PDF files.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

We will be using standard vocabulary and adhere fully to the format standards of the corresponding software applications.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

All data produced and/or used in the project are project-specific administrative data and will therefore not be relevant to third parties. We will not use the data ourselves after the end of the project.

**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**

No clear estimation of FAIR costs can yet be provided. Mr Abazis and Mr Paraskakis are responsible for data management in Work Package 2.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

Frequent backups and a cloud storage architecture, in addition to the ad hoc set-up, will ensure data availability at all times. The data tier of the STORIES of Tomorrow architecture will include an ORM mechanism which, along with a DRM (Digital Rights Management) algorithm and SSL certificates, will add an additional security layer protecting data delivery from end to end (server/client).

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

Ethics and General Data Protection Regulation issues will be covered in subsequent meetings and presentations.

**6\. Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Not applicable.
# DMP Work Package 3 – Technical implementation

**Plan Name** Horizon 2020 DMP - STORIES of Tomorrow - students visions on the future of space exploration (WP3)
**Plan ID** H2020-ICT-2016-1
**Grant number** 731872
**Principal Investigator / Researcher** Nikolaos Papastamatiou
**Plan Data Contact** [email protected]
**Plan Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in innovative ways. The purpose of Work Package 3 (Technical implementation): Through the STORIES platform, students in teams create their stories about travel to and life on Mars. They work in a collaborative online platform where they communicate with their teachers, space experts and each other. Data is gathered for evaluation purposes.

**1\. Data summary**

**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

Data collected in the platform includes:

* basic students' information (name, surname, email, class and school)
* projects created by teachers (title, mission, students involved (team members))
* stories created by students (title, assets uploaded, assets used)
* interactions between students, their teachers and space experts
* interactions with the conversational agent.

These data are necessary, on the one hand, to pilot the platform in schools and, on the other hand, to evaluate the students and understand whether deeper learning was achieved. These data will be kept in SQL databases. Anonymous data (everything apart from students' names and emails) will be exported in several formats (csv, xml, json, etc.). The data will be generated through the pilots that will take place. The size of the assets (images, video, sound files) is estimated to be about 20 MB per project. The assets will be given under a Creative Commons licence and will not be available for commercial use. No previous data will be used. These data can be useful to all those researching collaborative problem solving, STEM in Education, deeper learning, learning by doing, etc.
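As a minimal sketch of the anonymized export described above, the following Python fragment selects only non-identifying columns (dropping names and emails) and writes CSV and JSON; the table layout is hypothetical.

```python
import csv
import json
import sqlite3  # stand-in for the platform's SQL database; the schema is hypothetical

def export_anonymous(db_path: str) -> None:
    """Export platform data without students' names and e-mail addresses."""
    con = sqlite3.connect(db_path)
    # Select everything except the identifying columns (name, surname, email).
    rows = con.execute(
        "SELECT student_id, class, school, story_title "
        "FROM students JOIN stories USING (student_id)"
    ).fetchall()
    header = ["student_id", "class", "school", "story_title"]
    with open("export.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    with open("export.json", "w") as f:
        json.dump([dict(zip(header, r)) for r in rows], f, indent=2)
    con.close()
```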
**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

There are no metadata for the data created by the platform tools.

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

Students' names and email accounts will be kept confidential. Other data will be freely available for download from the project's platform. Images and videos created by students will be available under a Creative Commons licence.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

N/A

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

These data can be useful to all those researching collaborative problem solving, STEM in Education, deeper learning, learning by doing, etc.

**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**

Cloud server rental is necessary for long-term preservation; the costs relate mainly to the uploaded assets.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

The STORIES project will collect and process personal data in accordance with applicable laws, i.e. the relevant international and European conventions and the relevant EU and national legislation, in addition to their national implementations in relevant EU Member States. In STORIES, any personal data that have been anonymised are no longer considered personal data and can be used without further restrictions. In this respect, **Data Anonymization** is defined as the process of sanitising a data set from any personally identifiable information. The resulting data set cannot be used to identify any real persons. Daily backups are envisaged. All project data will be stored in cloud servers.

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

So far, no ethical issues have arisen within the project; as stated above, all data will be anonymized.
**6\. Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Not applicable.

# DMP Work Package 4 – Service Integration

**Plan Name** Horizon 2020 DMP - STORIES of Tomorrow - students visions on the future of space exploration (WP4)
**Plan ID** H2020-ICT-2016-1
**Grant number** 731872
**Principal Investigator / Researcher** Miltiadis Anastasiadis
**Plan Data Contact** [email protected]
**Plan Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways. The purpose of Work Package 4 (Service Integration):

* Integrate all STORIES components into a functional environment providing all required services.
* Integrate the supporting networking infrastructures with STORIES functional components.

**Institution** Other

**1\. Data summary**

**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

**State the purpose of the data collection/generation**

The purpose of WP4 is to:

* integrate all STORIES components into a functional environment providing all required services;
* integrate the supporting networking infrastructures with STORIES functional components.

During the service integration process, the whole system will be integrated and will work as a single entity. Within this process, the system will collect data coming from the system itself (login data, student data, time consumed, classroom data, etc.), data coming from the online evaluation questionnaires, offline evaluation data and data generated by the data analytics engine. All data will be generated while the system is running and will be stored in a database, either on premises (school environment) or in the cloud. One critical issue is terminology. Defining terms is very important for such a complex project and essential when bringing together different worlds like pedagogy and school environments, information and communication technologies, and assessment and evaluation teams from universities. Some key terms to be used throughout the project are:

**Story:** there is one story, and this is the trip to Mars and its colonization.

**Implementation scenarios:** this term refers to how each team of students perceives implementing the story. There can be unlimited implementation scenarios. Each team of students will draft and implement its own scenario, to be agreed with their teacher.

**Episodes:** each implementation scenario will consist of episodes that each group of children will define and agree with their teacher. All the different episodes of an implementation scenario must be connected logically and sequentially. The whole process will comprise several iterations.
Groups of children will come back many times, redefining and re-writing episodes – and, in turn, the implementation scenario – until it is approved by their teacher.

**Explain the relation to the objectives of the project**

The objective of the project is to enhance deeper learning in STEM through the creation of a storytelling platform. The data that will be collected and analyzed will enable researchers in universities and school teachers to understand student skills, and how these skills are enhanced by using the storytelling platform.

**Specify the types and formats of data generated/collected**

The data to be collected will be formatted data with predefined structures and of alphanumeric type. The system will collect and store data of numeric and character type. Furthermore, the system will collect and store processed data resulting from the data analytics engine, also in the form of reports. These data will be kept in SQL databases. Anonymous data (everything apart from students' names and emails) will be exported in several formats (csv, xml, json, etc.).

**Specify if existing data is being re-used (if any)**

No.

**Specify the origin of the data**

Data generated by the storytelling platform and data from evaluation questionnaires.

**State the expected size of the data (if known)**

The size of the assets (images, video, sound files) is estimated at about 20 MB per implementation scenario. The assets will be given under a Creative Commons licence and will not be available for commercial use.

**Outline the data utility: to whom will it be useful**

Teachers and pedagogical researchers. More specifically, these data can be useful to all those researching collaborative problem solving, STEM in Education, deeper learning, learning by doing, etc.

**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

**Outline the discoverability of data (metadata provision)**

The system does not use metadata structures.

**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**

Yes. For example, the 3D objects have unique identifiers. The 3D objects built in the course of the project will be stored in a common repository space and will be accessible to the wider pedagogical community.

**Outline naming conventions used**

There are many naming conventions, which are listed in the respective deliverable on functional and data components specification and design.

**Outline the approach towards search keyword**

Searching will take place with character strings and words.

**Outline the approach for clear versioning**

The system is currently being built, and the technical team follows the versioning process it uses in all project implementations: keeping old versions and updating from one version to the next.
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

No metadata will be in use.

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**

The data to be made available consists mainly of the analytics reports of the evaluation for improving STEM and student skills. As a consortium, we have agreed that the data to be collected and shared anonymously comprises the following categories:

**System data (coming from the system software database)**

* unique student ID (this will be the master key in searching the database)
* Students' group ID
* Student data (age, gender)
* Number of participating students
* Number of implementation scenarios created per school
* Number of episodes created by each student group at each school
* Number of 3D objects, text blocks, images, videos and sounds used per story
* Number of interactions between students
* Number of interactions between students and experts / teachers
* Time spent per episode
* Number of interactions with the conversational agent
* Modifications done and times an episode was re-written.

**Classroom data:** coming from questionnaires for the students, which are not, however, embedded in the STORIES software system.

**Stories assessment data:** a template within the system for evaluation by the teacher – the teacher will be able to edit scores, marks and comments.

**Specify how the data will be made available**

Students' names and email accounts will be kept confidential. Other data will be freely available for download from the project's platform. Images and videos created by students will be available under a Creative Commons licence.

**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**

The analytics reports will be stored in a common database. We can provide these evaluation reports freely, or give the wider pedagogical community access to them through the STORIES of Tomorrow website.

**Specify where the data and associated metadata, documentation and code are deposited**

N/A

**Specify how access will be provided in case there are any restrictions**

N/A

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

Interoperability will matter especially when using 3D objects from third-party sources, but currently the project is in its first phase and still being built up.
Service integration is not yet taking place and the technical team is still working on it. We will be sharing the 3D objects the consortium is building, and we will be reading 3D objects from external sources if necessary.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

The same applies as for open data access. These data – presented above – can be useful to all those researching collaborative problem solving, STEM in Education, deeper learning, learning by doing, etc.

**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**

Cloud server rental is necessary for long-term preservation; the costs relate mainly to the uploaded assets. We plan to explore costs with cloud operators, with security mechanisms for data storage and data transmission as our first priority. To that end, we are also in discussions with big cloud operators like IBM, Oracle and Microsoft, and we will decide in due course as project implementation and pilots evolve.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

Currently, all data is test data and is stored on the servers of the technical partners. Once the project is in its piloting phase and real data exists, the data will be stored on secure servers of a company certified to ISO 9001 and ISO 27001. This company will be either Motivian or another large cloud operator like IBM, Oracle or Microsoft. The STORIES project will collect and process personal data in accordance with applicable laws, i.e. the relevant international and European conventions and the relevant EU and national legislation, in addition to their national implementations in relevant EU Member States. In STORIES, any personal data that have been previously anonymised are no longer considered personal data and can be used without further restrictions. In this respect, Data Anonymization is defined as the process of sanitising a data set from any personally identifiable information. The resulting data set cannot be used to identify any real persons. Daily backups are envisaged. All project data will be stored in cloud servers.

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

So far, no ethical issues have arisen; as stated above, all data will be anonymized.
**6\. Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Motivian is governed by ISO 9001 and ISO 27001, which fully cover data management and security, and its technical team has extensive experience in managing large data sets confidentially and with advanced security mechanisms. This experience has been built up through very large projects in both the public and private sectors.

# DMP Work Package 5 – Piloting

**Project Name** STORIES of Tomorrow - students visions on the future of space exploration (WP5)
**Project Identifier** H2020-ICT-2016-1
**Grant Title** 731872
**Principal Investigator / Researcher** Jens Koslowsky
**Project Data Contact** [email protected]
**Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways. The purpose of Work Package 5: The main objectives of this work package are

* To test extensively the proposed STORIES intervention in at least 15 pilot schools in Greece, Finland, Germany, France and Portugal and collect data to provide evidence of students' deeper learning in STEM.
* To organize a series of national workshops and international training events for teachers and educators where the proposed pedagogical approach and leading-edge learning technologies will be applied, in order to support the design, creation and use of digital content for personalized learning and teaching, and to facilitate innovation in education.
* Starting from a core of 15 pre-selected schools around Europe, to increasingly build the community of stakeholders (teachers, students, science educators, researchers, business actors and policy makers) who will accompany the development of the project from the early phases until the full deployment of the project results into long-term sustainability planning.
* To create a series of guidelines and support materials, namely the STORIES tool-kit, for teachers and students in order to effectively implement the STORIES approach in their classrooms.

**Funder** European Commission (Horizon 2020)
**Institution** Other

**1\. Data summary**

**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

The implementation and pilot-testing of the STORIES platform and pedagogical approach is the main aim of the work in WP5. While the work of WP5 enables the data collection during the pilot, that data will be collected mostly as part of WP6 (evaluation). Students in the pilot schools will log in to the platform with a unique identifier. The stories created will be uploaded to the platform during the implementation. The usage data of students using the system will be collected during the work of WP5. The same applies during the STORIES Challenges.
WRO Hellas will collect the information on all entries and participants in the challenges. Other data (names of teachers participating in summer schools and visionary workshops) are being collected in Excel or PDF forms to document attendance.

**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

All data will be shared with the consortium in the internal working space with those partners that need access to them (Fraunhofer - BSCW). Folders will be organized in a hierarchical and clear structure. Files will be uniquely identifiable and versioned by using a systematic name convention (to be clarified).

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

As we are dealing with sensitive data of primary school students, the data will be handled very restrictively. Further details are to be decided by the Ethics Committee established as part of WP9.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

As we are dealing with sensitive data of primary school students, the data will be handled very restrictively. Further details are to be decided by the Ethics Committee established as part of WP9.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

As we are dealing with sensitive data of primary school students, the data will be handled very restrictively. Further details are to be decided by the Ethics Committee established as part of WP9.
**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**

It is not yet clear how extensive the data collection will be; therefore, it is not possible to give an estimation for WP5. The national coordinators of the piloting are responsible for the teacher data collection at the national level. The student data will be collected as part of WP6 and the technical WPs. The data of the STORIES Challenges will be collected and managed by the task leader, WRO Hellas.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

All data is saved on the BSCW, and access is password-protected.

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

Ethical aspects will be addressed as part of WP9.

**6\. Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Further procedures for data management are not known yet.

# DMP Work Package 6 – Assessment and Validation

**Project Name** STORIES of Tomorrow - students visions on the future of space exploration (WP6)
**Project Identifier** H2020-ICT-2016-1
**Grant Title** 731872
**Principal Investigator / Researcher** Prof. Dr. Florian Kaiser
**Project Data Contact** [email protected]; [email protected]
**Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's ebooks by a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity and b) by integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways. The purpose of Work Package 6 (Assessment and Evaluation): The objective of the evaluation work is to ensure a continued learning process based on the deeper learning paradigm, which places not just intellectual abilities but also motivational abilities, such as collaborative problem solving, at the center of the project. Our deeper learning paradigm incorporates the idea that a range of abilities and their orchestrated, skillful application leads to STEM mastery. Necessarily, our approach to developing STEM mastery includes a multitude of educational measures (especially a wide variety of technology-based instruments) to broadly foster deeper learning. The complexity that comes with multiple measures also leads to a challenging evaluation procedure. Corresponding to the learning modules, we will develop matching technology-based assessment instruments that allow assessing all six abilities that are central to deeper learning. These abilities can be divided into intellectual abilities and motivational abilities. To accomplish a comprehensive assessment of STEM mastery within the deeper learning paradigm in science education, we will develop standardized assessment instruments that cover intellectual and motivational abilities alike. Abilities such as collaborative problem solving, essential within the deeper learning paradigm, inevitably also depend on communicational and social skills.

**Funder** European Commission (Horizon 2020)

**1\. Data summary**
**Provide a summary of the data addressing the following issues:**
**State the purpose of the data collection/generation**
**Explain the relation to the objectives of the project**
**Specify the types and formats of data generated/collected**
**Specify if existing data is being re-used (if any)**
**Specify the origin of the data**
**State the expected size of the data (if known)**
**Outline the data utility: to whom will it be useful**

The evaluation of the STORIES project is the purpose of the data collection in WP6. The objective of the evaluation work is to ensure a continued learning process based on the Deeper Learning paradigm. Therefore, we will assess possible consequences of the six deeper learning competencies (a) science understanding and knowing, b) scientific reasoning, c) reflecting on science, d) collaborative problem solving, e) interest and excitement, and f) identification with the scientific enterprise), which integrate the knowledge of the different subjects necessary to address science challenges and the fascination with such challenges (e.g., "The journey to Mars"). The data set will be generated through questionnaires and (a) Conversational Agent(s). No existing data is going to be reused. Our team uses Microsoft Office for working documents and saves them as PDF files. Since the Conversational Agent(s) is/are not finished yet, it is not possible to assess the data volume. Also, we do not have a technical solution for the data storage at the current state of the planning process. The data expands the largely underexplored field of Deeper Learning. Consequently, the data might be useful for researchers with an interest in Deeper Learning.

**2\. FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
**Outline naming conventions used**
**Outline the approach towards search keyword**
**Outline the approach for clear versioning**
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**

At the current state of the project, we cannot give any explanations about where and how data and metadata will be accessible, or whether we want to use Digital Object Identifiers. The data will be shared with the consortium in the internal working space (BSCW). Folders will be organized in a hierarchical and clear structure. Files will be uniquely identifiable and versioned by using a systematic name convention, but what that convention will include is not known so far. Also, we cannot yet name the thesaurus that we are going to use for the keywords.

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
**Specify how the data will be made available**
**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
**Specify where the data and associated metadata, documentation and code are deposited**
**Specify how access will be provided in case there are any restrictions**

Our team will use the Open Science Framework for making data openly accessible. Further explanations will follow.
We cannot yet say which data will be accessible at what stage. However, there will be different access levels. Sensitive data will not be publicly available, in accordance with data protection law. The following software may be needed to access the data: a word/spreadsheet processing program (e.g. Microsoft Office), Adobe PDF Reader, and an XML viewer. Where and how the data, metadata and documentation will be deposited is not known at the current phase of the planning process.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

Explanations about how we will make the data, metadata and documentation interoperable, and whether we are going to use standard vocabulary for all data types, will follow at later stages of the project.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible**
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
**Describe data quality assurance processes**
**Specify the length of time for which the data will remain re-usable**

At this early phase, our team has not made decisions about how the data will be licensed and when the data will be available. The data can be reused by other scientists in the field of Deeper Learning and educational research. Neighbouring disciplines and interdisciplinary research groups might also be interested. Since we have not yet taken a final decision on the measurement methods, we cannot describe the data quality assurance processes. The length of time for which the data will remain reusable is not known at this stage of the planning process.

**3\. Allocation of resources**

**Explain the allocation of resources, addressing the following issues:**
**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
**Clearly identify responsibilities for data management in your project**
**Describe costs and potential value of long term preservation**

Since the amount of data cannot yet be estimated at this stage of the project, we cannot say how costly FAIR data and long-term preservation will be. The work package leaders, Prof. Dr. Florian Kaiser and Dr. Siegmar Otto, are responsible for the data management.

**4\. Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

At this early stage of the project, our team has not made decisions about data recovery or the secure storage and transfer of sensitive data. However, the WP6 team makes regular backups of the files.

**5\. Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

References for ethical aspects will follow in further steps of the project.
**Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Further procedures for data management are not known yet.

# DMP Work Package 7 – Dissemination and Exploitation

**Plan Name** Horizon 2020 DMP - STORIES of Tomorrow - students visions on the future of space exploration (WP7)

**Plan ID** H2020-ICT-2016-1

**Grant number** 731872

**Principal Investigator / Researcher** Ines Prieto

**Plan Data Contact** [email protected], [email protected]

**Plan Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's e-books by (a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity, and (b) integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways.

The purpose of Work Package 7 (Dissemination and Exploitation): the tasks in this work package aim to define specific measures that will support the dissemination and exploitation of project results and contribute to their sustainability through effective networking and communication. For this purpose, the following objectives have been identified:

* Organisation of a series of dissemination activities that will allow the consortium to develop links and opportunities for collaboration with similar activities in Europe but also globally, in order to achieve its ambitious objective, namely to use project-based learning with a story-telling platform to achieve deeper learning in science in schools across Europe.
* Exploitation of the services of the project. The business case of the final service will be studied, an initial market validation will take place, and a detailed business plan will be delivered.

**Funder** European Commission (Horizon 2020)

**1. Data summary**

**Provide a summary of the data addressing the following issues:**

In Work Package 7, there is no generation of data as such, since this work package is dedicated to the communication and exploitation of the results of the project and to contributing to the project's sustainability. Still, since WP7 ensures that awareness of the project is as broad as possible, it will communicate the results of the project. Scientific results will be communicated in the form of scientific publications, since papers will be submitted to scientific journals and magazines focusing on digital storytelling for formal and informal science education. An internal conference will present the outcomes of the project, and an international Call for Papers for a Special Issue or Edited Volume on "Digital Story Telling in formal and informal education" will be published. The digital stories produced by students in participating schools will be published in a specific digital library, with a dedicated web space allowing the public to access, read and watch the outcomes of the project.

**Specify the types and formats of data generated/collected**

The data that will be collected will be used for the effective assessment of the dissemination activities by the different members of the Consortium. It will be collected using Microsoft Office 2010 (Word, Excel, and PowerPoint).
**Specify if existing data is being re-used (if any)**

Re-using existing data is not planned.

**Specify the origin of the data**

The data that will be disseminated will be provided by the Consortium members to WP7, in order to ensure correct communication of the outcomes of the project.

CONFIDENTIAL DOCUMENTS (only for members of the Consortium including the Commission Services)

D7.1 Dissemination Plan: contains regularly updated information provided by Consortium members about their dissemination activities, as well as an updated list of potential targets for dissemination.
D7.5 Exploitation Strategy and Business Plan: contains a list of target groups and potential users of the story-telling platform, provided by the task leader.

PUBLIC DOCUMENTS

D7.2 Project web site: only general information about the project, and updated news.
D7.3 Dissemination material: no specific data.
D7.4 Publications: scientific papers and presentations of the outcomes of the project, using data generated through the evaluation of the project (WP6).
D7.7 Digital Library: repository of the students' pilot activities (e-books about their stories of Mars), collected by each Consortium member involved in piloting activities, and respecting national regulations.
D7.9 Final conference proceedings

**State the expected size of the data (if known)**

Not known yet.

**Outline the data utility: to whom will it be useful**

The information communicated will be used for dissemination purposes. General information about the project will be useful for the students involved, their families and schools. Published scientific results will be useful for the formal and informal educational community and education policy makers. The Digital Library will be useful for the general public.

2. **FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision)**
Not applicable
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
Not applicable
**Outline naming conventions used**
Not applicable
**Outline the approach towards search keyword**
Not applicable
**Outline the approach for clear versioning**
Not applicable
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how**
Not applicable

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**

The data that will be used for assessing the correct dissemination of the project is only for internal use; this concerns the Exploitation Strategy and Business Plan. The deliverables D7.1 Dissemination Plan and D7.5 Exploitation Strategy and Business Plan are confidential, only for members of the Consortium including the Commission Services.
The following deliverables will be made public:

D7.2: Project web site
D7.3: Dissemination material
D7.4: Publications
D7.7: Digital Library
D7.9: Final conference proceedings

**Specify how the data will be made available**

All relevant information and deliverables will be uploaded to the internal working space BSCW, to which all Consortium members have access. The deliverables are uploaded to the European Commission's Participant Portal. The European Commission will upload the public deliverables to CORDIS. The publications (D7.4) will be available according to the policies of the journals or conferences in which they will be published. The project web site (D7.2) and Digital Library (D7.7) will be available online.

**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**

All deliverables are uploaded in PDF format. The project web site (D7.2) and Digital Library (D7.7) will be available online.

**Specify where the data and associated metadata, documentation and code are deposited**

The data and associated documentation collected by, or provided to, the WP leader in D7.1 will be stored on its internal server (two copies), with a disk backup every night and a copy to LTO tape every night. No external access is given to the data.

**Specify how access will be provided in case there are any restrictions**

Consortium members will access confidential data (D7.1 and D7.5) through the project internal working space, upon invitation.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**

Not applicable, since the data that we are dealing with in WP7 is not research data as such.

**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

Not applicable, since the data that we are dealing with in WP7 is not research data as such.

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible. Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**

Licensing is not relevant here.

**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project?
If the re-use of some data is restricted, explain why**

The Consortium has not taken a decision on this point, especially in the case of the Digital Library, which involves making material produced by students public, and must therefore comply with the image and data rights legislation of the EU, as well as of every country involved.

**Describe data quality assurance processes**

Not known yet.

**Specify the length of time for which the data will remain re-usable**

Not yet known, should any of the data prove reusable.

**3. Allocation of resources**

**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**

Not applicable

**Clearly identify responsibilities for data management in your project**

Each task leader is responsible for his/her data management.

**Describe costs and potential value of long term preservation**

Long-term preservation needs to be discussed with the Consortium, but it is an important point for ensuring the long-term sustainability of the project.

4. **Data security**

**Address data recovery as well as secure storage and transfer of sensitive data**

The Consortium has not yet discussed this point.

5. **Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

Answers will follow.

6. **Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

Further procedures for data management are not known yet.

# DMP Work Package 8 – Project Management

**Project Name** STORIES of Tomorrow - students visions on the future of space exploration (WP8)

**Project Identifier** H2020-ICT-2016-1

**Grant Title** 731872

**Principal Investigator / Researcher** Julia Huebner

**Project Data Contact** [email protected]

**Description** The nature of our research project is Technologies for Learning and Skills. The STORIES project aims to contribute to a dynamic future of children's e-books by (a) developing user-friendly interfaces for young students (10-12 years old) to create their own multi-path stories expressing their imagination and creativity, and (b) integrating the latest AR, VR and 3D printing technologies to visualize their stories in numerous innovative ways.

The purpose of Work Package 8 (Project Management): this work package is responsible for the coordination of the project in both administrative and technical terms, aiming at the effective operation of the project as well as the timely delivery of quality results. The main objective is therefore the effective management of the project. An effective project management system requires effective decision-making, operational internal communication, development of solid work breakdown structures, schedules, cost and resource plans, effective administrative and technical control of the project, quality assurance and risk management. The tasks of this work package will ensure that all pre-described objectives of the consortium are achieved in a timely manner and that all the outputs are of the expected quality.
Furthermore, the internal processes of the consortium will be systematically assessed and evaluated. The data generated in WP8 is exclusively operative data, as used in any research project administration. It is not 'research data' as such.

**Funder** European Commission (Horizon 2020)

**1. Data summary**

**Provide a summary of the data addressing the following issues:**

**State the purpose of the data collection / generation:** Effective administration and operational internal communication.
**Explain the relation to the objectives of the project:** All data collection/generation in WP8 is relevant for good project management and a solid foundation for all partners working together.
**Specify the types and formats of data generated/collected:** At the University of Bayreuth we work with Microsoft Office 2010. Documents are produced in Word, Excel and PowerPoint and saved as PDF files.
**Specify if existing data is being re-used (if any):** The internal contact list will be updated if necessary.
**Specify the origin of the data:** All project management tools are created as needed, or they are preset by the European Commission.
**State the expected size of the data (if known):** Not yet known.
**Outline the data utility: to whom will it be useful:** Only for internal use.

2. **FAIR data**

**2.1 Making data findable, including provisions for metadata:**

**Outline the discoverability of data (metadata provision):** Not applicable
**Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?** Not applicable
**Outline naming conventions used:** Project name, WP, version, date.
**Outline the approach towards search keyword:** Not applicable
**Outline the approach for clear versioning:** Not applicable
**Specify standards for metadata creation (if any). If there are no standards in your discipline describe what metadata will be created and how:** Not applicable

**2.2 Making data openly accessible:**

**Specify which data will be made openly available? If some data is kept closed provide rationale for doing so:**

The project management data created in WP8 is only for internal use. The deliverables:

D8.1 "Project Management Guidelines"
D8.2 "Quality and Risk Management Plans and Final reports"
D8.3 "Quality and Risk Management Plans and Final reports"
D8.4 "Quality and Risk Management Plans and Final reports"
D8.5 "Web-based Management Platform"

are confidential, only for members of the consortium (including the Commission Services). The deliverable D8.6 "Data Management Plan" will be public (Open Research Data Pilot).

**Specify how the data will be made available:**

All project-relevant information and deliverables will be / are uploaded to the internal working space BSCW (https://fit-bscw.fit.fraunhofer.de/), to which all Consortium members have access. Furthermore, the deliverables are uploaded to the European Commission's Participant Portal (https://ec.europa.eu/). The European Commission will upload the public deliverable to CORDIS.

**Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**

All deliverables are uploaded in PDF format.
**Specify where the data and associated metadata, documentation and code are deposited:**

They will be stored on an external hard disk, and this storage will be updated every month.

**Specify how access will be provided in case there are any restrictions:**

Consortium members get an invitation to the internal working space, and the European Commission has given access to the Participant Portal.

**2.3 Making data interoperable:**

**Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability:** Not applicable
**Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?** Not applicable

**2.4 Increase data re-use (through clarifying licenses):**

**Specify how the data will be licenced to permit the widest reuse possible:** Not applicable
**Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed:** Not applicable
**Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why:** All data produced and/or used in the project is project-specific administrative data, and therefore will not be relevant for third parties. We will not use the data ourselves after the end of the project.
**Describe data quality assurance processes:** The data are/will be stored on an external hard disk, and this storage will be updated every month.
**Specify the length of time for which the data will remain re-usable:** Not applicable

3. **Allocation of resources**

**Estimate the costs for making your data FAIR. Describe how you intend to cover these costs:** Not applicable
**Clearly identify responsibilities for data management in your project:** Each work package leader is responsible for his/her data management.
**Describe costs and potential value of long term preservation:** Not applicable

4. **Data security**

**Address data recovery as well as secure storage and transfer of sensitive data:** Not applicable

5. **Ethical aspects**

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former:** Answer will follow.

6. **Other**

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any):** Not applicable
0259_Neurofibres_732344.md
# Executive summary

**1. Description of the deliverable content, objectives and purpose**

The objective of WP8, "Activities for the exploitation, dissemination and communication of results", is to ensure appropriate measures to exploit, disseminate and communicate the main results derived from the project. Deliverable 8.2, entitled "Data Management Plan" (DMP), provides an analysis of the main elements of the data management policy that will be used by the members of the consortium with regard to the data generated through the life of the project. The DMP is released in compliance with the Horizon 2020 FAIR DMP template 1, provided by the European Commission in the Participant Portal, and will be updated over the course of the project in time with the periodic evaluations of the project (M12, M30 and M48).

**Acronyms**

* DMP: Data Management Plan
* EC: European Commission
* EU: European Union
* FAIR: Findable, accessible, interoperable and re-usable
* H2020: Horizon 2020
* MFs: Microfibres
* WP: Work Package
* PU: Public
* RE: Restricted
* CO: Confidential

# Introduction

NEUROFIBRES is a Horizon 2020 project participating in the Open Research Data Pilot. This pilot is part of the Open Access to Scientific Publications and Research Data Programme in H2020 2. The goal of the programme is to foster access to data generated in H2020 projects. Open Access refers to the practice of giving online access to scholarly information of all disciplines, free of charge to the end-user. In this way data becomes re-usable and the benefit of public investment in the research is increased.

The EC provided a document with guidelines 3 for project participants in the pilot. The guidelines address aspects like research data quality, sharing and security. According to the guidelines, participating projects need to develop a Data Management Plan (DMP). The purpose of the DMP is to provide an overview of the main elements of the data management policy that will be used by the Consortium with regard to the project research data. The DMP is not a fixed document but will evolve during the lifespan of the project.

The DMP covers the complete research data life cycle of the NEUROFIBRES project. It describes the types of research data that will be generated during the project, the strategies for research data preservation and the provisions on access rights. The research data should be "FAIR", that is findable, accessible, interoperable and re-usable. These principles precede implementation choices and do not necessarily suggest any specific technology, standard or implementation solution. The repository ZENODO has been chosen as the main repository to store, classify and provide Open Access to the data objects originated within the NEUROFIBRES project frame. Nevertheless, institutional databases will also be considered to provide open access to specific data.

# Data Summary

## Purpose of data collection and its relation with the project objectives

The purpose of data collection is to capture qualitative and quantitative evidence whose analysis leads to the formulation of answers to the questions that concern the NEUROFIBRES project. Those data could be useful to scientists and industry with research and commercial interests in the fields of SCI, neuroprostheses, bio-electronic systems, electrical and mechanical engineering, biomaterials, affibodies, biotechnology, and electroconducting materials.
2 _http://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-datamanagement/open-access_en.htm_
3 _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilotguide_en.pdf_

## Types and formats of data

NEUROFIBRES will generate and collect multimodal measurements of the properties and behaviour of the MFs, the interconnected system and their mechanical interaction with the host tissue; numerical data on the features of the affibodies accounting for their electroresponsiveness and affinity for the target molecules; numerical and qualitative data regarding cell culture and animal experiments in which the novel bio-electronic tools will be tested; data statistics; and numerous images related to the experiments. The types and formats of data acquired within the NEUROFIBRES project frame include the following:

* _Laboratory data_: datasets (*.txt, *.doc, *.docx, *.xls, *.xlsx), multimodal measurements (*.txt, *.doc, *.docx, *.xls, *.xlsx), numerical data (*.XX), qualitative data (*.txt, *.doc, *.docx), data statistics (*.xls, *.xlsx), images (*.jpg, *.png, *.jpeg, *.tiff), videos (*.avi, *.mov, animated GIF).
* _Research data_: statistics (*.xls, *.xlsx), graphs (*.ogg, *.xls, *.xlsx), bibliography (*.enl).
* _Scientific texts_: manuscripts and reports (*.doc, *.docx, *.pdf), publications (*.doc, *.docx, *.pdf), conference proceedings (*.doc, *.docx, *.pdf), conference presentations and posters (*.ppt, *.pptx, *.pdf), books and theses (*.doc, *.docx, *.pdf).
* _Dissemination material_: leaflets and fact-sheets (*.pdf), images (*.jpg, *.png, *.jpeg, *.tiff), animated images (*.gif), videos (*.mp4), social network publications and website (*.html), presentations and templates (*.ppt, *.pptx, *.pdf).
* _Management documents_: deliverables (*.doc, *.docx, *.pdf), patents (*.doc, *.docx, *.pdf).

## Re-using of data

Characterization of inflammation will be grounded on previously published and unpublished work aiming at the immunophenotypic identification of all fluorescent cell types in transgenic Thy1CFP//LysM-EGFP//Cd11C-EYFP mice, as well as on methodological work providing the suitable parameters for intravital monitoring of cellular responses during the course of CNS pathology.

## Origin of data

* Laboratory and experimental data: key sets of data obtained directly from the research setups. The sources will be identified in the deposited datasets.
* Research data: analysed data from the laboratory with additional information such as charts, diagrams, statistics, etc.
* Scientific texts and dissemination material: prepared from the research data. Author-accepted manuscripts, media (images, videos, GIFs), leaflets, etc. are included in this category.
* Patents: a potential product of the NEUROFIBRES research data.
* Deliverables: a set of reports for the European Commission based on the research data derived from the NEUROFIBRES project.

## Expected size of the data

The size of the data will be kept as small as possible in order to facilitate its storage and exchange. In any case, a single file should not exceed the limit of 50 GB set by the repository ZENODO (see section 2.2). The expected size of the data is listed below:

* _Laboratory data_: datasets (< 5 MB each), images (1 – 10 MB each), 5D image stacks and videos (up to 70 GB per file).
* _Research data_: statistics, graphs, bibliography (< 1 MB each).
* _Scientific texts_: manuscripts and reports (1 – 5 MB), publications (100 KB – 2 MB), conference proceedings (100 KB – 2 MB), conference presentations and posters (100 KB – 2 MB), books and theses (30 – 50 MB).
* _Dissemination material_: leaflets and fact-sheets (2 MB), images (1 – 10 MB each), animated images (300 KB), videos (10 MB), presentations and templates (100 KB – 5 MB).
* _Management documents_: deliverables (500 KB – 2 MB), patents (500 KB – 2 MB).

## Data utility

Making typical sample data openly accessible will be useful for validating the research results, increasing citations and even facilitating other scientific breakthroughs. Original scientific data from all disciplines involved in the project may be exploited by researchers or industry external to the project. On the other hand, there is great inconsistency in the results of experimental treatments for spinal cord injury (SCI), and much disparity exists in the methodology used for verification of the therapeutic effects. In a drive towards standardisation, a consensus on the minimum content of papers published on this matter was reached 4. Our consortium will produce much more complete data and metadata than the proposed SCI standards, making an additional contribution to the field.

# FAIR Data

In compliance with the European Commission guidelines, the data generated by the NEUROFIBRES project must be FAIR, that is findable, accessible, interoperable and re-usable. The decision to be taken by the project on how to publish its documents and data sets will come after the more general decision on whether to go for an academic publication directly or to seek protection first by registering the developed Intellectual Property.

4 J Neurotrauma 2014;31:1354

## Making data findable

All the records deposited in the ZENODO repository are indexed immediately in OpenAIRE, the Open Access Infrastructure for Research in Europe. OpenAIRE does this by aggregating European-funded research output from nearly 1000 repositories from all over the world and makes them available via the OpenAIRE portal. Records indexed in OpenAIRE are immediately available in the European Commission Participant Portal.

_Metadata provision_

The data generated under the NEUROFIBRES frame will be discoverable, identifiable and locatable by means of suitable metadata. Descriptive metadata refers to the information about the objects' content. The ZENODO repository offers the possibility to assign several tags (metadata) to all uploads, in order to make the content findable. The tags ZENODO offers are:

* Publication type (journal article, presentation, book, thesis, etc.).
* Title, authors, affiliation.
* Description of the content.
* Communities that the data belong to.
* Grants which have funded the research.
* Identifiers (DOI, ISSN, PubMed ID, URLs, etc.).
* Contributors.
* References.

ZENODO assigns other characteristics to uploads, in order to make the content interoperable, such as:

* Journal name, volume, issue and pages, in the case of a manuscript.
* Conference title, place, session, etc., in the case of a conference proceeding.
* Publisher, place, ISBN, pages, in the case of parts of books and reports.
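To illustrate how these tags can be attached in practice, the following minimal Python sketch deposits a record through the ZENODO REST API. The access token, file name and metadata values are placeholders, and the field names should be verified against the current ZENODO API documentation before use.

```python
# Minimal sketch: depositing a file on ZENODO with the descriptive tags
# listed above. Token, file name and metadata values are placeholders.
import requests

API = "https://zenodo.org/api"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(f"{API}/deposit/depositions", params=TOKEN, json={}).json()

# 2. Attach a data file to it.
with open("dataset.xlsx", "rb") as fp:  # illustrative file name
    requests.post(f"{API}/deposit/depositions/{dep['id']}/files",
                  params=TOKEN, data={"name": "dataset.xlsx"},
                  files={"file": fp})

# 3. Set the descriptive metadata (title, authors, community, grant, ...).
metadata = {"metadata": {
    "title": "Example NEUROFIBRES dataset",
    "upload_type": "dataset",
    "description": "Multimodal measurements (illustrative entry).",
    "creators": [{"name": "Surname, Name", "affiliation": "Partner"}],
    "keywords": ["spinal cord injury", "microfibres"],
    "communities": [{"identifier": "neurofibres"}],
    "grants": [{"id": "10.13039/501100000780::732344"}],
}}
requests.put(f"{API}/deposit/depositions/{dep['id']}", params=TOKEN,
             json=metadata)

# 4. Publish: ZENODO mints the DOI at this point.
requests.post(f"{API}/deposit/depositions/{dep['id']}/actions/publish",
              params=TOKEN)
```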
_Clear versioning_

The version of the data to be deposited will be the final version. Digital object identifiers (DOIs) are automatically generated upon deposition in the repository. If necessary, later versions will be deposited; these will receive their own DOI and be identifiable by the date of deposition and file name. Including semantic information such as the version number in a DOI is discouraged, because this information may change over time, while DOIs must remain persistent. Most importantly, version suffixes are not machine readable. ZENODO DOI versioning is linear, which means that the ZENODO version number may in fact not be the real version number of the resource. The approach of NEUROFIBRES to versioning is the one ZENODO provides: two versions (two DOIs) are semantically linked in the metadata of a DOI. This ensures that discovery systems have a machine-readable way to discover that two DOIs are versions of the same resource.

_Standard identification mechanism_

All deposited data will be uniquely identifiable through the standard identifier DOI. Additionally, other identifiers, such as Handle, ARK, PURL, ISSN, ISBN, PubMed ID, ORCID, PubMed Central ID, ADS Bibliographic Code, arXiv, Life Science Identifiers (LSID), EAN-13, ISTC, URNs and URLs, may be used. Data generated under the NEUROFIBRES frame will acknowledge the grant in the following way: "This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732344", and will automatically be associated with the project via the OpenAIRE portal.

_Naming conventions_

ZENODO DOI versioning is linear. Currently, DOIs registered by ZENODO follow the pattern "10.5281/zenodo.<number>", where 10.5281 is the ZENODO DOI prefix and <number> is a sequentially assigned integer. The word "zenodo" is semantic information and, as mentioned in the previous section, including semantic information in DOIs is not supported by NEUROFIBRES, as it may change over time. The current practice was introduced when ZENODO was launched, and while it is not ideal, ZENODO did not want to change the existing practice.

_Keywords_

All deposited data will have an associated group of keywords to facilitate identification.

## Making data openly accessible

The main results of NEUROFIBRES are expected to be exploited industrially, and therefore some data cannot be made available for verification and re-use by persons and/or organisations external to the consortium. Making openly accessible some details of the fabrication or composition of the MFs, affibodies and interconnection system, or electrical parameters critical for succeeding in this challenging application, might jeopardise the achievement of the industrial and commercial impact of the project; such information will therefore remain confidential. Despite this, the consortium has a proactive, free-online-access publication policy, provided that patents are applied for before submitting the results to scientific journals. On the other hand, we are aware that making data openly accessible will be useful for validating the research results, increasing citations and even facilitating other scientific breakthroughs.

_Which data will be made openly accessible_

Any dissemination data linked to exploitable results will not be put into the open domain if they compromise its commercialisation prospects or have inadequate protection. Categories of outputs to which NEUROFIBRES will give Open Access (free of charge) include:

* Scientific publications (author-accepted manuscripts, supplementary files, and conference proceedings).
* Research data (key datasets accompanying publications that are needed to validate the results).
* Other research data that may be of interest to scientists and/or industry.
* Deliverables (public).

We will provide access restricted to the members of the Consortium only for templates (e.g. deliverable templates) and documents concerning internal meetings (e.g. minutes of meetings). Dissemination and outreach material will be openly available via the NEUROFIBRES website and the social network sites.

_How the data will be made available_

The data will be available via the ZENODO repository. Data owned by the Consortium will be deposited in the repository as soon as possible, with open access rights. In case the data needs to be protected (for example, for a future publication or patent), the data will be deposited with embargoed access. The embargo will be lifted after the data has been disclosed. Author-accepted manuscripts will be deposited as soon as possible, and at the latest 6 months after publication. The preferred route to deposit the manuscripts will be the 'Green' model: the author or a representative deposits the published article or final peer-reviewed manuscript in an online repository. Whenever the 'Green' route cannot be assured, the Consortium will provide open access following the 'Gold' route: the article is immediately released in Open Access mode by the scientific publisher upon publication, and the payment of publication costs is shifted away from subscribing readers. All data deposited by the NEUROFIBRES project in the ZENODO repository will be available via the OpenAIRE portal: _https://www.openaire.eu/search/project?projectId=corda__h2020::719cd36fe3741c60b7bdc234b8867fe9_

_Methods or software tools needed to access the data_

The software tools necessary to access the data will be standard and free of charge, such as OpenOffice or Adobe Acrobat Reader. Other standard programs, such as the Microsoft Office package, could also be used. These widely used standard formats are likely to remain readable in the future. In the case of research data files that cannot be opened with standard programs, reliable information on the instruments and tools needed to validate the results will accompany the data sets and will be indicated in the section "Additional notes" upon deposition in the ZENODO repository.

_Where the data, documentation and code are deposited_

The consortium has chosen ZENODO as the central scientific publication and data repository for the project outcomes. The online repository has been created through the European Commission's OpenAIREplus project and is hosted at CERN. The ZENODO community NEUROFIBRES has been specifically created to gather all data under the frame of the NEUROFIBRES project: _https://zenodo.org/communities/neurofibres/?page=1&size=20_ Additionally, the public data will be included in the European Commission Funded Research (OpenAIRE) ZENODO community (_https://zenodo.org/communities/ecfunded/_), which is curated by ZENODO.

_Restrictions_

The deposited data is accessible under four types of access rights:

* Open Access.
* Embargoed Access.
* Restricted Access.
* Closed Access.

Data owned by the Consortium will be deposited in the repository as soon as possible, with open access rights. In case there is an embargo period on the deposited data, access will be granted after the embargo period and at the latest 6 months after publication.
In the case of author-accepted manuscripts with an embargo period longer than 6 months, the Gold Open Access route will be followed. However, in some cases, data will be restricted for scientific purposes, i.e. validation of the results, statistical analyses of research results, background information for new projects, etc. This also applies to some documents such as templates (e.g. deliverable templates) and documents concerning internal meetings (e.g. minutes of meetings), which are not public.

## Making data interoperable

The following abbreviations will be used in the data:

* aa: Amino acid
* Au: Gold
* BBB: Blood Brain Barrier
* CARS: Coherent Anti-Stokes Raman Spectroscopy
* CD: Circular dichroism
* CFP: Cyan Fluorescent Protein
* CNS: Central Nervous System
* DRG neuron: Dorsal Root Ganglion Neuron
* EGFP: Enhanced Green Fluorescent Protein
* EYFP: Enhanced Yellow Fluorescent Protein
* FEM: Finite Element Method
* HFBM: Hierarchical Fibre Bundle Model
* HPLC: High performance liquid chromatography
* K_D: Equilibrium dissociation constant
* MALDI-TOF: Matrix-assisted laser desorption ionization time of flight
* MF: Microfibre
* MS: Mass spectrometry
* Myelin: insulating and nutritive layer around axons
* OPO: Optical Parametric Oscillator
* Pt: Platinum
* QD655: Quantum Dots emitting at 655 nm
* QFM: Quantized Fracture Mechanics
* RU: Response units
* SCI: Spinal Cord Injury
* Secondary neurodegeneration: delayed loss of neurons with regard to impact
* SPPS: Solid phase peptide synthesis
* SPR: Surface plasmon resonance
* UDHS: Unilateral Dorsal Hemi Section
* Video: Time-lapse acquisition
* 2P: Two-Photon Microscopy
* 5D images: Multiparametric images (xyz, colour, time)

## Data re-use

Data re-use is subject to the license under which the data objects are deposited.

_Licence to the data_

The ZENODO repository offers five types of licences for the files under Open Access rights. The license chosen by NEUROFIBRES is the Creative Commons Attribution-NonCommercial 4.0. The characteristics of this license are:

* The licensor allows the work to be copied, distributed and communicated publicly, as well as the creation and dissemination of works derived from it.
* The licensor allows the reproduction, dissemination and public communication of the work only for non-commercial purposes.

By uploading content, no change of ownership is implied and no property rights are transferred to CERN. All uploaded content remains the property of the parties prior to submission.

_When the data will be made available for re-use_

The data will be available for re-use immediately upon deposition. As stated in Section 2.2, the deposited data will be conferred Open Access rights on deposition and at the latest after 6 months.

_Third parties and re-usability_

The data produced may be used by third parties, since it will be openly available during the lifetime of the ZENODO repository.

_Data quality assurance processes_

The cornerstone of digital preservation is data integrity: data is complete and unaltered, as it was when originally recorded. All data files are stored in ZENODO along with an MD5 checksum of the file content. Files are regularly checked against their checksums to ensure that file content remains constant.
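The same kind of fixity check is straightforward to reproduce locally before and after a transfer; the following minimal Python sketch (the file name is illustrative) computes and re-checks an MD5 checksum in the manner described above.

```python
# Minimal sketch: verifying file integrity with an MD5 checksum, mirroring
# the fixity checks ZENODO runs on stored files. File name is illustrative.
import hashlib

def md5sum(path, chunk_size=2 ** 20):
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum at deposition time...
recorded = md5sum("dataset_v1.xlsx")

# ...and compare it later: any change to the content changes the digest.
assert md5sum("dataset_v1.xlsx") == recorded, "file content has changed"
```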
_Length of time for which the data will remain re-usable_

All data files deposited in ZENODO are stored in CERN Data Centres, primarily in Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis. Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which currently has an experimental programme defined for at least the next 20 years. In case of closure of the ZENODO repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject-based repositories. The NEUROFIBRES data is expected to be available after the end of the project duration (31.12.2020) for, at least, the lifetime of the ZENODO repository. Records can be retracted from public view; however, the data files and records are preserved and can no longer be changed or retrieved. The record's metadata can be modified in the ZENODO repository.

# Allocation of resources

_Estimated costs for making the data FAIR_

The estimated cost of the article processing charges is on average 3.500 € per publication 5. Considering an estimate of 3 publications per year per partner from the second to the fourth year of the project, the total cost of making the data openly accessible for the NEUROFIBRES project is about 200.000 € (i.e., roughly 57 publications at this average charge). The associated costs are covered by the author and/or co-authors of the publication, as agreed in the NEUROFIBRES grant agreement (eligible costs in Horizon 2020 projects).

_Responsibilities for data management_

Any member of the Consortium can upload content to the repository. The content will be approved by the coordinator of NEUROFIBRES, SESCAM. Approved items cannot be deleted. New versions of the content can be uploaded together with previous versions; all versions are simultaneously available.

_Value of long-term preservation_

The value of long-term preservation lies in ensuring and facilitating the accessibility and usability of the preserved data. It involves planning, resource allocation and application of the preservation methods described in Section 2. The goal is the accurate rendering of authenticated content over time, so that it remains usable as technological advances render the original software obsolete.

# Data security

In this section, data recovery, secure storage and transfer of sensitive data are addressed. All data files are stored in CERN data centres. CERN has considerable knowledge and experience in building and operating large-scale digital repositories, and a commitment to maintain this data centre to collect and store hundreds of PBs of LHC data as it grows over the next 20 years. In the highly unlikely event that ZENODO has to close operations, CERN guarantees the migration of all content to other suitable repositories, and since all uploads have DOIs, all citations and links to ZENODO resources will not be affected.

5 R. van Noorden, Open access: The true cost of science publishing. Nature 495, 426 (2013).
0260_BADGER_731968.md
# 1. Introduction

BADGER is a Horizon 2020 project participating in the Open Research Data Pilot. This pilot is part of the Open Access to Scientific Publications and Research Data programme in H2020. The goal of the programme is to foster access to data generated in H2020 projects. Open Access refers to the practice of giving online access to scholarly information of all disciplines, free of charge to the end-user. In this way data becomes reusable, and the benefit of public investment in the research is increased. The EC has provided a document with guidelines [1] and a template (see Annex I) for project participants in the pilot. The guidelines address aspects like research data quality, sharing and security. According to the guidelines, participating projects need to develop a Data Management Plan (DMP). The DMP describes the types of data that will be generated or gathered during the project, the standards that will be used, the ways the data will be exploited and shared for verification or reuse, and how the data will be preserved. This document has been produced following these guidelines and aims to provide a consolidated plan for the data management policy that BADGER partners will follow. The document is the second version of the DMP, delivered in M18 (June 2017) of the project. The DMP will be updated periodically, every 6 months, during the lifecycle of the project.

## 1.1 Background of the BADGER DMP

The BADGER DMP is written with reference to Article 29.3 of the Model Grant Agreement, "Open access to research data" (research data management). Project participants must deposit their data in a research data repository and take measures to make the data available to third parties. The third parties should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications. In addition, Article 29.3 suggests that participants will have to provide information, via the repository, about tools and instruments needed for the validation of project outcomes. The DMP will be important for tracking all data produced during the BADGER project. Article 29.3 states that project beneficiaries do not have to ensure access to parts of the research data if such access would put the project's goals at risk. In such cases, the DMP must contain the reasons for not providing access. According to the aforementioned DMP Guidelines, it is planned that research data management projects funded under H2020 will receive support through the Research Infrastructures Work Programme 2014-15 (call 3, e-Infrastructures). Full support services are expected to be available only to research projects funded under H2020, with preference for those participating in the Open Research Data Pilot.

# 2. BADGER Data Management Plan

## 2.1 The BADGER Data Management portal

BADGER will develop a data management portal as part of its website. For each dataset that will become publicly available, this portal will provide to the public a description of the dataset along with a link to a download section. The portal will be updated each time a new dataset has been collected and is ready for public distribution. However, the portal shall not contain any datasets that should not become publicly available.
The initial version of the portal will become available during the second half of the 2nd year of the project, in parallel with the establishment of the first versions of project datasets that can be made publicly available. The BADGER data management portal will enable project partners to manage and distribute their public datasets through a common infrastructure.

## 2.2 Datasets naming conventions

Concerning the convention followed for naming the BADGER datasets, it should be noted that the name of each dataset comprises: (a) a prefix 'DS' indicating a dataset, along with its unique identification number, e.g. 'DS1', (b) the name(s) of the partner(s) responsible for collecting it, e.g. SILO, along with an identifier denoting the internal numbering of the dataset for the specific partner, e.g. -01, and (c) a short title of the dataset summarizing its content and purpose, e.g. Underground Object Recognition Dataset.
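To make the convention concrete, it can be expressed as a simple pattern check. The following minimal Python sketch infers the pattern from the examples above and from Table 1 in the next section; it is an illustration of the convention as described, not a project-agreed validation rule.

```python
# Minimal sketch: checking dataset names against the BADGER naming
# convention described above. The pattern is inferred from the listed
# examples and is an assumption, not an official project rule.
import re

DATASET_NAME = re.compile(
    r"^DS(?P<number>\d+)"           # (a) 'DS' prefix + unique id, e.g. DS1
    r"_(?P<partner>[A-Za-z0-9]+)"   # (b) responsible partner, e.g. CERTH
    r"(?:_(?P<internal>\d{2}))?"    # (b) optional internal numbering, e.g. 01
    r"_(?P<title>[A-Za-z0-9_-]+)$"  # (c) short title summarizing the content
)

for name in ("DS1_CERTH_GPR_Measurements",
             "DS2_TT_01_System_Requirements",
             "DS6_CERTH_Surface-Subsurface_Mapping"):
    print(name, "->", "valid" if DATASET_NAME.match(name) else "invalid")
```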
## 2.3 Summary of foreseen BADGER datasets

In the following, Table 1 provides a list of the expected datasets, whereas the detailed description of each dataset, in accordance with the H2020 DMP template, is provided in the following sections. At this stage (M18) there are six datasets foreseen in the project, covering a series of research dimensions related to the skills the BADGER robot should develop. In the course of the project more datasets will be added to the Data Management Plan.

**Table 1. Summary of foreseen BADGER datasets (as of month 18)**

<table> <tr> <th> **No** </th> <th> **Dataset name** </th> </tr> <tr> <td> 1 </td> <td> DS1_CERTH_GPR_Measurements </td> </tr> <tr> <td> 2 </td> <td> DS2_TT_01_System_Requirements </td> </tr> <tr> <td> 3 </td> <td> DS3_TT_02_Pilot_Experiments </td> </tr> <tr> <td> 4 </td> <td> DS4_UoG_Ultrasonic_System </td> </tr> <tr> <td> 5 </td> <td> DS5_UC3M_Control_system </td> </tr> <tr> <td> 6 </td> <td> DS6_CERTH_Surface-Subsurface_Mapping </td> </tr> </table>

# 3. Datasets Description

In this section, we provide detailed information about the datasets that are planned to be captured by the partners of the BADGER project. These are the foreseen datasets as of month 18 of the project. More datasets will be included in the course of the project, and the DMP will be updated every semester. In order to meet the requirements of the DMP according to the Open Access Pilot of Horizon 2020, each partner provided the description of their datasets using the template given in Annex I, which follows the EC guidelines on the dataset aspects that should be reported in the DMPs of H2020 projects. Based on this information, partner SILO compiled the following tables.

<table> <tr> <th> **DS1_GPR_Measurements** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> The GPR measurements dataset is collected to support the formation of consistent radar images during surface rover navigation. The data to be collected will be used for the detection of buried objects, such as pipes. Also, a synthetic dataset is generated for training the related algorithms. The data will allow the training of the object detection algorithms to be developed. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> IDSGEO and CERTH </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> CERTH </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CERTH </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> The data collected consist of raw GPR measurements in plain text (ASCII) format. Other formats will be available, such as images (.png, .jpeg). Also, data in .hdf5 and .out formats are generated by the gprMax software. </td> </tr> <tr> <td> Estimated data size </td> <td> The size of the files varies from 200 KB to 20 MB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Existing data are utilised for initial calibration of the GPR processing algorithms. These data are provided by IDS Georadar. The data will be useful to geoscience engineers by allowing further research on GPR signal processing. These datasets will enable training of object detection algorithms, as well as the extraction of quantitative results regarding method accuracy and robustness. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> The data produced and/or used in the project are usable by any interested party aiming to use them for research and development, especially by geoscience engineers. Part of the dataset could be made publicly available. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> tbd </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored in, and be accessible through, the project open-data database that will be linked to the project webpage. </td> </tr> </table>
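Because part of this dataset is generated by the gprMax simulator, whose .out files are HDF5 containers, the sketch below shows how such a file could be inspected in Python with h5py. The file name and the attribute/dataset paths follow the commonly documented gprMax output layout, but they are assumptions and should be checked against the gprMax version actually used.

```python
# Minimal sketch: inspecting a gprMax output file (.out), an HDF5
# container. The file name and the 'Iterations'/'dt'/'rxs/rx1/Ez' paths
# are assumptions based on the commonly documented gprMax layout.
import h5py

with h5py.File("gpr_scan.out", "r") as f:
    dt = f.attrs["dt"]                  # simulation time step [s]
    iterations = f.attrs["Iterations"]  # number of time steps per trace
    ez = f["rxs/rx1/Ez"][()]            # E-field trace recorded at receiver 1
    print(f"{iterations} samples at dt = {dt:.3e} s, trace shape {ez.shape}")
```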
<table> <tr> <th> **DS2_System_Requirements** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data is generated and collected in order to identify relevant information to define use cases, user requirements, geological conditions, and system functionalities. Further technical design data are generated in order to be able to produce mechanical components. The generation/collection of these data is necessary to fulfil the tasks of the WPs in which TT collaborates. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> TT, CERTH, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP1, WP3, WP6 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> Text (e.g. docx, pdf), calculation files (e.g. xlsx), presentations (e.g. pptx), photos (e.g. jpg), videos (e.g. mov), CAD files (e.g. dxf) </td> </tr> <tr> <td> Estimated data size </td> <td> tbd </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Data will be useful to consortium members and to dissemination and exploitation partners. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> TT will make openly available use case information, end user requirements, geological information and prototype information, as well as test requirement/condition data. Information about test results and test validation, as well as market and business information, will not be made available, so that competitors cannot make use of it. This information forms the basis of the future commercial exploitation of the BADGER technology. Data will be available on the BADGER website as reports, and during dissemination as presentations or articles in magazines. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. Confidential data will not be made available to the public in a foreseeable period of time. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will also be stored in, and be accessible through, the project open-data database that will be linked to the project webpage. </td> </tr> </table> <table> <tr> <th> **DS3_Pilot_Experiments** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data will be generated during pilot experiments at TT's industrial premises in Lennestadt, Germany. The generation/collection of these data is necessary to fulfil the tasks of the WPs in which TT collaborates. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> TT, CERTH, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP3, WP6 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> Text (e.g. docx, pdf), calculation files (e.g. xlsx), photos (e.g. jpg), videos (e.g. mov), sensor and instrumentation data (blob, .txt, .csv). </td> </tr> <tr> <td> Estimated data size </td> <td> tbd </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Data will be useful to consortium members and to dissemination and exploitation partners. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> TT will make openly available use case information, end user requirements, geological information and prototype information, as well as test requirement/condition data. Information about test results and test validation, as well as market and business information, will not be made available, so that competitors cannot make use of it. This information forms the basis of the future commercial exploitation of the BADGER technology. Data will be available on the BADGER website as reports, and during dissemination as presentations or articles in magazines. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published.
Confidential data will not be made available to the public for the foreseeable future. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored at the project open-data database that will be linked to the project webpage. </td> </tr> </table> <table> <tr> <th> **DS4_Ultrasonic_System** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data will be collected to assess the performance of the ultrasonic systems and actuation devices, as well as the trajectory-following performance. Simulation data will also be collected. These outputs are required to validate the objectives of the project. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> UoG </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> UoG </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> UoG </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> The data will be stored on UoG's own repository using a unique filename and indexing system that will provide all appropriate test metadata. The definition of appropriate metadata will be such that all experimental conditions can be recreated at a later date. As this is a new project there is no standard as yet, but key parameters will include all internal gain settings, the power settings for the ultrasonics, actuator position histories, trajectories, and substrate parameters including sand type, depth, and compaction. </td> </tr> <tr> <td> Standards, format </td> <td> This data will be stored as numerical values in time domain, in a .csv or similar format, and will be required on all appropriate test runs. The data we envisage will be, for the most part, time domain logs and will not result in complex interoperability problems. </td> </tr> <tr> <td> Estimated data size </td> <td> The file size will depend on the length of the experimental runs carried out. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> The data will be useful for UoG as we improve our systems, and to all partners for validation purposes. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> All data will be available upon reasonable request, but only those sections required to disseminate our work will be actively published. This is because not all runs will be successful and, although this will not be hidden, confusion can easily result if external partners focus on off-optimal runs. Publication of these datasets will generally be made through supporting files in association with our publications. Many publishers, and indeed our own library, can support this approach. The files will not require specialist software to open. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. Confidential data will not be made available to the public for the foreseeable future. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long?
</td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. The dataset will also be preserved in UoG infrastructure. The University of Glasgow library can support persistent, accessible, and curated data storage as part of their centrally-funded role within the university. The data is not expected to be sensitive, but will be securely backed up by our library. </td> </tr> </table> <table> <tr> <th> **DS5_Control_System** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Input-output control data will be stored, aiming at benchmarking motion control strategies (inverse simulation or other) for underground robot control. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> UoG, UC3M </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> UoG, UC3M </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> UoG, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> ROSbag added metadata </td> </tr> <tr> <td> Standards, format </td> <td> ROSbag format </td> </tr> <tr> <td> Estimated data size </td> <td> The file size will depend on the length of the experimental runs carried out. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> The data will be useful for UoG and UC3M as we improve our systems, and to all partners for validation purposes. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> All data will be available upon reasonable request, but only those sections required to disseminate our work will be actively published. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. The dataset will be preserved in UoG and UC3M infrastructure. The University of Glasgow library can support persistent, accessible, and curated data storage as part of their centrally-funded role within the university. The data is not expected to be sensitive, but will be securely backed up by our library. </td> </tr> </table> <table> <tr> <th> **DS6_Surface-Subsurface_Mapping** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> The Surface-Subsurface_Mapping measurements will be collected from the integrated surface rover - GPR unit. The data will be utilized for the metric mapping of the surface rover and the subsurface mapping required for the navigation of the underground robot. Moreover, the subsurface data will be used for the utility mapping of the subsurface. Data collection will be performed at the specific test site constructed at CERTH premises, necessary for developing and testing all the mapping and surface rover navigation methods. For this purpose, pipes of different type and material have been placed in the subsurface.
The electromechanically integrated surface rover / GPR unit will be used for the collection of stereo images and B-Scans, along with ground-truth measurements, by performing multiple traverses of the field. The data will contribute to the 3D reconstruction of an unknown environment, which is one of the main objectives of the project. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> CERTH </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> CERTH </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CERTH </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> The data collected consist of raw GPR measurements in plain text (ASCII) format; radargrams in image format (.png, .jpg); stereo images with calibration data (.bag); timestamp synchronization data in plain text (ASCII). </td> </tr> <tr> <td> Estimated data size </td> <td> The size of the files varies between 200 KB and 1 GB. The total size of the dataset is expected to be ~100 GB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> The data to be produced can be utilized for the development of: * Outdoors mapping methods * GPR data processing and A-Assembly methods in B-Scans * Registered surface/subsurface recordings * Coupled surface/subsurface mapping methods * Further processing of subsurface data for utility mapping based on the extraction of semantics. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> The data produced and/or used in the project are useable by any interested party aiming to use them for research and development, especially by robotics and geoscience engineers. Part of the dataset could be made publicly available. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> tbd </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. </td> </tr> </table> # Timeline of release of datasets Most of the datasets will be generated and released by the end of the second year of the project. An indicative timeline for the release of the datasets that are foreseen in the current version of the BADGER DMP is provided in Table 2 below. **Table 2. Timeline of the release of the datasets** This timeplan is a preliminary estimation of dataset release dates. The actual timetable of the datasets expected to be released will be formulated progressively, as the project evolves. # Data set security measures The data set that will be generated during the Badger project does not contain any personal data and therefore does not raise any issues with respect to the GDPR regulation. All data collected are strictly related to the underground environment. Data generated by the BADGER robot during the project will not contain information on any public infrastructure, because all tests will be carried out at the premises of partner TT, as well as in the CERTH test site. Hence, during the 3-year period of the project there are no issues raised with respect to sensitive infrastructure information. However, in the future, the commercial Badger system will operate on construction sites and the collected information might include public infrastructure (utilities etc.), which might be considered sensitive and even raise national security issues. Therefore, appropriate security measures should be put into effect. These measures should handle Internet, Linux-machine and Wi-Fi communication security. They should also address local database storage and data export security issues. The following simplified architecture in **Figure 1** depicts the main communication lines and software modules that need to be secured; all measures explained below refer to this figure. These measures will be implemented and demonstrated on the BADGER software architecture in the context of Task 5.4 in WP5. The measures are grouped and listed below. **Figure 1: Main Communication architecture** **Internet communication security measures include:** * Use a VPN connection with key pairs for authentication instead of passwords. The end points are the construction-site PC and a tablet or a remote PC (at a remote office). * Configure the Linux environment at the construction site so that root access is granted only locally. This way, no remote user can obtain root privileges over the Internet. **Linux machine security measures include:** * Harden the Linux machine at the construction site. This means: ◦ Make sure the web browser uses HTTPS ◦ Follow strict password guidelines (do not allow default passwords, no weak passwords, change passwords on a regular basis, etc.) ◦ Configure Linux for a minimal set of open ports ◦ Use only SSH * Harden the Apache server: ◦ Allow communication only through port 443 (installation of an SSL certificate) ◦ Disable directory listing ◦ Disable unnecessary modules ◦ Restrict access to directories ◦ Use the mod_evasive and mod_security modules ◦ Limit the default size of requests **Wi-Fi security measures include:** * Use of the WPA2 protocol **Database security measures include:** * Encryption of the data which is exported. Encrypted data will be exported using a flash disk (hence data is encrypted in transit). When data is copied to an external database it will be decrypted. * In case data from the local database is transmitted over the Internet, it should be encrypted and the transmission should be done over a secure channel such as a VPN.
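As a concrete illustration of the database export measure above, the sketch below shows one possible way to encrypt an export before it leaves the construction-site PC and to decrypt it at the external database host. This is a minimal sketch, assuming Python and the third-party `cryptography` package; the file names and the choice of the Fernet recipe are illustrative only and are not prescribed by the project.

```python
# pip install cryptography
from cryptography.fernet import Fernet

def encrypt_export(src_path: str, dst_path: str, key: bytes) -> None:
    """Encrypt a local database export before copying it to a flash disk."""
    cipher = Fernet(key)
    with open(src_path, "rb") as src:
        token = cipher.encrypt(src.read())  # data is encrypted in transit
    with open(dst_path, "wb") as dst:
        dst.write(token)

def decrypt_export(src_path: str, dst_path: str, key: bytes) -> None:
    """Decrypt the export once it is copied to the external database host."""
    cipher = Fernet(key)
    with open(src_path, "rb") as src:
        data = cipher.decrypt(src.read())
    with open(dst_path, "wb") as dst:
        dst.write(data)

# The key must be generated once and shared only with the receiving side:
# key = Fernet.generate_key()
```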
# Conclusions The current document provides preliminary, yet detailed information about the datasets that are planned to be captured by the partners of the BADGER project. These are the foreseen datasets as of month 18 of the project. A more complete list of datasets will be included in the future, as the project progresses. The DMP will be updated every semester. The report also presents a set of data security measures that pertain to the BADGER applications and collected data. BADGER will develop a data management portal as part of its website. The initial version of the data management portal will become available during the second half of the 2nd year of the project, in parallel to the establishment of the first versions of project datasets that can be made publicly available.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0261_BADGER_731968.md
# 1.Introduction BADGER is a Horizon 2020 project participating in the Open Research Data Pilot. This pilot is part of the Open Access to Scientific Publication and Research Data programme in H2020. The goal of the programme is to foster access to data generated in H2020 projects. Open Access refers to the practice of giving online access to scholarly information that is free of charge to the end-user. In this way data becomes reusable, and the benefit of public investment in the research is improved. The EC has provided a document with guidelines [1] and a template (see Annex I) for project participants in the pilot. The guidelines address aspects like research data quality, sharing and security. According to the guidelines, projects participating in the pilot need to develop a Data Management Plan (DMP). The DMP describes the types of data that will be generated or gathered during the project, the standards that will be used, the ways the data will be exploited and shared for verification or reuse, and how the data will be preserved. This document has been produced following these guidelines and aims to provide the BADGER partners with a consolidated plan for the data management policy that the project will follow. The document is the first version of the DMP, delivered in M6 (June 2017) of the project. The DMP will be updated periodically, every 6 months, during the lifecycle of the project. ## 1.1 Background of the BADGER DMP The BADGER DMP is written in reference to Article 29.3 of the Model Grant Agreement, "open access to research data" (research data management). Project participants must deposit their data in a research data repository and take measures to make the data available to third parties. The third parties should be able to access, mine, exploit, reproduce and disseminate the data. This should also help to validate the results presented in scientific publications. In addition, Article 29.3 suggests that participants will have to provide information, via the repository, about tools and instruments needed for the validation of project outcomes. The DMP will be important for tracking all data produced during the BADGER project. Article 29.3 states that project beneficiaries do not have to ensure access to parts of research data if such access would lead to a risk for the project's goals. In such cases, the DMP must contain the reasons for not providing access. According to the aforementioned DMP Guidelines it is planned that research data management projects funded under H2020 will receive support through the Research Infrastructures Work Programme 2014-15 (call 3 e-Infrastructures). Full support services are expected to be available only to research projects funded under H2020, with preference to those participating in the Open Research Data Pilot. # 2.BADGER Data Management Plan ## 2.1 The BADGER Data Management portal BADGER will develop a data management portal as part of its website. This portal will provide to the public, for each dataset that will become publicly available, a description of the dataset along with a link to a download section. The portal will be updated each time a new dataset has been collected and is ready for public distribution. The portal will however not contain any datasets that should not become publicly available.
The initial version of the portal will become available during the 2nd year of the project, in parallel to the establishment of the first versions of project datasets that can be made publicly available. The BADGER data management portal will enable project partners to manage and distribute their public datasets through a common infrastructure. ## 2.2 Datasets naming conventions Concerning the convention followed for naming the BADGER datasets, it should be noted that the name of each dataset comprises: (a) a prefix 'DS' indicating a dataset, along with its unique identification number, e.g. 'DS1', (b) the name(s) of the partner(s) responsible for collecting it, e.g. SILO, along with an identifier denoting the internal numbering of the dataset concerning the specific partner, e.g. '-01', and (c) a short title of the dataset summarizing its content and purpose, e.g. Underground Object Recognition Dataset.
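To make the convention concrete, the minimal Python sketch below composes such a name; the helper name and its arguments are illustrative only, and the example output matches the entry DS2_TT_01_System_Requirements in Table 1 below.

```python
def dataset_name(ds_id: int, partner: str, internal_no: int, title: str) -> str:
    """Compose a BADGER dataset name following the convention above."""
    return f"DS{ds_id}_{partner}_{internal_no:02d}_{title.replace(' ', '_')}"

print(dataset_name(2, "TT", 1, "System Requirements"))  # DS2_TT_01_System_Requirements
```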
## 2.3 Summary of foreseen BADGER datasets In the following, Table 1 provides a list of the expected datasets, whereas the detailed description of each dataset, in accordance with the H2020 DMP template, is provided in the following sections. At this stage (M6) there are five datasets foreseen in the project, covering a series of research dimensions on the skills the BADGER robot should develop. In the course of the project more datasets will be added in the Data Management Plan. **Table 1. Summary of foreseen BADGER datasets (as of month 6)** <table> <tr> <th> **No** </th> <th> **Dataset name** </th> </tr> <tr> <td> 1 </td> <td> DS1_CERTH_GPR_Measurements </td> </tr> <tr> <td> 2 </td> <td> DS2_TT_01_System_Requirements </td> </tr> <tr> <td> 3 </td> <td> DS3_TT_02_Pilot_Experiments </td> </tr> <tr> <td> 4 </td> <td> DS4_UoG_Ultrasonic_System </td> </tr> <tr> <td> 5 </td> <td> DS5_UC3M_Control_system </td> </tr> </table> # 3.Datasets Description In this paragraph, we provide detailed information about the datasets that are planned to be captured by the partners of the BADGER project. These are the foreseen datasets as of month 6 of the project. More datasets will be included in the course of the project and the DMP will be updated every semester. In order to meet the requirements of the DMP according to the Pilot Open Access of the Horizon 2020, each partner provided the description of their datasets using the template given in Annex I, which was formed by following the EC guidelines of the dataset aspects that should be reported in DMPs of the H2020 projects. Based on this information, partner SILO compiled the following tables. <table> <tr> <th> **DS1_GPR_Measurements** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> The GPR measurements dataset is collected due to the need for detection and localization of underground targets and 3D mapping of the subsurface. Also, a synthetic dataset is generated for training of the related algorithms. The data will contribute to the 3D reconstruction of an unknown environment, which is one of the main objectives of the project. They will also allow the training of the object detection algorithms. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> IDS and CERTH </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> CERTH </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CERTH </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP4 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> The data collected consist of raw GPR measurements in plain text (ASCII) format. Other formats will be available, such as images (.png, .jpeg). Also, data in .hdf5 and .out formats are generated by gprMax software. </td> </tr> <tr> <td> Estimated data size </td> <td> The size of the files varies between 200 KB and 20 MB. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Existing data are utilised for initial calibration of the GPR processing algorithms. These data are provided by IDS Georadar. The data will be useful to geoscience engineers by allowing further research regarding GPR signal processing. These datasets will enable training of object detection algorithms, as well as the extraction of quantitative results regarding the methods' accuracy and robustness. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> The data produced and/or used in the project are useable by any interested party aiming to use them for research and development, especially by geoscience engineers. Part of the dataset could be made publicly available. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> tbd </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. </td> </tr> </table> <table> <tr> <th> **DS2_System_Requirements** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data is generated and collected in order to identify relevant information to define use cases, user requirements, geological conditions, and system functionalities. Further data in terms of technical design are generated in order to be able to produce mechanical components. The generation / collection of these data is necessary to fulfil the tasks of the WP in which TT is to collaborate. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> TT, CERTH, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP1, WP3, WP6 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> Text (e.g. docx, pdf), calculation files (e.g. xlsx), presentations (e.g. pptx), photos (e.g. jpg), videos (e.g. mov), CAD files (e.g.
dxf) </td> </tr> <tr> <td> Estimated data size </td> <td> tbd </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Data will be useful to consortium members, dissemination and exploitation partners </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> TT will make openly available use case information, end user requirements, geological information, prototype information as well as test requirement / condition data. Information about test results and test validation as well as market and business information will not be made available so competitors cannot make use of it. This information forms the basis of the future commercial exploitation of the BADGER technology. Data will be available on the BADGER website as reports and during dissemination as presentations or articles in magazines. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. Confidential data will not be made available to the public for the foreseeable future. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will also be stored and be accessible at the project open-data database that will be linked to the project webpage. </td> </tr> </table> <table> <tr> <th> **DS3_Pilot_Experiments** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data will be generated during pilot experiments at TT industrial premises in Lennestadt, Germany. The generation / collection of these data is necessary to fulfil the tasks of the WP in which TT is to collaborate. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> TT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> TT, CERTH, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP3, WP6 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> tbd </td> </tr> <tr> <td> Standards, format </td> <td> Text (e.g. docx, pdf), calculation files (e.g. xlsx), photos (e.g. jpg), videos (e.g. mov), sensor and instrumentation data (blob, .txt, .csv). </td> </tr> <tr> <td> Estimated data size </td> <td> tbd </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> Data will be useful to consortium members, dissemination and exploitation partners </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> TT will make openly available use case information, end user requirements, geological information, prototype information as well as test requirement / condition data. Information about test results and test validation as well as market and business information will not be made available so competitors cannot make use of it. This information forms the basis of the future commercial exploitation of the BADGER technology. Data will be available on the BADGER website as reports and during dissemination as presentations or articles in magazines. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published.
Confidential data will not be made available to the public for the foreseeable future. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored at the project open-data database that will be linked to the project webpage. </td> </tr> </table> <table> <tr> <th> **DS4_Ultrasonic_System** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Data will be collected to assess the performance of the ultrasonic systems and actuation devices, as well as the trajectory-following performance. Simulation data will also be collected. These outputs are required to validate the objectives of the project. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> UoG </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> UoG </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> UoG </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> The data will be stored on UoG's own repository using a unique filename and indexing system that will provide all appropriate test metadata. The definition of appropriate metadata will be such that all experimental conditions can be recreated at a later date. As this is a new project there is no standard as yet, but key parameters will include all internal gain settings, the power settings for the ultrasonics, actuator position histories, trajectories, and substrate parameters including sand type, depth, and compaction. </td> </tr> <tr> <td> Standards, format </td> <td> This data will be stored as numerical values in time domain, in a .csv or similar format, and will be required on all appropriate test runs. The data we envisage will be, for the most part, time domain logs and will not result in complex interoperability problems. </td> </tr> <tr> <td> Estimated data size </td> <td> The file size will depend on the length of the experimental runs carried out. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> The data will be useful for UoG as we improve our systems, and to all partners for validation purposes. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> All data will be available upon reasonable request, but only those sections required to disseminate our work will be actively published. This is because not all runs will be successful and, although this will not be hidden, confusion can easily result if external partners focus on off-optimal runs. Publication of these datasets will generally be made through supporting files in association with our publications. Many publishers, and indeed our own library, can support this approach. The files will not require specialist software to open. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. Confidential data will not be made available to the public for the foreseeable future. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long?
</td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. The dataset will also be preserved in UoG infrastructure. The University of Glasgow library can support persistent, accessible, and curated data storage as part of their centrally-funded role within the university. The data is not expected to be sensitive, but will be securely backed up by our library. </td> </tr> </table> <table> <tr> <th> **DS5_Control_System** </th> </tr> <tr> <td> **Data description** </td> </tr> <tr> <td> Input-output control data will be stored, aiming at benchmarking motion control strategies (inverse simulation or other) for underground robot control. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data; copyright holder (if applicable) </td> <td> UoG, UC3M </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> UoG, UC3M </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> UoG, UC3M </td> </tr> <tr> <td> Related WP(s) and task(s) </td> <td> WP2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Info about metadata (production and storage dates, places) and documentation. </td> <td> ROSbag added metadata </td> </tr> <tr> <td> Standards, format </td> <td> ROSbag format </td> </tr> <tr> <td> Estimated data size </td> <td> The file size will depend on the length of the experimental runs carried out. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Purpose use of the data analysis </td> <td> The data will be useful for UoG and UC3M as we improve our systems, and to all partners for validation purposes. </td> </tr> <tr> <td> Data access policy/ dissemination level </td> <td> All data will be available upon reasonable request, but only those sections required to disseminate our work will be actively published. </td> </tr> <tr> <td> Embargo periods (if any) </td> <td> There are no restrictions for data meant to be published. </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage. Where? For how long? </td> <td> The data will be stored and be accessible at the project open-data database that will be linked to the project webpage. The dataset will be preserved in UoG and UC3M infrastructure. The University of Glasgow library can support persistent, accessible, and curated data storage as part of their centrally-funded role within the university. The data is not expected to be sensitive, but will be securely backed up by our library. </td> </tr> </table>
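Since DS5 is distributed in ROSbag format, reuse typically starts by inspecting a bag's metadata and replaying its logged input-output pairs. The sketch below assumes Python and the standard ROS1 `rosbag` API; the bag file name and topic names are hypothetical, not the project's actual ones.

```python
import rosbag  # ROS1 Python API, available in a ROS installation

with rosbag.Bag("badger_control_run.bag") as bag:
    # Bag-level metadata: message types and per-topic counts.
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print(topic, topic_info.msg_type, topic_info.message_count)

    # Replay the logged input-output pairs of a motion-control run.
    for topic, msg, stamp in bag.read_messages(topics=["/cmd_vel", "/odom"]):
        print(stamp.to_sec(), topic)
```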
This portal will provide to the public, for each dataset that will become publicly available, a description of the dataset along with a link to a download section. The portal will be updated each time a new dataset has been collected and is ready for public distribution. The portal will however not contain any datasets that should not become publicly available. The initial version of the data management portal will become available during the 2 nd year of the project, in parallel to the establishment of the first versions of project datasets that can be made publicly available. The current document provides preliminary, yet detailed information about the datasets that are planned to be captured by the partners of the BADGER project. These are the foreseen datasets as of month 6 of the project. A more complete list of datasets will be included in the future, as the project progresses. The DMP will be updated every semester.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0262_FReSMe_727504.md
Although the FReSMe project's consortium strives to make all the reasonable datasets available, it is explicitly not the intention to publish any raw datasets or any datasets that can be considered as confidential or as a trade secret, in agreement with the H2020 rules [3]. [3] "Guidelines to the Rules on Open Access to Scientific Publications and Open Access to Research Data in Horizon 2020". Version 3.2. 21 March 2017 (p.4): "In the context of research funding, open access requirements do not imply an obligation to publish results. The decision to publish is entirely up to the grant beneficiaries. Open access becomes an issue _only if_ publication is chosen as a means of dissemination. Moreover, open access does not affect the decision to exploit research results commercially, e.g. through patenting. The decision on whether to publish through open access must come after the more general decision on whether to publish directly or to first seek protection." # 2 FReSMe project and data management The aim of FReSMe is to demonstrate the process to generate Methanol fuel from blast furnace gases (BFG) supplemented with renewable hydrogen produced from water electrolysis. The methanol fuel will be demonstrated in a ship. It is expected that the technology developed will generate several commercial exploitation lines. On one hand, it is applicable for the reduction of GHG emissions of the industry. On the other hand, it will become an alternative source of fuel in different sectors, from transport to power generation. In addition, it could also contribute to the storage of energy surplus coming from renewable sources. ## 2.1 Decision tree for dissemination or preservation of the data Since it is expected that the results of the project could be industrially exploited and commercialised, most of the data generated is considered confidential, so the main part cannot be made publicly available. Nevertheless, it is expected that scientific publications will be generated (at least 2 in relevant journals are planned in the DoA). These papers will be open to society. These scientific publications will be curated, and the associated data will be accessible for verification purposes. The decision to disseminate or preserve the data is taken following the decision tree established in figure 1: **Figure 1.** Decision tree for data generated in FReSMe project. During the first step of the decision making procedure, the data is identified as confidential or public, following the criteria stated in the DoA of the FReSMe GA (see section 2.2). The confidential data generated will be preserved by the partner that produced them. Meanwhile, the public data and those associated with scientific publications of the partners related to the project results will be saved, where appropriate, in an open access repository.
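A minimal sketch of this routing logic is given below, written in Python purely for illustration; the function name and return strings are not part of the project's tooling, and the PU/CO labels follow the dissemination levels defined in the DoA.

```python
def route_dataset(dissemination_level: str, linked_to_publication: bool) -> str:
    """Route a FReSMe dataset following the decision tree of figure 1.

    dissemination_level: "PU" (public) or "CO" (confidential), as set
    in the DoA of the FReSMe GA (see section 2.2).
    """
    if dissemination_level == "PU" or linked_to_publication:
        return "deposit in an open access repository"  # e.g. ZENODO, section 4
    return "preserve internally at the partner that produced the data"
```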
## 2.2 Dataset generated by the project As it was stated above, most of the datasets of the project will be linked to confidential information. In this DMP, a summary of the sources from which the data will be generated and the types of numerical dataset expected is provided. Below, a list is provided with 5 sources of data and 5 types of datasets expected from the implementation of the FReSMe project. This takes into account all the data generated within the project, not only the public data. The datasets will be generated from the following sources ("source of data"):
1. Lab-scale tests of the technologies and materials involved
2. Simulation of the processes and of the performance of the materials
3. Test campaigns of the pilot plant unit
4. Techno-economic analysis of the production process
5. Life cycle analysis of the process
From these sources, 5 types of dataset have been identified ("Type of dataset"):
a. Process parameters dataset
b. Materials parameters dataset
c. Modelling parameters dataset
d. Environmental impact dataset
e. Techno-economic dataset
As a way to establish a first approach to the data generated, the list of deliverables identified in the DoA of the FReSMe GA is used. Table 1 shows the deliverables of each work package and additional information indicated for them in the GA. The last columns provide the characteristics of each deliverable regarding the DMP. The most relevant information included in Table 1 regarding the DMP is the following:
* Column "Dissemination level": reflects the dissemination level, public (PU) or confidential (CO), that was established in the DoA of the FReSMe GA. It will be the first criterion applied in the procedure established to identify which data is considered public and which confidential (first step in the decision tree of figure 1, above).
* Column "Numerical dataset linked" (yes/no): points out whether the results that the deliverables encompass are based on numerical datasets.
* Column "Source of data": in case the previous column was "yes", this column states the source of the data (in some cases, the data can come from different sources), in accordance with the list above: 1, 2, 3, 4, 5. Additional details are provided in section 3.
* Column "Type of dataset": in case there are numerical datasets, this column states the type of dataset linked, in accordance with the list above: a, b, c, d, e (in some cases, there can be different types of data associated with one deliverable). Additional details are provided in section 3.
<table>
<tr> <th> </th> <th> **Description** </th> <th> **Lead Partner** </th> <th> **Type of deliverable** </th> <th> **Dissemination level** </th> <th> **Due month** </th> <th> **Numerical dataset linked** </th> <th> **Source of data** </th> <th> **Type of dataset** </th> </tr>
<tr> <td> D1.1 </td> <td> Potential harm to the environment risk analysis </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M3 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D1.2 </td> <td> Health and safety procedures </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M2 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D1.3 </td> <td> Ethical standards H2020 </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M2 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.1 </td> <td> Common Design Practice </td> <td> TNO </td> <td> R </td> <td> CO </td> <td> M2 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.2 </td> <td> Basis of Design </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M3 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.3 </td> <td> Basic Engineering Package </td> <td> TNO </td> <td> R </td> <td> CO </td> <td> M15 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.4 </td> <td> Detailed Engineering Package </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M22 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.5 </td> <td> Pilot Plant Construction and Connection to battery limits </td> <td> MEFOS </td> <td> DEM </td> <td> PU </td> <td> M33 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.6 </td> <td> Site-Acceptance Test </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.7 </td> <td> Lessons Learned Maintenance Log </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M48 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D2.8 </td> <td> Decommissioning </td> <td> MEFOS </td> <td> DEM </td> <td> CO </td> <td> M48 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D3.1 </td> <td> 5 appropriate and scalable catalyst formulations </td> <td> NIC </td> <td> DEM </td> <td> CO </td> <td> M24 </td> <td> yes </td> <td> 1 </td> <td> b </td> </tr>
<tr> <td> D3.2 </td> <td> SEWGS system report to support LCA </td> <td> TNO </td> <td> R </td> <td> CO </td> <td> M12 </td> <td> yes </td> <td> 1 </td> <td> a, b </td> </tr>
<tr> <td> D3.3 </td> <td> N2 dilution experimentally assessed and modelled </td> <td> NIC </td> <td> R </td> <td> PU </td> <td> M36 </td> <td> yes </td> <td> 1 </td> <td> a </td> </tr>
<tr> <td> D3.4 </td> <td> Process conditions modelled and optimized </td> <td> NIC </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> yes </td> <td> 1 </td> <td> c </td> </tr>
<tr> <td> D3.5 </td> <td> Sorbent and cycle development and testing for methanol </td> <td> TNO </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> yes </td> <td> 1, 2 </td> <td> a, c </td> </tr>
<tr> <td> D3.6 </td> <td> Long term behaviour assessments </td> <td> NIC </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> yes </td> <td> 1 </td> <td> a </td> </tr>
<tr> <td> D3.7 </td> <td> Long term process operation optimized </td> <td> NIC </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> yes </td> <td> 1 </td> <td> a </td> </tr>
<tr> <td> D4.1 </td> <td> Delivery of scale-up methanol catalyst and sorbent </td> <td> CRI </td> <td> DEM </td> <td> PU </td> <td> M36 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D4.2 </td> <td> Report on the flexible Test Campaign Results </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M46 </td> <td> yes </td> <td> 3 </td> <td> a </td> </tr>
<tr> <td> D4.3 </td> <td> Combined Report on Process and Data analysis </td> <td> NIC </td> <td> R </td> <td> CO </td> <td> M46 </td> <td> yes </td> <td> 3 </td> <td> a </td> </tr>
<tr> <td> D4.4 </td> <td> Synthesized Report on Evaluation of the test runs </td> <td> CRI </td> <td> R </td> <td> CO </td> <td> M48 </td> <td> yes </td> <td> 3 </td> <td> a </td> </tr>
<tr> <td> D5.1 </td> <td> Definition of base and reference cases </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M14 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D5.2 </td> <td> Electrolysis in the Iron & Steel Industry </td> <td> MEFOS </td> <td> R </td> <td> PU </td> <td> M22 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D5.3 </td> <td> Enriched air for optimisation of Blast Furnace to methanol concept </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M24 </td> <td> yes </td> <td> 2 </td> <td> c </td> </tr>
<tr> <td> D5.4 </td> <td> Alternative sources of CO2, H2 and Heat Sources </td> <td> Tata Steel </td> <td> R </td> <td> CO </td> <td> M34 </td> <td> yes </td> <td> 2 </td> <td> c </td> </tr>
<tr> <td> D5.5 </td> <td> Full scale design and preliminary costing </td> <td> CRI </td> <td> R </td> <td> CO </td> <td> M48 </td> <td> yes </td> <td> 4 </td> <td> e </td> </tr>
<tr> <td> D5.6 </td> <td> Overall process optimisation for flexibility and future developments </td> <td> MEFOS </td> <td> R </td> <td> CO </td> <td> M48 </td> <td> yes </td> <td> 4 </td> <td> e </td> </tr>
<tr> <td> D6.1 </td> <td> Report on techno-economic assessment of methanol synthesis based on residual steel gasses </td> <td> POLIMI </td> <td> R </td> <td> CO </td> <td> M46 </td> <td> yes </td> <td> 4 </td> <td> a, c, e </td> </tr>
<tr> <td> D6.2 </td> <td> Preliminary Life Cycle Inventory (LCI) report on proposed methanol synthesis </td> <td> POLIMI </td> <td> R </td> <td> PU </td> <td> M28 </td> <td> yes </td> <td> 1, 2, 3 </td> <td> a, b, c </td> </tr>
<tr> <td> D6.3 </td> <td> Report on Life Cycle Analysis (LCA) </td> <td> POLIMI </td> <td> R </td> <td> CO </td> <td> M40 </td> <td> yes </td> <td> 5 </td> <td> d </td> </tr>
<tr> <td> D6.4 </td> <td> Report on market analysis </td> <td> CRI </td> <td> R </td> <td> CO </td> <td> M40 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.1 </td> <td> Project management plan </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M3 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.2 </td> <td> Annual coordination report 1 </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M12 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.3 </td> <td> 1st Project management plan revision </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M14 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.4 </td> <td> Annual coordination report 2 </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M24 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.5 </td> <td> 2nd Project management plan revision </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M30 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.6 </td> <td> Annual coordination report 3 </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M36 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.7 </td> <td> Annual coordination report 4 </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M48 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.8 </td> <td> Business plan </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M12 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.9 </td> <td> Dissemination and Exploitation Plan </td> <td> i-deals </td> <td> R </td> <td> CO </td> <td> M12 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
<tr> <td> D7.10 </td> <td> ORDP - Data Management Plan </td> <td> i-deals </td> <td> ORDP </td> <td> PU </td> <td> M6 </td> <td> no </td> <td> N/A </td> <td> N/A </td> </tr>
</table> **Table 1.** List of the
deliverables and the datasets linked to each. ## 2.3 Responsibilities The procedures to curate and preserve the datasets established in the DMP are mandatory for the data associated with an open access result, either a public result or a scientific publication. However, the partners are encouraged to follow them for all the results generated. Open access datasets have to be created, managed, and stored in accordance with this DMP. The project coordinator will make the procedures available to deposit the open access data in an open access repository (please refer to section 4). The validation and registration of datasets, with the associated metadata, is the responsibility of the partner that generates the data, following the procedures provided by the coordinator. Backing up data to be shared through open access repositories is the responsibility of the partner generating the data. Confidential datasets must be curated and preserved by the partner that generates them. All partners must consult the concerned partners before publishing, in an open domain, data associated with a result that could be commercially exploited. ## 2.4 Audience interested in the results of the FReSMe project The particular objective of the project is to demonstrate the process to produce Methanol fuel from the BFG in the steel industry in a relevant environment. This objective has two main implicit benefits: reducing the GHG emissions of the steel industry, and in this way its environmental impact, and producing an alternative fuel, with lower emissions, that reduces the European dependence on foreign fossil fuels. In agreement with this main objective, the following groups can be identified as interested audiences in the FReSMe project's results:
1. Scientific Community. More specifically, those in the area of chemistry, new fuels and GHG emission reduction technologies or industrial processes.
2. Related industries and SMEs. On one hand, those that could apply the technology to reduce their GHG emissions; on the other hand, those that could use the methanol as a fuel or as a chemical platform. Therefore, the enterprises that would be interested in the results of the project could come from economic sectors such as the following:
* Steel and Iron producers
* Fuel producers and distributors
* Transport sector: maritime and terrestrial
* Engineering
3. Venture Capitalists that can contribute to a further development of the technology by funding commercial-scale projects.
4. Public entities. The results of the project can be used to define policies to reduce the carbon footprint of the industry and of the transport sector, as well as to define a legal framework to increase the competitiveness of European industry. The results will also be of high value for defining public support for further improvement of the technology in order to reach a commercial-stage technology.
5. The general public, interested in new ways to reduce GHG emissions and in the technologies related to Carbon Capture Sequestration and Utilization (CCU/CCS).
# 3 Datasets description In this section, the metadata that shall be included in the dataset files and the description of the 5 different numerical dataset types identified in the FReSMe project are stated. Metadata will be included as a header of a numerical data file. ## 3.1 Metadata The data encompassed in the project will cover different areas of knowledge.
The metadata associated with the dataset will specify, on one hand, the information about the generation of the dataset (project, organization, date, dataset type, scientific keywords) and, on the other hand, the list of parameters (and their units) included in the file. Metadata will help make the data accessible and reusable, in accordance with the open access principles of the H2020 programme. According to this, the following information will be stated as a header of a data file:
* Project Acronym: FReSMe
* Author Institution: Acronym of the partner that generates the data
* Date
* Dataset type: a / b / c / d / e.
* FReSMe Keywords (at least 1 keyword with a maximum of 3). Suggested keywords: Sorbent, Catalyst, Process, Test campaigns, Material, etc. This is an open field and will be determined by the partners during the project implementation.
* In a report, Scientific community Keywords: _Free_ (those that the partners usually use in their area of expertise and that are significant for referring to the dataset)
* Parameter_A (unit) - Parameter_B (unit) - Parameter_C (unit) - …..
## 3.2 Description of dataset types Below, the different numeric dataset types generated in the project are described. This description will be updated along the project implementation according to the data generated. The format described shall be applied to the numerical dataset files related to open access results of the project. It is recommended that the same rules and formats are applied for the numerical datasets associated with confidential data generated within the project. ### a. Process parameters dataset Development of the FReSMe technology encompasses the determination of the parameters and boundary conditions of the processes involved, like materials production, CO2 capture, H2 recovery, H2 production and methanol synthesis. Process parameter datasets could include the following parameters: temperature, pressure, volumetric flow rate, yield, efficiency, and tag name. File format:
* Text file – CSV.
* Header (metadata): FReSMe, Institution Acronym, date… parameter_A (unit), parameter_B (unit), parameter_C (unit)…
* Numerical data
Applicable Standards: N/A ### b. Materials parameters dataset FReSMe project results include the formulation and characterisation of the materials involved in the process, like the sorbent, catalyst and products. The parameters that determine the characteristics of these materials will be included in the Material parameter datasets. Data included in this dataset comprise parameters such as activity, selectivity and stability. File format:
* Text file – CSV.
* Header (metadata): FReSMe, Institution Acronym, date… parameter_A (unit), Parameter_B (unit), Parameter_C (unit)…
* Numerical data
Applicable Standards: N/A ### c. Modelling parameters dataset The processes involved in the technology of the project will be modelled in order to understand and foresee the influence of different parameters, conditions, materials, and compositions of blast furnace gases on the performance of the process. In addition to the data of the materials involved in the modelled process, the data included in this dataset cover aspects like process conditions, materials conditions, and parameters of adjustments to the model. File format:
* Text file – CSV.
* Header (metadata): FReSMe, Institution Acronym, date… parameter_A (unit), Parameter_B (unit), Parameter_C (unit)…
* Numerical data
Applicable Standards: N/A ### d.
Environmental impact dataset As an important aspect to be addressed by the project, the environmental impact of the process to produce Methanol fuel will be analysed and quantified. For this analysis, data available from the related sectors as well as those coming from the activities developed within the project will be used: process implementation, modelling and simulation. Data included in this dataset cover parameters like process efficiency, environmental footprint of the materials, and percentage of emissions reduced. File format:
* Text file – CSV.
* Header (metadata): FReSMe, Institution Acronym, date… parameter_A (unit), Parameter_B (unit), Parameter_C (unit)…
* Numerical data
Applicable Standards: N/A ### e. Techno-economic dataset In accordance with the H2020 final objectives, the aim of the project is to go further in the development of a commercially viable technology to produce Methanol fuel from the residual gases of the steel industry. With this final objective, a techno-economic analysis will be addressed within the project implementation. For this analysis, data such as process efficiency, cost of materials, and market price of the products will be used. These parameters will be saved in the Techno-economic parameters dataset. File format:
* Text file – CSV.
* Header (metadata): FReSMe, Institution Acronym, date… parameter_A (unit), Parameter_B (unit), Parameter_C (unit)….
* Numerical data
Applicable Standards: N/A
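As an illustration of the file layout shared by all five dataset types, the short Python sketch below writes a CSV file with the metadata header of section 3.1 followed by the numerical data. The file name, institution acronym, date, keywords, parameter names and values are hypothetical placeholders, not actual project data.

```python
import csv

# Metadata header rows, following the layout prescribed in section 3.1.
header = [
    ["Project Acronym", "FReSMe"],
    ["Author Institution", "TNO"],          # placeholder partner acronym
    ["Date", "2017-06-30"],                  # placeholder date
    ["Dataset type", "a"],                   # a = Process parameters dataset
    ["FReSMe Keywords", "Process; Test campaigns"],
]

with open("fresme_process_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(header)
    # Parameter names (with units) followed by the numerical data.
    writer.writerow(["Temperature (K)", "Pressure (bar)", "Yield (%)"])
    writer.writerow([673.15, 25.0, 42.1])    # illustrative values only
```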
**Tag name:** in the datasets generated by the processes, the tag name identifies the device that generates the signal/data.
**BFG:** Blast Furnace Gases
**GHG:** Greenhouse Gases
**GA:** Grant Agreement
**DoA:** Description of Action
**N/A:** Not Applicable
**R:** Report, document
**DEM:** DEMonstrator, pilot, prototype, plan designs
**PU:** PUblic report or data
**CO:** COnfidential report or data

“This project has received funding from the _European Union’s Horizon 2020 research and innovation programme_ under grant agreement No 727504”.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0264_ADVANCE_820647.md
# Executive Summary

This document (D1.3 Data Management Plan) is a deliverable of the ADVANCE project, which is funded by the European Union's H2020 through the Clean Sky 2 Programme under Grant Agreement Number 820647. The objective of this project is to experimentally determine critical ternary phase data for several key systems and to incorporate this information into an existing TiAl thermodynamic database. At the completion of this project, the next generation of advanced CALPHAD databases will be available for TiAl alloys, which are promising for the manufacture of lightweight turbine engine components. The project is divided into six work packages (WPs), which can be categorized generally as being: managerial (WP1), alloy production (WP2), experimental analysis (WP3, WP4, WP5), and assessment and implementation (WP6). The WPs jointly address the overarching goals of the work plan, which are to produce high-purity alloy samples (WP2) that are used to generate reliable experimental data for key TiAl systems (WP3 to WP5), and then to assess the data and implement it into the existing CALPHAD database (WP6). WPs 2, 3, 4 and 5 will generate experimental data such as compositions of coexisting phases, phase fractions, transformation temperatures, lattice constants and so forth, but also the associated metadata. Further information on experimental data formats will be provided in the future updated data management plan. Various experimental data/information from WP3, WP4 and WP5 can be directly or indirectly used in thermodynamic assessment and database development (WP6). Data will also be generated by CALPHAD optimization within WP6. These data constitute optimized parameters which enter model expressions describing the Gibbs free energy of individual phases. The data (i.e. the optimized parameters) will be stored in the so-called TDB file, which is an established standard used by the CALPHAD community.
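To give a flavour of the TDB format mentioned above, the sketch below embeds a tiny, hypothetical database fragment (the elements, phase and parameter values are invented for illustration, not ADVANCE results) and writes it to disk; files of this kind can be read by Thermo-Calc or by open-source tools such as pycalphad.

```python
# Minimal sketch of the TDB (thermodynamic database) text format.
# The element data and the interaction parameter below are invented placeholders.
tdb_fragment = """ELEMENT AL   FCC_A1   2.6982E+01  4.5773E+03  2.8322E+01 !
ELEMENT TI   HCP_A3   4.7880E+01  4.8100E+03  3.0648E+01 !
PHASE LIQUID % 1 1.0 !
CONSTITUENT LIQUID : AL,TI : !
PARAMETER G(LIQUID,AL,TI;0) 298.15 -100000+20*T; 6000 N !
"""

with open("example.tdb", "w") as f:
    f.write(tdb_fragment)

# Count the optimized parameter entries stored in the fragment.
n_params = sum(line.startswith("PARAMETER")
               for line in tdb_fragment.splitlines())
print(f"{n_params} parameter entry found")
```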
# Data Management and Responsibility

## 1.1 DMP Internal Consortium Policy

The data management plan (DMP) contains information concerning how data are organized, saved, made accessible and preserved during the entire project and following the completion of the project. The DMP is a living document and will be updated as the implementation of the project progresses or when significant changes occur. Among the consortium partners, it has been decided to store and share the necessary experimental data in the repository provided either by HZG or by MPG through GWDG up to five years after the end of the project. One important factor in assessing which option to use is that the Topic Manager (MTU) must be able to access the repository. Data integrity, including the backup and long-term archiving infrastructure (data recovery, data security), is managed by the chosen platform. Each partner has established regulations and procedures for curation and preservation of the data, and will take full responsibility for the preservation of the individual datasets of its respective work package. The consortium partners intend to make experimental data publicly available as (peer-reviewed) publications. All partners are committed to open access policies. Archiving of the arising peer-reviewed papers and of the data underlying the corresponding publications will be undertaken in the open access repository ZENODO. Experimental data will be published and put into an open access repository, as long as this does not interfere with any partner's need to keep the data confidential for the purpose of exploitation.

Approval from each partner and the Topic Manager (MTU) is compulsory before any action on publishing or external sharing of data. TCSAB will be the sole owner of all data generated within WP6 by CALPHAD optimization and of the resulting TDB files, i.e. CALPHAD databases. This means all data, including metadata, generated within WP6 are confidential. Within the consortium, the terms and conditions on internal access rights for use of the CALPHAD databases are stipulated in the Grant Agreement, the Consortium Agreement and the Implementation Agreement.

## 1.2 Data Management Responsible

The overall data management within the project is coordinated by TCSAB. Each ADVANCE partner has to respect the policies set out in the DMP. The Topic Manager (MTU) has the responsibility to supervise the activities on drawing up and implementing the DMP. All consortium partners and the Topic Manager will take part in the management of the project results, which requires decisions on the preservation, maintenance, and sharing of the data, etc. Datasets are going to be created, managed and stored appropriately as the implementation of the project progresses. Dr. Anders Engström is nominated as the Project Data Contact (PDC), responsible for coordination of the overall data management. Each work package leader will act as Dataset Responsible (DR) and ensure the respective dataset's integrity and compatibility for its internal and external use. Uploading of datasets and metadata, updates and management of the different versions are the responsibility of the respective DR. Table 1 presents the datasets and the corresponding responsible WP leaders.

Table 1. List of work package leaders

<table> <tr> <th> Name of the WP Leader </th> <th> Affiliation </th> <th> Email </th> <th> Dataset 1) </th> </tr> <tr> <td> WP1: Dr. Anders Engström </td> <td> TCSAB </td> <td> [email protected] </td> <td> </td> </tr> <tr> <td> WP2: Dr. Martin Palm, Dr. Frank Stein </td> <td> MPIE </td> <td> [email protected] [email protected] </td> <td> EPMA, ICP </td> </tr> <tr> <td> WP3: Dr. Martin Palm, Dr. Frank Stein </td> <td> MPIE </td> <td> [email protected] [email protected] </td> <td> SEM, XRD, DSC </td> </tr> <tr> <td> WP4: Prof. Dr. Florian Pyczak </td> <td> HZG </td> <td> [email protected] </td> <td> HEXRD </td> </tr> <tr> <td> WP5: Ass. Prof. Dr. Svea Mayer </td> <td> MUL </td> <td> [email protected] </td> <td> TEM, 3D-APT </td> </tr> <tr> <td> WP6: Dr. Yang Yang </td> <td> TCSAB </td> <td> [email protected] </td> <td> TDB 2) </td> </tr> </table>

1) Not all data in each dataset are to be open to third parties or the general public. This is elaborated in section 1.1 of this DMP.
2) This dataset is confidential, as described in section 1.1 of this DMP.

## 1.3 Data nature, link with previous data and potential users

The detailed information on data nature will be elaborated in a future version of the DMP. The potential users of the generated data are believed to be universities, institutions, or research centres investigating the thermodynamics of alloys, as well as industries interested in the development of new TiAl-based alloys.

## 1.4 Data Summary

Table 2 shows a summary of the different datasets. Full details can be expected in future versions of the DMP as the project progresses.
Table 2. Summary of the datasets

<table> <tr> <th> WP number </th> <th> Dataset </th> <th> Purpose of the data generation </th> <th> Format and expected size </th> </tr> <tr> <td> 2 </td> <td> EPMA, ICP </td> <td> Full chemical analysis of the alloys for homogeneity and impurity contents </td> <td> TBE </td> </tr> <tr> <td> 3 </td> <td> SEM, XRD, DSC </td> <td> Determination of phases and phase fractions, calorimetric data, compositions of coexisting phases, transition temperatures </td> <td> TBE </td> </tr> <tr> <td> 4 </td> <td> HEXRD </td> <td> Determination of phase constitution and phase transitions of the alloys as well as crystallographic phase information </td> <td> TBE </td> </tr> <tr> <td> 5 </td> <td> TEM, 3D-APT </td> <td> Determination of the phases and phase fractions in small amounts using TEM; chemical analysis of the phases present at the atomic level; measurement of alloying element distributions and analysis of fine precipitates via 3D-APT </td> <td> TBE </td> </tr> <tr> <td> 6 </td> <td> TDB 1) </td> <td> Used in combination with the appropriate simulation software, the databases will contribute to the ability to decrease costs and development time for TiAl alloy design. </td> <td> TDB format in MB range </td> </tr> </table>

1) This dataset is confidential, as described in section 1.1 of this DMP. TCSAB is going to commercially exploit and license the generated CALPHAD databases to customers.

# FAIR Data

Raw experimental data usually have quite varying formats depending on the equipment used, i.e. vendor-specific formats. Evaluated experimental data will usually have standard formats such as txt, doc, pdf, xlsx, tif and jpeg, which will be readable and editable using e.g. standard Microsoft products (Word, Excel...). No standard naming conventions exist for the obtained experimental data. Samples will be assigned distinctive codes according to their compositions and heat treatments. These codes are logically used as names for the data files stored in the repository, and will coincide with the corresponding sample labels in publications. Data of sufficiently high quality will be made publicly available as publications, to be of value to other researchers. The consortium ensures open access to all peer-reviewed scientific publications together with the underlying data. The published data will be available in perpetuity for use by researchers to confirm their quality. The arising publications and the data underlying the corresponding papers will be uploaded to the open access repository ZENODO within six months of publication. Upload to Zenodo automatically assigns a DOI persistent identifier to the dataset/item, and handles version control in case datasets are updated. The published experimental data can be re-used by everyone without any kind of licensing. The generated experimental data can be made available on request to third parties with qualified interest, or to the general public, only after publication. TCSAB intends to make the developed CALPHAD databases commercially available, i.e. to offer existing as well as new customers the possibility to acquire a license for use. The generated CALPHAD databases are readable in Thermo-Calc and DICTRA software.
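As a concrete illustration of the Zenodo deposit workflow described above, the following sketch uses Zenodo's public REST API to create a deposition, attach a data file and publish the record so that a DOI is minted. The access token, file name and metadata are placeholders; this is a minimal sketch, not project tooling.

```python
import requests

# Minimal Zenodo deposit sketch; the token, file and metadata below are
# invented placeholders, not ADVANCE credentials or data.
ZENODO_TOKEN = "REPLACE_WITH_TOKEN"  # personal access token with deposit scope
API = "https://zenodo.org/api"
params = {"access_token": ZENODO_TOKEN}

# 1. Create an empty deposition.
dep = requests.post(f"{API}/deposit/depositions", params=params, json={}).json()

# 2. Upload the data file to the deposition's file bucket.
with open("sample_A1_annealed.csv", "rb") as fp:
    requests.put(f"{dep['links']['bucket']}/sample_A1_annealed.csv",
                 data=fp, params=params)

# 3. Attach descriptive metadata.
metadata = {"metadata": {
    "title": "Example TiAl phase-equilibria dataset",
    "upload_type": "dataset",
    "description": "Illustrative placeholder record.",
    "creators": [{"name": "Doe, Jane", "affiliation": "Example Institute"}],
}}
requests.put(f"{API}/deposit/depositions/{dep['id']}", params=params, json=metadata)

# 4. Publish: Zenodo assigns a DOI to the published record.
rec = requests.post(f"{API}/deposit/depositions/{dep['id']}/actions/publish",
                    params=params).json()
print(rec.get("doi"))
```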
# Allocation of resources

The corresponding costs for open access publications will be covered by the project's overhead. The published experimental data will be available in perpetuity for the general public. It is possible to share the experimental data within the consortium up to five years after the end of the project.

# Data security

Each partner has policies in place for data security and procedures for backup and recovery. Furthermore, the platform (either provided by HZG or by MPG through GWDG) that will be selected for internal data sharing within the consortium will provide further backup and long-term archiving of the generated data. For the data to be shared with the general public, information on data integrity, including the backup and long-term archiving infrastructure (data recovery, data security, etc.), is available at https://help.zenodo.org/.

# Ethical aspects

# Other issues
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0266_PROTEUS_785349.md
## Executive Summary

The data management plan (DMP) outlines how data is to be generated and handled, both during and after the project is completed. The DMP considers the many aspects of data management, generation, preservation, and analysis. This ensures that data are well-managed in the present, and prepared for preservation in the future. The objective of the DMP is to establish a detailed structure and framework on how the research data used during the project timeframe is originated, handled, maintained and preserved, to guarantee findable, accessible, interoperable and re-usable (FAIR) data principles. The nature of the datasets generated, handled and preserved as part of the PROTEUS project is numerical, experimental and a set of software tool codes.

## 1. DATA MANAGEMENT AND RESPONSIBILITY

### 1.1. DMP Internal Consortium Policy

The data management activities are going to be managed by a specific person per consortium partner, while the overall plan will be managed by Cranfield University. The PROTEUS project coordinator organization, Cranfield University, is responsible for adopting and procuring the strategy for data maintenance, selecting the data repository, uploading the data and keeping the information up to date. Given these responsibilities, Cranfield University is the responsible entity for the relation and communication with the topic manager. The list of the organizations and individuals responsible is presented in Table 1.

Table 1 – Responsible Organizations and Individuals

<table> <tr> <th> Organization Name </th> <th> Organization Role </th> <th> Responsible Contact </th> </tr> <tr> <td> Cranfield University </td> <td> Coordinator / Overall Contact </td> <td> Prof Vassilios Pachidis </td> </tr> <tr> <td> University of Cambridge </td> <td> Beneficiary </td> <td> Prof Epaminondas Mastorakos </td> </tr> <tr> <td> Karlsruhe Institute of Technology </td> <td> Beneficiary </td> <td> Prof Rainer Koch </td> </tr> <tr> <td> Rolls-Royce PLC </td> <td> Topic Manager </td> <td> Mr Richard Tunstall </td> </tr> </table>

### 1.2. DATA MANAGEMENT Responsible

The project data contact responsible and in charge of data procurement, upload, maintenance and update is given in Table 2.

Table 2 – Project Data Contact Responsible

<table> <tr> <th> Project Data Contact (PDC) </th> <th> **Prof Vassilios Pachidis** </th> </tr> <tr> <td> PDC Affiliation </td> <td> **Cranfield University** </td> </tr> <tr> <td> PDC mail </td> <td> **[email protected]** </td> </tr> <tr> <td> PDC telephone number </td> <td> **+44 (0) 1234 75 4663** </td> </tr> </table>

### 1.3. DATA nature, link with previous data and potential users

The nature of the PROTEUS project dataset generated, handled and preserved is numerical, experimental and software toolsets. The PROTEUS project involves the generation of numerical data from three different simulation sources: 3-D CFD (Computational Fluid Dynamics), 1-D mean-line and 0-D whole-engine performance cycle. Experimental data is provided by the topic manager for validation purposes, collected from the agreed engine demonstrator program test-rig. The several in-house methods developed by the Consortium partners within the PROTEUS project will be capitalised to predict the engine performance and operability during idle and sub-idle conditions, leading to software toolset codes to be delivered to the topic manager.
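The numerical data described above is ultimately reduced into component characteristics ("maps") for integration into whole-engine performance tools (see Section 1.4). As a toy illustration of such a reduced characteristic, the sketch below interpolates a pressure ratio from a small map; all numbers are invented placeholders, not PROTEUS data or code.

```python
import numpy as np

# Toy reduced component characteristic at a fixed corrected speed:
# pressure ratio versus corrected mass flow. Invented placeholder values.
w_corr = np.array([5.0, 7.5, 10.0, 12.5])    # corrected mass flow [kg/s]
pr     = np.array([1.05, 1.20, 1.38, 1.52])  # pressure ratio

def pressure_ratio(w: float) -> float:
    """Linear interpolation on the toy characteristic."""
    return float(np.interp(w, w_corr, pr))

print(pressure_ratio(9.0))  # ~1.31
```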
### 1.4. Data Summary

The data generated within the PROTEUS project will stem from 3-D CFD simulation results of engine component models, 0-D whole-engine performance simulation models, as well as in-house software tools developed by the Consortium partners. The high-fidelity data generated (3-D CFD) is to be reduced into component characteristics for integration into whole-engine performance tools. Data formats will include, for example, numerical datasets, computer codes, text data, NPSS outputs and technical figures, as summarized in Table 3.

Table 3 - Data formats and file extensions to be used within the project

<table> <tr> <th> **Data format** </th> <th> **Format extension** </th> </tr> <tr> <td> C, C++ files </td> <td> .c, .h </td> </tr> <tr> <td> Comma separated value files </td> <td> .csv </td> </tr> <tr> <td> Text files </td> <td> .txt </td> </tr> <tr> <td> Simple data files </td> <td> .dat </td> </tr> <tr> <td> Portable document format files </td> <td> .pdf </td> </tr> <tr> <td> Tagged image files </td> <td> .tif </td> </tr> <tr> <td> Portable network graphic files </td> <td> .png </td> </tr> <tr> <td> FORTRAN 90 files </td> <td> .f90 </td> </tr> <tr> <td> PYTHON files </td> <td> .py </td> </tr> <tr> <td> MATLAB files </td> <td> .m </td> </tr> <tr> <td> NPSS files </td> <td> .int, .mdl, .map </td> </tr> </table>

In order to ensure portability and accessibility of the numerical data generated, comma separated value files (.csv) will be the primary format for data storage and transfer between partners. Final datasets are estimated to occupy broadly 1 TB, although the expected data size will be frequently redefined during the project.

## 2. FAIR data

To manage the numerical and experimental research data, as well as the computational toolsets generated and/or collected during the project, the partners will follow the fundamental IPR rules defined in the H2020 Grant Agreement. Following guidelines on data management in Horizon 2020, PROTEUS partners will ensure that the research data from the project is Findable, Accessible, Interoperable and Re-usable (FAIR).

### 2.1. Making data findable, including provisions for metadata

On the basis of FAIR practice, the numerical and experimental data and the computational toolsets will possess file names and nomenclature according to a standard identification mechanism explained and detailed in the metadata. The research data as well as the metadata file names will have a series of subject identifiers separated by underscores (_). A metadata _readme_.txt file will accompany the data repositories of the project, including a list describing the contents of each directory and the standard file nomenclature. The metadata will be clearly identified by the directory name and the label _readme_ at the end, to permit the user to identify this file as metadata. The generated data will be archived in general by engine type and component, including subdirectories for each case. The file names contained in every subdirectory will be consistent with their location address, as a way of finding and tracking the files and containing any possible misplacement. Hence, the file name standard convention to be used is described by the engine type, component and analysis case in the format _enginetype_component_analysiscase_.file extension, as illustrated in the sketch below. The engine type is given by the engine model. The analysis case is composed of the engine condition, mass flow rate and rotational speed.
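For illustration only, the following sketch shows how such file names could be assembled and parsed programmatically; the engine type, component and analysis-case values are invented placeholders, not PROTEUS data.

```python
# Illustrative helper for the enginetype_component_analysiscase naming
# convention described above; all values below are invented placeholders.
def build_filename(engine_type: str, component: str, condition: str,
                   mass_flow: float, speed: float, ext: str = "csv") -> str:
    # Analysis case = engine condition, mass flow rate and rotational speed.
    analysis_case = f"{condition}-mf{mass_flow:g}-n{speed:g}"
    return f"{engine_type}_{component}_{analysis_case}.{ext}"

def parse_filename(name: str) -> dict:
    stem, ext = name.rsplit(".", 1)
    engine_type, component, analysis_case = stem.split("_")
    return {"engine_type": engine_type, "component": component,
            "analysis_case": analysis_case, "extension": ext}

name = build_filename("demoengine", "fan", "subidle", 12.5, 3000)
print(name)                  # demoengine_fan_subidle-mf12.5-n3000.csv
print(parse_filename(name))
```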
At the end of the project, all available public domain data will be uploaded by Cranfield University onto CORD within 3 (three) months of the project end. CORD is an institutionally-approved secure repository, which retains data for at least 10 years. CORD uses the _figshare_ platform, which is ISO27001 certified and assigns a _DOI_ to each item, to ensure that datasets are findable. Metadata such as keywords will be added to optimize possibilities for re-use. All data, prior to uploading, will be checked for validity and quality by the member of the consortium responsible for it and by the Topic Manager.

### 2.2. Making data openly accessible

The work undertaken within the PROTEUS project is in general sensitive and of commercial value to the Topic Manager, relating to recent and intended future products. Datasets with background Intellectual Property (IP) supplied by the Topic Manager to the PROTEUS partners, or data that may include a third-party data licence from other companies, will not be made openly available. For instance, IP datasets might include geometry, rig and engine test data, performance model information and software. Similarly, results obtained from the PROTEUS project that contain background IP, such as geometry information, will not be made openly available. Any other datasets that do not contain the background IP supplied by the Topic Manager, or that are without a third-party data licence from other companies, will be made openly available. In this context, any PROTEUS project output data will be post-processed in such a way as to remove any Topic Manager background IP to make it publicly accessible. Making the data openly accessible requires formal clearance and consent from the Topic Manager. All peer-reviewed scientific publications resulting from the project will be Open Access where possible, with prior clearance from the Topic Manager.

The data output and results generated from the PROTEUS project will be obtained from commercial software such as ANSYS CFX, from in-house developed toolsets and methods, and from the modelling-system tool (NPSS) supplied by the Topic Manager. Main output data will be post-processed in such a way that no software interface is required to access it; however, if any raw data is required to be stored, documentation will be included for the case of the in-house software and NPSS. For any other raw data derived from commercial programmes, the documentation needed to access it is openly available. As a general principle, the software source codes from the different partners will not be made available. Partners will produce publications describing the new methods applied in the software rather than publishing the source code or the software directly. The Topic Manager prefers not to publish any PROTEUS-derived software directly; however, if any foreground software from PROTEUS is to be made public, then it must be ensured that this software does not contain any Topic Manager background IP, including Topic Manager IP software. A special case is the NPSS thermodynamic modelling-system supplied by the Topic Manager to Cranfield University as background IP. Neither the software itself nor its source code should be made public, due to the Topic Manager's background IP, and because the NPSS code carries a US Export-Control rating which prevents publication. Secured data will be stored in the Exostar ForumPass 6 information repository for the specific purposes and duration of the project.
ForumPass 6 is a trusted workspace built upon the Microsoft Office SharePoint Server 2013 platform and is managed by the Topic Manager. The Exostar ForumPass 6 repository will allow access only to project participants. Metadata will be allocated in the lowest level of subdirectories to describe the datasets. The Topic Manager is responsible for managing and granting access to the Exostar ForumPass 6 platform. Consortium partners will have access to the storage through a security system based on username and password. Since the Topic Manager's background IP protection has been agreed among the partners, no data access committee is foreseen. Nevertheless, in the case of an external request for data access, Cranfield University as leading partner will select an internal data access committee.

### 2.3. Making data interoperable

All research-generated data is exchangeable and can be re-used between the Consortium partners. The exchangeable data shared among the project participants is mainly of numerical nature and is transferred as a tabular data package in text or table-processor files, i.e. CSV (comma separated values) files. Due to the secure data storage and exchange platform managed by the Topic Manager, Exostar ForumPass 6, all experimental and numerical information can be formally shared and be accessible to all Consortium partners. The vocabulary and nomenclature used within the exchangeable data follow the international standards for engine station numbering and nomenclature, to avoid ambiguous data transfer and misinterpretation. The vocabulary reference for this interoperable data is the SAE (1974) Gas Turbine Engine Performance Station Identification and Nomenclature Aerospace Recommended Practice (ARP) 775A. The nomenclature used in the software toolset and method codes will follow the SAE Aerospace Standards (AS) for digital computer programs, to ensure interchangeability between the partners and the Topic Manager. For gas turbine engine performance steady-state and transient simulation codes, the nomenclature standard used is the latest revised version, SAE AS681K. Any additional necessary parameters related to time-based simulations will follow the SAE ARP1257 to cover transient performance analyses.
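As a toy illustration of the CSV exchange format and station-based nomenclature described in Section 2.3, the snippet below writes and reads a small table whose column names use station-style labels (e.g. T2 for a temperature at station 2). The labels and values are invented for illustration, not PROTEUS data.

```python
import csv, io

# Toy CSV exchange table; column names follow station-style nomenclature
# (e.g. T2 = temperature at station 2, P3 = pressure at station 3).
# All labels and values here are invented placeholders.
csv_text = """N (rpm),W2 (kg/s),T2 (K),P3 (kPa),T4 (K)
3000,12.5,288.15,180.0,950.0
4500,18.2,288.15,260.0,1100.0
"""

reader = csv.DictReader(io.StringIO(csv_text))
for row in reader:
    print(row["N (rpm)"], row["P3 (kPa)"])
```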
### 2.4. Increase data re-use (through clarifying licences)

To permit the widest reuse possible, the relevant experimental and numerical data derived from the PROTEUS project, without any Topic Manager background IP and without a third-party data licence, will be clear and accessible through a data usage licence. Any re-use activity will have to be cleared, with prior consent from the Topic Manager, and may be used by external entities for comparison and validation. Sensitive data containing the Topic Manager's background IP or holding a third-party data licence will remain restricted and will not be licenced for re-use or disseminated by the PROTEUS partners unless written permission is obtained from the Topic Manager in each case. In the case of every consortium partner, their own source code, in-house software, and technical data involving the Topic Manager's background IP will not be licenced. To this end, the datasets are expected to be made publicly available for re-use immediately after the full completion of the project. The Topic Manager encourages the partners to apply for patents for any novel technologies developed under the PROTEUS project. In this context, the Topic Manager prefers the partners to delay publication of any materials as necessary to allow time for all relevant inventions to be protected.

To guarantee the re-use of data, a clear history of origin, methodology, data workflows and references will be detailed in the final reports. The utilization of the metadata, standard files, vocabulary and nomenclature previously described in sections 2.1 and 2.3 guarantees an easier re-use of the datasets. Any deviation from the well-established good practices for data submission will be addressed in the metadata. The time for the data to remain re-usable includes the timeframe for the current technology developed to enter into service, and indefinitely after this stage. The time for the data to remain re-usable is subject to the restrictions given by the Topic Manager's background IP. If any other improvement or upgrade to the methods developed herein is achieved within the timeframe given, a re-assessment of the data must be done to obtain up-to-date information, in consent and agreement with the Topic Manager's background IP.

## 3. Allocation of resources

The costs involved in making the data FAIR are considered within the Horizon 2020 CS2 (Clean Sky 2) Joint Technology Initiative (JTI) grant agreement under work package (WP) 1.3 for deliverable preparation and progress reporting. The tasks within this WP are focused on assuring appropriate FAIR data management. Cranfield University, in its role of leading coordinator, will be responsible for the data management of the project. The preservation of the resources and research-generated data will be discussed in advance between the leading organization and the Topic Manager to decide the future of the datasets in terms of their potential value and preservation time.

## 4. Data security

As described in the previous sections, data will be stored on a Cranfield University institutional network drive as well as on the Exostar ForumPass 6 trusted workspace. Cranfield's network drives are backed up automatically by Cranfield's IT on a daily basis to two separate data centers. Cranfield University uses Microsoft Active Directory controls to ensure 'role-based access controls' are enforced, which means that user permissions to content are controlled by their User ID and password. Access to these drives has been authorized only to the consortium participants. Cranfield University employs perimeter _Firewalls_ to protect internal networks and IT systems, including the _FileStore_ that hosts the shared drives. The university also undertakes monthly vulnerability scans to ensure systems remain patched and up-to-date, and has a penetration testing schedule for all its major systems. Final datasets will also be stored on CORD, where the data can be retained for at least 10 years. Cranfield University also has an ongoing project looking to implement processes and systems to optimize the digital preservation of such datasets.

## 5. Ethical aspects

PROTEUS-related research will be checked for ethical compliance locally by each participating University and will need to obtain the necessary approvals. Any data used within the PROTEUS project will be strictly selected from either publicly available sources or data generated by the project itself. There is no plan to use third party data or to collect any kind of personal data within PROTEUS. Data to be published will be reviewed by the Topic Manager before publication.
## 6. Other issues

Cranfield University has allocated approximately 8k Euro to support open access publications and other dissemination activities during the 3-year period. Cranfield also has a policy in place to support Open Access publications through additional funds if needed.

## Acknowledgement

The project leading to this application has received funding from the Clean Sky 2 Joint Undertaking under the European Union's Horizon 2020 research and innovation programme under grant agreement No 785349.

**SUMMARY TABLE 4 FAIR Data Management at a glance: issues to cover in your DMP**

This table provides a summary of the Data Management Plan (DMP) issues to be addressed, as outlined above.

<table> <tr> <th> **DMP component** </th> <th> **Issues to be addressed** </th> </tr> <tr> <td> **1. Data summary** </td> <td> * State the purpose of the data collection/generation The nature of the project datasets is experimental, numerical and a set of software tool codes. The purpose of the collected experimental data is to validate the numerical data and the generated computational tools. The several methods developed by the Consortium partners will be capitalised into software toolsets to predict the engine performance and operability during idle and sub-idle conditions. * Explain the relation to the objectives of the project The data generated within the project will stem from simulation results of engine component models as well as whole-engine performance simulation models. The high-fidelity 3-D CFD data is to be reduced into component characteristics for integration into whole-engine performance tools and to obtain 0-D data for the entire gas turbine engine. * Specify the types and formats of data generated/collected Data format files to be used are: C (.c), C++ (.c, .h), comma separated values (.csv), text (.txt), simple data (.dat), portable document format (.pdf), tagged image (.tif), portable network graphics (.png), FORTRAN90 (.f90), PYTHON (.py), MATLAB (.m), NPSS (.int, .mdl, .map) * Specify if existing data is being re-used (if any) No analytical or numerical data is being re-used. All numerical data originates from the current project itself. Test data is re-used from the Topic Manager. * Specify the origin of the data Datasets will be produced from 3-D CFD analyses, 0-D whole-engine analysis and from in-house software tools developed by the Consortium partners. * State the expected size of the data (if known) Final datasets are estimated to occupy broadly 1 TB, although the expected data size will be frequently re-defined during the project. </td> </tr> <tr> <td> </td> <td> • Outline the data utility: to whom will it be useful The final software, methods and toolkits for whole-engine performance simulation during idle and sub-idle conditions will be delivered to the Topic Manager, envisaged to be matured to TRL6 through validation against test data collected from the engine demonstrator program. </td> </tr> <tr> <td> 2. **FAIR Data** 2.1. Making data findable, including provisions for metadata </td> <td> * Outline the discoverability of data (metadata provision) The numerical and experimental data and the computational tools will possess file names and nomenclature according to a standard identification mechanism detailed in the metadata files. * Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
At the end of the project, all available public domain data will be uploaded by Cranfield University onto CORD within 3 (three) months of the project end. CORD is an institutionally-approved secure repository, which retains data for at least 10 years. CORD uses the _figshare_ platform, which is ISO27001 certified and assigns a _DOI_ to each item, to ensure that datasets are findable. * Outline naming conventions used The research data as well as the metadata file names will have a series of subject identifiers separated by underscores (_). The generated data will be archived by engine type and component, including subdirectories for each case. The file names contained in every subdirectory will be consistent with their location address, as a way of finding and tracking the files and containing any possible misplacement. The file name standard convention to be used is described by the engine type, component and analysis case in the format _enginetype_component_analysiscase_.file extension. The engine type is given by the engine model. The analysis case is composed of the engine condition, mass flow rate and rotational speed. * Outline the approach towards search keyword Metadata such as keywords will be added to optimise possibilities for re-use. * Specify standards for metadata creation (if any). If there are no standards in your discipline describe what type of metadata will be created and how The metadata _readme_.txt files will accompany the data repositories of the project, including a list describing the contents of each directory and the standard file nomenclature. The metadata will be clearly identified by the directory name and the label _readme_ at the end, to permit the user to identify this file as metadata. </td> </tr> <tr> <th> 2.2 Making data openly accessible </th> <th> * Specify which data will be made openly available? If some data is kept closed provide rationale for doing so Work undertaken within the PROTEUS project is sensitive and of commercial value to the Topic Manager, relating to recent and intended future products, and may include third-party data, supplied on the basis of a third-party licence from other companies. Datasets with background Intellectual Property (IP) supplied by the Topic Manager to the PROTEUS partners are required to remain closed. These IP datasets include geometry, rig and engine test data, performance model information and software. Results of the PROTEUS project that are made openly available and public must not reveal background IP from the Topic Manager, such as geometry information. The PROTEUS project output data will be post-processed in such a way as to remove any Topic Manager background IP to make it publicly accessible. * Specify how the data will be made available Making the data openly accessible requires formal clearance and consent from the Topic Manager. All peer-reviewed scientific publications resulting from the project will be Open Access where possible, with prior clearance from the Topic Manager. The clearance given by the Topic Manager will make sure that no data to be made openly available carries background IP or is subject to a third-party license. * Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
The data output and results generated from the PROTEUS project will be obtained from commercial software such as ANSYS CFX, from in-house developed toolsets and methods, and from the modelling-system tool (NPSS) supplied by the Topic Manager. Main output data will be post-processed in such a way that no software interface is required to access it; however, if any raw data is required to be stored, documentation will be included for the case of the in-house software and NPSS. For any other raw data derived from commercial programmes, the documentation needed to access it is openly available. As a general principle, the software source codes from the different partners will not be made available. Partners will produce publications describing the new methods applied in the software rather than publishing the source code or the software directly. The Topic Manager prefers not to publish any PROTEUS-derived software directly; however, if any foreground software from PROTEUS is to be made public, then it must be ensured that this software does not contain any Topic Manager background IP, including Topic Manager IP software. A special case is the NPSS thermodynamic modelling-system supplied by the Topic Manager to Cranfield University as background IP. Neither the software itself nor its source code should be made public, due to the Topic Manager's background IP, and because the NPSS code carries a US Export-Control rating which prevents publication. * Specify where the data and associated metadata, documentation and code are deposited Secured data will be stored in the Exostar ForumPass 6 information repository for the specific purposes and duration of the project. ForumPass 6 is a trusted workspace built upon the Microsoft Office SharePoint Server 2013 platform and is managed by the Topic Manager. The Exostar ForumPass 6 repository will allow access only to project participants. Metadata will be allocated in the lowest level of subdirectories to describe the datasets. * Specify how access will be provided in case there are any restrictions The Topic Manager is responsible for managing and granting access to the Exostar ForumPass 6 platform. Consortium partners will have access to the storage through a security system based on username and password. Since the Topic Manager's background IP protection has been agreed among the partners, no data access committee is foreseen. Nevertheless, in the case of an external request for data access, Cranfield University as leading partner will select an internal data access committee. </th> </tr> <tr> <td> 2.3. Making data interoperable </td> <td> * Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability. All research-generated data is exchangeable and can be re-used between the Consortium partners. The exchangeable data shared among the project participants is mainly of numerical nature, and is transferred as a tabular data package in text or table-processor files, i.e. CSV (comma separated values) files. Due to the secure data storage and exchange platform managed by the Topic Manager, Exostar ForumPass 6, all experimental and numerical information can be formally shared and be accessible to all Consortium partners. * Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability?
If not, will you provide mapping to more commonly used ontologies? The vocabulary used within the exchangeable datasets is standard and follows the international standards for engine station numbering and nomenclature. The interoperable data vocabulary is according to the SAE (1974) Gas Turbine Engine Performance Station Identification and Nomenclature Aerospace Recommended Practice (ARP) 775A. The nomenclature used in the software toolset codes is according to the SAE Aerospace Standards (AS) for digital computer programs. For engine performance steady-state and transient simulation codes, the nomenclature standard is according to the SAE AS681K. Any additional necessary parameters related to time-based simulations will follow the SAE ARP1257. </td> </tr> <tr> <th> 2.4. Increase data re-use (through clarifying licences) </th> <th> * Specify how the data will be licenced to permit the widest reuse possible The relevant experimental and numerical data derived from the PROTEUS project, without any Topic Manager background IP and without a third-party data licence, will be clear and accessible through a data usage licence that will be released, requiring formal clearance and consent from the Topic Manager. No Topic Manager background IP will be licenced or disseminated by the PROTEUS partners unless written permission is obtained from the Topic Manager in each case. In the case of every consortium partner, their own source code, in-house software, and technical data involving the Topic Manager's background IP will not be licenced. * Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed Datasets are expected to be available for re-use at the end of the project, with prior authorization and clearance from the Topic Manager and provided they are free of any Topic Manager background IP. The Topic Manager encourages the partners to apply for patents for any novel technologies developed under the PROTEUS project. In this context, the Topic Manager prefers the partners to delay publication of any materials as necessary to allow time for all relevant inventions to be protected. * Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why Sensitive data containing the Topic Manager's background IP or holding a third-party data licence will remain restricted and will not be available for re-use, due to its sensitive and commercial value, and may include third-party data. Any other re-use activity that does not involve the Topic Manager's background IP will have to be cleared, with prior consent from the Topic Manager, and may be used by external entities for comparison and validation. These datasets may be made publicly available for re-use immediately after the full completion of the PROTEUS project. * Describe data quality assurance processes To guarantee the re-use of data, a clear history of origin, methodology, data workflows and references will be detailed in the final reports. The utilization of the metadata, standard files, vocabulary and nomenclature previously described in sections 2.1 and 2.3 guarantees an easier re-use of the datasets. Any deviation from the well-established good practices for data submission will be addressed in the metadata.
* Specify the length of time for which the data will remain re-usable Naturally, the time for the data to remain re-usable includes the timeframe for the current technology developed to enter into service, and indefinitely after this stage. The time for the data to remain re-usable is subject to the restrictions given by the Topic Manager's background IP. If any other improvement or upgrade to the methods developed herein is achieved within the timeframe given, a re-assessment of the data must be done to obtain up-to-date information, in consent and agreement with the Topic Manager's background IP. </th> </tr> <tr> <td> **3. Allocation of resources** </td> <td> * Estimate the costs for making your data FAIR. Describe how you intend to cover these costs The costs involved in making the data FAIR are considered within the Horizon 2020 CS2 (Clean Sky 2) Joint Technology Initiative (JTI) grant agreement under work package (WP) 1.3 for deliverable preparation and progress reporting. The tasks within this WP are focused on assuring appropriate FAIR data management. * Clearly identify responsibilities for data management in your project Cranfield University, in its role of leading coordinator, will be responsible for the data management of the project. * Describe costs and potential value of long term preservation The preservation of the resources and research-generated data will be discussed in advance between the leading organization and the Topic Manager to decide the future of the datasets in terms of their potential value and preservation time. </td> </tr> <tr> <td> **4. Data security** </td> <td> • Address data recovery as well as secure storage and transfer of sensitive data As described in the previous sections, data will be stored on a Cranfield University institutional network drive as well as on the Exostar ForumPass 6 trusted workspace. Cranfield's network drives are backed up automatically by Cranfield's IT on a daily basis to two separate data centers. Cranfield University uses Microsoft Active Directory controls to ensure 'role-based access controls' are enforced, which means that user permissions to content are controlled by their User ID and password. Access to these drives has been authorized only to the consortium participants. Cranfield University employs perimeter _Firewalls_ to protect internal networks and IT systems, including the _FileStore_ that hosts the shared drives. The university also undertakes monthly vulnerability scans to ensure systems remain patched and up-to-date, and has a penetration testing schedule for all its major systems. Final datasets will also be stored on CORD, where the data can be retained for at least 10 years. Cranfield University also has an ongoing project looking to implement processes and systems to optimize the digital preservation of such datasets. </td> </tr> <tr> <td> **5. Ethical aspects** </td> <td> • To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former </td> </tr> <tr> <td> </td> <td> PROTEUS-related research will be checked for ethical compliance locally by each participating University and will need to obtain the necessary approvals. Any data used within the PROTEUS project will be strictly selected from either publicly available sources or data generated by the project itself.
There is no plan to use third party data or to collect any kind of personal data within PROTEUS. Data to be published will be reviewed by the Topic Manager before publication. </td> </tr> <tr> <td> **6. Other** </td> <td> • Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any) Cranfield University has allocated approximately 8k Euro to support open access publications and other dissemination activities during the 3-year period. Cranfield also has a policy in place to support Open Access publications through additional funds if needed. </td> </tr> </table>

<table> <tr> <th colspan="4"> **HISTORY OF CHANGES** </th> </tr> <tr> <td> **Version** </td> <td> **Issue date** </td> <td> </td> <td> **Change** </td> </tr> <tr> <td> 1.0 </td> <td> August 2017 </td> <td> ▪ </td> <td> Initial version, based on the H2020 DMP template (v1.0, October 2017) </td> </tr> <tr> <td> 2.0 </td> <td> February 2018 </td> <td> ▪ </td> <td> Update to Data Management Plan </td> </tr> </table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0267_PATH_734629.md
# Introduction

## Project Abstract

PATH is intended to promote collaborative research focused on the development of high density plasma sources, implemented through the exchange of staff between the partners of the network. The research will also address transfer of knowledge and training of the researchers in the specific field of plasma sources and their applications in the telecommunication sector. High density plasma sources find a large number of industrial applications, from material treatment to telecommunications. Overcoming the density limit of current sources will open new frontiers in several technological fields. PATH aims at cross-linking different competences to study and develop prototypes of plasma sources and plasma antennas based on hybrid Radiofrequency and Hollow Cathode technologies. A Gaseous Plasma Antenna (GPA) is a plasma discharge confined in a dielectric tube that uses partially or fully ionized gas to generate and receive electromagnetic waves; GPAs are virtually “transparent” above the plasma frequency and become “invisible” when turned off. Unlike ordinary metallic antennas, GPAs and Plasma Antenna Arrays can be reconfigured electrically (rather than mechanically) with respect to impedance, frequency, bandwidth and directivity, on time scales of the order of microseconds or milliseconds. It is also possible to stack arrays of GPAs designed to operate at different frequencies. A Plasma Antenna will be able to: (i) identify the direction of an incoming signal, (ii) track and locate the antenna beam on the mobile/target, (iii) steer the beam while minimizing interference. Current technology is based mainly on: (i) DC discharge, (ii) AC discharge, (iii) RF discharge, (iv) Microwaves, (v) Hollow cathode. Improvement of plasma source performance requires a strong effort in terms of modelling and technology. The aim of PATH is to merge European competences to make a substantial step toward innovative hybrid plasma sources.

## Document scope

This document is the deliverable D7.1. Its intended use is to provide the PATH data management plan compliant with the Open Research Data Pilot. The DMP provides an analysis of the main elements of the data management policy; it outlines how the research data (collected or generated) will be handled during and after the PATH project. It describes which standards and methodology for data collection and generation will be followed, and whether and how data will be shared. The format of the plan follows the H2020 template.

# Data Summary

PATH is a Horizon 2020 project participating in the Research and Innovation Staff Exchange. The goal of the RISE programme is to foster exchange of knowledge, professionals and know-how between European nations.

## Objectives of the PATH project

PATH's general objective is to establish the knowledge and technology needed to develop micro-plasma sources that can tune density up to 10^20 ions/m^3 with low power (<10 W), and their use as elements for _advanced antenna systems (see par. 1.1)_. One crucial aspect of the PATH project is its multidisciplinary approach. The combination of the methodologies will contribute to the overall programme and scientific objectives of the project. Through mobility actions, researchers will approach the scientific goals of the project while at the same time being exposed to a variety of new technologies.
Hence, the requirements on data management depend on the context of each field, but a data management strategy is needed for a more efficient exchange of knowledge and results between the partners and the overall scientific community. The DMP, in its current form, is intended as a starting point for the discussion about the PATH data management plan strategy. Nonetheless, it can be the case that this situation evolves during the lifespan of the project. Thus, the DMP will be updated during the project lifetime with the Project Periodic Reports.

## Formats of outcomes

The following table summarizes the main data formats that will be used in the PATH project to communicate and disseminate project results.

<table> <tr> <th> **Type of data** </th> <th> **Formats** </th> </tr> <tr> <td> Tabular data with minimal metadata </td> <td> Delimited text _(.txt)_ , MS excel _(.xls/.xlsx)_ , MS access _(.mdb/.mdbx)_ , OpenDocument Spreadsheet _(.ods)_ </td> </tr> <tr> <td> Textual data </td> <td> Rich Text Format _(.rtf)_ , plain text _(.txt)_ , MS word _(.doc/.docx)_ </td> </tr> <tr> <td> Image data </td> <td> TIFF _(.tif),_ Common image format _(.jpeg, .gif, .raw, .psd, .png, .pdf)_ </td> </tr> <tr> <td> Audio data </td> <td> Free Lossless Audio Codec _(.flac),_ Common audio format _(.mp3, .aif, .wav)_ </td> </tr> <tr> <td> Video data </td> <td> Common video format _(.mp4, .wmv)_ </td> </tr> <tr> <td> Documentation and scripts </td> <td> Common doc format _(.rtf, .pdf, .odt, .txt, .doc/.docx)_ _Secondary format: LaTeX format (.tex)_ </td> </tr> <tr> <td> Presentation and slides </td> <td> Common presentation format _(.pdf/.ppt/.odp)_ _Secondary format: LaTeX Beamer format (.tex)_ </td> </tr> </table>

**Table 1**: recommended formats for project's outcomes

Other formats will be added according to the project's needs and developments. In terms of data and tooling, PATH will produce the following:

* Reports and studies in high density plasma sources' physics;
* Innovative codes able to support system design and development;
* Hollow cathode and Radio frequency sources innovative designs;
* Plasma antenna tests, reports and designs;
* Simulation of complex geometry plasma antennas.

The following table summarizes the main data formats that will be produced in the PATH project. A small sketch showing how these recommended formats can be checked programmatically is given at the end of this section.

<table> <tr> <th> **Type of data** </th> <th> **Formats** </th> </tr> <tr> <td> Software source code and test script </td> <td> Rich Text Format _(.rtf)_ , plain text _(.txt)_ , MS word _(.doc/.docx)_ </td> </tr> <tr> <td> Software executable code </td> <td> Executable file format _(.exe)_ </td> </tr> <tr> <td> Component design </td> <td> AUTOCAD data format, CST </td> </tr> <tr> <td> Antenna simulation codes </td> <td> _MatLab source code format (.m)_ _MatLab data format (.mat)_ </td> </tr> </table>

**Table 2**: recommended formats for internal data exchange

Data collected in previous projects and experiments will be used to support experimental results and preliminary studies. Codes and scripts developed in other projects will be used and modified in order to suit PATH's needs. However, the overall objective of PATH is to design a plasma antenna. Thus, PATH will generate a wide range of data: results of different experiments, measurements, observations, results from fieldwork, recordings, videos and images of developed prototypes. The number of outcomes is not foreseeable, but will be of large scale. Outcomes, the simulation environment, experimental raw data and antenna designs will capture the interest of a number of scientific communities: plasma physics, electromagnetic physics, information technology engineering, telecommunications. Moreover, the industrial communities of antenna design and electromagnetic circuits will find PATH's data useful for plasma instruments. Images and videos of working plasma instruments will catch students' and amateurs' interest. Not all parts of PATH data will be open: industrial and patent-like information will be shared only within the consortium, according to the IPR agreement stated in the Consortium Agreement. Scientific results, data from experiments and physics observations will be deposited in a repository and shared within the OpenAIRE portal.
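As anticipated above, a small illustration of how the recommended formats in Tables 1 and 2 could be enforced in practice follows; the extension list is a non-exhaustive subset of the tables, and the file names are invented examples.

```python
from pathlib import Path

# A subset of the recommended extensions from Tables 1 and 2 above;
# extend as needed. The file names below are invented examples.
RECOMMENDED = {".txt", ".rtf", ".doc", ".docx", ".xls", ".xlsx", ".ods",
               ".tif", ".jpeg", ".png", ".pdf", ".mp3", ".wav", ".mp4",
               ".tex", ".ppt", ".odp", ".m", ".mat", ".exe"}

def check_outcomes(paths):
    """Report files whose extension is not in the recommended set."""
    return [p for p in map(Path, paths) if p.suffix.lower() not in RECOMMENDED]

offending = check_outcomes(["report_wp3.docx", "antenna_sim.mat", "notes.xyz"])
print(offending)  # e.g. [PosixPath('notes.xyz')]
```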
# Findable, accessible, interoperable and reusable (FAIR) data

The information listed below reflects the conception and design of the individual work packages at the beginning of the project. Because the operational phase of the project started in January 2017, no dataset has been generated or collected by the delivery date of this DMP. Nonetheless, a preliminary dataset description scheme follows:

<table> <tr> <th> Dataset reference and name </th> <th> Name, Metadata URL, Homepage, Publisher, Maintainer </th> </tr> <tr> <td> Dataset description </td> <td> Description, Provenance, Usefulness, Similar data, Re-use and integration </td> </tr> <tr> <td> Standards and Metadata </td> <td> Metadata description, Vocabularies and Ontologies </td> </tr> <tr> <td> Data Sharing </td> <td> License, URL dataset description, Openness, Software necessary, Repository </td> </tr> <tr> <td> Archiving and Preservation </td> <td> Preservation, Growth, Archive Size </td> </tr> </table>

**Table 3**: Dataset template
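To make the template concrete, a minimal sketch of how a dataset description following Table 3 could be captured in code is given below; all field values are invented placeholders, not actual PATH datasets.

```python
from dataclasses import dataclass

# Mirrors the dataset description scheme of Table 3; all example values
# below are invented placeholders.
@dataclass
class DatasetDescription:
    name: str
    metadata_url: str
    homepage: str
    publisher: str
    maintainer: str
    description: str
    provenance: str
    license: str
    openness: str          # e.g. "open" or "consortium-only"
    repository: str
    preservation: str
    archive_size: str

example = DatasetDescription(
    name="plasma-source-characterisation",
    metadata_url="https://example.org/path/meta/plasma-source",
    homepage="https://example.org/path",
    publisher="PATH consortium",
    maintainer="Coordinator",
    description="I-V and density measurements of a prototype source.",
    provenance="WP2 laboratory campaign",
    license="CC-BY-4.0",
    openness="open",
    repository="OpenAIRE-compliant repository",
    preservation="5 years",
    archive_size="~10 GB",
)
print(example.name, example.openness)
```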
Transfer of sensitive data will be performed via the ftp storage or a temporary Dropbox folder. At the time of issuing the present document, no action has been taken to safely store data in certified repositories for long-term preservation and curation.

# Ethical aspects

No ethical or legal issues impacting data sharing have been identified so far.

# END OF DOCUMENT
0268_SHiELD_727301.md
# Executive Summary

The objective of this deliverable is to present the second version of the data management plan for the SHiELD project. According to the Grant Agreement, "_this deliverable will include the report on data management in the first reporting period and an update of project's data management plan (D1.6) if needed_". This document covers a wide set of activities such as data collection, generation, storage and preservation. In this action, we envision five different types of data: data related to the use cases, data related to the meta-analysis, data coming from publications, public deliverables and open source software. The document presents, following the EC template [1], how these different types of data will be collected, who the main beneficiaries are, and how SHiELD will store them, manage them, and make them accessible, findable and re-usable. The text continues with the resources foreseen for this openness and the handling of data, and concludes with the security and ethical aspects that will be taken into consideration in the context of SHiELD. This plan is the second version of the data management plan; it will be updated by M36 as part of the Technical Reports, having as input the work carried out in the use cases (WP6), the social and technical work packages (WP2 – WP5) and the dissemination activities (WP7).

# Introduction

## About this deliverable

This deliverable focuses on the management of the data in SHiELD. In this context there are two different types of data: those related to the publications generated as part of research activities, and those related to the data collected from citizens, users and non-users of digital public services, as well as from civil servants, that will be used as part of the implementation of the different key results established in the project. In this sense, we are not considering real data, except for a single subset of data discussed below in Section 2.4.3. According to article "39.2 Processing of personal data by the beneficiaries" of the SHiELD grant agreement: "The beneficiaries may grant their personnel access only to data that is strictly necessary for implementing, managing and monitoring the Agreement. The beneficiaries must inform the personnel whose personal data are collected and processed by the Commission." In this sense, we are not directly processing personal data, as described in our Deliverable D1.8, Ethical protocols and approval, where it is stated: "the patient health records data used in our trials are **simulated, and therefore will not identify any living individual**. In consequence, they require no specific treatment. However, to ensure that tests are viable, they must be good exemplars of such real-life data. This is the responsibility of the respective trial partners. Although there is no legal requirement to manage or process these data with any special care, in practical terms and for the purposes of the trials themselves, they will be treated as if they were real patient data."

## Document structure

The document follows the established H2020 template for a Data Management Plan (DMP) [1]. Section 2 presents the data summary, explaining the purpose of the data collection and generation. Section 3 explains how the data will be made FAIR, and thus findable, accessible, interoperable and reusable. Section 4 briefly explains how the financial resources for this openness are envisioned at this stage to be allocated. Sections 5 and 6 outline security and ethical aspects respectively.
And finally, Section 7 presents the conclusions and future work.

# Data Summary

## Purpose of the data collection/generation and its relation to the project's objectives

The following list of SHiELD's project objectives and related key results (KR) provides a description for each KR, specifying the purpose of the data collection/generation (if any):

* **(O1) Systematic protection of health data against threats and cyber-attacks.**
  * **KR01: Knowledge base of generic security issues that may affect a system.** The purpose is to create a knowledge base which captures threats that should be managed by the architecture and regulatory data protection requirements (supporting objective O4). This knowledge base captures neither users' health data nor users themselves; it only manages threats and compliance issues in specific end-to-end applications. For the SHiELD use cases we will use fake data, just to prove the benefits of the results.
  * **KR02: Tool that provides an automated analysis of data structures in order to identify sensitive elements that may be vulnerable to specific threats.** Data structures often contain flaws and weaknesses that show up during the storage or exchange of data. The purpose is to analyse/collect the schemas of these structures. SHiELD pilots will be used to identify sensitive data, which will be traced during the pilots to ensure that its privacy aspects and access rights requirements are kept.
  * **KR03: Security requirements identification tool**: this tool will allow models of end-to-end applications to be created, and security threats and compliance issues affecting those applications to be automatically identified. We will just list security threats and compliance issues according to 'security by design' principles.
* **(O2) Definition of a common architecture for secure exchange of health data across European borders.**
  * **KR04: SHiELD open architecture and open secure interoperability API:** the purpose is to create a SHiELD architecture which is composed of the results of the epSOS project together with tools brought by SHiELD partners, such as the anonymisation mechanisms. Furthermore, the health data interchanged is fake; we do not use real user data 1 . SHiELD pilots will invent users for each scenario. Basically, the approach is to give citizens and healthcare providers the possibility of accessing their health data from other countries.
  * **KR05: SHiELD (Sec)DevOps tool:** the purpose is twofold. At development time, a set of architectural patterns (mainly in Java) is stored in order to check data protection security mechanisms. At run time, a set of tools provides monitoring facilities, alerting the operator of the system that a threat is likely to occur.
* **(O3) Assurance of the protection and privacy of the health data exchange.** This objective is addressed mostly in WP5, led by IBM based on their expertise in novel data security mechanisms for securing the data exchanged among the different Member States. This data is protected before, during and after it is exchanged.
  * **KR06: Data protection mechanisms**: the purpose is to collect a suite of security mechanisms to address data protection threats and regulatory compliance issues in end-to-end heterogeneous systems. This includes (but is not limited to) tamper detection for mobile devices, data protection mechanisms, and consent-based access control mechanisms.
  * **KR07: Privacy protection mechanisms:** these privacy mechanisms address different aspects of privacy protection and regulation of data.
These include methods for sensitive information identification. The purpose is to use and develop methods to mask private sensitive information dynamically, on the fly, as well as methods able to anonymise data while still enabling analysis on the data.

* **(O4) To understand the legal/regulatory requirements in each member state, which are only partly aligned by previous EU directives and regulations, and provide recommendations to regulators for the development of new/improved regulations.**
  * **KR08: Legal recommendations report.** For this KR we are not going to use private data. The purpose is to create a common regulatory framework where the legal requirements regarding security among the member states are aligned.
* **(O5) Validation of SHiELD in different pilots across three Member States**
  * **KR09: Pilots:** the purpose is to test implementations which are deployed in three Member States, supporting the validation scenarios defined. The collected data will be used to prove that the scenarios are working.
  * **KR10: Best practices:** the purpose of the data used is to describe lessons learned and best practices for protecting health data.
* **(O6) Dissemination of SHiELD results**
  * **KR11: Publications:** the purpose is to collect the scientific papers, white papers, popular press articles, media and social media content we are producing.
  * **KR12: Take-up opportunities:** the purpose is to identify the main users, standards bodies and regulators.

## Types and formats

During the first half of the project, we are just considering the format suggested in [2], and we consider a Patient Summary as an identifiable "dataset of essential and understandable health information" that is made available "at the point of care to deliver safe patient care during unscheduled care [and planned care], with its maximal impact in unscheduled care"; it can also be defined at a high level as "the minimum set of information needed to assure healthcare coordination and the continuity of care" [2]. From a technical point of view, we will use readable formats such as CSV, XML or JSON. Examples of the XML format are described in [3], which is the official Metadata Registry. The SHiELD project manages the structured and unstructured _simulated_ data collected.

* **Structured data** refers to kinds of data with a high level of organization, such as information in a relational database. For example:
  * SDO (discharge form), which contains 5 .txt files where each field is separated by ";"
  * ED (Emergency Department) dataset.
  * Prescription forms.
  * Constant collection forms.
* **Unstructured data** refers to information that either does not have a pre-defined _data model_ or is not organized in a pre-defined manner. Unstructured information is typically _text_-heavy, but may contain data such as dates, numbers, and facts as well. Examples are:
  * Reports of complementary tests (radiology, pathological anatomy, endoscopy, etc.)
  * Monitoring of evolution in external consultations.
  * Unstructured data documents are typically uploaded in PDF format.

## Re-use of existing data

We will reuse the existing and available data provided by epSOS ( _https://ec.europa.eu/digital-single-market/en/news/cross-border-health-project-epsos-what-has-it-achieved_ ) just to check the feasibility of the solutions provided in SHiELD.

## Origin of the data

The data is based on the scenarios provided in SHiELD, and more precisely on the requirements of the three member states involved (UK, Italy, Spain (Basque Country)).
The data used in the scenarios that are going to be built in the UK and Italy are simulated, and do not relate to nor describe any individual. The clinical data extracted from the health system of the Basque Country contain real patient data. These data are partially extracted and combined to build _quasi_-real clinical records, but at no time can any patient be identified, covering all ethical and legal requirements. In addition, a clinical protocol will be drawn up in the Basque Country, which will be approved by the local ethics committee, and the informed consent of the patients whose data records are used in the validation of the tools will be collected (see also ANNEX 2: Draft DPIA). The use of these data will help us to demonstrate the developed technology.

### Lancs

All data used in this programme is 'test data' that has been generated using fake patients with fake ID numbers but with real symptoms and diagnoses linked to SNOMED codes. The diagnoses, drugs and symptoms are not related to any particular patient, i.e. we do not take a 'real' patient and then simply change the names. We use fake patients and simply add in various diagnoses and other symptoms matching real coding/drug formulae.

### FCSR

FCSR has created a synthetic dataset generator, which creates the requested number of fake patient profiles and related blood tests. Each patient is identified by his **biographical data** (made of SSN, patient ID, name, surname, address, nationality, gender, birth date and place, details about the job, details about the school career) and his **blood test data** (containing sample measurements of blood test components). Both the biographical data and the blood test data are generated based on statistics about the Italian population. As no real data are involved in the creation of the aforementioned dataset, it neither describes nor belongs to any individual, and thus it can be used in an ethics-compliant way to test the technology developed. A sample fake patient profile generated according to the Italian profile (biographical data plus a sample blood test) is shown in the following:

```json
{
  "address": {
    "city": "Reggio Emilia",
    "postalCode": 2010,
    "road": "TESTI F. (Viale)",
    "roadNumber": 10,
    "telephoneNumber": 2982
  },
  "birthPlace": {
    "birthCity": "Reggio Emilia",
    "nationality": "IT"
  },
  "career": {
    "job": "secondary",
    "schoolYears": 15
  },
  "exams": {
    "bloodTests": [{
      "antithrombin": { "normalRange": [85, 117], "unit": "%", "value": 100 },
      "cholesterol": {
        "hdl": { "normalRange": [0.9, 2.0], "unit": "mmol/L", "value": 1.20076633391924 },
        "ldl": { "normalRange": [2.0, 3.4], "unit": "mmol/L", "value": 3.34000795522381 },
        "total": { "normalRange": [3.0, 6.5], "unit": "mmol/L", "value": 4.80668241996787 },
        "tryglicerides": { "normalRange": [0.9, 1.7], "unit": "mmol/L", "value": 1.32954065412412 }
      },
      "date": "1987-01-10",
      "patientId": 1421586624
    }]
  },
  "familyDoctor": { "id": "54236bf1-447c-49b8-955f-69b4f46d5671" },
  "identity": {
    "birthDate": "1956-07-21",
    "gender": "M",
    "name": "Gabriele",
    "patientId": 1421586624,
    "socialId": "ONPSIM10S36V660K",
    "surname": "Eupizi"
  }
}
```

### OSA

Osabide Global (Osakidetza's system) will access the clinical data of the patients for the creation of the Patient Summary, consulting multiple repositories belonging to Osakidetza, among others:

* Osabide AP
* Presbide
* Osanaia
* eOsabide

Queries to these applications will be made via invocations to **Web Services**. Although containing some elements of real data (see below), clinical data from fake patients are used, preserving the same format as the real ones, for the SHiELD project.

#### Test environment

Ibermática carries out the developments on the Osakidetza system (Osabide Global) and, for this purpose, has created a test environment with fake patients (their personal data are fictitious but the clinical data are real, as mentioned in the previous section). These data have the same structure as the real ones used in Osabide Global. Access to this test environment is restricted to Osakidetza and the developers of Ibermática (only those working on Osakidetza's system). Therefore, these are the test data we use in SHiELD. Finally, for the simulation of sending clinical data between OpenNCP nodes, we have taken the clinical data of a real patient with stroke from the Osabide Global system (for the "breaking glass" use case); Ibermática has then taken personal data from a patient of the test environment and mixed them with the clinical data of the actual patient with stroke, thus creating a fake patient with useful clinical data for the use case, with the correct structure (XML HL7 CDA Level 3). In summary, the data:

* Are simulated, using the same structure as a live patient system
* Use simulated (non-real) patient details and identifiers
* Use some clinical data extracted from real patients with the target condition(s)

**The real clinical data therefore do not refer to the fake IDs of the simulated patient records.**

#### Operations within the proof environment

As stated before, the data is obtained through Web Services, and these calls are made through three operations: **getInformeHCRCDA3, getInformeHCRCDA2 and getInformeHCRCDA3_V2**.

**Table 1. Web service structure for getInformeHCRCDA3**

<table>
<tr> <th> **Name of the operation** </th> <th> **getInformeHCRCDA3** </th> </tr>
<tr> <td> Access by </td> <td> B66-HCDSNS </td> </tr>
<tr> <td> Access to </td> <td> B40-OsabideGlobal </td> </tr>
<tr> <td> Description </td> <td> This method returns a patient's summary history report in HL7 CDA level 3 format.
</td> </tr>
<tr> <td> Input parameters </td> <td> **[MANDATORY] Int cic**: identifying CIC in Osakidetza of the patient to be consulted. **[MANDATORY] Int language**: code identifying the language in which the report will be displayed: * Spanish: 1 * Basque: 2 </td> </tr>
<tr> <td> Output parameters </td> <td> **PatientCDA3WS:** **Int cic**: CIC that requested the report. **String hcr:** XML message with the result of the query (Patient Summary Report for the CIC in HL7 CDA Level 3). </td> </tr>
<tr> <td> Output size </td> <td> Approximately 30 kB, although it varies depending on the volume of information obtained from the different systems. </td> </tr>
<tr> <td> Number of output records </td> <td> 1 </td> </tr>
<tr> <td> Errors </td> <td> _OG002 - ERROR MISSING VALUE REQUIRED_ _OG004 - WRONG PARAMETER ERROR ERROR_ _OG005 - ERROR DB NOT CONFIGURED_ </td> </tr>
<tr> <td> Messaging </td> <td> XML </td> </tr>
<tr> <td> Interface </td> <td> SOAP 1.1 </td> </tr>
<tr> <td> Specific characteristics of the service for the interface </td> <td> \- 24X7 </td> </tr>
<tr> <td> Security: Authentication </td> <td> \- No: no authentication is required. </td> </tr>
<tr> <td> Transport </td> <td> * HTTP * HTTPS </td> </tr>
</table>

**Table 2. Web service structure for getInformeHCRCDA2**

<table>
<tr> <th> **Name of the operation** </th> <th> **getInformeHCRCDA2** </th> </tr>
<tr> <td> Access by </td> <td> B66-HCDSNS </td> </tr>
<tr> <td> Access to </td> <td> B40-OsabideGlobal </td> </tr>
<tr> <td> Description </td> <td> This method returns the report of a patient's summary history in PDF format and the report header in CDA level 2. </td> </tr>
<tr> <td> Input parameters </td> <td> **[MANDATORY] Int cic**: identifying CIC in Osakidetza of the patient to be consulted. **[MANDATORY] Int language**: code identifying the language in which the report will be displayed: * Spanish: 1 * Basque: 2 </td> </tr>
<tr> <td> Output parameters </td> <td> **PatientCDA2WS:** **Int cic**: CIC for which the report has been requested. **String cabecera_HCR**: XML message with the header resulting from the query (header of the Clinical History Report for the CIC in HL7 CDA Level 2). **Byte[] hcr**: summarized clinical history document in PDF format. </td> </tr>
<tr> <td> Output size </td> <td> Approximately 120 kB, although it varies depending on the volume of information obtained from the different systems. </td> </tr>
<tr> <td> Number of output records </td> <td> 1 </td> </tr>
<tr> <td> Errors </td> <td> _OG002 - ERROR MISSING VALUE REQUIRED_ _OG004 - WRONG PARAMETER ERROR ERROR_ _OG005 - ERROR DB NOT CONFIGURED_ </td> </tr>
<tr> <td> Messaging </td> <td> XML </td> </tr>
<tr> <td> Interface </td> <td> SOAP 1.1 </td> </tr>
<tr> <td> Specific characteristics of the service for the interface </td> <td> \- 24X7 </td> </tr>
<tr> <td> Security: Authentication </td> <td> \- No: no authentication is required.
</td> </tr>
<tr> <td> Transport </td> <td> * HTTP * HTTPS </td> </tr>
</table>

**Table 3. Web service structure for getInformeHCRCDA3_V2**

<table>
<tr> <th> **Name of the operation** </th> <th> **getInformeHCRCDA3_V2** </th> </tr>
<tr> <td> Access by </td> <td> B66-HCDSNS </td> </tr>
<tr> <td> Access to </td> <td> B40-OsabideGlobal </td> </tr>
<tr> <td> Description </td> <td> This method returns the report of a patient's summary history in HL7 CDA level 3 format and also in PDF format. </td> </tr>
<tr> <td> Input parameters </td> <td> **[MANDATORY] Int cic**: identifying CIC in Osakidetza of the patient to be consulted. **[MANDATORY] Int language**: code identifying the language in which the report will be displayed: * Spanish: 1 * Basque: 2 </td> </tr>
<tr> <td> Output parameters </td> <td> **PacienteCDA3WS_V2**: **Int cic**: CIC for which the report has been requested. **String hcr_cda**: HL7 XML message with the result of the query (Summary Clinical History Report for the CIC in HL7 CDA Level 3). **Byte[] hcr_pdf**: summarized clinical history document in PDF format. </td> </tr>
<tr> <td> Output size </td> <td> Approximately 170 kB, although it varies depending on the volume of information obtained from the different systems. </td> </tr>
<tr> <td> Number of output records </td> <td> 1 </td> </tr>
<tr> <td> Errors </td> <td> _OG002 - ERROR MISSING VALUE REQUIRED_ _OG004 - WRONG PARAMETER ERROR ERROR_ _OG005 - ERROR DB NOT CONFIGURED_ </td> </tr>
<tr> <td> Messaging </td> <td> XML </td> </tr>
<tr> <td> Interface </td> <td> SOAP 1.1 </td> </tr>
<tr> <td> Specific characteristics of the service for the interface </td> <td> \- 24X7 </td> </tr>
<tr> <td> Security: Authentication </td> <td> \- No: no authentication is required. </td> </tr>
<tr> <td> Transport </td> <td> * HTTP * HTTPS </td> </tr>
</table>

#### Format and structure of the clinical data

The format of the Patient Summary must be HL7 CDA Level 3: in the case of Osabide Global, and unlike other reports, CDA level 3 (HL7) will be required, which means that both the header and the body must be properly structured. That is, whereas other reports will be sent embedded in PDF (depending on their size, MTOM could be applied to optimise the binary transfer), in the case of Osabide Global the XML must be sent properly structured according to the HL7 CDA level 3 coding standard (LOINC). An example of the code has been added as Annex 1.

## Expected size of the data

At this stage of the project it is still hard to define precisely the data size and ingestion rate. However, it is useful to go into detail regarding the dimensions of the most important data involved in the use cases:

* **Medical images**: include all the bio-images such as ultrasound scans, MRI (magnetic resonance imaging) or CT (computed tomography) scans. **Computerized Tomography** uses 3D X-rays to make detailed pictures of structures inside the body; it takes **pictures in slices**, like a loaf of bread. This means that each slice is a picture, and the number of pictures can range from 30 for simple examinations to 1000+ for sophisticated examinations. A scan can be repeated several times (2-6) to reduce noise and to ensure high quality of the examination.
In conclusion, we will have from 30 to 1000 images of about 5 MB each, times 2-6 series; a single CT examination for a patient will therefore be between 300 MB and 30 GB depending on the kind of investigation;

* **SDO and ED dataset**: around 1 kB per patient, since no images are included and the information is codified (.txt format).
* **Blood tests:** for Italian patients, these are created synthetically using the synthetic dataset generator, which can produce as many entries as required.
* **Patients' profiles:** for Italian patients, these are created synthetically using the synthetic dataset generator, which can produce as many entries as required.

## To whom might it be useful ('data utility')?

The results of SHiELD will be useful for healthcare providers, governments, and patients.

# FAIR DATA

This data management plan follows the FAIR (Findable, Accessible, Interoperable, Reusable) principles. It should be noted that no real patient data (e.g., scans relating to living data subjects in the OSA trial) will be published in project deliverables or other publications.

## Data findable

There are different types of data:

* Data related to the use cases
* Data coming from publications
* Data coming from public deliverables
* Open source software

### Data related to the use cases

During the lifetime of the project, and especially during the execution of trials, SHiELD partners expect several types of data to be generated: mainly health data, location data, personal data ("fake" names, addresses, contact information, IP addresses, etc.), pseudonymised data (user names, device identifiers, etc.), traffic data, as well as others. The first step in the development of the use case studies will be to produce a high-level outline of the scenario to be used in the project. Starting from the epSOS data exchange gateway, a setup for subsequent validation experiments will be deployed. Since these experiments will involve some novel security mechanisms whose value is not yet proven, current patient data will not be used directly in the use cases. Instead, an equivalent test system will be implemented using synthetic patient data, to verify that security is effective without compromising the data exchange interoperability requirements and that SHiELD solutions are compliant with the European General Data Protection Regulation 2016/679 [4]. The second step of the project will see the creation of synthetic data sets which may be sampled or combined randomly and associated with fictitious patients. This synthetic set of medical information will include the minimum patient summary dataset for electronic exchange developed in the epSOS project [5], defined from the clinical point of view keeping in mind the medical perspective of the final users (medical doctors and patients). SHiELD WP6 deliverable D6.1 describes a set of scenarios and all digitalised data included in Electronic Health Records (EHR), which include, for example:

* Patient's personal data
* Medical histories & progress notes
* Diagnoses
* Acute and chronic medications
* Allergies
* Vaccinations
* Radiology images
* Lab and test results (e.g., blood tests)
* Clinical parameters (blood pressure, heart rate, capillary glucose, ...)

For each scenario it will be necessary to establish the minimum clinical data needed to manage the patient in the most efficient way.
On the one hand, it will be necessary to establish the sensitivity and security of the data; on the other hand, it is essential to provide the health professionals with the minimum indispensable data in order to perform efficient and secure management of the patient. One of the aims of SHiELD is to establish the minimum data necessary for each scenario, in order to improve the clinical management of foreign patients while they travel across Europe. In this way we need to:

* Identify the fields to include, their format and the range of values they can adopt.
* Classify each field as part of the minimum set or as recommended for inclusion, with the final decision to include it or not corresponding to each Health Service.
* Include the field and its value as part of the attributes of the document as a "tag", to identify the essential elements of its content without having to open (decrypt) the document.

To codify the different fields of the minimum dataset that will be exchanged between different Health Systems we have:

* **SNOMED CT** or **SNOMED Clinical Terms**: a systematically organized, computer-processable collection of medical terms providing codes, terms, synonyms and definitions used in clinical documentation and reporting. SNOMED CT is considered to be the most comprehensive multilingual clinical healthcare terminology in the world. The primary purpose of SNOMED CT is to encode the meanings that are used in health information and to support the effective clinical recording of data with the aim of improving patient care. SNOMED CT provides the core general terminology for electronic health records. SNOMED CT's comprehensive coverage includes: clinical findings, symptoms, diagnoses, procedures, body structures, organisms and other aetiologies, substances, pharmaceuticals, devices and specimens.
* **ICD-10**: the 10th revision of the International Statistical Classification of Diseases and Related Health Problems, a medical classification list by the World Health Organization. It contains codes for diseases, signs and symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or diseases. The code set allows more than 14,400 different codes and permits the tracking of many new diagnoses. The codes can be expanded to over 16,000 codes by using optional sub-classifications.

This is just a brief list of medical data; indeed, it represents only a subset of the whole set of medical information that could be involved in the SHiELD project. Documents that may include sensitive information and that can be used to test the technologies developed during the project will be synthetically generated. For example, Figure 1 represents a discharge letter in which personal information can be found (e.g., name and surname). A fake discharge letter using the very same format will be generated, so as to avoid the usage of real patients' data.

Figure 1: Document containing clinical record number and name.

Regarding medical images, Figure 2 represents a slice of a simulated patient.
Within this figure some sensitive information is circled in blue:

* **FANTOCCIO** is the space dedicated to the patient's name and surname;
* **PID** is the internal patient ID, i.e. the code that identifies the patient within the hospital's internal system;
* **Acc.num** is a progressive number in the hospital's internal system.

Figure 2: Slice of a simulated patient

In addition to synthetic data regarding patients' past hospitalisations, the SHiELD project can include mobile data that can be useful for diagnostic purposes. Data could come from both mobile and wearable devices; some examples of datasets are:

* GPS tracks ( _e.g._ localization);
* Posts ( _e.g._ social registrations);
* Last known activities:
  * SMS sent at time XX.XX;
* Weather data;
* Activity tracker - chronic patient monitoring;
* Drug therapy.

These data coming from wearable devices are not directly health-related, although they allow health-related conclusions to be drawn after processing. They will be collected and processed in accordance with the provisions of the Privacy and Electronic Communications Regulation 2 .

### Metadata

All publications will be indexed using Digital Object Identifiers or similar mechanisms so that they can be discovered and identified. All papers in journals and magazines will use this identifier. Concerning the naming convention, we will use the following: <<Dx.y Deliverable name _ date on which the deliverable was submitted.pdf>>. Each paper or deliverable contains a keywords section that can be used to optimise the possibilities for re-use. Each deliverable is tagged with a clear version number, as indicated in Figure 3, Figure 4 and Figure 5. This is part of the metadata that each deliverable contains. Additionally:

* Editor(s): the main leader(s) of this document
* Responsible Partner: the partner(s) mainly responsible for this document
* Status-Version: draft, released, final
* Date: submission date
* Distribution level (CO, PU): confidential or public access, according to the SHiELD proposal
* Project Number: SHiELD project number
* Project Title: SHiELD title
* Title of Deliverable
* Due Date of Delivery to the EC: date to be sent to the European Commission (EC)
* Work package responsible for the Deliverable
* Editor(s): who edited this deliverable
* Contributor(s): who contributed
* Reviewer(s): reviewers
* Approved by: people who internally approved it for submission to the EC
* Recommended/mandatory readers
* Abstract: summarises the document
* Keyword List: a set of words which provide an overview of the topic of this deliverable
* Disclaimer: copyrights, if any

Each document registers its revision history: version number, date, reason for the modification, and by whom it was modified.

Figure 3: Deliverable front page where the version is shown

Figure 4: The document description contains the version number

Figure 5: Page headers contain the version number

## Data openly accessible

Data related to the use cases are going to be accessible through the SHiELD deliverables, which will be published on the website ( _http://www.project-shield.eu/_ ). All deliverables include a set of keywords and a brief description aimed at facilitating the indexing and search of the deliverables in search engines. Scientific publications are going to be published as Open Data; we will use OpenAIRE [6]-compliant repositories. For example, TECNALIA uses its own repository, already indexed by OpenAIRE. There are other repositories, such as Zenodo [7], that can be used.
The deliverables will be stored at AIMES' hosting provider, and for three years beyond the time frame of the project. All data produced will be made available through deliverables, papers in journals/magazines/conferences, or repositories. Where the data used for proving functionalities are not real, they are going to be distributed using open source repositories, which will be easily accessible using a browser. According to the SHiELD Grant Agreement (GA), page 15: "The SHiELD DevOps and solution will be as open source as possible (taking into account exploitation plans and the IPR issues that might arise from the usage of proprietary background)". Basically, all tools follow a freemium licensing schema, where there is a public version that can be released as open source and a commercial edition. All this software will be released at the end of the project, by which time it will be mature enough. At this moment, there are no specific arrangements or restrictions of use (apart from the GA), there is no data access committee, and licenses depend on each tool used in SHiELD.

## Data interoperable

The SHiELD project will produce a platform based on OpenNCP [5], which is interoperable with other software. The structures used for data exchange follow the eHealth DSI Interoperability Specifications [8]. Most of the vocabularies used follow traditional software engineering artefact descriptions, and for the eHealth domain we are using HL7 [9], whose specifications do not have a cost.

## Increase data re-use (through clarifying licences)

Data stemming from the use cases will be delivered through the appropriate deliverables. Our approach is to extend a branch of OpenNCP and to add the SHiELD functionalities. Once the project is finalised, we will integrate these functionalities into the OpenNCP community codebase, and this community will maintain the platform. At the time of writing, we do not envision any embargo on data.

# Allocation of resources

SHiELD does not envision additional resources for handling data management. SHiELD will use open access repositories as much as possible for the following data:

* data related to the use cases
* data related to the meta-analysis
* data coming from publications
* data coming from public deliverables
* open source software

Obviously there is an indirect cost for making data FAIR in our project, but we consider it part of the activities of the SHiELD project. All partners in the SHiELD project are responsible for data management.

# Data security

SHiELD will ensure that the General Data Protection Regulation (GDPR) [4], which came into force in May 2018, is complied with, especially in regard to the protection of private data. In addition, the SHiELD project provides the following key results dealing with data security:

* [KR03] Security requirements identification tool
* [KR04] SHiELD open architecture and open secure interoperability API
* [KR06] Data protection mechanisms: a suite of security mechanisms that address data protection threats and regulatory compliance issues in end-to-end heterogeneous systems
* [KR07] Privacy tool: it monitors data access attempts to ensure that only valid requests are accepted and only the data that is really needed is provided

# Ethical aspects

The basis of ethical research is the principle of informed consent, as stated in our proposal. A clinical protocol will be developed and sent for approval to the ethics committee associated with a given trial and to all the necessary competent authorities.
All participants in SHiELD use cases will be informed of all aspects of the research that might reasonably be expected to influence willingness to participate. Project researchers will clarify questions and obtain permission from participants before and after each practical exercise (e.g. interview, co-creation session, etc.) to maintain ongoing consent. Participants will be recruited by each organization leading the use cases (Osakidetza, FCSR, Lancs) and other supporting organizations (e.g. Ibermática, AIMES), and will cover more than one type of citizen. If participants wish to withdraw from participation in the use cases at any time, they will be able to do so, and their data, even the pseudonymised data, will be destroyed. Each individual partner (Osakidetza, FCSR, Lancs) will act as Data Controller for their trial; data subjects will therefore know whom to contact with any questions or personal data access requests. In WP1 there is a task entitled "Task 1.3 Ethical trials management" through which we ensure that ethical principles are applied throughout the use cases, which are clustered together for management purposes in the work package related to the use cases. Further explanations on ethical matters will be gathered in deliverable D1.8, Ethical protocols and approvals.

# Conclusions

The document presents our SHiELD data management plan according to the established H2020 template for a Data Management Plan (DMP) [1]. This document is a report and reflects the use of data during this first half of the project. It is a living document, and it will be updated on a regular basis. The Data Summary section indicates the purpose of the data collection and generation. SHiELD's purpose is not to process data and to create a knowledge base from this processing. We use "fake" data in the sense that the patient records do not identify real patients with their real health records. In practical terms, though, we are building up test cases which may include individual items (typically MRI or CT scans) associated with real patients, because SHiELD's purpose is to test the technology developed. FCSR, Osakidetza and LANCS each have their own process for generating these test cases, which may be more or less complex. From a more general overview, each data set will be made FAIR (findable, accessible, interoperable and reusable). Section 4 briefly explains how the financial resources for this openness are envisioned at this stage to be allocated. Sections 5 and 6 outline security and ethical aspects respectively, and finally Section 7 summarises this document.
0269_PHASE-CHANGE SWITCH_737109.md
* For each wafer or chip, a runcard (.xls, .xlsx) is maintained that documents all steps of processing.
* Images and micrographs documenting the processing (.gif, .jpg, .png), schematics and drawings (.ppt) for illustration, and numerical data for quantitative understanding (Origin, Excel) are also collected.

These data are stored on the internal computers and servers of the individual partners, as described for the design stages (same procedure). Certain process parameters are kept by the commercial partners of the project on their internal servers, which enables them to protect certain data.

# 1.3. Characterization, measurement, testing data and their storage

Spectroscopic ellipsometry, UV-VIS-NIR spectroscopy, Raman analysis, XRD, vector network analysers, LabVIEW and others are used for extracting the desired data, initially on the tools' interfaces. The data is then transferred to other computers and analysed using Matlab, Mathematica, Cadence, Origin and other tools to extract the desired parameters.

* Names of measurement files are linked to sample names given in run cards. Additional information about sample handling etc. is given in the header data of the measurements.
* The data is stored on the internal servers of each institution.

# 1.4. Reporting data, deliverables, meetings - data accessibility

Once the design, fabrication and measurement steps are over, report drafts and publication drafts are started. Reports: the drafts of the reports and deliverables are uploaded to the project webpage, _https://phasechange-switch.org/_ , and the members of the consortium continuously update them ( _Fig. 2_ ). Access to the files is given only to the consortium members by the managing entity of the webpage, contracted by EPFL. These reports are continuously updated by the partners before the deliverables' submission. Before the review meeting, access is given to the reviewers to all the final deliverables and work package presentations directly on the webpage, with a special password.

_Fig. 2 Consortium data example - a subfolder_

# 1.5. Publications data - data accessibility and open access

Publication drafts are updated by the leading authors in the internal folders of their institution during their development. The following naming convention is kept: yyyy-mm-dd_institution_(title-NAME)_journal/confname_(VERSION-NUMBER).(docx/latex). The leading author collects the necessary information from the internal and external partners via institutional email. Once the article is submitted and, in the case of acceptance, an open access version is uploaded to the institutional webpages of the authors. The open access repository links are as follows:

* _https://infoscience.epfl.ch/?ln=en_ **EPFL Lausanne**
* _https://www.repository.cam.ac.uk_ **UCAM**
* _https://pure.mpg.de/_ **MPG**
* _https://domino.research.ibm.com/library/cyberdig.nsf/index.html_ **IBM**

The final versions of the articles, with the article titles and journal names, are uploaded in PDF format. An example of an uploaded article can be seen in _Fig. 3_ . Additionally, the data associated with the publications of UCAM are uploaded to the University of Cambridge Repository.

_Fig. 3 Example of an open access publication as stored on the EPFL files repository_

It is worth pointing out that MPG hosted the 14th Open Access Conference in Berlin. MPG scientists are encouraged to post their publications in the open repository PuRe ( _https://pure.mpg.de/_ ), and MPG also has its own open access journals (e.g. _http://edition-open-access.de/_ , _https://elifesciences.org/_ ).
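As a purely illustrative aid, the snippet below shows how the draft naming convention of Section 1.5 could be generated programmatically; the function name and example values are assumptions, not part of the project tooling.

```python
# Illustrative sketch of the Section 1.5 draft naming convention:
# yyyy-mm-dd_institution_(title-NAME)_journal/confname_(VERSION-NUMBER).(docx/latex)
from datetime import date

def draft_filename(institution: str, title: str, venue: str,
                   version: int, ext: str = "docx") -> str:
    """Compose a dated, versioned file name for a publication draft."""
    return f"{date.today().isoformat()}_{institution}_{title}_{venue}_v{version}.{ext}"

# Example: draft_filename("EPFL", "PhaseChangeSwitch", "IEDM", 2)
# yields something like "2018-05-14_EPFL_PhaseChangeSwitch_IEDM_v2.docx"
```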
# 1.6. Financial data and time sheets

Mainly Excel sheets and proprietary data formats are used for accounting and other administrative purposes. As part of the project documentation, parts of the data will be stored as physical printouts by each individual institution. The data is used for project administration only, including financial reporting and auditing by the financial departments.

Table 1 below summarises the different types of data collected within the different stages of design/fabrication/characterization/reporting.

Table 1: Data formats used in the Phase-Change Switch project

<table>
<tr> <th> **Type of data** </th> <th> **Software/test tool** </th> <th> **Format** </th> <th> **Approx. size** </th> <th> **Number** </th> <th> **Total size** </th> </tr>
<tr> <td> Presentations </td> <td> PowerPoint </td> <td> *.pptx </td> <td> 10 MB </td> <td> 100 </td> <td> ~1 GB </td> </tr>
<tr> <td> Project management </td> <td> Microsoft Word, LaTeX, Excel </td> <td> *.doc *.docx *.tex </td> <td> 10 MB </td> <td> 500 </td> <td> ~5 GB </td> </tr>
<tr> <td> Images </td> <td> Optical microscope, SEM images </td> <td> *.gif, *.jpg, *.png </td> <td> </td> <td> </td> <td> 150 MB </td> </tr>
<tr> <td> Designs, computations </td> <td> HFSS 3D FEM electromagnetics, Mathematica, Matlab, etc. </td> <td> *.aedt *.m </td> <td> 1 MB - 5 GB </td> <td> 200 </td> <td> 900 GB </td> </tr>
<tr> <td> Mask designs (final output of the designs) </td> <td> L-Edit, KLayout, CleWin </td> <td> *.gds *.tdb </td> <td> 1 MB - 10 MB </td> <td> </td> <td> 200 MB </td> </tr>
<tr> <td> Data from characterizations, testing, measurements </td> <td> ALD and thermal annealing, spectroscopic ellipsometry, UV-VIS-NIR spectroscopy, Raman analysis, XRD, PLD, sputtering tool, vector network analyser, SEM </td> <td> Raw data or csv/spreadsheet, s2p, JPG, TIFF, xlsx, opj </td> <td> <5 MB </td> <td> 2000 </td> <td> 10 GB </td> </tr>
<tr> <td> Data treatment and plotting; mathematical models </td> <td> Excel, Origin, Matlab, Python, Mathematica </td> <td> *.xlsx *.OPJ *.m *.py *.mat </td> <td> 1 MB </td> <td> 150 </td> <td> 150 MB </td> </tr>
<tr> <td> Publication datasets, open access </td> <td> EPFL, IBM, UCAM and MPG have repository sites for open access articles, listed above </td> <td> pdf </td> <td> 3 MB </td> <td> </td> <td> 18 MB </td> </tr>
</table>

The publications are foreseen to be available on the open access servers mentioned above for as long as the institutions exist. With respect to the design files, fabrication files & samples and measurement data collected, the storage procedures are described below:

## 2.1.1. Data lifetime of design, fabrication and characterization data

At AMO, project-related scientific data (process data, device designs, measurement data, log files, etc.) is stored for at least ten years; most data is directly accessible for 15+ years. At MPG, data is supposed to be re-usable and openly accessible for at least ten years for people with reasonable requests. At Thales, the data collected from the project is stored in a specific folder on the Thales network; according to Thales's quality management, data are kept for a few years in the dedicated storage system. At EPFL, data files are foreseen to be stored for around 10 years on the internal servers. At UCAM, policies such as long-term preservation are already in place through the University Library's Office of Scholarly Communication team, and data will be stored for a lifetime to be established at the end of the project.

## 2.1.2. Data lifetime of open access articles / allocation of resources
The lifetime of these data will be given by the lifetime of the institutions, since the open access data are uploaded to the open access repositories listed below, which are intended to live for as long as the institutions exist.

* _https://infoscience.epfl.ch/?ln=en_ **EPFL Lausanne**
* _https://www.repository.cam.ac.uk_ **UCAM**
* _https://pure.mpg.de/_ **MPG**
* _https://domino.research.ibm.com/library/cyberdig.nsf/index.html_ **IBM**

There is currently a one-off charge for long-term curated data storage at the University of Cambridge data repository of £4/GB for datasets above 20 GB. UCAM also uploads additional files together with the publications. Open access data-mining tools have recently been promoted by OpenAIRE and CERN through the https://zenodo.org dataset repository and need to be investigated further to evaluate the benefits for the partners of the consortium in the near future.

## 2.1.3. Data lifetime of shared report drafts, work package presentations and foreseen file exchange directories

As mentioned, the reports, deliverables and work package presentations are shared among the partners via the project webpage. We foresee adding shared files of interest to these pages as well. The lifetime of the webpage is foreseen to be up to 2025, and it will be accessible to the project partners.

# 2.2. Data security and recovery

**AMO:** All project-related data is included in AMO's general data backup cycles. Access to data from strand a) is limited to AMO's administrative staff and the local project coordinator, Dr.-Ing. Jens Bolten. Specific subsets, e.g. data covering salary payments to individual employees working for the project, are further restricted as required by national law. Access to data from strand b) is available to all members of AMO's nanostructuring group responsible for the project as well as to the heads of AMO's other research groups and AMO's CEOs. All staff members with access to the data sets have signed appropriate NDAs to ensure data security.

**IBM:** All data generated, collected and used within the PCS project is handled according to IBM's strict data security rules (IBM BigFix). Automated daily backups are performed and stored on long-term repositories.

**EPFL:** The EPFL centralized file storage service follows modern practices and standards regarding storage, for instance high availability, multiple levels of data protection, and partnership with providers for support. The service is managed centrally by the hosting department of the Vice Presidency for Information Systems (VPSI) and ensures security, coherence, pertinence, integrity and high availability. Two distinct storage locations can be found on the EPFL campus, with replication between the two. Physical servers' pairing and clustering guarantees local redundancy of data. Moreover, volume mirroring protects data in case of disaster at the primary site; the copy is asynchronous and automatic and runs every two hours. The file servers are virtualized for separation between logical data and physical storage; RAID groups ensure physical storage protection: data is split into chunks written on many disks with double parity. Moreover, volume snapshots are used and allow users to restore previous versions if need be. For specific needs, optional backup on tape can also be done. Access to the data is managed by the owner of the volumes through the identity management system of EPFL.
Any person who needs access to data therefore has to be a registered and verified user in the identity management system.

**MPG:** All servers with experimental data have proper backups, and every important experiment is instantly brought to the server.

**Thales:** All data generated, collected and used within the PCS project is handled according to Thales's strict data security rules.

# Ethical aspects

There are no ethical issues or human participants in this research project, so no protocol applies in this case.

# Future development

In the following time frame we plan to create, on _https://phasechange-switch.org/_ , a repository of important files needed during the development of the upcoming deliverables (important design data and characterization data), in order to ease file exchange. For the moment, the site is used for the promotion of new publications and results, as well as for the deliverables and work package presentations/reports produced by the consortium. The research data department of the EPFL Library 1 will help, in closer contact with Nanolab, with the further development of this management plan.
0270_MONICA_732350.md
# Executive Summary

The purpose of the current deliverable is to present the 1st Data Management Plan (DMP) of the MONICA project and to provide the guidelines for maintaining the DMP during the project. The scope of the DMP is to describe the data management life cycle for all data sets to be collected, processed or generated in all Work Packages during the 36 months of the MONICA project. FAIR Data Management is highly promoted by the Commission, and since MONICA is a data-intensive project, relevant attention has been given to this task. However, the DMP is a living document in which information will be made available at a more detailed level through updates and additions as the implementation of the MONICA project progresses and MONICA is deployed at pilot sites.

# Introduction

The Data Management Plan methodology used for the compilation of D11.2 is based on the updated version of the "Guidelines on FAIR Data Management in Horizon 2020" 1 , version 3.0, released on 26 July 2016 by the European Commission Directorate-General for Research & Innovation (EC, 2016). The MONICA DMP addresses the following issues:

* Data Summary
* FAIR data
  * Making data findable, including provisions for metadata
  * Making data openly accessible
  * Making data interoperable
  * Increase data re-use
* Allocation of resources
* Data security
* Ethical aspects
* Other issues

According to the EU's guidelines regarding the DMP, the document may be updated - if appropriate - during the project lifetime (in the form of deliverables). In MONICA, the pilot sites are in different countries and of very different types, and the deployment and usage of the deployed MONICA functionalities is not yet defined. Therefore, we will need to update the DMP with the data that is being collected/created at each pilot site, according to its usage and whether it can be published as Open Data.

# Methodology

The Data Management Plan covers all the data sets that will be collected, processed and/or generated within the project. The methodology the consortium follows to create and maintain the project DMP is as follows:

1. Create a data management policy.
   1. Using the elements that the EC guidelines (EC, 2016) propose to address for each data set.
   2. Adding the strategy that the consortium uses to address each of the elements.
   3. Adding a self-audit process for the DMP.
2. Create a DMP template that will be used in the project for each of the collected data sets; see Appendix 1, MONICA Template for DMP.
3. Creating and maintaining DMPs.
   1. If a data set is collected, processed and/or generated within a work package, a DMP should be filled in, for instance for training data sets, example collections, etc.
   2. For each of the pilots, when it is known which data will be collected, the DMP for that pilot should be filled in.
4. The filled DMPs should be added to this document as updates in Section 5.
   1. This document is the living document describing which data is collected within the project as well as how it is managed.
5. Towards the end of the project, an assessment will be made of which data is valuable to keep as Open Data after the end of the project.
   1. For the data that is considered valuable, an assessment will be made of how the data can be maintained and of the cost involved. We expect that in the MONICA project the participating cities can accommodate most of this data within their existing Open Data infrastructure.
# MONICA Open Data Management Policy

The party responsible for creating and maintaining the DMP for a data set is the partner that creates/collects the data. If a data set is collected, processed and/or generated within a work package, a DMP should be created. Before each pilot execution it should be clear which data sets are collected/created in the pilot and how the data will be managed, i.e. the DMPs for the pilot data must be ready and accepted. This will be done individually for each of the pilots because of the differences between them: the pilots are in different countries and cover different types of events, i.e. closed, open, etc.

## Naming and identification of the Data set

To have a mechanism for easily identifying the different collected/generated data sets, we will use a naming scheme. The naming scheme for MONICA data sets will be a simple hierarchical scheme including the country, pilot, creating or collecting partner and a descriptive data set name. This name should be used as the identification of the data set when it is published as Open Data in different open data portals.

MONICA_{Country or WP}_{Pilot Site or WP}_{Responsible Partner}_{Description}_{Data Set Sub Index}

**Figure 1: MONICA Data Set Naming Scheme**

The parts are defined as follows:

* **MONICA:** Static for all data sets, identifying the project.
* **Country:** The two-letter ISO 3166-1 country code for the pilot where data has been collected or generated.
* **Pilot Site:** The name of the pilot site where the data was collected, without spaces, i.e. Tivoli, RheinInFlammen, etc.
* **Responsible Partner:** The partner that is responsible for managing the collected data, i.e. creates and maintains the Open Data Management plan for the data set, using the acronyms from the DoA.
* **Description:** Short name for the data set, without spaces, i.e. SoundPressure, PeopleCount, etc.
* **Data Set Sub Index:** Optional numerical index starting at 1. The intention is that data sets created/collected at different times can be distinguished and have their individual metadata.

MONICA_DK_Tivoli_CNET_AggregatedPeopleCount_1

**Figure 2: Example naming of a MONICA data set**

In the example above, the data set was created within MONICA in Denmark at Tivoli. CNET is responsible for the Open Data Management plan for the data set. The data set contains aggregated people counts, and it is the first of a series of data sets collected at different times. There can be situations where the data needs to be anonymised with regard to the location where it was collected; for instance, at some pilots it might not be allowed to publish people count data with the actual event location, for security reasons. In these cases, the **Country** and **Pilot Site** will be replaced by the string **UNKNOWN** when the data is made available as Open Data. For data sets that are not connected to a specific pilot site, the **Pilot Site** should be replaced with the prefix WP followed by the number of the Work Package that creates and maintains the Open Data Management plan for the data set, i.e. **WP6**. The same applies to the **Country** part, which should also be replaced with the prefix WP followed by the Work Package number in cases where the data set is not geographically dependent, such as pure simulations or statistics.
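The naming scheme of Figure 1 is simple enough to be generated and checked automatically; the following sketch is an illustration only, not part of the MONICA toolchain, and the helper name is an assumption. It builds an identifier and verifies it against the example of Figure 2.

```python
# Illustrative helper for the Figure 1 naming scheme (not MONICA tooling).

def monica_dataset_name(country, pilot_site, partner, description, sub_index=None):
    """Build a data set identifier.

    country/pilot_site may be 'UNKNOWN' for anonymised locations, or a
    'WP<n>' token for data sets not tied to a country or pilot site."""
    parts = ["MONICA", country, pilot_site, partner, description]
    if sub_index is not None:
        parts.append(str(sub_index))  # optional sub index, starting at 1
    return "_".join(parts)

# Reproduces the example of Figure 2:
assert (monica_dataset_name("DK", "Tivoli", "CNET", "AggregatedPeopleCount", 1)
        == "MONICA_DK_Tivoli_CNET_AggregatedPeopleCount_1")
```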
## Data Summary / Data set description

The data collected/created needs to be described, including the following information:

* State the purpose of the data collection/generation.
* Explain the relation to the objectives of the project.
* Specify the types and formats of data generated/collected.
* Specify if existing data is being re-used (if any).
  * Provide the identification of the re-used data, i.e. a MONICA identifier or a pointer to external data, if possible.
* Specify the origin of the data.
* State the expected size of the data (if known).
* Outline the data utility: to whom will it be useful?

## Fair Data

FAIR data management means, in general terms, that research data should be “FAIR”, that is findable, accessible, interoperable and re-usable. These principles precede implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution.

### Making data findable, including provisions for metadata

This point addresses the following issues:

* Outline the discoverability of data (metadata provision).
* Outline the identifiability of data and refer to standard identification mechanisms.
* Outline the naming conventions used.
* Outline the approach towards search keywords.
* Outline the approach for clear versioning.
* Specify standards for metadata creation (if any).

As far as the metadata are concerned, the way the consortium will capture and store this information should be described. For instance, for data records stored in a database, metadata with links to each item can pinpoint their description and location. There are various disciplinary metadata standards; however, the MONICA consortium has identified a number of available best practices and guidelines for working with Open Data, mostly by organisations or institutions that support and promote Open Data initiatives, which will be taken into account. These include:

* Open Data Foundation
* Open Knowledge Foundation
* Open Government Standards

Furthermore, data should be interoperable, adhering to standards for data annotation and data exchange, and compliant with available software applications related to smart cities, security, and acoustics.

### Making data openly accessible

The objectives of this point address the following issues:

* Specify which data will be made openly available; if some data is kept closed, explain why.
* Specify how the data will be made available.
  * Will the data be added to any Open Data registries?
* Specify what methods or software tools are needed to access the data, whether documentation about the software is necessary and whether it is possible to include the relevant software (e.g. as open source code).
* Specify where the data and associated metadata, documentation and code are deposited.
  * Will the data be stored in external Open Data portals or will it remain in the MONICA cloud Open Data Portal?
* Specify how access will be provided in case there are any restrictions.

### Making data interoperable

This point will describe the assessment of the data interoperability, specifying what data and metadata vocabularies, standards or methodologies will be followed in order to facilitate interoperability. Moreover, it will address whether standard vocabulary will be used for all data types present in the data set in order to allow inter-disciplinary interoperability.

Within the MONICA project we will deal with data of many different types and from very different sources, but in order to promote interoperability we use the following guidelines (a sketch of a record following the first guideline is shown after the list):

* OGC SensorThings API model for time series data (OGC, 2017), such as environmental readings, etc.
* If the data is part of a domain with well-known open formats that are in common use, one of these should be selected.
* If the data does not fall into the previous categories, an open and easily machine-readable format should be selected.
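To illustrate the first guideline, the snippet below shows what a single time-series reading could look like when shaped after the OGC SensorThings API entity model (Thing, Datastream, Observation). This is an illustrative sketch only: the field names follow the SensorThings conventions (OGC, 2017), while the identifier and values are invented for this example.

```python
import json

# A single sound level reading shaped after the OGC SensorThings API model.
observation = {
    "phenomenonTime": "2017-08-22T20:15:00Z",  # when the value was measured
    "resultTime": "2017-08-22T20:15:01Z",      # when the result was generated
    "result": 92.4,                            # e.g. a sound pressure level in dB
    # link to the Datastream the observation belongs to; the id here reuses
    # the MONICA naming scheme and is invented for illustration
    "Datastream": {"@iot.id": "MONICA_DK_Tivoli_BK_SoundPressure_1"},
}

# In a SensorThings service, such observations are POSTed to, and read from,
# the Observations collection, e.g. <service-root>/v1.0/Observations.
print(json.dumps(observation, indent=2))
```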
### Increase Data Re-use

This point addresses the following issues:

* Specify how the data will be licensed to permit the widest re-use possible.
  * Tool to help select a license: _https://www.europeandataportal.eu/en/content/show-license_
  * If a restrictive license has been selected, explain the reasons behind it.
* Specify when the data will be made available for re-use.
* Specify if the data produced and/or used in the project is usable by third parties, especially after the end of the project.
* Provide a description of the data quality assurance processes, if any.
* Specify the length of time for which the data will remain re-usable.

## Allocation of Resources

The objectives of this point address the following issues:

* Estimate the costs for making the data FAIR and describe the method of covering these costs.
  * This includes, if applicable, the cost for anonymising data.
* Identify responsibilities for data management in the project.
* Describe costs and potential value of long-term preservation.

The MONICA project will host an Open Data portal in its cloud that will be used for storing the Open Data. The Data Protection Manager of MONICA is responsible for accepting the security solution used for the MONICA cloud solution. The actual maintenance of data will be the responsibility of the DMP owners, i.e. the creators of the data.

## Data security

If data is not stored within the MONICA system, the mechanism for data security and backup must be described. For sharing of sensitive data within the consortium, the guidelines in chapter 4 of D10.5, The MONICA Ethical Guidelines, should be followed. Live pilot data must be assessed by the Cyber Security risk management process defined in D10.11.

## Ethical aspects

How does the data relate to the guidelines in deliverable D10.5, The MONICA Ethical Guidelines? An important point is: does the informed consent, if any, cover the intended use of the data, including long-term preservation?

## Self-Audit Process

Within the MONICA project, the ethical manager will be in charge of the execution of the defined data management plan and will supervise compliance with legal and ethical requirements in terms of information security and data protection. The existence of an auditing mechanism is deemed necessary in order to avoid the publication of non-validated data.
**Figure 3: The self-auditing process of MONICA**

The steps of the Self-Audit process that will be implemented are summarized below:

* Self-Audit Planning
  * Plan and set up the Self-Audit
  * Collect relevant documents
* Identification, Classification and Assessment of Data Sets
  * Analyze documents
  * Identify data sets
  * Classify data sets
  * Assess data sets
* Report of Results and Recommendations
  * Collate and analyze information from the audit
  * Report on the compliance with the Data Management Plan
  * Identify weaknesses and decide on corrective actions

## Other issues

Other issues will refer to other national/funder/sectorial/departmental procedures for data management that are used.

# Initial DMP Components in MONICA

During the next period each work package will analyse which DMP components are relevant to it. When the pilot definitions are ready with regard to which data is collected and how data is used, the DMPs for the pilots need to be created. This definition will follow the template in Appendix 1. Below we present a first set of initial generic DMP components.

## User Scenarios

<table>
<tr>
<th> **DMP Element** </th>
<th> **Issues to be addressed** </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP2_WP2_DEXELS_UserScenarios_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> DEXELS </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-07-08; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> Definition of user scenarios for scoping of the initial requirements (Deliverable 2.1 Scenarios and Use Cases for use of IoT Platform in Event Management) is based on interviews, questionnaires and discussions with pilot partners. Collating data from end users is an integral part of the MONICA project – co-production of the final product will help to ensure that a useful product is created. The origin of the data is from pilot use case partners in the MONICA project. Written responses are likely to be fairly small in size (<1 GB over the course of the project). </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> It will become both discoverable and accessible to the public when the consortium decides to do so. The report contains a table stating all versions of the document, along with who contributed to each version, what the changes were, as well as the date the new version was created. </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> The data are available in D2.1: Scenarios and Use Cases for use of IoT Platform in Event Management. The dissemination level of D2.1 is public. It is to be available through the MONICA wiki for the members of the consortium and, when the project decides to publicize deliverables, it will be uploaded along with the other public deliverables to the project website or anywhere else the consortium decides. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> Raw data cannot be made freely available because it contains sensitive information. </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Engineers who want to build similar systems could use this as an example. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> The Scenario and Use Case report will be securely saved on the Fraunhofer premises and will be shared with the rest of the partners through the MONICA wiki.
</td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> N/A </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

## User Requirements

<table>
<tr>
<th> **DMP Element** </th>
<th> **Issues to be addressed** </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP2_WP2_DEXELS_UserRequirements_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> DEXELS </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-06-08; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> Analysis and definition of User Requirements for scoping of the initial requirements (Deliverable 2.3 Initial Requirements Report, which will be followed by D2.4 Updated Requirements Report) are based on interviews, questionnaires and discussions with pilot partners (see previous DMP). The data is essential for the technical team to develop the MONICA platform; other partner teams throughout the project, as well as the wider research community, will benefit when results are published. </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> It will become both discoverable and accessible to the public when the consortium decides to do so. The report contains a table stating all versions of the document, along with who contributed to each version, a changelog, as well as the date the new version was created. </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> The data are available in D2.3: Initial Requirements Report. The dissemination level of D2.3 is confidential. It is available through the MONICA wiki for the members of the consortium and, when the project decides to publicize deliverables, it will be uploaded along with the other public deliverables to the project website or anywhere else the consortium decides. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> Raw data is recorded and formatted following the Volere template in the JIRA issue tracker hosted at the Fraunhofer premises. </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Engineers who want to build similar systems could use this as an example. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> The Initial Requirements Report will be securely saved on the Fraunhofer premises and will be shared with the rest of the partners through the MONICA BSCW document sharing system hosted by FIT. </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> N/A </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

## System Architecture

<table>
<tr>
<th> **DMP Element** </th>
<th> **Issues to be addressed** </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP2_WP2_ISMB_SystemArchitecture_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> ISMB </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-08-10; Partner: CNET; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> A report describing the MONICA platform in detail, containing information like component descriptions and dependencies, API descriptions, an information flow diagram, internal and external interfaces, hardware requirements and testing procedures. This will be the basis upon which the system will be built. </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> It will become both discoverable and accessible to the public when the consortium decides to do so.
The report contains a table stating all versions of the document, along with who contributed to each version, a changelog, as well as the date the new version was created. </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> The data are available in D2.2: MONICA IoT architecture. The dissemination level of D2.2 is confidential. It is to be available through the MONICA wiki for the members of the consortium and when the project decides to publicize deliverables. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> N/A </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Engineers who want to build similar systems could use this as an example. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> The Architecture report will be securely saved on the Fraunhofer premises and will be shared with the rest of the partners through the MONICA BSCW document sharing system hosted by FIT. </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> N/A </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

## Pilot Generated Data Sources

The main foreseen data sources of MONICA will come from the Pilot Use Cases and will be made available as open data as far as possible. However, there are also many data sets that cannot be made available due to their sensitivity, for instance live video recordings. Below is an initial set of generic DMP components that are not linked to individual pilots, but we foresee that each component published as Open Data from a pilot will need its own individual DMP.

### Sound Level Time Series

<table>
<tr>
<th> **DMP Element** </th>
<th> **Issues to be addressed** </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP4_WP4_B&K_SoundLevel_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> B&K </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-08-22; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> The data collected from deployed acoustic sensors with sound level measurements. The data will also contain the positions of the sensors as geocoordinates. </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> The data will be made available in the OGC SensorThings API format </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> The data will be published on the MONICA Open Data portal. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> The data will be made available in the OGC SensorThings API format </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Data scientists and sound engineers will benefit from being able to analyse the real-time data streams as well as historic records. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> N/A </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

### Surveillance Video Streams

<table>
<tr>
<th> DMP Element </th>
<th> Issues to be addressed </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP5_WP5_KU_VideoStream_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> KU </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-08-21; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> Surveillance cameras installed at the different events will generate large amounts of data. This data will not be publicly available.
</td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> The data will generally not be stored in the MONICA cloud, since the amount of data is too large. Instead it will be stored locally, if at all. </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> In general, the data will not be publicly available. After careful analysis, some video streams might be made available for research purposes if it is possible from a security and privacy perspective. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> The data will be in a standard video format. </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Data scientists and engineers will benefit from being able to analyse the data streams. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> Should follow the guidelines from D10.11 </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> Local storage of video will follow the ethical guidelines and the local regulations and laws of the site. </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

### Wearables Positioning Streams

<table>
<tr>
<th> DMP Element </th>
<th> Issues to be addressed </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP3_WP3_Dexels_PositionStream_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> DEXELS </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-08-22; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> UWB wristbands provide a stream of position information that is used for locating people for different purposes. </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> The data will be made available in the OGC SensorThings API format </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> In general, the data will not be publicly available. After careful analysis, some position streams might be made available for research purposes if it is possible from a security and privacy perspective. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> The data will be made available in the OGC SensorThings API format </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> Data scientists and engineers will benefit from being able to analyse the real-time data streams as well as historic records. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> The data will be securely stored in the ATOS-deployed MONICA cloud, following the guidelines from D10.11 </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> Contains possibly sensitive information about individual people's movements. </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

### Common Operational Picture

<table>
<tr>
<th> **DMP Element** </th>
<th> **Issues to be addressed** </th>
</tr>
<tr>
<td> Identifier </td>
<td> MONICA_WP6_WP6_CNET_CommonOperationalPicture_1 </td>
</tr>
<tr>
<td> DMP Responsible Partner </td>
<td> CNET </td>
</tr>
<tr>
<td> Revision History </td>
<td> Date: 2017-08-23; Partner: CNet; Name: Peeter Kool; Description of change: Created initial DMP </td>
</tr>
<tr>
<td> Data Summary </td>
<td> The Common Operational Picture is the most central data element in MONICA. It represents the current status of all relevant operations and process parameters at the event site, such as the number of visitors, currently reported incidents, threat levels, sound levels, etc.
The data will be accessed and used by the event operators and security personnel. </td>
</tr>
<tr>
<td> Making data findable, including provisions for metadata </td>
<td> The data will be made available in SQL and NoSQL formats. It will be searchable using OData 2.0. </td>
</tr>
<tr>
<td> Making data openly accessible </td>
<td> In general, the data will not be publicly available. Parts of it can be made available if possible, considering ethical and security concerns. </td>
</tr>
<tr>
<td> Making data interoperable </td>
<td> The data will be made available in standard formats, such as the OGC SensorThings API, depending on the type of information. </td>
</tr>
<tr>
<td> Increase Data Re-use </td>
<td> The information could be very useful for research in all areas related to public outdoor events, such as people movement, incident detection, etc. </td>
</tr>
<tr>
<td> Allocation of Resources </td>
<td> N/A </td>
</tr>
<tr>
<td> Data security </td>
<td> The data will be securely stored in the ATOS-deployed MONICA cloud, following the guidelines from D10.11 </td>
</tr>
<tr>
<td> Ethical aspects </td>
<td> Can contain sensitive data linking individuals to locations and actions. </td>
</tr>
<tr>
<td> Other Issues </td>
<td> </td>
</tr>
</table>

# Conclusion

The purpose of this document is to provide the plan for managing the data generated and collected during the project: the Data Management Plan. Specifically, the DMP describes the data management life cycle for all data sets to be collected, processed and/or generated by a research project. This document will be continuously updated with DMPs for each of the data sets during the project lifetime.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0272_FOSTER Plus_741839.md
**SUMMARY** This document is the first version of the Data Management Plan (DMP) for data collected and created by the FOSTER Plus project. The project creates data from three different activities: the FOSTER Portal, the training activities and the dissemination activities. The training activities are supported by the FOSTER Portal, and the dissemination helps the project to reach more participants and involve the community. This DMP has been made using the DMP Online tool. The deliverables 1.1 - Humans Requirement and 1.2 - Protection of Personal Data Requirement are related to this management of information in the context of the project.

# 1\. DATA SUMMARY

The FOSTER Plus project is supported by a central system, defined as the FOSTER Portal, that compiles different modules to fit the project needs:

* Content Portal – Contents about Open Science topics to be reused in other training contexts, online and face-to-face.
* Event Calendar – Calendar of events provided by the community and the project partners regarding online or face-to-face events on the topics of Open Science.
* Learning Module – Focus on e-learning courses with the different contents from the Portal integrated into those courses.
* Speaker/Trainer Directory – Directory of trainers and speakers that can be contacted to provide local or online training or participation in events. It is a network of specialists regarding Open Science topics.

Besides the FOSTER Portal, the project maintains a list of contacts for the newsletter, integrated into the Mailchimp service. Finally, the project defined the FOSTER Open Science taxonomy, which will also be shared with the community as an open resource. Regarding the FOSTER Portal, usage data is generated through the Google Analytics service.

## Purpose of Data Collection

The usage data collected is useful to understand the scope of people reached by the project, the types and patterns of usage and the number of visits that specific parts of the Portal obtain. Regarding the content portal, the data gathered is about the actual number/percentage of deposited contents, to be able to understand patterns in terms of sharing practices. To provide specific permissions for users (deposit contents, submit events, attend courses), a user account with basic information is needed. Regarding the attendance of courses, the user's progress through the course contents is tracked.

## Types and formats of data

All the data is available in three forms: as files on the filesystem of the servers, as items in the database, and in an external service, Google Analytics, for traffic analysis.

## Reuse of data

From the existing data gathered by the project, the following datasets can be shared:

* Contents per Topic
* FOSTER Open Science Taxonomy

No personal data is shared.

## Expected size of the data

The existing datasets are very small; the expected size is around 5 MB. Only the Portal resources are bigger: approximately 2,000 resources occupy 6.3 GB of space.

## Data Utility

The information may be useful for science managers, trainers and funders to understand the existing contents available for reuse on the FOSTER Portal. The taxonomy may be integrated into other systems or projects as a community-validated taxonomy regarding Open Science.

# 2\. FAIR DATA
## 2.1. MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA

The datasets produced in the project will be made available in the Zenodo repository, associated with digital object identifiers (DOIs) - https://zenodo.org/communities/foster/.

## 2.2. MAKING DATA OPENLY ACCESSIBLE

All the datasets will be deposited and made available in open access. The format is CSV, which can be used by any spreadsheet software or text editor. For each dataset, contextual information describing the data and its timespan will be made available.

## 2.3. MAKING DATA INTEROPERABLE

The datasets will be available in open formats (CSV) and described based on the Dublin Core metadata schema, following the OpenAIRE 3.0 Guidelines, associating the datasets with the project.

## 2.4. INCREASE DATA RE-USE

The datasets made available on the repository will be available in open access under the CC-BY license in the FOSTER project collection on Zenodo.org (_https://zenodo.org/communities/foster/_).

# 3\. ALLOCATION OF RESOURCES

The responsibility for managing the data related to the project lies with the PSC members (UMinho, OU, EIFL, UGOE, GLASGOW). The datasets will be updated during the project duration and will be made available in the Zenodo repository, based on the existing service agreement provided by the service. Other options may be defined later based on the sustainability of the project.

# 4\. DATA SECURITY

The existing data is managed by the Open University partner, and all backup, preservation and monitoring processes are already defined by local policies. All data are stored on servers at The Open University facilities. The servers are protected with a username and a password, and access to the servers is restricted. The software containing the data is periodically scanned in security audits and is kept updated and monitored to avoid any breach. In addition, the material is backed up every night.

# 5\. ETHICAL ASPECTS

The project has already defined two deliverables for ethical aspects of the project (1.1 - Humans Requirement and 1.2 - Protection of Personal Data Requirement). In accordance with legal restrictions, personal data concerning the portal users are not shared in any manner.

# 6\. OTHER

All datasets made available will have explicit information about the project grant agreement 741839.
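As a concrete illustration of the deposit workflow described in sections 2.1-2.4 (CSV datasets, CC-BY license, DOIs, FOSTER community on Zenodo), the sketch below uses Zenodo's REST deposition API. It is illustrative only: the file name is hypothetical, the access token is a placeholder, and the exact endpoint and metadata field values (e.g. the license identifier) should be verified against Zenodo's current documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder; never commit real tokens

# Metadata mirroring sections 2.1-2.4: open-access dataset, CC-BY,
# FOSTER community, grant agreement noted in the description (section 6).
metadata = {
    "metadata": {
        "title": "FOSTER Plus - Contents per Topic",
        "upload_type": "dataset",
        "description": "Number of FOSTER Portal contents per Open Science "
                       "topic. Part of H2020 project FOSTER Plus (GA 741839).",
        "creators": [{"name": "FOSTER Plus consortium"}],
        "license": "cc-by-4.0",
        "communities": [{"identifier": "foster"}],
    }
}

# Create the deposition, then attach the CSV file.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json=metadata)
r.raise_for_status()
deposition = r.json()
with open("contents_per_topic.csv", "rb") as fh:  # hypothetical file name
    requests.post(f"{ZENODO_API}/{deposition['id']}/files",
                  params={"access_token": TOKEN},
                  data={"name": "contents_per_topic.csv"},
                  files={"file": fh}).raise_for_status()
# Publishing (POST .../actions/publish) mints the DOI mentioned in section 2.1.
```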
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0274_JERRI_709747.md
<table>
<tr>
<th> **Data set description:** </th>
</tr>
<tr>
<td> Data set **JERRI_state_of_the_art_interviews.xlsx**: The data has been generated by Fraunhofer ISI (Benjamin Teufel, Ralf Lindner, Bruno Gransche and Kerstin Goos) and TNO (interview team led by Joram Nauta) by carrying out and recording expert interviews and extracting the relevant information in Microsoft Excel. The final data set will be compiled by integrating several single Excel files and carrying out minor corrections. It has a size of about 130 kB. A documentation of the interview guideline and analysis is contained in “Deliverable D1.1 Synthesis on existing good RRI practices”.

Data set **JERRI_Preliminary_comparison_of_RTOs.xlsx**: The data set has been generated by TNO (Anne Joignant) by desk research and the analysis of relevant documents and is documented with Microsoft Excel. It has a size of about 60 kB. It is also included as an annex in “Deliverable D1.1 Synthesis on existing good RRI practices”.

Data set **JERRI_Goal_setting_workshop_documentation_FhG.pdf**: The data set has been generated by Fraunhofer ISI (Benjamin Teufel) by applying workshop methods and a documentation of workshop results with Microsoft PowerPoint. It has a size of 600 kB. A documentation for the replication of the results (in other contexts) will be included in the publicly available report “Deliverable D2.2 Description on specified RRI goals at Fraunhofer”.

Data set **JERRI_Goal_setting_workshop_documentation_TNO.pdf**: The data set has been generated by TNO (Joram Nauta); it documents the process and the outcome of the goal-developing phase of JERRI. It is further outlined in “Deliverable D3.2 Description on specified RRI goals at TNO”.

Data set **JERRI_OA_online_survey_FhG.pdf**: The data will be generated by Fraunhofer IRB (survey by Andrea Wuchner and Tina Klages) by carrying out and recording an online survey. The results are documented in CSV format. It has a size of about 500 kB.

Data set **JERRI_Interviews_barriers_FhG.xlsx**: The data has been generated by Fraunhofer ISI (interview team led by Philine Warnke) by carrying out and recording expert interviews and extracting the relevant information in Microsoft Excel. The final data set will be compiled by integrating several Excel files and carrying out minor corrections. It has a size of about 20 kB. A documentation of the interview guideline and analysis will be contained in “Deliverable D4.1 Discussion paper on the analysis of organizational barriers”. </td>
</tr>
</table>

<table>
<tr>
<th> Data set **JERRI_Interviews_barriers_TNO.xlsx**: The data will be generated by TNO (interview team led by Joram Nauta), analogous to the data set JERRI_Interviews_barriers_FhG.xlsx, and documented in “Deliverable 5.1 Discussion paper on the analysis of organizational barriers (TNO part)”.

Data set **JERRI_Action_plan_workshop_documentation_FhG.pdf**: The data has been generated by Fraunhofer ISI (Philine Warnke) by applying workshop methods and a documentation of workshop results with Microsoft PowerPoint. It has a size of 3 MB. A documentation for the replication of the results (in other contexts) is included in the publicly available report “Deliverable D4.2 Transformative RRI action plan for Fraunhofer”.
Data sets **Workshop_evaluation_data_evaluation_report_I.xlxs** and **Workshop_evaluation_data_evaluation_report_II.xlxs**: The data sets have been generated by the IHS team (Magdalena Wicher, Elisabeth Frankus, Alexander Lang, Milena Wuketich and Erich Griessler) by applying interview, workshop and survey methods, respectively, and are documented in the respective formats. Documentation for the replication of the results (in other contexts) will be included in the publicly available reports “Deliverable D 8.2 Evaluation report I”, “Deliverable D 8.3 Evaluation report II” and “Deliverable 8.4 Summative Evaluation”.

Data set **JERRI_International_interviews.docx**: The data has been generated by Fraunhofer ISI (Stephanie Daimer, Hendrik Berghäuser) by carrying out and recording expert interviews and extracting the relevant information in Microsoft Word. It has a size of 120 kB. A documentation of the interview guideline and analysis is contained in “Deliverable D9.1 Case study part I: RRI goals and practices” and “Deliverable D 9.2 Case study part II: Good practices for RRI institutionalisation”.

Data set **JERRI_First_International_mutual_learning_workshop_documentation.pdf**: The data set has been generated by Fraunhofer ISI (Stephanie Daimer and Cheng Fan) by applying workshop methods and a documentation of workshop results with Microsoft Word. It will presumably have a size of 1 MB. A documentation for the replication of the results (in other contexts) will be included in “Deliverable D9.1 Case study part I: RRI goals and practices” and “Deliverable D 9.2 Case study part II: Good practices for RRI institutionalisation”. </th>
</tr>
</table>

<table>
<tr>
<th> General remark on the public availability of the data sets: The JERRI consortium provides all above-mentioned data sets to the public via the respective repositories (see sections “Data use after publication” and “Standards and metadata”). Concerning the data sets of the International Mutual Learning Workshops (WP9), only the data/documentation of the first International Mutual Learning Workshop is released, so as not to risk the anonymity of the participants. The same goes for the data sets 'JERRI International Interviews' and 'JERRI Interviews barriers' (FhG & TNO), in which some parts of the interview results had to be left out in order to ensure the anonymity of the interview partners, who could in some cases easily be identified by considering the respective reports. The data sets 'JERRI Action Plan Workshop Documentation' (TNO), 'Interview analysis for the evaluation report 1 & 2' (IHS) and the 'Survey data outcomes' (IHS), which had been listed in the former version of this document (from November 2016), cannot be made publicly available at all, in order not to risk the anonymity of interviewed persons and/or to protect the confidentiality of internal organisational processes. </th>
</tr>
<tr>
<td> **Data security and handling of sensitive data:** </td>
</tr>
<tr>
<td> Data security: All data sets will be stored on secure local devices of the collecting consortium institution. Data sets will be automatically and regularly backed up (cf. section “Archival storage and conservation”).

Handling sensitive data: For all data sets, the EU directive 95/46/EC will be applied; in particular, no personal data or links to individuals will be included in the data sets published by the JERRI consortium. More information on the handling of sensitive data is included in the formal “Deliverable D12.1: H - Requirement No.
1” and “Deliverable D12.2: POPD – Requirement No. 2”. </td>
</tr>
<tr>
<td> **Data use after publication:** </td>
</tr>
<tr>
<td> Subsequent use of the data sets that are made publicly available will be possible from the point when they are published via the respective repositories, Fraunhofer Fordatis and IRIHS – Institutional Repository at IHS Vienna. </td>
</tr>
</table>

<table>
<tr>
<th> Records of all publicly available data sets: The records contain information on copyrights on the respective data as well as on the JERRI project (cf. section “Standards and metadata”). Records will be publicly visible. As soon as it is technically possible, they will be disseminated to further systems via OAI-PMH, e.g. to the OpenAIRE repository, and indexed via Google. </th>
</tr>
<tr>
<td> **Standards and metadata:** </td>
</tr>
<tr>
<td> At **Fraunhofer**, all data sets that will be made publicly available via the institutional research data infrastructure of the **Fraunhofer-Gesellschaft** will be recorded in “Fordatis” (**_https://fordatis.fraunhofer.de_**), provided with significant, standardised metadata and a DOI. A metadata profile has been developed; it is based on the general standard DataCite version 4.0 (_http://schema.datacite.org/_) and has been supplemented. Metadata will be stored and searchable in the Fraunhofer Publica repository. The metadata profile is linked to further project information.

At **TNO**, no institutional infrastructure for Open Data has been established yet. As part of the JERRI project itself, it will be assessed if and how repositories comparable to Fraunhofer Fordatis can/will be established. Data sets collected, stored and/or processed by TNO are made publicly available via Fraunhofer Fordatis if at least one author is part of the Fraunhofer-Gesellschaft.

At **IHS**, all data sets that will be made public are available via the institutional research data infrastructure of IHS, IRIHS – Institutional Repository at IHS Vienna (_http://irihs.ihs.ac.at/_). For the repository policy, including the metadata policy, see _http://irihs.ihs.ac.at/policies.html_. Particularly relevant metadata issues are (excerpt from the metadata policy):

* “Anyone may access the metadata free of charge.
* The metadata may be re-used in any medium without prior permission for not-for-profit purposes provided the OAI Identifier or a link to the original metadata record are given.
* The metadata must not be re-used in any medium for commercial purposes without formal permission.” </td>
</tr>
</table>

<table>
<tr>
<th> **Archival storage and conservation (incl. backup):** </th>
</tr>
<tr>
<td> According to the rules of good scientific conduct, data will be stored for at least 10 years, irrespective of their publication. Data will be stored and backed up on secure local devices in the respective consortium institution that collected the data.

Archival storage and conservation of the data sets at the **Fraunhofer-Gesellschaft**: All data sets that will be made publicly available via the institutional research data infrastructure of the Fraunhofer-Gesellschaft will be recorded in Fordatis. Data will be stored geo-redundantly at two different locations. A minimum storage of 30 MB will be necessary. Measures for long-term archiving will be implemented.

Archival storage and conservation of the data sets at **IHS**: Data sets will be stored at IRIHS – Institutional Repository at IHS Vienna (cf.
section “standards and metadata”) – a backup server produces a backup once per day, keeps the backup for one year, and then deletes it. </td>
</tr>
<tr>
<td> **Responsibilities for the handling of data sets** </td>
</tr>
<tr>
<td> Responsibilities for the following data set:

* JERRI_state_of_the_art_interviews.xlsx

Collection: Fraunhofer ISI and TNO
Storage: Fraunhofer ISI
Processing: Fraunhofer ISI and TNO
Documentation: Fraunhofer ISI and TNO (Deliverable D1.1)
Publication: Fraunhofer ISI

Responsibilities for the following data sets:

* JERRI_Goal_setting_workshop_documentation_FhG.pdf;
* JERRI_OA_online_survey_FhG.pdf;
* JERRI_Interviews_barriers_FhG.xlsx;
* JERRI_Action_plan_workshop_documentation_FhG.pdf;
* JERRI_International_interviews.docx;
* JERRI_First_International_mutual_learning_workshop_documentation.pdf

Collection: Fraunhofer ISI </td>
</tr>
</table>

<table>
<tr>
<th> Storage: Fraunhofer ISI
Processing: Fraunhofer ISI
Documentation: Fraunhofer ISI (Deliverables D2.2, D4.2, D9.1)
Publication: Fraunhofer ISI

Responsibilities for the following data sets:

* JERRI_Preliminary_comparison_of_RTOs.xlsx;
* JERRI_Goal_setting_workshop_documentation_TNO.pdf;
* JERRI_Interviews_barriers_TNO.xlsx;

Collection: TNO
Storage: TNO
Processing: TNO
Documentation: TNO (Deliverables D3.2, D5.2)
Publication: Fraunhofer ISI

Responsibilities for the following data sets:

* Workshop_evaluation_data_evaluation_report_I.xlxs
* Workshop_evaluation_data_evaluation_report_II.xlxs

Collection: IHS
Storage: IHS
Processing: IHS
Documentation: IHS
Publication: IHS </th>
</tr>
<tr>
<td> **Costs** </td>
</tr>
<tr>
<td> For the management of research data in the JERRI project, the following personnel costs, including overhead, will be incurred: 373,980 €.

Data collection concepts (30 %): 112,194 €
Data collection (30 %): 112,194 €
Data security / back-up (5 %): 18,699 €
Processing of sensitive data, e.g. anonymisation (5 %): 18,699 € </td>
</tr>
</table>

Data processing (5 %): 18,699 €
Documentation of data management activities incl. Data Management Plan (15 %): 56,097 €
Long-time storage (10 %): 37,398 €

The costs listed above are addressed in the project calculation.
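The breakdown above follows directly from the stated percentages of the 373,980 € total, so it can be checked mechanically. A minimal, illustrative sketch (not project code):

```python
total = 373_980  # EUR, personnel costs including overhead

shares = {
    "Data collection concepts": 0.30,
    "Data collection": 0.30,
    "Data security / back-up": 0.05,
    "Processing of sensitive data, e.g. anonymisation": 0.05,
    "Data processing": 0.05,
    "Documentation of data management activities incl. DMP": 0.15,
    "Long-time storage": 0.10,
}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover the whole budget
for item, share in shares.items():
    print(f"{item} ({share:.0%}): {total * share:,.0f} EUR")
# e.g. 30 % of 373,980 = 112,194 EUR, matching the figures above
```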
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0275_ROSIN_732287.md
# 1\. Publishable Summary

This deliverable provides the data management plan for the ROSIN project, which details how data will be stored, tagged and archived throughout and after the project. ROSIN will create data through _interviews and mining software repositories_, as well as by developing training materials and associated source code for our education program. Additionally, ROSIN will collect data by receiving applications to the ROSIN FTP grant program, registering participants in education activities and collecting information about attendees at our dissemination events. The data management plan deals with how data will be stored in a secure and privacy-safeguarding way, and how reuse and sharing after the project will be ensured. The deliverable also provides governance arrangements on how to carry out the data management plan in practice. The deliverable is a living document that will be updated over the course of the project. Table 1 summarizes the main information about the data managed in ROSIN, as described throughout the rest of this document.

_Table 1. Overview of ROSIN data._

<table>
<tr>
<th> Data definition </th>
<th> Data origin </th>
<th> Data type or format </th>
<th> Expected size </th>
<th> Identification </th>
<th> Metadata to find </th>
<th> Metadata to interoperate </th>
<th> Access - who </th>
<th> Access - how </th>
<th> License? </th>
</tr>
<tr>
<td> **FTP proposal** </td>
<td> 3rd parties applying for FTP grant </td>
<td> PDF document </td>
<td> MB </td>
<td> Unique identifier </td>
<td> Keywords rosinftps.fluidreview.com sitemap </td>
<td> No </td>
<td> Confidential, only applicants, ROSIN Board and committees </td>
<td> https://rosinftps.fluidreview.com </td>
<td> \-- </td>
</tr>
</table>

# 2\. Data Summary

In this section, the following information is maintained for each WP in ROSIN, as far as applicable:

* _What is the purpose of the data collection/generation and its relation to the objectives of the project?_
* _What types and formats of data will the project generate/collect?_ We consider the following main types of data to manage in ROSIN: software source code, technical documentation, media (images, videos, etc.) and user data. Data formats include text, numbers, images, 3D models, text files, audio files, video files and software executables.
* _Will you re-use any existing data and how?_
* _What is the origin of the data?_
* _What is the expected size of the data?_
* _To whom might it be useful (‘data utility’)?_

In the following sections the data summary information is described for each WP.

## 2.1. WP1

WP1 is responsible for running the project. Data is collected from the consortium partners, mainly regarding financial, legal and administrative matters, for example meetings, reports, events, etc. Planning is also handled in WP1. The format of the data can be Word, Excel, PPT, image file formats or PDF. The data is saved on the GitHub _rosin-project/intranet_ repository managed by TU Delft.
All files can be handled through basic commercial off-the-shelf software. The contributors to the data collection within WP1 are all ROSIN consortium members.

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> Coordination of the work developed in ROSIN, reporting of the project results to the EU Commission. </th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> Management and technical reports, videos and images. </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> No </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> Consortium activity leading to project deliverables </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> Unknown at this moment </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> ROSIN consortium members and the EU Commission. Additionally, public deliverables can be useful to professionals related to robotics, especially: developers, entrepreneurs, SMEs, robot manufacturers, robot users (e.g. manufacturing, logistics or production companies in general), system integrators, researchers, students, etc. </td>
</tr>
</table>

## 2.2. WP2

The main data generated in WP2 is the source code for ROS-I software produced by 3rd parties in the Focused Technical Projects. Additional data produced is:

* FTP application, evaluation and granting process:
  * FTP proposals (technical reports in PDF)
  * Evaluations of FTP proposals (technical reports in PDF)
  * Contracts
* FTP execution, in addition to source code:
  * technical reports, software logs, software debug information and 3D models, which are managed by each FTP's members
  * dissemination material (images, videos, presentations), which will be handled by WP5

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> Increase the available code base of ROS open-source robotics software for industrial applications. </th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> Collect: user data (contact information of professionals applying for FTPs) and, for each FTP, the project proposal, evaluation reports and contract. Generate: source code, technical reports, software logs, software debug information, 3D models, videos and images. </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> Existing data might be used by FTP developers </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> * The FTP application, evaluation and granting process * The execution of the Focused Technical Projects </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> Unknown at this moment </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> The produced source code will be useful to the community of ROS developers and users. </td>
</tr>
</table>

## 2.3. WP3

The data generated in WP3 will be standard data collected and created in the software engineering research process in order to analyze problems and evaluate solutions.

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> The data is collected to assess the practices of quality assurance in the ROS/ROS-I project and to monitor whether an improvement of quality has been achieved by the ROSIN project efforts.
</th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> Interviews with project members (text), observation data (video, screencasts, transcripts), and structured data resulting from the analysis of code repositories (bug reports, pull requests, source code). </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> No. </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> The data is created in the project (interviews, observation) or extracted from publicly available repositories (mostly github.com). </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> Below a few gigabytes. </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> The data will be made available to researchers in software engineering in robotics, to further support research in quality assurance for robotics platforms and applications (outside the ROSIN project). </td>
</tr>
</table>

## 2.4. WP4

The data collected by WP4 is three-fold. First, source code used in schooling activities will be developed and stored in local git repositories with no access for persons outside the consortium for now. The reason is to assure and retain high quality of the ROS applications developed for teaching activities until the curriculum has been further worked out. The second type of data collected by WP4 is the training materials. These will be lecture slides and teaching tutorials in the form of PDF documents, which will be stored in the project's GitHub repository. The third type of data will be personal data of the participants in ROSIN schooling activities, as well as evaluation data of the trainings. The former will be kept securely without public access. Likewise, the evaluation data will be kept private in order to improve the quality of the teaching activities.

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> Data will be generated in the form of teaching materials (lecture slides and tutorials in PDF format, and source code). The relation to the project's objectives is the ability to conduct high-quality trainings. The personal data and the evaluation data are kept for internal use: for one, to document the number of taught participants, and for another, to improve future teaching activities. </th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> The types will be source code, PDF files and personal data records </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> Suitable existing teaching material might be adopted where no existing IP rights are infringed. Existing ROS and ROS-I software packages will be used and adapted for training activities. </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> The vast majority of the data are created by the members of the project, except for existing ROS and ROS-I packages. </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> Unknown at the moment </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> The data created in the teaching might be useful for other players in the growing ROS teaching community. </td>
</tr>
</table>
## 2.5. WP5

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> Data about the participants at dissemination events might be collected and used to gauge the type of audience that the project is reaching, in order to balance among different types of attendees (academia, industry, etc.). Likewise, unsolicited expressions of interest in relation to the project can be received. </th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> Personal data records with affiliation. </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> Yes, in case users express interest in receiving information about the project </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> Attendees themselves, possibly through the events' organizers or by personal communication </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> Unknown at the moment </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> To project management, in order to assess the success of the outreach of the project (central to its stated goal) </td>
</tr>
</table>

## 2.6. WP6

<table>
<tr>
<th> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </th>
<th> For WP6 on ethical issues, we will collect the proposals for Focused Technical Projects (FTPs), including the proposers' own ethics analyses. We use these texts and checklists to decide whether or not to co-fund the FTP. Within WP6, we do not collect data in any format other than these texts. </th>
</tr>
<tr>
<td> _What types and formats of data will the project generate/collect?_ </td>
<td> The only type of data relevant for WP6 is the FTP proposal texts. </td>
</tr>
<tr>
<td> _Will you re-use any existing data and how?_ </td>
<td> At this moment (start of the project) we have no intention to use explicit databases or other collections. </td>
</tr>
<tr>
<td> _What is the origin of the data?_ </td>
<td> The ethics analyses are produced by the FTP proposers. </td>
</tr>
<tr>
<td> _What is the expected size of the data?_ </td>
<td> </td>
</tr>
<tr>
<td> _To whom might it be useful (‘data utility’)?_ </td>
<td> </td>
</tr>
</table>

# 3\. FAIR data

## 3.1. Making data findable, including provisions for metadata

Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)? What naming conventions do you follow? Do you provide clear version numbers? Will search keywords be provided that optimize possibilities for re-use? What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.

### 3.1.1. WP1

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers? </th>
<th> Naming conventions and numbering for project deliverables and other documents, such as meeting agendas and minutes and presentation slides, are described in ROSIN deliverable D1.2 Project Management Guidelines </th>
</tr>
<tr>
<td> Will search keywords be provided that optimize possibilities for reuse? </td>
<td> Not necessary. </td>
</tr>
<tr>
<td> What metadata will be created to make the data findable? </td>
<td> Not necessary. </td>
</tr>
</table>
### 3.1.2. WP2

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers? </th>
<th> Regarding the FTP application, evaluation and granting process, the relevant naming conventions can be found in the _ROSIN Call for Focused Technical Projects – Applicant Guide_. Regarding FTP results, relevant naming conventions that will be encouraged can be found at: * _http://wiki.ros.org/Industrial_ * _http://wiki.ros.org/ROS/Patterns/Conventions_ </th>
</tr>
<tr>
<td> Will search keywords be provided that optimize possibilities for reuse? </td>
<td> Relevant keywords to categorize the FTPs will be identified and implemented in FluidReview </td>
</tr>
<tr>
<td> What metadata will be created to make the data findable? </td>
<td> Relevant keywords to categorize the FTPs will be identified and implemented in FluidReview </td>
</tr>
</table>

### 3.1.3. WP3

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers? </th>
<th> The data obtained from software repositories will be identifiable and traceable by appropriate identifiers (repository names, pull request numbers, issue numbers and commit hashes, or otherwise URLs). </th>
</tr>
<tr>
<td> Will search keywords be provided that optimize possibilities for reuse? </td>
<td> Since most data is textual, indexable, and published on the web in open formats, no additional keywords are necessary for modern search engines. </td>
</tr>
<tr>
<td> What metadata will be created to make the data findable? </td>
<td> We will create websites for sharing the public parts of the data, along with deliverables and research papers (which will provide a searchable context for search engines). When relevant, data will also be hosted on code repositories (github.com), which increases searchability. The datasets will always be described, including a definition of their contents and intended applications. </td>
</tr>
</table>

### 3.1.4. WP4

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers? </th>
<th> All data are stored under git version control with distinct names indicating the type and purpose of the respective data. This also includes the complete git history of the data. </th>
</tr>
<tr>
<td> Will search keywords be provided that optimize possibilities for reuse? </td>
<td> We are considering the possibility of tagging certain states of the repository in order to easily find final states of source modules. Lecture slides follow a naming convention in the repository that makes it easy to find them. </td>
</tr>
<tr>
<td> What metadata will be created to make the data findable? </td>
<td> Not beyond the possibilities of git </td>
</tr>
</table>

### 3.1.5. WP5

_To complete by FHG_

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers? </th>
<th> As the data pertains to users and their affiliations, conventional address book conventions will be followed </th>
</tr>
<tr>
<td> Will search keywords be provided that optimize possibilities for reuse? </td>
<td> A subdivision across categories (academia, system integrator, OEM, etc.) will be considered </td>
</tr>
<tr>
<td> What metadata will be created to make the data findable? </td>
<td> Not beyond the keywords </td>
</tr>
</table>

### 3.1.6. WP6

<table>
<tr>
<th> What naming conventions do you follow? Do you provide clear version numbers?
### 3.1.6. WP6

<table>
<tr> <th> What naming conventions do you follow? Do you provide clear version numbers? </th> <th> Because at this moment (start of the project) the data collection work within WP6 is limited to the collection of texts containing the proposers' own ethical analysis, we do not need a specific data naming convention or version numbering other than standard text versioning conventions. </th> </tr>
<tr> <td> Will search keywords be provided that optimize possibilities for reuse? </td> <td> </td> </tr>
<tr> <td> What metadata will be created to make the data findable? </td> <td> </td> </tr>
</table>

## 3.2. Making data openly accessible

Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions. Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out.

How will the data be made accessible (e.g. by deposition in a repository)? What methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible. Have you explored appropriate arrangements with the identified repository?

If there are restrictions on use, how will access be provided? Is there a need for a data access committee? Are there well described conditions for access (i.e. a machine-readable license)? How will the identity of the person accessing the data be ascertained?

## 3.3. Making data interoperable

Are the data produced in the project interoperable, that is, allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)? What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable? Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability? In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?

**3.3.1. WP1**

Not applicable.

### 3.3.2. WP2

<table>
<tr> <th> _What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?_ </th> <th> To be decided based on research on available robotics and ROS vocabularies and ontologies. </th> </tr>
<tr> <td> _In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?_ </td> <td> </td> </tr>
</table>

### 3.3.3. WP3

<table>
<tr> <th> _What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?_ </th> <th> Standard formats (yaml, xml) will be used with schemas appropriate for storing the data. The schemas will be documented in research reports. </th> </tr>
<tr> <td> _In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?_ </td> <td> N/A [standard schemas do not exist for bug summaries, tool evaluation reports, etc.] </td> </tr>
</table>
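To make the intended record layout concrete, the sketch below shows what a single WP3 record in YAML could look like, combining the identifier scheme described in section 3.1.3 (repository name, issue/pull request number, commit hash, URL) with the standard-format requirement stated above. All field names and values are hypothetical illustrations, not a schema fixed by the project; the actual schemas will be documented in the research reports.

```python
# Hypothetical sketch of a WP3 bug-summary record serialized to YAML.
# Field names and values are illustrative placeholders only.
import yaml  # PyYAML

record = {
    "repository": "ros-industrial/universal_robot",
    "issue_number": 123,  # issue or pull request number in that repository
    "commit": "2c26b46b68ffc68ff99b453c1d30413413422d70",
    "url": "https://github.com/ros-industrial/universal_robot/issues/123",
    "summary": "Example: joint limits ignored when loading URDF",
    "classification": "configuration-defect",
}

# Dump in insertion order so the identifier fields stay at the top.
print(yaml.safe_dump(record, sort_keys=False))
```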
### 3.3.4. WP4

<table>
<tr> <th> _What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?_ </th> <th> The ROS-I common vocabularies for source packages and robotics teaching applications will be used. </th> </tr>
<tr> <td> _In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?_ </td> <td> Does not apply. </td> </tr>
</table>

**3.3.5. WP5**

Not yet applicable. To be updated by FHG when the repository of ROS-I industrial applications is created.

**3.3.6. WP6**

Not applicable.

## 3.4. Increase data re-use (through clarifying licences)

_How will the data be licensed to permit the widest re-use possible?_

_When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible._

_Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why._

_How long is it intended that the data remains re-usable? Are data quality assurance processes described?_

**3.4.1. WP1**

Not applicable.

### 3.4.2. WP2

<table>
<tr> <th> _How will the data be licensed to permit the widest re-use possible?_ </th> <th> Source code produced in FTPs will be licensed under Apache 2.0. </th> </tr>
<tr> <td> _When will the data be made available for re-use?_ </td> <td> For each FTP, the code will be made available immediately upon completion of the FTP, and only under exceptional circumstances with a maximum delay of 2 years after the start of the FTP. </td> </tr>
<tr> <td> _Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why._ </td> <td> All source code developed in FTPs will be made publicly available and reusable under Apache 2.0. The following data may remain confidential to the FTP group indefinitely: detailed requirements, documentation, proprietary 3D models, pictures, live demos and demo videos, since those may contain IP of the companies involved in the FTP. </td> </tr>
<tr> <td> _How long is it intended that the data remains re-usable? Are data quality assurance processes described?_ </td> <td> The source code created in an FTP will remain available indefinitely. The quality assurance process for the code will be defined in different deliverables of WP3. </td> </tr>
</table>

### 3.4.3. WP3

<table>
<tr> <th> _How will the data be licensed to permit the widest re-use possible?_ </th> <th> Source code produced will be licensed under Apache 2.0. The research data will be released under a public domain license. Results of the analysis and design of tools will be published in open access papers (under copyright). </th> </tr>
<tr> <td> _When will the data be made available for re-use?_ </td> <td> The source code repository data will be made available as soon as it is created (on the fly). Personal data (for instance transcripts of interviews, videos) will not be published or shared at all.
For sensitive data, only extracted summaries and analysis results will be available in the research reports, papers and deliverables. Fragments of personal data may be released if we manage to obtain the consent of the involved persons (but this cannot be guaranteed up front). </td> </tr>
<tr> <td> _Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why._ </td> <td> Yes, it is the intention that some of the data will be used as benchmarks in future research on quality assurance in robotics. </td> </tr>
<tr> <td> _How long is it intended that the data remains re-usable? Are data quality assurance processes described?_ </td> <td> We will strive to keep the data live as long as is realistically reasonable, and definitely for at least several years beyond the project duration. Since we are talking about relatively small volumes of data, they can be hosted in public code repositories and websites that have reasonably good durability (such as github.com and figshare.com). Using open formats (git, yaml, etc.) will facilitate a reasonably smooth migration to newer formats, should this become necessary. </td> </tr>
</table>

### 3.4.4. WP4

<table>
<tr> <th> _How will the data be licensed to permit the widest re-use possible?_ </th> <th> All teaching material will be published under the Creative Commons license CC-BY-ND in order to allow the usage of the material. In order to ensure high quality standards of the teaching, the license does not allow changing the material or using parts thereof. After sufficient measures to ensure the quality of the source code that will be generated in preparation of a schooling activity and used in it, the source packages should also be published under the Apache 2.0 license or the license determined by the ROS packages used for the teaching activity. </th> </tr>
<tr> <td> _When will the data be made available for re-use?_ </td> <td> _Yes, under the above explained licenses._ </td> </tr>
<tr> <td> _Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why._ </td> <td> See above. </td> </tr>
<tr> <td> _How long is it intended that the data remains re-usable? Are data quality assurance processes described?_ </td> <td> See above. </td> </tr>
</table>

### 3.4.5. WP5

<table>
<tr> <th> _How will the data be licensed to permit the widest re-use possible?_ </th> <th> Marketing material to be made publicly available at events such as trade fairs and demos will not be restricted in terms of distribution, but not all of it will be available in source form to be edited by third parties. </th> </tr>
<tr> <td> _When will the data be made available for re-use?_ </td> <td> Yes, to consortium members. </td> </tr>
<tr> <td> _Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why._ </td> <td> Possibly a selected part. </td> </tr>
<tr> <td> _How long is it intended that the data remains re-usable? Are data quality assurance processes described?_ </td> <td> As it is marketing material, it is intended to be valid for as long as the project runs. </td> </tr>
</table>

**3.4.6. WP6**

Not applicable.
# 4\. Allocation of resources

_What are the costs for making data FAIR in your project?_

The costs for making data FAIR in the project are unknown at this moment.

_How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions)._

A total budget of 12,500 EUR for open access publications has been allocated to TUD, FHA and ITU. At the third ROSIN Project Meeting in Copenhagen, Nov. 2017, it will be discussed whether this covers the foreseen needs for data management.

_Who will be responsible for data management in your project?_

Each work package leader is responsible for managing the data involved in their respective work package according to the procedures and provisions detailed in this Data Management Plan. The ROSIN Coordinator will ensure this.

_Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?_

This will be discussed at the third ROSIN Project Meeting in Copenhagen, Nov. 2017.

# 5\. Data security

_What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?_

_Is the data safely stored in certified repositories for long term preservation and curation?_

Data generated by the activity of ROSIN partners in WP3, WP4 and WP5, including the dissemination of executed FTPs, is secured in:

* The GitHub organization for the project and its associated repositories, maintained by TUD: _https://github.com/rosin-project_
* The project website, maintained by FHG and hosted by WordPress: _http://rosin-project.eu/_

Data generated in the application, evaluation, granting and monitoring of FTPs is secured in the online platform provided by FluidReview: _https://rosin-ftps.fluidreview.com/_

# 6\. Ethical aspects

At this moment (start of the project), there are no indications that ROSIN will collect data with ethical or legal issues. The project focuses on the development of open-source software, and the software (as far as foreseen) will not collect user data requiring informed consent.

# 7\. Other issues

_Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones?_

We do not make use of other national/funder/sectorial/departmental procedures for data management.

# 8\. Tools potentially useful for ROSIN DMP

These are the tools that have been identified as of interest for ROSIN's DMP. They have been curated from the _H2020 Programme Guidelines on FAIR Data Management in Horizon 2020_ (section 8):

* The _EUDAT B2SHARE_ tool includes a built-in license wizard that facilitates the selection of an adequate license for research data.
* _Zenodo_ (an OpenAIRE and CERN collaboration) allows researchers to deposit both publications and data, while providing tools to link them. It already includes several robotics datasets and publications.
* _DMP online_ contains some guidance to complement the H2020 DMP guidelines.
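As a concrete illustration of how project data could be linked to publications through Zenodo, the sketch below walks through a minimal deposition using Zenodo's public REST API. The access token, file name and metadata are placeholders, and this is only one possible workflow, not a procedure adopted by ROSIN.

```python
# Minimal sketch of depositing a dataset on Zenodo via its REST API.
# ACCESS_TOKEN, the file name and the metadata below are placeholders.
import requests

ACCESS_TOKEN = "..."  # personal token created in the Zenodo account settings
params = {"access_token": ACCESS_TOKEN}

# 1. Create an empty deposition.
r = requests.post("https://zenodo.org/api/deposit/depositions",
                  params=params, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload the data file to the deposition's file bucket.
with open("example_dataset.zip", "rb") as fp:
    requests.put(f"{deposition['links']['bucket']}/example_dataset.zip",
                 params=params, data=fp).raise_for_status()

# 3. Attach descriptive metadata (title, type, authors, license, description).
metadata = {"metadata": {
    "title": "Example dataset (illustration only)",
    "upload_type": "dataset",
    "description": "Illustrative deposition sketch.",
    "creators": [{"name": "Doe, Jane"}],
}}
requests.put(deposition["links"]["self"],
             params=params, json=metadata).raise_for_status()

# A final POST to deposition["links"]["publish"] would publish the record
# and mint a citable DOI.
```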
0277_DIMENSION_688003.md
# Executive Summary

This report describes the handling and open access of research data in the DIMENSION project. Five main types of data are generated, collected and stored: design data, simulation data, measurement data, publication data, and project documentation or report data. While the design, simulation and measurement data mostly come in proprietary, tool-specific formats which require the use of special software and special hardware, publications and project documents are available in standard PDF format. Furthermore, design, simulation and measurement data are subject to special licenses and NDAs. Therefore, DIMENSION opts out of enabling open access to such data at the current state of the project. By contrast, publications and project documents (as long as they are not assigned as confidential and contain no confidential data) are public and openly accessible by default. For publications the gold OA route is encouraged and the green OA route is mainly targeted. Since publications and project documents/reports are currently the only data sets considered for OA, no special repository is implemented yet. However, the publications and project documents/reports will be made available via the DIMENSION webpage. In any case the consortium has to agree on the openness of data beforehand.

# Introduction

## Open Research Data

DIMENSION is an H2020 project which takes part in the Open Research Data (ORD) pilot, following the guidelines given by the EC [1], [2]. It is the intention of the EC that each research project ensures that the results, i.e. scientific publications and the data behind them, are open by default (with some reasons for opt-out), thereby providing broader access to them. This enables several advantages:

* the validation of research results (e.g. for peer reviews) becomes easier,
* scientific breakthroughs become more visible,
* research results will be cited more often and therefore have a greater impact,
* duplication of research activities will be avoided, which improves the quality of results,
* research data is preserved,
* EC research funds are better valued and scientific processes become more transparent for society, which brings a public benefit, and
* research is better distributed across scientific fields, which helps to solve complex (social) challenges.

An overview of the use of research results is shown in Figure 1. Either the results are exploited, which means they will be protected by IPR (e.g. patents), or they are disseminated. The dissemination can be realized in two ways: first, as publication in scientific journals or at conferences; second, the research data can be provided in a data repository. For both dissemination opportunities the EC follows the Open Access principle.

**Figure 1: Use of research results [3].**

In the sense of the ORD pilot, open data means that the research data can typically be accessed, mined, exploited, reproduced and disseminated, free of charge for the user. Research data refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form. The types of data covered by the ORD pilot are:

1. the data needed to validate the results presented in scientific publications, including the associated metadata, and
2. any other data (e.g.
raw data), including the associated metadata.

Open access to the research data can be denied in cases of:

* results being commercially or industrially exploited,
* incompatibility with confidentiality and security issues – IPR,
* protection of personal data – privacy,
* jeopardising the achievement of the main aim of the action,
* the project not generating or collecting any research data, or
* other legitimate reasons.

To use and provide open access to scientific research data, it should be easily discoverable, accessible, assessable and intelligible, useable beyond the original purpose for which it was collected, as well as interoperable to specific quality standards. These qualities will be ensured by the data management plan (DMP). The DMP provides information on and specifies the data the research will generate, how to ensure its curation, preservation and sustainability, and what parts of that data will be open and how. Its purpose is to support the data management life cycle and to provide an analysis of the main elements of the data management policy for all the datasets that will be generated by the project. It therefore rules the data handling and refers to the right to access and reuse digital research data under its terms and conditions during and after the research project. The DMP is not a fixed but rather a living document. It will mature during the project and will be continuously updated, at least for each project review (mid-term and at the end of the project) and when:

* new data types are acquired,
* changes in consortium policies occur, or
* changes in consortium composition occur.

This report describes the data handling of the DIMENSION project: which data will be created and how, as well as whether and how they are made accessible to the public. First procedures for data management were already described in the DIMENSION description of action (DoA) [4].

## DIMENSION DMP

This document is the initial DMP of the DIMENSION project. It describes what data will be collected, processed or generated and following what methodology and standards, whether and how this data will be shared and/or made open, and how it will be curated and preserved, both during the research project and after its completion. The plan will be made available on the project website and continuously updated by all project partners according to the project developments and the Horizon 2020 guidelines. Besides the Consortium Agreement, the DMP will be used to manage (amongst other things) the ownership of and access to key knowledge of the project by effective data management and IPR protection procedures. Its purpose is to provide an analysis of the main elements of the data management policy that will be used by the participants of the DIMENSION project with regard to all the datasets generated by the project.

DIMENSION is a project that will create scientific research data in the disciplines of datacentre networks, optical communications, and component design as well as fabrication. As DMPs have been enforced only recently by the EC and other major funding bodies worldwide, standardized procedures are not readily available and applicable to research projects covering these disciplines.
Nevertheless, DIMENSION will consider the following approaches, as far as applicable, for providing open research data:

* data available on the Web (whatever format) under an open licence,
* data available in structured form (e.g. Excel instead of a scan of a table),
* use of non-proprietary formats (e.g. CSV instead of Excel),
* use of URIs to denote things, so that people can point at them,
* linking data to other data to provide context.

To provide open access to the research data, the DIMENSION data management policy follows the basic principle "as open as possible, as closed as necessary", which can be translated into two core principles:

1. The generated research data should generally be made as widely accessible as possible in a timely and responsible manner;
2. The research process should not be impaired or damaged by the inappropriate release of such data.

The DIMENSION consortium will take the appropriate measures so that the research data generated in the project is easily discoverable, accessible, assessable and intelligible, useable beyond the original purpose for which it was collected, and interoperable to specific quality standards.

In the following sections the DIMENSION data management will be defined. After this introduction, Chapter 2 summarizes the research data which are generated, collected, processed and stored. Different types of data will be defined and described with respect to their origin, properties and (re-)use. In Chapter 3, the DIMENSION principles for FAIR (findable, accessible, interoperable, re-usable) data will be defined. In Chapter 4 the financing of and responsibility for FAIR data will be briefly stated. Chapter 5 describes the measures taken for data security. Chapters 6 and 7 deal with ethical and other aspects regarding open research data. Finally, Chapter 8 summarizes the DIMENSION data management.

# Data Summary

This section gives an overview of the research data which are generated, collected, processed and stored in DIMENSION. This includes the data description for the different types and formats, the purpose with respect to project objectives and tasks, options for data re-use and how, the data origin, the expected data size, and to whom it might be useful. The data generated in DIMENSION will be strictly digital. In general, the data file formats to be used shall meet the following criteria:

* widely used and accepted as best practice within the specific discipline,
* self-documenting, i.e. the digital file itself can include useful metadata,
* independent from specific platforms, hardware or software.

However, different types of data will be generated and handled in DIMENSION. Considering the technical disciplines related to the DIMENSION project, high-technology equipment and processes are used. Therefore, most of the research data will be in proprietary, tool-specific formats. In the following the main data types are described:

### Design data

In DIMENSION several components and sub-systems will be designed (e.g. laser, laser driver, modulator, modulator driver, test boards, …). For the design, special software like Cadence and Altium Designer is used. Design results are schematics and layouts of the components. The digital format of the designs is mostly specific (e.g. GDS files) to this special software, which is therefore required for data re-use. Since these software tools are only available under commercial licences, they cannot be provided by the consortium.
Furthermore, the component design is based on a given (semiconductor) technology physical design kit (PDK), e.g. IHP SG25H1 for the ICs, which is provided by a manufacturer, called a foundry. The use of the PDK is only possible under the scope of a non-disclosure agreement (NDA). In addition, the DIMENSION designs and also the integration technology are subject to the main objectives as well as to planned IPR. This makes it impossible to provide open access to those design data. However, it will be possible to provide pictures and screenshots of the designs, which will be collected in design reports or corresponding deliverable reports.

### Simulation data

A further data type is generated by collecting results from simulations and emulations. These are used to evaluate and estimate the performance and properties of the simulated device. In DIMENSION, simulations are used on different levels. For the design of the components and ICs, schematic and layout simulations are performed. With regard to the integration technology and packaging, HF simulations are carried out using 3D EM solvers. The operation of the complete transmitter and optical links will be simulated with higher-level system simulations. For the simulations and emulations special software is typically used, which needs special software licenses for research or even business use. Such software includes for example OPNET, VPI, MATLAB, HFSS, Sonnet and Cadence. The simulation results are to a large extent proprietary data sets only usable with the simulation software. In rare cases simulation results can be exported as ASCII-coded text files or to typical database-type formats and spreadsheets with a complete description of the data set (list of fields). These have to be post-processed by (simulation data) evaluation software. In most cases, diagrams of the parameters of interest and screenshots are the outcomes of simulations. For these reasons it is not useful to provide the raw simulation data for open access in DIMENSION. It makes more sense to provide the processed simulation results as diagrams, collected and explained in a simulation report. Where applicable, and in the case of simulation text files or spreadsheets, the data set could be put in a zip archive and attached to the report. However, simulation data will only be made open after the results have been published.

### Measurement data

Measurement data is similar to simulation data. However, measurement data is generated and collected from hardware characterization and not from software investigations. The results come from lab experiments, system environment demonstrators and hardware analysis. The generated data is related to measurements of specific parameters that indicate the system, sub-system and component performance. Special optical and HF test and measurement equipment is used to collect the results. Similar to simulation data, measurement data is usually specific to the equipment used and its corresponding software. Sometimes typical database-type formats or spreadsheets with a complete description of the data set (list of fields) are also available. In most cases the measurement results are screenshots of the measured diagrams and plots. Therefore, also in the case of measurement data, open access to the raw data is not promising in DIMENSION. Detailed measurement data will be made accessible after publication via measurement reports which also describe the measurement environment, e.g. specific performance parameters, test and measurement equipment, experimental setups, etc. If applicable, database-based measurement data sets will be attached as zip archives to the reports.
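As an illustration of how such an archive could be assembled, the sketch below bundles exported measurement files together with a small manifest describing them, using only the Python standard library. The file names and manifest fields are hypothetical examples, not a format prescribed by DIMENSION.

```python
# Minimal sketch: bundle exported measurement files with a manifest
# into a zip archive that can be attached to a measurement report.
# File names and manifest fields are illustrative placeholders.
import json
import zipfile

measurement_files = ["s_parameters.csv", "eye_diagram.csv"]

manifest = {
    "report": "DIMENSION measurement report (attachment)",
    "description": "Exported measurement data; see the report for setup details",
    "files": measurement_files,
    "format": "CSV, one header row, comma-separated",
}

with zipfile.ZipFile("measurement_data.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    # Write the manifest first, so a reader immediately sees the contents.
    archive.writestr("manifest.json", json.dumps(manifest, indent=2))
    for name in measurement_files:
        archive.write(name)  # assumes the files exist next to this script
```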
### Publications

The most open and visible data set of DIMENSION will indeed be the publication of research results. Publications contain a summary of results with a description of their generation and the corresponding conditions. Publications are created by one partner for individual results, or as joint publications on joint research efforts and successes. Publications appear in scientific journals and at conferences, mostly in the format of PDF files, which are commonly usable. In the context of publication data, the OA approach of Horizon 2020 is embraced by DIMENSION, following the guidelines presented by the Commission. We encourage that the project results be published mainly in fee-based open access scientific journals, following the OA Gold method, due to the high impact associated with certain journals. Indeed, there exist many open access high-impact journals in the disciplines of optical networks and communications, published by IEEE, OSA and Elsevier, allowing a variety of publication venues. For this reason, costs for publication fees have been foreseen in the consortium budget. It is anticipated that our researchers will also primarily target the OA Green method in the case of conference and workshop contributions, since the two OA methods are not mutually exclusive. In that case the published article or the final peer-reviewed manuscript is archived by the researcher in an online scientific repository before, after or alongside its publication. In this case, the authors must ensure open access to the publication within a time frame which is defined by the publisher (embargo times are usually six months to one year). The Open Access Infrastructure for Research in Europe (OpenAIRE) [5] will be explored by our researchers to determine which repository to choose. At TUD the open access repository Qucosa [6] is available, which can be used for all publications with TUD contribution.

### Project documents and reports

A second major data set for open access will be project documents and reports, such as deliverable reports. They are generated and collected to summarize the project progress and results as well as to discuss different approaches, challenges and deviations with regard to the DIMENSION objectives. Reports can be related to design, simulations and measurements and contain the processed data. The openness or confidentiality of the project reports and documents is directly defined in the DoA and CA of the project. Therefore, as long as the documents are not assigned as confidential, do not contain any confidential data and their content is not subject to IPR, they are per se public. Normally, documents and reports are in standard PDF format. Public DIMENSION documents and reports will be made available and accessible on the DIMENSION webpage after their submission and publication.

# FAIR data

One of the grand challenges of data-intensive science is to facilitate knowledge discovery by assisting humans and machines in their discovery of, access to, and integration and analysis of task-appropriate scientific data and their associated algorithms and workflows [7]. In this regard FAIR data is a set of guiding principles to make data **f**indable, **a**ccessible, **i**nteroperable and **r**e-usable.

## Findable data

ORD has to be findable easily, rapidly and unambiguously.
Therefore, exact and standard measures have to be used to identify the data sets. This can include the definition and use of naming conventions, search keywords, version numbers, metadata standards and standard data identifiers. For non-self-explanatory data sets, the researcher must ensure that sufficient documentation or metadata (i.e. information about the data, e.g. title, author, dates, access rights) is created and maintained to enable the generated research data to be found, used and managed throughout the project lifecycle. Documentation and metadata requirements will differ depending on the discipline and the nature of the specific activity inside the project. For example, reports and publications do not need special metadata, since this is already included in the document. In contrast, a set of measurement data, e.g. a set of measured diagrams, needs an additional explanation of the content of the data set. This will be provided by data documentation, which includes context for the data and ensures that the data can be understood in the long term. Metadata can be considered a subset of the overall data documentation. Common types of metadata include:

* descriptive metadata: identifies the resource and enables it to be discovered;
* technical metadata: enables a resource to be better managed, and in some cases preserved over time, by capturing information such as creation and modification dates, file formats and access restrictions.

The most widely used descriptive metadata standard is Dublin Core, which works for different kinds of data (not just digital) and across disciplines. It is a simple metadata standard that is commonly used in institutional repositories. Furthermore, in DIMENSION the UK Data Audit Framework Methodology is considered. Both standards will be further assessed as the project progresses, depending on the data types which arise. Since currently only publications, documents and reports are subject to ORD, a simple but distinct naming convention for data files is used in DIMENSION as a starting point. The naming convention of DIMENSION files, with examples, is shown in Table 2. Publications are identified by the digital object identifier (DOI).

**Table 2: DIMENSION data file naming convention.**

<table>
<tr> <th> **Convention** </th> <th> _[time stamp]_DIMENSION_[data type]_[data type postfix]_[version].[file format]_ </th> </tr>
<tr> <td> **Item** </td> <td> Time stamp </td> <td> Data type </td> <td> Data type postfix </td> <td> version </td> <td> file format </td> </tr>
<tr> <td> **Optional** </td> <td> X </td> <td> </td> <td> X </td> <td> </td> <td> </td> </tr>
<tr> <td> **Definition(s)** </td> <td> YYYY_MM_DD </td> <td> Design, Simulation, Measurement, Publication, Document, Report, Deliverable, Presentation, … </td> <td> arbitrary (e.g. LDD, Laser, MD, …; e.g. D1.2; e.g. ProjectMeeting; …) </td> <td> v#.# </td> <td> According to software: pdf, jpg, zip, gds, xls(x), doc(x), ppt(x), … </td> </tr>
<tr> <td> **Examples** </td> <td> 2016_07_31 </td> <td> DeliverableReport </td> <td> D1.2 </td> <td> v0.1, v1.0 </td> <td> As above </td> </tr>
<tr> <td> **Complete** </td> <td> _2016_07_31_DIMENSION_DeliverableReport_D1.2_v1.0.pdf_ </td> </tr>
</table>
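To make the convention of Table 2 concrete, the following minimal sketch builds and checks file names of this form; the helper names and the regular expression are illustrative, not tooling used by the project.

```python
# Minimal sketch: build and validate file names following Table 2.
# Helper names and the validation pattern are illustrative only.
import re
from datetime import date

def dimension_filename(data_type, version, ext, postfix=None, stamp=None):
    """Assemble [time stamp]_DIMENSION_[data type]_[postfix]_[version].[ext]."""
    parts = []
    if stamp:                      # time stamp is optional per Table 2
        parts.append(stamp.strftime("%Y_%m_%d"))
    parts += ["DIMENSION", data_type]
    if postfix:                    # data type postfix is optional as well
        parts.append(postfix)
    parts.append(version)
    return "_".join(parts) + "." + ext

PATTERN = re.compile(
    r"^(\d{4}_\d{2}_\d{2}_)?DIMENSION_[A-Za-z]+(_[\w.]+)?_v\d+\.\d+\.[a-z]+$"
)

# Reproduces the "Complete" example from Table 2.
name = dimension_filename("DeliverableReport", "v1.0", "pdf",
                          postfix="D1.2", stamp=date(2016, 7, 31))
assert name == "2016_07_31_DIMENSION_DeliverableReport_D1.2_v1.0.pdf"
assert PATTERN.match(name)
```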
## Accessible data

To enable third parties to mine, exploit, reproduce and disseminate ORD, it has to be made accessible. On the one hand this means that the data are stored and provided on a platform which can be accessed by interested parties. These platforms can be, for example, research data repositories. On the other hand, it has to be guaranteed that the available data itself can be opened and processed. In this regard, as much information as possible on the data-related software or instruments has to be provided, e.g. in a separate document or as metadata. It is of little use to provide OA to data which needs special, proprietary environments to be accessed. Similar to the re-use of the data, the modality and conditions of the access itself can also be defined by a license if there are conflicts with IPR or privacy- or security-related matters. Generally, all data which has been published can be made accessible. However, as described in Chapter 2, for most of the data types in DIMENSION proprietary formats and special restrictions (e.g. software licences, technology NDAs, …) are present which prohibit free access to the data. Most of the data sets for OA are documents, reports and probably collected data from simulations and measurements (e.g. results as screenshots or spreadsheet collections). For these reasons, it has been decided in DIMENSION that the use of a special repository is not beneficial at the moment. The current ORD can be circulated well via the existing webpage as downloads and links. However, a special repository which links publications to research data will be evaluated and selected when the first data is collected during project progress. Several repositories are considered for this purpose (e.g. re3data.org, Zenodo, OpenAire-CERN). Joint publications with TUD can, for example, also be placed in the Qucosa repository and will be clearly identified with a DOI. In any case the accessibility of data has to be confirmed by the DIMENSION consortium or the contributing partners beforehand.

## Interoperable data

Interoperable data means that the exchange and reuse of data enabled by OA is possible. Therefore, standard data vocabularies and formats which are compliant with available open software have to be used for storing and providing the data. OA to data which can only be used with special and restricted software makes no sense. Only in the case of interoperability is data exchange between researchers, institutions, organisations and countries possible, allowing the re-combination with different datasets from different origins. In this regard DIMENSION will, wherever possible, use data formats for OA knowledge representation which are formal, accessible, shared and in a broadly applicable language. Qualified references to other data will be included. For example, information on the tools and instruments which are needed to validate the measurement results is provided with the data sets. In particular, the format for data sets of equal content, e.g. measurement data, will be a zip archive. These archives are linked to a (metadata) document, e.g. the measurement report. All documentation and reports are filed as PDF, which is widely readable and usable.

## Re-usable data

In order to make the ORD re-usable, the conditions for its use have to be defined. The modalities and scope of use can be defined by licenses. This framework indicates whether there are any restrictions and why. For example, there can be embargo times, e.g. to publish and patent during or after the project. Furthermore, the duration of re-usability is regulated.
Basically, it should be ensured that the data is usable beyond the original purpose for which it was collected, even a long time after the collection. This enables interdisciplinary use for new developments after DIMENSION has finished. Therefore, data is provided in a way that judgments can be made about its reliability and the competence of those who created it. Licenses which rule the use of ORD have to be clear and accessible. Within DIMENSION, re-usability of data is enabled in that all data provided for OA so far, e.g. documentation and reports, is inherently public (and free of charge). Therefore, no special licenses are established at the moment. Once special data sets are provided for OA, dedicated licenses will also be issued. However, although the data will be public, the DIMENSION project and its consortium members reserve the copyright of the material. Data sets defined as ORD will be available and accessible after the material has been published. In all cases, the DIMENSION consortium or the contributing partners have to agree before something becomes OA. For re-usability, the data will be stored on the webpage, or on a repository system once implemented, for at least ten years.

# Allocation of resources

Currently there are no costs to provide FAIR data. For hosting the DIMENSION webpage, which also provides public information, reports and project documents, a yearly fee of <100 € for using the web domain is required. These costs are covered by the project budget of the coordinator TUD. Furthermore, costs for publication fees in open access journals have been foreseen in the consortium budget. The partner responsible for the DMP and the FAIR data is the coordinator TUD.

# Data security

Research data generated, processed and collected in DIMENSION is stored on computers, clusters and servers at each project partner's premises. The facilities are hosted by the partners themselves and are secured according to current security guidelines. The data are placed in a back-up storage so that they can be restored in case of emergencies. The storage duration is usually 10 years according to the funding rules. General project data and documentation is also stored in the project SharePoint. Furthermore, public documents are also provided on the project webpage. Both the SharePoint and the webpage are hosted by the coordinator TUD in house. Data security, backup and recovery are applied, and the storage duration is also at least 10 years.

# Ethical aspects

In the DIMENSION project, there are no ethical or legal issues present which impair the data management. The research in DIMENSION does not create, process or store personal data. Personal data of the DIMENSION consortium is not subject to the project's data management and ORD.

# Other issues

There are no further national, funder, sectorial or departmental procedures to be followed for the data management in DIMENSION. For internal data use, each partner applies the data management defined in its institution. The DIMENSION documentation is also stored in the project SharePoint, which is accessible to all project partners.

# Conclusion

To conclude the current version of the DMP, a summarized overview of the DIMENSION research data sets and their properties is shown in Table 3. As described in the previous chapters, it is currently only possible to provide public project documents and publications. As a general rule, the DIMENSION consortium has to confirm beforehand the accessibility and re-usability of the data provided for ORD.
This DMP represents the current state of data handling in the DIMENSION project. As a living document it will be continuously updated, at least at the end of each project period, or in case new data types are acquired, changes in consortium policies occur, or changes in consortium composition occur.

**Table 3: Summary of DIMENSION data sets and management.**

<table>
<tr> <th> **Data type** </th> <th> **Origin** </th> <th> **Purposes** </th> <th> **Formats** </th> <th> **Re-usability** </th> <th> **Open access** </th> <th> **Comments** </th> </tr>
<tr> <td> Design </td> <td> Design software, e.g. Cadence, Altium Designer </td> <td> Schematics, layouts for fabrication </td> <td> Tool-specific, e.g. gds </td> <td> With corresponding design software; licenses; NDAs </td> <td> Not yet </td> <td> Design PDK subject to NDA </td> </tr>
<tr> <td> Simulation </td> <td> Simulation software, e.g. OPNET, VPI, MATLAB, HFSS, Sonnet, Cadence </td> <td> Estimation/analysis of component or system performance based on schematics, layout </td> <td> Tool-specific formats; database and spreadsheet formats; screenshots / pictures / diagrams </td> <td> With corresponding simulation software; licenses; zip archives with data sets attached to reports, after publication </td> <td> Not yet </td> <td> Special software cannot be provided; licenses required </td> </tr>
<tr> <td> Measurement </td> <td> Measurement device, setup, lab experiments </td> <td> Verification/analysis of component or system performance in hardware </td> <td> Equipment-specific formats; screenshots / diagram pictures </td> <td> With special equipment; diagram pictures as zip archives attached to reports; after publication </td> <td> Not yet </td> <td> Equipment and corresponding software cannot be provided </td> </tr>
<tr> <td> Publication </td> <td> Project results; writing tools / software, e.g. MS Word or LaTeX </td> <td> Presentation of project results </td> <td> pdf </td> <td> With pdf readers; gold and green OA; after embargo times of publisher </td> <td> Yes </td> <td> Via DIMENSION webpage and/or via repositories, e.g. Qucosa </td> </tr>
<tr> <td> Documents / reports </td> <td> Project results/progress; writing tools / software, e.g. MS Word or LaTeX </td> <td> Summarize and report project progress and results </td> <td> pdf </td> <td> With pdf reader </td> <td> Yes </td> <td> Via DIMENSION webpage </td> </tr>
</table>
0278_PICs4All_687777.md
# 1 Introduction

## 1.1 About PICs4All

PICs4All is a project in support of a European network of excellence in Photonic Integrated Circuit (PIC) technology, with the task of bringing this technology to a broad European community of potential applicants of PIC-technology: SMEs, large companies and research institutes. Dissemination, communication and outreach are the pivotal activities of the PICs4All project, since these are the essential first steps in bringing the knowledge of the PICs4All Application Support Centres and the photonic IC-community to such parties, so that they can apply the technology in their products or use it for further research and development.

To enable know-how sharing, the PICs4All Coordination and Support Action has set up a European network of experts in photonics comprising nine so-called Application Support Centres (ASCs) distributed around Europe, whose main task is to stimulate the development of novel applications based on PICs for various application fields and to enhance the cooperation between universities and other research centres, technology clusters and industry. The ASCs provide such parties access to the advanced PIC technology. The PICs4All experts offer their knowledge and hands-on support to academia, research institutes, SMEs and larger companies to:

1. assess whether their ideas or products can be better realized using PICs;
2. determine whether the application of PICs is economically viable in their products; and
3. access PIC design, manufacturing and evaluation facilities.

In this way, PICs4All aims to increase the impact of integrated photonics by bridging the gap between technology and market. The PIC development cycle, as presented in Figure 1, is supported entirely by the expertise of the PICs4All ASCs.

**Figure 1: PIC development cycle supported by the PICs4All consortium.**

Actions of PICs4All also aim at bringing together the PIC value chain for Europe's key players in the field of photonic integration, including manufacturing and packaging partners, photonic CAD software partners, R&D labs and PIC design houses. Currently a gap exists between understanding the technology (its capabilities and potential) and market and industrial demands. The ASCs will bridge this gap between potential users, especially those who have limited knowledge of PICs and PIC technology, and the community of PIC-specialized parties. As such, the ASCs act as PIC-technology ambassadors in their local/regional network, supporting the dissemination and outreach which are so pivotal in seeking new connections. Apart from this support action, the effectiveness of the support approach is being studied as a separate item within the project. The main question here is whether initiatives like PICs4All can indeed bridge the knowledge gap between specialists in PIC-technology (currently largely concentrated in academia, like the organizations hosting the PICs4All ASCs, and in an increasing number of companies active in designing, manufacturing and testing PICs) and user organizations who are less familiar with PIC-technology.

# 2 Clusters of activities and the related management of data

Within the PICs4All project, in effect three types of activities are carried out, each of which has its own characteristics concerning data generation, preservation and accessibility. These clusters are:

* support activities in which aid is offered to (potential) users of PIC-technology.
These activities can comprise:
  * technical and economic feasibility assessments and early support in concept design (and, in a limited number of cases, actual PIC-design);
  * testing and evaluation of PIC-prototypes and subsequent advice on potential improvements and application boundaries;
  * advice in finding the right support in the PIC-creation and application process from third parties (e.g. design houses, PIC-foundries, PIC-packaging manufacturers and packagers, and PIC test facilities) when required.
* a study of the effectiveness of the PICs4All approach in the product innovation process entered by PIC-users.
* documents generated within the project in the context of enabling and managing the PICs4All project.

In the next section a more extensive description of the data management structures as applied within these separate clusters of activities is provided.

## 2.1 Support activities

### 2.1.1 _Scope_

The key activities of the PICs4All ASCs are directed at supporting potential applicants of PIC-technology in innovative new products, or for use in further applied or scientific research in various technical and scientific fields. PIC-technology, although its basic principles have existed for over 30 years, has always been of subordinate interest compared to (silicon) electronic integrated circuits, due to the complexity of manipulating photons. Additionally, electronic IC-technology has evolved during the last 4 or 5 decades at an enormous pace, pushed by Moore's law of ever increasing capacity of electronic ICs; virtually all problems could be solved using electronic ICs. However, recently PICs have shown profound advantages in various areas of technology as compared to electronic ICs in terms of energy efficiency, operation speed and functionality. These advantages have boosted, and will keep boosting, the use of PICs in new applications. Clear advantages of the use of photons instead of electrons have been shown in telecom and sensing applications in particular. However, many more applications (even unknown applications) could strongly benefit from using photonic ICs in addition to or instead of electronic ICs. PICs4All supports applicants who are not familiar with this technology in the technical and economic assessment, design, manufacturing and evaluation of PICs and PIC-applications. The basic idea for PIC-based applications can be provided by the user, but can also arise from a discussion between PICs4All ASCs and users elaborating the technical opportunities of PICs, e.g. in the course of solving a particular problem. However, the basic idea or problem is proprietary to the potential applicant, the client. As yet, in many cases customers have a distinct idea of the application and the underlying technology.

#### 2.1.2 Data management characteristics

As a consequence, in order to provide an attractive proposition to potential applicants and to protect the commercial interests of the client, the ASCs are bound to a Non-Disclosure Agreement (NDA) concluded with their clients. This NDA comprises a confidentiality clause, stating _'not to disclose Confidential Information to any third party without the prior written consent by the Disclosing Party'_. Since the daily practice up to now is that all information exchanged between ASCs and their clients is designated as 'Confidential' due to (potential) IPR-issues, the data generated by ASCs within a consulting trajectory are not accessible to outside parties.
The current practice of the ASCs is to process the data generated in design and evaluation into PIC-designs and PIC-evaluation reports, possibly resulting in re-designs. In this way new products, or components essential to new products, are devised which, in principle, are proprietary to the clients. The ASCs, however, are prepared to disseminate information on new phenomena or basic principles encountered in their consulting trajectories to the public domain whenever possible and allowed. The most appropriate method is to publish relevant results in scientific articles, conference proceedings or educational courses on PIC-technology; as university-based entities, the ASCs have ample experience in using these channels for technical information dissemination. Currently, the raw data generated within each consultancy trajectory are stored on the computers of the ASCs. In general, these data are also preserved at central server facilities, where they are backed up. For none of the ASCs are the data stored in any public repository. An overview of the data file typology, metadata and application software necessary to read and interpret the data files of the ASCs is provided in chapter 4.

## 2.2 Effectiveness study

### 2.2.1 _Scope_

An important question to the principal of the PICs4All Coordination and Support Action, the European Commission, and one of scientific relevance, is whether activities as performed within PICs4All are effective in the creation of new innovative products and other opportunities which have a commercial or scientific impact. To this end the _Innovation, Technology, Entrepreneurship & Marketing (ITEM) group_ of the coordinator TU/e is conducting a study of this effectiveness. One of the main specialities of the ITEM-group is the generation of understanding of the various aspects involved in 'Technology commercialization, incubators, and university knowledge transfer'. The global approach of the study is based on two activities:

1. Study of the numerical aspects of PICs4All activities, e.g. the number of clients contacted by ASCs, who took the initiative, how clients became aware of the services of ASCs, how many contacts responded positively to offerings of consultancy by ASCs, etc. These data are periodically analysed to track the progress of the support activities and to reveal trends and patterns in support activities that have proven to be particularly (in)efficient. Findings will be reported in the deliverables.
2. Interviews with contacted parties and clients in order to reveal the motivations of contacts to accept or decline any offerings of consultancy and support, the rate of success of such support, the level of appreciation by the client of the help provided by ASCs, potential ways to improve support, etc. These data serve to better understand the dynamics and effects of the support processes, including challenges and best practices from an ASC and client perspective. The data will be analysed using qualitative analysis techniques, such as coding, to reveal the underlying dynamics of the support processes as well as common challenges and best practices. Since this data consists of highly confidential, sensitive information, reporting will mainly include aggregate, anonymized findings.

#### 2.2.2 Data management characteristics

The information needed to carry out the first activity is collected by the ASCs in the ZOHO Project Registration System. Every contact made by ASCs with potentially interested parties is registered in a Client/Project database.
This database is created using the cloud-based ZOHO Customer Relationship Management tool. Details of this tool can be found in _Deliverable 5.2 'Project Registration System'_. The complexity of the raw data supporting this part of the study is limited. An example of a first result can be found in Deliverable 4.1 _'Yearly summary report on outreach activities and their impact on new users and application market'_. In this report, the relevant raw data underpinning the conclusions are provided. Currently, no unaided access to the ZOHO database can be provided to outside parties due to:

* licensing issues; every user needs a separate license in order to use ZOHO;
* confidentiality with respect to the contents of the ZOHO database. Especially when a consultancy trajectory is entered, the ZOHO database also contains information about the basic product/component idea of the client, the approach chosen, potential solutions, etc. This information is subject to the NDA concluded between ASCs and their clients and is therefore not open to third parties.

The raw data of the questionnaires filled in by, or interviews held with, contacts and clients are currently kept in the form of text files on the TU/e servers. TU/e does not have an approved data handling policy yet (this is being worked on) but does have a code of conduct containing some data management elements 1 . Since this data consists of confidential, sensitive information, which can only be interpreted in the context of the actual interviews that took place, the data will not be made public and reporting will be based on aggregate, anonymized findings.

## 2.3 Project Management Data

### 2.3.1 _Scope_

Within the course of the PICs4All project, several types of documents are created in support of the management and reporting of the project's activities:

* Project Internal Documents:
  * minutes of WP-leaders meetings and telcos
  * minutes of Progress Meetings and attendance registers
  * internal communication of the PL to the Steering Committee
* Project External Documents:
  * Deliverables and Progress Reports
  * all kinds of PR-materials, Project Presentations, etc.

#### 2.3.2 Data management characteristics

A copy of the abovementioned documents is stored at a file exchange facility (folder) hosted by the Project Office (Berenschot). All files are accessible to all partners. The documents which are intended for public use are, when appropriate, available through the project's website (_www.pics4all.jeppix.eu_). The underlying raw data (mostly handwritten notes or draft versions) are not kept when they have no additional value; e.g. handwritten notes are generally not legible to others than the writer, and draft versions contain mistakes, remarks, etc. which are no longer relevant to any party once the final version is complete. For this reason, no facility to make raw data on project management documents accessible to third parties is used or considered.

# 3 Conclusions

This report provides an overview of the various types of data generated within the PICs4All project. In summary:

* Support activities: various types of data are generated. The data are stored for future reference on the PCs and backup servers of the institutes hosting the ASCs. At this moment these data are not stored on publicly accessible data repositories for two reasons:
  * external data storage in an accessible form is not in the policy of the institutes involved;
  * the data concerning support activities are subject to Non-Disclosure Agreements with clients, since they might contain IPR-sensitive material.
* Effectiveness study: data generated within the study of the effectiveness of the PICS4All CSA in knowledge transfer and in the creation of new market or scientific innovations are collected and stored at the PCs and servers of the TU/e. It will be considered at a later stage which data should be stored and archived in which type of data archive/repository and with which degree of openness. Since the majority of our data are subject to NDAs and IPR, careful selection and appraisal of the data (and the possible mode of storage/archiving) is required. Whenever beneficial, a change in data management policy is considered.
* Project management: no raw data underlying the publicly accessible documents are relevant or available for storage in some kind of publicly accessible data repository.

The currently reported status of data and data management does not preclude a (partly) different approach in the future. The PICS4All consortium appreciates the relevance of open data access, although it has opted out of participation in the Open Data Pilot as promoted by the EU. During the course of the project, open access to data and structures to enable open data access will be considered and applied when not interfering with the IPR interests of the participating parties.

# 4 Appendix: data formats, metadata and data processing tools used by ASCs

In this Appendix, an overview is given per ASC of the data, file types, application programs required to read the files, and other relevant information on data preservation. This mainly concerns data generated in the course of a support activity (cf. section 2.1) and is therefore not open for publication. Currently, the raw data are stored on the computers or servers in accordance with the data preservation and storage policies of the various institutes hosting the ASCs. The available data and data formats of Milan, Cambridge and Paris have not been provided yet.

## 4.1 Aarhus University

<table>
<tr> <th> **Data set reference / name** </th> <th> **Dataset description / goal of the data** </th> <th> **Standards and metadata incl. required software** </th> <th> **Data sharing** </th> <th> **Way of archiving (incl. storage and back-up)** </th> </tr>
<tr> <th> </th> <th> freely accessible? </th> <th> Is sharing of data between project partners possible/useful/carried out? </th> <th> Is sharing with external parties possible/useful/carried out? </th> <th> </th> </tr>
<tr> <th> </th> <th> </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> </th> </tr>
<tr> <td> ZOHO database </td> <td> Client data (company name, contact persons, address), project data (questions, ideas and concepts, project progress). </td> <td> Zoho CRM software </td> <td> No </td> <td> Yes, data is shared. Most importantly the company name and contact person. Secondly the most promising fields/applications are identified. </td> <td> Project data is not shared when explicitly prohibited by a bilateral NDA. </td> <td> </td> <td> No, sharing company/person data is not considered relevant. Data on support projects is subject to NDA and thus not shared. </td> <td> Backup automatically taken care of by the cloud-based ZOHO application. </td> </tr>
<tr> <td> Personal contact database </td> <td> Overview and history of all potential users that have been contacted and/or will be contacted.
</td> <td> The datasets are stored in an Excel database (personal logbook) </td> <td> Excel reader software is freely accessible. </td> <td> Data can be shared on a per-case basis, though not the full database at once. </td> <td> </td> <td> </td> <td> Data sharing in a data repository is at this stage not considered; the large majority of the data cannot be disclosed because of confidentiality/NDA restrictions. </td> <td> AU backup server. </td> </tr>
<tr> <td> Feasibility studies </td> <td> Feasibility studies contain conceptual ideas but, sometimes, also quantified simulations. These studies can also take the form of actual grant proposals. </td> <td> These reports are typically compiled into a text-based document, for readability. Raw datasets could be produced too. </td> <td> Documents are typically in pdf or doc format. Raw datasets can be in ascii or binary, and are often only – or most conveniently – accessible through the software in which they were generated. These packages can be very expensive, and the data are hence not freely accessible. </td> <td> Yes, on a per-case basis. (Potential) IP needs to be addressed, as feasibility studies might contain novel ideas. </td> <td> </td> <td> </td> <td> No, unless in the form of an official report or publication. Novel ideas and concepts will remain confidential and will not be shared. </td> <td> AU backup server. </td> </tr>
</table>

## 4.2 National Technical University of Athens

<table>
<tr> <th> **Data set reference / name** </th> <th> **Dataset description / goal of the data** </th> <th> **Standards and metadata incl. required software** </th> <th> **Data sharing** </th> <th> **Way of archiving (incl. storage and back-up)** </th> </tr>
<tr> <th> </th> <th> freely accessible? </th> <th> Is sharing of data between project partners possible/useful/carried out? </th> <th> Is sharing with external parties possible/useful/carried out? </th> <th> </th> </tr>
<tr> <th> </th> <th> </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> </th> </tr>
<tr> <td> Mask Engineer Data Files </td> <td> Script files used to prepare and generate mask layout files. The script files are processed using the Phoenix Software compiler in order to convert them into mask layout files (gds). </td> <td> The file format is .spt and the software needed to execute them is Phoenix MaskEngineer Software. Simple visualization of the files can be done using a text editor. </td> <td> No </td> <td> Sharing of data between project partners is possible and useful. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> Sharing of data with external parties is possible if the data is not protected by an NDA with the respective customer. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> The data are permanently stored on NTUA servers, and they are regularly backed up in order to ensure prolonged preservation and protection against mistakes or malicious actions. </td> </tr>
<tr> <td> OptoDesigner Data Files </td> <td> Script files used to perform mode (FMM, FD) and propagation simulations (BPM, BEP, FDTD). The script files are processed using the Phoenix Software compiler in order to parameterize and run the simulation engine and to organize and save the respective results. </td> <td> The file format is .spt and the software needed to execute them is Phoenix MaskEngineer Software.
Simple visualization of the files can be done using a text editor, while the results are saved in either .txt or .bmp format. </td> <td> No </td> <td> Sharing of data between project partners is possible and useful. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> Sharing of data with external parties is possible if the data is not protected by an NDA with the respective customer. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> The data are permanently stored on NTUA servers, and they are regularly backed up in order to ensure prolonged preservation and protection against mistakes or malicious actions. </td> </tr>
<tr> <td> ASPIC Data Files </td> <td> Data files used to perform circuit-level simulation of integrated photonic circuits. The files are used to view, edit and simulate the desired structures, as well as save and access the simulation results. </td> <td> The file format is .apc and the software needed to execute them is ASPIC Filarete. The results can be saved in either .txt, .mat or .csv format. </td> <td> No </td> <td> Sharing of data between project partners is possible and useful. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> Sharing of data with external parties is possible if the data is not protected by an NDA with the respective customer. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> The data are permanently stored on NTUA servers, and they are regularly backed up in order to ensure prolonged preservation and protection against mistakes or malicious actions. </td> </tr>
<tr> <td> Lumerical FDTD Data Files </td> <td> Data files used to perform 2D and 3D FDTD simulations of photonic structures. The files are used to view, edit and simulate the desired structures, as well as save and access the simulation results. Furthermore, since the software offers a scripting environment, script files can also be developed and executed. </td> <td> The file format of the layout and the script files is .fsp and .lsf respectively, while the software needed to execute them is Lumerical 3D-FDTD Software. The results can be saved in either .txt or .csv format. </td> <td> No </td> <td> Sharing of data between project partners is possible and useful. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> Sharing of data with external parties is possible if the data is not protected by an NDA with the respective customer. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> The data are permanently stored on NTUA servers, and they are regularly backed up in order to ensure prolonged preservation and protection against mistakes or malicious actions. </td> </tr>
<tr> <td> Lumerical MODE Solutions Data Files </td> <td> Data files used to perform 2D and 3D EigenMode Expansion simulations of photonic structures. The files are used to view, edit and simulate the desired structures, as well as save and access the simulation results. Furthermore, since the software offers a scripting environment, script files can also be developed and executed. </td> <td> The file format of the layout and the script files is .lms and .lsf respectively, while the software needed to execute them is Lumerical MODE Solutions Software. The results can be saved in either .txt or .csv format. </td> <td> </td> <td> Sharing of data between project partners is possible and useful.
It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> Sharing of data with external parties is possible if the data is not protected by an NDA with the respective customer. It can be readily done either by e-mail or through NTUA's ftp server </td> <td> </td> <td> The data are permanently stored on NTUA servers, and they are regularly backed up in order to ensure prolonged preservation and protection against mistakes or malicious actions. </td> </tr>
</table>

## 4.3 Technical University of Berlin

<table>
<tr> <th> **Data set reference / name** </th> <th> **Dataset description / goal of the data** </th> <th> **Standards and metadata incl. required software** </th> <th> **Data sharing** </th> <th> **Way of archiving (incl. storage and back-up)** </th> </tr>
<tr> <th> </th> <th> freely accessible? </th> <th> Is sharing of data between project partners possible/useful/carried out? </th> <th> Is sharing with external parties possible/useful/carried out? </th> <th> </th> </tr>
<tr> <th> </th> <th> </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> </th> </tr>
<tr> <td> Measurement Data </td> <td> Data generated in characterization of PICs </td> <td> Text files, hdf5 files </td> <td> yes </td> <td> </td> <td> Confidential with client </td> <td> </td> <td> Confidential with client </td> <td> TUB internal backup system </td> </tr>
<tr> <td> Simulation Data </td> <td> Raw data generated by, or serving as input files for, PIC-simulation programmes. </td> <td> Text files, hdf5 files </td> <td> yes </td> <td> </td> <td> Confidential with client </td> <td> </td> <td> Confidential with client </td> <td> TUB internal backup system </td> </tr>
<tr> <td> Layout Data </td> <td> Raw data generated by, or serving as input files for, PIC-design and layout programmes. </td> <td> Text files, GDSII files </td> <td> yes </td> <td> </td> <td> Confidential with client </td> <td> </td> <td> Confidential with client </td> <td> TUB internal backup system </td> </tr>
<tr> <td> ZOHO Questionnaire </td> <td> Client data (company name, contact persons, address), project data (questions, ideas and concepts, project progress). </td> <td> HTML </td> <td> yes </td> <td> ZOHO System </td> <td> </td> <td> </td> <td> ZOHO contains confidential information </td> <td> Stored online in a cloud facility. </td> </tr>
<tr> <td> Conference Talks, Posters </td> <td> Elaborated data meant for publishing </td> <td> PDF </td> <td> yes </td> <td> Published via conference proceedings </td> <td> </td> <td> Published via conference proceedings </td> <td> </td> <td> TUB internal backup system </td> </tr>
</table>

## 4.4 Eindhoven University of Technology

<table>
<tr> <th> **Data set reference / name** </th> <th> **Dataset description / goal of the data** </th> <th> **Standards and metadata incl. required software** </th> <th> **Data sharing** </th> <th> **Way of archiving (incl. storage and back-up)** </th> </tr>
<tr> <th> </th> <th> freely accessible? </th> <th> Is sharing of data between project partners possible/useful/carried out? </th> <th> Is sharing with external parties possible/useful/carried out? </th> <th> </th> </tr>
<tr> <th> </th> <th> </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> If yes, how can / is this done? </th> <th> If no, why not?
</th> <th> </th> </tr>
<tr> <td> PICs4All project documentation </td> <td> Project minutes, deliverables, milestones, communication kit, promotional materials </td> <td> text, pdf, presentation, mail </td> <td> yes </td> <td> Yes, it is essential. The Berenschot server is used. </td> <td> </td> <td> Part of the documents are shared using the public PICs4All website or distributed via mailing. </td> <td> </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> ZOHO database </td> <td> Database of scouted institutions; keeps track of the ASC activities </td> <td> </td> <td> </td> <td> Yes, the database is shared among ASCs. Each ASC has an account to the ZOHO system. </td> <td> </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Cloud-based software and data file. </td> </tr>
<tr> <td> Techno-economic feasibility </td> <td> Conceptualization of ideas and assessment of using PICs </td> <td> text, pdf, presentation, mail, spreadsheet </td> <td> yes </td> <td> Yes, on a case-by-case basis, as exchange of the confidential information is an issue. Sharing of the information shall be approved by the parties involved. </td> <td> </td> <td> Sharing is possible, after approval of the parties, but unlikely to happen (IP issues). Most likely in the form of a summary for formal deliverable reports. </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> Simulation data </td> <td> System/device performance assessment </td> <td> simulation files, presentation, images </td> <td> yes (simulation software might be required) </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> PIC layout </td> <td> Device layout ready for fabrication </td> <td> gds/cif file; scripting language file </td> <td> yes (PIC layout software might be required) </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> Measurements data </td> <td> Functionality and device operation assessment </td> <td> txt, pdf, combined into a report - pdf or presentation </td> <td> yes </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> Application notes </td> <td> Application notes that originate from successful PIC implementations </td> <td> text, pdf </td> <td> yes </td> <td> Yes. To share examples and experience. </td> <td> </td> <td> Yes. By publishing on the PICs4All website and consortium partners' websites; releasing via newsletter. </td> <td> </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
<tr> <td> Database of potential users </td> <td> Database with contact information for outreach and scouting purposes </td> <td> spreadsheet </td> <td> yes </td> <td> On request.
</td> <td> </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will likely not allow data sharing because of confidentiality issues. </td> <td> Backup on local PC, backup on TU/e shared drive </td> </tr>
</table>

## 4.5 Universitat Politècnica de València

<table>
<tr> <th> **Data set reference / name** </th> <th> **Dataset description / goal of the data** </th> <th> **Standards and metadata incl. required software** </th> <th> **Data sharing** </th> <th> **Way of archiving (incl. storage and back-up)** </th> </tr>
<tr> <th> </th> <th> freely accessible? </th> <th> Is sharing of data between project partners possible/useful/carried out? </th> <th> Is sharing with external parties possible/useful/carried out? </th> <th> </th> </tr>
<tr> <th> </th> <th> </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> If yes, how can / is this done? </th> <th> If no, why not? </th> <th> </th> </tr>
<tr> <td> PICs Application examples </td> <td> List of application examples where PICs can be used. This will be used to make potential users aware of PIC technology in their area of study or business </td> <td> application examples in areas of study/business (.doc, .ppt, microsoft word and powerpoint or compatible programmes) </td> <td> Yes </td> <td> e-mail, project internal document directory (Acronis) hosted by Berenschot </td> <td> </td> <td> Through presentations, e-mails, conference papers, social networks,… </td> <td> </td> <td> This documentation could be stored in a folder at the exchange directory in Acronis </td> </tr>
<tr> <td> platform capabilities </td> <td> This data describes the main features and performance of different technology platforms and will help determine the platform to be used to meet the user's application requirements </td> <td> losses, wavelengths, bandwidths, resolution, dimensions,… (.xls, microsoft excel or compatible programmes) </td> <td> Yes </td> <td> e-mail, project internal document directory (Acronis) hosted by Berenschot </td> <td> </td> <td> Through presentations, e-mails, conference papers, social networks,… </td> <td> </td> <td> They are included in presentations in Acronis </td> </tr>
<tr> <td> Fabrication costs for different fab platforms </td> <td> This will help users to evaluate the techno-economic feasibility of turning their application into a PIC.
</td> <td> volumes, chip dimensions, technology platforms,… (.xls, microsoft excel or compatible programmes) </td> <td> Yes </td> <td> e-mail, project internal document directory (Acronis) hosted by Berenschot </td> <td> </td> <td> Through presentations, e-mails, conference papers, social networks,… </td> <td> </td> <td> They are included in presentations in Acronis </td> </tr>
<tr> <td> Potential users database </td> <td> Companies and organizations are included in this database as potential users of PIC technology in order to be scouted during the project </td> <td> company/organization name, location, area of study/business, website, email… (.xls, microsoft excel or compatible programmes) </td> <td> No </td> <td> Shared on the project internal document directory (Acronis) hosted by Berenschot </td> <td> </td> <td> </td> <td> Confidential information </td> <td> Stored on the computers of the ASCs </td> </tr>
<tr> <td> Applications of scouted users </td> <td> to help us determine the techno-economic feasibility based on user requirements/capabilities </td> <td> new or existing, area of interest… (.doc, microsoft word or compatible programmes) </td> <td> No </td> <td> depends on the user </td> <td> Confidential information </td> <td> </td> <td> users will probably want to avoid sharing them </td> <td> Stored on the computers of the ASCs </td> </tr>
<tr> <td> technical skills of users </td> <td> to evaluate whether the user can proceed by itself or needs support </td> <td> modelling and simulation, design, packaging, characterization </td> <td> No </td> <td> depends on the user </td> <td> </td> <td> depends on the user </td> <td> </td> <td> Stored on the computers of the ASCs </td> </tr>
<tr> <td> database with agents of the PIC value chain </td> <td> this database contains the names of the different agents in the PIC value chain in order to redirect the users, if needed, according to their application requirements and skills </td> <td> company name, location, area of study/business,… (.xls, microsoft excel or compatible programmes) </td> <td> yes </td> <td> e-mail, Acronis </td> <td> </td> <td> e-mail, presentations, social networks,… </td> <td> </td> <td> Acronis </td> </tr>
<tr> <td> User's details obtained through the PICs4All website </td> <td> A contact details file is generated from users that download the techno-economic template from the PICs4All website. It is used to scout the user in the near future. </td> <td> company name, location, area of study/business,… (.txt generated automatically, microsoft word or compatible programmes) </td> <td> No </td> <td> This information is shared between the project coordinator, the WP3 leader and the Spanish ASC representative responsible for the website. </td> <td> Confidential information </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will most likely not allow sharing of their data because of confidentiality/NDA issues. </td> <td> The generated file is stored in a private folder on the website server. It is also sent to several e-mail addresses to notify of new interest in a PIC development. The back-up is performed at the same time as the website back-up </td> </tr>
<tr> <td> NDA between ASC and organization / end-user </td> <td> It will contain the terms of the NDA between a given user and the ASCs concerning the user's application.
</td> <td> (.doc, microsoft word or compatible programmes) </td> <td> No </td> <td> the possibility of sharing depends on the user (confidential information) </td> <td> Confidential information </td> <td> </td> <td> This document doesn't concern third parties; confidential information </td> <td> All the documentation concerning a given company/organization will be stored in a folder named with the company/organization name </td> </tr>
<tr> <td> User's application techno-economic study </td> <td> This data considers the technical and economic features of the application requirements and it will be used to make a decision on whether or not to continue with the integration. </td> <td> technical and economic features (.xls, microsoft excel or compatible programmes) </td> <td> No </td> <td> the possibility of sharing depends on the user (confidential information) </td> <td> Confidential information </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will most likely not allow sharing of their data because of confidentiality/NDA issues. </td> <td> All the documentation concerning a given company/organization will be stored in a folder named with the company/organization name. </td> </tr>
<tr> <td> user's application design </td> <td> In some cases ASCs will help users with the design development of the application: PIC layout, characterization setup, packaging design. </td> <td> models and simulations, layouts, characterization setup designs, packaging designs. (Phoenix, VPI, Lumerical software, Filarete, Photon Design, Luceda Photonics, …) </td> <td> No </td> <td> the possibility of sharing depends on the user (confidential information) </td> <td> Confidential information </td> <td> </td> <td> Data sharing is dependent on the user/client's approval. Most users/clients will most likely not allow sharing of their data because of confidentiality/NDA issues. </td> <td> All the documentation concerning a given company/organization will be stored in a folder named with the company/organization name. </td> </tr>
<tr> <td> PICs characterization measurements </td> <td> This includes the characterization measurements performed on the fabricated PIC, used to validate the user's application requirements. </td> <td> characterization measurements (.doc) </td> <td> No </td> <td> </td> <td> Probably, users want to keep them initially private </td> <td> </td> <td> Probably, users want to keep them initially private. </td> <td> All the documentation concerning a given company/organization will be stored in a folder named with the company/organization name </td> </tr>
<tr> <td> Events contribution </td> <td> This data comprises the papers and documentation published by the partners in different events: workshops, conferences,... </td> <td> </td> <td> Yes </td> <td> e-mail, project internal document directory (Acronis) hosted by Berenschot </td> <td> </td> <td> E-mails, presentations, papers,… </td> <td> </td> <td> This information is intended to be published at conferences, workshops,… and it is also saved on the Berenschot server. </td> </tr>
</table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0282_VaVeL_688380.md
# Executive summary

This document includes information about the data sources that the VaVeL consortium will work and conduct research on. More specifically, for each data source the partners have defined a data management plan. The plan consists of information about legal issues, privacy, infrastructure changes, archiving, maintenance, standards and accessibility. The document will be regularly updated as more information becomes available and data issues are resolved. Please check the website of the project (www.vavel-project.eu) under the deliverables section for updates. Early on, the consortium agreed to make every effort to provide open access to as many datasets as possible. This document reflects this continuous effort.

Document Information

<table>
<tr> <th> Contract Number </th> <th> H2020-688380 </th> <th> Acronym </th> <th> VaVeL </th> </tr>
<tr> <td> Name </td> <td> Variety, Veracity, VaLue: Handling the Multiplicity of Urban Sensors </td> </tr>
<tr> <td> Project URL </td> <td> http://www.vavel-project.eu/ </td> </tr>
<tr> <td> EU Project Officer </td> <td> First Name - Last Name </td> </tr>
<tr> <td> Deliverable </td> <td> D1.3 </td> <td> Data Management Plan </td> </tr>
<tr> <td> Work Package </td> <td> Number </td> <td> WP1 </td> </tr>
<tr> <td> Date of Delivery </td> <td> 31/05/2016 </td> <td> Actual </td> <td> 31/05/2016 </td> </tr>
<tr> <td> Status </td> <td> Final </td> </tr>
<tr> <td> Nature </td> <td> Report </td> </tr>
<tr> <td> Distribution Type </td> <td> Public </td> </tr>
<tr> <td> Authoring Partner </td> <td> National and Kapodistrian University of Athens </td> </tr>
<tr> <td> QA Partner </td> <td> IBM </td> </tr>
<tr> <td> Contact Person </td> <td> Ioannis Katakis </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Dimitrios Gunopulos </td> <td> [email protected] </td> </tr>
<tr> <td> </td> <td> Phone </td> <td> </td> <td> Fax </td> <td> </td> </tr>
</table>

List of Contributors: Ioannis Katakis (UoA), Dimitrios Gunopulos (UoA), Jaroslaw Legierski (OPL), Izabella Krzeminska (OPL), Robert Kunicki (CoW), Jakub Marecek (IBM), Maggie O’Donnell (DCC), Aaron O’Connor (DCC).

Project Information

This document is part of a research project funded by the Horizon 2020 programme of the Commission of the European Communities as project number 688380. The Beneficiaries in this project are:

<table>
<tr> <th> No. </th> <th> Name </th> <th> Short Name </th> <th> Country </th> </tr>
<tr> <td> 1 </td> <td> National and Kapodistrian University of Athens </td> <td> UoA </td> <td> Greece </td> </tr>
<tr> <td> 2 </td> <td> Technische Universität Dortmund </td> <td> TUD </td> <td> Germany </td> </tr>
<tr> <td> 3 </td> <td> Technion - Israel Institute of Technology </td> <td> Technion </td> <td> Israel </td> </tr>
<tr> <td> 4 </td> <td> Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. </td> <td> Fraunhofer </td> <td> Germany </td> </tr>
<tr> <td> 5 </td> <td> IBM Ireland Limited </td> <td> IBM </td> <td> Ireland </td> </tr>
<tr> <td> 6 </td> <td> AGT International </td> <td> AGT GROUP (R&D) GMBH </td> <td> Germany </td> </tr>
<tr> <td> 7 </td> <td> Orange Polska S.A.
</td> <td> OPL </td> <td> Poland </td> </tr>
<tr> <td> 8 </td> <td> Dublin City Council </td> <td> DCC </td> <td> Ireland </td> </tr>
<tr> <td> 9 </td> <td> City of Warsaw </td> <td> CoW </td> <td> Poland </td> </tr>
<tr> <td> 10 </td> <td> Warsaw University of Technology </td> <td> WUT </td> <td> Poland </td> </tr>
</table>

# Introduction

In this document the VaVeL consortium presents information about the data that will be exploited in the context of the project. It is by design that the major data providers are also members of the consortium. These are:

* the Dublin City Council, which will provide traffic, public transport, weather and video data;
* the City of Warsaw, which will provide public transport data, data on emergency calls and citizen reporting;
* Orange Polska, which will provide subscribers' location data.

Figure 1: Real-Time Data from the City of Dublin. Figure 2: Real-Time Data from the City of Warsaw.

It is important to note that most of the above data sources will be provided in _real-time_. This document serves as a management plan for the above data sources. More specifically, it addresses the following issues and questions.

_Meta-data_: Details about the meta-information that accompanies our data (if available).

_Standards_: We mention any standards that are followed by the data or by the way the consortium or the data providers provide access to the data.

_Infrastructure Improvements_: The infrastructure providing the data is as important as the data themselves. Hence, we present information on necessary infrastructure changes that were required in order to improve any data-related aspects (accessibility, information richness, volume, etc.).

_Quality_: Data veracity is one of the main objectives of VaVeL. We provide brief information about the consortium's efforts to address data quality issues if necessary. Depending on the case, 'quality' might imply cleaning, pre-processing, adding meta-data, transforming to a more convenient format or providing easier access.

_Accessibility_: We explicitly describe the access level provided for each data source for various user groups (consortium, public, etc.). On top of that, we outline the technical means that are necessary to access the data. We also report on our efforts to make the data easier to discover.

_Assessable and Intelligible_: We describe the means that we provide in order to make the data easier to use and to understand their content and value.

_Legal Issues & Privacy_: The consortium provides details about legal issues related to each dataset as well as the path to resolving them.

_Maintenance Plan_: In this section we will describe the maintenance plan for each dataset. More specifically, we will discuss archiving of historical data and the potential for maintaining/utilizing the data after the end of the project.

Some of the above items were inspired by the document “Guidelines on Data Management in Horizon 2020”, published by the European Commission, Directorate-General for Research & Innovation (Version 1.0, 15 February 2016).

VaVeL’s Open Data Strategy and History. The consortium is willing to publicly share and make easily accessible and discoverable as many datasets as possible. Many data sets are already available online (see the following sections) and every effort will be made to make even more data sources accessible. On top of that, the consortium intends to make available tools that analyze urban data. More importantly, the consortium has a history of publishing open data.
Dublinked (see Figure 3a) is a web platform hosting multiple data resources originating from Dublin. The City of Warsaw, in turn, along with its technical partner (Orange), has a history of making APIs for processing and accessing data open (api.um.warszawa.pl; see Figure 3b).

Figure 3: VaVeL’s history in publishing data: (a) the Dublinked website in Dublin; (b) open APIs in Warsaw.

Open Data Portals. On top of the above, the VaVeL consortium is currently investigating the exploitation of additional portals and ways to disseminate, archive, register and index its datasets and APIs in order to make the resources more discoverable. These include the following:

* the European Union Open Data Portal ( http://data.europa.eu/euodp/en/data), where many European organizations archive open data sets;
* the Programmable Web ( http://www.programmableweb.com/apis), where more than 15,000 APIs are indexed. This repository is especially suitable for the APIs available from the City of Warsaw.

# Dublin City Council Data

## Data from a Traffic Management System (SCATS)

The Sydney Co-ordinated Adaptive Traffic System provides information on vehicular traffic at fixed sensor locations as spatio-temporal time series. The SCATS data are produced by aggregating the primary source data that are collected by the Dublin SCATS traffic sensor monitoring system. The primary data are given in the _Strategic Monitoring (SM)_ format. Each sensor sends messages with varying frequency (depending on the location, conditions and other factors). The SM format specifies the message parameters. In practice, data are imported from two sources. For a period of time in 2012, the data have been recorded 1 as a sequence of the following tuples:

* _streetSegId_: a unique identifier for a street segment,
* _armNumber_: an identifier for the arm on a street segment,
* _armAngle_: bearing of the arm,
* _gpsArm_: GPS position 20 meters into the arm,
* _gpsCentroid_: GPS position of the centroid of the intersection,
* _aggerateCount_: aggregated vehicle volume count on the arm,
* _flow_: flow ratio calculated as the volume divided by the highest volume that has been measured in a sliding window of a week.

These samples are captured at 6-minute intervals. The more recent data from 01/01/2013 onwards are sampled every minute and are provided by DCC and IBM as a sequence of the following items:

* _year_, _month_, _day_, _hour_, _minute_: denoting the timestamp,
* _site_: measurement location,
* _strategicApproachLink_, _isLink_,
* _detector index_: index of the detector,
* _degreeOfSaturation_: flow/capacity,
* _flow_: current flow value.

These samples are used in conjunction with a file which contains the coordinates. The _detector index_ from the sequence refers to the _lane number_ in the detectors.csv file. These messages, in addition to the information that is maintained after the aggregation to the SCATS format, include additional system information that is not used in our analysis.

This dataset is a sequence of tuples (_z_, _m_, _t_), where _z_ is a geographic location of the observation (the sensor position), _m_ is a metric and _t_ is an integer. The location is either the _detector index_, or a vector consisting of a number of elements, including the GPS coordinates of the detector. The metric _m_ contains:

* _aggerateCount_: aggregated vehicle volume count on the arm,
* _flow_: flow ratio calculated as the volume divided by the highest volume that has been measured in a sliding window of a week.
The integer _t_ is the timestamp of the 5-minute interval in POSIX time, i.e. the number of microseconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), 1 January 1970.

Data Collection. SCATS Region has automated collection of operational and performance data. Traffic counts are collected on a lane-by-lane basis wherever detectors are installed. Collected data can be sent to the SCATS Central Manager for backup. If there is a failure in the communications with the SCATS Central Manager, SCATS Region maintains a queue of data until the communications are restored. This ensures that there is no loss of data on the SCATS Central Manager. The SCATS Central Manager manages the connection of up to 64 SCATS Regions (8 regions are connected on the DCC system) and provides a global view of the whole system. SCATS Region is the software that is used to manage the traffic signal sites in a region. SCATS primarily manages the dynamic timing of signal phases at traffic intersections. The system uses sensors at each traffic intersection to detect vehicle presence in each lane and pedestrian demands. The vehicle sensors are inductive loops installed beneath the road surface.

<table>
<tr> <th> Metadata </th> <th> XY coordinate data for SCATS intersections. Traffic volumes for intersections. SCATS Picture is the application that is used to create or modify the site location details and site graphics stored in the SCATS Central Manager database. Metadata for the site is stored in the LX files. </th> </tr>
<tr> <td> Standards </td> <td> SCATS Access version 6.9.2, Copyright 2014 Roads and Maritime Services. The SCATS proprietary format is the property of RMS. The format of the data that has been passed on to the consortium can currently be made available to 3rd parties. The SCATS data stream is provided in JSON format. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> To add resilience, the SCATS Management System has been changed from a physical to a virtual environment. (Diagram: SCATS-CMS Virtualised Environment.) </td> </tr>
<tr> <td> Quality </td> <td> The SCATS data that is collected is 100% accurate in relation to the data that it receives from its sensors. </td> </tr>
<tr> <td> Accessibility </td> <td> Information contained in the SCATS Access version 6.9.2 documentation may be of a commercially sensitive nature and must not be given to any individual or organisation without prior written consent from Roads and Maritime Services. The SM data from the Dublin system has been supplied to IBM for processing and translating to an open format which can then be of use to the consortium. Access to the SCATS data is via an AWS cloud-based structure. For the VaVeL project, the inclusion of other data sources from SCATS will be explored to assess their validity for inclusion as a data stream to assist in automatic incident detection. Using AWS brought the following advantages: compute services based on pay-as-you-use rates; 24x7 management of servers up to the OS layer, including a regular back-up schedule (all operating system licenses are built into the price); high resilience, as it is running over 2 Availability Zones (AZ) and a storage area in AWS S3 (storage bucket); and a SCATS data stream provided in JSON. (Diagram: Cloud Services Architecture.) </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Associated software produced and/or used in the project may be assessable by, and intelligible to, third parties in contexts such as scientific scrutiny and peer review (e.g.
minimal datasets are handled together with scientific papers for the purpose of peer review, and data are provided in a way that allows judgments to be made about their reliability and the competence of those who created them). </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> The Dublin City Council Law Department has advised that they have no issue with the release of the data. Data stored in the SCATS application logs relates to system changes. No personal/confidential data is stored. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> The SCATS system is under an annual maintenance contract based on the software licences used by DCC. This is to ensure that the system has adequate support to maintain the level of service required. SCATS data backup files for configuration backups have two sub-directories: LX files, containing daily backups of SCATS data, and RAM data backups, which are also created daily. SCATS Region collects data and stores it in daily files. Each file includes the date and time at which the data was collected and the data itself. The collection of data occurs automatically. The data is retained for the period specified when configuring SCATS Region. The default is 365 days. A backup and archival system is in place to ensure that old data is still accessible outside the SCATS Region's specified data retention period. Metadata for the site is stored in the LX files. </td> </tr>
</table>

Table 1: SCATS DATA - Management Plan

## Public Transport Data

The street map is represented as a graph, where vertices represent important locations in space for a given means of transport (e.g. road intersections for cars). Each edge represents a means of traversing between the vertices, which can involve actual movement (e.g. between two intersections) or waiting (e.g. at a bus stop). The graph is illustrated in Figure 4.

Figure 4: The vertex-based transit graph. Cited verbatim from https://github.com/openplans/OpenTripPlanner/wiki/GraphStructure.

Figure 5: An illustration of a delay function, which gives the travel-time along a segment of a road as a function of its utilisation, i.e. the ratio of the number of concurrent users to the maximum thereof. (Axes: utilisation vs. scaled travel time; curves: piecewise-linear and piecewise-convex.)

In principle, the GPS data are a sequence of vectors y_{z,t}, where _z_ is a traffic object, e.g. a bus with an on-board GPS receiver, and _t_ is an integer, e.g. the POSIX time of the acquisition. The overall data model is rather complex, but closely parallels those used by OpenStreetMap, OpenTripPlanner, and the General Transit Feed Specification; we hence direct the reader to the reference documentation for those. Our custom extensions to the standard format consist of:

* the travel-time estimates, which correspond to the weights of the edges in the graph,
* the altitude data, which correspond to weights of the vertices in the graph.

The travel-time estimates are stored as delay functions and vehicle count data. A delay function gives the travel-time along a segment of a road as a function of its utilisation, i.e. the ratio of the number of concurrent users to the maximum thereof. See Figure 5 for an example, and the sketch below for how such a function can be evaluated. The delay functions are computed from the vehicle-count data (SCATS) and traces of vehicle movement (Bus GPS) described above. The vehicle GPS traces are imported from three very different data sources, even in the case of Dublin.
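To make the delay-function representation concrete, the following is a minimal sketch of how a piecewise-linear delay function, stored as a list of (utilisation, scaled travel-time) breakpoints, could be evaluated. The breakpoint values and the convention that the scale multiplies a reference travel time are illustrative assumptions, not the project's actual implementation.

```python
import bisect

# A minimal sketch of evaluating a piecewise-linear delay function, stored as
# (utilisation, scaled travel-time) breakpoints as illustrated in Figure 5.
# The breakpoint values below are illustrative assumptions, not project data.
BREAKPOINTS = [(0.0, 0.6), (0.5, 0.7), (0.8, 0.85), (1.0, 1.0)]


def travel_time(utilisation, reference_time_s):
    """Return the travel time (seconds) along a segment, given its utilisation
    (concurrent users / maximum users) and a reference travel time, by linear
    interpolation between the breakpoints."""
    u = min(max(utilisation, 0.0), 1.0)      # clamp utilisation to [0, 1]
    xs = [x for x, _ in BREAKPOINTS]
    i = bisect.bisect_right(xs, u)           # first breakpoint strictly above u
    if i == len(BREAKPOINTS):                # u is at the last breakpoint
        return reference_time_s * BREAKPOINTS[-1][1]
    (x0, y0), (x1, y1) = BREAKPOINTS[i - 1], BREAKPOINTS[i]
    scale = y0 + (y1 - y0) * (u - x0) / (x1 - x0)
    return reference_time_s * scale


# Example: a segment with a 60 s reference travel time at 75% utilisation.
print(travel_time(0.75, 60.0))               # 49.5 s with these breakpoints
```

A piecewise-convex variant, as also shown in Figure 5, would differ only in how the scale between breakpoints is interpolated.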
Instead of plain coordinates, there is a more complex data model based on the General Transit Feed Specification. There, a vehicle journey (or “route” in GTFS) is a particular instance of a journeyPattern starting at a given time. A journeyPattern is a sequence of two or more stops. In between each two stops, there are one or more blocks within a trip (or “segments” in GTFS and elsewhere) 2 . Notice that the production time table starts at 6am and ends at 3am in Dublin.

The first source of GPS traces captures the movement of buses in Dublin in the period from 01/02/2012 till 30/04/2012 (except the days 10th till 12th February 2012) and contains the following values: _timestamp_: timestamp in microseconds since 01/01/1970 00:00:00 GMT, _lineId_: bus line identifier, _direction_: a string identifying the direction, _journeyPatternId_: journey pattern identifier, _timeFrame_: the start date of the production time table (in Dublin the production time table starts at 6am and ends at 3am), _vehicleJourneyId_: a given run on the journey pattern, _operator_: bus operator (not the driver), _congestion_: boolean value [0=no, 1=yes], _gpsPos_: GPS position of the vehicle, _delay_: seconds, negative if the bus is ahead of schedule, _blockId_: section identifier of the journey pattern, _vehicleId_: vehicle identifier, _stopId_: stop identifier, _atStop_: boolean value [0=no, 1=yes].

The second source of GPS traces captures the movement of buses in Dublin during a part of November 2012 (06/11/2012 till 30/11/2012) and contains tuples of the following elements: _timestamp_: timestamp in microseconds since 01/01/1970 00:00:00 GMT, _lineId_: bus line identifier, _direction_: a string identifying the direction, _journeyPatternId_: journey pattern identifier, _timeFrame_: the start date of the production time table (in Dublin the production time table starts at 6am and ends at 3am), _vehicleJourneyId_: a given run on the journey pattern, _operator_: bus operator (not the driver), _congestion_: boolean value [0=no, 1=yes], _gpsPos_: GPS position of the vehicle, _delay_: seconds, negative if the bus is ahead of schedule, _blockId_: section identifier of the journey pattern, _vehicleId_: vehicle identifier, _stopId_: stop identifier, _atStop_: boolean value [0=no, 1=yes].

The third source of GPS traces captures the movement of buses in Dublin during January 2013 (01/01/2013 till 31/01/2013) and contains tuples of the following elements: _timestamp_: timestamp in microseconds since 01/01/1970 00:00:00 GMT, _lineId_: bus line identifier, _direction_: a string identifying the direction, _journeyPatternId_: journey pattern identifier, _timeFrame_: the start date of the production time table (in Dublin the production time table starts at 6am and ends at 3am), _vehicleJourneyId_: a given run on the journey pattern, _operator_: bus operator (not the driver), _congestion_: boolean value [0=no, 1=yes], _gpsPos_: GPS position of the vehicle, _delay_: seconds, negative if the bus is ahead of schedule, _blockId_: section identifier of the journey pattern, _vehicleId_: vehicle identifier, _stopId_: stop identifier, _atStop_: boolean value [0=no, 1=yes].

Data Collection. The standardised SIRI Vehicle Monitoring Service reports the current positions of vehicles that are located and monitored in an ITCS. The data receiving client system may use this data for visualisation of the vehicles in a map, in tables, lists or diagrams, or for any other purpose.

<table>
<tr> <th> Metadata </th> <th> XY coordinates of the Dublin Bus stops. Distance and route patterns.
</th> </tr>
<tr> <td> Standards </td> <td> The system uses the SIRI (Service Interface for Real-time Information) protocol. The Service Interface for Real Time Information (SIRI) specifies a European interface standard for exchanging information about the planned, current or projected performance of real-time public transport operations between different computer systems. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> The standardised SIRI services work based on bidirectional communication. For security reasons, a virtual private network (VPN) is established. The exchange of visualisation data starts with the subscription request of the data receiving system (MORTPI). Once the request is done, trip information is transmitted by the data producer (AVLC) to the data receiving system throughout its entire validity period. The method and frequency of repetition is a matter for the data producer, but can be specified by the displaying system in the scope of the subscription. </td> </tr>
<tr> <td> Quality </td> <td> The Service Interface for Real Time Information (SIRI) specifies a European interface standard for exchanging information about the planned, current or projected performance of real-time public transport operations between different computer systems. </td> </tr>
<tr> <td> Accessibility </td> <td> Access to this information has been agreed as per the INSIGHT project and the same platform and access rights will apply for the VaVeL project. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Associated software produced and/or used in the project may be assessable by, and intelligible to, third parties in contexts such as scientific scrutiny and peer review (e.g. minimal datasets are handled together with scientific papers for the purpose of peer review, and data are provided in a way that allows judgments to be made about their reliability and the competence of those who created them). </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Dublin City Council has advised that they have no issue with the use of the bus data and the accumulation and storage of this data by a third party for data distribution. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> The Public Transport Data and the system which provides the data are covered under a maintenance contract. Any changes/additional requirements to the system or to provide access to data will be carried out under the supervision of the maintenance contractor and in accordance with the terms and conditions of the contract, to ensure that the integrity of the system and/or the data provided by the system is not compromised. _Archiving and Preservation:_ The Public Transport Data systems are an integral part of the DCC traffic systems and it is our aim to ensure that the integrity of the system, and access to the system by third parties as agreed, is maintained now and into the future. Any access agreements made for data during the lifetime of this project will be made under the current and subsequent maintenance contracts. These will be reviewed as part of any new maintenance contract and every effort will be made to facilitate access to data now and into the future. Any changes to the system to allow for expansion or upgrade will be made in a manner such that all stakeholders will be consulted and informed of subsequent changes.
</td> </tr>
</table>

Table 2: Public Transport Data - Management Plan

## Closed-Circuit Television Data

Closed Circuit Television (CCTV) has been in use by the Dublin City Council Environment and Transportation department for over 20 years, with 280 camera installations at present. The use of traffic cameras is an essential tool for traffic management in the city in conjunction with an adaptive traffic control system, SCATS. Currently the Traffic Control Centre operators use the CCTV cameras to manually scan the traffic network to detect, verify and manage incidents. Selections of cameras are displayed on the Audio Visual wall and there is a rotation of the entire CCTV camera list on one IP input. Each operator has access to Indigo Vision on their desktop, which can be customized to display CCTV combinations as required.

Traffic surveillance is an integral part of the traffic management system, and the closer the time of the detection of an incident is to the time of its occurrence, the greater the impact the traffic control centre operator can have in effectively managing it. It would also be in the scope of the research to assess how this CCTV data could be combined with other sensor data from SCATS, weather data and public transport data in detecting incidents on the traffic network.

The Traffic CCTV system consists of 2 backend systems running side by side, Meyertech Analogue CCTV & Indigo Vision IP CCTV. Every camera in the system is available in both analogue and IP format. This redundancy is to ensure that one system is always available to the Control Centre. The analogue cameras are made available in IP by encoding the stream using Indigo Vision, and the same also applies for IP streams, which are decoded using Indigo Vision hardware and software to make them available to the Meyertech system. All analogue cameras are compressed to IP via an Indigo Vision 9000 encoder to H264 format. The codec can operate at CIF / 2CIF / 4CIF at a variable bandwidth. The current operational stream 1 is set to 2048 kb/s and stream 2, used by the “Mobile Centre” for remote access, is set to 1024 kb/s.

Transmission of images from site is normally by high quality fibre optic cable, using uncompressed digital transmission equipment, ensuring no errors are introduced to the image prior to reaching the station equipment in DCC. Where a site has no fibre transmission available, the analogue camera is compressed on site by an Indigo Vision 9000 codec and transmitted to a fibre point via an NGW Express IP VPN tunnel. The IP from site is transported to the CCTV stack via fibre and the stream is then pointed to an Indigo Vision decoder which allows the video to be viewed on the Meyertech system.

Web images from a selection of cameras are made available on the Dublin City Council website using Fusion Capture software. The Fusion Capture application provides images to the website only if the camera is in the “home position”, which is when the camera is zoomed out. These images are updated every 10 minutes, which is dependent on the number of cameras in the cycle when in the home position, and are not updated if the camera is in use by an operator.

<table>
<tr> <th> Metadata </th> <th> Data on the XY coordinates for the CCTV camera locations has been supplied to IBM and the consortium. </th> </tr>
<tr> <td> Standards </td> <td> Standards operated by DCC are currently ONVIF compliant. An advisory note from the CCTV contractor on ONVIF: there are several different layers and it does not always work seamlessly.
ONVIF standards apply to IP only. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> It should be noted that the Fusion Capture technology was designed 12 years ago. The system design is struggling to keep up with the availability of compatible computer components available today. The system currently runs on Windows XP and has some driver issues with the capture card. No updates or patches are available. For Fusion Capture images it should be noted that several operators have the capability to set a preset; this can be set anywhere, zoomed in or out. Fusion Capture has no capability to know what the camera is looking at or even whether the camera has responded to the request to “Goto” Preset 1. As part of the VaVeL project, DCC will work with the consortium to explore the possibility of using the CCTV cameras as sensors, whereby the cameras can be trained to detect incidents and automatically alert the traffic control operators that there is an incident on the traffic network. This could result in a faster, more efficient response to incidents, which in turn could reduce traffic congestion. To develop this research, video data that captures the scene for different incident types will be used to train algorithms to provide incident detection capability. Discussions are currently underway with the CCTV maintenance contractor to formalise how the requirements of this project will be included under the current maintenance framework agreement and how these will be included and documented to provide data for future use. </td> </tr>
<tr> <th> Quality </th> <th> The Meyertech system provides analogue 1Vpp video images. Indigo Vision takes a 1Vpp image and compresses it to IP (CIF / 2CIF / 4CIF, variable bandwidth). The Fusion Capture compression is done by the card in the XP machine; compression would be low, but details are unknown. </th> </tr>
<tr> <td> Accessibility </td> <td> There is an API available as part of the SDK from Indigo Vision, and this is only released under an NDA. The SDK contains commercially sensitive information and will not be released or distributed for general use. This issue is currently under discussion with DCC and the maintenance contractor, and the option of providing an “isolated” sample of some cameras to the 3rd party until the development is complete is currently being investigated. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> The data produced and/or used in the project is usable by third parties even a long time after the collection of the data. This applies to the CCTV images which are currently available on the DCC website and to the accumulation and storage of these images by a third party from the DCC website. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> The Dublin City Council Law Department has advised that they have no issue with the use of the CCTV images which are currently available on the DCC website and the accumulation and storage of these images by a third party from the DCC website. A select number of cameras are made available to the public on Dublin City Council's traffic home page. Other agencies with access to the cameras include the other three local authorities in the Dublin Region, the Railway Procurement Agency, Dublin Port Tunnel, Dublin Bus and An Garda Siochana. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> CCTV is an integral part of the DCC traffic management infrastructure and it is DCC's policy to maintain and enhance its CCTV system as it has done since 1989.
The CCTV system is covered under a maintenance contract which includes: maintenance of CCTV out-station equipment; maintenance of Indigo Vision and Meyertech equipment located at all remote operator user sites; maintenance of Wireless and Radio Link communication equipment; maintenance of all in-station Traffic Control Centre CCTV equipment including recording equipment, hard drives, fans, filters and monitors; maintenance/cleaning of CCTV equipment, poles, connections, wiring and housing sealing; supply, installation, testing and commissioning of CCTV cameras, encoders, CCTV monitors and work stations, video recording facilities, CCTV masts and poles, mini pillars and cabinets, including all traffic management and safety requirements associated with the works; supply and installation of CCTV camera poles in Dublin City and environs; supply and installation of all communications equipment associated with CCTV; and supply, installation, testing and commissioning of all equipment and software supplied as part of the contract. Any changes or additional requirements to the system, or provision of access to data, will be carried out under the supervision of the maintenance contractor and in accordance with the terms and conditions of the contract, to ensure that the integrity of the system and/or the data provided by the system is not compromised.
_Archiving and Preservation._ The CCTV system is an integral part of the traffic management system and is envisaged to remain so; it is our aim to ensure that the integrity of the system, and access to the system by third parties as agreed, is maintained now and into the future. Any access agreements made for data during the lifetime of this project will be made under the current and subsequent maintenance contracts. These will be reviewed as part of any new maintenance contract, and every effort will be made to facilitate access to data now and into the future. Any changes to the system to allow for expansion or upgrade will be made in a manner such that all stakeholders are consulted and informed of subsequent changes. </td> </tr>
</table>
Table 3: Closed-Circuit Television Data - Management Plan
## Measurements of Weather and Pollution
Ireland's National Roads Authority (NRA) maintains a network of sensor stations around Dublin city, each of which samples a variety of environmental factors at ten-minute intervals. As part of the initial data-collection effort, we have created a tool which pulls information from thirteen of these stations into a central database. At present, our focus is on creating a historical archive for future exploitation rather than providing the data in real time, and as such the data is harvested only once per day; this can, of course, be changed at a later date to account for the project's evolving requirements. The database also contains meta-information about the various data points, allowing human-readable reports to be generated with ease.
The database can be queried using standard SQL. It is currently only accessible from within IBM, but it can be easily migrated to another location as necessary. The full list of stations from which these sensor data are drawn is provided in Table 4, while some of the more interesting points captured by the database are highlighted in Table 5. A visualisation of some of this data is shown in Figure 6.
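Since the archive sits in a standard relational database, it can be interrogated with plain SQL. The sketch below is illustrative only: the table and column names (`readings`, `station`, `code`, `value`, `ts`) are hypothetical, as the actual schema is internal to IBM, and SQLite merely stands in for whichever engine hosts the data.

```python
import sqlite3  # stand-in for whichever SQL engine actually hosts the archive

conn = sqlite3.connect("nra_archive.db")  # hypothetical database file

# Hourly mean rain intensity (code 'RI', mm/h per Table 5) at one station.
# Table and column names are assumed for illustration.
rows = conn.execute(
    """
    SELECT strftime('%H', ts) AS hour, AVG(value) AS mean_ri
    FROM readings
    WHERE station = 'M50 Blanchardstown Master'
      AND code = 'RI'
      AND ts BETWEEN '2016-05-04' AND '2016-05-05'
    GROUP BY hour
    ORDER BY hour
    """
).fetchall()

for hour, mean_ri in rows:
    print(f"{hour}:00  {mean_ri:.1f} mm/h")
```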
Table 4: NRA stations
* Dublin Port Tunnel
* M1 Drogheda Bypass
* M1 Dublin Airport
* M11 Bray Bypass
* M4 Enfield
* M50 Blanchardstown Master
* M50 Blanchardstown Slave
* M50 Dublin Airport
* M50 Sandyford Bypass Tipping Bucket
* M50 Sandyford Master
* M7 Newbridge Bypass
* M7 Portlaoise Bypass
* N81 Tallaght
Table 5: Illustrative NRA datapoints
<table>
<tr> <th> Code </th> <th> Description </th> <th> Unit </th> </tr>
<tr> <td> CL </td> <td> Cloud State </td> <td> Status Code: Clear, Cloud, Cloud and Rain </td> </tr>
<tr> <td> PW </td> <td> Present Weather </td> <td> Status Code: 0 (unobstructed) to 99 (tornado) </td> </tr>
<tr> <td> WL </td> <td> Water Layer </td> <td> mm </td> </tr>
<tr> <td> SL </td> <td> Snow Layer </td> <td> mm </td> </tr>
<tr> <td> IL </td> <td> Ice Layer </td> <td> mm </td> </tr>
<tr> <td> RH </td> <td> Relative Humidity </td> <td> % </td> </tr>
<tr> <td> PR </td> <td> Precipitation Total </td> <td> mm </td> </tr>
<tr> <td> RI </td> <td> Rain Intensity </td> <td> mm/h </td> </tr>
<tr> <td> P </td> <td> Pressure </td> <td> hPa </td> </tr>
<tr> <td> T </td> <td> Air Temperature </td> <td> °C </td> </tr>
<tr> <td> TS </td> <td> Surface Temperature </td> <td> °C </td> </tr>
<tr> <td> VI </td> <td> Visibility </td> <td> m </td> </tr>
<tr> <td> WD </td> <td> Wind Direction </td> <td> ° </td> </tr>
<tr> <td> WS </td> <td> Wind Speed </td> <td> m/s </td> </tr>
</table>
Figure 6: NRA data visualisation. Arrows indicate wind speed and direction; heatmap blobs indicate cumulative rain intensity at each station, in mm/h.
Alternatively, weather data are available through The Weather Company. In January 2016, IBM acquired The Weather Company's B2B, mobile and cloud-based web properties: weather.com, Weather Underground, The Weather Company brand and WSI, its global business-to-business brand. Such data can be accessed, e.g., via wunderground 3 . These can be queried for weather information at a particular coordinate, e.g. Dublin, by posing the following request (the <key> has to be generated in advance by registering on the wunderground website):
http://api.wunderground.com/api/<key>/hourly10day/q/Ireland/Dublin.json
As a result, a JSON object is returned which contains the following fields:
_FCTTIME_ : the time of the weather forecast
_temp_ : the temperature
_condition_ : the weather condition, e.g. "Rain"
_icon_ : an icon to depict on a map, e.g. "rain"
_icon url_ : link to an icon for graphical user interfaces
_humidity_ : humidity in percent
_feelslike_ : the perceived temperature
and many undocumented fields.
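A minimal sketch of issuing this request from Python and reading a few of the documented fields is given below; it assumes the third-party `requests` package and a valid key, and the nesting of the returned object (an `hourly_forecast` list whose `temp` entries carry metric/english sub-fields) is recalled from the API documentation of the time and should be verified before use.

```python
import requests

API_KEY = "<key>"  # generated by registering on the wunderground website
url = ("http://api.wunderground.com/api/"
       f"{API_KEY}/hourly10day/q/Ireland/Dublin.json")

forecast = requests.get(url, timeout=30).json()

# Walk the hourly entries and print the fields described above.
for hour in forecast["hourly_forecast"]:
    print(hour["FCTTIME"]["pretty"],   # time of the forecast
          hour["temp"]["metric"],      # temperature in Celsius
          hour["condition"])           # e.g. "Rain"
```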
<table>
<tr> <th> Metadata </th> <th> The metadata are described above and at https://www.wunderground.com/weather/api/ </th> </tr>
<tr> <td> Standards </td> <td> The standards are described at https://www.wunderground.com/weather/api/ </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None to be disclosed at the moment </td> </tr>
<tr> <td> Quality </td> <td> The data quality is discussed at https://www.wunderground.com/weather/api/ </td> </tr>
<tr> <td> Accessibility </td> <td> IBM has unrestricted access to the complete history of data, current data, and weather forecasts. This access has not been shared with the other partners. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> The intelligibility issues are discussed at https://www.wunderground.com/weather/api/ </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> IBM will make the data available for commercial licensing, aiming for long-term availability into the future. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: IBM aims for long-term preservation of the data into the future. b) Usable beyond the original purpose for which it was collected: No. </td> </tr>
</table>
Table 6: Weather Data - Management Plan
## Social Media
Further input to the system is provided by Twitter Inc., a social network operating a short messaging service. Twitter issues a stream of messages ("tweets") up to 140 characters long, optionally including one or more "hashtags", that is, arbitrary words preceded with a hash character, used to denote topics to which the message relates (e.g., #dublin). Tweets may also include links to websites and other auxiliary data; see Figure 7 for some examples.
_"N3: Heavy delays from the M50 to J3 due to a collision on the R121 at Blanchardstown SC. Traffic is down to one lane in both directions."_
_"M50: Emergency services are at the scene before J5 Finglas. Middle and right lanes now blocked. Traffic almost back to the M1/M50 roundabout"_
Figure 7: Sample Tweets from Live Drive.
The Twitter web application and its public API allow developers to retrieve a substream of messages based on a given set of criteria; specific hashtags, for instance, or tweets produced by a certain user, etc. The stream is a sequence of _tweets_ , which primarily consist of:
_tweetId_ : a unique tweet identifier
_date_ : integer, POSIX time of the tweet publication
_twitterUserId_ : twitter user identifier
_coordinate_ : geo-localization of tweet
_messageText_ : tweet text.
The stream is indexed by hashtag and clustered according to a given set of criteria (e.g. GPS co-ordinates). The Twitter substream generated within a geographical area of interest can be isolated by following relevant users (e.g., @livedrive) and monitoring certain hashtags (e.g., #dublin). Note that the input stream is not limited to users who are already known to the system; all tweets by Twitter users who are publicly tweeting in the area of interest are collected. In more detail, we can access the following fields in the Twitter stream:
_tweetId_ : a unique Tweet ID, assigned by Twitter
_twitterUserId_ : a twitter-ID of the tweeting user. Unique per Twitter account.
_twitterUserScreenName_ : mnemonic user name (login name)
_latitude_ : geographic latitude of sending device
_longitude_ : geographic longitude of sending device
_messageText_ : the actual tweet in raw textual form. It may include non-ASCII characters.
_messageDate_ : timestamp of message sending. Format 'YYYY-MM-DD hh:mm:ss'
_location_ : tweet location place name, for Gazetteer lookups
_countryCode_ : ISO short country code (two characters)
_retweetStatusId_ : referred (tweetId) to embedded retweet (original tweet). 0 if not a retweet, -1 if not set or invalid value.
_isRetweeted_ : boolean flag ('y'/'n') if tweet contained a retweet
_replyStatusId_ : referrer (tweetId) if tweet is-in-reply-to. -1 if not an answer-to tweet.
_replyUserId_ : referrer (twitterUserId) to author of original tweet being answered. -1 if not an answer-to tweet.
_isFavorite_ : Boolean flag ('y'/'n') if tweet was marked as favorite
_followersCount_ : Number of Twitter users currently following tweet author
_followingCount_ : Number of Twitter users the tweet author currently follows
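As a sketch of consuming such a filtered substream, the snippet below uses the third-party tweepy library (its 3.x-era API, which wrapped the Twitter v1.1 streaming endpoints; those endpoints have since been retired, so treat this as illustrative only). The credentials, the numeric user ID standing in for @livedrive, and the Dublin bounding box are placeholders.

```python
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")   # placeholders

class DublinListener(tweepy.StreamListener):
    def on_status(self, status):
        # These attributes correspond to the fields listed above.
        print(status.id, status.user.screen_name,
              status.coordinates, status.text)

stream = tweepy.Stream(auth=auth, listener=DublinListener())
# follow= takes numeric user IDs (placeholder shown), track= matches hashtags,
# locations= is a lon/lat bounding box roughly covering Dublin.
stream.filter(follow=["123456789"], track=["#dublin"],
              locations=[-6.45, 53.20, -6.05, 53.45])
```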
Batch data samples are retrieved from the Twitter API using a spatial query. Additionally, for Dublin, the Live Drive Radio data set results from Twitter messages sent by people driving in Dublin who report traffic hazards to the local radio station.
<table>
<tr> <th> Metadata </th> <th> The metadata are described above and at https://dev.twitter.com/rest/public </th> </tr>
<tr> <td> Standards </td> <td> The standards are described above and at https://dev.twitter.com/rest/public </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None to be disclosed at the moment </td> </tr>
<tr> <td> Quality </td> <td> None to be disclosed at the moment </td> </tr>
<tr> <td> Accessibility </td> <td> In November 2014, IBM Corporation entered into a licensing agreement with Twitter Inc., which allows for unlimited access to the data by IBM. A limited subset of the data is publicly available at https://dev.twitter.com/rest/public </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> See https://dev.twitter.com/rest/public </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> IBM cannot share this access with the consortium. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: Data may be archived by Twitter Inc. b) Usable beyond the original purpose for which it was collected: Within IBM Corporation. </td> </tr>
</table>
Table 7: Social Media Data - Management Plan
# City of Warsaw Data
## Real time trams location
Warsaw operates 25 tram lines, with a total line length exceeding 360 km. The number of trams in use is as follows:
Morning peak: 414
Mid-day: 313
Afternoon peak: 421
The real-time trams location web service exposes information about the geographical location of trams. The data set contains information about all vehicles active at the moment. The data is updated every 15 seconds. This dataset has been released by the Warsaw Trams CoW agency. Its management plan is given in Table 8 below, and the HTTP response parameters are listed after it.
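A minimal polling sketch against this service follows; the endpoint path, resource identifier and response envelope are placeholders, since the exact values are provided in the api.um.warszawa.pl documentation after registration, and the printed fields follow the HTTP response parameters listed after Table 8.

```python
import time
import requests

API_URL = "https://api.um.warszawa.pl/api/action/..."  # placeholder endpoint path
PARAMS = {"apikey": "YOUR_KEY", "resource_id": "..."}   # placeholders from the portal

while True:
    payload = requests.get(API_URL, params=PARAMS, timeout=10).json()
    for v in payload.get("result", []):  # envelope key assumed
        # Field names follow the response parameters listed after Table 8.
        print(v["Time"], v["Lines"], v["Brigade"], v["Lat"], v["Lon"])
    time.sleep(15)  # the service refreshes its data every 15 seconds
```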
<table>
<tr> <th> Metadata </th> <th> (data category from CKAN): real time, trams, online data. Metadata describing this dataset are used only in documentation, as keywords: e.g. real time, trams, online data. The CKAN platform used as middleware for CoW open data exposition supports metadata in the form of RDF, but this functionality is currently not used by the CoW IT department. </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services; polling, with data refreshed every 15 seconds. CoW trams location is exposed using RESTlike Web Services, in the form of the HTTP GET method. Information about trams location is refreshed every 15 seconds and must be retrieved by developers using a request-response model (polling). The geographical coordinates are float numbers compliant with EPSG 4326 (WGS 84). Example: 20.992 for the longitude, 51.242 for the latitude. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> The data caching period on the MUNDO backend was changed from 30 to 15 seconds for the VaVeL project </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members. Some issues have already been resolved. </td> </tr>
<tr> <td> Accessibility </td> <td> Publicly available open data after registration and terms and conditions acceptance. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Open data; registration on api.um.warszawa.pl needed. Terms of use are available at the https://api.um.warszawa.pl website. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. Online data are available via the API. Historical data collected in csv files are available in the CoW Cloud. b) Usable beyond the original purpose for which it was collected: not possible </td> </tr>
</table>
Table 8: Real Time Tram Locations - Management Plan
The HTTP response parameters are listed below:
Time - datetime - timestamp
Lat - float - latitude (GPS)
Lon - float - longitude (GPS)
FirstLine - string - number of the first line realized by the vehicle
Lines - string - the numbers of all lines (for multiline brigades there will be more than one line)
Brigade - string - the number of the brigade
Status - string - task status; can assume the values "RUNNING" or "FINISHED"
LowFloor - bool - indicates if the tram is a low-floor one (1 = yes, 0 = no)
Trams location historical data: archival data are available to the consortium. Data from 21 March 2016 are collected in csv files and available in the CoW Cloud, with the following fields:
Time - datetime - timestamp
Lat - float - latitude (GPS)
Lon - float - longitude (GPS)
FirstLine - string - number of the first line realized by the vehicle
Lines - string - the numbers of all lines (for multiline brigades there will be more than one line)
Brigade - string - the number of the brigade
FirstLine Brigade - concatenation of the two fields (tram ID for a day)
Status - string - task status; can assume the values "RUNNING" or "FINISHED"
LowFloor - bool - indicates if the tram is a low-floor one (1 = yes, 0 = no)
## Bus Data
Warsaw operates 285 bus lines with a total line length of 4379.9 km. The number of buses in use is 1729 (1366 operated by MZA):
Morning peak: 1644
Mid-day: 1035
Afternoon peak: 1619
Historical Data: Archive data from 21 April 2016 are available in the CoW cloud.
Data format: Side Number (vehicle number), unix timestamp, latitude (GPS), longitude (GPS), Line, Brigade
Example: 1525,1461362407,21.170208,52.160407,146,4
<table>
<tr> <th> Metadata </th> <th> CKAN: bus </th> </tr>
<tr> <td> Standards </td> <td> Websocket interface exposed by MZA (Warsaw's Buses Authority), accessible only in the internal CoW network </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members. Some issues have already been resolved. </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. Available only for VaVeL consortium members </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access. Available only for VaVeL consortium members </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 9: Bus Data - Management Plan
<table>
<tr> <th> Metadata </th> <th> CKAN keywords: bus, historical data </th> </tr>
<tr> <td> Standards </td> <td> Flat csv files with bus locations </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> A dedicated data collector was developed </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members. Some issues have already been resolved. </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. Available only for VaVeL consortium members </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access. Available only for VaVeL consortium members. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 10: Bus Historical Data - Management Plan
## 19115 Non-emergency notification system
The 19115 API enables the reporting of various issues to the City by locals and visitors: issues such as failures, defects and non-critical threats concerning e.g. the state of roads, snow removal, damage, acts of vandalism, etc. The API also allows users to obtain information filtered by keys. The information available is:
siebelEventId: Event ID in the city CRM system
deviceType: type of device used to submit the notification
street: notification street name; this field is only used in notifications registered by CRM operators. It is validated against the city's street names dictionary.
street2: notification street name field, only used in notifications submitted by citizens (i.e. notifications generated outside of CRM). Not validated.
district: district of the notification
city: city of the notification
houseNumber: building number of the notification
aparmentNumber: apartment number of the notification
category: notification category
subcategory: notification subcategory (dictionary value, as when reporting)
event: the process of intervention (dictionary value, as when reporting)
description: notification description
createDate: creation date
notificationNumber: notification number
xCoordWGS84: latitude of the notification in the WGS84 standard
yCoordWGS84: longitude of the notification in the WGS84 standard
xCoordOracle: latitude of the notification in the Oracle Spatial standard
yCoordOracle: longitude of the notification in the Oracle Spatial standard
notificationType: INCIDENT ("Awaria/Interwencja"), INFORMATIONAL ("Informacyjne"), COMPLAINT ("Reklamacja"), STATUS ("Status sprawy/zgłoszenia"), PUBLIC INFORMATION ("Wniosek o dostęp do informacji publicznej"), FREEFORM ("Wolne wnioski i uwagi")
statuses: list of notification statuses (changeDate: change date; status: status; description: description; Source: notification source, one of API ("API"), CALL ("CALL"), CKM ("CKM"), MAIL ("MAIL"), MOBILE ("MOBILE"), PHONE ("Phone"), PORTAL ("PORTAL"), SMS ("SMS"), WEB ("Web"), WEBCHAT ("WEBCHAT"), EMPTY ("brak"))
<table>
<tr> <th> Metadata </th> <th> (data category from CKAN): not emergency issue, real time, online data </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, data from the Siebel CRM database </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> Data exposed during the MUNDO project, no additional improvements needed </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Publicly available open data after registration and terms and conditions acceptance </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Open data; registration on api.um.warszawa.pl needed. Terms of use are available at the https://api.um.warszawa.pl website. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be utilized as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. Via the API, historical data are available for an approx. 2-month period. b) Usable beyond the original purpose for which it was collected: not possible </td> </tr>
</table>
Table 11: 19115 Data - Management Plan
## Public transport timetables
The public transport timetables data set is managed by Warsaw's Public Transport Authority and stored in a MySQL database. This information is exposed for developers as an Open API by the api.um.warszawa.pl portal in RESTlike Web Services form. The exposed API allows developers to obtain information about timetables and about the bus or tram lines serving selected stops. The API provides three methods. The first of them (getBusstopId) is mandatory for the use of the others and is used to obtain the stop ID identifier. The other two (getTimetable and getLines) are used to obtain data about the lines and timetables related to the stop.
Historical data: The public transport historical timetables data set is managed by Warsaw's Public Transport Authority and stored in a MySQL database. This information is exposed for developers as an Open API by the api.um.warszawa.pl portal in RESTlike Web Services form. The exposed API allows developers to obtain information about timetables and about the bus or tram lines serving selected stops at a selected time in the past.
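A sketch of chaining the three methods is shown below; the endpoint paths, parameter names and response envelope are illustrative placeholders, as the exact calling conventions are given in the portal documentation.

```python
import requests

BASE = "https://api.um.warszawa.pl/api/action"  # portal base; method paths below are placeholders
KEY = "YOUR_API_KEY"                            # issued on registration

def call(method, **params):
    params["apikey"] = KEY
    resp = requests.get(f"{BASE}/{method}", params=params, timeout=10)
    return resp.json()["result"]                # envelope key assumed

# 1) resolve a stop name to its stop ID (mandatory first step)
stop_id = call("getBusstopId", name="Centrum")             # parameter name assumed

# 2) with the ID, fetch the lines serving the stop and one line's timetable
lines = call("getLines", busstopId=stop_id)                # parameter name assumed
timetable = call("getTimetable", busstopId=stop_id, line=lines[0])
print(lines, timetable)
```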
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): transport, timetables </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, data from the ZTM MySQL database exposed by the MUNDO platform </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> Data exposed during the MUNDO project, no additional improvements needed </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Publicly available open data after registration and terms and conditions acceptance. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Open data; registration on api.um.warszawa.pl needed. Terms of use are available at the http://www.ztm.waw.pl/?c=628&l=1 website. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: The actual timetable is available via the API. This data set will be utilized as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 12: Public transport timetables - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): transport, timetables, historical data </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, data from the ZTM MySQL database exposed by the MUNDO platform. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> New data set implemented for the VaVeL project. CKAN extensions were redeveloped and implemented on the MUNDO Data server </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. Available only for VaVeL consortium members after registration and terms and conditions acceptance. Documentation available. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Restricted access, only for VaVeL consortium members </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. Via the API the timetable is available for the past 4 months. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 13: Public transport historical timetables - Management Plan
## Bus & trams stops locations
The Bus & trams stops locations API offers developers information about the current geographical location of bus and tram stops in Warsaw. This dataset has been released by the Public Transport Authority (ZTM).
## Park & Ride
The Park & Ride parking information data set contains information about P&R parking stations in the City of Warsaw for selected geographical areas.
## Bike roads
The Bike roads data set contains information about bike roads in the City of Warsaw.
## Bike stations location (Veturilo)
The Bike stations location (Veturilo) data set contains information about rent-a-bike Veturilo stations in the City of Warsaw for selected geographical areas.
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): bus, tram, stops, geolocation </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> New data set implemented for the VaVeL project. CKAN extensions were redeveloped and implemented on the MUNDO Data server </td> </tr>
<tr> <td> Quality </td> <td> The quality of data is currently analyzed by consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. New data set implemented for the VaVeL project. CKAN extensions were redeveloped and implemented on the MUNDO Data server </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Available only for VaVeL consortium members after registration and terms and conditions acceptance. Additional terms of use are available at the http://www.ztm.waw.pl/?c=628&l=1 website. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. The descriptions of the current bus & tram stop parameters are accessible via the API. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 14: Bus and Trams Stop Locations - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): park & ride, static data, geolocation </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be analyzed by consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access. Terms of use: http://mapa.um.warszawa.pl/warunki.html </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 15: Park & Ride - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): bike roads, static data, vector map, WFS </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be analyzed by consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access.
Terms of use: http://mapa.um.warszawa.pl/warunki.html </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 16: Bike Roads - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): city bike, static data, Veturilo </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data must be analyzed by consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available; the WFS standard documentation is publicly available at http://www.opengeospatial.org/standards/wfs </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 17: City bike stations - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): metro entrances, static data </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be analyzed by the consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. The WFS standard documentation is publicly available at http://www.opengeospatial.org/standards/wfs </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access. Terms of use: http://mapa.um.warszawa.pl/warunki.html </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 18: Metro entrances - Management Plan
## Metro Entrances
The Metro Entrances data set exposes information about metro entrances in Warsaw. The API allows retrieving information for a selected geographical area and filtering data based on defined keys. Access to data is based on the Web Feature Service (WFS) standard defined by the Open Geospatial Consortium, dedicated to the exposition of geospatial information in vector map form: http://www.opengeospatial.org/standards/wfs.
## Address points
The Address points data set offers information on addresses in the City of Warsaw for a selected geographical area. The API allows retrieving information for the selected geographical area and filtering data based on defined keys. Access to data is based on the Web Feature Service (WFS) standard defined by the Open Geospatial Consortium, dedicated to the exposition of geospatial information in vector map form.
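To illustrate the WFS access pattern shared by these layers, the sketch below issues a standard GetFeature request; the endpoint URL and layer name (typeName) are placeholders, while the query parameters themselves follow the OGC WFS 1.1.0 key-value convention.

```python
import requests

WFS_ENDPOINT = "https://example.um.warszawa.pl/wfs"  # placeholder endpoint
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "address_points",                    # placeholder layer name
    # bounding box: min lon, min lat, max lon, max lat, CRS (WGS 84)
    "bbox": "20.90,52.15,21.10,52.30,EPSG:4326",
}
response = requests.get(WFS_ENDPOINT, params=params, timeout=30)
print(response.text[:500])  # a GML (XML) feature collection by default
```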
The Address points dataset is maintained by the Office of Surveying and Cadastre (BGiK) of the City of Warsaw and exposed using a URL (endpoint).
## Streets
This data set exposes information about the location of streets in the City of Warsaw. An API allows retrieving information for a selected geographical area and filtering data based on defined keys. Access to data is based on the Web Feature Service (WFS) standard defined by the Open Geospatial Consortium, dedicated to the exposition of geospatial information in vector map form. The Streets dataset is maintained by the Office of Surveying and Cadastre (BGiK) of the City of Warsaw and exposed using a URL.
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): address points, static data, geolocation </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be analyzed by consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Documentation available. The WFS standard documentation is publicly available at http://www.opengeospatial.org/standards/wfs </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be utilized as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 19: Address points - Management Plan
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): streets, static data, geolocation </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> None </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be assessed by the consortium members </td> </tr>
<tr> <td> Accessibility </td> <td> Public data with restricted access. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> The WFS standard documentation is publicly available at http://www.opengeospatial.org/standards/wfs </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data with restricted access </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be utilized as a source of information for the realization of use cases defined by CoW and stored in a data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: not possible. </td> </tr>
</table>
Table 20: Streets - Management Plan
Figure 8: ZTM-Warsaw's Twitter Account
## City of Warsaw Twitter
The ZTM Public Transport Authority publishes information about public transport on Twitter: https://twitter.com/ztm_warszawa. Primarily this is information about public transport failures, their resolution, and sudden timetable changes resulting from unplanned events (demonstrations, accidents, etc.).
Standards: Data from Twitter is available using the Twitter Open API. Details can be found at: https://dev.twitter.com/overview/documentation.
Twitter offers two Open API sets for developers:
REST API, mostly dedicated to off-line access: https://dev.twitter.com/rest/public
Stream API, dedicated to real-time data access: https://dev.twitter.com/streaming/overview
Access to the Twitter API requires a developer account (OAuth protocol credentials needed) and an application registered in the Twitter developers portal (https://dev.twitter.com/apps).
## RSS services
The Public Transport Authority runs an RSS (Rich Site Summary) service with information about public transport. The RSS contains 5 main categories:
News
Press releases
Changes in public transport
Public procurement
Difficulties
<table>
<tr> <th> Metadata </th> <th> (data categories from CKAN): hashtags </th> </tr>
<tr> <td> Standards </td> <td> RESTlike Web Services, WFS </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> External data provider - N/A </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be assessed by VaVeL's consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Private data with open access. Data publicly available after registration </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> Publicly available open data delivered by the Warsaw Public Transport Authority </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Twitter Terms and Conditions acceptance needed: https://dev.twitter.com/overview/terms/agreement-and-policy </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. </td> </tr>
</table>
Table 21: Twitter Data - Management Plan
<table>
<tr> <th> Metadata </th> <th> RSS (XML) metadata </th> </tr>
<tr> <td> Standards </td> <td> RSS, multiple feeds: News: http://www.ztm.waw.pl/rss.php?l=1&IDRss=1 Press releases: http://www.ztm.waw.pl/rss.php?l=1&IDRss=2 Changes in public transport: http://www.ztm.waw.pl/rss.php?l=1&IDRss=3 Public procurement: http://www.ztm.waw.pl/rss.php?l=1&IDRss=4 Changes in public transport: http://www.ztm.waw.pl/rss.php?l=1&IDRss=6 </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> Data is delivered by the Warsaw Public Transport Authority. Changes are not possible. </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be assessed by VaVeL's consortium members. </td> </tr>
<tr> <td> Accessibility </td> <td> Open data exposed on the Internet for everyone. Publicly available open data delivered by the Warsaw Public Transport Authority. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> RSS is a well-known information exposition standard; the Polish language is used in the RSS information. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Open Data </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be implemented as a source of information for the realization of use cases defined by CoW and stored in a data processing system installed in CoW. </td> </tr>
</table>
Table 22: RSS services - Management Plan
<table>
<tr> <th> Metadata </th> <th> None. </th> </tr>
<tr> <td> Standards </td> <td> XML data exposed using a dedicated URL. </td> </tr>
<tr> <td> Infrastructure Improvements </td> <td> Data is delivered by the Warsaw Public Bikes operator (external system). Changes are not possible. </td> </tr>
<tr> <td> Quality </td> <td> The quality of the data will be assessed by the VaVeL consortium. </td> </tr>
<tr> <td> Accessibility </td> <td> Data exposed on the Internet for everyone.
Publicly available data delivered by the Warsaw Public Bikes operator. </td> </tr>
<tr> <td> Assessable and Intelligible </td> <td> The XML file structure is clear. However, there is no API documentation. </td> </tr>
<tr> <td> Legal Issues and Privacy </td> <td> Public data. Unfortunately, the API usage terms and conditions are currently not accessible on the Nextbike web page. </td> </tr>
<tr> <td> Maintenance Plan </td> <td> a) Archiving and Preservation: This data set will be utilized as a source of information for the realization of use cases defined by CoW and stored in the data processing system installed in CoW. b) Usable beyond the original purpose for which it was collected: depends on confirmation from the Warsaw Public Bikes operator Nextbike. </td> </tr>
</table>
Table 23: Veturilo stations - Management Plan
## Veturilo stations (Warsaw City Bike system)
Warsaw's City Bike system (Veturilo) exposes an API that contains information about bike availability at Veturilo stations. Warsaw Public Bikes near-real-time information is provided by the portal http://nextbike.net (data refreshed every 1 minute).
## Orange subscribers location statistics
The dataset of mobile subscribers' location statistics contains statistical information on the number of terminals that communicated with given cells of the Public Land Mobile Network (PLMN). Subscriber activity is detected on the basis of network events (13 different events are taken into account) that are triggered together with voice and xMS communication. Inactive terminals are periodically updated according to network and terminal settings (usually 1-2 hrs).
For the VaVeL project, samples of data from the urban area will be delivered for selected cells located in Warsaw, for a defined period of time. The raw stream of data from mobile cells in Warsaw is between 300 and 400 events per second. The volume of raw data is between 18 and 20 million events per 24-hour period for the Warsaw area (data from about 6000 cells). Statistical information is collected in csv files. The average file size with the aggregate of events from 24 hours for the Warsaw area is about 8-9 MB.
<table> <tr> <th> Metadata </th> <th> None </th> </tr> </table>
<table>
<tr> <th> Standards </th> <th> Mobile data statistics are calculated and collected by a dedicated network system based on the events from the MSS (Mobile Switching Centre Server). Statistics are provided in the form of flat csv files. File names have the following form: statistics hours YYYY-MM-DD.csv. A file is generated daily at 1:00 am and contains data from the previous day. For example, a file named stats hours 2016-05-04.csv is created on May 5 at 1:00 AM and includes statistics from 4th May. The data structure in the files contains the following columns: date & hour, x, y, number of events, where the csv file columns are listed below:
number of events - numeric (1,10) - unique number of MSISDNs detected in the given period
x - float - latitude (GPS) of the cell centre
y - float - longitude (GPS) of the cell centre
radius - int - cell radius
date & hour - date and hour (e.g. 2016-05-05 01:00:00 means 2016-05-05 between 0:00 am and 1:00 am) </th> </tr>
<tr> <td> Infrastructure Improvements </td> <td> To expose the data described in this chapter, a re-development of the events-collecting system was performed. The changes include: automation of statistics recording, data compression, and automation of historical data cleaning. The system used for data collection is the pre-production instance and contains numerous restrictions, e.g. limited storage and limited performance.
</td> </tr>
<tr> <td> Quality </td> <td> Because of the aforementioned limitations, there is a possibility that not all events from all cells will be reported. The test instance, due to its single-node architecture, cannot provide high SLA values (redundancy mechanism not implemented). Since the event generation mechanism is related to TDM events, not all subscriber activities are reported (e.g. the statistics might not contain information about mobile data usage). </td> </tr>
<tr> <td> Accessibility </td> <td> For data analysis this data set will be sent via e-mail as an encrypted attachment to the consortium leader. The password will be sent via a separate communication channel (e.g. SMS). Because of Polish telecommunication law restrictions and internal Orange Polska regulations, this data set can be used only by consortium members for the VaVeL project. Open access and any sharing of this dataset with 3rd parties is prohibited. </td> </tr>
</table>
So for VaVeL project rules are the same as in case of creating any other operators data usage for other than delivery of telco services ordered by end user or terminal owner. Based on that we investigate and deploy some techniques allowing the calculation of statistics allowed by Law in the closest network area MSS. That’s why we will be able to share with other participants only those aggregated statistics which are safe from a legal point of view. This mechanism of statistics calculation was investigated with some proof of concept projects and we evaluated their reliability for being used as a source of location statistics. We prepare some extrapolation and compare operators data with calculations made based on optic sigh (based on camera). On the other hand we also believe that, even aggregated statistics, data taken from all users have better quality and value, than data taken from those who only give consent. Users that provide consent for using location data are actually a sub-sample but the mechanism can not be treated as “random selection” and we are not able to predict the bias of this factor in these data (especially in case of small areas of observation). </th> </tr> <tr> <td> Maintenance Plan </td> <td> 1. Archiving and Preservation: Finally Orange Mobile Subscriber’s Locationstatistics will be implemented as source of information for realization of use cases defined by CoW and stored in data processing system installed in CoW. 2. usable beyond the original purpose for which it was collected not possible </td> </tr> </table> Table 24: Orange subscribers location statistics - Management Plan
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0284_EKLIPSE_690474.md
New data generated will only come from the conceptual element of the Work Packages. These data will include mainly qualitative data aiming to support the further development of the applied and communication elements of the project. More specifically, new data generated as part of the EKLIPSE project will include:
* WP2: Online feedback questionnaire results from project partners (Data originator: Institut für sozial-ökologische Forschung (ISOE))
* WP4: Interviews on barriers to engaging with research, societal actors and policy in identifying knowledge gaps and emerging issues. Interviews will be carried out with target EU-level policy makers (n = 5 to 10), national-level governmental representatives (5 to 10), researchers involved in Science Policy Interface activities (e.g. experts from IPBES; 5 to 10), NGOs (5), business (5), media representatives (5) and knowledge brokers (3) (Data originators: Centre for Ecology & Hydrology (CEH); Suomen ympäristökeskus (SYKE); Institut royal des Sciences naturelles de Belgique (RBINS))
* WP5: Questionnaires (n = 100) and a minimum of 25 interviews with individuals in networks on key aspects of networking (e.g. trust, power relations, vested interests) which may impact on their willingness to collaborate with the mechanism (Data originators: ESSRG Kft. (ESSRG Kft.); Universidade do Porto (UPORTO))
* WP5: Developing the database of networks. This will involve reviewing and compiling existing lists and contact points based on the KNEU database, adding new networks and key information on networks, such as duration, keywords, organization, resources, funding model, and website address. The database will be maintained by the EKLIPSE Secretariat, and visualised on the EKLIPSE project website to allow transparency and openness for new networks to join (Data originators: Helmholtz-Zentrum für Umweltforschung – UFZ (UFZ); Universidade do Porto (UPORTO))
# Data Quality & Data submission
Responsibility for the quality of the data submitted lies with the data originator (i.e. project partners responsible for generating data). The data originators will be responsible for sending their data to the data assurance manager (Juliette Young, EKLIPSE Secretariat), who will ensure the data are securely stored. The database of networks will be securely stored by UFZ (Marie Vandewalle, EKLIPSE Secretariat). All interviewees will be requested to sign a consent form prior to being interviewed, agreeing to the data being stored securely for up to 5 years, and choosing the relevant options regarding the use of their quotes and/or identity.
All data relating to the jointly synthesised work (WP3) will be sent to the EKLIPSE Secretariat by the expert groups. This data will then be compiled and added to the EKLIPSE project website within 3 months of the request's submission date. This data will include the background material used to synthesise knowledge, together with the final product(s) as specified by the requester at the beginning of the process.
# Access to data
The project results will be tailored to the needs of the stakeholders of the project (taking timing into consideration), and, in turn, stakeholders will be able to use the project results for other purposes, including research, commercial, investment, social, environmental, policy-making, skills or educational training.
Within the first year of the project, we will liaise with the existing ten EU data centres relevant to biodiversity and ecosystem services to discuss possible links with this project and its dissemination and exploitation of results. Any eligible data generated in the project will be offered to the Environmental Information Data Centre (EIDC), which is the NERC Data Centre for the terrestrial and freshwater sciences.
All publications, including peer-reviewed papers, from the project will be open access, to ensure wider dissemination and exploitation of project results. This will be done by, for example, archiving final peer-reviewed manuscripts on the NERC Open Research Archive (NORA).
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0285_MendTheGap_692249.md
## 2.4. Increase data re-use (through clarifying licenses)
All the materials generated by the project will be openly available through the MendTheGap website for up to five years from the end of the project, and can be re-used by all interested parties. After the MendTheGap website is shut down, materials will be available on demand from the project leader group members (Ino Čurik, Vlatka Čubrić Čurik, Maja Ferenčaković). Materials will be findable and reusable through the final depositing repository (Zenodo).
# Allocation of resources
Scientific publications will make use of open access whenever possible. Such publications are financed by the research budget. Vlatka Čubrić Čurik will be responsible for data management plan updates. Vlatka Čubrić Čurik and Maja Ferenčaković will be responsible for backup and storage. Ino Čurik will be responsible for data archiving and publication within the Zenodo repository and Dropbox.
# Data security
All materials and other project products will be archived and access to them preserved. The long-term strategy for maintaining, curating and archiving the data is regular backup of the website. The website will be transferred via FTP to local computer systems protected with passwords, a firewall, power surge protection, and virus/malicious-intruder protection. Diary entries and reminders will be set in order to never skip the backup task. A logical folder structure with the date as the directory name will be backed up to multiple hard drives, for maximum protection.
In following an open access (OA) policy for publication and data, project members will be able to use the respective digital repositories of the Universities of Cambridge and Pisa, as well as Zagreb, whose security policy has been written according to best practices. The materials that will be uploaded to Zenodo will be safely stored for the future using CERN's battle-tested repository software INVENIO. The technology offered by the software covers all aspects of digital library management, from document ingestion through classification, indexing, and curation to dissemination (https://vre.leidenuniv.nl/vre/lrd/Pages/information-sheet.aspx?item=17).
# Ethical aspects
There are no ethical issues impacting data sharing. No personal data are saved as part of this project.
# Other
This project is not a research project; however, we will disseminate project results and actions. Dissemination actions will target: a) the general public (PUB), b) the scientific community (SCI) and c) policy makers (POL). Thus, we have settled on a dissemination scheme including all modern media and types of communication.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0286_AGILE_636202.md
# 1 EXECUTIVE SUMMARY
### 1.1 Introduction
The overall objective of the Horizon 2020 project AGILE is to obtain a significant reduction in aircraft development costs by enabling a more competitive supply chain able to reduce the time to market of innovative aircraft products. The overall project objective is translated into 4 technical objectives:
1. The development of advanced multidisciplinary optimization techniques and their integration.
2. The development of processes and techniques for efficient multisite collaboration in overall design teams.
3. The development of knowledge enabled information technologies to support interdisciplinary design campaigns.
4. To develop and publish an Open MDO Test suite that can serve as a reference database for future aircraft configuration research.
A new element in Horizon 2020 is the preparation and use of a Data Management Plan. A Data Management Plan details what data the project will generate, whether and how it will be exploited or made accessible for verification and re-use, and how it will be curated and preserved. This document presents the Draft Data Management Plan for the AGILE project. It is based on the results of a questionnaire sent out to all partners. An updated version of this document will be prepared near the end of the AGILE project.
# 2 DATA MANAGEMENT PLAN
## 2.1 Introduction
The Data Management Plan (DMP) provides a short, general outline of the consortium's policy for data management, including the types of data which will be generated/collected by the project, the standards which will be used, how this data will be exploited and/or shared/made accessible for verification and use, and how this data will be curated and preserved. The described policy will reflect the current state of consortium agreement regarding data management and will be consistent with those referring to exploitation and protection of results. A questionnaire was mailed out to all AGILE partners in April 2016. Twelve partners returned this questionnaire (partly) filled in, and email exchanges with 2 further partners led to the conclusion that there was no need for them to fill in this questionnaire at this stage of the project.
### 2.2 Data Collection and Creation
The AGILE project will generate a wealth of data when using the different MDO environments employed in the project. The first large test case data set is the so-called Open MDO Test Suite, obtained after optimization of the reference aircraft configuration; other test case data are the so-called Use Case databases, obtained after performing optimization of so-called Novel Configuration aircraft.
For the _creation of data_ a large number of tools are used, each tool having its specific input and output formats. The file size varies heavily, ranging from several kBytes for text-based input files to several GBytes. Most of the _tools being used_ in the AGILE project are able to read and/or write CPACS (Common Parametric Aircraft Configuration Schema [1]) XML files. Most of the tools are able to reproduce the data if needed.
_Tools/programming languages most often mentioned are:_
* Matlab
* Python
* Tixi/Tigl viewer
* XML editor
* GNU Emacs
* Java
* Common Lisp
* Tecplot for visualization
Possible _pre-existing data_ needed to execute the tools concern the CAD geometry (CATIA, IGES, STEP, ..) and several partners listed tool-specific input data (for example GasTurb engine deck, aircraft requirements, critical manoeuvre load cases, ..). High-fidelity aerodynamic solvers all use their own data formats. CGNS [2] was not mentioned; one partner uses HDF5 [3].
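Since most of these tools exchange aircraft design data as CPACS XML files, the following is a minimal sketch of inspecting such a file with Python's standard library; the file name is a placeholder and the header path is only illustrative of the CPACS structure [1] (DLR's dedicated TiXI/TiGL libraries would normally be used for serious work).

```python
import xml.etree.ElementTree as ET

tree = ET.parse("aircraft.xml")  # placeholder CPACS file name
root = tree.getroot()            # the <cpacs> root element

# Illustrative read of the header block; consult the CPACS schema [1]
# for the authoritative element paths.
name = root.findtext("header/name")
version = root.findtext("header/version")
print(f"CPACS dataset: {name} (version {version})")

# Enumerate the top-level sections present in this dataset.
for child in root:
    print("-", child.tag)
```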
### 2.3 Data documentation and description

Most partners did not fill in this section of the questionnaire. Partners that did provide answers mentioned the use of:

* *.docx
* *.xml
* *.xlsx

as file formats for data documentation. Several partners mentioned the use of internal company standards; one partner mentioned the use of Sharepoint for the documentation of metadata. The CPACS XML format [1] will be used for saving aircraft design data.

### 2.4 Data storage and back-up

_How will the data be stored and backed up?_ All partners that answered this question have a system for data storage and back-up in place.

_Storage media?_ Most of the partners use external USB drives for data storage and back-up. Some partners mentioned the use of the institute's back-up server or (encrypted) network/cloud storage.

_Frequency of back-ups?_ Back-ups are made at least daily, at some partners hourly.

_Data backed up at different locations?_ Most partners save back-ups at two different locations.

### 2.5 Data access and security

_Copyright and handling of IPR issues:_ The general principle is that fully owned data remains the property of the owner; jointly owned data is owned by the partners who generated the data. Non-Disclosure Agreements (NDAs) might be required.

_Who owns the data?_ The partner(s) who generated the data.

_Limitations on access to your data?_ Not many answers were received. One partner mentioned that no access will be given to customer-specific and military data. One partner mentioned that their network is not accessible from outside. Another partner mentioned that data can be shared during the AGILE project, and that the code will be released as open source after the AGILE project has finished.

_Access criteria?_ Two partners mentioned open access; most partners mentioned restricted access or access restricted to AGILE partners. One partner mentioned an embargo period until the end of the AGILE project.

_Who controls the access?_ The AGILE partner owning the data.
### 2.6 Data sharing and re-use

_How will the data be shared?_

* on request as needed ("only need to know")
* as CPACS file
* through the AGILE cloud server

_Requirements for sharing the data?_

* Be an AGILE partner
* Security

_Who might be interested in the data?_

* AGILE partners
* Customers of software providers
* Authorities
* Aircraft designers
* On-board system designers
* MDO experts who wish to optimize the definition and execution of their MDO process
* Tool experts who wish to see how their tool performs in the MDO process

_Data linked to a scientific publication?_

* One partner mentioned 3 publications linked to data
* Needs to be updated at the end of the project

_Tools needed to visualize the data_

* Tecplot, Paraview, gnuplot, XML editor, Matlab, Noesis Optimus, web browser, FEM post-processor, Hyperview, Descartes, GNU Emacs, Gendl, Genworks GDL

_Additional information_

* Specific design campaign results will be disseminated using scientific publications, and the metadata is shared as a data set for the public

### 2.7 Data preservation

_Criteria used to decide on the data to be archived for preservation and long-term access:_

* data will be preserved if it is used in an official publication
* data files will be preserved, but only the final result
* enough data should be stored so that it is possible to interpret the results generated by the MDO process and to be able to reproduce the MDO process
* Optimus project files and results from Optimus execution will be stored and preserved; files created by applications driven by Optimus will be optionally stored and preserved
* related to the use of the Sharepoint file server
* data required to pass project milestones (SRR, PDR, CDR, ...)
* certification-relevant data
* all code and data will be archived indefinitely

_How many years should/will the data be preserved?_

* 3 to 5 years
* indefinitely
* depends on the data usage; aircraft data are in general kept until the last aircraft is grounded, so typically between 60 and 80 years

_File formats used for data preservation?_

* CPACS files
* txt files
* XML files
* png files
* binary solution files (FFA format, MemCom database, HDF5, ..)
* zip archives
* Mat files
* Native file formats
* ASCII and Unicode (UTF-8)

_Costs associated with data preservation_

* none to small
* about $1000/year for hosting and server administration

_Willingness/interest to use a central data repository for AGILE?_

* 5 partners were interested
* 4 partners needed more information
* 1 partner said maybe
* 1 partner mentioned if required
* 1 partner said no

**3 REFERENCES**

1. CPACS: https://software.dlr.de/p/cpacs/home/
2. CGNS: http://cgns.sourceforge.net/
3. HDF5: https://www.hdfgroup.org

**4 APPENDIX: DATA MANAGEMENT PLAN QUESTIONNAIRE**

**DATA MANAGEMENT PLAN QUESTIONNAIRE**

Author(s): J.B. Vos
WorkPackage Nº: WP1
Due date of deliverable: 30.11.2015
Actual submission date: DD.MM.YYYY
Document ID: AGILE_DataManagementQuestionnaire.14.04.2016.docx
Grant Agreement number: 636202
Project acronym: AGILE
Project title: Aircraft 3rd Generation MDO for Innovative Collaboration of Heterogeneous Teams of Experts
Start date of the project: 01/06/2015
Duration: 36 months

Project coordinator name, title and organisation:
Björn Nagel, DLR – Air Transportation Systems | Integrated Aircraft Design, Tel: +49 40 42878-2304, Fax: +49 40 42878-2979, E-mail: [email protected]
Pier Davide Ciampa, DLR – Air Transportation Systems | Integrated Aircraft Design, Tel: +49 40 42878-2727, Fax: +49 40 42878-2979, E-mail: [email protected]

Project website address: www.agile-project.eu

This project has received funding from the European Union's Horizon 2020 research and innovation framework programme under grant agreement No 636202.

**DOCUMENT INFORMATION**

<table>
<tr> <td> Document ID </td> <td> D1.3.2-Questionnaire </td> </tr>
<tr> <td> Version </td> <td> 1.1 </td> </tr>
<tr> <td> Version Date </td> <td> 14.04.2016 </td> </tr>
<tr> <td> Author </td> <td> J.B. Vos </td> </tr>
<tr> <td> Security </td> <td> R </td> </tr>
</table>

**APPROVALS**

<table>
<tr> <th> </th> <th> **Name** </th> <th> **Company** </th> <th> **Date** </th> <th> **Visa** </th> </tr>
<tr> <td> Coordinator </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> WP Leader </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
</table>

**DOCUMENTS HISTORY**

<table>
<tr> <th> **Version** </th> <th> **Date** </th> <th> **Modification** </th> <th> **Authors** </th> </tr>
<tr> <td> 1.0 </td> <td> 17.02.2016 </td> <td> Initial Version </td> <td> </td> </tr>
<tr> <td> 1.1 </td> <td> 14.04.2016 </td> <td> Include comments from partners testing the questionnaire </td> <td> </td> </tr>
</table>

**LIST OF AUTHORS**

<table>
<tr> <th> **Full Name** </th> <th> **Organisation** </th> </tr>
<tr> <td> J.B. Vos </td> <td> CFS Engineering </td> </tr>
</table>

**DISTRIBUTION LIST**

<table>
<tr> <th> **Full Name** </th> <th> **Organisation** </th> </tr>
<tr> <td> </td> <td> </td> </tr>
</table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0287_SIC_693883.md
# 1. Executive summary

This Data Management Plan (DMP) represents the final description of the data management life cycle of SIC after 3 years of project implementation.

A first DMP was submitted to REA on 16/11/2016, introducing the main types of data the project would use and how they would be managed. Secondly, based on the suggestions for improvement included in the Review Report received on 02/06/2017, a second version of the DMP was submitted on 13/07/2017. Following consultations with the REA PO on 16/06/2017, the first update of the DMP only had to address the following recommendation: 'expand paragraph 2.2.1 adding information on data backup and recovery, transfer of sensitive data (if any) as well as for secure storage and archiving. In addition, a link to the Zenodo page where the policy is explained must be included.' The rest of the document remained the same.

This last version aims to address the rest of the recommendations included in the Review Report, mainly connected with the development of the Research Forum, linked to the SIC website, as well as explaining how the project has fulfilled its obligations regarding the new General Data Protection Regulation (GDPR), which entered into force on 25/05/2018.

## Review report recommendations

'The DMP is prepared and distributed among partners. However, it is recommended to address several shortcomings that will improve DMP, e.g., data types and formats need to be specified; the expected volume of the data need to be estimated; data naming conventions need to be described; facilitation of data interoperability needs to be described; procedures for data backup and recovery, for transfer of sensitive data (if any) as well as for secure storage and archiving need to be specified. It is recommended to expand paragraph 2.2.1 while addressing the issue of procedures. Regarding the application of Zenodo's policy, perhaps it is better to link to this than include it in toto, as it may change over the life of the project.'

Therefore, this DMP specifically includes the new _Section 4 Research Portal, Section 5 General Data Protection Regulation and Section 6 Sustainability plan for keeping SIC data online_, plus all the related annexes, in order to report on how the different recommendations have been addressed, while the rest of the document contains the same information included in the two previously submitted versions.

# 2. Data Management Plan

## 2.1 Definition

According to the EC, "a DMP is a key element of good data management which describes the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project. As part of making research data **findable, accessible, interoperable and re-usable** (FAIR), a DMP should include information on:

* The handling of research data during & after the end of the project
* What data will be collected, processed and/or generated
* Which methodology & standards will be applied
* Whether data will be shared/made open access and
* How data will be curated & preserved (including after the end of the project)".

Therefore, NOT all information and details collected during the project are relevant for this DMP (e.g. contact details of participants of an event), but only data gathered for the purpose of research activities (see definition below) within the project. That research can later be used to develop different outcomes (a report, a publication, a curriculum, a workshop, etc.).
Hence, not every sort of information collected during the SIC project is part of the DMP. According to the "Guidelines on Open Access to Scientific Publication and Research Data in Horizon 2020" (2015): "**Research data** refers to **information, in particular facts or numbers, collected to be examined and considered and as a basis for reasoning, discussion, or calculation**. In a research context, examples of data include **statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form.**"

**What is included in the SIC DMP?**

SIC is _not a research project but a coordination and support action_ aiming at creating a network of networks which will identify, engage and connect actors including researchers, social innovators, citizens, policy-makers, as well as intermediaries, businesses, civil society organisations and public sector employees. Therefore, the type and format of data collected and generated will be slightly different. The project won't have extensive research data. Only in a few cases will participants' opinions expressed in a questionnaire or qualitative expert feedback generate some data, which will be included in some SIC deliverables. However, although the goal is not to develop scientific publications, a lot of research data will be collected in order to develop several deliverables, such as learning materials, landscape reports about SI in Europe, the impact strategy, etc. This DMP refers to that information. For other questions regarding ethical or IPR issues please see D7.6 Ethical Guidelines and D7.5 Legal and IPR Guidelines.

## 2.2 Data management policy

In the framework of the SIC project, data will be **collected from different origins**: by doing interviews; surveys of partners, project event participants and experts; research about other SI policies, projects, initiatives and good practices; collection of information during events and meetings; and observation and evaluation of the events, Summer Schools, workshops and experimentation sessions. For the ethical standards to be followed when conducting research or other project activities with participants (e.g. informed consent and participation) see the D7.6 Ethical Guidelines.

**2.2.1 Internal data management**

The SIC project currently has _two different repositories_ for internal data management, which complement each other. Both have security measures in place to protect the data. In Basecamp all data is written to multiple disks instantly, backed up daily, and stored in multiple locations; files that customers upload are stored on servers that use modern techniques to remove bottlenecks and points of failure. Nextcloud supports any existing storage solution, including object store technologies, keeping data under the control of trusted IT administrators and managed with established policies. Nextcloud works with industry-standard SQL databases like PostgreSQL, MySQL and MariaDB for user and metadata storage. More information about security protection in Nextcloud can be found here.

The main features of the two SIC repositories are described below.

**Basecamp: internal communication chat and second repository**

This repository was opened by the PCO during the preparation of the project proposal, in October 2014.
The general workflow management is carried out by the PCO assistant (Patricia Martinez), in charge of:

* Giving access to partners by sending them an invitation to their email address.
* Adding reminders of relevant project milestones and other tasks in the Calendar section.

Nonetheless, all partners are welcome to share information (Message option) and documents (Files option) relevant and useful for the SIC project. As part of the security policy implemented by the Basecamp repository, all stored data is written to multiple disks instantly, backed up daily, and stored in multiple locations. It currently holds 272 Discussions, 404 Files and 10 Text documents.

Its main function is to act both as an internal communication chat or forum for all SIC partners and as a back-up for all project deliverables and relevant documents (minutes, agendas, templates, etc.). Due to the large volume of data already stored in Basecamp, this is no longer the main internal repository; for that purpose the PCO opened Nextcloud (see below). Basecamp has been kept for its usefulness as a common SIC forum or internal chat to discuss relevant topics, as well as a back-up system for Nextcloud.

Examples of internal chats:

* Informing partners about the procedure to present proposals for the 2 open networks
* Inviting partners to conferences/events organised by another partner where the SIC project can be disseminated (such as doing networking, bringing brochures and presenting the project objectives and results).

Documents which may be useful for the consortium as a whole will be shared and stored here, so that partners can consult them when needed. Documents are uploaded in pdf, Word, PowerPoint, Excel or similar formats. As a backup system of the main SIC repository (Nextcloud), the PCO will carry on adding all the deliverables submitted to REA in pdf format in Basecamp. They will be stored in the Files section with the relevant identification name: for instance, for deliverables, the deliverable number and name as they appear in the GA; for agendas and minutes, the name of the governance body, the number of the meeting and the date.

Examples:

* Deliverable: WP1_D1.4_launch event_agenda_v1_lp
* Minutes of a meeting: Kick-off_management presentation_20160204_ale

The PCO, as owner of the account, can also export all the data from Basecamp if an additional backup system is necessary. This procedure is not done automatically by the server but can be carried out manually by the PCO by clicking on the "Export" feature available to the owner of the Basecamp account (the PCO). The procedure is explained step by step here. It is possible both to export the data into a new Basecamp account and to receive an email with all the data to be downloaded to a local disk. Depending on the amount of data it can take more or less time. The procedure does not entail any additional cost.

**Nextcloud: official SIC internal repository**

This repository was opened by the PCO in January 2017 in order to provide partners with a system which allows a more user-friendly storage of data with a very large capacity. Its architecture is similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server.
In contrast to proprietary services like Dropbox, the open architecture allows adding additional functionality to the server in the form of applications (to send messages, to establish a calendar, to monitor all the actions done by users, etc.).

The **workflow management** is carried out by the PCO assistant (Patricia Martinez), the only person with Administrator status in the project and thus in charge of:

* Giving access to the SIC folders only to partners
* Creating the users and passwords for all partners
* Renaming users
* Managing different groups (the SIC project currently has just 1 common group for the whole consortium but, if needed, the PCO assistant can create sub-working groups)
* Resetting passwords
* Eliminating users
* Changing users' status (such as naming another Administrator, if needed)
* Explaining to partners the procedure to change their password to ensure secure access (see Annex 4.3)
* Monitoring the change history of the group (file modifications, uploads, downloads of shares and changes to comments or tags)

For more **technical operations**, the PCO counts on the assistance of an IT team at AEIDL. The programming engineer who set up Nextcloud also has Administrator status in the repository in case his assistance is needed. Among the tasks which the Administrator can conduct, the Nextcloud Administrator Manual describes how to carry out the backup of the folders and the database, and the restoring of the backup. Additional hardening and security features can be added to those already put in place by Nextcloud.

**Backup procedure**

The backup procedure is not done automatically, but it can easily be done by the PCO, with the assistance of the above-mentioned ad hoc IT expert who works half an hour per week on the maintenance of this repository. To back up a Nextcloud installation see Annex 4.4; an illustrative sketch of the typical sequence is given below.

**Procedure for restoring a backup**

In case any folder or database needs to be recovered, Nextcloud has an established restoring procedure. Should that happen, the PCO will contact the IT expert immediately and inform the REA PO. To restore a Nextcloud installation see Annex 4.4. For additional information, the Nextcloud Manual is included in Annex 4.5.
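The Nextcloud administration documentation describes a backup as copying the configuration, data and theme directories and dumping the database while the server is in maintenance mode. The following Python sketch illustrates that sequence; all paths, the database name and the credentials location are assumptions for illustration, not the actual AEIDL setup.

```python
import os
import subprocess
from datetime import date

# Hypothetical locations; the real AEIDL paths differ.
NEXTCLOUD_DIR = "/var/www/nextcloud"
BACKUP_DIR = f"/backup/nextcloud-{date.today():%Y%m%d}"

def run(cmd):
    """Run a shell command and fail loudly if it does not succeed."""
    subprocess.run(cmd, check=True)

os.makedirs(BACKUP_DIR, exist_ok=True)

# 1. Put the server into maintenance mode so files and DB stay consistent.
run(["sudo", "-u", "www-data", "php", f"{NEXTCLOUD_DIR}/occ",
     "maintenance:mode", "--on"])
try:
    # 2. Copy the config, data and themes directories.
    for sub in ("config", "data", "themes"):
        run(["rsync", "-a", f"{NEXTCLOUD_DIR}/{sub}", BACKUP_DIR])

    # 3. Dump the database (MySQL/MariaDB shown; credentials assumed to be
    #    configured in ~/.my.cnf, and "nextcloud" is an assumed DB name).
    with open(f"{BACKUP_DIR}/nextcloud-db.sql", "w") as dump:
        subprocess.run(["mysqldump", "--single-transaction", "nextcloud"],
                       stdout=dump, check=True)
finally:
    # 4. Always switch maintenance mode off again.
    run(["sudo", "-u", "www-data", "php", f"{NEXTCLOUD_DIR}/occ",
         "maintenance:mode", "--off"])
```

The restore procedure is essentially the reverse (copy the directories back and re-import the SQL dump), again with the server in maintenance mode; the authoritative steps remain those in Annex 4.4 and the Nextcloud Administrator Manual.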
**Sensitive data:** information related to health, sexual lifestyle, ethnicity, political opinion, religious or philosophical conviction is generally considered sensitive data, and is not foreseen in the SIC project. Nevertheless, if requested by a partner, the PCO assistant can limit the access to a folder to authorised people only. With Nextcloud, system administrators (in this case the PCO assistant) can control and direct the flow of data between users or between servers. Rule-based file tagging and responding to these tags, as well as other triggers like physical location, user group, file properties and request type, enables administrators to specifically deny access to, convert, delete or retain data following business or legal requirements.

The repository has been classified into the following folders (see Annex 4.2): WPs (7 folders); Official documents (1 folder containing the Grant Agreement and the Consortium Agreement); and 'Pictures Library' (with subfolders, to store all pictures taken during SIC events and meetings). WPLs are in charge of keeping their files updated (deliverables submitted, internal material useful for the implementation of the tasks within that WP, etc.).

The whole Nextcloud server installed at AEIDL has 2 TB (2 terabytes, or 2000 gigabytes) of disk space. The default quota initially allocated to the SIC folder is 1 GB per user. The total number of users (SIC partners) is currently 30; therefore, a total of 30 GB is available for the project. By the end of June 2017 (M17), 278.4 MB had been used. A total volume of data for the whole project life cycle has not been foreseen. Nevertheless, the capacity used after the first half of the project is less than 1% of the total, so it is estimated that the remaining space in Nextcloud will be more than enough to store all the upcoming project data. Otherwise, the Project Coordinator Assistant can easily increase this quota, provided that there is remaining space, which is almost certain (as mentioned above, the server was opened at the beginning of 2017 with a total capacity of 2 TB, so the percentage used thus far is minimal).

**General principles:**

* Each partner will store and safeguard the qualitative data in his/her institutional repository during and after finishing the activity he/she has conducted. In the normal scenario of joint participation (several SIC partners carry out an activity) all of them can store and preserve the information, or one of them can be appointed as responsible for doing so. In that case, the relevant data will be made available to the other partners when requested.
* The data will be preserved during the project life and without any costs.
* Only data relevant for the SIC objectives will be collected, and it will be used only for the purpose of the project.
* Partners do not need to share all the data they have collected for every single task with the whole consortium. When developing joint results with other partners, the majority of cases for SIC deliverables, partners will share the specific data needed to achieve that joint deliverable. Moreover, in order to make data accessible and re-usable (FAIR principles), a partner can request access to any data developed in the framework of the SIC project by another partner which can be useful for another SIC deliverable.
* **Methodology** to share documents in the common SIC storage system and make them easily findable: see Quality Assurance Guidelines, section _6. Communication within the Governance structures_, and Quality Assurance Report - Part I, section _7.1 BC as a repository function_.
* On the other hand, the **final results** will always be shared with the consortium and stored in both Nextcloud and Basecamp, in pdf format, so that the data is **_findable, accessible, interoperable and re-usable_** for all partners: data can be useful and reused for other SIC deliverables.
* The SIC consortium will evaluate the possibility of maintaining the data in the long term, after the project life, in accordance with the project sustainability plan.

**2.2.2 External data management**

Some relevant SIC data will be publicly available online through 2 tools:

1. The **website**: The SIC website is managed by SIX, with a shared responsibility among the SIC consortium for uploading and contributing content. In practice this means that SIX plays an editorial role curating its content. The network facilitators play a key role in bringing content and driving traffic to the SIC website. A member of the SIX team gave the network facilitators training in how to use the backend of the website.
A website user guide/handbook has been created and shared with the network facilitators, and a Google Hangout/virtual coffee session was dedicated to the process and troubleshooting. If necessary, one-to-one training and check-ins on the phone with the SIX team will be available.

2. An **official repository (the SIC Learning Repository) (Open Access)**: as already mentioned, the SIC consortium, with the aim of sharing the project results achieved with the whole community, will provide open access to those deliverables that partners have considered relevant to include in the repository, requiring prior agreement (see section 3).

## 2.3 Protection of data

The PCO launched a consultation of the WPLs regarding IPR, Open Access and Data Management aspects before drafting this plan. Following that consultation, WPL5 (Nesta) suggested protecting the SIC data through a CC-BY-NC-SA (Attribution-NonCommercial-ShareAlike) licence. This license allows others to remix, tweak, and build upon our work non-commercially, as long as they credit the project and license their new creations under identical terms. This license is often compared to "copyleft" free and open source software licenses: all new works based on ours will carry the same license, so any derivatives will allow re-use under the same conditions.

Moreover, the EC encourages authors to retain their **copyright** and grant adequate licences to publishers, and suggests Creative Commons (CC) licences as useful solutions. Through CC the SIC consortium could obtain the mentioned CC-BY-NC-SA (Attribution-NonCommercial-ShareAlike) licence. CC is a non-profit organization that enables the **sharing and use of creativity and knowledge through free legal tools**. Creative Commons licenses are not an alternative to copyright. The list of licences which CC offers includes the mentioned CC-BY-NC-SA licence.

**How do CC licenses operate?**

CC licenses are operative only when applied to material in which a copyright exists, and even then only when a particular use would otherwise not be permitted by copyright. Note that the latest version of CC licenses also applies to rights similar to copyright, such as neighboring rights and sui generis database rights. Learn more about the scope of the licenses.

This means that CC license terms and conditions are **not** triggered by uses permitted under any applicable exceptions and limitations to copyright, nor do license terms and conditions apply to elements of a licensed work that are in the public domain. This also means that CC licenses do not contractually impose restrictions on uses of a work where there is no underlying copyright. This feature (and others) distinguishes CC licenses from some other open licenses like the ODbL and ODC-BY, both of which are intended to impose contractual conditions and restrictions on the reuse of databases in jurisdictions where there is no underlying copyright or sui generis database right.

All CC licenses are non-exclusive: creators and owners can enter into additional, different licensing arrangements for the same material at any time (often referred to as "dual-licensing" or "multi-licensing"). However, CC licenses are not revocable once granted unless there has been a breach, and even then the license is terminated only for the breaching licensee.
There are also videos and comics that offer visual descriptions of how CC licenses work.

## 3\. Open access

### 3.1 Definition

Open access (OA) refers to the practice of providing online access to scientific information that is free of charge to the end-user and reusable. 'Scientific' refers to all academic disciplines. In the context of research and innovation, 'scientific information' can mean:

* Peer-reviewed scientific research articles (published in scholarly journals), or
* Research data (data underlying publications, curated data and/or raw data).

OA comprises **_2 steps:_**

1. Depositing publications in repositories
2. Providing open access to them

Hence, the H2020 requirement for OA is not satisfied by publishing the material on the SIC platform; it has to be published in repositories (see section 3.3 Open Access procedure). Under these definitions, '**access**' includes not only basic elements - the right to read, download and print - but also the right to copy, distribute, search, link, crawl and mine.

* Publication on the SIC platform: **dissemination** purpose
* Publication in an official repository: H2020 **Open Access** requirement

When it comes to open access, we also need to distinguish between open access to scientific peer-reviewed publications and open access to research data:

* **Publications**: open access is an **_obligation_** in Horizon 2020.
* **Data**: the Commission is running a flexible pilot which has been extended and is **_applicable by default_**.

However, while open access to research data thereby becomes applicable by default in Horizon 2020, the **Commission also recognises that there are good reasons to keep some or even all research data generated in a project closed**. Therefore, the SIC consortium must decide to what extent all the data gathered will be shared and for how long, so that the decisions taken during the lifespan of the project don't come into conflict with the sustainability of SIC after its end.

### 3.2 Decisions taken in the framework of the SIC project regarding Open Access

Although no peer-reviewed publications, which would make Open Access compulsory, have been envisaged in the SIC project so far, the SG decided during its first meeting, held on 03/05/2016, that OPEN ACCESS would be the rule for the data collected during the project. This **DMP is a living document** that will be updated on the basis of new decisions taken during the 3 years of project implementation, in particular whenever significant changes arise, such as new data, a decision to file for a patent, changes in consortium composition, external factors, etc.

### 3.3 Open Access procedure

WHAT data will have OA? The list below contains deliverables already identified by partners (WPLs) which, although not mandatory (they won't be the subject of **peer-reviewed scientific publication**), would be the subject of **Open Access** (following the recommendations of the EC in the H2020 Manual).
This list is only a proposed starting point and therefore not an exhaustive list:

<table>
<tr> <th> **Deliverable** </th> <th> **WP** </th> <th> **Beneficiary** </th> <th> **Due date** </th> </tr>
<tr> <td> D2.1 SIC Research Landscape Report </td> <td> 2 </td> <td> TUDO </td> <td> M14 </td> </tr>
<tr> <td> D2.4 SIC Research Community Roadmap </td> <td> 2 </td> <td> TUDO </td> <td> M36 </td> </tr>
<tr> <td> D3.7 Evaluation and case study report </td> <td> 3 </td> <td> YF </td> <td> M34 </td> </tr>
<tr> <td> D4.5 SIC Learning Materials Repository integrated in SIC Online Platform and fully developed </td> <td> 4 </td> <td> UNIBO </td> <td> M36 </td> </tr>
<tr> <td> D5.1 Results of landscape mapping </td> <td> 5 </td> <td> NESTA </td> <td> M8 </td> </tr>
<tr> <td> D5.3 3 Annual State of the Union reports - Part I </td> <td> 5 </td> <td> NESTA </td> <td> M10 </td> </tr>
<tr> <td> D5.6 Annual State of the Union reports - Part II </td> <td> 5 </td> <td> NESTA </td> <td> M22 </td> </tr>
<tr> <td> D5.7 Annual State of the Union reports - Part III </td> <td> 5 </td> <td> NESTA </td> <td> M34 </td> </tr>
</table>

WHO will do it? In order to centralise the responsibility, the PCO will be in charge of providing open access to the selected deliverables in close collaboration with the **author of the deliverable**, normally the TL, who will provide the PCO with the information needed for the metadata. In case the TL is not the author of the publication, the partner who is the author will assist the PCO. In the event of conflict, the WPL will inform the PCO, who will report to the SG members to reach a final decision. The PCO will inform the partner before and after publishing the data in the repository.

WHERE? Suggested repository: Zenodo. Zenodo is an open data repository service maintained by CERN, Geneva. Zenodo was launched in May 2013 and upgraded in September 2016. Zenodo archives, and makes available, research outputs in all scientific disciplines. Datasets can be located via the Zenodo Elasticsearch engine. For more information about the **Zenodo policy** click here.

Features:

* Research shared: all research outputs from across all fields of research are welcome.
* Citeable and discoverable: uploads get a Digital Object Identifier (DOI) to make them easily and uniquely citeable.
* Communities: create and curate your own community for a workshop, project, department or journal, into which you can accept or reject uploads.
* Funding: identify grants, integrated in reporting lines for research funded by the European Commission via OpenAIRE.
* Flexible licensing: in case we were not under Creative Commons.
* Safe: the research output is stored safely for the future in the same cloud infrastructure as CERN's own LHC research data.

To see a **guideline to sign up and upload** content into Zenodo, go to **Annex 4.1**.

ROUTE? **Green Open Access** has been the option chosen by the SG members.
It means: the author, or a representative, archives (deposits) the published article or the final peer-reviewed manuscript in the online repository before, at the same time as, or after publication. Green open access is possible because subscribers pay all the expenses needed to support the publication process. This means authors do not need to pay any additional charges.

HOW? In **2 steps**:

1. **Depositing the deliverables in the repository**
2. **Providing open access to them**

The partner must **deposit a _machine-readable electronic copy_ of the relevant document** or final peer-reviewed manuscript (if any) **in the repository**. This must be done as soon as possible and at the latest upon publication. Where possible, the version deposited should be identical to the published version (in layout, pagination, etc.).

**'Machine-readable electronic copy'**: publications must be in a **format** that **can be used and understood by a computer**. They must be stored in text file formats that are either standardised or otherwise publicly known, so that anyone can develop new tools for working with the documents.

**Self-archiving / 'green' OA**: partners can deposit the final peer-reviewed manuscript in the selected repository. They must ensure open access to the publication within at most 6 months, or 12 months for publications in the social sciences and humanities, which is the case of SIC. Beneficiaries must also provide open access, through the repository, to the **bibliographic metadata** that identify the deposited publication.

**Metadata:** a set of data that describes and gives information about other data. These must be in a standard format and must include the following:

* The terms ["European Union (EU)" & "Horizon 2020"] ["Euratom" & "Euratom research & training programme 2014-2018"]
* The name of the action, acronym & grant number
* The publication date, the length of the embargo period (if applicable) and a persistent identifier.

The purpose of the bibliographic metadata requirement is to **make it easier to find publications and ensure that EU funding is acknowledged**. Information on EU funding must therefore be included as part of the bibliographic metadata so that Horizon 2020 can be properly monitored, statistics produced, and the programme's impact assessed.

WHEN? The partner in charge should ensure open access as soon as possible after the publication is ready, within at most 6 months, or 12 months for publications in the social sciences and humanities, which is the case of SIC.
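For the depositing step, Zenodo also exposes a REST API alongside its web upload form. The sketch below outlines how a deliverable and its bibliographic metadata (including the EU funding acknowledgment described above) could be deposited programmatically. The token, file name, creator and metadata values are illustrative assumptions, and the exact field names and accepted values should be checked against the current Zenodo API documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder, not a real token

# 1. Create an empty deposition.
deposition = requests.post(ZENODO_API, params=TOKEN, json={}).json()

# 2. Upload the machine-readable copy of the deliverable (hypothetical file).
bucket = deposition["links"]["bucket"]
with open("D2.1_SIC_Research_Landscape_Report.pdf", "rb") as fp:
    requests.put(f"{bucket}/D2.1_SIC_Research_Landscape_Report.pdf",
                 data=fp, params=TOKEN)

# 3. Attach bibliographic metadata, including the EU funding acknowledgment.
metadata = {
    "metadata": {
        "title": "SIC Research Landscape Report",
        "upload_type": "publication",
        "publication_type": "deliverable",
        "description": "Deliverable D2.1 of the Social Innovation Community "
                       "(SIC) project, funded by the European Union (EU) "
                       "Horizon 2020 programme, grant agreement No 693883.",
        "creators": [{"name": "Surname, Name", "affiliation": "TUDO"}],
        # Zenodo links EC grants via funder DOI + grant number (assumed format).
        "grants": [{"id": "10.13039/501100000780::693883"}],
        "access_right": "open",
        "license": "cc-by-nc-sa-4.0",  # assumed licence identifier
    }
}
requests.put(deposition["links"]["self"], params=TOKEN, json=metadata)

# 4. Publish: after this step the record receives its citable DOI.
requests.post(deposition["links"]["publish"], params=TOKEN)
```

Using the grant field rather than free text in the description is what allows OpenAIRE and the Commission to monitor the output automatically, which is the point of the bibliographic metadata requirement.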
## 4\. SIC Research portal

### 4.1 The purpose of the data collection for the Research portal

The main function of the SIC research portal is to act as a _**knowledge hub**_ for the _**audience of the SI community**_ (SI innovators, policy makers, researchers, practitioners), allowing them to read, analyse and review social innovation-related research. As a digital space it supports research-related exchange, triggers discussion on emerging research topics, and provides support for new research collaborations and ideas. The SIC research portal promotes research development and dissemination at European, national and local level, and improves awareness of SI-related research activities, emerging topics and results.

Thematically the forum focuses on research activities and results along the SIC network topics; emerging hot research topics are of particular interest, as well as developments in more specialised fields of social innovation, at European, national and local levels.

### 4.2 The relation of the research portal to the objective of SIC

It brings together ongoing social innovation research activities in Europe, facilitating research discussions within the SIC networks to increase the impact of research results. The SIC research portal is open to all stakeholders from all sectors and disciplines interested in social innovation.

### 4.3 The types and formats of data generated

The data are blog posts, written in a non-academic style, in an easily comprehensible way to make them accessible to a broad audience, including 'lay people'. The blog posts serve as a communication channel to disseminate academic, professional or technical expertise and knowledge to a broader audience. The origin of the data can be:

* Interesting insights from a conference/workshop/etc., together with a link to the presentations;
* Summaries of reports from an SI-related project;
* SIC-related, research-orientated deliverables, with reference to the deliverable download;
* A theory heard from a keynote speaker at a conference;
* A publication about SI definitions, theories, ...
* An innovative research method, where the experiences of other researchers would be helpful;
* An emerging hot topic research question;
* Etc.

Contributors of blog posts can be:

* Ongoing research activities within SIC, mainly from WP2 partners
* Beyond WP2 partners, other researchers within the SIC consortium
* Researchers within the SIC networks
* Researchers from previous SI-related projects
* Researchers who were personally invited to post, e.g. at a conference

### 4.4 The expected size of the data

By 16 January 2019, 32 post entries, with an estimated 160,000 bytes, had been published. By the end of the SIC project, around 35 posts can be expected in the research portal, with an estimated 175,000 bytes.

### 4.5 Discovery of data

The data are formatted and presented as blog posts and can be found on the SIC website, in particular on the subsite https://www.siceurope.eu/resources/research-portal. The SIC website is hosted by the company Effusion LLP, which WP1 partner SIX chose as the contractor following a considered decision process (see Deliverable D1.5 Digital Platform). The metadata on the SIC website is stored and located on a server which is managed on the website's behalf by Bytemark (bytemark.co.uk/terms/sla/), a datacentre that has multiple connections to the internet, within a sealed air-conditioned environment, and which is UPS and diesel generator protected (see more information in Annex 7.6: Website Hosting & Maintenance Schedule).

All contributors can, in addition to the SIC website, choose to publish their social innovation publications in the reference manager Zotero. For access to Zotero they have to send a request to SIC partner ZSI (Maria Schwarz, [email protected]), who holds the formal agreement and hosting of SIC at the Zotero portal.

### 4.6 Naming conventions

Each blog post is written by one or several author(s), who follow a predefined structure.
Following that predefined structure, each blog post consists of the following data:

* **Title** of the blog post: it provides a notion of the content of the blog post; the title is created by the author(s)
* **Author(s)** of the blog post, her/his or their affiliation
* (in most cases) a summary or teaser of the blog post, created by the author(s)
* **Content** of the blog post: this is about emerging hot research topics and questions, remarkable theories and innovative research methods; it may include links to external websites and sometimes photos which illustrate the description; the content is created by the author(s).
* **Hashtags:** to make finding easier (keyword search option) when spreading the blog post on social media (Twitter and Facebook). These hashtags are recommended to inspire author(s): #socinn #socialinnovation #socialimpact #socimp #socent #socialventures #CSR #workinnov #gamechangers #digital #soctec #socdesign #grassroots #experimental #community #cities #challenges #participation #networks #sicommunity #researchimpactEU #knowledge #publicsectorSI #refugees #innovation #socialcohesion #tsimanifesto.
* **Tags**: which are linked to other relevant entries on the SIC website.

1. _SIC Network Topics:_

* Academia-led innovation
* Cities and Regional Development
* Climate Innovation Network
* Collaborative and Sharing Economy
* Community-led Innovation
* Corporate Social Innovation
* Digital Social Innovation
* Funding Social Innovation
* Intermediaries
* Public Sector Innovation
* Social Economy

2. _SIC Themes:_

* Education
* Environment
* Design
* Finance
* Health
* Migration
* Gender
* Poverty
* Employment
* Energy
* Transport / Mobility
* Others

The procedure for data backup and recovery on the SIC website is agreed with the website host Effusion. As mentioned above, the SIC website is located on a server which is managed on Effusion's behalf by Bytemark. A data backup plan is in place. The plan secures the following:

* A nightly backup of the SIC website is taken by Bytemark on behalf of Effusion and stored securely on-site by them. In the event of a serious system or hardware issue, the site could be restored from this backup. This is the 'system backup'.
* In addition to the system backup, Effusion built the SIC website to be configured to automatically take a 'site backup' at least once each day. The 'site backup' takes a copy of the website itself, any files uploaded to it and the content contained within the CMS, which is stored within the site database. This backup is saved within the SIC webspace. Each day a further backup of the website is taken and appended to the SIC webspace. This means that, should SIC need to recover data from the site, or the site itself, Effusion can help SIC to recreate the site at a given point in time, subject to a backup being available.
* The backups are not stored indefinitely, as they can take up quite a lot of disk space; a housekeeping process is contained within the backup system to clear up unnecessary backups, such that SIC could, for example, restore the site to any point within the past week, any week within the past month, or any month within the past year.

The data plan also provides for periodic reports on the website. This is part of the agreed maintenance schedule. The backup system can be accessed from within the administration menu: Configuration > System > Backup and Migrate.
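The housekeeping rule described above (keep every backup from the past week, one per week for the past month, one per month for the past year) is a classic grandfather-father-son rotation. Purely as an illustration, and not Effusion's actual implementation, the following sketch shows how such a retention rule can be computed over a list of backup dates:

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today):
    """Apply a week/month/year retention rule to a list of backup dates.

    Keeps: all backups from the last 7 days, the newest backup of each
    ISO week over the last ~month, and the newest backup of each month
    over the last year. Everything older is eligible for clean-up.
    """
    keep = set()
    weekly, monthly = {}, {}
    for d in sorted(backup_dates):
        age = (today - d).days
        if age < 0 or age > 365:
            continue                        # future or older than a year: drop
        if age <= 7:
            keep.add(d)                     # daily tier: keep everything
        elif age <= 31:
            weekly[d.isocalendar()[:2]] = d   # newest per ISO week wins
        else:
            monthly[(d.year, d.month)] = d    # newest per month wins
    keep.update(weekly.values())
    keep.update(monthly.values())
    return keep

# Example: nightly backups for the past 400 days.
today = date(2019, 1, 16)
nightly = [today - timedelta(days=i) for i in range(400)]
print(len(backups_to_keep(nightly, today)), "backups retained")
```

The attraction of this scheme is that the number of retained backups stays roughly constant (about two dozen here) no matter how long nightly backups keep accumulating.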
For more information on procedures for data processing and website hosting terms and conditions, see Annexes 7.7 and 7.8.

The research portal does not contain sensitive data, and therefore no procedure for the transfer of sensitive data is needed in the project.

Lastly, the Research portal has been identified by SIC as one of the main outcomes which will be sustained after the project implementation. Details on how this platform will be exported to the European School of Social Innovation (ESSI) website are included in the following section.

## 5\. General Data Protection Regulation

**5.1 What is the GDPR?**

Since 25 May 2018, the European Union has applied a new law on the protection of personal data. The law focuses on the personal data of natural persons in the working environment; it does not cover relationships between people in the private sphere.

**What is personal data?**

Data is personal if we can identify an individual behind it. It can be unique data or data linked to other data.

⮚ _A first name alone is not personal data. You know lots of different people called Marie, so if someone speaks about Marie you cannot identify one specific person. But if that person tells you that s/he is referring to Marie who works at AEIDL, it becomes personal data._

So personal data can be:

* Names and surnames, unless they are too common (e.g. Jacques Dupond)
* Postal addresses
* Email addresses
* National registry numbers
* Bank accounts
* GPS localisation
* IP addresses
* Vehicle numbers
* CVs
* Medical dossiers
* Biogenetic code
* ...

It is sometimes difficult to draw the line with sensitive data like religious conviction or sexual orientation. But the rule to know what is covered by the law is: **if you can identify a real person behind the data, it's personal data.**

### Main pillars of the new law

The main principle is to inform people. Real people have to be aware that organisations have their personal data, for which purpose, and for how long they will keep it. That brings us to the questions:

* On which basis can I justify the use of some information and not the other?
* How long can I keep personal data?
* Do I have to ask the consent of each person?
* What type of data do I already manage?

The new law gives us clues to answer these questions. The GDPR is about transparency, limitation and minimalism, precision and a data retention time limit, and of course about confidentiality.

#### **Transparency**

Transparency is the key point of the regulation. All individuals have to know where their personal data is stored and why, through correct information. Organisations must be transparent. When people give their personal data to companies, in exchange they have to receive information that explains why they had to give each type of data.

⮚ _When you subscribe to a newsletter, you have to give your first name, surname and email address. The organisation gives you the information that your data will only be used for the newsletter. If it is also used for something else, like a mailing list to send alerts, or kept in a database, you must be informed of this._

#### **Minimalism and limitation**

We are only allowed to collect the data needed for our daily work and to continue our activities. **We cannot collect data for the sake of collecting data.** It is not necessary to know the geographical regions of subscribers for a newsletter subscription. That's the principle of minimalism and limitation.
We cannot ask for someone's medical file if it is not needed and if there is no reason to keep this information.

#### **Exactitude**

The monitoring authority will also check whether all the information is correct and whether the organisation is still within the data conservation period. In order to keep track of the conservation period, we have to take note of the process for each type of information collected, and also guarantee that we only hold the information we need.

#### **Confidentiality**

Confidentiality is the last pillar of the GDPR. The data must be available only to the person that needs it. Access to personal data must be secured by a firewall and password and, if it is a paper version, secured by an alarm system. This pillar is also important for subcontractors. All the rules mostly concern the owner of the data. If the company is not the owner of the data, it has to reassure the client that the data is secure, with restricted access, on European servers.

##### 5.2 GDPR during the SIC project

In order to comply with the above-mentioned GDPR, WP1 Leader SIX updated the privacy policy for SIC and sent out an email (from [email protected]) on 18 May 2018 to all the recipients of the SIC mailing list to ask them for their permission to stay in touch. Those who did not sign up approving the new GDPR policy were removed from the list. Connected with the Update Settings option, a new window popped up so that users could either update their preferences or unsubscribe from the newsletter in a very easy way.

##### 5.3 GDPR after the SIC project

SIX, the partner in charge of SIC Communication and Dissemination (WP1), will send out the last official SIC newsletter in January 2019. As it has been agreed to keep a quarterly SIC newsletter in the interest of sustainability (which SIX offered to design and launch), current SIC newsletter recipients will be asked whether they are interested in continuing to follow social innovation news in Europe and, if so, to sign up to a new list, which is designed and held by SIX following the GDPR rules. This transition and continuation of SIC news will also be shared via SIC's other communication channels such as Twitter and Facebook. Both the Facebook Group and the @SICommunity Twitter handle will stay active after SIC comes to an end. Find more information about the Sustainability plan in Deliverable 1.3.

## 6\. Sustainability plan for keeping SIC data online

As stated in Deliverable 1.3: Final Network Map and Sustainability Plan:

_The SIC website will stay online for as long as the URL is active (until May 2019). From March 2019, after the H2020 project officially ends, the website will have a note on the homepage that says: 'this is no longer a funded programme therefore we are not updating the site, but please feel free to view content'. Visitors will be guided to the Learning Repository, the ESSI website and to sign up for SIX's SIC newsletter._

Below is a more detailed plan for the way SIC will keep and move data, around the Research Portal in particular, as well as for the SIC Learning Repository, for which the SIC consortium has agreed to 'invest' in an updated version, keeping this online platform alive for a year after SIC comes to an end.
### 6.1 Research Portal and research outputs

As part of the SIC Sustainability Plan to keep the main SIC products alive, 4 taskforce groups were created to discuss and develop a specific SIC Sustainability Plan, led by SIX. As part of this process, a SIC Research taskforce was created. Following discussions at the GA meeting in Zagreb in March 2018, plus a working/taskforce group process in spring 2018, the group proposed an approach to the sustainability of the research elements. The proposal was discussed by the consortium at the GA on 26 June 2018 in San Sebastian, and the plan outlined below was agreed.

**6.1.1 Agreed Sustainability Plan for SIC Research**

**General comments/reflections:**

* SIC has enabled SI research to unite across research institutions, but also to connect in different ways with communities and practice 'outside' research practice. This is valuable and something SIC would want to see continuing. The consortium thinks some of these 'practices' will be kept in the future through the involved researchers' continued work, e.g. other research projects/conferences, shared journals etc., and through the existence of network organisations like SIX.

**Proposals/suggestions for sustainability**

1. **ESSI website to 'house' SIC Research elements**

* The ESSI website will take over the SIC Research section to sustain the research elements of SIC
* All key SIC research outputs and other important research pieces should also be posted on the ESSI website

Considerations:

* All SIC pieces should keep the SIC brand/logo and should be referred to as SIC outputs
* All content needs to be open access
* All consortium partners are welcome to keep producing content under the SIC brand (particularly if things are produced by a partnership of 2 or more SIC partners)
* If any researcher/partner uses content produced under SIC, it must be branded/cited as SIC original content
* If we produce content independently inside our projects that would be interesting for the SIC community, it should be shared on ESSI

2. **ESSI events to promote SIC**

* SIX and ESSI will use each other's events to promote SIC research in the time after SIC. The next biannual ESSI conference will take place in October 2019, and it has already been agreed that space will be left for SIC research to be 'shared'. In addition, there will be other SIC sustainability efforts (like the SI Assembly (D6.10)), which will provide a space for SI researchers to engage with other sectors and connect around the interest of SI.

**6.1.2 Next steps**

Members of WP2: Research, and other SIC partners who are involved with research communities (e.g. the network facilitator of the Academia-led Social Innovation network), have been given the task of pointing out which pieces from the SIC website should be moved to the ESSI platform. This will be decided by the end of January 2019 to allow the SIC dissemination lead and ESSI to collaborate on the transfer, which will be reported in the 2nd Progress Report due by 31 March 2019.

**6.1.3 SIC elements and data to be sustained on the Learning Repository**

A number of key outputs from SIC (which are currently presented on the SIC website) will be moved to the updated SIC Learning Repository in M36. In addition to existing content, new descriptions of SIC learning offers will be created to describe how they can be conducted in the future. The Learning Repository is open access and fully SIC branded.

**The updated version of the Learning Repository will have two main areas of content:**
1. A landing page in the form of a blog devoted to the public sector and the use of social innovation to improve the welfare system. This area will collect all the content already produced in SIC to support the public sector in embedding social innovation in the everyday practice of service delivery;

2. A second area containing all the tools produced and collected in SIC to support the design and experimentation of SI by researchers, innovators and intermediaries.

The maintenance of the Learning Repository will be led by SIC partner UNIBO/POLIMI, who in the first year will update the platform with relevant news and content. All SIC partners are welcome to feed in content. For more information, see D4.5 SIC Learning Materials Repository integrated in SIC Online Platform and fully developed.

**6.2.1 Next steps**

Similarly to the Research Portal, the SIC partners leading the different areas of work whose elements were agreed to be moved to the Learning Repository will be in charge, together with WP1 Leader SIX and Learning Repository host UNIBO/POLIMI, of selecting the existing content and the additional content to be created. This process has started and will finish in M36. If new learning elements are offered in the future (like the SIC summer schools), a licence signed by all partners will be created. The SIC website is not closing before May 2019, when the URL expires, allowing time to move the last key content produced in the final months of SIC. More information on this procedure can be found in D1.3 Final network map and sustainability plan.
# Research Data

Before illustrating the approach followed by the TEKNOAX 2.0 project with respect to data management, it is worth defining research data and the key principles for research data management.

Research data refers to data that is collected, observed, or created within a project for purposes of analysis and to produce original research results. Data are plain facts. When they are processed, organized, structured and interpreted to determine their true meaning, they become useful and are called information. In a research context, research data can be divided into different categories, depending on their purpose and on the process through which they are generated. It is possible to have:

* **Observational data**, which are captured in real time, for example sensor data, survey data, sample data.
* **Experimental data**, which derive from lab equipment and tests, for example resulting from fieldwork.
* **Simulation data**, generated from physical or numerical models.
* **Derived / compiled data**, which involve using existing data points, often from different data sources, to create new data through some sort of transformation, such as an arithmetic formula or aggregation.

Research data may include all of the following formats: text or word documents, spreadsheets, laboratory notebooks, field notebooks, diaries, questionnaires, transcripts, codebooks, audiotapes, videotapes, photographs, films, test responses, slides, artifacts, specimens, samples, collections of digital objects acquired and generated during the research process, data files, database contents, models, algorithms, scripts, contents of software applications such as input, output, log files, simulations, methodologies and workflows, standard operating procedures and protocols.

While open access to research results will be guaranteed, no confidential data generated within the project will be made available in digital form.

## Key principles for research data management

According to the "_Guidelines on FAIR Data Management in Horizon 2020_", research data must be _findable_, _accessible_, _interoperable_ and _re-usable_. The FAIR guiding principles are reported in the following table.

<table>
<tr> <th> **FINDABLE** </th> <th> **F1** (meta)data are assigned a globally unique and eternally persistent identifier **F2** data are described with rich metadata **F3** (meta)data are registered or indexed in a searchable resource **F4** metadata specify the data identifier </th> </tr>
<tr> <td> **ACCESSIBLE** </td> <td> **A1** (meta)data are retrievable by their identifier using a standardized communications protocol **A1.1** the protocol is open, free, and universally implementable **A1.2** the protocol allows for an authentication and authorization procedure, where necessary **A2** metadata are accessible, even when the data are no longer available </td> </tr>
<tr> <td> **INTEROPERABLE** </td> <td> **I1** (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation **I2** (meta)data use vocabularies that follow FAIR principles **I3** (meta)data include qualified references to other (meta)data </td> </tr>
<tr> <td> **RE-USABLE** </td> <td> **R1** meta(data) have a plurality of accurate and relevant attributes **R1.1** (meta)data are released with a clear and accessible data usage license **R1.2** (meta)data are associated with their provenance **R1.3** (meta)data meet domain-relevant community standards </td> </tr>
</table>

## Roadmap for data sharing

According to the aforementioned aspects, data management can be based on the following elements:

* **Data set reference and name:** identifier for the data set to be produced. In particular, in the framework of TEKNOAX 2.0, the technical data related to the laboratory tests of the new generation of axles will, according to internal ADR practices, be referenced using a code composed of the year and a three-digit progressive number (e.g. 17-XXX-RP, 18-YYY-RP). A report will be associated with the data (in the code, the presence of the report is indicated by the suffix "RP").
* **Data set description:** the data that will be generated or collected during TEKNOAX 2.0 project execution will be described, as well as its origin (in case it is collected), nature and scale, and to whom it could be useful. Information on the existence (or not) of similar data and the possibilities for integration and reuse will also be included (see previous point, RP report).
* **Standards and metadata:** reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created has to be given. In the report, the standards used for the tests are mentioned. In particular, official standards are followed for the braking system, whereas for the axles themselves internal, non-official procedures and custom-made test machines will be used.
* **Data sharing:** description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. The repository where data will be stored will be identified, if already existing, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related). This point is of remarkable importance: since the scope of the TEKNOAX 2.0 project is to launch a product onto the market within 12 months after the end of the project, datasets will in most cases be confidential, in order not to provide a competitive advantage to competitors.
* **Archiving and preservation (including storage and backup):** the procedures that will be put in place for long-term preservation of the data shall be described, indicating how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.

Since at month six not all data sets have been generated yet (in particular, the overall dataset related to the laboratory tests on axles will be completed by the end of November 2017), the previous list is to be intended as a guideline for data generated in the future. Obviously, the sharing of data will be strictly linked to the level of confidentiality of the data itself.

# Open access

Before going deeper into the aspects related to TEKNOAX 2.0 research data in chapter 5, the following provides an overview of "open access" and the consequent contractual obligations.
Open access can be defined as the practice of providing online, free-of-charge access to scientific information related to project outcomes. In the context of R&D, "scientific information" mainly refers to:

* **peer-reviewed scientific research articles**, if project results are going to be disseminated in academic journals;
* **research data**, meaning not only the data underlying the aforementioned scientific publications, but also any other data related to project activities, both processed and raw.

Although there are no legally binding definitions of open access, authoritative definitions appear in key political declarations such as the _2002 Budapest Declaration_ and the _2003 Berlin Declaration_. Under these definitions, "access" includes the right to read, download and print, but also to copy, distribute, search, link, crawl and mine the aforementioned data, provided that obligations to confidentiality, security and protection of personal data are ensured and the achievement of TEKNOAX 2.0 objectives, including the future exploitability of results, is not jeopardized.

Open access is not a requirement to publish, but it is seen by the European Commission as an approach to facilitate and improve the circulation of information in the European research area and beyond. Open access to data generated in projects funded by the European Commission is the key to lowering barriers for accessing publicly-funded research, as well as to demonstrating and sharing the potential of research activities supported with the help of public funding.

## Open Access in the Grant Agreement

The importance given by the European Commission to the open access issue is clearly outlined in the TEKNOAX 2.0 Grant Agreement. In particular, Articles 29.2 and 29.3 state the responsibilities of beneficiaries and the actions to be undertaken in order to ensure open access to scientific publications and to research data, respectively. The text of the aforementioned articles is reported below.

### _Article 29.2: Open access to scientific publications_

_Each beneficiary must ensure open access (free of charge, online access for any user) to all peer-reviewed scientific publications relating to its results._

_In particular, it must:_

1. _as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications;_

_Moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications._

2. _Ensure open access to the deposited publication — via the repository — at the latest: I. on publication, if an electronic version is available for free via the publisher, or II. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case._
3. _Ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication._

_The bibliographic metadata must be in a standard format and must include all of the following:_

_\- the terms "European Union (EU)" and "Horizon 2020";_
_\- the name of the action, acronym and grant number;_
_\- the publication date, and length of embargo period if applicable, and_
_\- a persistent identifier._

### _Article 29.3: open access to research data_

_Regarding the digital research data generated in the action ('data'), the beneficiaries must:_
1. _deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_

   1. _the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
   2. _other data, including associated metadata, as specified and within the deadlines laid down in the data management plan (see Annex I);_

2. _provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves)._

It is also relevant to underline the following clauses:

_This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply._

_As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex 1, would be jeopardized by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access._

The confidentiality aspects have been duly taken into account in the preparation of this document, in order not to compromise the protection of project results and the legitimate interests of project partners. Special attention in this context has been paid in particular by the project coordinator ADR, whose objective is to launch, thanks to TEKNOAX 2.0, innovative axles in the market in the near future after the end of the project.

## Open Access Research Data Pilot

Projects starting from January 2017 are by default part of the Open Research Data Pilot (ORDP), launched by the EC in the context of the Horizon 2020 framework programme. The aim of the pilot is to improve and maximise access to and re-use of data generated by research projects, which are usually small sets spread across repositories all over Europe.

The pilot is an excellent opportunity to stimulate and nourish the data-sharing ecosystem and has the potential to connect researchers interested in sharing and re-using data with the relevant services within their institutions (library, IT services), data centres and data scientists. The pilot is intended to promote the value of data sharing to both researchers and funders, as well as to forge connections between the various players in the ecosystem.

As part of the ORDP, the TEKNOAX 2.0 consortium commits itself to undertaking all the necessary actions, where possible, to comply with the aforementioned principles, while respecting IPR protection constraints. A brief description of such provisions is provided in the next section.

### Enabling projects to register, discover, access and re-use research data

In order to comply with the principles which underpin the Open Research Data Pilot, researchers are expected to provide answers to key issues such as "what", "where", "when", "how" and "who".

**WHAT:** The Open Data Pilot covers all research data and associated metadata resulting from EC-funded projects, if they serve as evidence for publicly available project reports and deliverables and/or peer-reviewed publications.
To support discovery and monitoring of research outputs, metadata have to be made available for all datasets, regardless of whether the dataset itself will be available in Open Access. Data repositories might consider supporting the storage of related project deliverables and reports, in addition to research data.

**WHERE:** All research data have to be registered and deposited into at least one open data repository. This repository should: 1) provide public access to the research data, where necessary after user registration; 2) enable data citation through persistent identifiers; 3) link research data to related publications (e.g. journals, data journals, reports, working papers); 4) support acknowledgement of research funding within metadata elements; 5) offer the possibility to link to software archives; 6) provide its metadata in a technically and legally open format for European and global re-use by data catalogues and third-party service providers, based on widespread metadata standards and interoperability guidelines.

Data should be deposited in trusted data repositories, if available. These repositories should provide reliable long-term access to managed digital resources and be endorsed by the respective disciplinary community and/or the journal(s) in which related results will be published (e.g., Data Seal of Approval, ISO Trusted Digital Repository Checklist).

**WHEN:** Research data related to research publications should be made available to the reviewers in the peer review process. In parallel to the release of the publication, the underlying research data should be made accessible through an Open Data repository. If the project has produced further research datasets (i.e. not necessarily related to publications), these should be registered and deposited as soon as possible, and made openly accessible as soon as possible, at least at the point in time when used as evidence in the context of publications.

**HOW:** The use of appropriate licenses for Open Data is highly recommended.

**WHO:** Responsibility for the deposit of research data resulting from the project lies with the project coordinator (delegated to project partners where appropriate).

## Research Data Repositories

The TEKNOAX 2.0 project website is provided with a Private Document Section ( _http://www.teknoax2dot0.eu/private-documents_ ), a repository that includes all the main project documents produced by the consortium in their consolidated version, and any other private document exchanged by partners. Only registered users (project partners) can access this area, and new users can be created only by the administrator. The private area of the TEKNOAX 2.0 project website facilitates and enhances the information flow among the partners.

Particular attention will be paid to confidential and/or sensitive data, and the consortium will not disclose or share this information with third parties. For very sensitive data, ADR will use internal repositories; where not strictly necessary, such data will not be shared even within the consortium, and derived figures will be provided instead (e.g. percentage of improvement).

Concerning the open access of discoverable data, an analysis of the potential options was carried out by the TEKNOAX 2.0 consortium. One of the possibilities is to use **ZENODO** (http://www.zenodo.org/), the cost-free open access repository of **OpenAIRE** (the Open Access Infrastructure for Research in Europe, https://www.openaire.eu/).
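Should Zenodo be selected, a public dataset could also be deposited programmatically. The sketch below is illustrative only and relies on Zenodo's documented REST deposit API; the access token, file name, metadata values and licence choice are placeholders, not actual TEKNOAX 2.0 artefacts.

```python
# Minimal, illustrative sketch of a Zenodo deposit (Python + requests).
# Everything below except the grant number is a placeholder assumption.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "<personal-access-token>"  # placeholder; requires 'deposit:write' scope

# 1) Create an empty deposition
resp = requests.post(f"{ZENODO}/deposit/depositions",
                     params={"access_token": TOKEN}, json={})
resp.raise_for_status()
deposition = resp.json()

# 2) Upload the data file into the deposition's file bucket
bucket = deposition["links"]["bucket"]
with open("results.csv", "rb") as fp:  # placeholder file name
    requests.put(f"{bucket}/results.csv", data=fp,
                 params={"access_token": TOKEN}).raise_for_status()

# 3) Attach the descriptive metadata (title, authors, EC grant, licence)
metadata = {"metadata": {
    "title": "TEKNOAX 2.0 - validation test results (public dataset)",
    "upload_type": "dataset",
    "description": "Validation of performances in a relevant operational environment.",
    "creators": [{"name": "Surname, Name", "affiliation": "ADR"}],
    "access_right": "open",
    "license": "cc-by-4.0",
    "grants": [{"id": "737848"}],  # links the record to the H2020 grant
}}
requests.put(f"{ZENODO}/deposit/depositions/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4) Publish: the record is assigned a citable DOI (a persistent identifier)
requests.post(f"{ZENODO}/deposit/depositions/{deposition['id']}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```

Publishing mints a DOI, which directly satisfies the persistent-identifier and funding-acknowledgement expectations listed above.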
The goal of the OpenAIRE portal is to make as much European-funded research output as possible available to all. Institutional repositories are typically linked to it. Moreover, dedicated pages per project are visible on the OpenAIRE portal, making research output (whether publications, datasets or project information) accessible through the portal. This is possible thanks to the bibliographic metadata that must accompany each publication.

Other possible repositories will be investigated at a subsequent stage of the project, depending on the particular characteristics of the dataset, by taking advantage of the **re3data** search engine (http://www.re3data.org/). Re3data.org is a global registry of research data repositories covering repositories from different academic disciplines. It presents repositories for the permanent storage and access of data sets to researchers, funding bodies, publishers and scholarly institutions.

# Publications

According to the TEKNOAX 2.0 Grant Agreement, participants have the obligation to disseminate their results as soon as possible, unless it goes against their legitimate interests and subject to any necessary restriction aimed at the protection of results and confidentiality. Protecting results is indeed crucial, since their premature disclosure can destroy the participants' chances of being granted intellectual property rights, in particular when dealing with patents and utility models that require novelty.

No scientific publications in peer-reviewed journals are foreseen within the TEKNOAX 2.0 project. In fact, the consortium does not include universities or research centres, since the project is oriented to bringing together complementary expertise able to launch new products/services in the market (a new generation of axles and a platform enabling innovative services) within 36 months from the grant. However, in order to disseminate project results, some articles will be published in topic-specific journals, paving the way to future exploitation and commercialization.

Prior to any dissemination activity and/or publication, the other partners shall be consulted in order for them to exercise their right to object in case such dissemination could cause significant harm to their background or results. In particular, at least 45 days' prior notice of any dissemination activity shall be given to the other beneficiaries concerned who, within 30 days, may object to the dissemination activity. The publications will be made available in appropriate repositories in order to have the maximum impact.

# Teknoax 2.0 research data

The TEKNOAX 2.0 project is funded by the Fast Track to Innovation Pilot scheme, which provides funding for bottom-up proposals for close-to-market innovation activities. Since the main objective of this funding scheme is to bring the proposed innovation to the market within three years from the beginning of the project, even more attention must be paid to the kind of data shared while the project is still ongoing, in order not to jeopardize market uptake and exploitation of project outcomes.
In section 1.3 of the TEKNOAX 2.0 DoA (Description of Action), it is stated that the innovation potential and market competitive advantage of the proposed solution lie in the combination of three different innovations, listed below:

* A monolithic hollow-shape axle obtained through an innovative mechanical production process
* A system bringing intelligence onto axles
* A collaborative ICT Platform and communication system as interface of end-users with manufacturers

Research data linked to the development and testing phase of the aforementioned innovations will not be made accessible, in order not to compromise the exploitation of results. This aspect is reflected also in the dissemination level of the corresponding deliverables, which remain confidential until the end of the project. However, it is worthwhile to point out that the project focuses more on the manufacturing and testing process of the intelligent axles than on the production of research or observational data, so the amount of research data to be produced is limited, at least at this stage of the project. In the next section, a tentative description of the expected project datasets is provided, together with a preliminary indication of their confidentiality.

## Research data types

The following table reports a tentative list of the research data types that will be produced by the TEKNOAX 2.0 project. A description of each dataset is given in the following sections of the present document.

<table>
<tr> <th> **WP** </th> <th> **Main Activities** </th> <th> **Research Dataset** </th> <th> **Confidential** </th> </tr>
<tr> <td> **2** </td> <td> Definition of end-user needs and features to be considered for further development and customization of the novel axle; test campaigns to set optimal values for manufacturing processes </td> <td> Axles technical specifications </td> <td> no </td> </tr>
<tr> <td> </td> <td> </td> <td> End user needs, market demand and future development </td> <td> no </td> </tr>
<tr> <td> </td> <td> Re-design of the clamping system; check of the new technology's compliance with technical standards </td> <td> Test results on axles for the refinement and optimization of the manufacturing process </td> <td> yes </td> </tr>
<tr> <td> **3** </td> <td> Definition of the TEKNOAX 2.0 platform architecture and specifications of its main components; testing of platform functionalities in a laboratory environment; validation of platform lab results in an operating environment </td> <td> Platform sensing and functional requirements </td> <td> yes </td> </tr>
<tr> <td> </td> <td> </td> <td> Data collected by axles </td> <td> yes </td> </tr>
<tr> <td> </td> <td> </td> <td> Test procedures and results </td> <td> yes </td> </tr>
<tr> <td> **4** </td> <td> Tests on a trailer equipped for trials in real working conditions; surveys and training sessions to collect feedback from end users on axle and platform performance </td> <td> Validation of performances: test results in relevant operational environment </td> <td> no </td> </tr>
</table>

Any change in the expected results will be reported directly in the deliverable associated with the specific task or in the periodic report.
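To make the confidentiality status captured in the table above easy to act upon, the consortium could keep a machine-readable register of datasets and filter it before any deposit. The snippet below is purely illustrative: the structure and names are assumptions, not project code.

```python
# Illustrative register of the TEKNOAX 2.0 research data types listed above;
# only non-confidential entries are ever candidates for open deposit.
DATASETS = [
    {"wp": 2, "name": "Axles technical specifications", "confidential": False},
    {"wp": 2, "name": "End user needs, market demand and future development", "confidential": False},
    {"wp": 2, "name": "Test results for manufacturing process optimization", "confidential": True},
    {"wp": 3, "name": "Platform sensing and functional requirements", "confidential": True},
    {"wp": 3, "name": "Data collected by axles", "confidential": True},
    {"wp": 3, "name": "Test procedures and results", "confidential": True},
    {"wp": 4, "name": "Validation of performances (relevant operational environment)", "confidential": False},
]

def open_access_candidates(register):
    """Return only the datasets that may be considered for an open repository."""
    return [d for d in register if not d["confidential"]]

for d in open_access_candidates(DATASETS):
    print(f"WP{d['wp']}: {d['name']}")
```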
### Axles technical specifications and optimization of manufacturing process

This dataset consists of:

* Requirements from the point of view of the axle manufacturers (production-related) and end-users
* Market needs
* Mechanical requirements and technical specifications
* Laboratory testing:
  * vertical fatigue tests
  * spindle tests
  * 3- or 4-point bending tests
  * hub endurance tests
  * bearing life tests

The above data are considered confidential, except for the market needs.

### Platform sensing and functional requirements

This dataset consists of:

* Tests and data connected to the repository of the TEKNOAX 2.0 Platform (confidential)

### Validation of performances

This dataset consists of:

* Data related to on-field testing (confidential)
* Data related to validation in relevant operational environment (public)

# Conclusions

The present document has outlined a preliminary strategy for the management of data generated throughout the TEKNOAX 2.0 project. Considering that this deliverable is due at month six, few data sets have been generated yet, so it is possible that some aspects outlined in the present document will need to be refined or adjusted in the future. This initial data management plan has, however, demonstrated that the consortium fully commits itself to complying with open access requirements. Moreover, a tentative list of datasets has been produced, showing the soundness of the concepts that the project aims to develop and demonstrate.

_This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 737848._

_This report reflects only the author's view and the Agency is not responsible for any use that may be made of the information it contains._
# Executive summary

This document is the deliverable **"D1.3 – Data management plan v.1"** of the European project "SIMPATICO - SIMplifying the interaction with Public Administration Through Information technology for Citizens and cOmpanies" (hereinafter also referred to as **"SIMPATICO"**, project reference: 692819).

The **Data Management Plan (DMP)** describes the types of data that will be generated and/or gathered during the project, the standards that will be used, the ways in which data will be exploited and shared (for verification or reuse), and the way in which data will be preserved. This DMP has been prepared by taking into account the template of the **"Guidelines on Data Management in Horizon 2020"** [Version 2.1 of 15 February 2016]. The elaboration of the DMP will allow SIMPATICO partners to address all issues related to data protection, including ethical concerns and the security protection strategy.

SIMPATICO takes part in the **Open Research Data Pilot (ORD pilot)**; this pilot aims to improve and maximise access to and re-use of research data generated by Horizon 2020 projects, such as the data generated by the SIMPATICO platform during its deployment and validation. Moreover, under Horizon 2020 each beneficiary must **ensure open access to all peer-reviewed scientific publications** relating to its results: these publications shall also be made available through the public section of the SIMPATICO website. All these aspects have been taken into account in the elaboration of the DMP.

A first version of the deliverable "D1.3 – Data management plan v.1" was released at the beginning of the project (M6). **This version is an update of D1.3 produced at project month M16**, after the detailed definition of the project use-cases and the revision of the ethics-related aspects of the project by the Ethics Advisory Board, and taking into account the feedback collected during the 1st year project review. A revised deliverable is expected at the end of the project ("D1.4 – Data management plan v.2", at M36). However, the DMP will be a **living document** throughout the project, and these initial versions will evolve during the SIMPATICO lifespan according to the progress of project activities.

Starting from a brief illustration of the SIMPATICO project, and of the ethical concerns raised by the project activities, this report describes **the procedures of data collection, storing and processing**, with a final overview of the **SIMPATICO security protection strategy**. This report does not cover the general concerns related to ethics and data protection, as they are the focus of dedicated deliverables already submitted – namely reports "D1.5 – Ethics compliance report", "D8.1 – H – Requirement no. 1", and "D8.2 – POPD – Requirement no. 2".

# Introduction

The research activities undertaken in the SIMPATICO project have important data protection aspects, in particular due to the foreseen involvement of public/private stakeholders and citizens and due to the necessity to collect, store and process personal data. This deliverable analyses the **data management implications** of the activities undertaken in the project, and describes the guidelines and procedures put in place in order to ensure compliance with data management requirements.

The rest of this section provides **background information on the SIMPATICO project** (Subsection 1.1) and identifies in brief the **ethical issues** raised by the project activities (Subsection 1.2).
The project aims **to maximise access to and re-use of research data**, also ensuring **open access to all peer-reviewed scientific publications** relating to its results, in order to pave the way for its data management plan according to the signed Grant Agreement – GA (Subsection 1.3).

Section 2 concerns the detailed **description of SIMPATICO datasets**, according to the requirements set out in Annex 1 - Data Management Plan template of the "Guidelines on Data Management in Horizon 2020" [1]: (a) the handling of research data during and after the project; (b) what data will be collected, processed or generated; (c) what methodology & standards will be applied; (d) whether data will be shared/made open access and how; (e) how data will be curated and preserved. Finally, Section 3 presents the **SIMPATICO security protection strategy**.

## SIMPATICO in brief

SIMPATICO's goal is **to improve the experience of citizens and companies in their daily interactions with the public administration** by providing a **personalized delivery of e-services** based on advanced **cognitive system technologies** and by promoting an **active engagement of people** for the continuous improvement of the interaction with these services.

The SIMPATICO approach is realised through a platform that can be deployed on top of an existing PA system and allows for **a personalized service delivery** without having to change or replace its internal systems: a process often too expensive for a public administration, especially considering the cuts in resources imposed by the current economic situation. The goal of SIMPATICO is accomplished through a solution based on the **interplay of language processing, machine learning and the wisdom of the crowd** (represented by citizens, business organizations and civil servants) **to change for the better the way citizens interact with the PA. SIMPATICO will adapt the interaction process** to the characteristics of each user; **simplify** text and documents to make them understandable; **enable feedback for the users** on problems and difficulties in the interaction; and **engage civil servants, citizens and professionals** so as to make use of their knowledge and integrate it in the system (Fig. 1).

Figure 1: SIMPATICO concept at a glance

The project aims can be broken down into the following **smaller research objectives (ROs)**.

**RO1. Adapt the interaction process with respect to the profile of each citizen and company** (PA service consumer), in order **to make it clear, understandable and easy to follow**.

* A **text adaptation** framework, based on a **rich text information layer** and on machine learning algorithms capable of **inducing general text adaptation operations** from **few examples, and of customizing these adaptations to the user profiles.**
* **A workflow adaptation engine** that will take user characteristics and tailor the interaction according to the user's profile and needs.
* A feedback and annotation mechanism that **gives users the possibility to visualize, rate, comment, annotate and document the interaction process** (e.g., underlining the most difficult steps), so as to provide valuable feedback to the PA, further refine the adaptation process and enrich the interaction.
**RO2. Exploit the wisdom of the crowd to enhance the entire e-service interaction process.**

* An **advanced web-based social question answering engine (Citizenpedia)** where citizens, companies and civil servants will **discuss and suggest potential solutions and interpretations for the most problematic procedures and concepts.**
* A **collective knowledge** database on e-services that will be used to simplify these services and improve their understanding.
* An **award mechanism** that will **engage users and incentivize them to collaborate** by giving them **reputation** (a valuable asset for professionals and organizations) and **privileges** (for the governance of Citizenpedia – a new public domain resource) according to their contributions.

**RO3. Deliver the SIMPATICO Platform, an open software system that can interoperate with PA legacy systems.**

* A platform that **combines consolidated e-government methodologies with innovative cognitive technologies** (language processing, machine learning) at different levels of maturity, enabling their experimentation in more or less controlled operational settings.
* An interoperability platform that enables an **agile integration of SIMPATICO's solution with** PA legacy systems and that allows the exploitation of data and services from these systems with the SIMPATICO adaptation and personalization engines.

**RO4. Evaluate and assess the impact of the SIMPATICO solution**

* Customise, deploy, operate and evaluate the SIMPATICO solution on **three use-cases in two EU cities** – Trento (IT) and Sheffield (UK) – **and one EU region** – Galicia (ES).
* **Assess the impact** of the proposed solution in terms of **increase in competitiveness, efficiency of interaction and quality of experience.**

### SIMPATICO technical framework and infrastructure

The SIMPATICO project intends to provide a **software platform** incorporating technical innovations to enhance **the efficiency, effectiveness and inclusiveness of public services**. To this aim, SIMPATICO collects, generates and utilizes both personal and other data in a complex way. For what concerns this deliverable (consumption, production and storage of data), the key SIMPATICO components are the following:

1. **Citizen Data Vault**: the component that will take care of personal data exchange between a user and SIMPATICO components. It is a distributed repository of the citizen (or company) profile and related information. It is continuously updated through each interaction and is used to automatically pre-fill forms. In this way, the citizen will give the PA the same information only once, as the information will be stored in the vault and used in all the following interactions;
2. **Human Computation (Citizenpedia):** SIMPATICO fosters citizens' involvement by providing Citizenpedia, a hybrid of Wikipedia and a collaborative question answering engine, and by sharing improvements on public resources on a semi-automatic basis. Citizens, companies and civil servants will discuss and suggest potential solutions and interpretations for the most problematic procedures and concepts. In addition, the user will be able to highlight portions of text that he/she considers unclear and ask for a simplified version. These interactions will further refine the user profile and will be stored in the Citizen Data Vault to serve as the basis for the adaptation of future interactions. Public servants are able to moderate comments and suggestions of citizens to prevent crowd-wisdom bias.
The knowledge collected by a user on a specific e-service (e.g., a request of clarification or the explanation of a concept) can propagate and improve the understanding and interaction of potentially all users and e-services. An award mechanism that engages users and incentivizes them to collaborate, by giving them reputation (a valuable asset for professionals and organizations) and privileges, has been designed.

3. **SIMPATICO Adaptation Engine:** a cognitive system that will make use of innovative text processing and machine learning algorithms to adapt the text and the workflow of the interaction according to the user profile. The text adaptation engine will adapt the text of the forms and of the other documents to make it more understandable and to clarify complex elements, while the workflow adaptation engine will adapt the interaction process itself by presenting the citizen only the elements that are relevant for his/her profile (e.g., if the citizen is not a foreigner he/she will not be presented with the section of a form reserved for foreigners). The adaptation engine exploits data collected on users' interactions, using both implicit and explicit techniques; these data are stored in the "User Profile" and "Log" components of SIMPATICO.

These components are highlighted in Figure 2, depicting the SIMPATICO conceptual architecture.

Figure 2: SIMPATICO Platform conceptual architecture and main components

### SIMPATICO pilots

The piloting of the SIMPATICO platform is planned in two European cities (**Trento and Sheffield**) and one region (**Galicia**) in Italy, Spain and the United Kingdom (UK), through a **two-phase use-case validation.** The stakeholders engaged in the **three use-cases** were selected for their experience and interest in e-services, as well as for the different socio-cultural backgrounds of the three regions. In this way, the Consortium has the opportunity to validate the effectiveness of the project results in **contexts which differ in the number and heterogeneity of citizens and their social and cultural background**.

There are indeed important **differences in the technological ecosystems**, with Trento and Sheffield having just started the process of digitalization of their services to citizens and businesses (this process will actually happen in alignment and integration with the SIMPATICO activities), and Galicia having a mature and consolidated e-service delivery infrastructure (thus allowing the study of the deployment of SIMPATICO on top of an already operating system). The contexts also **differ from the point of view of the number and heterogeneity of end-users and of the variety and maturity of e-services** (**see deliverable "D6.1 – Use-case planning & evaluation v1"**).

The tables below provide a short description of the SIMPATICO pilots, summarising the general background and purpose of the use cases, as well as information on recruitment procedures, personal and sensitive data processing, and vulnerable groups involved in the experimentations.

<table>
<tr> <th> **TRENTO PILOT** </th> </tr>
<tr> <td> **General background** Trento is the capital of the Autonomous Province of Trento in Italy. It is a cosmopolitan city of 117.317 inhabitants. The digitalization of all interactions between the PA and its citizens is a priority for Trento, and the city is currently working on a strategic project in this area. Trento has already done much to improve interactions with its citizens.
Trento already supports submitting applications through certified e-mail, by sending the filled application documents and a scan of the identity document and signature. As part of its "smart city" strategy, Trento is working to realize a new e-service portal: it will serve as a "one-stop shop", a unique access point that offers integrated and facilitated access to all the various services. With this new portal, it will be possible for citizens and businesses to authenticate using smart service cards or one-time password devices, and to complete the interaction online. The Municipality of Trento has adopted "Sportello Telematico", an end-to-end solution provided by the GLOBO srl company, specifically targeting the digitalization of modules for service provision by PA. Within this solution, the digital module is a composition of sections of organic information (e.g., birth data section, residence data section, real estate registry data section). The logic of the interaction with an information section is explicitly mapped by the module designer. The integrations with legacy systems are handled via a centralized REST web service, which routes each service request to the right data source service. Finally, the solution supports module hierarchies, which guarantees the definition of a well-organized digital module library.

**Purpose of the use case** The main specific purpose of the first SIMPATICO experiment phase in Trento is to validate the integration between the Trento e-service portal and the SIMPATICO solution, with the final aim of evaluating the usability of the SIMPATICO solution. The experiment will be based on two different e-services:

* Childcare services: enrolment to day-nursery services;
* Environment quality: acoustic derogation for temporary activities (e.g., regarding musical entertainment at public premises or other cultural events).

**Recruitment procedures** The experimentation is structured in two phases: (1) a pre-evaluation phase, where the e-services and the SIMPATICO solution will be evaluated in a controlled environment by a citizen panel representative of the e-service user community; (2) an evaluation phase, where the e-services and the SIMPATICO solution will be evaluated in a production environment, open to everyone.

**Personal and sensitive data processing** For both the above-mentioned e-services, the project will collect demographic information from the participants (e.g., gender, age). More specifically, the digital module of the childcare service will require further personal data (i.e., parent/custodial records, child records, family work conditions, family economic conditions) and sensitive information (i.e., whether the family is followed by the social care service; whether the child has some form of disability) to be specified.

**Vulnerable groups** Children, persons with disabilities, and immigrant or minority communities. Please note that: the use case will involve only participants capable of giving consent (e.g., only the legal guardians and/or carers of children will be involved); the Informed Consent Form has been translated into Italian; appropriate efforts will be made to ensure fully informed understanding of the implications of participation (i.e., participants shall understand all the proceedings of the research, including risks and benefits). </td> </tr>
</table>

<table>
<tr> <th> **GALICIA PILOT** </th> </tr>
<tr> <td> **General background** Galicia is an autonomous community of Spain and a historic nationality under Spanish law.
It has a population of 2.717.749 inhabitants and a total area of 29.574,4 km². The Xunta de Galicia is the collective decision-making body of the government of the autonomous community of Galicia. The Xunta has at its disposal a vast bureaucratic organization based in Santiago de Compostela, the Galician capital. According to data provided by the IGE (Instituto Galego de Estatistica), the number of elderly inhabitants in Galicia is increasing alarmingly. Furthermore, the socioeconomic indicators for Galicia show a number of particular needs that make it well suited for e-services improvement: a sparse distribution of the population, especially in the rural parts of the region. From those areas, people often migrate to the richer coastal areas and to other Spanish regions. This has resulted in large rural areas with low population density, where access to public services is harder. Consequently, there is a big gap in the usage of e-services in Galicia in the segment of the population older than 55.

**Purpose of the use case** The main specific purpose of the use case is to analyse and validate the technological acceptance of elderly groups using the selected Xunta ("Government of Galicia") e-services and the SIMPATICO solution. This analysis and validation will assess both (1) discretionary usage and satisfaction, to measure acceptance, and (2) the effectiveness and efficiency of the e-service usage improved by SIMPATICO. The main target audience is the elderly, and two e-services have been selected:

* Grants for attendance to the wellness and spa program;
* Individual grants for personal autonomy and complementary personal assistance for disabled people.

**Recruitment procedures** This use case will be a closed experimentation. The participants will be recruited by three associations:

* FEGAUS (Federation of Associations of alumni and ex-alumni of the Senior University Programs) will provide elderly users. These users will be between 55 and 75 years old. They have a medium-high technological profile, i.e., they are able to autonomously access the internet and use modern devices such as smartphones and tablets.
* ATEGAL (Association for Lifelong Learning) will provide elderly users. These users will be adults aged over 55, with a medium cultural level. The technical level of the ATEGAL members is lower than that of the FEGAUS members.
* COGAMI is an association for people with disabilities of all ages. The technical level of the COGAMI members is heterogeneous, from entry-level users to experienced ones.

**Personal and sensitive data processing** The project will collect demographic information from the participants (e.g., gender, age) and
Sheffield is an ethnically diverse city, with around 19% of its population from minority ethnic groups. The largest of those groups is the Pakistani community, but Sheffield also has large Caribbean, Indian, Bangladeshi, Somali, Yemeni and Chinese communities. More recently, Sheffield has seen an increase in the number of overseas students and in economic migrants from within the European Union. It is estimated that migrants living in Sheffield actively speak at least 40 languages. Although a significant volume of information is openly available on the Sheffield City Council (SCC)'s website (http://www.sheffield.gov.uk/), current interactions between migrants and Sheffield City Council are mostly done in person or over the phone. An intended outcome is that more users of council services will prefer to use digital channels rather than traditional face to face, email and telephone contact. **Purpose of the use case** The main specific purpose of the first SIMPATICO experiment phase in Sheffield is to validate the implementation of the SIMPATICO technologies into the Sheffield City Council website, with the final aim to evaluate the SIMPATICO solution usability. The experiment will be based on three different e-services: * School Attendance (i.e., it aims to inform parents, education workers and general citizens about the importance of school attendance by children. The following tasks presented in the page: information advising why school attendance is important; form to report suspected truancy; pay term time absence fine); * Parenting Skills Course (i.e., it aims to inform parents about the support provided by the city council and external partners to equip them with better parenting skills); * Young Carers (it aims to support and provide information for people under 21 who look after someone else. All young carers under 18 have the right to an assessment). **Recruitment procedures** The experimentation is structured in two phases: (1) a pre-evaluation phase, where the e-services and the SIMPATICO solution will be evaluated in a controlled environment from a citizen panel representative of the e-service user community; (2) an evaluation phase, where the e-services and the SIMPATICO solution will be evaluated in a production environment and open to everyone. </td> </tr> <tr> <td> **Personal and sensitive data processing** The project will collect demographic information from the participants (e.g., gender, age) including sensitive information on ethnic or racial origin. **Vulnerable groups** Immigrants or minority communities, minors, possible persons with disabilities. Please note that: * the use case will involve only participants capable to give consent (e.g., we will involve only the legal guardians and/or carers of minors); * the Informed Consent Form has been translated in English; translation services for immigrants or minority communities will be provided (if needed); appropriate efforts will be made to ensure fully informed understanding of the implications of participation (i.e., participants shall understand all the proceedings of the research, including risks and benefits). </td> </tr> </table> ## SIMPATICO ethical issues The SIMPATICO consortium is committed to **perform a professional management of any ethical issue** that could emerge in the scope of the activities of the project, also through the support of its **Ethics Advisory Board** (see deliverable “D1.5 – Ethics compliance report”). 
For this reason, the consortium identified relevant ethical concerns already during the preparation of the project proposal and, then, during the preparation of the Grant Agreement. During this phase, the European Commission also carried out an ethics scrutiny of the proposal, with the objective of verifying the respect of ethical principles and legislation. With regard to SIMPATICO, the research entails specific ethical implications, involving human subjects and risks for the protection of personal data [2] [3]. In particular, the **SIMPATICO ethical issues (requirements)**, as reported in the European Commission ethics scrutiny report and acknowledged by the SIMPATICO project, are the following:

#### _Protection of personal data – "D8.2 – POPD – Requirement no. 2"_

1. _Copies of ethical approvals for the collection of personal data by the competent University Data Protection Officer/National Data Protection authority must be submitted by the coordinator to REA before commencement of data gathering._
2. _Clarification and if relevant justification must be given in case of collection and/or processing of personal sensitive data. Requirement needs to be met before commencement of relevant work._
3. _The applicant must explicitly confirm that the existing data are publicly available._
4. _In case of data not publicly available, relevant authorisations must be provided, requirements to be met before grant agreement signature._

SIMPATICO involves **collecting and processing personal data** (i.e., any information which relates to an identified or identifiable natural person, such as name, address, email) and **sensitive data** (e.g., health, sexual life, ethnicity). The **Citizen Data Vault** represents the component that will take care of personal and sensitive data exchange between a user and SIMPATICO components. Personal and sensitive data will be made **publicly available** (e.g., for the data of **Citizenpedia**) only after an **informed consent** has been collected and suitable **aggregation and/or pseudonymization techniques** have been applied (a minimal illustration of pseudonymization is sketched at the end of this subsection). Mechanisms for encryption, authentication, and authorization (e.g., the TLS protocol, Single-Sign-On implementations, a Policy Enforcement Point for XACML) will be exploited in the processes, so as to ensure the satisfaction of core **security and data protection requirements**, namely confidentiality, integrity, and availability. For further details, please see the sections below and deliverables **"D1.5 – Ethics compliance report"** and **"D8.2 – POPD – Requirement no. 2"**.

The Consortium will comply with the requirements of: a) the **Directive 95/46/EC** of the European Parliament and of the Council of 24 October 1995 (and subsequent modifications and supplements) on the protection of individuals with regard to the processing of personal data and on the free movement of such data; b) the **Regulation (EU) 2016/679** (General Data Protection Regulation – GDPR) of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC; c) the **national legislation** of the SIMPATICO pilots (i.e., Italy, Spain, and the United Kingdom) in the field [4] (see also "D1.5 – Ethics compliance report").
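As anticipated above, the following is a minimal sketch of one common pseudonymization technique (keyed hashing). It illustrates the general idea only and is not the project's actual implementation, whose details are described in the deliverables cited above.

```python
# Illustrative pseudonymization via keyed hashing (HMAC-SHA256).
# The secret key is a placeholder and would be held by the data controller only.
import hashlib
import hmac

SECRET_KEY = b"<kept-by-the-data-controller>"  # placeholder; never published

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym that cannot be
    reversed without the secret key, enabling linkage without disclosure."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: a record released for analysis keeps an aggregated age band and
# a pseudonym instead of the user's real identity.
record = {"user": pseudonymize("mario.rossi@example.org"), "age_band": "55-64"}
```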
In the context of the SIMPATICO project, Fondazione Bruno Kessler is the data controller (i.e., the entity that is in control of the processing of personal data and is empowered to take the essential decisions on the purposes and mechanisms of such processing). Data processors (i.e., any partner, other than an employee of the data controller, who processes the data on behalf of the data controller) are all the other members of the SIMPATICO Consortium.

According to the new EU Regulation 2016/679, data subjects have a **right to access and port data, to rectify, erase and restrict their personal data, to object to processing** and, if processing is based on consent, **to withdraw consent**. In particular, SIMPATICO will comply with the GDPR as follows:

1. **Subject access, rectification and portability:**

* FBK, as data controller, will, on request: confirm whether it processes an individual's personal data; provide a copy of the data (in commonly used electronic form); and provide supporting explanatory materials. Data subjects can also demand that their personal data be ported to them or to a new provider in machine-readable format. The request will be met within one month, and any intention not to comply must be explained to the individual. Access rights are intended to allow individuals to check the lawfulness of processing, and the right to a copy should not adversely affect the rights of others.
* For what concerns the personal data submitted by the user in the interaction with the e-services, the Citizen Data Vault (CDV) will provide users with explanations on how and which personal information will be collected during their interaction with e-services. At the first usage of the CDV, users will be informed that at any moment, during their interaction with e-services, they can, by clicking appropriate buttons, withdraw the collection of data and export a copy of the collected data in an open format. Currently, users will be asked to choose from two types of open format: CSV and JSON.

2. **Right to erasure ("right to be forgotten") and right to restriction of processing:**

* Individuals can require data to be "erased" when there is a problem with the underlying legality of the processing or where they withdraw consent; the individual can require the controller to "restrict" processing of the data whilst complaints (for example, about accuracy) are resolved, or if the processing is unlawful but the individual objects to erasure. FBK, as the SIMPATICO data controller, having made data available to other subjects, is required, when that data becomes subject to a right-to-erasure request, to notify the others who are processing that data with details of the request.
* The Citizen Data Vault module has been designed to enable individuals to require data to be erased. Furthermore, according to the "consent-based" approach, the user can at any moment withdraw consent or restrict the type of data stored by the CDV during the interaction with e-service forms.

3. **Rights to object:**

* There are rights for individuals to object to specific types of processing, such as processing for research or statistical purposes;
* SIMPATICO will meet the obligations to notify individuals of these rights at an early stage through the informed consent form and its information sheet;
* Online services provided by the PAs involved in the project, and extended by the advanced techniques developed by SIMPATICO, will offer their own methods of objecting.
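By way of illustration, an export facility of the kind just described could look like the following sketch; the function and field names are hypothetical, not the actual CDV interface.

```python
# Hypothetical sketch of the CSV/JSON data-portability export described above.
import csv
import io
import json

def export_personal_data(records, fmt="json"):
    """Return the data subject's stored records in an open, machine-readable format."""
    if fmt == "json":
        return json.dumps(records, ensure_ascii=False, indent=2)
    if fmt == "csv":
        # Use the union of keys so every stored attribute becomes a column.
        fieldnames = sorted({key for record in records for key in record})
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
        return buffer.getvalue()
    raise ValueError("unsupported format: choose 'csv' or 'json'")

# Example: the two open formats a user can currently choose from in the CDV
data = [{"field": "address", "value": "<redacted>", "collected_via": "e-service form"}]
print(export_personal_data(data, "csv"))
```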
More information can be found in **D1.5 – Ethics compliance report.**

#### _Humans – "D8.1 – H – Requirement no. 1"_

1. _Details on the procedures and criteria that will be used to identify/recruit research participants must be provided._
2. _Detailed information must be provided on the informed consent procedures that will be implemented._

SIMPATICO involves **work with humans** ('research or study participants'): according to the EC, this covers the collection of personal data, interviews, observations, tracking or the secondary use of information provided for other purposes. End-users (i.e., citizens and businesses) are **engaged in the project use-cases** to test the functionalities provided by the SIMPATICO solution for the usage of e-services. Specific **engagement campaigns** are defined and executed for each use-case. The use-cases involve **only voluntary participants aged 18 or older and capable of giving consent**, who will be informed on the nature of their involvement and on the data collection/retention procedures through an **informed consent form** before the commencement of their participation. **Terms and conditions** will be transparently communicated to the end-users by means of an **information sheet** including descriptions of, e.g., the purpose of the research, the adopted procedures, and the data protection and privacy policies. SIMPATICO pilots may involve certain **vulnerable groups**, **e.g., elderly people, persons with physical disabilities, and immigrants.** For further details, please see deliverables **"D1.5 – Ethics compliance report"** and **"D8.1 – H – Requirement no. 1"**.

#### _Vulnerable groups_

In addition to the above-mentioned ethical requirements, in the context of this deliverable it is also important to specify that SIMPATICO pilots may involve certain **vulnerable groups**: **e.g., elderly people, persons with physical disabilities, and immigrants**. Please note that all the research participants will have the **capacity to provide informed consent**: individuals who lack the capacity to decide whether or not to participate in research will be appropriately excluded from the research. In any case, taking into account the scope and objectives of the research, researchers should be **inclusive in selecting participants**. Researchers shall not exclude individuals from the opportunity to participate in research on the basis of attributes such as culture, language, religion, race, sexual orientation, ethnicity, linguistic proficiency, gender or age, unless there is a valid reason for the exclusion.

The label of vulnerability could be misapplied, leading to stigmatisation, discrimination, harassment or intimidation. Concern for **the rights and wellbeing of research participants** lies at the root of ethical review. The perception of subjects as vulnerable is likely to be influenced by diverse cultural preconceptions and so regulated differentially by localised legislation. It is likely to be one of the areas where researchers **need extra vigilance to ensure compliance with laws and customs**. Some vulnerabilities may not even be obvious until research is actually being conducted. To reduce the risk of enhancing the vulnerability/stigmatisation of the above-mentioned individuals, the SIMPATICO **Ethics Advisory Board** (see below) provides a **specific assessment on vulnerable groups** that may be involved, prior to the commencement of the pilots' activities. For further details, please see deliverables **"D1.5 – Ethics compliance report"** and **"D8.1 – H – Requirement no. 1"**.
#### _SIMPATICO Ethics Advisory Board_

All the above-mentioned deliverables will be assessed and validated during the first meeting of the **SIMPATICO Ethics Advisory Board (EAB)** (see "D1.5 – Ethics compliance report"). It is **competent to provide the necessary authorizations** when the collection and processing of personal (or sensitive) data is part of the planned research, with the validation of national and/or local Data Protection Authorities if needed. This board is led by an **ethics adviser** external to the project and to the host institution, totally independent and free from any conflict of interest. In addition to the external ethics adviser, the EAB is composed of **one expert representative from each member of the SIMPATICO Consortium** [5]. Members of the Ethics Board are listed in "D1.5 – Ethics compliance report" with the name and contact information of the persons appointed, the terms of reference for their involvement, and their declarations of no conflict of interest. The **reference national and/or local Data Protection Authorities** competent to provide the above-mentioned SIMPATICO EAB with the necessary **instructions/authorizations/notifications** for each pilot are the following [6] [7] [8]:

**Trento pilot (Italy): the Italian Data Protection Authority (DPA - _http://www.garanteprivacy.it/_).** According to the "Italian Data Protection Code" (Legislative Decree no. 196/2003), an authorisation by the Italian DPA is required to enable private (and public) bodies to process specific typologies of personal and sensitive data (see Section 26 of the Italian Data Protection Code). More precisely, the DPA needs to be notified (also through an electronic form) whenever a public or private body undertakes a personal data collection, or personal data processing activity, as data controller. A data controller is required under the law to notify only the processing operations that concern, e.g., data suitable for disclosing health and sex life, or data processed with the help of electronic means aimed at profiling the data subject and/or his/her personality, analysing consumption patterns and/or choices. In such a context, the DPA is also responsible for evaluating and expressing opinions on specific matters concerning data protection (see "Simplification of Notification Requirements and Forms. Decision of the DPA dated 22 October 2008, as published in Italy's Official Journal no. 287 of 9 December 2008"). In the case of the Trento pilot, we consider this public authority appropriate for providing the SIMPATICO EAB with the necessary instructions/authorizations/notifications.

**Sheffield pilot (United Kingdom): the University Research Ethics Committee (UREC) of the University of Sheffield (_https://www.sheffield.ac.uk/ris/other/committees/ethicscommittee_).** The University Research Ethics Committee (UREC) of the University of Sheffield is an independent, unbiased and interdisciplinary university-wide body that scrutinizes any potential issues related to research ethics for staff and students of the University of Sheffield, including collaborative research deriving from external funding. The key tasks this committee is in charge of are: advising on any ethical matters in research that are referred to it from within the University; and keeping abreast of the external research ethics environment, ensuring that the University responds to all external requirements.
In the case of the Sheffield pilot, we consider this committee appropriate for providing the SIMPATICO EAB with the necessary guidance. We remark that all entities involved in the Sheffield pilot – Sheffield Council, Sheffield University and Sparta Technologies Ltd – comply with the UK data protection regulations, and each will ensure that these are enforced when it comes to their participation in the pilot. Only if necessary, the EAB will engage the UK Information Commissioner's Office (ICO - _https://ico.org.uk/_).

**Galicia pilot (Spain): the Research Ethics Committee of the University of Deusto (_http://research.deusto.es/cs/Satellite/deustoresearch/en/home/research-ethics-comittee_).** This committee is an independent, unbiased and interdisciplinary body that is both consultative and advisory in nature, and reports to the Vice-Rector's Office for Research. This committee will assess SIMPATICO compliance with the Spanish legal framework on privacy and data protection. This includes the **Spanish Data Protection Act** 15/1999 (**Law 15/1999 of 13 December 1999** on Protection of Personal Data, last updated on 5 March 2011) and the **Royal Decree 1720/2007** of 21 December 2007, approving the regulations implementing Law 15/1999 ("Data Protection Regulations"; last updated: 8 March 2012). Among other responsibilities, this committee is in charge of:

* Conducting the ethical assessment of research projects and drawing up the ethical suitability reports requested by institutions and researchers.
* Ensuring compliance with best research and experimentation practices with regard to individuals' fundamental rights and the concerns related to environmental defense and protection.
* Supervising assessment processes or ethical requirements in research carried out by institutions and public bodies.
* Preparing reports for the University's governing bodies on the ethical problems that may arise from R+D+I activities.
* Ensuring compliance with the Policy on Scientific Integrity and Best Research Practices of the University of Deusto.
* Providing guidance on laws, regulations and reports on research ethics.
* Reviewing procedures that have already been assessed, or proposing the suspension of any experimentation already started if there are objective reasons to do so.

In the case of the Galicia pilot, we consider this committee appropriate for providing the SIMPATICO EAB with the necessary instructions/authorizations/notifications. Only if necessary, the EAB will engage the Spanish Data Protection Authority, i.e., the Agencia Española de Protección de Datos (AEPD - _http://www.agpd.es/_).

## Open access and data management

The Consortium adheres to **the pilot for open access to research data (ORD pilot)**, adopting an open access policy for all project results, guidelines and reports, and providing on-line access to scientific information that is free of charge to the reader [9]. Open access typically refers to two main categories: **scientific publications** (e.g., peer-reviewed scientific research articles, primarily published in academic journals) (Subsection 1.3.1) and **research data** (Subsection 1.3.2).

### Open access to scientific publications

According to the European Commission, "under Horizon 2020, each beneficiary must ensure open access to all peer-reviewed scientific publications relating to its results" (see also Article 29.2 of the GA).
The SIMPATICO Consortium adheres to the EU open access to publications policy, choosing as the most appropriate route towards open access **self-archiving** (hereinafter also referred to as **'green' open access**), namely: "a published article or the final peer-reviewed manuscript is archived (deposited) in an online repository before, alongside or after its publication. Repository software usually allows authors to delay access to the article ('embargo period')". The Consortium will ensure open access to the publication within a maximum of six months. The dissemination of SIMPATICO results will occur by means of the activities identified in the implementation plan, such as international publications and participation in international events (exhibitions, conferences, seminars, courses, etc.). In compliance with the Consortium Agreement, **free online access will be privileged for scientific publications**, following the above-mentioned rules of 'green' open access. All relevant information and the platform textual material (papers, deliverables, etc.) will **also be freely available on the project website**. In order to guarantee that people who are visually impaired also have access to all textual materials, we will provide **accessible PDF files** as well. In specific cases, and in accordance with the rules on open access, the dissemination of research results will be managed by **adopting precautionary IPR protection tools**, so that premature disclosure does not obstruct the possibility of protecting the achieved foreground.

### Open access to research data (Open Research Data Pilot)

According to the European Commission, "research data is information (particularly facts or numbers) collected to be examined and considered, and to serve as a basis for reasoning, discussion, or calculation". Open access to research data is **the right to access and reuse digital research data** under the terms and conditions set out in the Grant Agreement. Regarding the digital research data generated in the action, according to Article 29.3 of the GA, the SIMPATICO Consortium will:

_**Deposit in a research data repository** and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_

1. _the data, including associated metadata, needed to validate the results presented in scientific publications;_
2. _other data, including associated metadata, as specified and within the deadlines laid down in this data management plan;_

_and provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results._

Please note that a portion of the relevant data for SIMPATICO comes from **existing data sets of the PAs** (e.g., service usage data, citizens' data), while **new data sources** will be defined by this deliverable and identified as a result of the requirements analysis in SIMPATICO. **Whenever possible**, these additional data sources will also be made available **as open data or through open services**. However, some of the collected data, in particular that concerning **user profiles and personal data**, is highly sensitive and will not be made available. More precisely, in order to discuss the **public availability of data**, there are three different types of datasets within the SIMPATICO project:

1. not publicly available personal and sensitive data;
2. data treated according to the open access policy of all project results;
3.
data connected to Citizenpedia.

These datasets will be discussed in detail in Section 2 below.

## Data management policies

### Data set reference and names

In order to be able to distinguish and easily identify data sets, each data set is assigned a unique name. This name can also be used as the identifier of the data set. Each data set name consists of _four_ different parts separated by a "_" character: _ProjectName_CountryCode_DatasetName_Version_, where

1. The _ProjectName_ is _SIMPATICO_, in order to clearly identify the origin of all datasets.
2. The _CountryCode_ part represents the country associated with the dataset, using ISO Alpha-2 country codes:
   1. _IT_ for Italy
   2. _ES_ for Spain
   3. _UK_ for the United Kingdom
3. The _DatasetName_ represents the full name of the dataset.
4. The _Version_ of the dataset represents the phase of the project in which the dataset was released:
   1. _DB_ for the live database during the project lifetime
   2. _InterimExport_ for the export of the database at M22
   3. _FinalExport_ for the export of the database at the end of the project

An example of a data set name is the following: _SIMPATICO_IT_Citizenpedia_InterimExport_.

### Standards and metadata

This section describes the standards that will be used to describe the data, as well as the metadata of the data sets. Metadata are "data that provide information about other data": they describe the contents of data files and the context in which they have been established. Several metadata standards exist (see _https://en.wikipedia.org/wiki/Metadata_standards_). Proper metadata facilitates use of the data by others, makes it easier to combine information from different sources, and ensures transparency. All SIMPATICO datasets will use a standard format for metadata. Each dataset description will specify the data and metadata standards used.

### Archiving and preservation

The SIMPATICO partners agreed on the procedures that will be used in order to ensure long-term preservation of the data sets. In particular, datasets will be stored on Zenodo (_https://zenodo.org/_), a catch-all repository for EC-funded research developed by CERN and launched in May 2013. To be an effective catch-all that eliminates barriers to adopting data sharing practices, Zenodo does not impose any requirements on format, size (it currently accepts up to 50GB per dataset), access restrictions or license. In addition, datasets stored on Zenodo are automatically part of OpenAIRE (_https://www.openaire.eu/_), the EC-funded initiative which aims to support the Open Access policy of the European Commission via a technical infrastructure, thus integrating them into the existing reporting lines to funding agencies like the European Commission. Archiving on Zenodo is free of charge, thus eliminating archiving costs.
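As an illustration of the naming convention and the archiving workflow described above, the following is a minimal sketch assuming Zenodo's documented REST deposit API; the access token and the uploaded file are placeholders, and the descriptive metadata required for publication is omitted for brevity.

```python
import os
import requests

VERSIONS = {"DB", "InterimExport", "FinalExport"}

def dataset_id(country_code: str, dataset_name: str, version: str) -> str:
    """Build a dataset identifier following the SIMPATICO naming convention."""
    assert country_code in {"IT", "ES", "UK"} and version in VERSIONS
    return "_".join(["SIMPATICO", country_code, dataset_name, version])

def deposit_on_zenodo(file_path: str, token: str) -> None:
    """Create an empty Zenodo deposition and upload one file to its bucket."""
    r = requests.post("https://zenodo.org/api/deposit/depositions",
                      params={"access_token": token}, json={})
    r.raise_for_status()
    bucket = r.json()["links"]["bucket"]
    with open(file_path, "rb") as fp:
        requests.put(f"{bucket}/{os.path.basename(file_path)}",
                     data=fp, params={"access_token": token}).raise_for_status()

print(dataset_id("IT", "Citizenpedia", "InterimExport"))
# -> SIMPATICO_IT_Citizenpedia_InterimExport
```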
### Data quality assurance

SIMPATICO is committed to delivering quality data, and adopts data quality assurance procedures to achieve this goal. Quality control of each dataset is the responsibility of the relevant WP leader, supported by the Project Manager and the Project Coordinator. Depending on the case, "quality" might have different meanings, according to the utility and the re-use scenarios of the dataset: for instance, editing a question submitted by a citizen through the SIMPATICO platform improves data quality if the goal is to provide an FAQ or a knowledge base on public services; it is detrimental if the intended usage is the analysis of the interaction skills and language of the platform users. Data quality assurance might hence imply editing and moderation, cleaning, pre-processing, adding metadata, transforming to a more convenient format, or providing easier access. Information about the Consortium's efforts to address data quality issues is hence provided for each type of dataset.

# SIMPATICO datasets

This **Data Management Plan (DMP)** has been prepared by taking into account the current template of the "Guidelines on Data Management in Horizon 2020" [1]. The elaboration of the DMP allows the SIMPATICO partners to address all issues related to data. A first version of the DMP was released at the beginning of the project (M6). **This version is an update with the status of the DMP at project month M16**, after the detailed definition of the project use-cases and the revision of the ethics-related aspects of the project by the Ethics Advisory Board, and taking into account the feedback collected during the first year project review. A revised deliverable is expected at the end of the project ("D1.4 – Data management plan v.2", at M36). However, the DMP will be a **living document** throughout the project, and this version will evolve during the SIMPATICO lifespan according to the progress of the project activities. In order to discuss the **public availability of data**, as outlined above, it is convenient to distinguish three different types of datasets within the SIMPATICO project:

1. **Not publicly available personal and sensitive data will be collected and processed as part of the execution of the SIMPATICO use-cases**, more specifically for the execution of the e-services. Specifically, the use-cases will involve only voluntary participants aged 18 or older and capable of giving consent, who will be informed on the nature of their involvement and on the data collection/retention procedures through an informed consent form before the commencement of their participation. Informed consent will follow procedures and mechanisms compliant with European and national regulations in the field of ethics, data protection and privacy (see also deliverables "D1.5 – Ethics compliance report", "D8.1 – H – Requirement no. 1", and "D8.2 – POPD – Requirement no. 2").
2. **SIMPATICO adheres to the open access policy of all project results.** Specifically, we are committed to making available, whenever possible, the data collected during the execution of SIMPATICO, in particular data collected during the use-cases, also to researchers and other relevant stakeholders outside the project Consortium. Whenever possible, these additional data sources will also be made available as open data or through open services. In this context, any personal data will only be published after suitable aggregation and/or pseudonymization techniques have been applied, and after an informed consent that explicitly authorizes this usage has been collected.
3.
**SIMPATICO intends to build an open knowledge base on public services and processes through Citizenpedia**, released as a new public domain resource co-created and co-operated by the community (i.e., citizens, professionals and civil servants). The initial content of Citizenpedia will be based on datasets and other digital goods that are publicly available. In the case of datasets and other digital goods owned by the PAs and not already publicly available, the Consortium will seek to obtain an authorization for public release, as open content, before their inclusion in the Citizenpedia. For what concerns the data contributed to Citizenpedia by the community, SIMPATICO will require that they are made available as open content (e.g., with licenses such as Creative Commons).

This Data Management Plan and its updated versions describe the **datasets' characteristics** and **define principles and rules for the distribution of data** within SIMPATICO. In particular, in this version of the DMP we present in detail the procedures for creating **'primary data'** (i.e., data not available from any other sources) and for their management. As such, only the datasets corresponding to "Citizenpedia" (Section 2.1), "Logging/Feedback" (Section 2.2), and "Citizen Data Vault (CDV) Dataset" (Section 2.3) are described in detail in the following sections, as any other datasets already exist and their creation is not foreseen in the GA.

## Citizenpedia Datasets

### Description

Citizenpedia is the **human computation framework** inside the SIMPATICO platform. Its aim is to be a place where citizens can find useful information regarding e-services and public administration. Thus, **most of the content will be created and consumed by humans**. It will be mainly stored in JSON format. Citizenpedia is composed of **two main interactive parts** for the users, a Question Answering Engine and a Collaborative Procedure Designer. Thus, the typology of data is twofold:

1. **Question Answering Engine:** questions, answers, comments and terms/definitions, generated in the Question Answering Engine. All of them will be created, stored and retrieved in JSON format.
2. **Collaborative Procedure Designer:** diagrams representing procedures, and comments on these diagrams. The diagrams will be stored and encoded in a computer-processable manner, and not as a bitmap. Comments will be stored in JSON format.

Both types of data will be stored in the **same database** within Citizenpedia; for this reason, a unique dataset will be generated for both data typologies. Citizenpedia, along with the SIMPATICO platform, is intended to be deployed in **three different cities/regions of different countries** (i.e., Italy, Spain, and the United Kingdom). Each country speaks its own language (i.e., Italian, Spanish and English), and the human-generated data in each Citizenpedia will be in a **different language**. For that reason, we are using a different dataset in each pilot.

### Standards and Metadata

The DEUSTO team, after an initial investigation, did not find a standard format for the storage/management of the generated data/metadata. We have therefore followed the **structure used in already deployed human-computation platforms**, such as the ones reviewed in the deliverable "D4.1 Citizenpedia framework specification and architecture". The resulting representation of the gathered information is centered on the User and e-service concepts.
A User can create or review Questions, which may have associated Terms and Answers, belong to a Category, be annotated with Tags, or have several associated Comments. Questions are always associated with a given e-service. On the other hand, a User can create or review the procedure associated with an e-service, which is composed of several Diagram Elements and might also have several associated comments. A simplified diagram of the Citizenpedia database is provided in Figure 3. Currently, the Citizenpedia collective knowledge base is stored as a MongoDB database. In MongoDB, entities are stored as collections, i.e., a concept equivalent to tables in SQL. Consequently, each of the above-mentioned entities' information is gathered in its corresponding MongoDB collection. Since MongoDB is a NoSQL database, foreign keys are not enforced by the underlying database management system. However, data integrity and cross-references among entities are managed by Citizenpedia's business logic in a programmatic manner. MongoDB offers several utilities that make it easy to extract the stored information in JSON format, namely mongodump and mongoexport.

**Figure 3 Simplified Citizenpedia database model**

We consider **two types of additional metadata** to be generated in Citizenpedia, apart from the core entities' metadata mentioned above:

1. **Usage statistics:** this information will be created on demand, e.g., as an answer to the query "number of registered questions related to the Law XYZ/2015". Currently, Citizenpedia communicates with the LOG module to issue a new log entry every time a user performs a CRUD (Create, Read, Update, Delete) operation over any of the entities modeled by Citizenpedia. The LOG module offers an API from which usage statistics about Citizenpedia contents can be extracted.
2. **Indexing engine metadata:** this data is created by an indexing engine included inside Citizenpedia, i.e., ElasticSearch or Apache Solr. This metadata is consumed by ElasticSearch to provide better searching capabilities over plain-text data. In upcoming releases of Citizenpedia, we will configure and parameterize the selected search engine in order to optimize its searching capabilities.

### Data capture

As regards data capture methods, there are **two ways of creating content** in Citizenpedia:

1. **Using the web interface:** citizens/civil servants will use the platform and write the information using their browsers. Then, the data will be stored in the Citizenpedia database.
2. **Programmatically, via a REST interface:** Citizenpedia exposes a REST API for other SIMPATICO components/third-party applications to query/insert data in the system.

### Data storage

All the information related to Citizenpedia (i.e., both user-generated data and metadata) is stored in the **Citizenpedia internal database**. DEUSTO, as the partner responsible for WP4 within the SIMPATICO Consortium, undertakes to handle security and privacy issues, enforcing access to the internal database only via **secure connections** and using **access control systems**.
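To make the storage model concrete, the following is a hedged sketch of how a Question entity might look as a document in a MongoDB collection; the field names are illustrative and not the authoritative Citizenpedia schema.

```python
# Hypothetical Question document, as it could appear in a "questions"
# collection of the Citizenpedia MongoDB database:
question = {
    "_id": "58f9c2a1...",             # MongoDB ObjectId (truncated)
    "title": "Which documents do I need for a residence change?",
    "body": "I am moving to another municipality and ...",
    "author": "user-4711",            # reference to a User document
    "e_service": "residence-change",  # Questions are always tied to an e-service
    "category": "civil-registry",
    "tags": ["documents", "residence"],
    "answers": ["58f9c3b2...", "58f9c4c3..."],  # cross-references maintained by
                                                # the business logic, not the DBMS
}
```

Such a collection can then be extracted in JSON with the standard MongoDB tooling, e.g. `mongoexport --db citizenpedia --collection questions --out questions.json` (database and collection names are, again, assumptions).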
### Data quality assurance

Data collection is undertaken mainly through form filling in the QAE or CPD. Both tools validate the entered data against its type, semantics and completeness. It is also possible to create data associated with the entities managed by Citizenpedia, namely User, Question, Answer, Comment, Definition, Category, Tag, e-service, Procedure and Diagram Element, through the provided RESTful API; in that case, the same validation as when filling form fields is carried out. For the next release of Citizenpedia, the following two new features are envisaged to further assure the quality of the managed data. Firstly, a _spam analyser module_ will be included, which assesses whether the introduced contents can be regarded as spam, mainly for questions and answers. Secondly, a _moderator role_ will be introduced to ensure that some users can monitor, review and edit the available updates, correcting or even removing them in case they pollute rather than enrich the Citizenpedia knowledge base.

### Utility and re-use

The collected data will be useful for citizens and PA representatives. Citizens can find answers regarding the contents and procedures associated with the e-services they interact with. PA representatives can spot areas that are unclear to e-service consumers, by gathering the comments or new questions associated with e-service concepts or procedure steps. Both the QAE and the CPD can be used to ensure that a common understanding of e-service operation is reached among PA representatives and with the e-service-consuming citizens.

### Data sharing

The method for data sharing is twofold:

1. **Human-generated data:** first, human-generated data (such as questions/answers/comments) will be shared publicly. It can be accessed using the web interface or programmatically through a REST API. Given that this data is created by users, they will be warned, the first time they use Citizenpedia, that any content they create will be publicly available.
2. **Metadata:** second, as regards the metadata generated from the usage of the data (such as statistics), some of the statistics (aggregated data) will be publicly available through the REST API, e.g., the number of questions related to a certain topic. The complete metadata will only be used for research purposes: should a scientific publication be derived from the usage data of Citizenpedia, the information will be completely aggregated and/or pseudonymized.

### Archiving and preservation

The SIMPATICO Consortium and, in particular, DEUSTO (as the partner responsible for this WP) plan **to retain the generated data** for the duration of the project. Statistical data can be retained longer, after the end of the project lifespan, for research purposes. If so, DEUSTO estimates no additional cost for this. If the collected metadata and statistical data are retained after the end of the project, DEUSTO has the infrastructure **to retain the data safely**.

### Datasets

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_Citizenpedia_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of Citizenpedia adopted by the Galician pilot </td> </tr>
<tr> <td> **Data manager** </td> <td> DEUSTO </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON database (MongoDB) </td> </tr>
<tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through Mongo binary API which can only be used if proper credentials are supplied.
</td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Galicia deployment of SIMPATICO platform </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_Citizenpedia_InterimExport </th> </tr>
<tr> <td> **Description** </td> <td> Export of SIMPATICO_ES_CITIZENPEDIA_DB at the end of the first pilot phase (M20). </td> </tr>
<tr> <td> **Data manager** </td> <td> DEUSTO </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON </td> </tr>
<tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_Citizenpedia_FinalExport </th> </tr>
<tr> <td> **Description** </td> <td> Export of SIMPATICO_ES_CITIZENPEDIA_DB at the end of the project (M36). </td> </tr>
<tr> <td> **Data manager** </td> <td> DEUSTO </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON </td> </tr>
<tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_Citizenpedia_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of Citizenpedia adopted by the Sheffield pilot </td> </tr>
<tr> <td> **Data manager** </td> <td> SPARTA </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON database (MongoDB) </td> </tr>
<tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through Mongo binary API which can only be used if proper credentials are supplied. </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Sheffield deployment of SIMPATICO platform </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_Citizenpedia_InterimExport </th> </tr>
<tr> <td> **Description** </td> <td> Export of SIMPATICO_UK_CITIZENPEDIA_DB at the end of the first pilot phase (M20).
</td> </tr> <tr> <td> **Data manager** </td> <td> SPARTA </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific </td> </tr> <tr> <td> **Metadata standard** </td> <td> JSON </td> </tr> <tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr> <tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_Citizenpedia_FinalExport </th> </tr> <tr> <td> **Description** </td> <td> Export of SIMPATICO_UK_CITIZENPEDIA_DB at the end of the project (M36). </td> </tr> <tr> <td> **Data manager** </td> <td> SPARTA </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific </td> </tr> <tr> <td> **Metadata standard** </td> <td> JSON </td> </tr> <tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr> <tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_Citizenpedia_DB </th> </tr> <tr> <td> **Description** </td> <td> Live database of Citizenpedia adopted by the Trento pilot </td> </tr> <tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific </td> </tr> <tr> <td> **Metadata standard** </td> <td> JSON database (MongoDB) </td> </tr> <tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through Mongo binary API which can only be used if proper credentials are supplied. </td> </tr> <tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> Trento deployment of SIMPATICO platform </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_Citizenpedia_InterimExport </th> </tr> <tr> <td> **Description** </td> <td> Export of SIMPATICO_IT_CITIZENPEDIA_DB at the end of the first pilot phase (M20). </td> </tr> <tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific </td> </tr> <tr> <td> **Metadata standard** </td> <td> JSON </td> </tr> <tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr> <tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_Citizenpedia_FinalExport </th> </tr> <tr> <td> **Description** </td> <td> Export of SIMPATICO_IT_CITIZENPEDIA_DB at the end of the project (M36). 
</td> </tr>
<tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON </td> </tr>
<tr> <td> **Volume** </td> <td> 500 Kb ~ 1 Mb </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years after project end </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

## Logging/Feedback Datasets

### Description

The SIMPATICO project provides a series of interactive front-end components forming the user interaction and feedback analysis layer of the SIMPATICO Platform (see Figure 2). The components are the following:

* _Session Feedback (SF)_: presents users with a feedback-gathering form after each session.
* _Data Analysis (DA)_: gets data from the LOG, coming from a variety of interaction-related sources, and provides extra analysis layers on top.
* _Interaction Log (LOG)_: collects all the interaction information in the system.

These modules generate valuable information from the interaction of the users. This happens through two different mechanisms:

* **Explicit information gathering**, e.g., asking users directly to assess their interaction after it has happened. This is widely done in industry and can be performed by a number of different mechanisms.
* **Implicit information collection**, e.g., analysing metrics of interest in the interaction without requiring the users to provide any extra information. As an example, upon the execution of an e-service request, information about the time spent in each step may be collected and then analysed to find insights such as bottlenecks.

Both of these data-generation mechanisms are reflected in the platform's architecture through the cited modules. These include the LOG as the storage module for explicit and implicit data, plus a data analysis step in the DA to generate new insights (e.g., statistics) from the gathered data elements.

### Standards and Metadata

The SIMPATICO team has not found relevant dedicated standards for the representation of these metadata. There is a model of the interaction (see deliverable D3.2) which is the basis of the interaction handling in the project's results. This and the other interaction elements, such as questionnaires, draw inspiration from common data models of usability evaluation, such as the System Usability Scale (SUS) [10], and from standards such as ISO 9241 for desktop application ergonomics. The metadata are generated from the data described above using the following analysis steps:

**_Explicit information analysis:_** statistics about general feelings or ratings for particular areas or topics identified in the first stage of analysis. This is chiefly generated by the Session Feedback component.

**_Implicit information analysis:_** statistical analysis of the captured data: average time spent by users, segmentation by age groups or target groups, etc. This is captured in the front-end of the system and further processed by the Data Analysis component.
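As a minimal sketch of how such explicit feedback could be scored, the following applies the standard SUS scoring rule to the ten Likert answers of one questionnaire; the integration with the Session Feedback component is assumed, not shown.

```python
def sus_score(responses):
    """Standard System Usability Scale (SUS) scoring for one questionnaire.

    `responses` are the ten Likert answers (1-5) in question order.
    Odd-numbered items contribute (score - 1), even-numbered items
    contribute (5 - score); the total is scaled to a 0-100 range.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```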
### Data capture

The data is collected in different ways according to the mechanism of information gathering (i.e., implicit/explicit), as explained above:

**_Explicit information gathering (gathered in the Session Feedback component)_**

* Questionnaires to the users with predefined ('canned') responses such as emoticons or "Likert scale" values.
* Open-ended questions and free-form responses. These can then be further analysed using NLP tools or human experts, to search for elements such as:
  * Sentiment analysis to capture the general sentiment generated by the system.
  * Topic clustering to detect potential pain points or concerns of the users.

**_Implicit information gathering (gathered by the front-end and processed in the Data Analysis component)_**

* Captured metrics such as click areas and the time spent in the different steps of the process. For this purpose, a mixture of handcrafted trackers and the open web analytics tool Piwik is used.

### Data storage

The storage of these components (LOG, SF, DA and EE) is centralized in the interaction LOG component, which stores the metadata of the interaction. This component is built on top of an instance of ElasticSearch. Internally, ElasticSearch uses a document-oriented storage solution with an associated Lucene indexing and search engine.

### Data quality assurance

The results in the feedback analysis storage are stored as JSON documents (JavaScript objects in plain text). Internally, the data is represented in ways which are specific to the application (e.g., feedback logs from Session Feedback contain data that can be mapped to the specific questions asked by the component). The stored data can be related to individual users (linked to the ones in the CDV by alphanumeric user IDs, which means the data is effectively anonymized if the CDV profile is erased) or to results aggregated over a group of users (e.g., the average time taken by all users to complete a step of the e-service). Access to the LOG is secured by means of the AAC component, so that arbitrary HTTP connections cannot be opened against the component.

### Utility and re-use

By their nature, the feedback results compiled in this part of SIMPATICO are tightly connected to the particular usages and roles of the components. Thus, their reuse can be difficult for other applications. One result which can be of wider use is the global collection of simplification requests, the simplification results and the feedback of the users on the quality of the simplification, as captured by the Session Feedback component. This will thus be selected for further detailing and explanation in the release of the data.

### Data sharing

The data and metadata generated by the module are expected to be useful beyond SIMPATICO mainly to researchers, due to the particularities of the application. **The sharing of corpora of the data** is envisaged at the end of the project as open data. Specific scientific data will also be used for **publications** of the Consortium members.

### Archiving and preservation

**All storing and preservation procedures are carried out internally** to the project (e.g., in servers physically located at the partners' premises and under their full control). The captured interaction data will be shared as open data. As discussed at the project level, it is expected that the data sets will be shared using the Zenodo platform. This will provide the means for long-term storage and sharing.
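Before the dataset tables, the following is a hedged sketch of how an implicit metric could be extracted from the ElasticSearch-backed LOG. The index name, field names and endpoint are hypothetical, and in practice the request would carry an AAC-issued token.

```python
import requests

# Hypothetical index/field names; the real LOG schema is project specific.
query = {
    "size": 0,
    "query": {"term": {"e_service": "residence-change"}},
    "aggs": {
        "by_step": {
            "terms": {"field": "step_id"},
            "aggs": {"avg_time": {"avg": {"field": "time_spent_ms"}}},
        }
    },
}

# Average time spent per e-service step, a typical "bottleneck" indicator.
r = requests.post("https://log.simpatico.example/interactions/_search",
                  json=query, headers={"Authorization": "Bearer <token>"})
for bucket in r.json()["aggregations"]["by_step"]["buckets"]:
    print(bucket["key"], bucket["avg_time"]["value"])
```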
### Datasets

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_LOGDataset_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of logs of the usage of the SIMPATICO platform adopted by the Galicia pilot </td> </tr>
<tr> <td> **Data manager** </td> <td> HIB </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 20 persons, 1 session 🡪 200 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through the Swagger API </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Galicia deployment of SIMPATICO platform </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_LOGDataset_InterimExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the first validation phase at Galicia (M20) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> HIB </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 150 persons, 1 session 🡪 1500 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO. </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_LOGDataset_FinalExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the second project validation and the project end (M36) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> HIB </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 150 persons, 1 session 🡪 1500 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO.
</td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_LOGDataset_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of logs of the usage of the SIMPATICO platform adopted by the Trento pilot </td> </tr>
<tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 20 persons, 1 session 🡪 200 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through the Swagger API </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Trento deployment of SIMPATICO platform </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_LOGDataset_InterimExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the first validation phase at Trento (M20) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 300 persons, 1 session 🡪 3000 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO. </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_LOGDataset_FinalExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the final validation phase at Trento and the project end (M36) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> Fondazione Bruno Kessler </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 300 persons, 1 session 🡪 3000 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO.
</td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_LOGDataset_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of logs of the usage of the SIMPATICO platform adopted by the Sheffield pilot </td> </tr>
<tr> <td> **Data manager** </td> <td> SPARTA </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 20 persons, 1 session 🡪 200 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> Contents of this dataset can be accessed through the Swagger API </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Sheffield deployment of SIMPATICO platform </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_LOGDataset_InterimExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the first validation phase at Sheffield (M20) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> SPARTA </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 100 persons, 1 session 🡪 1000 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO. </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_LOGDataset_FinalExport </th> </tr>
<tr> <td> **Description** </td> <td> Released at the end of the second validation phase at Sheffield (M36) and including data from the interaction captured in the LOG. </td> </tr>
<tr> <td> **Data manager** </td> <td> SPARTA </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific data </td> </tr>
<tr> <td> **Metadata standard** </td> <td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td> </tr>
<tr> <td> **Volume** </td> <td> 10 KB /session /user (approximately). Totals: _Data size:_ 100 persons, 1 session 🡪 1000 KB </td> </tr>
<tr> <td> **Sharing level** </td> <td> Open </td> </tr>
<tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr>
<tr> <td> **Preservation duration** </td> <td> 5 years </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Zenodo open data repository </td> </tr>
<tr> <td> **Preservation costs** </td> <td> Zenodo is free for the data sizes envisaged in SIMPATICO. </td> </tr>
</table>

## Citizen Data Vault (CDV) Datasets

### Description

The **Citizen Data Vault (CDV)** is a **repository of citizens' personal data**. It is continuously updated through **each citizen interaction** and is used mainly to automatically fill e-service forms.
In this way, citizens will provide each piece of information to the PA only once, as the information will be stored in the vault and used in all the following interactions and across different PA e-services. As regards the CDV, for personal data we will use the **definition provided by the World Economic Forum (June 2010)**, namely [11]: **"Personal data is defined as data (and metadata) created by and about people"**, encompassing:

* **Volunteered data** – created and explicitly shared by individuals, e.g., social network profiles.
* **Observed data** – captured by recording the actions of individuals, e.g., location data when using cell phones.
* **Inferred data** – data about individuals based on analysis of volunteered or observed information, e.g., credit scores.

Personal data is also very broadly defined in Article 2 of the European Data Protection Directive as: "... any information relating to an identified or identifiable natural person ("data subject")...". This definition is, for the most part, unchanged under the new GDPR. According to these definitions, through the CDV **citizens have a practical means to manage their personal data**, with the ability to grant and withdraw **consent** to third parties for access to data about themselves (see "D1.5 – Ethics compliance report" – Annex I "Informed consent form"). In summary, the data collected by means of the CDV falls within the scope of personal data. In a first stage, we have identified a **first categorization of such personal data**, referring to:

1. Government Records
2. Profile
3. Education
4. Relationship
5. Banking and Finance
6. Health
7. Communication & Media
8. Energy
9. Mobility
10. Activities

For each category, **several data fields** have been defined. Starting from these categories, we have grouped the actual personal data that each citizen could manage by means of the CDV against the **three use cases** identified by the three SIMPATICO pilots (i.e., Trento, Sheffield, and Galicia). The only data that will be stored in the CDV are those related to a subset of the fields of the PA e-service forms identified for the validation phases in the three pilot use cases. We remark that the personal data collected or linked by the CDV will **never be shared at any time**. **Each citizen** has the control and the ability to obtain **a copy of, or to remove, all data from the CDV**.

### Standards and Metadata

The CDV will collect personal data with a reference to a specific element of a **Personal Data Taxonomy**. In order to assure semantic interoperability, several options and tools are going to be considered, in particular **RDF and Linked Data** [12], **XML and JSON**. In order to facilitate and promote interoperability among public services, work is in progress in the context of the CDV to standardise the Personal Data and Service Model, taking into account the e-Government Core Vocabularies created by the ISA2 Programme.

### Data capture

Personal data will be collected in two ways (an illustrative record is sketched after this list):

1. **Data will be inserted by citizens by means of the CDV dashboard.** The user will be able to insert, collect and modify personal data fields by means of interactive web forms provided by the CDV.
2. **Data will be collected during the interactions of the user with the e-service forms provided by the PAs.** The e-services and the related types of data will be the ones identified by the three pilots. During each interaction, users decide whether the data inserted in the e-service forms can be stored in the CDV.
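The following hedged sketch shows what a single collected item could look like once tagged with its taxonomy element and the citizen's consent status; all field names are hypothetical and only illustrate the approach.

```python
# Illustrative CDV entry: a value captured from an e-service form, tagged
# with its Personal Data Taxonomy element and the consent the citizen granted.
cdv_entry = {
    "subject": "user-4711",            # pseudonymous identifier issued by AAC
    "taxonomy": "Profile/Address",     # category and field in the taxonomy
    "value": "Via Sommarive 18, Trento",
    "origin": "volunteered",           # volunteered / observed / inferred
    "source_e_service": "residence-change",
    "consent": {"granted": True, "timestamp": "2017-04-21T10:32:00Z"},
}

def withdraw_consent(entry):
    """Withdrawing consent removes the stored value but keeps an audit trace."""
    entry["consent"]["granted"] = False
    entry.pop("value", None)
    return entry
```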
At any time, users can **view (through the dashboard)** and possibly **remove the collected data**. No versioning of the collected data is provided. Thanks to the approach used to collect the data, the stored information can be **retrieved by using the Personal Data Taxonomy**.

### Data storage

The CDV will provide an **ad-hoc repository to collect personal data**, adopting a **multiple-key-based data encryption**. According to the specific deployment strategy and the e-services that will be adopted in each use case, the CDV could refer to **multiple data stores** (i.e., legacy systems provided by the PAs). Separate instances of the CDV Data Store will be deployed for each use case and hosted by each pilot.

### Data quality assurance

Data collection is based mainly on the e-service filling forms, provided by the PA according to its procedures and regulations. Accordingly, data fields are validated against their type, semantics and completeness.

### Utility and re-use

The collected data will be useful for citizens and PAs. Citizens can collect personal data during the interaction with e-service forms, to be reused in all the following interactions and across different PA e-services. PAs can facilitate and enhance the interactions with citizens in their e-services.

### Data sharing

The personal data collected or linked by the CDV will **never be shared at any time**.

### Archiving and preservation

The SIMPATICO project plans **to retain the collected personal data only for the lifetime of the grant**. In principle, the results are expected to be very use-case specific and **no long-term storage is envisaged** beyond the needs of the SIMPATICO project execution.

### Datasets

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_CDVTrento_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of the CDV adopted by the Trento Municipality </td> </tr>
<tr> <td> **Data manager** </td> <td> Trento Municipality </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific (JSON based) </td> </tr>
<tr> <td> **Metadata standard** </td> <td> RDF/JSON, ISA2 Core Vocabulary (WIP) </td> </tr>
<tr> <td> **Volume** </td> <td> 4 Mb per User </td> </tr>
<tr> <td> **Sharing level** </td> <td> Private/Personal - not sharable </td> </tr>
<tr> <td> **Sharing medium** </td> <td> N/A </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Pilot hosting systems of SIMPATICO platform. </td> </tr>
<tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr>
</table>

<table>
<tr> <th> **Dataset ID** </th> <th> SIMPATICO_UK_CDVSheffield_DB </th> </tr>
<tr> <td> **Description** </td> <td> Live database of the CDV adopted by the Sheffield Council </td> </tr>
<tr> <td> **Data manager** </td> <td> Sheffield Council </td> </tr>
<tr> <td> **Data standard** </td> <td> Project specific (JSON based) </td> </tr>
<tr> <td> **Metadata standard** </td> <td> RDF/JSON, ISA2 Core Vocabulary (WIP) </td> </tr>
<tr> <td> **Volume** </td> <td> 4 Mb per User </td> </tr>
<tr> <td> **Sharing level** </td> <td> Private/Personal - not sharable </td> </tr>
<tr> <td> **Sharing medium** </td> <td> N/A </td> </tr>
<tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr>
<tr> <td> **Preservation medium** </td> <td> Pilot hosting systems of SIMPATICO platform.
</td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table>

<table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_ES_CDVGalicia_DB </th> </tr> <tr> <td> **Description** </td> <td> Live database of the CDV adopted by the Galicia Region </td> </tr> <tr> <td> **Data manager** </td> <td> Galicia Region </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific (JSON based) </td> </tr> <tr> <td> **Metadata standard** </td> <td> RDF/JSON, ISA2 Core Vocabulary (WIP) </td> </tr> <tr> <td> **Volume** </td> <td> 4 MB per user </td> </tr> <tr> <td> **Sharing level** </td> <td> Private/Personal – not sharable </td> </tr> <tr> <td> **Sharing medium** </td> <td> N/A </td> </tr> <tr> <td> **Preservation duration** </td> <td> Project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> Pilot hosting systems of the SIMPATICO platform. </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table>

# SIMPATICO security protection strategy

This final section is dedicated to the **SIMPATICO security protection strategy** and will be developed as the project progresses. It reflects the current status within the Consortium regarding the security of the data that will be collected and produced. In the SIMPATICO project **we neither perform activities nor produce results raising any large-scale security issues**. The project does not have the potential for military applications, and also does not involve the use of elements that may cause any harm to humans, animals, plants or the environment. However, the process of collecting, processing and storing data might hide some pitfalls. To reduce the **risk of potential malevolent, criminal and/or terrorist abuse**, which might be perpetrated also by malicious people authorized to access the information, the SIMPATICO Consortium is examining the deployment of a **twofold security protection strategy**:

1. by ensuring that the employed **security layers and privacy-preserving measures** will work properly, keeping access logs and following best practices for system administration;
2. by employing techniques to prevent information leakage “on-the-fly”, i.e., through the adoption of an **aggregation and pseudonymization approach** for personal and sensitive information at collection, communication, and storage time (e.g. via an encryption scheme, hash functions, and/or tokenization).

Such an approach will neutralise eavesdropping and/or similarly dangerous hacking attempts, since even in the unlikely event of successful retrieval the data will be secured and completely meaningless to the possible attacker.

## Authentication, authorization, and encryption

State-of-the-art mechanisms for **authentication, authorization, and encryption** will be exploited in the implemented processes (concerning data collection, storage, protection, retention and destruction), so as to ensure the satisfaction of the core security and data protection requirements, namely **confidentiality, integrity, and availability**. In the context of SIMPATICO, the crucial legal challenges are primarily the security measures concerning authentication and authorization issues: pursuant to the above-mentioned **Directive 95/46/EC**, the implementation of both computerized authentication and procedures for managing authorization credentials is required.
To assure the security of and the trust in the system, it is fundamental to provide technical solutions aimed at allowing the **circulation of digital identities** and the **access to the e-services**. For identity management and data protection mechanisms, SIMPATICO will follow the standard practice in the security research community. **Identity management** deals with identifying individuals (**authentication**) and controlling access (**authorization**) to resources in a system. All the Privacy Enhancing Technologies associated with identity management aim at identity verification with minimum identity disclosure, and at protection against identity theft. With the spread of internetworked services and of Cloud technology in general, the need for secure identity management has grown steadily. Identity and access management (IAM) is the security and business discipline that “enables the right individuals to access the right resources at the right times and for the right reasons”. It addresses the need to ensure appropriate access to resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance requirements. Technologies, services and terms related to identity management will be exploited, including directory services, Digital Cards, Service Providers, Identity Providers, digital password managers, Single Sign-On, JSON Web Token and JSON Web Key from OpenID Connect's model, OpenID Connect, OAuth and XACML. In particular, SIMPATICO's solutions for IAM are, and will be, influenced by many existing and upcoming standards: OAuth 2.0, User-Managed Access (UMA) and OpenID Connect, as well as the upcoming Minimum Viable Consent Record (MVCR) specification from the Kantara Initiative. More specifically, following the “**Privacy by default and by design**” principles, the SIMPATICO platform will adopt an integrated and multilevel approach to protect user information from fraudulent access and consumption. This will be achieved using a dedicated Authentication and Authorization Control component (AAC). As for identity management, the AAC component will be made compatible with the state-of-the-art identity provisioning technologies (including OpenID Connect, Shibboleth, SAML, OAuth2.0). This will allow for integration of the SIMPATICO platform with the identity provisioning solutions adopted locally by the pilots in a federated ecosystem. In this integration, AAC will be configured in such a way as to obtain the minimal amount of personal information necessary to unambiguously and uniquely identify the user. Note that this data will not be used by the platform components directly to refer to the user. Instead, those components will use a generated identifier provided by AAC. Such an approach simplifies the realization of the “right to be forgotten” policy, as it is sufficient to remove the association between the user's personal data and the user identifier to make the data stored in the different components of the platform and associated with the user anonymous. In order to ensure that data consumption is performed only by authorized applications and users, AAC will exploit the open standard for authorization, namely the OAuth2.0 protocol. This protocol not only ensures the exchange of personal data in a trusted context, but also enables controlled access to the platform services and APIs. In that context, AAC will operate as the OAuth2.0 **Authorization Server** for the platform components that expose various APIs and resources over the network.
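The identifier-indirection scheme described above can be illustrated with a minimal sketch under our own simplifying assumptions: platform components only ever see a generated identifier, and the "right to be forgotten" reduces to deleting a single mapping entry. The class and method names below are illustrative, not the actual AAC API:

```python
import uuid

class IdentifierVault:
    """Maps real user identities to opaque, generated identifiers (hypothetical AAC behaviour)."""

    def __init__(self):
        self._pseudonym_by_user = {}

    def pseudonym_for(self, user_id: str) -> str:
        # Platform components receive and store data only under this generated id.
        if user_id not in self._pseudonym_by_user:
            self._pseudonym_by_user[user_id] = uuid.uuid4().hex
        return self._pseudonym_by_user[user_id]

    def forget(self, user_id: str) -> None:
        # Right to be forgotten: dropping the association leaves all data
        # stored under the pseudonym effectively anonymous.
        self._pseudonym_by_user.pop(user_id, None)

vault = IdentifierVault()
pid = vault.pseudonym_for("mario.rossi@example.org")  # hypothetical identity
vault.forget("mario.rossi@example.org")               # pseudonym is now unlinkable
```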
Furthermore, in order to allow the secure transmission of personal data, the SIMPATICO APIs support the HTTPS communication protocol. The input and output data are transmitted as “plain text” over HTTPS and encrypted by TLS (Transport Layer Security) or by its predecessor SSL (Secure Sockets Layer). HTTPS is based on certificates and ensures the mutual authentication of client and server. For the CDV component of SIMPATICO, which is the main storage of personal data in the project, particular attention is dedicated to reducing **server-side vulnerabilities**, applying all the best **security practices and policies** regarding the configuration of user privileges, remote access and connections. To make the database unreadable by unauthorized users/applications, the CDV architecture includes a module named Data Security Manager (DSM) that, implementing the Transparent Data Encryption (TDE) approach, enables the encryption/decryption of the CDV data in a way that is transparent to users and applications. In order to distribute the knowledge about the encryption keys and increase data security, the CDV keys and encrypted data will be periodically backed up and stored in different places. Following best practice and the architectural solutions adopted by the most important DBMSs, see Oracle 3 and Microsoft SQL Server 4, the CDV TDE implementation is based on the following concepts (a minimal illustrative sketch follows the list):

* **Master Key**: a key adopted to encrypt the Keys Table. It will be stored in a read-only file in the filesystem, with access restricted exclusively to each single user registered in the CDV.
* **User Key**: a key associated with a single CDV user.
* **Keys Table**: a table storing the User Keys. It will be located on a different server from the one hosting the Master Key and the Personal Data Table.
* **Encryption Key**: a key generated from the Master Key and the User Key.
* **AES Cipher Algorithm**: the CDV adopts the Advanced Encryption Standard (AES) at 192 bit, defined in the Federal Information Processing Standard (FIPS) no. 197 5.
* **Personal Data Table**: it will contain the personal data, encrypted/decrypted by applying AES with the Encryption Key.
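Below is a minimal sketch of this multiple-key scheme using the open-source `cryptography` package. The HKDF key-derivation step and the GCM mode of operation are our assumptions for illustration only; the text above fixes just AES-192 (FIPS 197) and the key roles, and this is not the project's actual DSM implementation:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_encryption_key(master_key: bytes, user_key: bytes) -> bytes:
    # Encryption Key derived from Master Key + User Key; 24 bytes = AES-192.
    return HKDF(algorithm=hashes.SHA256(), length=24, salt=None,
                info=b"cdv-encryption-key").derive(master_key + user_key)

master_key = os.urandom(24)  # kept in a read-only file (Master Key)
user_key = os.urandom(24)    # one per-user entry in the Keys Table
key = derive_encryption_key(master_key, user_key)

aes = AESGCM(key)
nonce = os.urandom(12)
row = b'{"fiscal_code": "RSSMRA80A01H501U"}'        # hypothetical Personal Data Table row
ciphertext = aes.encrypt(nonce, row, None)           # stored encrypted at rest
assert aes.decrypt(nonce, ciphertext, None) == row   # transparent decryption for the application
```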
## Focus on data aggregation and pseudonymization techniques

Personal and sensitive data will be made publicly available only after an **informed consent** has been collected and **suitable aggregation and/or pseudonymization techniques** have been applied. Before starting the project activities that require user involvement, a careful investigation of privacy and security issues has been and will be undertaken, covering in particular **Italian, Spanish and UK privacy laws**, according to the procedures stated in deliverable “D1.5 – Ethics compliance report”. In this Data Management Plan, data pseudonymization and aggregation techniques will be identified and applied to personal/sensitive data before their public release. As regards aggregation techniques, data confidentiality, integrity and privacy will be assured **when collecting and processing data**. The information for each person contained in the release cannot be distinguished from that of a given number of other individuals whose information also appears in the release. Moreover, the pseudonymization of data is another method of ensuring confidentiality, according to **the Article 29 Working Party Opinion on Anonymization Techniques** and in relation to the upcoming EU General Data Protection Regulation [13]. Where data are particularly sensitive (e.g. data using detailed personal narratives), the risks to confidentiality increase. In this case, participants will be carefully informed of the nature of the possible risks. This does not preclude the responsibility of the applicant to ensure that maximal pseudonymization procedures are implemented. A detailed description of the measures that will be implemented to prevent improper use, improper data disclosure scenarios and ‘mission creep’ (i.e., unforeseen usage of data by any third party), within the above-mentioned security protection strategy, will be provided before the commencement of validation activities as an update of this deliverable. **The optimal solution will be decided by using a combination of different techniques**, while taking into account the practical recommendations developed in the above-mentioned **Article 29 Working Party Opinion on Anonymization Techniques**. Pseudonymization approaches reduce the linkability of a dataset with the original identity of a data subject, and are accordingly a useful security measure. These techniques have to adhere to certain requirements to comply with data protection and privacy-related legislation in the EU [14]. The following set of requirements (among others) has been extracted from Directive 95/46/EC and the Article 29 Working Party Opinion on Anonymization Techniques and will be the guideline for drafting the security protection strategy [13] [15]:

* **User authentication:** the system has to provide adequate mechanisms for user authentication.
* **Limited access:** the system must ensure that data is only provided to authenticated and authorized persons.
* **Protection against unauthorized and authorized access:** the records of an individual have to be protected against unauthorized access.
* **Notice about use of data:** the users should be informed about any access to their records.
* **Access and copy users’ own data:** the system has to provide mechanisms to access and copy the users’ own data.
* **Fall-back mechanism:** the system should provide mechanisms to back up and restore the security token used for pseudonymization.
* **Unobservability:** pseudonymized data should not be observable and linkable to a specific individual in the system.
* **Secondary use:** the system should provide a mechanism to export pseudonymized data for secondary use and a possibility to notify the owner of the exported data.
* **Modification of the database:** if an attacker breaks into the system, the system must detect modifications and inform the system administrator about this attack.

5 National Institute of Standards and Technology (NIST), Federal Information Processing Standard (FIPS) no. 197, http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf

The above-mentioned potential “unforeseen usage” implications of this project will be examined by the SIMPATICO Ethics Advisory Board (see “D1.5 – Ethics compliance report”).

## Internal threats and human errors

Most organisations focus on data management risk from external threats, but most breaches occur through internal vulnerabilities. These can be thought of as part of the same risk continuum. This section looks at internal vulnerabilities and how to reduce them. There are two main types of internal threats:

* Security may fall victim to **human error**. For example, an employee may copy information from an entire database table into an email for troubleshooting purposes and accidentally include external email addresses in the recipient list.
* **Internal attacks**.
While internal accidents often compromise databases, wilful attackers on the inside commit a large portion of database breaches. Many are disgruntled employees who use their privileged access to cause damage. Most of these attacks come through the numerous outlets for data on the modern PC, including USB and FireWire ports, CD and DVD recorders and even built-in storage media slots. Combined with the fact that storage space on portable devices has rapidly increased, business professionals can now use personal storage devices, such as USB memory sticks, iPods, digital cameras and smart phones, to remove or copy sensitive information, either with malicious intent or for personal gain.

**Internal threat prevention**

The implementation of a strong and flexible security policy is essential for SIMPATICO. A security policy can provide rules and permissions that are understandable both to the employees of the SIMPATICO partner organizations and to those implementing them, so that personal data is prevented from leaving the office. The SIMPATICO policy is based on the security policies in the EU, which, if enforced, are often sufficient to prevent such breaches, and is summarized in the following 5-point methodology:

<table> <tr> <th> 1 </th> <th> Data protection policies </th> <th> Using national or local legal guidelines for data protection and privacy policies (DP) </th> </tr> <tr> <td> 2 </td> <td> Internal data protection policies </td> <td> Written policies and procedures for all staff to sign and agree to </td> </tr> <tr> <td> 3 </td> <td> Clear staff role definition and responsibilities </td> <td> Staff training, awareness and clear roles and staff responsibilities for access to data, with checklists (see attached) </td> </tr> <tr> <td> 4 </td> <td> Access control </td> <td> Managing change in staff and having leaver processes in place </td> </tr> <tr> <td> 5 </td> <td> Sanctions and audits </td> <td> Disciplinary action for breach of DP and process guidelines by staff, and the threat of audits </td> </tr> </table>

The SIMPATICO management team is currently evaluating security policies based on its needs, restrictions and pilot requirements. After this evaluation, an internal security policy will be adopted by M22 and will be attached to the updates of the Data Management Plan released after that date.
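As a concrete illustration of the pseudonymization requirements listed earlier (in particular unobservability and secondary-use export), the following is a minimal sketch of keyed-hash tokenization; the secret key, field names and the choice of HMAC-SHA256 are our assumptions, not a mechanism prescribed by the project:

```python
import hmac
import hashlib

# Pseudonymization token (the "security token" of the fall-back requirement);
# it must be backed up and restorable, and kept away from the exported data.
SECRET_KEY = b"replace-with-a-long-random-secret"

def pseudonymize(identifier: str) -> str:
    # Keyed hashing: without SECRET_KEY the token cannot be linked back to the
    # individual (unobservability), yet it is stable across records, which keeps
    # exported datasets joinable for secondary use.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

export_record = {"user": pseudonymize("mario.rossi@example.org"),  # hypothetical identity
                 "service": "building-permit"}
```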
0290_Blue-Action_727852.md
# Foreword

For this initial draft of the plan, we have collected the information available at work package level to give a higher level of detail in the replies.

# 1\. Data Summary

1A) What is the purpose of the data collection/generation and its relation to the objectives of the project?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Model output from climate scenarios and prediction systems will be collected and analyzed to meet the objectives of the project, including tailored and value-added products as input to the development of climate services (WP5). </td> </tr> <tr> <td> 2 </td> <td> Will work with and generate ocean and atmospheric reanalysis data. Model reanalysis will initially be used as a benchmark on observed oceanic heat and salt transport towards the Arctic from partners' observing systems and mooring arrays. Model, satellite and ocean observational data are collected from existing repositories. The data will be part of the analysis in the WP2 deliverables. For instance, the reanalysis will be used to compute energy transports in relation to Arctic variability; ocean observations will be combined with satellite data to improve knowledge of the Atlantic meridional overturning and heat flux at subpolar latitudes. New Earth Observations will be integrated with ocean observations to obtain robust basin estimates, and these estimates will be compared to state-of-the-art coupled climate models and high-resolution ocean-only models. </td> </tr> <tr> <td> 3 </td> <td> Generation of data includes a set of coordinated climate model experiments. This requires the use and compilation of shared forcing and boundary conditions. The data will be generated and analyzed to meet the objectives of the project, including the identification of lower-latitude drivers of Arctic change. </td> </tr> <tr> <td> 4 </td> <td> This WP will generate sub-seasonal to decadal climate predictions with the aim of estimating decadal predictability, including the role of Greenland ice sheet melting. The development and application of novel initialization techniques and the coordination of experiments across prediction systems require the compilation and sharing of data internally. </td> </tr> <tr> <td> 5 </td> <td> Data feeding into the development of services are primarily developed in WP1-4. </td> </tr> </table>

1B) What **types and formats of data** will the project generate/collect (model, observations, consultation)?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Numerical model simulations will be compiled and conducted. Model data will be in NetCDF file format. </td> </tr> <tr> <td> 2 </td> <td> Generated and collected reanalysis data (NorCPM, MERRA and ERA5), climate model and CMIP data will be in NetCDF file format. Some model data are originally in GRIB, but will be converted with CMOR ( _https://cmor.llnl.gov/_ ) to NetCDF CF conventions and the CMIP6 protocol as decided by the CMIP panel of the World Climate Research Program. Ocean observations, including hydrographic data, moored ocean current observations and data from Argo floats, will to the widest possible extent be compiled in NetCDF, but in some cases native formats are required to facilitate analysis. ASCII or CSV formats may be used for time-series data. </td> </tr> <tr> <td> 3 </td> <td> Modeling data will be generated, using the NetCDF file format. </td> </tr> <tr> <td> 4 </td> <td> Model data from prediction systems will be archived and shared amongst partners using NetCDF.
For specific prediction systems, the WMO GRIB file format is the standard for atmospheric fields. </td> </tr> <tr> <td> 5 </td> <td> When developed, services are expected to deliver information according to specific user requirements. Initial formats applied during the development phase follow WP1-4. </td> </tr> </table>

1C) Will you re-use any existing data and how? Indicate links to previous or ongoing projects.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Data will be ‘re-used’ from: * Model output: CMIP5 (previous) and CMIP6 (ongoing) * Model output: Copernicus Climate Change Service – seasonal forecasts (previous and ongoing) * Several different re-analyses and observational products. The analysis and tailoring of products that will be conducted have, to our best knowledge, not been undertaken in any previous or ongoing projects, but complement previous and ongoing project efforts. </td> </tr> <tr> <td> 2 </td> <td> It is planned to re-use CMIP5 and CMIP6 data in addition to existing reanalysis data, for example from the French L-IPSL Labex project. Relevant simulations for WP2 assessments also come from the PRIMAVERA project. We share the control simulations and extend the number of control simulations to have a better signal/noise ratio. There are also links to the APPLICATE project. We will re-use the baseline seasonal predictions (in collaboration with BSC) from that project, while we will focus on individual events in Blue-Action. WP2 will use the publicly available hydrographic cruise data (e.g. OVIDE and EEL), Argo data, moored ocean transport time series (from e.g. RAPID, OSNAP and the GSR observatory, which are linked to e.g. the EU projects NaClim and AtlantOS and national projects, e.g. RACE II (Germany) and FARMON (Denmark)) and satellite data from CMEMS (former AVISO data). </td> </tr> <tr> <td> 3 </td> <td> For comparison, we will use existing observation/reanalysis data. Available data from CMIP6 model simulations will also be re-used. </td> </tr> <tr> <td> 4 </td> <td> Data from the past CMIP5 and available CMIP6 contributions will be used for analysis. Data from the SPECS project, including specific hindcasts and forecasts, will enter the analysis. </td> </tr> <tr> <td> 5 </td> <td> Re-use of data is not envisioned in the final services except as indicated for WP1-4. </td> </tr> </table>

1D) What is the origin of the data?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The data will be generated by the climate model simulations by each partner of the project. Those data will be generated using the computing facilities of each partner or using shared resources. </td> </tr> <tr> <td> 2 </td> <td> **Generated model data:** Specific facilities of each partner used to generate data include the IPSL/CNRS reanalysis data, which are created on the French TGCC supercomputer; NLeSC data, generated at SURFSara's CARTESIUS supercomputer; and ECMWF's supercomputing capabilities. **Collected data:** Reanalysis data that already exist are freely downloadable from ECMWF (ERA, ORA), NCEP-NCAR (NCEP reanalysis), NASA (MERRA) and JMA. Publicly available ocean observations are collected from websites like Argo, OceanSITES and national repositories. Satellite data are collected from the Copernicus CMEMS webpage. Time series of ocean variables are also used, e.g.
new time series of the Atlantic Overturning Circulation and heat flux across the Greenland-Portugal Line ( _http://www.seanoe.org/data/00353/46445/_ ) </td> </tr> <tr> <td> 3 </td> <td> Coordinated simulations generating climate model data are conducted at the computing facilities of each partner or at shared facilities. </td> </tr> <tr> <td> 4 </td> <td> Coordinated predictions generating climate model data are conducted at the computing facilities of each partner or at shared facilities, for example: * IPSL/CNRS data are created on the French TGCC supercomputers (http://wwwhpc.cea.fr/fr/complexe/tgcc.htm) * CMCC data will be generated using the CMCC-CM2 climate model, on the computational facilities available at CMCC. * DMI will make use of the EC-Earth coupled climate model running on the high-performance computing facilities at DMI/IMO. </td> </tr> <tr> <td> 5 </td> <td> The origin of the model data is specified for WP1 and WP4, which deliver input to WP5 </td> </tr> </table>

1E) What is the expected size of the data?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The project-related model data archives can reach several TB for one member of the ensemble simulations that will be generated. Therefore, about 1000 TB in total will be generated and stored by the partners. </td> </tr> <tr> <td> 2 </td> <td> The size of the generated data, such as the IPSL/CNRS reanalysis, will depend on the list of outputs (fields, variables) shared with Blue-Action partners. This is still to be decided. Collected data: the simulated data is the largest part, and this will amount to up to 500 TB. </td> </tr> <tr> <td> 3 </td> <td> The project-related model data archives can reach several TB for one member of the ensemble simulations that will be generated. Therefore, about 1000 TB in total will be generated and stored by the partners. </td> </tr> <tr> <td> 4 </td> <td> A precise figure for the size of the data is still not available, as this will depend on the number of hindcasts that will be performed, the number of model variables and their respective saving frequencies. However, the data size is expected to be of the order of several tens of TB. </td> </tr> <tr> <td> 5 </td> <td> Data in WP5 are generally value-added syntheses of WP1 and WP4 outputs, and data linked to specific services will only be a fraction of these archives. </td> </tr> </table>

1F) To whom might it be useful outside the project ('data utility')?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The data can be useful to the general climate research community and beyond, including the cluster of H2020 projects funded under BG9, 10 and 11. </td> </tr> <tr> <td> 2 </td> <td> The research community in a broad sense. Modeling groups, for validation and climate system analysis. </td> </tr> <tr> <td> 3 </td> <td> The data can be useful to the general climate research community. </td> </tr> <tr> <td> 4 </td> <td> The data can be useful to the wide climate research community, from experts in climate dynamics and operational oceanography to climate impact analysts. Due to the idealized nature of part of the simulations that will be performed in WP4, such model data will mainly be usable by the climate research community. </td> </tr> <tr> <td> 5 </td> <td> Climate service centers in general and targeted stakeholder groups.
The utility of data is to be explored in part through dedicated cluster activities and in part through the co-development and co-design of services with stakeholders (partners). </td> </tr> </table>

**2\. FAIR data**

## 2.1. Making data findable, including provisions for metadata

2.1A) Will the data produced and/or used in the project be discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> It is neither relevant nor planned in the project to make the entire dataset identifiable and locatable by means of a standard identification mechanism. Since part of the project is to test novel approaches, the data generated within the project need to be scientifically ‘quality controlled’ before being shared widely. Beyond that, the seasonal re-forecasts conducted in the project are scientific and should be applied with proper knowledge of the involved uncertainties. Hence, we expect that data can most appropriately be shared based on individual contacts. </td> </tr> <tr> <td> 2 </td> <td> For the specific reanalysis produced (IPSL/CNRS) this will be investigated. For time series of ocean variables it is not outlined, but will be pursued (e.g. Pangaea). </td> </tr> <tr> <td> 3 </td> <td> It is not feasible or planned in the project to make the dataset identifiable and locatable by means of a standard identification mechanism. </td> </tr> <tr> <td> 4 </td> <td> Individual partners (e.g. IPSL/CNRS) will investigate the feasibility of making certain datasets locatable and identifiable. Their progress and experiences will serve to guide the WP partner approach. </td> </tr> <tr> <td> 5 </td> <td> The approach will be considered at a later stage in the development process. </td> </tr> </table>

2.1B) What naming conventions do you follow, if relevant?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> When applicable, post-processing of output will follow the CMOR convention (for CMIP-style simulations). Seasonal re-forecasts will follow the C3S conventions, which will likely become standard during the project's lifetime. </td> </tr> <tr> <td> 2 </td> <td> For reanalysis, CMOR (CMIP6). In general, CF conventions and CMOR 3 are used for model data according to the CMIP6 protocol (the ece2cmor code for generating the data from the raw data is available on GitHub and NLeSC's eSTEP online). For ocean observations, naming conventions are not an issue. </td> </tr> <tr> <td> 3 </td> <td> No convention will be imposed on the participants, as model output data will be used mostly by project partners. When and where resources are available to post-process the data, the CMOR convention is preferred by partners. </td> </tr> <tr> <td> 4 </td> <td> Needs to be decided as part of the coordination. The CMOR convention as used within CMIP6 is a possibility. </td> </tr> <tr> <td> 5 </td> <td> Not relevant. </td> </tr> </table>

2.1C) Will search keywords be provided that optimize possibilities for re-use?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Not relevant, see above. </td> </tr> <tr> <td> 2 </td> <td> CMIP6 and CMOR will suffice for model data. For ocean observations exploited in Blue-Action, this activity will lie outside the project, where the observations are collected. </td> </tr> <tr> <td> 3 </td> <td> CMIP6 and CMOR will suffice for model data.
</td> </tr> <tr> <td> 4 </td> <td> This is not planned, but the advantages will be investigated (IPSL/CNRS). </td> </tr> <tr> <td> 5 </td> <td> This will be addressed when services are in a more mature state of development. </td> </tr> </table>

2.1D) Do you provide clear version numbers?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Model data will be provided with specific reference to the model version. </td> </tr> <tr> <td> 2 </td> <td> * IPSL/CNRS reanalysis: yes, for model versions and assimilation method versions. * Model outputs (NLeSC): yes, for the model and for the data generated. * Ocean observations and derived products and parameters: when relevant. </td> </tr> <tr> <td> 3 </td> <td> Model data will be provided with specific reference to the model version. </td> </tr> <tr> <td> 4 </td> <td> Model versions and type (hindcast/forecast) will be supplied, and we will follow the CMIP6 naming convention. </td> </tr> <tr> <td> 5 </td> <td> Relevant for product updates. </td> </tr> </table>

2.1E) What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> As suggested by CMIP/DCPP and C3S. </td> </tr> <tr> <td> 2 </td> <td> IPSL/CNRS reanalysis: CMOR (CMIP6) </td> </tr> <tr> <td> 3 </td> <td> The CMOR convention from CMIP6 is a possibility </td> </tr> <tr> <td> 4 </td> <td> CMOR convention from CMIP6 </td> </tr> <tr> <td> 5 </td> <td> Not applicable </td> </tr> </table>

### 2.2. Making data openly accessible

2.2 A) Which data produced and/or used in the project will be made openly available as the default?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The model data can be made openly available on request (see comment above). The partners will provide the facility for the data exchange, according to the resources available to them. </td> </tr> <tr> <td> 2 </td> <td> Model output will be or can be made available on request from partners; all reanalysis data (IPSL/CNRS) will be openly available. For ocean observations, data are open as default. </td> </tr> <tr> <td> 3 </td> <td> The model data can be made openly available by request. The partners will provide the facility for the data exchange, according to the resources available to them. </td> </tr> <tr> <td> 4 </td> <td> The model data can be made openly available by request. Partners will provide the facility for the data exchange on an ad-hoc basis and according to the resources available. </td> </tr> <tr> <td> 5 </td> <td> Not by default </td> </tr> </table>

2.2 B) Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions were made in the consortium agreement.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> No beneficiaries have generally closed their data according to the CA. Exceptions for specific datasets exist. </td> </tr> <tr> <td> 2 </td> <td> No beneficiaries have generally closed their data according to the CA. Exceptions for specific datasets exist. </td> </tr> <tr> <td> 3 </td> <td> No beneficiaries have generally closed their data according to the CA. </td> </tr> <tr> <td> 4 </td> <td> No beneficiaries have generally closed their data according to the CA. </td> </tr> <tr> <td> 5 </td> <td> No beneficiaries have generally closed their data according to the CA. Exceptions for specific datasets exist.
</td> </tr> </table>

2.2 C) How will the data be made accessible (e.g. by deposition in a repository)? What methods or software tools are needed to access the data?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The data will be stored in individual institutional repositories, and also (to be developed during the course of the project) at C3S (seasonal forecasts only). The methods or software tools needed to access the data can be institution dependent. </td> </tr> <tr> <td> 2 </td> <td> Reanalysis (IPSL/CNRS) data will be provided in NetCDF format. They could be provided in a repository or via any other solution agreed upon within the consortium. Other model outputs: in the repositories of SURFSara (NLeSC) or CMIP5/CMIP6 (NERSC). Ocean observations including time series will be transferred to _http://www.oceansites.org_ (under development) to the extent found relevant. </td> </tr> <tr> <td> 3 </td> <td> The data will be stored in individual institutional repositories. The methods or software tools needed to access the data can be institution dependent. </td> </tr> <tr> <td> 4 </td> <td> In general, data will be provided in NetCDF format, in a partner repository or via an FTP solution. Other general solutions may be agreed upon within the consortium. For example, specific partners may choose to make use of the THREDDS Data Server (https://www.unidata.ucar.edu/software/thredds/current/tds/), as it will be easily accessible on the supercomputer used to run the simulations. </td> </tr> <tr> <td> 5 </td> <td> Under consideration. </td> </tr> </table>

2.2 D) Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> In general, documentation about the software needed to access the data is publicly available. </td> </tr> <tr> <td> 2 </td> <td> NetCDF will be used for reanalysis and model data. NetCDF is open source. It will also be used for most observational data. No documentation for other observational data is needed. </td> </tr> <tr> <td> 3 </td> <td> In general, documentation about the software needed to access the data is publicly available. </td> </tr> <tr> <td> 4 </td> <td> NetCDF is open source. Documentation for the use of the software relevant for data access will be made publicly available. This includes packages for decoding GRIB data. </td> </tr> <tr> <td> 5 </td> <td> Not applicable </td> </tr> </table>

2.2 E) Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The data and associated metadata, documentation and code will be deposited at institutional level. </td> </tr> <tr> <td> 2 </td> <td> The question of deposition is still to be decided for the reanalysis data (IPSL/CNRS). The SURFSara repository used by certain partners (NLeSC) is a trusted and certified repository. For other partners, the data and associated metadata, documentation and code will as a minimum be deposited at institutional level. Ocean observations including time series will be transferred to _http://www.oceansites.org_ (under development) to the extent found relevant. </td> </tr> <tr> <td> 3 </td> <td> The data and associated metadata, documentation and code will be deposited at institutional level.
</td> </tr> <tr> <td> 4 </td> <td> The data and associated metadata, documentation and code will be deposited at institutional level. </td> </tr> <tr> <td> 5 </td> <td> Data will be part of prototype services and deposited at institutional level. </td> </tr> </table>

2.2 F) Have you explored appropriate arrangements with the identified repository? If there are restrictions on use, how will access be provided?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> We have not explored project-specific arrangements with the respective repositories. </td> </tr> <tr> <td> 2 </td> <td> Only for the SURFSara repository used by certain partners. </td> </tr> <tr> <td> 3 </td> <td> We have not explored appropriate arrangements with the identified repository. </td> </tr> <tr> <td> 4 </td> <td> There is normally no restriction on the repositories identified, but if any exist, they could be removed on request. Partners choosing an FTP server solution with access limited to the Blue-Action WP4 partners will have no restrictions. </td> </tr> <tr> <td> 5 </td> <td> Not applicable. </td> </tr> </table>

2.2 G) Is there a need for a project data access committee?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Over time, many individual agreements might be better handled by a joint committee or an internal wiki page, where policies and data exchanges are documented. </td> </tr> <tr> <td> 2 </td> <td> No, but there is a need to explore possibilities and agree on a list of variables to be exchanged. There should be contact points, and all should use DOIs. </td> </tr> <tr> <td> 3 </td> <td> A committee or a group is needed. </td> </tr> <tr> <td> 4 </td> <td> According to a number of partners, there is a need for a project data access committee. This reflects the coordinated nature of the work in the WP, and in the first phase a forum at WP level will be created, reporting to the Project Office. </td> </tr> <tr> <td> 5 </td> <td> Not at project level. </td> </tr> </table>

2.2 H) Are there well described conditions for access (i.e. a machine readable license)? How will the identity of the person accessing the data be ascertained?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Currently, all data exchange is within the project (though across WPs, and hence with WP5 also to ‘stakeholders’) and by individual agreement (including specification of purpose). If the number of requests increases, this will have to be organized (see above). </td> </tr> <tr> <td> 2 </td> <td> Most data are open. </td> </tr> <tr> <td> 3 </td> <td> The present plan is that the data will be available by individual request. We do not have a good way to ascertain the identity of the individual persons accessing the data. </td> </tr> <tr> <td> 4 </td> <td> The present plan is that the data will be available by individual request. We have not yet identified a way to ascertain the identity of the individual persons accessing the data. </td> </tr> <tr> <td> 5 </td> <td> Not applicable </td> </tr> </table>

### 2.3. Making data interoperable

2.3 A) Are the data produced in the project interoperable, that is, allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?
<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> In general, data can be exchanged and re-used between researchers, institutions, organizations and countries. However, depending on the complexity of the individual models, documentation is likely needed. </td> </tr> <tr> <td> 2 </td> <td> For model and reanalysis data, the standard format allows interoperability, although NetCDF is not like RDF, and semantic linking with other data is not possible. Observational data are fully interoperable. </td> </tr> <tr> <td> 3 </td> <td> In general, data can be exchanged and re-used between researchers, institutions, organizations and countries. However, depending on the complexity of the individual models, documentation is likely needed. </td> </tr> <tr> <td> 4 </td> <td> Interoperability and re-use are ensured by the use of the NetCDF and GRIB data formats </td> </tr> <tr> <td> 5 </td> <td> Data and products generated are intended to be interoperable and re-usable. </td> </tr> </table>

2.3 B) What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> No specific standards. </td> </tr> <tr> <td> 2 </td> <td> * Reanalysis and model data: CMOR (CMIP6) and NetCDF * Observational data: n/a </td> </tr> <tr> <td> 3 </td> <td> No specific standards. </td> </tr> <tr> <td> 4 </td> <td> CMOR (CMIP6) for standard CMIP6 simulations </td> </tr> <tr> <td> 5 </td> <td> No specific standards. </td> </tr> </table>

2.3 C) Will you be using standard vocabularies for all data types present in your data set, to allow interdisciplinary interoperability?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> No plans to do so. </td> </tr> <tr> <td> 2 </td> <td> Yes </td> </tr> <tr> <td> 3 </td> <td> No plans to do so. </td> </tr> <tr> <td> 4 </td> <td> For standard CMIP6 simulations, yes. </td> </tr> <tr> <td> 5 </td> <td> No plans. </td> </tr> </table>

2.3 D) In case it is unavoidable that you use uncommon or generate project-specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> We do not plan to use uncommon ontologies or vocabularies in the project. </td> </tr> <tr> <td> 2 </td> <td> We do not generate linked data, so this is a step beyond what is envisioned and needed now. The metadata standards suffice. </td> </tr> <tr> <td> 3 </td> <td> We do not plan to use uncommon ontologies or vocabularies in the project. </td> </tr> <tr> <td> 4 </td> <td> We do not plan to use uncommon ontologies or vocabularies in the project. </td> </tr> <tr> <td> 5 </td> <td> </td> </tr> </table>

### 2.4. Increase data re-use

2.4 A) How will the data be licensed to permit the widest re-use possible?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Wide use of our data is ensured, for example, by participation in CMIP6 and C3S (Copernicus). Appropriate re-use will be ensured by making known what kinds of experiments we conducted, so that they can be made available on request. </td> </tr> <tr> <td> 2 </td> <td> Software potentially generated in the project will have an Apache license; the data is fully open. </td> </tr> <tr> <td> 3 </td> <td> We plan to join larger modeling communities to promote the wider re-use of data.
</td> </tr> <tr> <td> 4 </td> <td> By ad-hoc access to data for relevant scientific partners with the purpose of common scientific work. </td> </tr> <tr> <td> 5 </td> <td> No licensing considered. </td> </tr> </table>

2.4 B) When will the data be made available for re-use outside the project? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> As we make data available on request, we expect to discuss the purpose of the data use in individual agreements (at minimum a short documentation of the intention, to avoid overlaps). Therefore, we expect no general embargo, and follow the standard practice (which is that data can be made available once we have published them). </td> </tr> <tr> <td> 2 </td> <td> No embargo on central reanalysis products (IPSL/CNRS). Ocean observations are not embargoed. </td> </tr> <tr> <td> 3 </td> <td> After publication, data will be shared with the science communities outside the project upon request, with no specific embargo time but possibly limited by the resources available at partner level. </td> </tr> <tr> <td> 4 </td> <td> A limited-time embargo will be applied in order to secure the publication of coordinated experiments in high-ranked journals co-authored by the WP4 investigators. </td> </tr> <tr> <td> 5 </td> <td> No specific embargo time considered. </td> </tr> </table>

2.4 C) Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Data are usable and not restricted. Re-use may require a good knowledge of the data and a description of limitations, biases, etc. </td> </tr> <tr> <td> 2 </td> <td> Data are usable and not restricted. Re-use may require a good knowledge of the data and a description of limitations, biases, etc. </td> </tr> <tr> <td> 3 </td> <td> Data are usable and not restricted. Re-use may require a good knowledge of the data and a description of limitations, biases, etc. </td> </tr> <tr> <td> 4 </td> <td> Data re-use will be possible given a good knowledge/description of limitations, biases, etc. Access to the model simulations produced within WP4 should be made available for scientific research to third parties. Use for commercial applications after the end of the project will be decided and regulated at project level or as outlined in the consortium agreement. </td> </tr> <tr> <td> 5 </td> <td> Regulated by the CA. </td> </tr> </table>

2.4 D) How long is it intended that the data remains re-usable?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> This is decided by either the project or the individual institution. </td> </tr> <tr> <td> 2 </td> <td> At least 5 years for model data; observational data remain re-usable without limit. </td> </tr> <tr> <td> 3 </td> <td> This is decided by either the project or the individual institution. </td> </tr> <tr> <td> 4 </td> <td> Possibly without limitation, but this will be subject to the individual partners' resource availability once the project is over. </td> </tr> <tr> <td> 5 </td> <td> No limitations considered. </td> </tr> </table>

## 3\. Allocation of resources

3 A) What are the costs for making data FAIR in your project?
<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The project has no specific funds to cover the data management cost, and these efforts have not been independently assessed. Costs are embedded in the work of the research and development teams. </td> </tr> <tr> <td> 2 </td> <td> The project has no specific funds to cover the data management cost, and these efforts have not been independently assessed. Costs are embedded in the work of the research and development teams. </td> </tr> <tr> <td> 3 </td> <td> The project has no specific funds to cover the data management cost, and these efforts have not been independently assessed. Costs are embedded in the work of the research and development teams. </td> </tr> <tr> <td> 4 </td> <td> The project has no specific funds to cover the data management cost, and these efforts have not been independently assessed. Costs are embedded in the work of the research and development teams. </td> </tr> <tr> <td> 5 </td> <td> The project has no specific funds to cover the data management cost, and these efforts have not been independently assessed. Costs are embedded in the work of the research and development teams. </td> </tr> </table>

3 B) How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Open access and other eligible costs may be covered by the individual partners' budgets in the project. </td> </tr> <tr> <td> 2 </td> <td> Open access and other eligible costs may be covered by the individual partners' budgets in the project. </td> </tr> <tr> <td> 3 </td> <td> Open access and other eligible costs may be covered by the individual partners' budgets in the project. </td> </tr> <tr> <td> 4 </td> <td> Open access and other eligible costs may be covered by the individual partners' budgets in the project. </td> </tr> <tr> <td> 5 </td> <td> Open access and other eligible costs may be covered by the individual partners' budgets in the project. </td> </tr> </table>

3 C) Who will be responsible for data management in your project (DMI \+ WP responsible persons)?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> WP leader </td> </tr> <tr> <td> 2 </td> <td> Juliette Mignot (IPSL/CNRS), tbc. </td> </tr> <tr> <td> 3 </td> <td> WP leader </td> </tr> <tr> <td> 4 </td> <td> Didier Swingedouw (IPSL/CNRS), tbc. </td> </tr> <tr> <td> 5 </td> <td> WP leader </td> </tr> </table>

3 D) Are the resources for long term preservation discussed: costs and potential value, who decides and how what data will be kept and for how long?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Discussions pending. </td> </tr> <tr> <td> 2 </td> <td> Discussions pending. </td> </tr> <tr> <td> 3 </td> <td> Discussions pending. </td> </tr> <tr> <td> 4 </td> <td> Discussions pending. </td> </tr> <tr> <td> 5 </td> <td> Discussions pending. </td> </tr> </table>

## 4\. Data security

4 A) What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> Following institutional and/or national provisions </td> </tr> <tr> <td> 2 </td> <td> The data are not considered sensitive. Data are securely stored within repositories.
</td> </tr> <tr> <td> 3 </td> <td> Following institutional and/or national provisions </td> </tr> <tr> <td> 4 </td> <td> We will mainly follow institutional or national provisions. </td> </tr> <tr> <td> 5 </td> <td> Assessment of sensitive data is pending. </td> </tr> </table>

4 B) Is the data safely stored in certified repositories for long term preservation and curation?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> The project relies in part on certified repositories, which ensure long-term preservation, and in part on institutional repositories. In general these are considered safe and will be kept accessible for exchange for the time needed by the partners to exploit model simulations and publish the most relevant results. In the longer term (beyond the project duration) these data may be archived, but their availability will be limited. </td> </tr> <tr> <td> 2 </td> <td> The project relies in part on certified repositories (observations, reanalysis), which ensure long-term preservation, and in part on institutional repositories. In general these are considered safe and will be kept accessible for exchange for the time needed by the partners to exploit model simulations and publish the most relevant results. In the longer term (beyond the project duration) these data may be archived, but their availability will be limited. </td> </tr> <tr> <td> 3 </td> <td> The project relies in part on certified repositories, which ensure long-term preservation, and in part on institutional repositories. In general these are considered safe and will be kept accessible for exchange for the time needed by the partners to exploit model simulations and publish the most relevant results. In the longer term (beyond the project duration) these data may be archived, but their availability will be limited. </td> </tr> <tr> <td> 4 </td> <td> The project relies in part on certified repositories, which ensure long-term preservation, and in part on institutional repositories. In general these are considered safe and will be kept accessible for exchange for the time needed by the partners to exploit model simulations and publish the most relevant results. In the longer term (beyond the project duration) these data may be archived, but their availability will be limited. </td> </tr> <tr> <td> 5 </td> <td> To be detailed. </td> </tr> </table>

**5\. Ethical aspects**

5 A) Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA).

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> We have not identified any ethical or legal issues yet. </td> </tr> <tr> <td> 2 </td> <td> We have not identified any ethical or legal issues yet. </td> </tr> <tr> <td> 3 </td> <td> We have not identified any ethical or legal issues yet. </td> </tr> <tr> <td> 4 </td> <td> We have not identified any ethical or legal issues yet. </td> </tr> <tr> <td> 5 </td> <td> To be evaluated. </td> </tr> </table>

5 B) Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> No privacy issues. </td> </tr> <tr> <td> 2 </td> <td> No privacy issues. </td> </tr> <tr> <td> 3 </td> <td> No privacy issues. </td> </tr> <tr> <td> 4 </td> <td> No privacy issues.
</td> </tr> <tr> <td> 5 </td> <td> Will be addressed at a later stage. </td> </tr> </table>

## 6\. Other issues

Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones?

<table> <tr> <th> WP </th> <th> Description </th> </tr> <tr> <td> 1 </td> <td> No other procedures identified. </td> </tr> <tr> <td> 2 </td> <td> The Dutch national e-infrastructure is used, in particular SURFSara, which is nationally funded. For ocean observations: OceanSITES, possibly others. </td> </tr> <tr> <td> 3 </td> <td> No other procedures identified. </td> </tr> <tr> <td> 4 </td> <td> No other procedures identified. </td> </tr> <tr> <td> 5 </td> <td> No other procedures identified. </td> </tr> </table>
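Across the work packages above, NetCDF with CF metadata conventions is the common denominator for data exchange. As a closing illustration, here is a minimal sketch of reading such a file with the open-source netCDF4 library; the file name and variable name are hypothetical, not actual project outputs:

```python
from netCDF4 import Dataset

# Hypothetical CF-convention file as exchanged between partners.
with Dataset("tas_EC-Earth_hindcast.nc") as ds:
    tas = ds.variables["tas"]          # e.g. near-surface air temperature
    print(tas.units, tas.long_name)    # CF metadata attributes, if present
    data = tas[:]                      # values as a (masked) array
```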
0291_3D NEONET_734907.md
Approach for clear versioning: The date (format: "dd_mm_yyyy") and the initials of the last person modifying the document (each researcher or staff member involved has an identifier code in the project of 2 or 3 capital letters) are added after the name of the modified file. If more than one version of the same document/author is generated on the same day, "v2, v3, v4..." is added after the date. Documents can be locked for access in Drive, so two versions are not generated simultaneously by different authors. All older versions are kept in Google Drive, at least until the final version is generated. Standards for metadata creation (if any). If there are no standards in your discipline, describe what metadata will be created and how (pending task, see above)

2.2 Making data openly accessible:

# ALL POINTS MUST BE REFINED/COMPLETED. INPUT FROM ALL INSTITUTIONS REQUIRED

Protection of patentable and sensitive (IP at risk) data will be the responsibility of the IP owners. These data, as well as publishable but not patentable data, will be protected and can be safely kept in 3D-NEONET folders with restricted access. Open access (green and gold) journals and media will always be the first choice for publication. Copies of non-open access 3D-NEONET articles (if any) will be kept in UCD Library repositories to enable availability to the research community and society in general. All multimedia and videos, as well as management guidelines and tools for clinical trials, generated in 3D-NEONET will be openly accessible on the internet (through YouTube, Vimeo, SlideShare and the 3D-NEONET website). How the data will be made available: Publications, Internet (Pending task) Methods and software tools needed to access the data (Pending task) Where the data and associated metadata, documentation and code are deposited (Pending task/described elsewhere in this DMP) How access will be provided in case there are any restrictions (Pending task/described in other sections. Different access levels granted through Google Drive)

2.3 Making data interoperable:

# ALL POINTS MUST BE REFINED/COMPLETED. INPUT FROM ALL INSTITUTIONS REQUIRED

Interoperability of your data. Data and metadata vocabularies are yet to be agreed in the consortium. In principle, no data sets that require the development of novel specific standards and methodologies to facilitate interoperability will be generated in the project. All datasets generated are expected to map to commonly used ontologies. 3D-NEONET will be using standard vocabulary for all data types present in the project data assets, to allow inter-disciplinary interoperability.

2.4 Increase data re-use (through clarifying licenses):

# ALL POINTS MUST BE REFINED/COMPLETED. INPUT FROM ALL INSTITUTIONS REQUIRED

3D-NEONET data will be licensed to permit the widest reuse possible (Pending task: elaborate and describe how) When the data will be made available for re-use (If applicable: why and for what period a data embargo is needed) (Pending task: to be discussed/agreed within the consortium) Is the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why (Pending task: to be discussed/agreed within the consortium) Data quality assurance processes.
Internal review by the steering committee, external review by the scientific & medical advisory board, and peer review for publication and patenting (pending task: an SOP for data quality assurance must be outlined for 3D-NEONET).

Length of time for which the data will remain re-usable: so far, there is no envisaged deadline for 3D-NEONET data re-use availability.

## 3\. ALLOCATION OF RESOURCES

There was no specific budget allocation to data management/FAIR in the original proposal. We estimate that the overall costs per researcher-month of exchange to make 3D-NEONET data FAIR will be roughly: a) €100/researcher-month (split among sending, host and coordinator institutions), plus b) 10% of the seconded fellow's time, plus c) 5% of the project manager's time, plus d) 1% of the Data Management Board members' time. Economic costs will come from the "management and indirect costs" budget of each research unit.

Project management in general (including data management) will be overseen by the Steering Board, comprising a Scientist in Charge from each partner and the dedicated Project Manager, who works 2 hrs/day on this project. Both sending and host institutions will be responsible for safe storage and appropriate sharing (in line with the GA, CA, MTA and IP agreements in force) of the data generated by exchanged fellows. There will also be a Data Management Board, composed of one representative researcher from each 3D-NEONET institution and chaired by the Project Coordinator/Project Manager, which will closely monitor data management in the consortium, ensuring the implementation of best practices. The Data Management Board will follow up with fellows in relation to data generated during their exchanges. Fellows will be responsible for completing the required information in the Fellows-Research-Data table described in section 1 (Data Summary) of this document. The Data Management Board will report to the Steering Board at general meetings and TCs. This board will be responsible for periodic reviews and updates (at least every 6 months) of this Data Management Plan.

The 3D-NEONET Steering Board will be ultimately responsible for the safe storage and overall redistribution of all raw and processed data generated in the consortium, ensuring its preservation and accessibility to the research community beyond the lifetime of this project. Long-term preservation of and accessibility to data generated in this project are crucial, as they may lead to the discovery or development of novel therapeutics for cancer or severe eye disease in the future. These will be ensured by (green and gold) Open Access publication of papers (research articles and reviews), combined with patent protection of exploitable results. Raw data will be kept in available repositories at each institution. Raw and processed data can also be safely stored long term in the 3D-NEONET Google Drive Collaborative of unlimited capacity, which is owned and provided free of charge to the consortium by the coordinator university (UCD).

## 4\. DATA SECURITY

The coordinator and project manager ensure appropriate and fluent communication between partners/researchers, facilitating secure shared filing repositories (Google Drive storage available at UCD). The 3D-NEONET Google Drive Collaborative of unlimited capacity provided by the coordinator (UCD) is an online storage service that allows users to upload, create, share, and work collaboratively with others on a variety of documents online. Files uploaded to Google Drive are stored in secure data centers.
* If hardware or a local drive breaks (computer lost or broken), files can still be accessed from other registered devices and from the cloud server, as well as by other users.
* Files deleted by mistake can be recovered from the recycle bin.
* Files are private unless shared. Access to files and directories is restricted to the coordinator and project manager by default, who control the permissions granted to different users (to access, use and share the data).

As an additional safety and recovery measure, we have in the consortium two mirror copies of the 3D-NEONET collaborative folder where all data is stored. These are automatically synchronised, and one of them is only accessible by the coordinator and the project manager. The second mirror copy is accessible by the whole steering committee (including the coordinator, project manager and WP leaders/scientists in charge at each institution). Fellows (and other members of the consortium) are given view/download access to certain files subject to specific requirements of the project. Finally, there is a Public-3D-NEONET folder with open view/download access where non-IP-sensitive files are shared within and beyond the consortium.

## 5\. ETHICAL ASPECTS

The 3D-NEONET project complies with ethical principles (including the highest standards of research integrity, as set out, for instance, in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct). All 3D-NEONET partners are sensitive to the appropriate ethical use of animals for scientific research, and the project will be carried out with maximum respect for all fundamental ethical principles, including those reflected in the Charter of Fundamental Rights of the European Union. The use of animals will be restricted to cases of absolute necessity, where in vitro or ex vivo experimentation is not valid to address certain aspects of the science programme. Also, human biological samples (ex vivo explant tissue) from cancer patients will be used for some experiments. All tissues collected will be consented for use by the patients attending the clinic.

The following documentation will be submitted to the coordinator before any experimentation with ethical implications is carried out: 1) copies of ethics approvals for the research with humans; 2) copies of relevant authorisations (for breeders, suppliers, users, and facilities) for animal experiments using genetically modified vertebrates (zebrafish, mice, rabbits); 3) if applicable, copies of the opinion or confirmation by the competent Institutional Data Protection Officer and/or authorisation or notification by the National Data Protection Authority (whichever applies according to the Data Protection Directive (EC Directive 95/46, currently under revision) and the national law). In addition, copies of training certificates/personal licenses of the staff involved in animal experiments will be provided before they start their secondments. All the documentation described above will be safely stored in the 3D-NEONET Google Drive Collaborative folder.

All the experiments undertaken in this project will be carried out in conformity with EU legislation. All regulations comply with European ethical guidelines for the use of animals and human tissue for research purposes.
Full details on regulations, licences, protocols and technical aspects related to the project ethics can be found in Annex 1 to the Grant Agreement (Description of the Action), Part B.

## 6\. OTHER

The coordinator institution (UCD) has a Research Data Management Unit based in the UCD Library that assists researchers at all stages of the research data lifecycle, from funding requirements to data capture and processing to sharing, storage and preservation. It has published a comprehensive website on all aspects of Research Data Management (http://libguides.ucd.ie/data). UCD IT Services provide the following additional information and support on data management: Data Classification Guidance, Data Storage (Google Drive) and IT Security.
0292_COLA_731574.md
# 4 Introduction

In COLA, WP1 is responsible for creating the DMP and for monitoring its implementation. This work package will coordinate and supervise how COLA data will be collected and/or produced by project partners and users, according to the restrictions and rules described in this deliverable. There are two DMP levels: activity- and project-level. The activity-level DMPs are as follows: user, technology, exploitation, dissemination & marketing and project management. The project-level DMP integrates these activity-level DMPs and presents the major aspects of the COLA data management strategy. This strategy will make COLA data available in line with its access level and in the format defined in the DMP.

This deliverable presents the COLA Data Management Plan that outlines how COLA will collect, process, publish and store data at project and work package level. The DMP defines a framework for managing COLA data to assure full lifecycle data management both during and beyond the project’s lifetime. COLA started working on the DMP at the very beginning of the project and this work will not end with the submission of this report. The DMP will evolve as COLA progresses. WP1 will monitor any activities that might affect the DMP and upgrade it accordingly.

In Section 5 the report lists the DMP criteria used to define the COLA DMP. They were developed considering the requirements of the activity-level DMPs: user, technology, dissemination & marketing, exploitation and project management. Section 6 first describes the two DMP levels: activity- and project-level. Next, it presents the activity-level DMPs in a table format with a short explanation of each criterion. Finally, it outlines the project-level DMP based on the activity-level DMPs. The report concludes with Section 7, which summarizes the major issues considered in the COLA DMP.

# 5 Data Management Plan’s Criteria

This section lists the DMP criteria that will be used in the COLA DMP. These have been selected considering the EC DMP guidelines and the Open Research Data Pilot, particularly the recommendations for full life cycle management through the implementation of the FAIR principles, which state that the data produced shall be **Findable, Accessible, Interoperable and Reusable** ( **FAIR** ). The COLA DMP will implement the FAIR principles at the level of conceptual rather than technical integration.

## 5.0.1 Data in COLA

**Existing and New Data.** It must provide a brief description of existing and new data, i.e. the nature, scope, and scale of the data that will be generated or collected. A good description of the data will help users to understand the characteristics of the data, their relationship to existing data, and any disclosure risks that may apply.

**Data Format.** The data format must specify the anticipated submission, distribution, and preservation formats for the data and related files. Using a pre-defined format for publishing and sharing data will make the processing and usage of data faster and more efficient.

**Producers and Consumers.** It must describe who will produce, manage and consume the data throughout the data life cycle.

**Metadata.** This sub-section must explain how data will be described by metadata to ensure that the data can be used effectively. Metadata must provide all of the information needed for accurate and proper usage. Structured or tagged metadata is preferred, such as the XML format of the Data Documentation Initiative (DDI) standard.
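To make the preference for structured metadata concrete, the sketch below shows how a DDI-style record for a COLA dataset could be generated. This is an illustration only, not part of the COLA DMP itself: the element names follow the general layout of a DDI Codebook record, and the title, identifier and producer values are hypothetical placeholders.

```python
# Minimal sketch of DDI-style structured metadata for a COLA dataset.
# Element names follow the general DDI Codebook layout; the identifier
# and title values are hypothetical placeholders.
import xml.etree.ElementTree as ET

codebook = ET.Element("codeBook")            # DDI root element
study = ET.SubElement(codebook, "stdyDscr")  # study description
citation = ET.SubElement(study, "citation")
title_stmt = ET.SubElement(citation, "titlStmt")
ET.SubElement(title_stmt, "titl").text = "COLA use case dataset (example)"
ET.SubElement(title_stmt, "IDNo").text = "COLA-WP8-UC1-001"
prod_stmt = ET.SubElement(citation, "prodStmt")
ET.SubElement(prod_stmt, "producer").text = "COLA project, WP8"

# Serialize the record so it can be stored next to the data it describes.
print(ET.tostring(codebook, encoding="unicode"))
```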
## 5.0.2 Data Management in COLA

**Data Management.** It must explain how the data will be managed during the project, with information about version control, naming conventions, etc.

**Storage and Backup.** It must explain how and where data will be stored to ensure its safety. It must also outline how many copies will be maintained, where these copies will be stored, and how these copies will be synchronized.

**Access to Data and Data Sharing.** The DMP must indicate how COLA intends to archive and share data and why particular options have been selected. Data sharing may include the following technical solutions:

* a repository, such as a community, national or international repository, etc.
* a website that COLA will create and maintain.

Data sharing can be either:

* _self-dissemination_ - the data producer must arrange for eventual archiving of the data after the self-dissemination period terminates and specify the schedule for data sharing in the grant application, or
* _delayed dissemination_ - the data producer must have an arrangement with a public data repository for archival preservation of the data, with dissemination occurring at a later date.

**Archiving and Preservation.** It must ensure that data are preserved for the long term. Archiving and preservation will enable active management of digital data over time, keeping it available and usable. COLA is considering depositing data with a trusted digital archive to ensure that they are curated and handled according to good practices. This sub-section must also indicate how data will be selected for archiving, how long the data will be held, and what COLA plans for the eventual transition or termination of the data collection in the future.

**Quality Assurance.** It must specify how COLA will ensure that the data meet quality assurance standards, because producing data of high quality is essential to the advancement of the project, and every effort should be made to be transparent with respect to the data quality measures undertaken across the data life cycle.

## 5.0.3 Security, IPR and Ethics in COLA

**Security.** It must ensure that data is secured over its life cycle. The security plan must outline how raw and processed data will be secured. Raw research data may include direct identifiers or links to direct identifiers and should be well protected during collection, cleaning, and editing. Processed data may or may not carry disclosure risk and should be secured in keeping with the level of disclosure risk inherent in the data. Secure work and storage environments may include access restrictions (e.g., passwords), encryption, power supply backup, and virus and intruder protection.

**Ethics and Privacy.** If any data raises ethics and/or privacy issues, there must be written consent from the data producers that the information they provide will remain confidential when the data are shared.

**Intellectual Property Rights.** It must describe who will hold the intellectual property rights for the data and other information created by the project, i.e. the project consortium or the project partner that produced the data. It must also outline whether these rights will be transferred to another organization for data distribution and archiving. Further, it must specify whether any copyrighted material will be used and whether the project or project partners will obtain permission to use the materials and disseminate them. The project will get a statement from the data producer of who owns the data to enable its dissemination.
**Legal Requirements.** It must indicate whether any legal requirements apply to archiving and sharing data. It is important to define how the project will manage legal requirements, if there are any, because some data may have legal restrictions that impact data sharing. This sub-section must describe the issues that might impact data sharing.

# 6 COLA Data Management Plan

COLA produces and manages the following major data types:

* use case data,
* technology data,
* exploitation data,
* dissemination and marketing data, and
* project management data.

Considering this wide range of data types, the COLA Data Management Plan incorporates activity-level DMPs that address the specific aspects and requirements of these data types. WP8 elaborated the User DMP that outlines how the three COLA use cases: Social media data analytics for public sector organisations use case (Inycom + Sarga), Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish), and Evaluation Planning Service use case (Saker + Brunel University) handle data. The first use case will handle social media data, the second one both company and social media data, while the third one only company data. As a result, they have to manage different data types available in different data formats, stored and backed up at different locations, following different approaches and implementing multiple security measures to protect data. This heterogeneity will further increase when WP8 implements another 20 proof of concept use cases using the COLA infrastructure and the MiCADO platform. It will require extending the User DMP.

The Technology DMP describes how data is handled in the COLA infrastructure and in the MiCADO platform. Key contributors to this DMP are WP4 (COLA infrastructure) and WP5-WP7 (MiCADO platform). The COLA infrastructure is IaaS, while the MiCADO platform is PaaS that enables running use cases as SaaS. The COLA infrastructure incorporates one commercial cloud (CloudSigma) and three academic clouds (SICS, SZTAKI and UoW). As a result, similarly to the User DMP, the Technology DMP also has a wide range of data management requirements. There will be a further three activity-level DMPs: the Exploitation DMP, the Dissemination and Marketing DMP and the Project Management DMP. To describe these activity-level DMPs, WP1 developed a table (see Annex I) based on the DMP criteria listed in Section 5.

## 6.1 Activity-level DMPs

**6.1.1 User DMP**

### Work package: WP8

**Person:** Jose Manuel Martin Rapun

<table> <tr> <th> **Data** </th> </tr> <tr> <td> **Existing and New Data** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** This use case will produce new data by processing data collected from Twitter using the Twitter APIs. There are two existing data types that will be used in this use case: _Tweets:_ Data posted by Twitter users matching a list of keywords of interest for the Regional Government (tourism in the region, employment, etc.). This data will contain attributes such as tweet id, user who posted the tweet, creation date, tweet content and number of retweets. There will also be calculated attributes, such as the category among those defined in the keywords and sentiment analysis. _Users:_ Data about Twitter users, for example about users who posted tweets related to issues of interest for the Government of Aragón.
This data will contain attributes such as user id, user name, location, date of birth, number of followers, number of tweets and users following. There will also be calculated attributes, such as the activity of the user on Twitter, the influence of the user and sentiment analysis. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** This use case will use the following existing data types: _Ticketing Data:_ TAA has data processing agreements with all its clients in order to collect and process ticket sales data. This data is anonymised during the data transformation process before it reaches any part of the infrastructure which will be developed within the use case. From this form it cannot be traced back to the individual customer. Customers are identified by a key number which itself does not provide any information on the person. Certain tags derived from demographic data, such as Experian Mosaic codes and bespoke segmentation tags, are also stored, along with customers’ postcodes. _Customer Surveys_ : Separately from ticketing data, responses to customer surveys run by arts organisations are stored in a different part of the system. These data might contain some basic demographic information along with a postcode. While TAA does not collect personal details, survey responses might contain pieces of socio-demographic information which might be considered sensitive. These surveys collect the explicit consent of the user to process the requested sensitive data. _Business information_ : Some of the data processed by TAA can also be regarded as business sensitive to its clients, e.g. ticket sales information or website analytics metrics. There will be agreements between use case partners that will explicitly state that no customer or organisation will ever be identified in any output of analysis or reporting. _Publicly available data_ : The collection and processing of other data will be from sources that are open to the public, i.e. social media platforms such as Twitter. The types of data collected range from user names, (reported) gender, age and sex to post content and particular hashtags. In addition, the data collected from open social networks (i.e. Twitter) will be subject to the same laws governing voluntary disclosure detailed above in the Inycom case study on public social networks and will be handled under the terms of the Twitter Privacy Policy and related third party development licenses. This use case will produce new data, for example summaries produced from the data types mentioned above. **Evaluation Planning Service (Saker + Brunel University)** This use case will generate two data types: _Simulation models_ : A given model will be run with a defined scenario. The model will contain the definition of the logic and data structures; a database will store all input data and results. _Simulation data_ : Simulation data will be produced by the simulation runs. </td> </tr> <tr> <td> **Data Format** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** The collected data of this use case will be exchanged and distributed between the different system components in JSON format using web services. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** The data of this use case will not have a standardised format.
It will be simply formatted, transparent text, with the processing of the data encoded in code. In addition, raw text and CSV files might be used in data pipelines. This data will be stored in relational and non-relational databases. **Evaluation Planning Service (Saker + Brunel University)** Simulation models will be in the proprietary Flexsim format (.FSM). These will contain the definition of the simulation model, which encompasses logic, data structures and objects (processes, queues, etc.). Simulation data, such as input parameters and key performance indicators (results data), will be stored in an SQL database. This data will be held in tables that cross-reference each other. Further, data will be organised into scenarios and datasets, where a scenario is defined as a base dataset that is overridden by the data in an ordered set of subsequent datasets for a sub-set of the data points. All data points belong to datasets that will be hosted on a server residing on the web. </td> </tr> <tr> <td> **Data Producers and Consumers** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** _Data producers_ : Twitter users posting information about topics of interest to the Aragon Regional Government. _Data consumers_ : In the COLA project the data consumers will be civil servants in the Aragon Regional Government eAdministration. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** _Data producers_ : Arts organisations, TAA (for data summaries), and public social media companies as producers of social media data. The two organisations processing data will be the Audience Agency and Outlandish. No third parties will be used to process data. _Data consumers_ : The Audience Agency staff, selected clients (for suitably selected subsets of data and data summaries), and the public (for non-sensitive data summaries and open data). **Evaluation Planning Service (Saker + Brunel University)** The _data producers_ and the _data consumers_ will be the same entity. This will be the end user of the simulation model in the client organisation. The end user will create a scenario to be simulated. The simulation model will then run to generate a series of KPIs. These will be reviewed by the end user and, where appropriate, communicated to any interested parties within the client organisation. </td> </tr> <tr> <td> **Metadata** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** Data in this use case will be stored in a structure defined using XML format in SOLr. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** A set of tags specifying data range and type, e.g. genre, art form, venue, year, data source. In general, the metadata itself is the result of the relational databases being relational. If metadata is stored, it will be done within the databases themselves. **Evaluation Planning Service (Saker + Brunel University)** This use case will not use any metadata to describe use case data. </td> </tr> </table>
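As an illustration of the JSON exchange format mentioned under Data Format above, the sketch below shows what a single tweet record passed between the system components might look like. The field names are hypothetical, derived from the attributes listed under Existing and New Data; the actual schema would be fixed during implementation.

```python
# Illustrative sketch only: a tweet record as it might be exchanged in JSON
# between the use case components. Field names are hypothetical, based on the
# attributes listed above (tweet id, user, creation date, content, retweets,
# plus the calculated category and sentiment attributes).
import json

tweet_record = {
    "tweet_id": "892354712345678901",
    "user_id": "48571290",
    "created_at": "2017-06-15T10:32:00Z",
    "text": "Great hiking routes in the Pyrenees this summer!",
    "retweet_count": 17,
    # calculated attributes added during processing:
    "category": "tourism",        # one of the configured keyword categories
    "sentiment": "positive",
}

print(json.dumps(tweet_record, indent=2, ensure_ascii=False))
```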
<table> <tr> <td> **Data Management** </td> </tr> <tr> <td> **Storage and Backup** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** The data will be stored in databases, namely SOLr for the tweets and probably MySQL for users (a NoSQL option such as JENA TDB or MongoDB will be explored). There will be a weekly backup, but daily incremental backups will also be considered. The Twitter data can be reprocessed using the Twitter API for at least one month. As a result, the backup policy outlined above is considered proper and safe. The backup copies will be stored on servers located in different locations and managed by different providers. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** Use case data will be stored in a series of databases managed by Outlandish. The majority of these systems are immutable and able to be restored without requiring backups, from either configuration as code or machine images. Backups will be taken every day and will be kept for a week, unless a different schedule is identified as most suitable in the course of business requirements gathering. There will also be regular restoration rehearsals. Use case documentation and information will be stored using SharePoint with continuous backups. **Evaluation Planning Service (Saker + Brunel University)** All simulation models will be stored by simulation users. This could be Saker Solutions or Saker’s client. These models will typically be held on a local server that is backed up outside of the COLA project. The SQL Server database will reside on a server that is regularly backed up by the company that is responsible for hosting the database. For example, at Saker Solutions the database servers are backed up daily. </td> </tr> <tr> <td> **Access to Data and Data Sharing** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** The dissemination and exploitation of the data will be done by means of a dedicated website interface (self-dissemination), where the users will be able to filter and analyse the data. They will also have the ability to print the charts and export the data in the tables to CSV format. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** A Confidentiality Agreement will be signed separately with each organisation joining the warehouse. Any publicly available data (e.g. from Twitter, Open Geography portal) that will be processed will always be obtained through legal means and in accordance with the relevant terms and conditions agreements. Only anonymised and aggregated information will be available in the publicly accessible parts of the system (i.e., the dashboards). All non-publicly available data processed by TAA is provided by their clients (arts organisations). Certain data summaries might be made publicly available, although in each case this will have to undergo a review process to ensure that no sensitive information is disclosed. **Evaluation Planning Service (Saker + Brunel University)** All simulation models, input data and results are confidential. Access to data is dependent upon the sensitivity required by the client and may be subject to prior formal authorisation. The data for each project that is run is specific to that project and may only be accessed by persons who are authorised to do so.
</td> </tr> </table> <table> <tr> <th> **Archiving and Preservation** </th> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** Long-term archiving (> 1 year) of Twitter data could be a challenging task because of its potentially huge volume. Thus, SOLr data older than 1 year will be removed from the production servers and archived for at least one year. User data will be much smaller in volume and will be more important in the long term than Twitter data. As a result, it will be kept in the production environment for a longer time (3-5 years) and archived every year. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** Each data backup will be archived for the period of one week. **Evaluation Planning Service (Saker + Brunel University)** Data is to be backed up every 24 hours, but use case data will not be archived. </td> </tr> <tr> <td> **Quality Assurance** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** The quality and value of the data will be monitored by the users. Wrong data will be either corrected or removed using tools. The main risk is that fake data can be loaded into the system. For example, a Twitter search may provide data that is not relevant because the search filters (keywords) were not properly set. To sort out this issue, the search filters must be improved when irrelevant data is detected. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** Data will be regularly checked for consistency and quality, through automatic and (occasionally) manual checks. **Evaluation Planning Service (Saker + Brunel University)** This use case will use Saker Solutions’ quality system. It will be updated to cover any quality-related matters pertaining to data uploaded to the Cloud for running this use case. </td> </tr> <tr> <td> **Security, IPR and Ethics** </td> </tr> <tr> <td> **Security** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** Both data types are not sensitive and have been publicly disclosed by the Twitter users themselves. Access to the web interface will require a user/password with different access levels. Two major user groups will need access: administrators (who can change configuration such as keywords and crawlers) and end users (only enabled to look up the data). Regarding the infrastructure servers, at least firewalls and IP-restricted access will be used, apart from the security strategies/features provided by the COLA platform. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** It is company standard at Outlandish that developers must encrypt their hard drives as well as all S3 buckets they use to store sensitive data. They also dispose of any Amazon Web Services (AWS) instance following all AWS protocols and standards. If necessary, these AWS servers can be encrypted at rest. When AWS servers are shut down, they are safely destroyed to standards that are acceptable to government agencies. TAA uses multi-factor authentication for access to their systems and only selected staff members have access to full data sets.
Full access to databases and servers will be granted on a need-to-know basis to selected staff members. **Evaluation Planning Service (Saker + Brunel University)** All simulation models, input data and results of this use case are confidential. Access to this data will depend upon the sensitivity as defined by the client and may be subject to prior formal authorisation. The data for each project is specific to that particular project and may only be accessed by users who are authorised to do so. It will at least need to be protected from third-party access and in some cases will be such that it can only run on private networks. Considering these security requirements, Saker will run this use case on the Cloud, for example G-Cloud, and on a private infrastructure, for example hosted at Saker. </td> </tr> <tr> <td> **Ethics and Privacy** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** This use case involves the collection and processing of personal data and information that are not sensitive. The use case will follow a comprehensive approach based on the “Data protection by design” principle. This approach will protect data from the first stage up to the final stage. It is important to highlight that the use case does not collect data directly from individuals, only from secondary sources, such as social networks (for example Twitter) and tools owned by the end users (for example tools of the Aragon public administration). Additionally, it must be noted that the information gathered is just information that individuals post voluntarily and disclose publicly, so there is no need to obtain the prior consent of individuals. Therefore, this use case will sign an agreement with the data provider (the so-called Developer Agreement, which will grant the license to use the Twitter API and Content). This agreement will be based on the Privacy Policy that individuals accept when joining the social network. On the other hand, concerning data provided by the public administration, it will be responsible for collecting the information by its own means. The structure of this information flow will be configured through another file and the corresponding agreement entitling us to process such data. As a result, this use case will comply with the applicable rules on data protection and will ensure the rights of the individuals within the framework of this use case. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** The data that will be accessible or processed in the course of the project is not personal information, and by ensuring proper care and attention in processing and output, this data does not become personally identifiable. Some data is sensitive personal data or business-sensitive data. This use case will only collect and process this data in line with the appropriate regulations and signed agreements with clients and data subjects. This data is subject to strict collection, storage, retention and destruction protocols, and TAA is registered with the ICO. All Outlandish employees are well versed in data handling and the relevant legislation. All Outlandish employees sign agreements to this effect and abide by a strict non-disclosure agreement.
**Evaluation Planning Service (Saker + Brunel University)** All data is confidential and belongs to a client company (or Saker Solutions in the case of prototypes/demonstrators/internal projects). There are no specific ethical issues related to this use case. </td> </tr> <tr> <td> **Intellectual Property Rights** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** According to Spanish law, which is based on European law, databases are protected under intellectual property rights as long as they are intellectual creations meeting the requirements of originality and creativity; it is the structure of the database, the “container”, that is protected, while the data themselves, the “content”, fall outside the scope of such protection. This use case involves the creation of databases, but they are not intellectual creations in the terms mentioned before, so there are no intellectual property rights concerns. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** TAA has agreements in place with all data suppliers for the perpetual use of any non-public data. Outlandish will endeavour to make as much software as possible during the course of the COLA project open source under permissive or share-alike GPL licenses. **Evaluation Planning Service (Saker + Brunel University)** All data will belong to Saker Solutions and/or the client (end user) company. </td> </tr> <tr> <td> **Legal Requirements** </td> </tr> <tr> <td> **Social media data analytics for public sector organisations use case (Inycom + Sarga)** This use case is to be implemented in Spain, so Spanish and European laws will be taken into account. Currently, these are the _Organic Law 15/1999 on Protection of Personal Data_, which transposed _Directive 95/46/EC on the Protection of Personal Data_ into Spanish legislation, and the additional rules implementing these two. From May 2018, Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data will apply (directly effective in all EU countries); for the moment, it is considered an inspiring framework. The framework created by these laws, under which this use case will be developed, is based on the following principles and guidelines:

* transparency, legitimate purpose and proportionality, which mean that data will be:
  * processed fairly and lawfully
  * collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes
  * adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed
  * accurate and, where necessary, kept up to date
  * kept in a form which does not permit the identification of data subjects
* data protection by design, to provide adequate protection from the very design of the system
* ensuring the exercise of the rights of the subjects (which will be expanded with the new Regulation)

Finally, it must be noted that this use case does not involve the processing of sensitive data, so the specific, stricter requirements concerning such data do not apply. **Scalable hosting, testing and automation for SMEs and public sector organisations use case (Audience Agency + Outlandish)** TAA serves as a data processor providing research services to its client organisations under the research exemption of the Data Protection Act.
The Audience Agency does not collect data directly from data subjects. TAA obtains data such as ticket sales records or audience/visitor survey responses from its client organisations and processes this data as part of its research services. It is a requirement that before any client organisation is allowed to commit data, a Data Use and Confidentiality agreement is signed, in which the client organisations warrant that the correct notifications have been given and consent has been obtained from data subjects. Processing of all these data is absolutely critical to TAA’s mission to provide arts organisations with insight into the demographics of their customers and their ticket sales patterns within various socio-demographic groups, as well as information on how well the arts organisations serve different parts of the community. This data will not be made publicly available. The only data that might be made publicly available will be non-sensitive summaries of data and open data. Any data obtained from third parties (through TAA’s services that process clients’ ticketing data, customer surveys, business information and commercially licensed datasets – see also below) is obtained with their full consent and conforms to EU-wide standards of transparency, information, access, erasure and so on. TAA is registered as a data controller with the Information Commissioner’s Office, with registration number ZA009719. **Evaluation Planning Service (Saker + Brunel University)** A data usage and confidentiality agreement must be signed by developers and users. </td> </tr> </table>

**6.1.2 Technology DMP**

### Work package: WP4-WP7

<table> <tr> <th> **Persons:** </th> <th> WP4 Bogdan Despotov </th> </tr> <tr> <td> </td> <td> WP5 Gab Pierantoni </td> </tr> <tr> <td> </td> <td> WP6 Jozsef Kovacs </td> </tr> <tr> <td> </td> <td> WP7 Nicolae Paladi </td> </tr> </table>

<table> <tr> <th> **Data** </th> </tr> <tr> <td> **Existing and New Data** </td> </tr> <tr> <td> Technology-specific data will be IaaS, PaaS and SaaS type data. The COLA infrastructure incorporates one commercial IaaS platform (CloudSigma) and three research IaaS platforms (SICS, SZTAKI and UoW). The infrastructure owners will create and publish infrastructure-related data as WP4 partners. COLA developers and application users will generate and use PaaS and SaaS data produced in WP5-WP7. WP6 will elaborate the MiCADO platform to support the dynamic and secure deployment and run-time orchestration of cloud applications. This work package will develop and manage the source code and Docker/Virtual Machine images of the MiCADO services, but it will not handle data generated by COLA applications. WP5, in cooperation with WP8, will describe applications by creating TOSCA-based Application Description Templates to specify the cloud applications’ service topologies and their policies. WP4, WP6 and WP7, in collaboration with WP5, will define deployment, performance, scalability and security policies. </td> </tr> <tr> <td> **Data Format** </td> </tr> <tr> <td> On the one hand, there will be no specific data format for managing the COLA infrastructure data: WP4 will manage a wide variety of infrastructure-specific data. On the other hand, WP5 and WP6 will use specific data formats. WP5 will describe applications using TOSCA, which is based on YAML; a sketch of such a template is shown below. It will also manage deployment and implementation artifacts of COLA applications. WP6 will handle MiCADO platform data, such as binaries of MiCADO services, scripts, and Docker and Virtual Machine images, using Occopus and TOSCA descriptors. </td> </tr> </table>
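For illustration, the sketch below shows a minimal TOSCA-style Application Description Template and the Python call that would load it. The node and policy names are hypothetical, not taken from an actual COLA deliverable; only `tosca_definitions_version` follows the TOSCA Simple Profile in YAML convention.

```python
# Illustrative sketch: loading a minimal TOSCA-style application description
# in YAML, of the kind WP5 plans to use for COLA applications. Node and policy
# names are hypothetical. Requires the PyYAML package.
import yaml

ADT = """
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    web_app:                      # hypothetical application container
      type: tosca.nodes.Container.Application
      properties:
        image: example/web-app:latest
  policies:
    - scalability:                # hypothetical scaling policy
        type: tosca.policies.Scaling
        properties:
          min_instances: 1
          max_instances: 5
"""

template = yaml.safe_load(ADT)
nodes = template["topology_template"]["node_templates"]
print("Nodes described by the template:", sorted(nodes))
```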
<table> <tr> <td> **Data Producers and Consumers** </td> </tr> <tr> <td> _Data producers_ in COLA will be the system administrators of the COLA infrastructure in WP4, COLA application developers in WP5 and WP8, and MiCADO platform developers in WP6-WP7. Developers in WP5-WP7 will be _data consumers_ of the COLA infrastructure data, while users in WP8 will be consumers of the TOSCA-based application descriptions and MiCADO-specific data. </td> </tr> <tr> <td> **Metadata** </td> </tr> <tr> <td> WP4 will not use metadata to manage COLA infrastructure data. In contrast, WP5-WP7 will widely use metadata to support access to the MiCADO platform and the re-usability of COLA applications. WP5 and WP8 will describe each COLA application in TOSCA. These descriptions will also contain descriptive metadata defined in the TOSCA specification. Additional metadata can be added to TOSCA Application Descriptions to support their sharing in digital markets. WP6 will add metadata to Docker and Virtual Machine images to describe the MiCADO services and COLA applications they contain. This metadata may contain version control information, previous versions, release tags, commit times, etc. </td> </tr> <tr> <td> **Data Management** </td> </tr> <tr> <td> **Storage and Backup** </td> </tr> <tr> <td> WP4 will use Storpool storage, located in Zurich, to store data about the COLA infrastructure. It protects data and guarantees data integrity via a 64-bit checksum and version for each sector maintained by the storage system. WP4 will provide several existing CloudSigma backup solutions, for example the snapshot functionality, among which service owners can select the required backup solution. WP5 will upload TOSCA-based application descriptions to GitHub. Backup copies will also be available on the COLA Pydio (https://cola.fst.westminster.ac.uk) and gdrive. After installing the COLA repository and digital market, product-quality application descriptions will also be stored in these facilities. WP6 will also store the binaries, source code and documentation of MiCADO services in GitHub. Documentation of MiCADO services will also be uploaded to the COLA Pydio and website. WP6 will publish Docker and Virtual Machine images on the Docker hub. </td> </tr> <tr> <td> **Access to Data and Data Sharing** </td> </tr> <tr> <td> WP4 will provide access to data of the COLA infrastructure from within the COLA infrastructure. Both application descriptions, including their deployment and implementation artifacts (WP5), and the binaries, images and source code of MiCADO services (WP6) will follow the Open Access policy. As a result, they will be publicly available; no restriction is planned regarding their accessibility. </td> </tr> <tr> <td> **Archiving and Preservation** </td> </tr> <tr> <td> In WP4, CloudSigma uses a block storage system in the COLA infrastructure to archive and store VMs. This block storage solution is able to provide implicit, non-disruptive backups at the storage block level for all user data. This includes any data contained within virtual drives, including application data, databases, all operating system information, etc. It provides full drive-level backup of customer data. It backs up all end-user computing data each night and retains seven days of rolling snapshots. In addition to the automatic backup system, users are able to create point-in-time snapshots of their drives, which can later be cloned and upgraded to create stand-alone drives.
A snapshot can be created on demand while the server is running, thus in no way affecting the performance or availability of the systems. By using snapshots, customers can protect themselves from data corruption or use them for auditing purposes. WP5 and WP6 use the archiving and versioning services of GitHub and Docker hub to create back-ups of application descriptions, the binaries and source code of MiCADO services, and images of applications and MiCADO services. </td> </tr> <tr> <td> **Quality Assurance** </td> </tr> <tr> <td> The integrity of the COLA infrastructure data is guaranteed by Storpool's storage solution. Redundancy is provided by multiple copies (replicas) of the data written synchronously across the cluster. Users can set the number of replication copies, with the CloudSigma cloud configured to store three copies of all data. This technology is superior to RAID in both reliability and performance. Unlike RAID, the system replication distributes copies across different servers. As such, in the case of a server or component failure, data that is stored on the affected server is not lost. The integrity of application descriptions and MiCADO services will be guaranteed by GitHub, where these types of data are stored. The integrity of the image data is managed and provided by Docker hub. </td> </tr> <tr> <td> **Security, IPR and Ethics** </td> </tr> <tr> <td> **Security** </td> </tr> <tr> <td> Each COLA partner's data security policies must comply with the criteria set out in the COLA DMP strategy document (deliverable D1.3). CloudSigma’s cloud solution, the only commercial infrastructure made available for use in the project, is ISO-27001 (2015) certified and PCI-DSS compliant. In addition, CloudSigma’s data centre in Zurich is covered by the following certifications:

* SAS 70 compliant data centre
* ISO-9001:2008 for quality management systems
* ISO-27001:2005 for information security management systems
* Gold LEED certification for environmental sustainability
* FACT certification – key data protection certification for the European film and broadcasting industry

Application descriptions (WP5) and MiCADO platform data (WP6) will require access control mechanisms to be used for data confidentiality and integrity protection. These data types do not have any further security issues because they are openly available and their accessibility is not restricted. </td> </tr> <tr> <td> **Ethics and Privacy** </td> </tr> <tr> <td> WP4-WP7 do not foresee any ethics and/or privacy issues in the project. </td> </tr> <tr> <td> **Intellectual Property Rights** </td> </tr> <tr> <td> Intellectual property rights related to WP4-WP7 will be managed in accordance with the IPR management plan outlined in the DoW (Section 3.2.6 Management of Knowledge and Intellectual Property) and the strategy for addressing issues formalised in the Consortium Agreement, and will be monitored and controlled by T3.2. The Consortium Agreement lists all background included in and excluded from access rights. </td> </tr> <tr> <td> **Legal Requirements** </td> </tr> <tr> <td> One of the key factors regarding data protection relates to the physical location of stored data and the implications of the differences in legislation and regulation between jurisdictions. The four COLA infrastructure providers (SICS, CloudSigma, SZTAKI and UoW) in WP4 reside in four European countries. CloudSigma provides its commercial IaaS platform in Zurich (CH), but also makes available its resources located in Frankfurt (DE) if required.
The SICS academic cloud is located in Luleå (SE), the SZTAKI academic cloud in Budapest (HU) and the UoW academic cloud in London (UK). Data protection laws are consistent across EU countries, due to the EU Data Protection Directive (Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data) adopted in 1995. However, there are some differences in legislation and regulation between Switzerland and the EU, as Switzerland only partially implemented the EU Directive on the Protection of Personal Data in 2006. One of the main differences is that, unlike the data protection laws of many other countries, the Swiss Federal Data Protection Act (DPA) protects personal data pertaining to both natural persons and legal persons. Special requirements apply to the transfer of personal data outside of Switzerland. Depending on the circumstances, the Swiss Federal Data Protection and Information Commissioner must be informed before personal data is transferred outside of Switzerland. This is an important factor for companies or individuals storing sensitive information if they want to circumvent the US Patriot Act or the US Safe Harbour or Data Protection acts. CloudSigma has set up its corporate structure in such a way that each cloud location is managed by a local entity and is therefore subject to that jurisdiction. This allows customer data to be treated in accordance with the country where it physically resides, essentially enabling customers with sensitive information to circumvent the Patriot Act. CloudSigma’s holding company is Swiss, which means it has no concept of extra-territorial jurisdiction (unlike US holding companies). This means CloudSigma’s US entity is subject to US law only and its Swiss cloud location is subject to Swiss law only. New operational companies are opened for each new location. This way, CloudSigma’s customers can make informed decisions about where they store their data according to the data protection laws of the relevant jurisdiction. No legal requirements are foreseen for WP5-WP7. </td> </tr> </table>

**6.1.3 Exploitation DMP**

### Work package: WP3

**Person:** Nicola Fantini

<table> <tr> <th> **Data** </th> </tr> <tr> <td> **Existing and New Data** </td> </tr> <tr> <td> WP3 will generate three types of data: exploitation, IPR and sustainability data. _Sustainability-related data_. To validate the economic feasibility of the implemented COLA use cases, WP3 will collect sustainability-related data. COLA use case owners will provide this data based on their business models, e.g. detailed use case description, partner resources, customer data, revenue structure, cost structure, value proposition details, etc. _Exploitation-related data_. To contribute to the commercial exploitation planning, WP3 will collect exploitation-related data from COLA partners. They will forward their exploitation plans to WP3, including the specific metrics that will be used to measure the economic impact of COLA use cases implemented as cloud-based solutions on the MiCADO platform. _IPR-related data_. To investigate and handle IPR management issues, WP3 will request IPR-related data from the partners, such as data related to any IP brought to the project and IP generated inside and outside of the project. WP3 will process and describe all three types of data in D3.1-D3.3. The reports will be used by other COLA WPs and also submitted to the EC.
</td> </tr> <tr> <td> **Data Format** </td> </tr> <tr> <td> Most data will be collected and produced in .doc and .pdf formats. The .pptx format will also be used for better data presentation and visualization. </td> </tr> <tr> <td> **Producers and Consumers** </td> </tr> <tr> <td> _Data producers._ COLA project partners will provide sustainability, exploitation and IPR data about the COLA infrastructure, the MiCADO platform and the COLA use cases. WP3 will process and describe this data in D3.1-D3.3. _Data consumers._ COLA work packages will be the consumers of the exploitation, IPR and sustainability data presented in D3.1-D3.3. The key consumers will be COLA use case owners and WP2. COLA use case owners will use this data to improve the exploitation and sustainability of their applications. WP2 will use public exploitation and sustainability data in dissemination activities to promote COLA, particularly how SMEs can use a cloud-based platform to run their applications. </td> </tr> <tr> <td> **Metadata** </td> </tr> <tr> <td> WP3 will not use metadata to describe sustainability, exploitation and IPR data. </td> </tr> <tr> <td> **Data Management** </td> </tr> <tr> <td> **Storage and Backup** </td> </tr> <tr> <td> All WP3-related data, such as the sustainability data provided by partners, the exploitation and IPR data collected, the corresponding deliverables and reports, agendas and minutes of meetings, etc. will be uploaded to the COLA storage - a Pydio repository - available at https://cola.fst.westminster.ac.uk/. </td> </tr> <tr> <td> **Access to Data and Data Sharing** </td> </tr> <tr> <td> The Pydio-based COLA storage has access-rights-based access control. Since exploitation and sustainability data is private, the repository’s access control enables only COLA project partners to manage (upload, search, select, edit, etc.) these data types. </td> </tr> <tr> <td> **Archiving and Preservation** </td> </tr> <tr> <td> There is no specific archiving policy on data collected and/or processed by WP3. The data is stored in the COLA storage in the format in which it was originally collected or provided. </td> </tr> <tr> <td> **Quality Assurance** </td> </tr> <tr> <td> There are three phases of quality control of exploitation, IPR and sustainability data. In the first phase, WP3 will process the collected sustainability, exploitation and IPR data, checking whether there is any missing or wrong information. This processing includes correcting, excluding and finalizing this data. In phase 2, the data producers, such as the COLA infrastructure providers, MiCADO platform developers and COLA use case owners, will check the finalized data. This data will be used to compile the WP3 deliverables. In phase 3, the D3.1-D3.3 data will be examined for accuracy, quality, etc. by the internal COLA review process and modified as required. </td> </tr> <tr> <td> **Security, IPR and Ethics** </td> </tr> <tr> <td> **Security** </td> </tr> <tr> <td> All WP3 data is stored in the COLA repository. This storage facility has access-rights-based control that enables/disables access to the data and documents uploaded and stored in the repository. As a result, only Pydio users with the proper access rights can reach WP3 data. </td> </tr> <tr> <td> **Ethics and Privacy** </td> </tr> <tr> <td> The exploitation, IPR and sustainability data does not raise any specific ethical issue (see reports D9.1-D9.2). Exploitation and sustainability data of COLA use cases is confidential.
This data can be shared among project partners only, i.e. neither the data nor the WP3 reports will be available to the general public. In contrast, these data types of the MiCADO platform are not confidential and COLA will make them available to the general public. </td> </tr> <tr> <td> **Intellectual Property Rights** </td> </tr> <tr> <td> COLA use case owners will hold the IPRs for the exploitation and sustainability data and other information they produce. Since this data is confidential, the IPRs will not be transferred, even to project partners, for data distribution and archiving, but the partners will be allowed to use this data as long as they follow the confidentiality requirements. To disseminate this data, WP2 should get a statement from the use case owner who owns the data allowing its dissemination. </td> </tr> <tr> <td> **Legal Requirements** </td> </tr> <tr> <td> The confidentiality of exploitation and sustainability data of COLA use cases raises the issue of how to archive and share this data. WP3 does not archive these data types. Their sharing is implemented through the COLA storage at the technical level. As a result, there is no specific legal issue that must be addressed in WP3. </td> </tr> </table>

**6.1.4 Dissemination and Marketing DMP**

### Work package: WP2

**Person:** Andreas Ocklenburg / Steffen Budweg (CloudSME UG)

<table> <tr> <th> **Data** </th> </tr> <tr> <td> **Existing and New Data** </td> </tr> <tr> <td> WP2 will develop two types of data: dissemination and marketing data, and administrative data. _Dissemination and marketing data_ will incorporate academic and commercial publications, such as COLA leaflets, posters, etc., web content on the COLA and other websites, images, etc. WP2 will produce project deliverables and interim dissemination and marketing reports as _administrative data_. There will be the following deliverables: D2.1 Dissemination Plan (M03), D2.2 First periodic dissemination report (M12), D2.3 User community feedback (M24), D2.4 Final dissemination report (M30), D2.5 Report on standardisation activities (M30). This data type will also include data relevant to the work in WP2, such as agendas, minutes, interim documents, publications, etc. </td> </tr> <tr> <td> **Data Format** </td> </tr> <tr> <td> Dissemination and marketing data will mostly be produced in doc and pdf format. This data might include different data types generated by WP2 and project partners (e.g. marketing material with images, academic and commercial publications, web content, etc.). </td> </tr> <tr> <td> **Data Producers and Consumers** </td> </tr> <tr> <td> _Data producers_ : All project partners will contribute to the dissemination activities under the coordination of WP2. They will collect and forward their dissemination data to WP2, which will produce dissemination and marketing data and lead the dissemination and marketing activities, as well as compiling and submitting the periodic dissemination and marketing reports and WP2 deliverables. _Data consumers_ : They will be the public services and SMEs targeted by WP2 dissemination and marketing activities. </td> </tr> <tr> <td> **Metadata** </td> </tr> <tr> <td> WP2 will use metadata to describe data (e.g. metadata for publications, web content and images, etc.).
WP2 will use the following metadata protocols and standards:

* ISO 19005-1:2005 standard with compliance to PDF/A1 for long-term archival of documents
* ISO 16684-1:2012 standard (XMP) for metadata of documents and images
* ISO 15836 (Dublin Core Metadata Element Set) </td> </tr> <tr> <td> **Data Management** </td> </tr> <tr> <td> **Storage and Backup** </td> </tr> <tr> <td> All WP2 data, including dissemination/marketing materials (flyers, images, posters, etc.) and administrative data (WP2 deliverables and reports), will be uploaded to the COLA storage - a Pydio repository - available at https://cola.fst.westminster.ac.uk/ </td> </tr> <tr> <td> **Access to Data and Data Sharing** </td> </tr> <tr> <td> The COLA storage enforces access control based on access rights. All WP2 data except administrative data, such as agendas and minutes of WP2 meetings, is public. Access to non-public data is restricted to project partners using the access right control of the COLA storage. All public dissemination and marketing data will be shared through the COLA website. </td> </tr> <tr> <td> **Archiving and Preservation** </td> </tr> <tr> <td> WP2 will archive dissemination and marketing documents and materials using the ISO 19005-1:2005 standard (Document management — Electronic document file format for long-term preservation - Part 1) with compliance to PDF/A1. </td> </tr> <tr> <td> **Quality Assurance** </td> </tr> <tr> <td> There will be two phases of quality control of dissemination and marketing materials. In the first phase, WP2 will develop and circulate dissemination and marketing materials among the COLA project partners. In phase 2, the partners will review these materials and provide feedback to WP2. Based on this feedback, WP2 will finalize and publish the dissemination and marketing materials. Quality assurance of the WP2 deliverables will be provided by the internal COLA review process, and the deliverables will be modified as required. </td> </tr> <tr> <td> **Security, IPR and Ethics** </td> </tr> <tr> <td> **Security** </td> </tr> <tr> <td> As described in the "Access to Data and Data Sharing" section, the COLA storage facility has access-rights-based control that enables/disables access to the documents uploaded and stored in the repository. </td> </tr> <tr> <td> **Ethics and Privacy** </td> </tr> <tr> <td> Dissemination and marketing does not raise any ethical issues other than those described in D9.1-D9.2. These reports outline how COLA will address these issues. There are no specific privacy requirements for dissemination and marketing data. </td> </tr> <tr> <td> **Intellectual Property Rights** </td> </tr> <tr> <td> WP2 will follow the IPR management policy of the COLA project. This policy is included in the Grant Agreement. </td> </tr> <tr> <td> **Legal Requirements** </td> </tr> <tr> <td> There are no specific legal requirements for dissemination and marketing data. </td> </tr> </table>

### 6.1.5 Project Management DMP

Work package: WP1 **Person:** Gabor Terstyanszky

<table> <tr> <th> **Data** </th> </tr> <tr> <td> **Existing and New Data** </td> </tr> <tr> <td> Administrative data: Existing administrative data contains the Project Proposal, the Grant Agreement and the Consortium Agreement. New data includes project partners' interim progress reports (every six months), and the annual (M18) and final (M30) project reports. It also contains any data relevant to COLA, the Project Management Board (PMB), the Technical Task Force (TTF) and the Application Task Force (ATF), for example agendas, minutes, etc.
Even though most of the project deliverables are either dissemination or technical reports, WP1 coordinates their writing, reviewing and publishing. Financial data: Existing data is the project budget included in the Project Proposal and finalized in the Grant Agreement. Project partners' new financial data consists of researchers' timesheets and receipts of project-related costs and expenses. Project partners produce an informal short financial report every six months, based on the research staff's timesheets and project-related costs and expenses, for the COLA Financial Officer. Based on these reports, he/she first checks how project partners spend and use their budget, and secondly produces the financial report to be submitted to the EC. </td> </tr> <tr> <td> **Data Format** </td> </tr> <tr> <td> Administrative data is produced in .doc and .pdf formats, while financial data is generated in .doc and .xls formats. </td> </tr> <tr> <td> **Data Producers and Consumers** </td> </tr> <tr> <td> Data producers: Local Project Officers collect and forward project-partner-specific administrative data to the COLA Project Officer, who creates the interim project reports. He coordinates the writing of the annual and final reports, collecting and integrating contributions from the Local Project Officers. They also collect timesheets and receipts from researchers and forward these documents to the Local Financial Officers, who compile the local financial reports. Based on these, the COLA Financial Officer produces the interim, annual and final financial reports. The WPs compile the project deliverables under the COLA Project Officer's coordination, involving the project partners. Data consumers: The EC Project Officers are the consumers of the administrative and financial data and the project deliverables. </td> </tr> <tr> <td> **Metadata** </td> </tr> <tr> <td> COLA does not use any metadata to describe administrative and financial project data. </td> </tr> <tr> <td> **Data Management** </td> </tr> <tr> <td> **Storage and Backup** </td> </tr> <tr> <td> _Administrative data_: All project-level administrative data, such as interim, annual and final project reports, project deliverables, agendas and minutes of meetings, etc. is uploaded to the COLA storage - a Pydio repository - available at https://cola.fst.westminster.ac.uk/. _Financial data:_ Project partners collect and store all local financial data and the relevant documents, such as timesheets, receipts, tickets, etc. Several partners scan these documents and store their electronic versions with other financial data on their storage facilities. Local Project Officers forward project partners' financial information directly to the COLA Financial Officer, who uploads and stores this data on the Central Finance storage of the University of Westminster. See the backup of financial data in the Archiving and Preservation sub-section below. </td> </tr> <tr> <td> **Access to Data and Data Sharing** </td> </tr> <tr> <td> _Administrative data_: All reports except D3.1-D3.3, D4.4, D8.1-D8.4 and D9.1-D9.2 are publicly available through the COLA storage. Access to further administrative data, such as agendas and minutes of project meetings, is restricted to project partners using access right control. </td> </tr> <tr> <td> _Financial data_: Only the COLA and Local Financial Officers, plus the EC Project Officer, can access project partners' financial data. A summary of their financial data is included in the annual and final project reports. This summary is publicly available on the COLA storage.
</td> </tr> <tr> <td> **Archiving and Preservation** </td> </tr> <tr> <td> _Administrative data_: COLA will use the Pydio solution to archive data. _Financial data_: Each project partner's Central Finance storage is archived every 24 hours. </td> </tr> <tr> <td> **Quality Assurance** </td> </tr> <tr> <td> _Administrative data_: COLA set up a quality control procedure to check all project reports. There is a well-defined review process to check the quality of each deliverable. _Financial data_: The COLA and Local Financial Officers check financial data and monitor how project partners spend and use their budget. </td> </tr> <tr> <td> **Security, IPR and Ethics** </td> </tr> <tr> <td> **Security** </td> </tr> <tr> <td> _Administrative data_: The COLA storage facility has access-rights-based control that enables/disables access to the documents uploaded and stored in the repository. _Financial data_: It is uploaded and stored on the Central Finance storages of the project partners, which have sophisticated access control. </td> </tr> <tr> <td> **Ethics and Privacy** </td> </tr> <tr> <td> Not relevant to administrative and financial data </td> </tr> <tr> <td> **Intellectual Property Rights** </td> </tr> <tr> <td> Not relevant to administrative and financial data </td> </tr> <tr> <td> **Legal Requirements** </td> </tr> <tr> <td> Not relevant to administrative and financial data </td> </tr> </table>

### 6.2 Project-level Data Management Guidelines

Considering the activity-level DMPs, WP1 developed Data Management Guidelines for the COLA project. We collected the data, data management, IPR, legal and security requirements of the activity-level DMPs and compiled three tables (see Tables 6.1-6.3). These tables will serve as Data Management Guidelines and will be recommended to both COLA developers and users.
#### Data aspects of activity-level DMPs

<table> <tr> <th> </th> <th> Sarga </th> <th> TAA </th> <th> Saker </th> <th> COLA infrastructure </th> <th> MiCADO platform </th> <th> Exploitation </th> <th> Dissemination & marketing </th> <th> Project management </th> </tr> <tr> <td> data types </td> <td> * tweets * user data </td> <td> * tickets * customer surveys * business info * public data </td> <td> * simulation model * simulation data </td> <td> * infra data </td> <td> * TOSCA templates * source codes + binaries * application images </td> <td> * sustainability data * exploitation data * IPR data </td> <td> * publications * presentations * leaflets * posters * PR images </td> <td> * admin data * financial data </td> </tr> <tr> <td> data formats </td> <td> * JSON </td> <td> * raw data * formatted data * .CSV </td> <td> * models in .FSM * data in SQL </td> <td> * no specific format </td> <td> * YAML descriptions * application artifacts * application images * service images </td> <td> * .doc * .pdf </td> <td> * .doc * .jpg * .pdf </td> <td> * .doc * .pdf * .xls </td> </tr> <tr> <td> data producers </td> <td> * Twitter users </td> <td> * art organisations * social media companies </td> <td> * simulation users </td> <td> * infra sysadmins </td> <td> * application developers * MiCADO developers </td> <td> * project partners </td> <td> * project partners </td> <td> * project partners </td> </tr> <tr> <td> data consumers </td> <td> * civil servants </td> <td> * TAA employees * TAA clients * public </td> <td> * simulation users </td> <td> * infra sysadmins </td> <td> * application users * MiCADO platform users </td> <td> * use case owners * WP2 partners </td> <td> * SMEs * public services </td> <td> * project partners * EC Project Officers </td> </tr> <tr> <td> metadata </td> <td> * XML comments in SQLr </td> <td> * tags used in RDBMS </td> <td> * none </td> <td> * none </td> <td> * TOSCA metadata * image metadata </td> <td> * none </td> <td> * metadata on WP2 data </td> <td> * none </td> </tr> </table>

**Table 6.1: Data aspects of activity-level DMPs**

#### Data management aspects of activity-level DMPs

<table> <tr> <th> </th> <th> Sarga </th> <th> TAA </th> <th> Saker </th> <th> COLA infrastructure </th> <th> MiCADO platform </th> <th> Exploitation </th> <th> Dissemination & marketing </th> <th> Project management </th> </tr> <tr> <td> data storage </td> <td> * tweets in SQLr * user data in MySQL </td> <td> * SQL & noSQL database </td> <td> * SQL database </td> <td> * Storpool storage </td> <td> * GitHub * Docker hub * gdrive * Pydio </td> <td> * Pydio </td> <td> * Pydio </td> <td> * Pydio * financial data -> partner servers </td> </tr> <tr> <td> data backup </td> <td> * daily * weekly </td> <td> * daily, kept for a week </td> <td> * daily </td> <td> * CloudSigma solution </td> <td> * hub + Pydio backups </td> <td> * Pydio backups </td> <td> * Pydio backups </td> <td> * Pydio backups * server backups </td> </tr> <tr> <td> data access </td> <td> * dedicated web interface </td> <td> * public -> anonymized * non-public -> dedicated clients </td> <td> * access-right-based access </td> <td> * access-right-based access via COLA infra </td> <td> * open access </td> <td> * access-right-based access to COLA Pydio </td> <td> * access-right-based access to COLA Pydio </td> <td> * access-right-based access to COLA Pydio + servers </td> </tr> <tr> <td> data archiving </td> <td> * tweets: up to one year * user data: 3-5 years </td> <td> * one week </td> <td> * none </td> <td> * Storpool block storage </td> <td> * GitHub + Docker hub archiving </td> <td> * Pydio archiving </td> <td> * Pydio archiving </td> <td> * Pydio archiving + server archiving </td> </tr> <tr> <td> data quality control </td> <td> * user filters </td> <td> * automatic + manual QC </td> <td> * Saker quality system </td> <td> * Storpool QC </td> <td> * GitHub + Docker QC </td> <td> * 3-phase QC </td> <td> * 2-phase QC </td> <td> * COLA QC procedure </td> </tr> </table>

**Table 6.2: Data management aspects of activity-level DMPs**

#### Security, IPR and Ethics aspects of activity-level DMPs

<table> <tr> <th> </th> <th> Sarga </th> <th> TAA </th> <th> Saker </th> <th> COLA infrastructure </th> <th> MiCADO platform </th> <th> Exploitation </th> <th> Dissemination & marketing </th> <th> Project management </th> </tr> <tr> <td> data security </td> <td> * username + password for non-sensitive & public data </td> <td> * data encryption + multi-factor authentication for sensitive data </td> <td> * access-right-based access </td> <td> * access-right-based access </td> <td> * access-right-based access </td> <td> * access-right-based access </td> <td> * access-right-based access </td> <td> * access-right-based access </td> </tr> <tr> <td> ethics + privacy </td> <td> * not relevant -> no sensitive personal data posted by individuals </td> <td> * no personal data: not relevant * personal data -> EC ethical regulations </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> <td> * COLA ethics policy (see details in COLA D9.1 + D9.2) </td> </tr> <tr> <td> IPRs </td> <td> * no specific requirements </td> <td> * non-public data is protected by IPR </td> <td> * Saker clients own data </td> <td> * COLA IPR policy (see the Grant Agreement) </td> <td> * COLA IPR policy (see the Grant Agreement) </td> <td> * COLA IPR policy (see the Grant Agreement) </td> <td> * COLA IPR policy (see the Grant Agreement) </td> <td> * COLA IPR policy (see the Grant Agreement) </td> </tr> <tr> <td> legal requirements </td> <td> * data usage + confidentiality agreement must be signed </td> <td> * data usage + confidentiality agreement must be signed </td> <td> * data usage + confidentiality agreement must be signed </td> <td> * EU + Swiss laws </td> <td> * none </td> <td> * none </td> <td> * none </td> <td> * none </td> </tr> </table>

**Table 6.3: Security, IPR and Ethics aspects of activity-level DMPs**

# 7 Conclusion

The COLA work packages elaborated activity-oriented DMPs considering the diversity and heterogeneity of the data to be produced and managed in COLA. These DMPs implement the FAIR principles to support full-lifecycle data management:

## Findability

* The DMPs define data formats and give recommendations on the metadata used to describe the data needed and produced in the COLA use cases and the MiCADO platform. COLA will use Digital Object Identifiers (DOI) to identify data.

## Accessibility

* The DMPs specify the access type of data items and objects produced in COLA as public (or open) or private. Technology-specific data, such as MiCADO platform data, will be public. Companies and public services (users of the MiCADO platform) define in the User DMP which data will be public, which data will be private, and who can access it.
## Interoperability

* COLA will use standard data formats, and metadata will follow metadata standards. These details are outlined in the COLA DMP.

## Re-use

* COLA public data will be available to third parties free of charge for scientific purposes, but restrictions may apply for commercial use in compliance with open access regulations.
* The DMPs give recommendations for quality control measures for data produced in the MiCADO platform, considering both technology and user data.

COLA will use the FAIR principles and follow full-lifecycle data management to allow the best possible dissemination, sharing and usage of COLA data. However, as the COLA consortium incorporates data providers and data users with different expertise and data resources to be managed, creating a single Data Management Plan is not a realistic objective. Section 6.2 presents a summary of the activity-level DMPs. This summary will guide developers and users in managing data in the COLA infrastructure. WP1 will monitor how the COLA work packages follow and use the activity-level COLA DMPs. This work package will extend/upgrade these DMPs based on this monitoring, considering new data requirements and focusing on the FAIR principles.
0293_SIMPATICO_692819.md
# Executive summary

This document is the deliverable **“D1.4 – Data management plan v.2”** of the European project “SIMPATICO - SIMplifying the interaction with Public Administration Through Information technology for Citizens and cOmpanies” (hereinafter also referred to as **“SIMPATICO”**, project reference: 692819). The **Data Management Plan (DMP)** describes the types of data that have been generated and/or gathered during the project, the standards used, the ways in which data is exploited and shared (for verification or reuse), and the way in which data is preserved. This DMP has been prepared by taking into account the template of the **“Guidelines on Data Management in Horizon 2020”** [Version 2.1 of 15 February 2016] and the EU General Data Protection Regulation (GDPR). The elaboration of the DMP allowed SIMPATICO partners to address all issues related to data protection, including ethical concerns and the security protection strategy.

SIMPATICO takes part in the **Open Research Data Pilot (ORD pilot)**; this pilot aims to improve and maximise access to and re-use of research data generated by Horizon 2020 projects, such as the data generated by the SIMPATICO platform during its deployment and validation. Moreover, under Horizon 2020 each beneficiary must **ensure open access to all peer-reviewed scientific publications** relating to its results: these publications shall also be made available through the public section of the SIMPATICO website. All these aspects have been taken into account in the elaboration of the DMP.

A first version of the deliverable, “D1.3 – Data management plan v.1”, was released at the beginning of the project (M6) and then updated at M16, after the detailed definition of the project use-cases and the revision of the ethics-related aspects of the project by the Ethics Advisory Board, and taking into account the feedback collected during the 1st year project review. This document represents the final version of the deliverable (M36); it takes into account changes that occurred in the last year of the project and reports the final datasets produced.

Starting from a brief illustration of the SIMPATICO project and of the ethical concerns raised by the project activities, this report describes **the procedures of data collection, storing and processing**, with a final overview of the **SIMPATICO security protection strategy**. It also reports on the datasets, language corpora and open publications produced by the project during its lifetime. This report does not cover the general concerns related to ethics and data protection, as they are the focus of dedicated deliverables already submitted – namely reports “D1.5 – Ethics compliance report” (including its final version with the so-called GDPR Self-Assessment; see subsection 5.3 and Annex 3 of the deliverable), “D8.1 – H – Requirement no. 1”, and “D8.2 – POPD – Requirement no. 2”.

# 1. Introduction

The research activities undertaken in the SIMPATICO project have important data protection aspects, in particular due to the foreseen involvement of public/private stakeholders and citizens and to the necessity to collect, store and process personal data. This deliverable analyses the **data management implications** of the activities undertaken in the project, and describes the guidelines and procedures put in place in order to ensure compliance with data management requirements.
The rest of this section provides **background information on the SIMPATICO project** (Subsection 1.1) and identifies in brief the **ethical issues** raised by the project activities (Subsection 1.2). The project aims **to maximise access to and re-use of research data**, also ensuring **open access to all peer-reviewed scientific publications** relating to its results, in order to pave the path for its data management plan according to the signed Grant Agreement - GA (Subsection 1.4).

Section 2 concerns the detailed **description of SIMPATICO datasets**, according to the requirements set out in Annex 1 - Data Management Plan template of the “Guidelines on Data Management in Horizon 2020” [1]: (a) the handling of research data during and after the project; (b) what data are collected, processed or generated; (c) what methodology & standards have been applied; (d) whether data has been shared/made open access and how; (e) how data has been curated and preserved. Finally, Section 3 presents the **SIMPATICO security protection strategy**.

## 1.1. SIMPATICO in brief

SIMPATICO's goal is **to improve the experience of citizens and companies in their daily interactions with the public administration** by providing a **personalized delivery of e-services** based on advanced **cognitive system technologies** and by promoting an **active engagement of people** for the continuous improvement of the interaction with these services. The SIMPATICO approach is realised through a platform that can be deployed on top of an existing PA system and allows for **a personalized service delivery** without having to change or replace its internal systems: a process often too expensive for a public administration, especially considering the cuts in resources imposed by the current economic situation.

The goal of SIMPATICO is accomplished through a solution based on the **interplay of language processing, machine learning and the wisdom of the crowd** (represented by citizens, business organizations and civil servants) **to change for the better the way citizens interact with the PA.** SIMPATICO **adapts the interaction process** to the characteristics of each user; **simplifies** text and documents to make them understandable; **enables feedback for the users** on problems and difficulties in the interaction; and **engages civil servants, citizens and professionals** so as to make use of their knowledge and integrate it in the system (Fig. 1).

Figure 1: SIMPATICO concept at a glance

The project aims can be broken down into the following **smaller research objectives (ROs)**.

**RO1. Adapt the interaction process with respect to the profile of each citizen and company** (PA service consumer), in order **to make it clear, understandable and easy to follow**.

* A **text adaptation** framework, based on a **rich text information layer** and on machine learning algorithms capable of **inducing general text adaptation operations** from **few examples, and of customizing these adaptations to the user profiles.**
* **A workflow adaptation engine** that takes user characteristics and tailors the interaction according to the user's profile and needs.
* A feedback and annotation mechanism that **gives users the possibility to visualize, rate, comment, annotate and document the interaction process** (e.g., underlining the most difficult steps), so as to provide valuable feedback to the PA, further refine the adaptation process and enrich the interaction.

**RO2. Exploit the wisdom of the crowd to enhance the entire e-service interaction process.**
* An **advanced web-based social question answering engine (Citizenpedia)** where citizens, companies and civil servants **discuss and suggest potential solutions and interpretations for the most problematic procedures and concepts.**
* A **collective knowledge** database on e-services that is used to simplify these services and improve their understanding.
* An **award mechanism** that **engages users and incentivizes them to collaborate** by giving them **reputation** (a valuable asset for professionals and organizations) and **privileges** (for the government of Citizenpedia – a new public domain resource) according to their contributions.

**RO3. Deliver the SIMPATICO Platform, an open software system that can interoperate with PA legacy systems.**

* A platform that **combines consolidated e-government methodologies with innovative cognitive technologies** (language processing, machine learning) at different levels of maturity, enabling their experimentation in more or less controlled operational settings.
* An interoperability platform that enables an **agile integration of SIMPATICO's solution with** PA legacy systems and that allows the exploitation of data and services from these systems with the SIMPATICO adaptation and personalization engines.

**RO4. Evaluate and assess the impact of the SIMPATICO solution**

* Customise, deploy, operate and evaluate the SIMPATICO solution on **three use-cases in two EU cities** – Trento (IT) and Sheffield (UK) – **and one EU region** – Galicia (ES).
* **Assess the impact** of the proposed solution in terms of **increase in competitiveness, efficiency of interaction and quality of experience.**

### 1.1.1. SIMPATICO technical framework and infrastructure

The SIMPATICO project intends to provide a **software platform** incorporating technical innovations to enhance **the efficiency, effectiveness and inclusiveness of public services**. To this aim, SIMPATICO collects, generates and utilizes both personal and other data in a complex way. For what concerns this deliverable (consumption, production and storage of data), the key SIMPATICO components are the following:

1. **Citizen Data Vault**: it is the component that takes care of personal data exchange between a user and the SIMPATICO components. It is a distributed repository of the citizen (or company) profile and related information. It is continuously updated through each interaction and is used to automatically pre-fill forms. In this way, the citizen gives the PA the same information only once, as the information is stored in the vault and used in all following interactions;
2. **Human Computation (Citizenpedia):** SIMPATICO fosters citizens' involvement by providing Citizenpedia, a hybrid of Wikipedia and a collaborative question answering engine, and by sharing improvements on public resources on a semi-automatic basis. Citizens, companies and civil servants discuss and suggest potential solutions and interpretations for the most problematic procedures and concepts. These interactions further refine the user profile and are stored in the Citizen Data Vault to serve as the basis for the adaptation of future interactions. Public servants are able to moderate citizens' comments and suggestions to prevent wisdom-of-the-crowd bias.
The knowledge collected by a user on a specific e-service (e.g., a request for clarification or the explanation of a concept) can propagate and improve the understanding and interaction of potentially all users and e-services. An award mechanism is designed that engages users and incentivizes them to collaborate by giving them reputation (a valuable asset for professionals and organizations) and privileges.
3. **SIMPATICO Adaptation Engine:** it is a cognitive system that makes use of innovative text processing and machine learning algorithms to adapt the text and the workflow of the interaction according to the user profile. The text adaptation engine adapts the text of the forms and of the other documents to make it more understandable and to clarify complex elements, while the workflow adaptation engine adapts the interaction process itself by presenting the citizen only the elements that are relevant for his/her profile (e.g., if the citizen is not a foreigner, he/she is not presented with the section of a form reserved for foreigners). The adaptation engine exploits data collected on the interactions of the users, using both implicit and explicit techniques; these data are stored in the “User Profile” and “Log” components of SIMPATICO. These components are highlighted in Figure 2, depicting the SIMPATICO conceptual architecture.

Figure 2: SIMPATICO Platform conceptual architecture and main components

### 1.1.2. SIMPATICO pilots

The piloting of the SIMPATICO platform is planned in two European cities (**Trento and Sheffield**) and one region (**Galicia**) in Italy, Spain and the United Kingdom (UK), through a **two-phase use-case validation.** The stakeholders engaged in the **three use-cases** were selected for their experience and interest in e-services, as well as for the different socio-cultural backgrounds of the three regions. In this way, the Consortium has the opportunity to validate the effectiveness of the project results in **contexts which differ in the number and heterogeneity of citizens and their social and cultural background**. There are indeed important **differences in the technological ecosystems**, with Trento and Sheffield having just started the process of digitalization of their services to citizens and businesses (this process happens in alignment and integration with the SIMPATICO activities), and Galicia having a mature and consolidated e-service delivery infrastructure (thus allowing to study the deployment of SIMPATICO on top of an already operating system). The contexts also **differ from the point of view of the number and heterogeneity of end-users and of the variety and maturity of e-services** (**see deliverable “D6.2 – Use-case planning & evaluation v2”**).

The tables below provide a short description of the SIMPATICO pilots, summarising the general background and purpose of the use cases, as well as information on recruitment procedures, personal and sensitive data processing, and vulnerable groups involved in the experimentations.

<table> <tr> <th> **TRENTO PILOT** </th> </tr> <tr> <td> **General background** Trento is the capital of the Autonomous Province of Trento in Italy. It is a cosmopolitan city of 117.317 inhabitants. The digitalization of all interactions between the PA and its citizens is a priority for Trento, and the city is currently working on a strategic project in this area. Trento has already done much to improve interactions with its citizens.
Trento has already supported submitting applications through certified e-mail, by sending the filled application documents and a scan of the identity document and signature. As part of its “smart city” strategy, Trento is working to realize a new e-service portal: it serves as a “one-stop shop” or unique access point that offers integrated and facilitated access to all the various services. With this new portal, it is possible for citizens and businesses to authenticate using smart service cards or one-time password devices, and to complete the interaction online. The Municipality of Trento has adopted “Sportello Telematico”, an end-to-end solution provided by the GLOBO srl company, specifically targeting the digitalization of modules for service provision by the PA. Within this solution, the digital module is a composition of sections of organic information (e.g., birth data section, residence data section, real estate registry data section). The logic of the interaction with an information section is explicitly mapped by the module designer. The integrations with legacy systems are handled via a centralized REST web service, which routes the proper service request to the right data source service. Finally, the solution supports module hierarchies, which guarantees the definition of a well-organized digital module library.

**Purpose of the use case** The main specific purpose of the first SIMPATICO experiment phase in Trento is to validate the integration between the Trento e-service portal and the SIMPATICO solution, with the final aim to evaluate the usability of the SIMPATICO solution. The experiment has targeted the adoption of the SIMPATICO solution for all the new e-services that have been published by the Public Administration during the project lifetime.

**Recruitment procedures** The experimentation is structured in two phases: (1) a pre-evaluation phase, where the e-services and the SIMPATICO solution have been evaluated in a controlled environment by a citizen panel representative of the e-service user community; (2) an evaluation phase, where the e-services and the SIMPATICO solution have been evaluated in a production environment open to everyone.

**Personal and sensitive data processing** For both the above-mentioned e-services, the project collected demographic information from the participants (e.g., gender, age). More specifically, the digital module of the childcare service requires the specification of further personal data (i.e., parent/custodial records, child records, family work conditions, family economic conditions) and sensitive information (i.e., whether the family is in charge of the social care service; whether the child has some form of disability).

**Vulnerable groups** Children, persons with disabilities, and immigrant or minority communities. Please note that:
* the use case involved only participants capable of giving consent (e.g., we involved only the legal guardians and/or carers of children);
* the Informed Consent Form has been translated into Italian;
* appropriate efforts have been made to ensure fully informed understanding of the implications of participation (i.e., participants shall understand all the proceedings of the research, including risks and benefits). </td> </tr> </table>

<table> <tr> <th> **GALICIA PILOT** </th> </tr> <tr> <td> **General background** Galicia is an autonomous community of Spain and historic nationality under Spanish law.
It has a population of 2.717.749 inhabitants and a total area of 29.574,4 km². The Xunta de Galicia is the collective decision-making body of the government of the autonomous community of Galicia. The Xunta has at its disposal a vast bureaucratic organization based at Santiago de Compostela, the Galician government capital. According to data provided by the IGE (Instituto Galego de Estatistica), the number of Galician elderly inhabitants is increasing alarmingly. Furthermore, the socioeconomic indicators for Galicia show a number of particular needs that make it suited for e-services improvement: a sparse distribution of the population, especially in the rural parts of the region. From those regions, people often migrate to the richer coastal areas and other Spanish regions. This has resulted in large rural areas with low population density, where access to public services is harder. Consequently, there is a big gap in the usage of e-services in Galicia in the segment of the population older than 55.

**Purpose of the use case** The main specific purpose of the use case is to analyse and validate the technological acceptance of elderly groups using the selected Xunta ("Government of Galicia") e-services and the SIMPATICO solution. This analysis and validation assessed both (1) discretionary usage and satisfaction, to measure acceptance, and (2) the effectiveness and efficiency of the e-service usage improved by SIMPATICO. The main target audience is the elderly, and three e-services have been selected:

* Grants for the attendance to the wellness and spas program;
* Individual grants for personal autonomy and complementary personal assistance for disabled people;
* Assessment of degree of disability.

**Recruitment procedures** This use case was a closed experimentation. The participants have been recruited by three associations:

* FEGAUS (Federation of Associations of alumni and ex alumni of the Senior University Programs) provides elderly users. These users are between 55 and 75 years old. They have a medium-high technological profile, i.e., they are able to autonomously access the internet and use modern devices such as smartphones and tablets.
* ATEGAL (Association for Lifelong Learning) provides elderly users. These users are adults over 55 years of age, with a medium cultural level. The technical level of the ATEGAL members is lower than that of the FEGAUS members.
* COGAMI is an association for people with disabilities of all ages. The technical level of the COGAMI members is heterogeneous, from entry-level users to experienced ones.

**Personal and sensitive data processing** The project collected demographic information from the participants (e.g., gender, age) and whether they have physical disabilities (especially in the case of COGAMI users).

**Vulnerable groups** Elderly people and persons with disabilities. Please note that:
* the use case involved only participants capable of giving consent (e.g., we involved only persons with physical disabilities);
* the Informed Consent Form has been translated into Spanish and Galician;
* appropriate efforts have been made to ensure fully informed understanding of the implications of participation (i.e., participants shall understand all the proceedings of the research, including risks and benefits). </td> </tr> </table>

<table> <tr> <th> **SHEFFIELD PILOT** </th> </tr> <tr> <td> **General background** Sheffield is a city and metropolitan borough in South Yorkshire, England (UK).
It is England's third largest metropolitan authority, with 551.800 people. Sheffield is an ethnically diverse city, with around 19% of its population from minority ethnic groups. The largest of those groups is the Pakistani community, but Sheffield also has large Caribbean, Indian, Bangladeshi, Somali, Yemeni and Chinese communities. More recently, Sheffield has seen an increase in the number of overseas students and in economic migrants from within the European Union. It is estimated that migrants living in Sheffield actively speak at least 40 languages. Although a significant volume of information is openly available on the Sheffield City Council (SCC)'s website (http://www.sheffield.gov.uk/), current interactions between migrants and Sheffield City Council are mostly done in person or over the phone. An intended outcome is that more users of council services will prefer to use digital channels rather than traditional face-to-face, email and telephone contact.

**Purpose of the use case** The main specific purpose of the first SIMPATICO experiment phase in Sheffield is to validate the implementation of the SIMPATICO technologies into the Sheffield City Council website, with the final aim to evaluate the usability of the SIMPATICO solution. The experiment has been based on three different e-services:

* School Attendance (i.e., it aims to inform parents, education workers and general citizens about the importance of school attendance by children. The following tasks are presented on the page: information advising why school attendance is important; a form to report suspected truancy; payment of term-time absence fines);
* Parenting Skills Course (i.e., it aims to inform parents about the support provided by the city council and external partners to equip them with better parenting skills);
* Young Carers (it aims to support and provide information for people under 21 who look after someone else. All young carers under 18 have the right to an assessment).

**Recruitment procedures** The Sheffield pilot has been delayed due to several organizational and technical issues. This caused the cancellation of the pre-evaluation phase for the phase-two evaluation and the delay of the evaluation until the last month of the project. At the time of writing, the Sheffield evaluation is being organized with the same structure as the Xunta de Galicia evaluation, that is, a closed evaluation with no personal data collected. For this reason, Sheffield's datasets are not yet released on Zenodo, unlike in the other two use-cases.

**Personal and sensitive data processing** The project collects demographic information from the participants (e.g., gender, age), including sensitive information on ethnic or racial origin.

**Vulnerable groups** Immigrants or minority communities, minors, possibly persons with disabilities. Please note that:
* the use case involves only participants capable of giving consent (e.g., we involved only the legal guardians and/or carers of minors);
* the Informed Consent Form has been translated into English;
* translation services for immigrants or minority communities have been provided (when needed);
* appropriate efforts have been made to ensure fully informed understanding of the implications of participation (i.e., participants shall understand all the proceedings of the research, including risks and benefits). </td> </tr> </table>

## 1.2. SIMPATICO ethical issues
The SIMPATICO consortium has been committed to **perform a professional management of any ethical issue** that could emerge in the scope of the activities of the project, also through the support of its **Ethics Advisory Board** (see deliverable “D1.5 – Ethics compliance report”). For this reason, the consortium identified relevant ethical concerns already during the preparation of the project proposal and, then, during the preparation of the Grant Agreement. During this phase, the European Commission also carried out an ethics scrutiny of the proposal, with the objective of verifying the respect of ethical principles and legislation.

With regard to SIMPATICO, the research entails specific ethical implications, involving human subjects and risks for the protection of personal data [2] [3]. In particular, the **SIMPATICO ethical issues (requirements)**, as reported in the European Commission ethics scrutiny report and acknowledged by the SIMPATICO project, are the following:

### 1.2.1. Protection of personal data – “D8.2 – POPD – Requirement no. 2”

1. _Copies of ethical approvals for the collection of personal data by the competent University Data Protection Officer/National Data Protection authority must be submitted by the coordinator to REA before commencement of data gathering._
2. _Clarification and if relevant justification must be given in case of collection and/or processing of personal sensitive data. Requirement needs to be met before commencement of relevant work._
3. _The applicant must explicitly confirm that the existing data are publicly available._
4. _In case of data not publicly available, relevant authorisations must be provided, requirements to be met before grant agreement signature._

SIMPATICO involves **collecting and processing personal data** (i.e., any information which relates to an identified or identifiable natural person, such as name, address, email) and **sensitive data** (e.g., health, sexual life, ethnicity). The **Citizen Data Vault** is the component that takes care of personal and sensitive data exchange between a user and the SIMPATICO components. Personal and sensitive data has been and will be made **publicly available** (e.g., the data of **Citizenpedia**) only after an **informed consent** has been collected and suitable **aggregation and/or pseudonymization techniques** have been applied. Mechanisms for encryption, authentication, and authorization (e.g., TLS protocol, Single-Sign-On implementations, Policy Enforcement Point for XACML) have been exploited in the processes, so as to ensure the satisfaction of core **security and data protection requirements**, namely confidentiality, integrity, and availability. For further details, please see the sections below and deliverables **“D1.5 – Ethics compliance report”** and **“D8.2 – POPD – Requirement no. 2”**.
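To make the pseudonymization step more concrete, the following is a minimal sketch of one common keyed-pseudonymization technique (HMAC-SHA256). It is an illustration only, not the actual SIMPATICO implementation: the key value, field names and example record are assumptions made for this example.

```python
import hmac
import hashlib

# Illustrative key only; in practice a secret of this kind would be held and
# rotated by the data controller and never published.
SECRET_KEY = b"example-key-held-by-the-data-controller"

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym.

    Without the secret key, the mapping cannot be reversed or recomputed,
    which distinguishes keyed pseudonymization from plain hashing.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record: the direct identifier is replaced before publication,
# while a non-identifying attribute is kept for analysis.
record = {"email": "citizen@example.org", "age_band": "35-44"}
published_record = {
    "user_id": pseudonymize(record["email"]),
    "age_band": record["age_band"],
}
print(published_record)
```

The same identifier always yields the same pseudonym, which preserves linkability across records for analysis while removing the direct identifier; re-identification remains possible only for whoever holds the key, which is why key custody stays with the data controller.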
The Consortium complies with the requirements of: a) the **Directive 95/46/EC** of the European Parliament and of the Council of 24 October 1995 (and subsequent modifications and supplements) on the protection of individuals with regard to the processing of personal data and on the free movement of such data; b) the **Regulation (EU) 2016/679** (General Data Protection Regulation – GDPR) of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC; c) the **national legislation** of the SIMPATICO pilots (i.e., Italy, Spain, and the United Kingdom) in the field 1 [4] (see also “D1.5 – Ethics compliance report”).

In the context of the SIMPATICO project, Fondazione Bruno Kessler is the **data controller** (i.e., the entity that is in control of the processing of personal data and is empowered to take the essential decisions on the purposes and mechanisms of such processing). **Data processors** (i.e., any partner other than an employee of the data controller who processes the data on behalf of the data controller) are all the other members of the SIMPATICO Consortium.

According to the new EU Regulation 2016/679, data subjects have a **right to access and port data, to rectify, erase and restrict their personal data, to object to processing** and, if processing is based on consent, **to withdraw consent**. In particular, SIMPATICO complies with the GDPR as follows:

1. **Subject access, rectification and portability:**
* FBK, as data controller, on request: confirms if it processes an individual's personal data; provides a copy of the data (in commonly used electronic form); and provides supporting explanatory materials. Data subjects can also demand that their personal data be ported to them or to a new provider in machine-readable format. Requests must be met within one month, and any intention not to comply must be explained to the individual. Access rights are intended to allow individuals to check the lawfulness of processing, and the right to a copy should not adversely affect the rights of others.
* For what concerns the personal data submitted by the user in the interaction with the e-services, the Citizen Data Vault provides users with explanations on how and which personal information has been collected during their interaction with e-services. At the first usage of the CDV, users are informed that at any moment, during their interaction with e-services, they can, by clicking the appropriate buttons, withdraw the collection of data and export a copy of the collected data in an open format. Users can choose between two types of open format: CSV and JSON.

2. **Right to erasure (“right to be forgotten”) and right to restriction of processing:**
* Individuals can require data to be "erased" when there is a problem with the underlying legality of the processing or where they withdraw consent; the individual can require the controller to "restrict" processing of the data whilst complaints (for example, about accuracy) are resolved, or if the processing is unlawful but the individual objects to erasure. FBK, as SIMPATICO data controller, having made data available to other subjects that is then subject to a right-to-erasure request, is required to notify the others processing that data with details of the request.
* The Citizen Data Vault module has been designed to enable individuals to require data to be erased.
Furthermore, according to the "consent-based" approach, the user can at any moment withdraw consent or restrict the type of data stored by the CDV during the interaction with e-service forms.

3. **Rights to object:**
* There are rights for individuals to object to specific types of processing, such as processing for research or statistical purposes;
* SIMPATICO meets the obligations to notify individuals of these rights at an early stage through the informed consent form and its information sheet;
* Online services provided by the PAs involved in the project, and extended by the advanced techniques developed by SIMPATICO, offer their own methods of objecting.

More information can be found in **D1.5 – Ethics compliance report.**

### 1.2.2. Humans - “D8.1 – H – Requirement no. 1”

1. _Details on the procedures and criteria that will be used to identify/recruit research participants must be provided._
2. _Detailed information must be provided on the informed consent procedures that will be implemented._

SIMPATICO involves **work with humans** (‘research or study participants’): according to the EC, this covers the collection of personal data, interviews, observations, tracking or the secondary use of information provided for other purposes. End-users (i.e., citizens and businesses) are **engaged in the project use-cases** to test the functionalities provided by the SIMPATICO solution for the usage of e-services. Specific **engagement campaigns** have been defined and executed for each use-case. The use-cases involve **only voluntary participants aged 18 or older and capable of giving consent**, who have been informed on the nature of their involvement and on the data collection/retention procedures through an **informed consent form** before the commencement of their participation. **Terms and conditions** have been transparently communicated to the end-users by means of an **information sheet** including descriptions of, e.g., the purpose of the research, the adopted procedures, and the data protection and privacy policies. SIMPATICO pilots involved certain **vulnerable groups**, **e.g., elderly people, persons with physical disabilities, and immigrants.** For further details, please see deliverables **“D1.5 – Ethics compliance report”** and **“D8.1 – H – Requirement no. 1”**.

### 1.2.3. Vulnerable groups

In addition to the above-mentioned ethical requirements, in the context of this deliverable it is also important to specify that SIMPATICO pilots involved certain **vulnerable groups**: **e.g., elderly people, persons with physical disabilities, and immigrants**. Please note that all the research participants have the **capacity to provide informed consent**: individuals who lack the capacity to decide whether or not to participate have been appropriately excluded from the research. In any case, taking into account the scope and objectives of the research, researchers should be **inclusive in selecting participants**. Researchers did not exclude individuals from the opportunity to participate in research on the basis of attributes such as culture, language, religion, race, sexual orientation, ethnicity, linguistic proficiency, gender or age, unless there was a valid reason for the exclusion.

Data on vulnerable groups could be misused for stigmatisation, discrimination, harassment or intimidation. Concern for **the rights and wellbeing of research participants** lies at the root of ethical review.
The perception of subjects as vulnerable is likely to be influenced by diverse cultural preconceptions and so regulated differentially by localised legislation. It is likely to be one of the areas where researchers **need extra vigilance to ensure compliance with laws and customs**. Some vulnerabilities may not even be obvious until research is actually being conducted. To reduce the risk of enhancing the vulnerability/stigmatisation of the above-mentioned individuals, the SIMPATICO **Ethics Advisory Board** (see below) provided **specific assessments of the vulnerable groups** involved, prior to the commencement of the pilots' activities, during its meetings in 2017/2018 and its final meeting. For further details, please see deliverables **“D1.5 – Ethics compliance report”** and **“D8.1 – H – Requirement no. 1”**.

## 1.3. SIMPATICO Ethics Advisory Board

All the above-mentioned deliverables have been assessed and validated during both the first meeting of the **SIMPATICO Ethics Advisory Board (EAB)** and the subsequent meetings during the project lifespan (see “D1.5 – Ethics compliance report”). The board is **competent to provide the necessary authorizations** when the collection and processing of personal (or sensitive) data is part of the planned research, with the validation of national and/or local Data Protection Authorities if needed. This board is led by an **ethics adviser** external to the project and to the host institution, totally independent and free from any conflict of interest. In addition to the external ethics adviser, the EAB was initially composed of **one expert representative from each member of the SIMPATICO Consortium** [5]. After the Review Meeting of May 18 2017, the Consortium was asked to change the composition of the EAB, adding **two further experts** to the board and ensuring **gender balance** in decision-making. For these reasons, during the SIMPATICO PMB meeting of June 22-23 2017, the PMB members identified possible experts, who were contacted during the month of July 2017. The two newly appointed members first participated in the EAB meeting of November 30 2017. Members of the Ethics Board are listed in “D1.5 – Ethics compliance report” with the name and contact information of the persons appointed, the terms of reference for their involvement, and their declarations of no conflict of interest.

The **reference national and/or local Data Protection Authorities** competent to provide the above-mentioned SIMPATICO EAB with the necessary **instructions/authorizations/notifications** for each pilot are the following [6] [7] [8]:

**Trento pilot (Italy): the Italian Data Protection Authority (DPA - _http://www.garanteprivacy.it/_).** According to the “Italian Data Protection Code” (Legislative Decree no. 196/2003), an authorisation by the Italian DPA was required (before the entry into force of the GDPR in May 2018) to enable private (and public) bodies to process specific typologies of personal and sensitive data (see Section 26 of the Italian Data Protection Code). More precisely, the DPA needs to be notified (also through an electronic form) whenever a public or private body undertakes a personal data collection, or personal data processing activity, as data controller.
A data controller was required under the law to notify only the processing operations that concern, e.g., data suitable for disclosing health and sex life, or data processed with the help of electronic means aimed at profiling the data subject and/or his/her personality, analysing consumption patterns and/or choices. The SIMPATICO Data Controller sent this notification in July 2016. In this context, the DPA is also responsible for evaluating and expressing opinions on specific arguments concerning data protection (see “Simplification of Notification Requirements and Forms. Decision of the DPA dated 22 October 2008, as published in Italy's Official Journal no. 287 of 9 December 2008”). In the case of the Trento pilot, we consider this public authority appropriate for providing the SIMPATICO EAB with the necessary instructions/authorizations/notifications.

**Sheffield pilot (United Kingdom): the University Research Ethics Committee (UREC) of the University of Sheffield (_https://www.sheffield.ac.uk/ris/other/committees/ethicscommittee_).** The University Research Ethics Committee (UREC) of the University of Sheffield is an independent, unbiased and interdisciplinary university-wide body that scrutinizes any potential issues related to research ethics for staff and students of the University of Sheffield, including collaborative research deriving from external funding. The key tasks this committee is in charge of are:

* Advise on any ethical matters in research that are referred to it from within the University;
* Keep abreast of the external research ethics environment and ensure that the University responds to all external requirements.

In the case of the Sheffield pilot, we consider this committee appropriate for providing the SIMPATICO EAB with the necessary guidance. We remark that all entities involved in the Sheffield pilot – Sheffield Council, Sheffield University and Sparta Technologies Ltd – comply with the UK data protection regulations and the GDPR, and each ensures that these are enforced when it comes to their participation in the pilot. Only if necessary, the EAB engages the UK Information Commissioner's Office (ICO - _https://ico.org.uk/_).

**Galicia pilot (Spain): the Research Ethics Committee of the University of Deusto (_http://research.deusto.es/cs/Satellite/deustoresearch/en/home/research-ethicscomittee_).** This committee is an independent, unbiased and interdisciplinary body that is both consultative and advisory in nature, and reports to the Vice-Rector's Office for Research. This committee assessed SIMPATICO's compliance with the Spanish legal framework on privacy and data protection. This includes the **Spanish Data Protection Act** 15/1999 (**Law 15/1999 of 13 December 1999** on Protection of Personal Data, last updated on 5 March 2011) and the **Royal Decree 1720/2007** of 21 December 2007, approving the regulations implementing Law 15/1999 (“Data Protection Regulations”; last updated: 8 March 2012). Among other responsibilities, this committee is in charge of:

* Conducting the ethical assessment of research projects and drawing up the ethical suitability reports requested by institutions and researchers.
* Ensuring compliance with best research and experimentation practices with regard to individuals' fundamental rights and the concerns related to environmental defense and protection.
* Supervising assessment processes or ethical requirements in research carried out by institutions and public bodies.
* Preparing reports for the University's governing bodies on the ethical problems that may arise from R+D+I activities.
* Ensuring compliance with the Policy on Scientific Integrity and Best Research Practices of the University of Deusto.
* Providing guidance on laws, regulations and reports on research ethics.
* Reviewing procedures that have already been assessed, or proposing the suspension of any experimentation already started if there are objective reasons to do so.

In the case of the Galicia pilot, we consider this committee appropriate for providing the SIMPATICO EAB with the necessary instructions/authorizations/notifications. Only if necessary, the EAB engages the Spanish Data Protection Authority, i.e., the Agencia Española de Protección de Datos (AEPD - _http://www.agpd.es/_).

## 1.4. Open access and data management

The Consortium adheres to **the pilot for open access to research data (ORD pilot)**, adopting an open access policy for all project results, guidelines and reports, and providing on-line access to scientific information that is free of charge to the reader [9]. Open access typically refers to two main categories: **scientific publications** (e.g., peer-reviewed scientific research articles, primarily published in academic journals) (Subsection 1.4.1) and **research data** (Subsection 1.4.2).

### 1.4.2. Open access to research data (Open Research Data Pilot)

According to the European Commission, "research data is information (particularly facts or numbers) collected to be examined and considered, and to serve as a basis for reasoning, discussion, or calculation". Open access to research data is **the right to access and reuse digital research data** under the terms and conditions set out in the Grant Agreement.

Regarding the digital research data generated in the action, according to Article 29.3 of the GA, the SIMPATICO Consortium shall:

_(a) **deposit in a research data repository** and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_

_(i) the data, including associated metadata, needed to validate the results presented in scientific publications;_

_(ii) other data, including associated metadata, as specified and within the deadlines laid down in this data management plan;_

_(b) provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results._

Please note that a portion of the relevant data for SIMPATICO comes from **existing data sets of the PAs** (e.g., service usage data, citizens' data), while **new data sources** are defined in this deliverable and identified as a result of the requirements analysis in SIMPATICO. **Whenever possible**, these additional data sources have also been made available **as open data or through open services**. However, some of the collected data, in particular that concerning **user profiles and personal data**, is highly sensitive and has not been made available.

More precisely, in order to discuss the **public availability of data**, there are three different types of datasets within the SIMPATICO project:

1. not publicly available personal and sensitive data;
2. data treated according to the open access policy of all project results;
3. data connected to Citizenpedia.

These datasets are discussed in detail in Section 2 below.
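To illustrate how the deposit obligation quoted above is met in practice, the sketch below uploads a dataset export to Zenodo (the repository adopted by the project, see Subsection 1.5.3) through its public REST deposit API. It is a minimal sketch only: the token, file name and metadata are illustrative assumptions, and the API details should be checked against Zenodo's current documentation.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # assumption: obtained from the Zenodo account settings

# 1) Create an empty deposition
deposition = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={}).json()

# 2) Upload the dataset export file (illustrative file name)
with open("SIMPATICO.IT.Citizenpedia.FinalExport.json", "rb") as fp:
    requests.post(
        f"{ZENODO_API}/{deposition['id']}/files",
        params={"access_token": TOKEN},
        data={"name": "SIMPATICO.IT.Citizenpedia.FinalExport.json"},
        files={"file": fp},
    )

# 3) Attach minimal metadata and publish, making the record citable via a DOI
metadata = {"metadata": {
    "title": "SIMPATICO Citizenpedia final export (illustrative)",
    "upload_type": "dataset",
    "description": "Dataset export deposited per GA Article 29.3.",
    "creators": [{"name": "SIMPATICO Consortium"}],
}}
requests.put(f"{ZENODO_API}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata)
requests.post(f"{ZENODO_API}/{deposition['id']}/actions/publish",
              params={"access_token": TOKEN})
```

Once published, the record is automatically harvested by OpenAIRE, which integrates it into the Commission's reporting lines.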
## 1.5. Data management policies

### 1.5.1. Data set reference and names

In order to distinguish and easily identify data sets, each data set is assigned a unique name. This name can also be used as the identifier of the data set. Data set names are constructed according to the following practice. Each name consists of _four_ parts separated by a "." character: _ProjectName.CountryCode.DatasetName.Version_, where

1. The _ProjectName_ is _SIMPATICO_, in order to clearly identify the origin of all datasets.
2. The _CountryCode_ part represents the country associated with the dataset, using ISO Alpha-2 country codes:
   1. _IT_ for Italy
   2. _ES_ for Spain
   3. _UK_ for the United Kingdom
3. The _DatasetName_ represents the full name of the dataset.
4. The _Version_ of the dataset represents the phase of the project in which the dataset was released:
   1. _DB_ for the live database during the project lifetime
   2. _InterimExport_ for the export of the database at M22
   3. _FinalExport_ for the export of the database at the end of the project

An example of a data set name is the following: _SIMPATICO.IT.Citizenpedia.InterimExport_.

### 1.5.2. Standards and metadata

Suitable standards are used to describe the data as well as the metadata of the data sets. Metadata are "data that provide information about other data": they describe the contents of data files and the context in which they were established. Several metadata standards exist (see _https://en.wikipedia.org/wiki/Metadata_standards_). Proper metadata facilitates the use of the data by others, makes it easier to combine information from different sources, and ensures transparency. All SIMPATICO datasets use a standard format for metadata, and each dataset description specifies the data and metadata standards used.

### 1.5.3. Archiving and preservation

The SIMPATICO partners agreed on the procedures that have been and will be used to ensure the long-term preservation of the data sets. In particular, datasets have been stored on Zenodo (_https://zenodo.org/_), a catch-all repository for EC-funded research developed by CERN and launched in May 2013. To be an effective catch-all that eliminates barriers to adopting data-sharing practices, Zenodo does not impose any requirements on format, size (it currently accepts up to 50 GB per dataset), access restrictions or license. In addition, datasets stored on Zenodo are automatically part of OpenAIRE (_https://www.openaire.eu/_), the EC-funded initiative which aims to support the Open Access policy of the European Commission via a technical infrastructure, thus integrating them into existing reporting lines to funding agencies like the European Commission. Archiving on Zenodo is free, thus eliminating archiving costs.

### 1.5.4. Data quality assurance

SIMPATICO is committed to delivering quality data, and adopts **data quality assurance** procedures to achieve this goal. Quality control of each dataset is the responsibility of the relevant WP leader, supported by the Project Manager and the Project Coordinator. "Quality" might have different meanings depending on the utility and on the re-use scenarios of the dataset: for instance, editing a question submitted by a citizen through the SIMPATICO platform improves data quality if the goal is to provide an "FAQ" or a knowledge base on public services; it is detrimental if the intended usage is the analysis of the interaction skills and languages of the platform users.
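One pragmatic way to reconcile these two notions of quality is to preserve the raw user input alongside any moderated version, so that both re-use scenarios remain possible. The following minimal Python sketch illustrates the idea for a Citizenpedia-style question document; the field names are illustrative assumptions, not the actual Citizenpedia schema.

```python
from copy import deepcopy
from datetime import datetime, timezone

def moderate_question(doc: dict, edited_text: str, moderator: str) -> dict:
    """Apply a moderator's edit while preserving the raw user input.

    The raw text stays available for interaction-analysis use cases,
    while the edited text serves FAQ/knowledge-base use cases.
    """
    moderated = deepcopy(doc)
    moderated.setdefault("raw_text", doc["text"])  # keep the original only once
    moderated["text"] = edited_text
    moderated["moderation"] = {
        "moderator": moderator,
        "edited_at": datetime.now(timezone.utc).isoformat(),
    }
    return moderated

# Illustrative document (not the actual Citizenpedia schema)
question = {"_id": "q-42", "text": "how i ask reduction of waste tarif?"}
print(moderate_question(
    question, "How do I request a reduction of the waste tariff?", "mod-1"))
```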
Data quality assurance might hence imply editing and moderation, cleaning, pre-processing, adding metadata, transforming to a more convenient format, or providing easier access. Information about the Consortium's efforts to address data quality issues is therefore provided for each type of dataset.

# 2. SIMPATICO datasets

This **Data Management Plan (DMP)** has been prepared by taking into account the current template of the "Guidelines on Data Management in Horizon 2020" [1]. The elaboration of the DMP allows the SIMPATICO partners to address all issues related to data. A first version of the DMP was released at the beginning of the project (M6). **This version is an update with the status of the DMP at project month M16**, after the detailed definition of the project use-cases and the revision of the ethics-related aspects of the project by the Ethics Advisory Board, and taking into account the feedback collected during the 1st year project review. A revised deliverable is expected at the end of the project ("D1.4 – Data management plan v.2", at M36). However, the DMP is a **living document** throughout the project, and it evolves during the SIMPATICO lifespan according to the progress of project activities.

In order to discuss the **public availability of data**, as outlined above, it is convenient to distinguish three different types of datasets within the SIMPATICO project:

1. **Not publicly available personal and sensitive data will be collected and processed as part of the execution of the SIMPATICO use-cases**, more specifically for the execution of the e-services. Specifically, the use-cases will involve only voluntary participants aged 18 or older and capable of giving consent, who will be informed about the nature of their involvement and about the data collection/retention procedures through an informed consent form before the commencement of their participation. Informed consent will follow procedures and mechanisms compliant with European and national regulations in the field of ethics, data protection and privacy (see also deliverables "D1.5 – Ethics compliance report", "D8.1 – H – Requirement no. 1", and "D8.2 – POPD – Requirement no. 2").
2. **SIMPATICO adheres to the open access policy of all project results.** Specifically, we are committed to making available, whenever possible, the data collected during the execution of SIMPATICO, in particular data collected during the use-cases, also to researchers and other relevant stakeholders outside the project Consortium. Whenever possible, these additional data sources will also be made available as open data or through open services. In this context, any personal data will only be published after suitable aggregation and/or pseudonymization techniques have been applied (see the sketch after this list), and after an informed consent that explicitly authorizes this usage has been collected.
3. **SIMPATICO intends to build an open knowledge base on public services and processes through Citizenpedia**, released as a new public domain resource co-created and co-operated by the community (i.e., citizens, professionals and civil servants). The initial content of Citizenpedia will be based on datasets and other digital goods that are publicly available. In the case of datasets and other digital goods owned by the PAs and not already publicly available, the Consortium will seek an authorization for public release, as open content, before inclusion in the Citizenpedia.
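As referenced in point 2 above, the following minimal Python sketch shows the kind of pseudonymization and aggregation that can be applied before an open release. The record layout and the salt handling are illustrative assumptions, and the output file name follows the naming convention of Subsection 1.5.1.

```python
import hashlib
import json
from collections import Counter

SALT = b"project-secret-salt"  # assumption: kept private and never published

def pseudonymize(user_id):
    """Replace a user identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

def prepare_open_release(records):
    """Pseudonymize user IDs and derive aggregate statistics for release."""
    events = [{"user": pseudonymize(r["user_id"]), "service": r["service"]}
              for r in records]
    per_service = Counter(r["service"] for r in records)
    return {"events": events, "requests_per_service": dict(per_service)}

# Illustrative input records (not a real SIMPATICO log extract)
raw = [{"user_id": "mario.rossi", "service": "DICHIARAZIONE DI RESIDENZA"},
       {"user_id": "anna.bianchi", "service": "COMUNICAZIONE INIZIO LAVORI"}]

# The output file name follows the convention of Subsection 1.5.1
with open("SIMPATICO.IT.Example.FinalExport.json", "w") as f:
    json.dump(prepare_open_release(raw), f, indent=2)
```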
As regards the data contributed to Citizenpedia by the community, SIMPATICO will require that it is made available as open content (e.g., under licenses such as Creative Commons).

This Data Management Plan and its updated versions describe **dataset characteristics** and **define principles and rules for the distribution of data** within SIMPATICO. In particular, in this second version of the DMP we present in detail the procedures for creating **'primary data'** (i.e., data not available from any other sources) and for their management. As such, only the datasets corresponding to "Citizenpedia" (Section 2.1), "Logging/Feedback" (Section 2.2), "Citizen Data Vault (CDV)" (Section 2.3), and "Language Corpora" (Section 2.4) are described in detail in the following sections, as any other datasets already exist and their creation is not foreseen in the GA.

## 2.1. Citizenpedia Datasets

### 2.1.1. Description

Citizenpedia is the **human computation framework** inside the SIMPATICO platform. Its aim is to be a place where citizens can find useful information regarding e-services and public administration. Thus, **most of the content is created and consumed by humans**. It is mainly stored in JSON format. Citizenpedia is composed of **two main interactive parts, each managing its own data**: a Question Answering Engine and a Collaborative Procedure Designer. Thus, the typology of data is twofold:

1. **Question Answering Engine:** questions, answers, comments and terms/definitions generated in the Question Answering Engine. All of them are created, stored and retrieved in JSON format.
2. **Collaborative Procedure Designer:** diagrams representing procedures, and comments on these diagrams. The diagrams are stored and encoded in a computer-processable manner, not as bitmaps. Comments are stored in JSON format.

Both types of data are stored in the **same database** within Citizenpedia; for this reason, a single dataset is generated for both data typologies. Citizenpedia, along with the SIMPATICO platform, is deployed in **three different cities/regions of different countries** (i.e., Italy, Spain, and the United Kingdom). Each pilot operates in its own language(s) (i.e., Italian, Spanish, English and Galician), so the human-generated data in each Citizenpedia instance is in a **different language**. For that reason, we are using a different dataset in each pilot.

**Figure 3 Simplified Citizenpedia database model**

### 2.1.2. Standards and Metadata

We consider **two types of additional metadata** to be generated in Citizenpedia, apart from the core entities' metadata mentioned above:

1. **Usage statistics:** this information is created on demand, e.g., as an answer to the query "number of registered questions related to the Law XYZ/2015". Currently, Citizenpedia communicates with the LOG module to issue a new log entry every time a user performs a CRUD (Create, Read, Update, Delete) operation on any of the entities modeled by Citizenpedia. The LOG module offers an API from which usage statistics about Citizenpedia contents can be extracted.
2. **Indexing engine metadata:** this data is created by the indexing engine included inside Citizenpedia, i.e., ElasticSearch or Apache Solr. This metadata is consumed by ElasticSearch to provide better search capabilities over plain-text data. The search engine has been configured and parametrized with the aim of optimizing its search capabilities.

### 2.1.3. Data capture

As regards data capture methods, there are **two ways of creating content** in Citizenpedia:
1. **Using the web interface:** citizens/civil servants use the platform and write the information using their browsers. The data is then stored in the Citizenpedia database.
2. **Programmatically, via a REST interface:** Citizenpedia exposes a REST API for other SIMPATICO components/third-party applications to query/insert data in the system.

### 2.1.4. Data storage

All the information related to Citizenpedia (i.e., both user-generated data and metadata) is stored in the **Citizenpedia internal database**. DEUSTO, as the partner responsible for WP4 within the SIMPATICO Consortium, handles security and privacy issues, enforcing access to the internal database only via **secure connections** and **access control systems**.

### 2.1.5. Data quality assurance

Data collection is undertaken mainly through form filling in the QAE or CPD. Both tools validate the entered fields against their type, semantics and completeness. It is also possible to create data associated with the entities managed by Citizenpedia, namely User, Question, Answer, Comment, Definition, Category, Tag, e-service, Procedure and Diagram Element, through the provided RESTful API. Again, the same validation as for form fields is carried out. For the last release of Citizenpedia, two new features have been implemented to assure the quality of the managed data. Firstly, a _spam analyser module_ has been included, which assesses whether submitted contents (mainly questions and answers) can be regarded as spam. Secondly, the _moderator role_ has been introduced, so that designated users can monitor, review and edit submitted contributions, correcting or even removing them in case they pollute rather than enrich the Citizenpedia knowledge base.

### 2.1.6. Utility and re-use

The collected data is useful for citizens and PA representatives. Citizens can find answers regarding the contents and procedures associated with the e-services they interact with. PA representatives can spot areas that are unclear to e-service consumers by gathering the comments or new questions associated with e-service concepts or procedure steps. Both the QAE and the CPD can be used to ensure that a common understanding of e-service operation is reached among PA representatives and with the e-service-consuming citizens.

### 2.1.7. Data sharing

The method for data sharing is twofold:

1. **Human-generated data:** first, human-generated data (such as questions/answers/comments) is shared publicly. It can be accessed using the web interface or programmatically through the REST API. Given that this data is created by users, they are warned the first time they use Citizenpedia that any content they create will be publicly available.
2. **Metadata:** second, as regards the metadata generated from the usage of data (such as statistics), some of the statistics (aggregated data) are publicly available through the REST API, e.g., the number of questions related to a certain topic. The full metadata is only used for research purposes: should a scientific publication be derived from the usage data of Citizenpedia, the information will be completely aggregated and/or pseudonymized.

### 2.1.8. Archiving and preservation

The SIMPATICO Consortium and, in particular, DEUSTO (as responsible for this WP) plan **to retain the generated data** for the duration of the project. Statistical data can be retained longer, after the end of the project lifespan, for research purposes.
In that case, DEUSTO estimates no additional cost. Should the collected metadata and statistical data be retained after the end of the project, DEUSTO has the infrastructure **to retain the data safely**.

### 2.1.9. Datasets

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_ES_Citizenpedia_DB </th></tr>
<tr><td> **Description** </td><td> Live database of Citizenpedia adopted by the Galician pilot </td></tr>
<tr><td> **Data manager** </td><td> DEUSTO </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON database (MongoDB) </td></tr>
<tr><td> **Volume** </td><td> 500 kB – 1 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Contents of this dataset can be accessed through the Mongo binary API, which can only be used if proper credentials are supplied. </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Spanish deployment of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_ES_Citizenpedia_InterimExport </th></tr>
<tr><td> **Description** </td><td> Export of SIMPATICO_ES_CITIZENPEDIA_DB at the end of the first pilot phase (M20). </td></tr>
<tr><td> **Data manager** </td><td> DEUSTO </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON </td></tr>
<tr><td> **Volume** </td><td> 28.5 kB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years after project end </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2546817_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_ES_Citizenpedia_FinalExport </th></tr>
<tr><td> **Description** </td><td> Export of SIMPATICO_ES_CITIZENPEDIA_DB at the end of the project (M36). </td></tr>
<tr><td> **Data manager** </td><td> DEUSTO </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON </td></tr>
<tr><td> **Volume** </td><td> 88.8 kB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years after project end </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2546438_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_UK_Citizenpedia_DB </th></tr>
<tr><td> **Description** </td><td> Live database of Citizenpedia adopted by the Sheffield pilot </td></tr>
<tr><td> **Data manager** </td><td> SPARTA </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON database (MongoDB) </td></tr>
<tr><td> **Volume** </td><td> 500 kB – 1 MB (estimate) </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Contents of this dataset can be accessed through the Mongo binary API, which can only be used if proper credentials are supplied. </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> UK deployment of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_UK_Citizenpedia_FinalExport </th></tr>
<tr><td> **Description** </td><td> Export of SIMPATICO_UK_CITIZENPEDIA_DB at the end of the project (M36). </td></tr>
<tr><td> **Data manager** </td><td> SPARTA </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON </td></tr>
<tr><td> **Volume** </td><td> 500 kB – 1 MB (estimate) </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years after project end </td></tr>
<tr><td> **Preservation medium** </td><td> This dataset has not yet been released, as the pilot evaluation in Sheffield was delayed to the last month of the project. Once released, this dataset will be stored on Zenodo. </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_IT_Citizenpedia_DB </th></tr>
<tr><td> **Description** </td><td> Live database of Citizenpedia adopted by the Trento pilot </td></tr>
<tr><td> **Data manager** </td><td> Fondazione Bruno Kessler </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON database (MongoDB) </td></tr>
<tr><td> **Volume** </td><td> 500 kB – 1 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Contents of this dataset can be accessed through the Mongo binary API, which can only be used if proper credentials are supplied. </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Trento deployment of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_IT_Citizenpedia_FinalExport </th></tr>
<tr><td> **Description** </td><td> Export of SIMPATICO_IT_CITIZENPEDIA_DB at the end of the project (M36). </td></tr>
<tr><td> **Data manager** </td><td> Fondazione Bruno Kessler </td></tr>
<tr><td> **Data standard** </td><td> Project specific </td></tr>
<tr><td> **Metadata standard** </td><td> JSON </td></tr>
<tr><td> **Volume** </td><td> 978.5 kB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years after project end </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2554837_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

## 2.2. Logging/Feedback Datasets

### 2.2.1. Description

The SIMPATICO project provides a series of interactive front-end components as part of the user interaction and feedback analysis layer of the SIMPATICO Platform. The components are as follows:

* _Session Feedback (SF)_: which presents users with a feedback-gathering form after each session.
* _Data Analysis (DA)_: which gets data from the LOG coming from a variety of sources connected to interaction, and provides extra analysis layers on top.
* _Interaction Log (LOG)_: which collects all the interaction information in the system.
* _Workflow Adaptation Engine (WAE)_: which collects the interaction events derived from using the module to adapt the workflow when filling in long forms.

These modules generate valuable information coming from the interaction of the users. This happens through two different mechanisms:

* **Explicit information gathering**, i.e., asking users directly to assess their interaction after it has happened. This is widely done in industry and can be performed through a number of different mechanisms.
* **Implicit information collection**, i.e., analysing metrics of interest in the interaction without requiring users to provide any extra information. As an example, upon the execution of an e-service request, information about the time spent on each step may be collected and then analysed to find insights such as bottlenecks.

Both of these data generation mechanisms are reflected in the platform's architecture through the cited modules. These include **data storage** in the LOG for explicit and implicit data, plus a data analysis step in the DA to generate new insights (e.g., statistics) from the gathered data elements.

### 2.2.2. Standards and Metadata

The SIMPATICO team has not found relevant dedicated standards for the representation of this metadata. There is a model of the interaction (see deliverable D3.2) which is the basis of the interaction data in the project's results. Inspiration for this and other interaction elements, such as questionnaires, comes from common data models of usability evaluation, such as the System Usability Scale (SUS) [10], and from standards such as ISO 9241 for desktop application ergonomics.

The metadata are generated from the data described above using the following analysis steps:

**_Explicit information analysis:_** statistics about general feelings or ratings for particular areas or topics identified in the first stage of analysis. This is chiefly generated by the Session Feedback component.

**_Implicit information analysis:_** statistical analysis of the captured data: average time spent by users, segmentation by age groups or target groups, etc. This is captured in the front-end of the system and further processed by the Data Analysis component.

### 2.2.3. Data capture

The data is collected in different ways according to the mechanism of information gathering (i.e., implicit/explicit), as explained above:

**_Explicit information gathering (gathered in the Session Feedback component)_**

* Questionnaires to the users with predefined ('canned') responses such as emoticons or "Likert scale" values.
* Open-ended questions and free-form responses. These can then be further analysed using NLP tools or human experts, to search for elements such as:
  * Sentiment analysis to capture the general sentiment generated by the system.
  * Topic clustering to detect potential pain points or concerns of the users.

**_Implicit information gathering (gathered by the front-end and processed in the Data Analysis component)_**

* Capture of metrics such as click areas and the time spent in the different steps of the process. For this purpose, a mixture of handcrafted trackers and the open web analytics tool Piwik is used.
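As an illustration of the implicit analysis described above, the following minimal Python sketch computes the average time users spend on each step of an e-service from a list of interaction events. The event format is an illustrative assumption, not the actual LOG schema (which is defined in D3.2).

```python
from collections import defaultdict

def average_time_per_step(events):
    """Compute the mean time (in seconds) spent on each e-service step.

    Each event is assumed to look like:
    {"user": "u1", "step": "personal-data", "duration_s": 42.0}
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for e in events:
        totals[e["step"]] += e["duration_s"]
        counts[e["step"]] += 1
    return {step: totals[step] / counts[step] for step in totals}

# Illustrative events (not a real LOG extract)
events = [
    {"user": "u1", "step": "personal-data", "duration_s": 40.0},
    {"user": "u2", "step": "personal-data", "duration_s": 80.0},
    {"user": "u1", "step": "attachments", "duration_s": 300.0},
]
print(average_time_per_step(events))  # unusually long averages may reveal bottlenecks
```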
### 2.2.4. Data storage

The storage of these components (LOG/DA, SF, and WAE) is centralized in the interaction LOG component, which stores the metadata of the interaction. This component is built on top of an ElasticSearch instance. Internally, ElasticSearch uses a document-oriented storage solution with an associated Lucene indexing and search engine.

### 2.2.5. Data quality assurance

The results in the feedback analysis storage are stored as JSON documents (JavaScript objects in plain text). Internally, the data is represented in a manner specific to the application (e.g., feedback logs from Session Feedback contain data that can be mapped to the specific questions asked by the component). The stored data can relate to individual users (linked to the CDV profiles by alphanumerical user IDs, which means the data is effectively anonymized once the CDV profile is erased) or to results aggregated over a group of users (e.g., the average time taken by all users to complete a step in the e-service). Access to the LOG is secured by means of the AAC component, so that arbitrary HTTP connections cannot be opened against the component.

### 2.2.6. Utility and re-use

By their nature, the feedback results compiled in this part of SIMPATICO are tightly connected to the particular usages and roles of the components; their reuse in other applications can thus be difficult. One result which can be of wider use is the global collection of simplification requests: the simplification results and the users' feedback on the quality of the simplification, as captured by the Session Feedback component. This has thus been selected for further detailing and explanation in the release of the data.

### 2.2.7. Data sharing

The data and metadata generated by these modules are expected to be useful beyond SIMPATICO mainly to researchers, due to the particularities of the application. All of the data generated in the pilots is shared in the SIMPATICO Zenodo Community, as linked in Subsection 2.2.9.

### 2.2.8. Archiving and preservation

**All storing and preservation procedures are carried out internally** to the project (e.g., on servers physically located at the partners' premises and under their full control). The captured interaction data is shared as open data. Final data sets are shared using the Zenodo platform, as referred to in the next subsection. This provides the means for long-term storage and sharing.

### 2.2.9. Datasets

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO First Evaluation Galicia Dataset v1.1 </th></tr>
<tr><td> **Description** </td><td> Interaction LOG data captured in the Galicia evaluation of the results of the H2020 project SIMPATICO, undertaken between October 23rd 2017 and November 3rd 2017. It contains a total of 374 user tests, divided into 228 citizens for e-service BS607A, 130 citizens for e-service BS613B, and 8 civil servants from Xunta de Galicia for each one of the e-services. </td></tr>
<tr><td> **Data manager** </td><td> HIB </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) </td></tr>
<tr><td> **Volume** </td><td> 7.9 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Full dump of the JSON log messages. </td></tr>
<tr><td> **Preservation duration** </td><td> At least extended project duration (5 years after project execution ends). </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/1173125_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO Second Evaluation Galicia Dataset v1.0 </th></tr>
<tr><td> **Description** </td><td> SIMPATICO logs for the user evaluation of Galicia in project iteration 2. The package contains the interaction LOG data captured in the Galicia evaluation of the results of the H2020 project SIMPATICO, undertaken between September 24th 2018 and October 15th 2018. It contains a total of 290 user tests conducted in the period. </td></tr>
<tr><td> **Data manager** </td><td> HIB </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) </td></tr>
<tr><td> **Volume** </td><td> 9 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Full dump of the JSON log messages. </td></tr>
<tr><td> **Preservation duration** </td><td> At least extended project duration (5 years after project execution ends). </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2244751_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_IT_LOGDataset_DB </th></tr>
<tr><td> **Description** </td><td> Live database of logs of the usage of the SIMPATICO platform adopted by the Trento pilot </td></tr>
<tr><td> **Data manager** </td><td> Fondazione Bruno Kessler </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td></tr>
<tr><td> **Volume** </td><td> 10 KB per session per user (approximately). Total data size: 20 persons, 1 session → 200 KB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Contents of this dataset can be accessed through the Swagger API </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Trento deployment of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>
<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_IT_LOGDataset_FinalExport </th></tr>
<tr><td> **Description** </td><td> Released at the end of the final validation phase at Trento and project end (M36), including data from the interaction captured in the LOG. </td></tr>
<tr><td> **Data manager** </td><td> Fondazione Bruno Kessler </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td></tr>
<tr><td> **Volume** </td><td> 1.7 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2554833_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_UK_LOGDataset_DB </th></tr>
<tr><td> **Description** </td><td> Live database of logs of the usage of the SIMPATICO platform adopted by the Sheffield pilot </td></tr>
<tr><td> **Data manager** </td><td> SPARTA </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td></tr>
<tr><td> **Volume** </td><td> 10 KB per session per user (approximately). Total data size: 20 persons, 1 session → 200 KB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> Contents of this dataset can be accessed through the Swagger API </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Sheffield deployment of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_UK_LOGDataset_FinalExport </th></tr>
<tr><td> **Description** </td><td> Released at the end of the second validation phase at Sheffield (M36), including data from the interaction captured in the LOG. </td></tr>
<tr><td> **Data manager** </td><td> SPARTA </td></tr>
<tr><td> **Data standard** </td><td> Project specific data </td></tr>
<tr><td> **Metadata standard** </td><td> JSON (JavaScript Object Notation) and binary ElasticSearch dumps </td></tr>
<tr><td> **Volume** </td><td> (estimate) 10 KB per session per user </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> 5 years </td></tr>
<tr><td> **Preservation medium** </td><td> This dataset has not yet been released, as the pilot evaluation in Sheffield was delayed to the last month of the project. Once released, this dataset will be stored on Zenodo. </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

## 2.3. Citizen Data Vault (CDV) Datasets

### 2.3.1. Description

The **Citizen Data Vault (CDV)** is a **repository of citizens' personal data**. It is continuously updated through **each citizen interaction** and is used mainly to automatically fill e-service forms. In this way, citizens give each piece of information to the PA only once, as the information is stored in the vault and reused in all subsequent interactions and across different PA e-services.
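For illustration, the following minimal Python sketch shows the kind of mapping the CDV layer can apply to pre-fill an e-service form from a vault record. The field names and the mapping table are illustrative assumptions, not the actual CDV implementation.

```python
# Hypothetical correspondence between e-service form fields and vault keys
FORM_TO_VAULT = {"first_name": "First Name", "last_name": "Last Name",
                 "email": "Email", "postal_code": "Postal Code"}

def prefill_form(form_fields, vault):
    """Pre-fill form fields with values already stored in the citizen's vault."""
    return {f: vault[FORM_TO_VAULT[f]]
            for f in form_fields
            if FORM_TO_VAULT.get(f) in vault}

# Illustrative vault record (not the actual CDV data model)
vault = {"First Name": "Ana", "Last Name": "García", "Email": "ana@example.org"}
print(prefill_form(["first_name", "last_name", "email", "postal_code"], vault))
# postal_code is absent from the vault and is left for the citizen to fill in
```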
As regards the CDV, for personal data we use the **definition provided by the World Economic Forum (June 2010)**, namely [11]: **"Personal data is defined as data (and metadata) created by and about people"**, encompassing:

* **Volunteered data** – created and explicitly shared by individuals, e.g., social network profiles.
* **Observed data** – captured by recording the actions of individuals, e.g., location data when using cell phones.
* **Inferred data** – data about individuals based on analysis of volunteered or observed information, e.g., credit scores.

Personal data is also very broadly defined in Article 2 of the European Data Protection Directive as: "... any information relating to an identified or identifiable natural person ('data subject') ...". This definition is, for the most part, unchanged under the new GDPR. According to these definitions, through the CDV **citizens have a practical means to manage their personal data**, with the ability to grant and withdraw **consent** to third parties for access to data about themselves (see "D1.5 – Ethics compliance report" – Annex I "Informed consent form"). In summary, the data collected by means of the CDV falls within the scope of personal data.

As a first stage, we have identified a **first categorization of such personal data**, covering:

1. Government Records
2. Profile
3. Education
4. Relationship
5. Banking and Finance
6. Health
7. Communication & Media
8. Energy
9. Mobility
10. Activities

For each category, **several data fields** have been defined. Starting from these categories, we have grouped the actual personal data that each citizen can manage by means of the CDV against the **three use cases** identified by the three SIMPATICO pilots (i.e., Trento, Sheffield, and Galicia). The only data stored in the CDV are those related to a subset of the fields of the PA e-service forms identified for the validation phases in the three pilots' use cases.
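For illustration, the sketch below shows a vault record organised by the taxonomy categories above, with a per-category consent flag that the citizen can grant or withdraw. The record layout and the function names are illustrative assumptions, not the actual CDV data model.

```python
from datetime import datetime, timezone

# Illustrative CDV record: personal data grouped by taxonomy category,
# with a per-category consent flag (not the actual CDV data model)
record = {
    "Profile": {"fields": {"First Name": "Ana", "Last Name": "García"},
                "consent": True},
    "Government Records": {"fields": {"Social Security Number": "X1234567"},
                           "consent": False},
}

def set_consent(rec, category, granted):
    """Grant or withdraw consent for one taxonomy category."""
    rec[category]["consent"] = granted
    rec[category]["consent_updated"] = datetime.now(timezone.utc).isoformat()

def readable_fields(rec):
    """Return only the fields whose category has an active consent."""
    return {cat: entry["fields"] for cat, entry in rec.items() if entry["consent"]}

set_consent(record, "Government Records", True)
print(readable_fields(record))  # now includes both categories
```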
The list of services (e-service forms) selected by each pilot, together with the types of data processed, is given below:

### Trento Pilot Services

<table>
<tr><th> **Service** </th><th> **Data Processed** </th></tr>
<tr><td> DOMANDA ALLA COMMISSIONE EDILIZIA COMUNALE CON FUNZIONE DI COMMISSIONE PER LA PIANIFICAZIONE TERRITORIALE E IL PAESAGGIO DELLA COMUNITA (CPC) </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, First Name, Last Name, Street Address, Postal Code, City, Country, Province, Fax, Email, Other Email, Phone Number, Work Industry, Work Company, Work City, Work Province, Work Address, Work Post Code, Work VAT, Work PEC, Work Phone, Work Fax, Work CCIAA, Work CCIAA Province, Work CCIAA Number </td></tr>
<tr><td> ISCRIZIONE AL NIDO DI INFANZIA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, First Name, Last Name, Street Address, Postal Code, City, Country, Province, Fax, Email, Other Email, Phone Number, Child Custodial Body, Child Custodial Protocol, Child Custodial Date, Work Industry, Work Company, Work Province, Work Address, Work Post Code, Work Phone, Work Email, Work Revenue Agency, Work VAT </td></tr>
<tr><td> DOMANDA DI AUTORIZZAZIONE PER ATTIVITA' TEMPORANEA DI ACUSTICA EDILIZIA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, First Name, Last Name, Street Address, Postal Code, City, Country, Province, Fax, Email, Other Email, Phone Number, Work Industry, Work Company, Work City, Work Province, Work Address, Work Post Code, Work VAT, Work PEC, Work Phone, Work Fax, Work CCIAA, Work CCIAA Province, Work CCIAA Number </td></tr>
<tr><td> DOMANDA DI ACCESSO/CONSULTAZIONE AGLI ATTI IN MATERIA EDILIZIA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> COMUNICAZIONE INIZIO LAVORI </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> DEPOSITO DICHIARAZIONI DI CONFORMITA' DEGLI IMPIANTI </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> COMUNICAZIONE OPERE LIBERE </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> ISTANZA PER IL RILASCIO DI TITOLI EDILIZI PER ATTI DI COMPRAVENDITA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country </td></tr>
<tr><td> SEGNALAZIONE CERTIFICATA DI INIZIO ATTIVITA' </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> RICHIESTA DI RIDUZIONE DELLA TARIFFA SUI RIFIUTI - I.S.E.E. </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> DICHIARAZIONE DI RESIDENZA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, Mobile Number, First Name, Last Name </td></tr>
<tr><td> TRASMISSIONE DOCUMENTAZIONE INTEGRATIVA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, Fax, Email, Other Email, Phone Number, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
<tr><td> ACCETTAZIONE POSTO AL NIDO DI INFANZIA </td><td> Citizenship, Sex, City of Birth, Social Security Number, Birthday, First Name, Last Name, Street Address, Postal Code, City, Country, Province </td></tr>
</table>

### Galicia Pilot Services

<table>
<tr><th> **Service** </th><th> **Data Processed** </th></tr>
<tr><td> PROGRAMA BIENESTAR EN BALNEARIOS </td><td> First Name, Last Name, Middle Name, Passport Number, Street Address, Neighborhood, Place Name, Postal Code, City, Country, Phone Number, Mobile Number, Email </td></tr>
<tr><td> AYUDAS INDIVIDUALES PARA PERSONAS CON DISCAPACIDAD </td><td> First Name, Last Name, Middle Name, Passport Number, Street Address, Neighborhood, Place Name, Postal Code, City, Country, Phone Number, Mobile Number, Email </td></tr>
<tr><td> RECONOCIMIENTO DEL GRADO DE DISCAPACIDAD </td><td> First Name, Last Name, Middle Name, Passport Number, Street Address, Neighborhood, Place Name, Postal Code, City, Phone Number, Mobile Number, Email, Social Security Number </td></tr>
</table>

### Sheffield Pilot Services

<table>
<tr><th> **Service** </th><th> **Data Processed** </th></tr>
<tr><td> ADAPTING YOUR HOME </td><td> Email, Phone Number, First Name, Last Name, Street Address, Postal Code, Social Security Number, Birthday </td></tr>
<tr><td> COST OF CARE </td><td> Email, Phone Number, First Name, Last Name, Street Address, Postal Code, Social Security Number, Birthday </td></tr>
<tr><td> PARENTING SKILLS WORKSHOP </td><td> Email, Phone Number, First Name, Last Name, Street Address, Postal Code, Social Security Number, Birthday </td></tr>
<tr><td> FREE SCHOOL MEALS </td><td> Email, Phone Number, First Name, Last Name, Street Address, Postal Code, Social Security Number, Birthday </td></tr>
</table>

We remark that the personal data collected or linked by the CDV will **never be shared at any time**. **Each citizen** has control over the data and the ability to obtain **a copy of, or remove, all data from the CDV**.

### 2.3.2. Standards and Metadata

The CDV collects personal data with a reference to a specific element of a **Personal Data Taxonomy**. In order to ensure semantic interoperability, several options and tools are being considered, in particular **RDF and Linked Data** [12], **XML and JSON**. In order to facilitate and promote interoperability among public services, an ongoing activity in the context of the CDV is the standardization of the Personal Data and Service Model, taking into account the e-Government Core Vocabularies created by the ISA2 Programme.

### 2.3.3. Data capture

Personal data was collected in two ways:

1. **Data is inserted by citizens by means of the CDV dashboard.** The user is able to insert, collect and modify personal data fields by means of interactive web forms provided by the CDV.
2. **Data is collected during the interactions of the user with the e-service forms provided by the PAs.** The e-services and the related types of data are the ones identified by the three pilots. During each interaction, users decide whether the data inserted in the e-service forms may be stored in the CDV.

At any time, users can **view (through the dashboard)**, and possibly **remove, the collected data**. No versioning of the collected data is provided. Thanks to the approach used to collect data, the stored information can be **retrieved by using the Personal Data Taxonomy**.

### 2.3.4. Data storage

The CDV provides an **ad-hoc repository to collect personal data**, adopting **multiple-key-based data encryption**. According to the specific deployment strategy and the e-services adopted in each use case, the CDV refers to **multiple data stores** (i.e., legacy systems already provided by the PAs). Separate instances of the CDV Data Store can be deployed for each use case and hosted by each pilot.

### 2.3.5. Data quality assurance

Data collection is based mainly on the e-service forms provided by the PAs, according to their procedures and regulations. Accordingly, data fields are validated against their type, semantics and completeness.

### 2.3.6. Utility and re-use

The collected data is useful for citizens and PAs. Citizens can collect personal data during the interaction with e-service forms, in order to reuse it in all subsequent interactions and across different PA e-services. PAs can facilitate and enhance the interactions with citizens in their e-services.

### 2.3.7. Data sharing

The personal data collected or linked by the CDV will **never be shared at any time**.

### 2.3.8. Archiving and preservation

The SIMPATICO project **retains the collected personal data only for the lifetime of the grant**. In principle, the results are expected to be very use-case specific and **no long-term storage is envisaged** beyond the needs of the SIMPATICO project execution.

### 2.3.9. Datasets

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_IT_CDVTrento_DB </th></tr>
<tr><td> **Description** </td><td> Live database of the CDV adopted by the Trento Municipality </td></tr>
<tr><td> **Data manager** </td><td> Trento Municipality </td></tr>
<tr><td> **Data standard** </td><td> Project specific (JSON based) </td></tr>
<tr><td> **Metadata standard** </td><td> RDF/JSON, ISA2 Core Vocabularies (work in progress) </td></tr>
<tr><td> **Volume** </td><td> 4 MB per user </td></tr>
<tr><td> **Sharing level** </td><td> Private/Personal – not sharable </td></tr>
<tr><td> **Sharing medium** </td><td> N/A </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Pilot hosting systems of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_CDVSheffield_DB </th></tr>
<tr><td> **Description** </td><td> Live database of the CDV adopted by the Sheffield Council </td></tr>
<tr><td> **Data manager** </td><td> Sheffield Council </td></tr>
<tr><td> **Data standard** </td><td> Project specific (JSON based) </td></tr>
<tr><td> **Metadata standard** </td><td> RDF/JSON, ISA2 Core Vocabularies (work in progress) </td></tr>
<tr><td> **Volume** </td><td> 4 MB per user </td></tr>
<tr><td> **Sharing level** </td><td> Private/Personal – not sharable </td></tr>
<tr><td> **Sharing medium** </td><td> N/A </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Pilot hosting systems of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_ES_CDVGalitia_DB </th></tr>
<tr><td> **Description** </td><td> Live database of the CDV adopted by the Galicia Region </td></tr>
<tr><td> **Data manager** </td><td> HIB </td></tr>
<tr><td> **Data standard** </td><td> Project specific (JSON based) </td></tr>
<tr><td> **Metadata standard** </td><td> RDF/JSON, ISA2 Core Vocabularies (work in progress) </td></tr>
<tr><td> **Volume** </td><td> 4 MB per user </td></tr>
<tr><td> **Sharing level** </td><td> Private/Personal – not sharable </td></tr>
<tr><td> **Sharing medium** </td><td> N/A </td></tr>
<tr><td> **Preservation duration** </td><td> Project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Pilot hosting systems of the SIMPATICO platform </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

## 2.4. Language Corpora

### 2.4.1. Description

During the project, several language corpora and data sets have been created in order to _i)_ train or tune supervised systems for text adaptation and simplification and _ii)_ evaluate them. Data creation followed two main approaches: manual or (semi-)automatic. Manual approaches are applied when high-quality data is needed and/or human judgements are required; they have been used to create, for example, lexical simplification benchmarks against which the different approaches have been tested. In this case, annotators with a linguistic background or high language proficiency have been involved to create a simplification of a complex sentence and to mark the simplification type (e.g., insertion, deletion, replacement, etc.). (Semi-)automatic approaches, instead, have been used when it was necessary to retrieve large amounts of monolingual data, or to align existing resources automatically. For Italian, for example, starting from a large freely-available corpus we automatically retrieved simple sentences to create a monolingual collection of Italian sentences with high readability, which was used to tune a system for neural simplification.

### 2.4.2. Standards and Metadata

The collected datasets and resources follow, when applicable, existing standards for linguistic resources, which are generally XML-based. For large training sets, plain-text corpora have also been released.

### 2.4.3. Data capture

Data have been collected in different ways.
The benchmarks have been manually created, starting from existing complex sources (for example, text from the Sheffield City Council or Comune di Trento websites), which have been simplified and annotated. The large training and tuning corpora have been automatically retrieved from already available sources; see, for example, the monolingual simple sentences for Italian extracted from the Paisà corpus (Lyding et al., 2014) or the SubIMDB corpus for English (Paetzold and Specia, 2016).

### 2.4.4. Data storage

The different data sets are stored in publicly available repositories such as Zenodo or GitHub. Links to these resources are also reported in the accompanying papers. The only exceptions are datasets for which no scientific publications have been produced up to now; these datasets will be made publicly available upon publication.

### 2.4.5. Data quality assurance

The benchmarks have been manually created or validated, ensuring a high quality of the annotation. As for the automatically retrieved or filtered data, small inconsistencies are possible; these are described and explained in detail in the accompanying papers.

### 2.4.6. Utility and re-use

All language datasets are freely available for research purposes and can be reused beyond the project duration.

### 2.4.7. Data sharing

Data sharing is allowed. The benchmarks have been specifically created to ease future research, so that novel systems for text simplification can be evaluated using the SIMPATICO language data sets.

### 2.4.8. Archiving and preservation

As discussed at the project level, data sets are shared using the Zenodo platform and/or GitHub. This provides the means for long-term storage and sharing.

### 2.4.9. Datasets

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_COMMON20LS </th></tr>
<tr><td> **Description** </td><td> Common20LS is a dataset for the task of Lexical Simplification that contains demographic information about the annotators. It consists of 20 Lexical Simplification problems annotated by 262 people. Each annotated instance is composed of a sentence, a target complex word or phrase, and a set of simplifications suggested by humans, ranked by simplicity. </td></tr>
<tr><td> **Data manager** </td><td> USFD </td></tr>
<tr><td> **Data standard** </td><td> None, plain text </td></tr>
<tr><td> **Metadata standard** </td><td> None </td></tr>
<tr><td> **Volume** </td><td> 12 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> n/a </td></tr>
<tr><td> **Preservation duration** </td><td> Beyond project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2551474#.XFmQiVxKhaQ_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_BENCHPS </th></tr>
<tr><td> **Description** </td><td> BenchPS is a dataset built for the training and evaluation of phrase simplification systems. Each instance is composed of a sentence, a target complex phrase, and a set of candidate simplifications ranked by simplicity. Each instance was annotated by humans through multiple annotation steps to ensure the reliability of the data. </td></tr>
<tr><td> **Data manager** </td><td> USFD </td></tr>
<tr><td> **Data standard** </td><td> None, plain text </td></tr>
<tr><td> **Metadata standard** </td><td> None </td></tr>
<tr><td> **Volume** </td><td> 147 KB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> n/a </td></tr>
<tr><td> **Preservation duration** </td><td> Beyond project duration </td></tr>
<tr><td> **Preservation medium** </td><td> Zenodo - _https://zenodo.org/record/2551536#.XFmQtFxKhaQ_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_USERNN </th></tr>
<tr><td> **Description** </td><td> We report three user studies in which the Lexical Simplification needs of non-native English speakers are investigated. Our analyses feature valuable new insight on the relationship between the non-natives' notion of complexity and various morphological, semantic and lexical word properties. Some of our findings contradict long-standing misconceptions about word simplicity. The data produced in our studies consists of 211,564 annotations made by 1,100 volunteers, which we hope will guide forthcoming research on Text Simplification for non-native speakers of English. </td></tr>
<tr><td> **Data manager** </td><td> USFD </td></tr>
<tr><td> **Data standard** </td><td> None, plain text </td></tr>
<tr><td> **Metadata standard** </td><td> None </td></tr>
<tr><td> **Volume** </td><td> 2.9 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> Beyond project duration </td></tr>
<tr><td> **Preservation medium** </td><td> _http://ghpaetzold.github.io/data/User_Studies_NNS.zip_ Zenodo - _https://zenodo.org/record/2552816#.XFmRNFxKhaQ_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_SUBIMDB </th></tr>
<tr><td> **Description** </td><td> SubIMDB is a corpus of everyday spoken-language text which we created and which contains over 225 million words. The corpus was extracted from 38,102 subtitles of family, comedy and children's movies and series, and is the first sizeable structured corpus of subtitles made available. </td></tr>
<tr><td> **Data manager** </td><td> USFD </td></tr>
<tr><td> **Data standard** </td><td> None, plain text </td></tr>
<tr><td> **Metadata standard** </td><td> None </td></tr>
<tr><td> **Volume** </td><td> 1 GB and 972.2 MB </td></tr>
<tr><td> **Sharing level** </td><td> Open </td></tr>
<tr><td> **Sharing medium** </td><td> OpenAIRE </td></tr>
<tr><td> **Preservation duration** </td><td> Beyond project duration </td></tr>
<tr><td> **Preservation medium** </td><td> _http://ghpaetzold.github.io/subimdb/_ Zenodo - _https://zenodo.org/record/2552407#.XFmRI1xKhaQ_ </td></tr>
<tr><td> **Preservation costs** </td><td> No additional cost </td></tr>
</table>

<table>
<tr><th> **Dataset ID** </th><th> SIMPATICO_EN_NNSEVAL </th></tr>
<tr><td> **Description** </td><td> Evaluating Lexical Simplification for Non-Native Speakers: 400 non-native speakers were asked to judge whether or not they could understand the meaning of each content word in a set of sentences, each of which was judged independently.
A total of 35,958 distinct words from 9,200 sentences were annotated (232,481 total), of which 3,854 distinct words (6,388 total) were deemed complex by at least one annotator. </td> </tr> <tr> <td> **Data manager** </td> <td> USFD </td> </tr> <tr> <td> **Data standard** </td> <td> None, plain text </td> </tr> <tr> <td> **Metadata standard** </td> <td> None </td> </tr> <tr> <td> **Volume** </td> <td> 27.4 kB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> _http://ghpaetzold.github.io/data/NNSeval.zip_ Zenodo - _https://zenodo.org/record/2552381#.XFmRiVxKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table>

<table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_EN_BENCHLS </th> </tr> <tr> <td> **Description** </td> <td> This is a dataset for Lexical Simplification which contains 929 instances, with an average of 7.37 candidate substitutions per complex word. BenchLS is a combination of two resources: the LexMTurk (Horn et al., 2014) and LSeval (De Belder and Moens, 2012) datasets. The instances in both datasets contain a sentence, a target complex word, and several candidate substitutions ranked according to their simplicity. The candidates in both datasets were suggested and ranked by English speakers from the U.S. </td> </tr> <tr> <td> **Data manager** </td> <td> USFD </td> </tr> <tr> <td> **Data standard** </td> <td> None, plain text </td> </tr> <tr> <td> **Metadata standard** </td> <td> None </td> </tr> <tr> <td> **Volume** </td> <td> 93.9 kB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> _http://ghpaetzold.github.io/data/BenchLS.zip_ Zenodo - _https://zenodo.org/record/2552393#.XFmRo1xKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table>

<table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_EN_SIMPA </th> </tr> <tr> <td> **Description** </td> <td> A corpus with 1,100 original and manually simplified sentence pairs using data extracted from SCC websites. There are two simplification versions of the corpus: lexically simplified only, and lexically and syntactically simplified. For the lexically simplified version, 3,300 sentences were manually simplified by fluent speakers of English. From these 3,300 sentences, 1,100 were then selected for the syntactic simplification. Therefore, SimPA has 1,100 sentences that are lexically and syntactically simplified.
</td> </tr> <tr> <td> **Data manager** </td> <td> USFD </td> </tr> <tr> <td> **Data standard** </td> <td> None, plain text </td> </tr> <tr> <td> **Metadata standard** </td> <td> None </td> </tr> <tr> <td> **Volume** </td> <td> 397.1 kB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> _https://github.com/SIMPATICOProject/simpa_ Zenodo - _https://zenodo.org/record/2551297#.XFmRxFxKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_IT_SIMPITIKI </th> </tr> <tr> <td> **Description** </td> <td> Italian corpus of complex-simple sentence pairs from Wikipedia and in the administrative domain (1,166 sentences in total). Each simplification type is also manually annotated. </td> </tr> <tr> <td> **Data manager** </td> <td> FBK </td> </tr> <tr> <td> **Data standard** </td> <td> Project specific (XML-based) </td> </tr> <tr> <td> **Metadata standard** </td> <td> XML </td> </tr> <tr> <td> **Volume** </td> <td> 910 kB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> _https://github.com/dhfbk/simpitiki_ _,_ Zenodo - _https://zenodo.org/record/2535632#.XFmR9VxKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_LEX_IT-BENCHMARK </th> </tr> <tr> <td> **Description** </td> <td> Manually created benchmark to evaluate the performance of Italian lexical simplification systems. It contains 901 pairs of complex sentences and their simplified version at the lexical level (i.e. replacement of a difficult term or phrase with a simpler synonym). </td> </tr> <tr> <td> **Data manager** </td> <td> FBK </td> </tr> <tr> <td> **Data standard** </td> <td> None, plain text </td> </tr> <tr> <td> **Metadata standard** </td> <td> None </td> </tr> <tr> <td> **Volume** </td> <td> 300 kB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> Zenodo - _https://zenodo.org/record/2547994#.XFmR71xKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table> <table> <tr> <th> **Dataset ID** </th> <th> SIMPATICO_SIMPLE_MONOLINGUAL </th> </tr> <tr> <td> **Description** </td> <td> Repository of 500,000 Italian sentences extracted from the Paisà corpus (https://www.corpusitaliano.it/) showing high readability according to four parameters: sentence length, token length, depth of parse tree and verb “arity”. 
</td> </tr> <tr> <td> **Data manager** </td> <td> FBK </td> </tr> <tr> <td> **Data standard** </td> <td> Plain text </td> </tr> <tr> <td> **Metadata standard** </td> <td> None </td> </tr> <tr> <td> **Volume** </td> <td> 27 MB </td> </tr> <tr> <td> **Sharing level** </td> <td> Open </td> </tr> <tr> <td> **Sharing medium** </td> <td> OpenAIRE </td> </tr> <tr> <td> **Preservation duration** </td> <td> Beyond project duration </td> </tr> <tr> <td> **Preservation medium** </td> <td> Zenodo - _https://zenodo.org/record/2548585#.XFmSHVxKhaQ_ </td> </tr> <tr> <td> **Preservation costs** </td> <td> No additional cost </td> </tr> </table>

## 2.5. Open scientific publications

* Tonelli Sara, Palmero Aprosio Alessio, & Mazzon Marco. (2019). The impact of phrases on Italian lexical simplification (Version 2.0). Zenodo. _http://doi.org/10.5281/zenodo.2534080_
* Tonelli Sara, Palmero Aprosio Alessio, & Saltori Francesca. (2019). SIMPITIKI: A Simplification Corpus for Italian (Version 2.). Zenodo. _http://doi.org/10.5281/zenodo.2534132_
* Cartelli Vincenzo, Di Modica Giuseppe, Tomarchio Orazio, López-de-Ipiña Diego, Zabaleta Koldo, & Sanz Enrique. (2018). Citizenpedia: simplifying citizens interaction with public administration. In _Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age_ (p. 106). ACM. Zenodo. _https://zenodo.org/record/2535208_
* Zabaleta Koldo, López-de-Ipiña Diego, Sanz Enrique, Irizar-Arrieta A., Cartelli Vincenzo, Di Modica Giuseppe, & Tomarchio Orazio. (2018). Human Computation to Enhance E-Service Consumption among Elderlies. In _Multidisciplinary Digital Publishing Institute Proceedings_ (Vol. 2, No. 19, p. 1221). Zenodo. _https://zenodo.org/record/2203659_
* Alessio Palmero Aprosio, & Giovanni Moretti. (2018). Tint 2.0: an All-inclusive Suite for NLP in Italian. _Proceedings of CLIC-it 2018_. Zenodo. _http://doi.org/10.5281/zenodo.1565256_
* Palmero Aprosio Alessio, Menini Stefano, Tonelli Sara, Ducceschi Luca, & Herzog Leonardo. (2018). Towards Personalised Simplification based on L2 Learners' Native Language. _Proceedings of CLIC-it 2018_. Zenodo. _http://doi.org/10.5281/zenodo.1565296_
* Carolina Scarton, Gustavo Henrique Paetzold, & Lucia Specia. (2018). Text Simplification from Professionally Produced Corpora. _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)_. Zenodo. _http://doi.org/10.5281/zenodo.1410451_
* Carolina Scarton, & Lucia Specia. (2018). Learning Simplifications for Specific Target Audiences. _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_ (Vol. 2, pp. 712-718). Zenodo. _http://doi.org/10.5281/zenodo.1410314_
* Carolina Scarton, Gustavo Henrique Paetzold, & Lucia Specia. (2018). SimPA: A Sentence-Level Simplification Corpus for the Public Administration Domain. _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)_. Zenodo. _http://doi.org/10.5281/zenodo.1410455_
* Carolina Scarton, Lucia Specia, Alessio Palmero Aprosio, Sara Tonelli, & Tamara Martín Wanton. (2017). MUSST: A Multilingual Syntactic Simplification Tool. _Proceedings of the IJCNLP 2017, System Demonstrations_, 25-28. Zenodo. _http://doi.org/10.5281/zenodo.1042492_
* Pretel Ivan, Lopez-Novoa Unai, Sanz-Yagüe Enrique, López-de-Ipiña Diego, Cartelli Vincenzo, Di Modica Giuseppe, & Tomarchio Orazio. (2017). Citizenpedia: A human computation framework for the e-government domain.
In _2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)_ (pp. 1-6). IEEE. Zenodo. _https://zenodo.org/record/2415711_

* Zampieri Marcos, Malmasi Shervin, Paetzold Gustavo Henrique, & Specia Lucia. (2017). Complex Word Identification: Challenges in Data Annotation and System Performance. _arXiv preprint arXiv:1710.04989_. Zenodo. _http://doi.org/10.5281/zenodo.1040837_
* Paetzold Gustavo Henrique, & Specia Lucia. (2017). Lexical Simplification with Neural Ranking. _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics_ (Vol. 2, pp. 34-40). Zenodo. _http://doi.org/10.5281/zenodo.1040785_
* Paetzold Gustavo Henrique, Alva-Manchego Fernando, & Specia Lucia. (2017). MASSAlign: Alignment and Annotation of Comparable Documents. _Proceedings of the IJCNLP 2017, System Demonstrations_, 1-4. Zenodo. _http://doi.org/10.5281/zenodo.1040791_
* Fernando Alva-Manchego, Joachim Bingel, Gustavo Henrique Paetzold, Carolina Scarton, & Lucia Specia. (2017). Learning How to Simplify From Explicit Labeling of Complex-Simplified Text Pairs. _Proceedings of the Eighth International Joint Conference on Natural Language Processing_ (Vol. 1, pp. 295-305). Zenodo. _http://doi.org/10.5281/zenodo.1042505_
* Corcoglioniti Francesco, Palmero Aprosio Alessio, Nechaev Yaroslav, & Giuliano Claudio. (2016). MicroNeel: Combining NLP Tools to Perform Named Entity Detection and Linking on Microposts. _CLiC-it/EVALITA 2016_. Zenodo. _http://doi.org/10.5281/zenodo.1048868_
* Paetzold Gustavo Henrique, & Specia Lucia. (2016). Anita: An Intelligent Text Adaptation Tool. _Computational Linguistics: Technical Papers_. Zenodo. _http://doi.org/10.5281/zenodo.1040774_
* Paetzold Gustavo Henrique. (2016). Understanding the Lexical Simplification Needs of Non-Native Speakers of English. _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_ (pp. 717-727). Zenodo. _http://doi.org/10.5281/zenodo.1040782_
* Paetzold Gustavo Henrique, & Specia Lucia. (2016). Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words. _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_ (pp. 1669-1679). Zenodo. _http://doi.org/10.5281/zenodo.1040776_

# 3. SIMPATICO security protection strategy

This final section is dedicated to the **SIMPATICO security protection strategy** and has been developed as the project progressed. It reflects the current status within the Consortium regarding the security of the data that has been collected and produced. In the SIMPATICO project **we did not perform activities, nor produce results, raising any large-scale security issues**. The project does not have the potential for military applications, and does not involve the use of elements that may cause any harm to humans, animals, plants or the environment. However, the process of collecting, processing and storing data might hide some pitfalls. To reduce the **risk of potential malevolent, criminal and/or terrorist abuse**, which might also have been perpetrated by malicious people authorized to access the information, the SIMPATICO Consortium carried out a **twofold security protection strategy**:
1. by ensuring that the employed **security layers and privacy-preserving measures** worked properly, keeping access logs and following best practices for system administration;
2. by employing techniques to prevent information leakage "on-the-fly", i.e., through the adoption of the **aggregation and pseudonymization approach** for personal and sensitive information at collection, communication, and storage time (e.g. via an encryption scheme, hash functions, and/or tokenization).

Such an approach neutralised eavesdropping and similarly dangerous hacking attempts: even in the unlikely event of successful retrieval, the data remained secured and completely meaningless to a possible attacker.

## 3.1. Authentication, authorization, and encryption

State-of-the-art mechanisms for **authentication, authorization, and encryption** have been exploited in the implemented processes (concerning data collection, storage, protection, retention and destruction), so as to ensure the satisfaction of core security and data protection requirements, namely **confidentiality, integrity, and availability**.

In the context of SIMPATICO, the crucial legal challenges are primarily the security measures concerning authentication and authorization issues. Pursuant to **Directive 95/46/EC** at the beginning of the project and to the **GDPR** in the second half of SIMPATICO, the implementation of both computerized authentication and procedures for managing authorization credentials is required. To assure the security of, and the trust in, the system, it is fundamental to provide technical solutions aimed at allowing the **circulation of digital identities** and the **access to the e-services**. For identity management and data protection mechanisms, SIMPATICO follows standard practice in the security research community. Please see subsection 5.3 and Annex 3 of the final deliverable **"D1.5 – Ethics compliance report"** on the so-called **"SIMPATICO GDPR Self-Assessment"** to better understand the security measures adopted within the three project pilots in Italy, Spain and the United Kingdom.

**Identity management** deals with identifying individuals (**authentication**) and controlling access (**authorization**) to resources in a system. All the Privacy Enhancing Technologies associated with identity management aim at identity verification with minimum identity disclosure, and at protection against identity theft. Due to internetworked services and, in general, to Cloud technology, the need for secure identity management has grown steadily. Identity and access management (IAM) is the security and business discipline that "enables the right individuals to access the right resources at the right times and for the right reasons". It addresses the need to ensure appropriate access to resources across increasingly heterogeneous technology environments and to meet increasingly rigorous compliance requirements. Technologies, services and terms related to identity management have been exploited, including directory services, Digital Cards, Service Providers, Identity Providers, Digital Password Managers, Single Sign-On, JSON Web Token and JSON Web Key from OpenID Connect's model, OpenID Connect, OAuth and XACML. In particular, SIMPATICO's solutions for IAM are influenced by many existing and upcoming standards: OAuth 2.0, User Managed Access (UMA) and OpenID Connect, as well as the upcoming Minimum Viable Consent Record (MVCR) specification from the Kantara Initiative.
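To make the role of these standards more concrete, the following minimal sketch (in Python, assuming the PyJWT library) shows how a resource server of the kind described here can validate an OAuth 2.0 bearer token issued as an OpenID Connect JWT. The issuer URL, JWKS path and audience are hypothetical placeholders, not SIMPATICO's actual AAC endpoints.

```python
# Illustrative sketch: validating an OAuth 2.0 / OpenID Connect bearer token.
# Issuer, JWKS path and audience are hypothetical placeholders.
import jwt                      # PyJWT: pip install pyjwt[crypto]
from jwt import PyJWKClient

ISSUER = "https://idp.example.org"   # hypothetical identity provider
AUDIENCE = "simpatico-platform"      # hypothetical OAuth 2.0 client id

def validate_bearer_token(token: str) -> dict:
    """Verify signature, issuer, audience and expiry of a JWT access token."""
    # Fetch the signing key matching the token's 'kid' header from the
    # issuer's published JSON Web Key set (per OpenID Connect discovery).
    jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Decode and verify; raises jwt.InvalidTokenError on any failure.
    claims = jwt.decode(token, signing_key.key, algorithms=["RS256"],
                        audience=AUDIENCE, issuer=ISSUER)
    # Downstream components should use only the pseudonymous subject
    # identifier ('sub'), mirroring the data-minimisation approach above.
    return {"user_id": claims["sub"], "scope": claims.get("scope", "")}
```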
More specifically, following the "**Privacy by default and by design**" principles and the **GDPR** prescriptions, the SIMPATICO platform adopted an integrated and multilevel approach to protect user information from fraudulent access and consumption. This has been achieved using a dedicated Authentication and Authorization Control component (AAC). As for identity management, the AAC component has been made compatible with state-of-the-art identity provisioning technologies (including OpenID Connect, Shibboleth, SAML, OAuth2.0). This has allowed the integration of the SIMPATICO platform with the identity provisioning solutions adopted locally by the pilots in a federated ecosystem. In this integration, AAC has been configured so as to obtain the minimal amount of personal information necessary to unambiguously and uniquely identify the user. Note that this data has not been used by the platform components directly to refer to the user. Instead, those components use a generated identifier provided by AAC. Such an approach simplified the realization of the **"right to be forgotten"** policy, as it is sufficient to remove the association between the user's personal data and the user identifier to anonymise the data stored in the different components of the platform and associated with the user.

In order to ensure that **data consumption** is performed only by authorized applications and users, AAC exploited the open standard for authorization, namely the OAuth2.0 protocol. This protocol not only ensures the exchange of personal data in a trusted context, but also enables controlled access to the platform services and APIs. In that context, AAC operates as the OAuth2.0 **Authorization Server** for the platform components that expose various APIs and resources over the network. Furthermore, in order to allow the secure transmission of personal data, the SIMPATICO APIs support the HTTPS communication protocol. The input and output data are transmitted as "plain text" over HTTPS and encrypted by TLS (Transport Layer Security) or its predecessor SSL (Secure Sockets Layer). HTTPS is based on certificates and enables mutual authentication of client and server.

For the **CDV** component of SIMPATICO, which is the main storage of personal data in the project, particular attention is dedicated to reducing **server-side vulnerabilities**, applying all the best **security practices and policies** regarding the configuration of user privileges, remote access and connections. In order to make the database unreadable by unauthorized users/applications, the CDV architecture includes a module named Data Security Manager (DSM) that, by implementing the Transparent Data Encryption (TDE) approach, enables the encryption/decryption of the CDV data in a way that is transparent from the users' and applications' point of view. In order to distribute the security knowledge about the encryption keys and increase data security, the CDV keys and encrypted data have been periodically backed up and stored in different places. According to best practice and the architectural solutions adopted by the most important DBMSs, such as Oracle and Microsoft SQL Server, the CDV TDE implementation is based on the following concepts:

* **Master Key**: a key adopted to encrypt the Keys Table. It has been stored in a read-only file in the filesystem, with access restricted exclusively to each single user registered in the CDV.
* **User Key**: a key associated with a single CDV user.
* **Keys Table**: a table to store the User Keys.
It is stored on a different server from those hosting the Master Key and the Personal Data Table.

* **Encryption Key**: a key generated using the Master Key and the User Key.
* **AES Cipher Algorithm**: the CDV adopts the Advanced Encryption Standard (AES) at 192 bits, defined in the Federal Information Processing Standards (FIPS) publication no. 197.
* **Personal Data Table**: it contains the personal data, encrypted and decrypted by applying AES with the Encryption Key.

A minimal illustrative sketch of this key-derivation and encryption scheme, combined with the pseudonymization techniques discussed in the next subsection, is given at the end of Section 3.

## 3.2. Focus on data aggregation and pseudonymization techniques

Personal and sensitive data have been and will be made publicly available only after an **informed consent** has been collected and **suitable aggregation and/or pseudonymization techniques** have been applied. Before starting the project activities that require user involvement, a careful investigation of privacy and security issues was undertaken, covering in particular the **Italian, Spanish and UK privacy laws** and the **GDPR** prescriptions, according to the procedures stated in deliverable **"D1.5 – Ethics compliance report"**. In this Data Management Plan, data pseudonymization and aggregation techniques have been identified and applied to personal/sensitive data before their public release.

As regards aggregation techniques, data confidentiality, integrity and privacy have been assured **when collecting and processing data**: the information for each person contained in a release cannot be distinguished from that of a given number of other individuals whose information also appears in the release. Moreover, the pseudonymization of data is another method of ensuring confidentiality, according to **the Article 29 Working Party Opinion on Anonymization Techniques** and in relation to the **GDPR** [13]. Where data are particularly sensitive (e.g. data using detailed personal narratives), the risks to confidentiality increase. In this case, participants have been carefully informed of the nature of the possible risks. This does not preclude the responsibility of the applicant to ensure that maximal pseudonymization procedures are implemented. A detailed description of the measures that have been implemented to prevent improper use, improper data disclosure scenarios and 'mission creep' (i.e., unforeseen usage of data by any third party), within the above-mentioned security protection strategy, has been provided before the commencement of validation activities. **The optimal solution has been decided by using a combination of different techniques**, while taking into account the practical recommendations developed in the above-mentioned **Article 29 Working Party Opinion on Anonymization Techniques**. Pseudonymization approaches reduce the linkability of a dataset with the original identity of a data subject, and are accordingly a useful security measure. These techniques have to adhere to certain requirements to comply with data protection and privacy-related legislation in the EU [14]. The following set of requirements (among others) was initially extracted from Directive 95/46/EC and the Article 29 Working Party Opinion on Anonymization Techniques, and then updated according to the GDPR (see Article 32 GDPR "Security of Processing" and related prescriptions). These are the general guidelines for the SIMPATICO security protection strategy [13] [15]:

* **User authentication:** the system has to provide adequate mechanisms for user authentication.
* **Limited access:** the system must ensure that data is only provided to authenticated and authorized persons.
* **Protection against unauthorized and authorized access:** the records of an individual have to be protected against unauthorized access.
* **Notice about use of data:** the users should be informed about any access to their records.
* **Access and copy users' own data:** the system has to provide mechanisms for users to access and copy their own data.
* **Fall-back mechanism:** the system should provide mechanisms to back up and restore the security token used for pseudonymization.
* **Unobservability:** pseudonymized data should not be observable and linkable to a specific individual in the system.
* **Secondary use:** the system should provide a mechanism to export pseudonymized data for secondary use and a possibility to notify the owner of the exported data.
* **Modification of the database:** if an attacker breaks into the system, the system must detect modifications and inform the system administrator about the attack.

The above-mentioned potential "unforeseen usage" implications of this project have been examined by the SIMPATICO Ethics Advisory Board and analysed during the "SIMPATICO GDPR Self-Assessment" (see the final version of "D1.5 – Ethics compliance report").

## 3.3. Internal threats and human errors

Most organisations focus on data management risks from external threats, but most breaches occur through internal vulnerabilities. These can be thought of as part of the same risk continuum. This section looks at internal vulnerabilities and how to reduce them. There are two main types of internal threats.

* Security may fall victim to **human error**. For example, an employee may copy information from an entire database table into an email for troubleshooting purposes and accidentally include external email addresses in the recipient list.
* **Internal attacks**. While internal accidents often compromise databases, wilful attackers on the inside commit a large portion of database breaches. Many are disgruntled employees who use their privileged access to do damage. Most of these attacks come via the numerous outlets for data on the modern PC, including USB and FireWire ports, CD and DVD recorders and even built-in storage media slots. Combined with the fact that storage space on portable devices has rapidly increased, business professionals can now use personal storage devices, such as USB memory sticks, digital cameras and smart phones, to remove or copy sensitive information, either for malicious intent or personal gain.

**Internal threat prevention**

The implementation of a strong and flexible security policy is essential for SIMPATICO. The security policy provides rules and permissions that are understandable both to the employees of SIMPATICO partner organizations and to those implementing them, so that personal data is prevented from leaving the office.
SIMPATICO policy is based on the security policies in the EU, which, if enforced, are often enough to prevent such breaches, and is summarized in the following **five-point methodology**:

<table> <tr> <th> 1 </th> <th> Data protection policies </th> <th> Using EU, national and/or local legal guidelines for data protection and privacy policies (DP) </th> </tr> <tr> <td> 2 </td> <td> Internal data protection policies </td> <td> Written policies and procedures for all staff to sign and agree to </td> </tr> <tr> <td> 3 </td> <td> Clear staff role definition and responsibilities </td> <td> Staff training, awareness, and clear roles and responsibilities for access to data, with checklists (see attached) </td> </tr> <tr> <td> 4 </td> <td> Access control </td> <td> Managing staff changes and having leaver processes in place </td> </tr> <tr> <td> 5 </td> <td> Sanctions and audits </td> <td> Disciplinary action for breaches of DP and process guidelines by staff, and the threat of audits </td> </tr> </table>
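As a closing illustration of the twofold strategy described in Sections 3.1 and 3.2 (and referenced from the CDV key scheme above), the following minimal sketch, in Python and assuming the `cryptography` library, shows how a direct identifier can be replaced by a keyed-hash pseudonym and how a record can be encrypted with an AES key derived from a Master Key and a per-user key. The names, the key-derivation step and all key-handling details are illustrative assumptions, not the CDV's actual implementation.

```python
# Illustrative sketch of the twofold approach: keyed-hash pseudonymization of
# a direct identifier, plus encryption of the record with an AES key derived
# from a Master Key and a per-user key (cf. the CDV scheme in Section 3.1).
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MASTER_KEY = os.urandom(32)  # in practice: loaded from an access-restricted file

def pseudonymize(identifier: str, secret: bytes) -> str:
    """Replace a direct identifier with a keyed hash; without the secret,
    the pseudonym cannot be linked back to the data subject."""
    return hmac.new(secret, identifier.encode(), hashlib.sha256).hexdigest()

def derive_encryption_key(master_key: bytes, user_key: bytes) -> bytes:
    """Combine Master Key and User Key into a 192-bit AES Encryption Key.
    (A production system would use a vetted KDF such as HKDF instead.)"""
    return hashlib.sha256(master_key + user_key).digest()[:24]  # 24 bytes = 192 bits

def encrypt_record(plaintext: bytes, enc_key: bytes) -> bytes:
    """Encrypt a personal-data record with AES-GCM (AES as per FIPS 197)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(enc_key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes, enc_key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(enc_key).decrypt(nonce, ciphertext, None)

# Example: store a record under a pseudonym, encrypted with a per-user key.
user_key = os.urandom(16)
pseudonym = pseudonymize("mario.rossi@example.org", MASTER_KEY)
enc_key = derive_encryption_key(MASTER_KEY, user_key)
stored = encrypt_record(b'{"city": "Trento", "age_band": "40-49"}', enc_key)
assert decrypt_record(stored, enc_key).startswith(b'{"city"')
```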
# Introduction

The "Transition with Resilience for Evolutionary Development" – TREnD – project is part of the Open Research Data Pilot (ORD pilot), following its extension to cover all the thematic areas of Horizon 2020 under the revised version of the 2017 Work Programme. According to the aim of the ORD pilot, in order to maximise access to and re-use of research data generated by Horizon 2020 projects, the Data Management Plan (DMP) is required to describe the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project.

The TREnD project's Data Management Plan is drafted in compliance with the _Horizon 2020 FAIR Data Management Plan_ template applicable to any Horizon 2020 project producing, collecting and/or processing research data. The DMP is a _living document_ whose information can be updated to a finer level of granularity as the implementation of the project progresses and when significant changes occur.

Data management practices have become a crucial concern at every stage of research and innovation projects, to prevent the common mistakes made in the past with research data. The purpose of data management is to steer principal and co-investigators towards defining a clear strategy regarding data creation, collection, storage and long-term preservation, handling of sensitive data, and data retention and sharing, in the early phase of the research. In this perspective, the TREnD DMP aims to define a set of leading principles outlining the norms, rules and proper practices that will guide the conduct of all the participants in the research activities during the entire lifecycle of data production, collection and processing within the project, as well as the plans for data sharing and data preservation. In addition, the DMP is an essential tool not only to regulate and guide the research activities that deal with data during the project implementation phase, but foremost to soundly manage the generated data after the project's completion. Therefore, ensuring sound research data management is important to maximise both access to and re-use of the research data generated, processed and analysed throughout the TREnD project, and their impact on the designated users.

##### Aim and structure of the TREnD DMP

The TREnD project has envisioned enabling knowledge transfer through an interactive platform dedicated to data sharing with different users, namely the Open Access Toolkit (OpenAT). By fully exploiting the potential value of the research data (and related metadata) generated throughout the TREnD project, the OpenAT is intended to provide new knowledge and services for local communities of entrepreneurs, policymakers and public authorities. The project deals with different kinds of data, based on interviews and on official and public statistical data warehouses at different geographic levels, so as to align the implementation phases with legal provisions and societal norms. The TREnD DMP intends to expound how the research data of the project are to be properly managed during its implementation, and how they will be handled after its completion. It is expected to comply with the legal requirements as outlined in Article 29.3 of the Grant Agreement concerning open access to research data.
_Regarding the digital research data generated in the action ('data'), the beneficiaries must:_ _(a) deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following: (i) the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible; (ii) other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1); (b) provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves)._

Finally, the TREnD DMP aims to describe:

* the handling of research data during & after the end of the project;
* what data will be collected, processed and/or generated;
* which methodology & standards will be applied;
* whether data will be shared/made open access; and
* how data will be curated & preserved (including after the end of the project).

This document also explains the TREnD approach to FAIR data under the Guidelines on FAIR Data Management in Horizon 2020 (European Commission 2016) for scientific data management and stewardship.

##### The TREnD research design, methodology and analytical tools with respect to the Data Management Plan

The TREnD project's main objective is to stimulate regional diversification, seen as a co-creation of solutions and concepts for development problems, through: enhancing the resilient capacity of regions; and applying a transitional approach to tailored place-based innovation policies. The approach moves from the general question of how to strengthen regional capabilities in triggering, implementing and managing Transition strategies towards driving "resilience-building" processes. The scope is to combine Transition with Resilience for Evolutionary Development in different territorial contexts towards a reforming process of Cohesion Policy for the next programming period 2021-2027. Specifically, the research project seeks to:

1. identify and examine the factors enabling or hindering Transition strategies from a governance standpoint;
2. assess the territorial features critical to enable a resilience-building process;
3. unveil the unexploited potential for "re-shaping trajectories" disclosed through the windows of local opportunities due to the external shocks regions are continuously exposed to.

The research proposal exploits and moves forward the findings of the MAPS-LED project, consistent with the connection of knowledge-based urban development (KBUD) and the Entrepreneurial Discovery Process (EDP). The MAPS-LED project demonstrated that: a) innovation policy and urban planning act in a complementary way in supporting both knowledge dynamics and the regeneration of the local economy; b) the case study analysis of Boston, Cambridge and San Diego clusters allowed identifying the link between city planning initiatives and S3 by introducing innovation-driven urban policy as an important phase of the EDP process. Concurrently, the TREnD project harnesses Transition Management ("TM") as a medium for EDP by deepening the understanding of S3 in shaping the policies for regional economic development.
The research project spreads knowledge about regional economic diversification by providing a platform, in the shape of an Open Access Toolkit, to policy-makers and policy-users (regional authorities, academics, stakeholders, urban advocacy groups and not-for-profit organisations). The OpenAT will provide a set of indicators regarding 1) context, 2) result, and 3) performance as metrics of the resilience-building process within TM strategies. The following figure depicts the conceptual framework underlying the OpenAT platform.

**Figure 1 The conceptual framework of the OpenAT platform**

To this purpose, the research design and the methodological approach comprise four phases:

1. **The conceptual phase** looks at converting the theoretical frame of the project into a realistic and appropriate research design (methodology, data, methods). The characterization of regions is defined upon a set of (socio-economic) indicators concerning past development trajectories (e.g. path-dependency) and the local capacity to shift towards related/unrelated diversification, in the context of the gap between core and lagging regions.
2. **The implementation phase** involves the collection of data and the preparation of data for analysis, specifying:
* what data will be collected;
* how the data will be collected;
* who will collect the data;
* the data collection procedure (i.e., in what order forms are filled out, what the interview questions are).

Research data will be projected into a GIS mapping database to help link the theoretical framework to the territorial/urban dimension of Cohesion Policy from an evolutionary perspective.

3. **The evaluation phase** involves the implementation of the research activities in terms of the pre-set goals and objectives, sketching out how external shocks can provide latent opportunities to re-orient local development trajectories. The case studies are assessed according to the "backcasting" approach in order to fully exploit the window of local opportunity disclosed in the aftermath of shocks and to design TM.
4. **The research strategy** will be incrementally developed over the project lifetime to upgrade the metrics of TM and of the resilience-building process to be implemented through the Open Access Toolkit.

The areas of investigation follow the scenario depicted in the following map.

**Figure 2 TREnD Scenario: areas of inquiry and key themes**

The research data are produced concerning:

* literature review on the key themes relevant for the project, based on a structured form to archive the information collected (Figure 3);
* methods for producing research data;
* key data sources for the investigation;
* key data analysis tools.

**Figure 3 Literature Review Data**

**Figure 4 Methods, Key data source and analysis tools**

## 1 DATA SUMMARY

The research data produced throughout the TREnD project life cycle range from quantitative secondary data drawn from official statistics to qualitative empirical data to be collected through a variety of methods such as surveys, interviews, focus groups, field work and observations. Indeed, both qualitative and quantitative approaches will be applied to the case study analysis, to be developed through several data analysis techniques. In order to provide a synopsis of the research data to be produced during the course of the TREnD project, the following table shows the main data types and sources of data, as well as the common file formats in which the data will presumably be stored.
**Table 1 Data sets: general information**

<table> <tr> <th> **Data Set No** </th> <th> **Data Type(s)** </th> <th> **Data Origin/Source Type(s)** </th> <th> **WP No** </th> <th> **File Format Type(s)** </th> </tr> <tr> <td> 1 </td> <td> Quantitative survey data: statistical data (economy, environment, demography, social, business environment, institutions and policy, spatial) </td> <td> Official data sources (e.g. US Census Bureau, European Open Data Portal, IMF, WB, Eurostat, etc.) </td> <td> 1, 2, 3 </td> <td> .xls + .csv </td> </tr> <tr> <td> 2 </td> <td> Literature & desk research </td> <td> On-line libraries, policy open data, open access journals, publicly available data </td> <td> 1, 2 </td> <td> .doc + .pdf </td> </tr> <tr> <td> 3 </td> <td> Geospatial data </td> <td> Official and open data sources (e.g. Esri Open Data, Global Map, OpenTopography, USGS Land Cover Institute, etc.) </td> <td> 2, 3, 4 </td> <td> .shp </td> </tr> <tr> <td> 4 </td> <td> Stakeholders data collection </td> <td> Primary data </td> <td> 2, 3 </td> <td> .xls </td> </tr> <tr> <td> 5 </td> <td> Face-to-face interview recordings, transcripts, field notes and questionnaire data </td> <td> Primary data </td> <td> 2, 3, 4 </td> <td> .xls + .doc + .pdf </td> </tr> <tr> <td> 6 </td> <td> Multimedia data related to the project communication and dissemination </td> <td> Videos, photos, audio recordings, science blogs </td> <td> 1, 2, 3, 4 </td> <td> XML, JPEG, TIF, PNG, MP3, MP4 </td> </tr> <tr> <td> 7 </td> <td> Research data validation </td> <td> Primary data </td> <td> 2, 3 </td> <td> .xls </td> </tr> </table>

Table 2 provides a short description of each data set in line with the corresponding purpose and utility.

**Table 2 Datasets: datatype description, purpose and utility**

<table> <tr> <th> **Data Set No** </th> <th> **Description** </th> <th> **Purpose** </th> <th> **Utility** </th> </tr> <tr> <td> **1** </td> <td> The dataset contains the data gathered from official sources and organized in quantitative survey forms. The survey contains five main categories: _**i. Socio-Economic (population dynamics; business environment, quality of life, economics); ii. Research and Innovation (Research-side); iii. Science and Technology (Development side); iv. Urbanization (physical and spatial dimension); v. Environment (natural resources, climate change).**_ For each category, sub-categories are identified, including the indicators listed according to the sources classified for US and EU statistical data, as reported in Table 1. </td> <td> Understanding the past development trajectories of the regional context under investigation and providing a detailed depiction of the case study selected and/or areas of inquiry </td> <td> The information deriving from the quantitative survey will highlight the shocks and stresses the regional context was exposed to and will allow the identification of the responses undertaken to face the challenges experienced. At city/urban level, it will allow understanding the effects/impacts of such responses and assessing the coherence of the responses with the shocks/stresses experienced. </td> </tr> <tr> <td> **2** </td> <td> The dataset contains data from the literature review action. Data about literature will be collected through the literature review form, aiming to collect all the main information on any source analysed (Figure 3).
</td> <td> Defining clearly the theoretical and methodological TREnD concepts; providing the basis to elaborate the framework for the analysis of quantitative data for the setting of the OpenAT </td> <td> The information deriving from this activity will contribute to better defining the data framework for the OpenAT as well as to the advancement of the field concerning the TREnD topics </td> </tr> <tr> <td> **3** </td> <td> This dataset contains all the geospatial data related to the geographical unit of analysis under investigation, including the main source maps as well as the spatialization of the quantitative data contained in the quantitative survey, both at regional and city/urban level. </td> <td> Building a GIS on quantitative data (data set no. 1), both at the regional and city/urban level, to spatially investigate the behaviour of research data. </td> <td> The output GIS research data will contribute to building the OpenAT platform, allowing final users to understand which areas are particularly sensitive to vulnerability factors. This process will allow better tailor-made and targeted policy design. </td> </tr> <tr> <td> **4** </td> <td> The dataset contains all the data related to the main potential stakeholders to contact and involve in the TREnD project. They will be selected based on the official roles they play within the selected case studies. </td> <td> Identifying the main stakeholders who can contribute to the definition of the OpenAT objectives and potential users. </td> <td> The activity will allow better design, implementation and validation of the OpenAT through a user-design-oriented approach. </td> </tr> <tr> <td> **5** </td> <td> The dataset contains data on the governance structure, resilience-based policy initiatives undertaken in response to shocks, stakeholder involvement in the initiatives, the strategies adopted, as well as the financial data related to the implementation of actions. The qualitative and quantitative data are organized in the form of interviews, including the following observation fields: 1. information on the actor selected; 2. role of the actor selected with respect to the initiative under investigation; 3. coherence of the actor selection with respect to the topics of the project; 4. governance structure analysis; 5. stakeholder analysis; 6. strategy analysis; 7. financial analysis. </td> <td> Understanding the key success factors as well as the obstacles encountered during the implementation of resilience-building processes and transition management strategies </td> <td> The research data will allow improving the OpenAT framework by providing information on the key characteristics of the governance processes and structures, allowing a better implementation of resilience-based processes and transition-management-oriented strategies. </td> </tr> <tr> <td> **6** </td> <td> The dataset contains the data related to all the dissemination and communication activities of the TREnD project </td> <td> Boosting the communication and dissemination data generated alongside the project to specialized and non-specialized audiences </td> <td> The data will allow maximizing the impact and the visibility of the TREnD project </td> </tr> <tr> <td> **7** </td> <td> The dataset contains the research data produced during the project cycle and subjected to validation for evaluating their relevance with respect to the topics of the project, their significance and their utility.
</td> <td> Improving the quality of results. </td> <td> The data from the validation have an internal use for improving the methods and outputs. </td> </tr> </table>

Overall, the data collected, generated and processed by the consortium throughout the TREnD project lifecycle will include 7 datasets and 5 types of data sources. According to the methods outlined in Figure 4, data collection serves the case study design methodology and statistical inquiry, driven by open-data tools, and is processed for the Open Access Toolkit (OpenAT).

**Figure 5 OpenAT platform overview**

The OpenAT represents the result of the TREnD project and will be shared with the local community, entrepreneurs, public authorities and administrations, and other research institutions. These data are based only on processed data, will be available at the aggregate level, and will be made available for specific evaluation and dissemination purposes. The OpenAT includes research data organized in open source form through mapping visualization, graphs and tables. Data are indeed collected to fulfil a list of indicators towards building a GIS mapping database. The qualitative and quantitative data collected are organized in logical and functional forms that help integrate information. Survey questionnaires will be conducted in compliance with the ethical guidelines applicable both in the EU and in the hosting institutions in the US. The interview forms will explicitly confirm that no sensitive personal data is collected in the TREnD project. If any such data is needed for the scientific integrity of the project, the nature of the data and the implementation measures will be ensured in compliance with the H2020 guidelines.

## 2 FAIR DATA

### 2.1 Making Data Findable, Including Provisions for Metadata

All data and metadata files will be uploaded onto a cloud storage and sharing facility specifically dedicated to the TREnD project (_http://www.cluds.unirc.it/trend/_). The main features of the TREnD cloud storage and sharing facility are as follows:

* access through the TREnD webpage;
* restricted access to registered users only, if needed;
* 1 TB of storage;
* sharing of documents/metafiles by/with intra- and extra-consortium users;
* document status and drafting can be checked online.

In addition to local storage, public metadata and datasets will be made available to users, once publications are available, on the TREnD website and the OpenAIRE sharing web platform. In particular, relevant TREnD metadata and datasets will be uploaded by the involved researchers to the Zenodo platform (_https://zenodo.org/_), compiling project-related information. This will enable automatic data extraction from the OpenAIRE platform, thus ensuring accessibility through a standard platform for Open Data access. The account at the Zenodo repository is created by the TREnD coordinator, including the TREnD community (_https://zenodo.org/communities/trend/_), where datasets as well as papers, reports and presentations will be published. The partnership, beneficiaries and partner organizations will follow the conditions, rules and regulations of the Zenodo repository – including the settings for accessing the datasets. Some data may have temporal access restrictions, namely an 'embargo period'. Temporal access restrictions on data are accepted to support PhD publishing activities. Nevertheless, data will be shared internally among all the beneficiaries of the project.
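As an illustration of this workflow, the following sketch uses Zenodo's public REST deposit API (in Python, assuming the `requests` library) to create a deposition, upload a file and attach metadata pointing to the TREnD community. The access token, file name and metadata values are placeholders, not real project records.

```python
# Illustrative sketch: depositing a TREnD dataset on Zenodo through its REST
# API and attaching it to the project community. Token, file name and
# metadata values are placeholders.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder token

# 1. Create an empty deposition.
r = requests.post(f"{ZENODO}/deposit/depositions", params=TOKEN, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload the data file into the deposition's file bucket.
with open("trend_dataset.csv", "rb") as fp:  # placeholder file
    requests.put(f"{dep['links']['bucket']}/trend_dataset.csv",
                 params=TOKEN, data=fp).raise_for_status()

# 3. Attach descriptive metadata, including the TREnD Zenodo community.
metadata = {"metadata": {
    "title": "TREnD example dataset",
    "upload_type": "dataset",
    "description": "Example quantitative survey data (placeholder record).",
    "creators": [{"name": "TREnD consortium"}],
    "communities": [{"identifier": "trend"}],
    "access_right": "open",
}}
requests.put(f"{ZENODO}/deposit/depositions/{dep['id']}",
             params=TOKEN, json=metadata).raise_for_status()

# Publishing the deposition then mints the citable DOI:
requests.post(f"{ZENODO}/deposit/depositions/{dep['id']}/actions/publish",
              params=TOKEN).raise_for_status()
```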
Major advantages could arise from the use of platforms providing datasets with a bibliographic citation and a Digital Object Identifier (DOI), allowing them to be identified, shared, published and cited. Only data related to publications will be made openly accessible by default. When necessary, all partners will decide on a case-by-case basis, at mid-term meetings, which data can be released in order to avoid issues related to Intellectual Property Rights (IPR) or access. To make data findable and accessible, Table 3 describes how each dataset, including metadata provisions, respects the FAIR requirements.

**Table 3 Datasets accessible and findable**

<table> <tr> <th> **Data Type** </th> <th> **Location** </th> <th> **Level of accessibility** </th> <th> **Type of availability and required software tools** </th> <th> **Information on metadata and additional data information** </th> </tr> <tr> <td> **Quantitative survey data: statistical data (economy, environment, demography, social climate, institutions and policy, spatial)** </td> <td> Coordinator University data repository and data repository users </td> <td> Repository users _http://www.cluds.unirc.it/trend/_ _https://zenodo.org/communities/trend/_ </td> <td> Accessible and searchable database, also with standard statistical software (e.g. SPSS) and open-source data analysis software (e.g. PSPP). </td> <td> Metadata will be deposited in the data repository </td> </tr> <tr> <td> **Literature & desk research** </td> <td> Coordinator University data repository or other public data repository </td> <td> Repository users </td> <td> Accessible and searchable database with an office package </td> <td> Metadata will be deposited in the data repository </td> </tr> <tr> <td> **Geospatial data** </td> <td> Coordinator University data repository or other public data repository </td> <td> Repository users </td> <td> Accessible and searchable with standard spatial analysis software (ArcGIS) and open-source geospatial data software </td> <td> Metadata will be deposited in the data repository </td> </tr> </table>

The TREnD project will adopt internationally accepted standards and protocols for the documentation and exchange of discovery and use metadata. This ensures interoperability at the discovery level within international systems and frameworks. Data generated by TREnD will include qualitative data from semi-structured interviews, focus groups, workshops and textual analysis. Dataset no. 1 will also collect data from revisiting previous studies and reviewing national and regional secondary data. It will additionally gather data from public statistics agencies, such as Eurostat and the US Census Bureau, at national and regional levels. Datasets will be formalized in structured databases for the purposes of the project, and they will be findable, accessible, interoperable and reusable (FAIR).

#### 2.1.1 Naming conventions, version number and search keywords

The datasets contain different data according to the scope they are collected for. It is important to standardize files by means of a naming convention and clear version numbers, so that the right files can be found quickly and easily. The following table describes the common language to use when referring to files and their locations. An example may be: PAU-1-N-i-demographic_dynamics-R1.xls.
**Table 4 File naming conventions**

<table> <tr> <th> **Author** </th> <th> **Dataset** </th> <th> **Type** </th> <th> **Database** </th> <th> **Description of file** </th> <th> **Round of Revision** </th> <th> **Extension** </th> </tr> <tr> <td> PAU PA AU UU NEU LATECH </td> <td> 1. Statistics 2. Literature Review 3. Geospatial data 4. Stakeholder 5. Questionnaire </td> <td> Numeric data (**N**); Text data (**TX**); Geo data (**GEO**) </td> <td> i. Socio-Economic (population dynamics; business environment, quality of life, economics); ii. Research and Innovation (Research-side); iii. Science and Technology (Development side); iv. Urbanization (physical and spatial dimension); v. Environment (natural resources, climate change) </td> <td> Name </td> <td> R (number) </td> <td> .doc .xls ---- </td> </tr> </table>

The search keywords will be subject to a specific dissemination strategy, related to the digital communication strategy underlying the project website.

### 2.2 Making data openly accessible

Data collection and case study analysis are key methodological aspects of the TREnD project and use open-data-driven tools. According to the Open Research Data pilot requirements, all generated data will be made publicly available, provided that such data do not represent any economic risk and do not give away critical information on the project's progress to any competing public or private entities. Nevertheless, some datasets may be restricted due to the nature of the data collected. Table 5 shows which datasets will be made openly available, providing the reasons when this is not possible.

**Table 5 Datasets: FAIR data – making data openly accessible**

<table> <tr> <th> **Data Type** </th> <th> **Data openly available (y/n)** </th> <th> **Justification** </th> <th> **Alternative solution** </th> </tr> <tr> <td> **Quantitative survey data: statistical data (economy, environment, demography, social, business environment, institutions and policy, spatial)** </td> <td> yes </td> <td> Not relevant </td> <td> Not relevant </td> </tr> <tr> <td> **Literature & desk research** </td> <td> yes </td> <td> Not relevant </td> <td> Not relevant </td> </tr> <tr> <td> **Geospatial data** </td> <td> yes </td> <td> Not relevant </td> <td> Not relevant </td> </tr> <tr> <td> **Stakeholders data collection** </td> <td> no </td> <td> Although the stakeholders' contact data are publicly available, their information will not be published in order to avoid potential misuse. </td> <td> The stakeholder information will be integrated into the public scientific reports. In case any institution or researcher is interested in contacting them, the coordinator can forward the list, provided no privacy concerns arise </td> </tr> <tr> <td> **Face-to-face interview recordings, transcripts, field notes and questionnaire data** </td> <td> no </td> <td> Data about interviews and questionnaires, including recordings, transcripts, field notes, etc., will not be published due to privacy issues </td> <td> The categorization, processing, analysis and interpretation of the data coming from these activities will be included in the scientific reports </td> </tr> <tr> <td> **Multimedia data related to the project communication and dissemination** </td> <td> yes </td> <td> Not relevant </td> <td> Not relevant </td> </tr> </table>

### 2.3 Making Data Interoperable and Increasing Data Re-Use

Data standardization procedures are essential to enable data re-use.
Standardization will be performed to encode unconventional data formats as well as data interfaces. Throughout the document, references to the documentation standards widely used by the scientific communities are provided. Unconventional data formats will mostly be encoded in common and open formats such as XLS, CSV and TXT to enhance interoperability. All data will be made available in standard/open formats compatible with commercial/open software, so as to maximize data exchange between researchers and institutions. In most cases, a standard vocabulary will be used for metadata description. The TREnD DMP recommends the use of Creative Commons attribution licenses for research data (e.g. creativecommons.org) and open access publications. TREnD data should be delivered no later than one year after the dataset is finalized. Discovery metadata shall be delivered immediately. Some data may have constraints (e.g. on access or dissemination) and may be exclusively available to project participants. Details will be evaluated during the project. The data will be made available under Open Licenses such as Creative Commons.

## 3 ALLOCATION OF RESOURCES

The cost of preparing the data following the specifications, and of the initial sharing, is covered by the project. Maintenance over time will be covered by involving other funds from the public body. This is the case for the OpenAT, which will need to be updated constantly. According to the general rules about open access publications, the project will assure the coverage of open data costs for research data to the extent that they are eligible for reimbursement under the conditions defined in the TREnD Grant Agreement. Concurrently, each beneficiary shall be responsible for applying for reimbursement of publication costs in open access journals.

## 4 DATA SECURITY

The web-based platform of the project is designed as a cloud server and managed with a protected access protocol by the Università Mediterranea of Reggio Calabria, PAU Department. At the end of the research project, the documents will be stored for five years, counting from the end of the project.

**Figure 6 The TREnD website**

All research data supporting publications will be made available for verification and re-use unless there are justified reasons for keeping specific datasets confidential. Intellectual Property Rights (IPR) management, including joint ownership, transfer of ownership provisions and rules on access rights, will be handled in accordance with Article 26 — Ownership of Results — of the Grant Agreement.

## 5 ETHICAL ASPECTS

### 5.1 Ethics requirements

The TREnD project deals with different kinds of data, based on interviews and on official and public statistical data warehouses at different geographic levels, so as to align the implementation phases with legal provisions and societal norms. The ethics self-assessment was conducted, and taken into consideration by the EU beneficiaries and TC partners, since the inception of the project, in order to get the proposal 'ethics-ready' for funding. Consequently, the Consortium agrees to carry out the RISE action in line with the highest standards of research integrity and, in particular, avoiding fabrication, falsification, plagiarism and/or other research misconduct.
Personal data will be treated during the project according to the provisions of Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data (DPA), and of the Italian Legislative Decree No. 196/2003. From 25 May 2018, the Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation – GDPR; Regulation (EU) 2016/679) applies. The objective of this new set of rules is to give citizens back control over their personal data and to simplify the regulatory environment for business. The data protection reform is a key enabler of the Digital Single Market, which the Commission has prioritized, and allows European citizens and businesses to fully benefit from the digital economy.

After the entry into force of the General Data Protection Regulation, the definition of 'personal data' includes "any information relating to an identified or identifiable natural person ('data subject'); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person" (see Article 4(1) GDPR).

Data gathered by the TREnD Project's members will be analysed and processed through different activities/methods, depending on the WP and the objective (see "data management"). The primary data will be structured and used in building the open access toolkit web platform, which represents the final output of the Project. The Project will not focus on data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, nor on the processing of data concerning health or sex life. Further, the consent form will request the data subject's authorisation to process the data provided.

OpenAT – the Open Access Toolkit – represents the final result of the TREnD Project and will be shared with the local community, entrepreneurs, public authorities and administrations, and other research institutions. It is based on processed data, available at aggregate level and made available for specific evaluation and dissemination purposes. Data collection and case study analysis are key methodological aspects of the TREnD project, which relies on open-data-driven tools. Ethical standards and guidelines of Horizon 2020 will be rigorously applied throughout the whole TREnD project implementation, regardless of the country in which the research is carried out.

As described in the previous chapter, data shall be formalized in structured databases for the purposes of the project, and they will be findable, accessible, interoperable and reusable (FAIR). TREnD adheres to the Open Research Data pilot, and thus all generated data will be made publicly available, provided that the data pose no economic risk and do not hand critical information on the project's progress to competing public or private entities. The TREnD DMP is consistent with the intellectual property rights (IPR) policy (i.e., IP, confidentiality and publication provisions).
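As an illustration of the discovery metadata that makes such datasets findable, the sketch below shows a minimal record (the field names loosely follow the common DataCite/Zenodo style, and every value is invented rather than taken from an actual TREnD deposit):

```python
import json

# Illustrative discovery-metadata record for an open TREnD dataset.
metadata = {
    "title": "TREnD socio-economic indicators (aggregated, regional level)",
    "creators": [{"name": "PAU Department, Università Mediterranea di Reggio Calabria"}],
    "description": "Aggregated indicators supporting the WP1-WP3 case studies.",
    "keywords": ["transition management", "resilience", "cohesion policy"],
    "license": "CC-BY-4.0",        # Creative Commons attribution, as recommended above
    "publication_date": "2021-06-30",
}
print(json.dumps(metadata, indent=2, ensure_ascii=False))
```

A record of this kind would accompany each dataset deposit so that catalogues and search engines can index it.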
In line with these commitments, the consortium declares its willingness to comply with Article 34.2 of the Grant Agreement, and will therefore obtain any authorisation required under European law and provide any documentation to the REA upon request.

### 5.2 The ethical issues

The TREnD project focuses on the creation of an open access toolkit, based on an interactive platform that shares data with different users, policymakers and end-users. The Data Management Plan provides regulations for the activities dealing with data, the involvement of participants and project-internal procedures. It is important to establish a project environment that stimulates responsible behaviour and a reflexive attitude in relation to ethical issues. In this perspective, data management will include the personal data protection measures to be adopted by all TREnD project members. Knowledge created by the research activities will be transferred into action through OpenAT, in order to provide new services for local communities of entrepreneurs, local policy-makers and public authorities.

#### 5.2.1 Research methodology and the potential impact

The TREnD project relies on three levels of data sources: i) official and public statistical data warehouses at different geographic levels (city – region – country – EU – non-EU); ii) online and face-to-face interviews; iii) surveys. The first includes statistical data, geographical data and demographic data. The second includes data gathered through interviews of public and private stakeholders. The third includes data deriving from observation, reports and inquiries. At the end of the project, the collected data will remain in a repository for five years, and participants can ask for the information they provided to be deleted by contacting the Coordinator of the TREnD project. Paragraph 5.3 explains in detail the management of the personal data collected and the consent procedures.

The project must follow the legal requirements established by the European Commission and national authorities in the areas of inquiry, in particular concerning data protection and privacy. Project members must display research integrity in their work and adhere to common, established research practices, such as intellectual honesty, accuracy and transparency, in their project activities. The priority of avoiding any source of discrimination (by gender, race, etc.) will also be stressed. It is important to consider and reflect on the possible implications of the TREnD project outcomes for research participants (individuals as well as institutions) and for society as a whole. Each member of the team will treat all these areas as of utmost importance in carrying out the research project. These areas are guaranteed through internal procedures and respect of the European directives.

#### 5.2.2 Data Management

The data in the TREnD project range from aggregate, quantitative data to detailed qualitative data. The WP activities will be performed so as to fulfil the ethical requirements, according to the research objectives pursued in each WP, as detailed below: **WP1** sets the stage and provides the frame of reference from which to develop the conceptual framework and assessment methodology to integrate transition and resilience-building policies. **WP2** and **WP3** are mainly devoted to case-study analysis of the potential for implementing transition management in an approach tailored to local contexts.
A continuous evaluation and synthesis will be carried out in workshops with community and stakeholder involvement, as well as in the mid-term meetings, which help the consortium to consistently review the planned research activities. **WP4** is dedicated to the design, development and testing of an online open access toolkit which will allow dynamic visualization of different indicators and datasets for different areas and periods of time, with the purpose of facilitating knowledge-based policy making towards territorial cohesion and regional development. The OpenAT will be empowered by the decision of the consortium to take part in the "Research Data Pilot – Article 29.3 – Open access to research data".

According to the H2020 ethics guidance checklist (version 5.3, 21 February 2018), the project fulfils the ethics issues checklist across its tasks and WPs, wherever the research activities are performed, as reported in the following two tables.

**Table 6 TREnD project Tasks and WPs**

<table>
<tr> <th> **TASK** </th> <th> **Tools used** </th> <th> **WP** </th> <th> **Ethical requirement fulfilled** </th> </tr>
<tr> <td> **Data Collection** </td> <td> * Spatial data analysis * Longitudinal studies * Desk analysis of official documents </td> <td> 1, 2, 3 </td> <td> Data are gathered on the basis of an assessment methodology fulfilling a list of indicators from the literature, towards building a GIS mapping database. The qualitative and quantitative data collected are organized in logical and functional forms that help integrate the information. </td> </tr>
<tr> <td> **Surveys** </td> <td> Survey Form </td> <td> 1, 2, 3 </td> <td> Survey questionnaires will be conducted in compliance with the ethical guidelines applicable both in the EU and in the hosting institutions, and will not concern personal data or political opinions. </td> </tr>
<tr> <td> **Interviews** </td> <td> Informed Consent Form </td> <td> 1, 2, 3 </td> <td> The interview forms will explicitly confirm that no sensitive personal data are collected in the TREnD project. Should any be needed for the scientific integrity of the project, the nature of the data and the implementation measures will be ensured to comply with the H2020 guidelines. </td> </tr>
<tr> <td> **OPENAT** </td> <td> Online toolkit </td> <td> 1-4 </td> <td> The data integrated in the open access toolkit will rely on materials used only for research purposes and on official open data collections. </td> </tr>
</table>

**Table 7 Ethics Issues checklist table**

<table>
<tr> <th> **PROTECTION OF PERSONAL DATA** </th> <th> **YES/NO** </th> <th> **Information to be provided** </th> <th> **Documents to be provided/kept on file** </th> </tr>
<tr> <td> **Does your research involve personal data collection and/or processing?** </td> <td> YES </td> <td> Data deriving from interviews (interview forms and digital recordings) and observation will be stored in the CLUDs Lab facilities at the University of Reggio Calabria (desktops); data will be protected through password encryption; an informed consent form will be provided to the participants upon gathering. </td> <td> The Coordinator will request all the required authorisations from the competent National Authorities, both for personal data protection and concerning the processing of personal data in the electronic communication sector. </td> </tr>
<tr> <td> **IF YES:** – Does it involve the collection or processing of sensitive personal data (e.g. health, sexual lifestyle, ethnicity, political opinion, religious or philosophical conviction)? </td> <td> NO </td> <td> </td> <td> </td> </tr>
<tr> <td> – Does it involve processing of genetic information? </td> <td> NO </td> <td> </td> <td> </td> </tr>
<tr> <td> – Does it involve tracking or observation of participants (e.g. surveillance or localization data, and WAN data, such as IP addresses, MACs, cookies, etc.)? </td> <td> NO </td> <td> </td> <td> </td> </tr>
</table>

### 5.3 Protection of personal data

The University of Reggio Calabria (coordinator of the TREnD project) has appointed **Alessandro Andriani**, lawyer, employed at the University office of legal and administrative affairs, as Data Protection Officer (under Articles 37 et seq. of EU Regulation 2016/679 on the protection of personal data, GDPR). The contact details are as follows: **Office**: Servizio Speciale Affari Legali, Contenzioso del Lavoro ed Attività Negoziali (Legal and Administrative Affairs); **Address**: Salita Melissari Feo di Vito – Università Mediterranea di Reggio Calabria – Italy; **Phone**: +39 0965 1691365; **Fax**: +39 0965 27901; **E-mail**: [email protected]

The contact details of the Data Protection Officer will be made available to all data subjects involved in the research. The partner organisations – third-country partners – Northeastern University of Boston (MA) and Louisiana Tech University (LA) are not required to appoint a DPO under the GDPR, and a detailed data protection policy for the project will be kept on file. Personal data (see Article 4(1) GDPR) collected during the TREnD project will be treated according to the national and EU legislation on privacy and data collection. The Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation – GDPR; Regulation (EU) 2016/679) is applied. No sensitive personal data will be collected in the TREnD Project.

#### 5.3.1 Technical measures

Personal data may be collected during events organised by TREnD project partners and through interviews for research purposes. The personal data collection will be made according to the following technical measures:

* Before any data collection event and/or interview, participants will be informed about: (1) the aim and scope of the event and of the interview; and (2) how their data will be collected, stored and protected, and either destroyed or reused at the end of the research; (3) they will be requested to provide their informed consent in advance.
* Interviews and questionnaires will be organized in an "Interview form" that, together with the "Survey form", allows data to be collected for the selected case studies in Boston (MA) and Ruston (LA), with respect to the topics of the research.
* The qualitative and quantitative data are organized in the form of interviews with selected actors who play official roles within the selected case studies.
* The anonymization of data will take into consideration the data features, environment and utility. In particular, the data features concern two kinds of sources: interviews, and public/official data warehouses for statistical surveys. Anonymization techniques will be applied to data gathered through interviews; a minimal sketch of such a step is given after this list.
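The sketch below (Python; the record fields and workflow are illustrative assumptions, not the project's prescribed procedure) shows the basic idea of separating identifying information from interview content at the data entry stage:

```python
import secrets

def pseudonymize(interviews):
    """Split interview records into an anonymized dataset and a separately
    stored key table linking pseudonyms back to identities."""
    anonymized, key_table = [], []
    for record in interviews:
        pseudonym = f"TREnD-{secrets.token_hex(4)}"   # random, unguessable label
        key_table.append({"pseudonym": pseudonym,
                          "name": record["name"], "email": record["email"]})
        anonymized.append({"id": pseudonym,
                           "role": record["role"], "answers": record["answers"]})
    return anonymized, key_table

# 'keys' would stay only in the protected repository at the University of
# Reggio Calabria; 'data' is what researchers analyse, share and publish from.
data, keys = pseudonymize([{"name": "Jane Doe", "email": "[email protected]",
                            "role": "city representative", "answers": ["..."]}])
```

Because the key table never leaves the protected repository, the shared dataset on its own cannot be traced back to an individual.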
In accordance with the consent form, anonymization of data will be ensured by removing personal identification information from the gathered data. The project Coordinator will apply a procedure to remove personal identification at the data entry stage, keeping the personal information (gathered in face-to-face interviews) in a repository at the University of Reggio Calabria, in agreement with the Data Protection Officer's requirements. In the case of online interviews, anonymization techniques will be applied from the preparation of the questionnaire through to data entry, guaranteeing the anonymization of any data that could personally identify the interviewee.

#### 5.3.2 Organizational measures

The organizational measures are intended to offer the highest guarantees to safeguard the rights of the research participants:

* Data will be shared among the project partners and used for scientific purposes by the practitioners involved in the TREnD Project, such as writing articles.
* Data will be analysed and presented anonymously, and will finally be filed and protected in the database included in the Web-based platform of the project.
* If a participant in a semi-structured interview regarding research activities wants the data already gathered to be removed, he/she may exercise this right by addressing the request to the Data Protection Officer appointed by the University of Reggio Calabria, Coordinator of the project, within one month of participation.
* The Web-based platform is designed as a Cloud service and managed with a protected access protocol by the Università Mediterranea of Reggio Calabria, PAU Department.
* At the end of the research project, the documents will be stored for five years, counting from the end of the project.
* Digital data will be stored on a LAN server at the Università Mediterranea of Reggio Calabria, PAU Department. Paper documents will be stored in a secured locker at the Università Mediterranea of Reggio Calabria, PAU Department. The LAN servers will be accessed with a password by PAU Department personnel only. These data can be made available to other scientific practitioners on request.

The Università Mediterranea of Reggio Calabria, PAU Department is a body authorized to manage sensitive data pursuant to Presidential Decree (Italy) 318/1999. When personal data are transferred from the US to the EU, it is confirmed that such transfers will comply with the laws of the country (the US) in which the data were collected. Should activities undertaken in the US raise ethics issues, it is confirmed that the research conducted outside the EU is legal in at least one EU Member State. Ethical standards and guidelines of Horizon 2020 will be rigorously applied throughout the whole TREnD project implementation, regardless of the country in which the research is carried out.

#### 5.3.3 Informed consent procedures

The informed consent procedures regarding data processing are used both for participation in the events organised by the TREnD project partners and for participation in semi-structured interviews regarding research activities. Informed consent is meant to guarantee voluntary participation in the research and to address privacy issues. The informed consent process consists of three components: adequate information (e.g. research goals, event scope), voluntariness (e.g. the possibility to refuse participation) and competence (e.g. awareness of the consequences).
In detail,

* information on the Project will be provided to any participant in an Information Sheet describing the purposes of the research, the organisation and funding of the Project, as well as what will happen with the results of the research;
* any participant will be given the opportunity to ask questions about the project and his/her participation;
* any participant will agree to participate voluntarily in the project;
* any participant will be able to withdraw his/her participation at any time without giving reasons, and will neither be penalised for withdrawing nor be questioned on the reasons for withdrawal;
* anonymization of data will be ensured by removing personal identification information from the gathered data, resulting in anonymized data that cannot be associated with any individual. The procedures regarding confidentiality will be explained to every participant;
* the project findings are expected to be published in public reports. Anonymized data may be used in policy notes, at conferences and workshops, and as communication material;
* any participant will consent to the data gathered being stored and used for academic purposes, namely in the context of the TREnD Project;
* any participant will agree to sign and date an 'Informed Consent Form'. Consent forms and information sheets will be provided in English. One copy of the form is to be given to the Coordinator University researcher and one copy is for the interviewee to keep for his/her records. Consent forms (in their paper version) will be collected in a specific repository at the Università Mediterranea of Reggio Calabria, PAU Department;
* any participant may exercise his/her individual rights over his/her personal data (including, e.g., the right to access, the right to ask for rectification of, and the right to delete personal data) by sending an email to the Data Protection Officer (DPO) appointed by the University of Reggio Calabria, Coordinator of the project. Any request to exercise a participant's rights is free of charge and will be addressed by the DPO as early as possible, and always within one month.

#### 5.3.4 Data Processing

Primary data gathered by the TREnD Project's members will be analysed and processed through different activities/methods, depending on the WP and the objective (Table 1). Within WP1, data will be collected through desk analysis and literature review in order to build the conceptual base of the project. WP2 and WP3 will focus on case studies to be developed during the secondments at the host institutions in the USA: Boston (MA) and Ruston (LA). Qualitative and quantitative data will be collected by interviewing (in person, by video/phone calls or online) selected actors who play key roles within the cases. Collected data will be recorded in the "Interview form" and the "Survey form", accompanied by the purposely designed consent form. WP4 will implement the resulting indicators within the open access toolkit (Research Data Pilot, Art. 29.3 of the GA). The Project will not focus on data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, nor on the processing of data concerning health or sex life. Further, the consent form will request the data subject's authorisation to process the data provided. Two templates of informed consent form are defined, the former regarding participation in interviews, the latter regarding participation in TREnD events.
Both templates take the form of a handout that introduces the participant to the research project and states the objectives and the ethical procedures that will be followed. The Informed Consent Form includes the information sheet, which provides clear, accessible and sufficient information to a prospective research volunteer so that they can make an informed decision on whether to take part in the research. In the case of participation in a semi-structured interview, the participant will find a Research Ethical Consent Form that he/she needs to fill out and sign. The informed consent forms (in language and terms intelligible to the participants) will be included in the grant agreement and kept on file. The template informed consent forms and the participant information sheet are provided below.

**Participant Information Sheet**

**TREnD Project Transition with Resilience for Evolutionary Development**

_You are being invited to take part in this research project. We invite you to read this Participant Information Sheet, as we would appreciate discussing and sharing your specific experiences and views on the key topics of our project, about which you may talk to anybody you feel comfortable with. Please take some time to reflect on whether you would like to participate or not. If there is anything you do not understand in this information sheet, feel free to ask any questions at any time._

_Purpose of the Project_

The TREnD "Transition with Resilience for Evolutionary Development" project is based on a research project integrated with a higher education agenda, aimed at strengthening the regional capabilities in triggering, implementing and managing Transition Management (TM) strategies towards driving "resilience-building" processes. The main aim is to combine Transition with Resilience for Evolutionary Development (TREnD) in different territorial contexts towards a reform process of Cohesion Policy. TREnD is a ground-breaking proposal envisioning TM in this framework to address regional economic diversification. Specifically, the research project seeks to: 1. identify and examine the factors enabling or hindering transition strategies from a governance standpoint; 2. assess the territorial features critical to enabling a resilience-building process; 3. unveil the unexploited potential for **"re-shaping trajectories"** disclosed through the windows of local opportunity arising from the external shocks regions are continuously exposed to. This project has received funding from the [specific data on the grant agreement will be added]. The research project is carried out through a partnership between higher education institutions: University of Reggio Calabria (IT), Utrecht University (NL), Aristotle University of Thessaloniki (GR), University of Palermo (IT), Northeastern University of Boston (MA), and Louisiana Tech University (LA).

_Participant Selection_

You are invited to contribute to this project due to your experience as a city representative, academic, representative of civil society, or representative of a private stakeholder.

_Voluntary Participation_

Your participation in this research is entirely voluntary. You can choose either to participate or to decline the invitation. You can withdraw at any time and request the data to be deleted, or request access to the data (including, e.g.,
the right to access, the right to ask for rectification of, and the right to delete personal data) by sending an email to the Data Protection Officer (DPO) appointed by the University of Reggio Calabria, Coordinator of the project, as specified in the section "Rights of research participants".

_Procedures and Confidentiality_

The information collected is confidential and the data gathered will be anonymized. Any summary interview content, or direct quotations from the interview, made available through academic publication or other academic outlets will be anonymized so that you cannot be identified, and care will be taken to ensure that other information in the interview that could identify you is not revealed. Any variation of the conditions above will only occur with your further explicit approval, or a quotation agreement can be incorporated into the consent form. In this case, with regard to being quoted, please initial next to any of the statements that you agree with:

I wish to review the notes, transcripts, or other data collected during the research pertaining to my participation.

I agree to be quoted directly.

I agree that the researchers may publish documents that contain quotations by me.

_Risks_

The survey might potentially include sensitive and personal issues (i.e. political opinions, cultural values). These kinds of personal data shall be processed fairly and lawfully, and shall not be further processed in any manner incompatible with the initial purpose. Moreover, you do not have to answer any question that might make you feel uncomfortable.

_Reimbursement_

There will be no reimbursement for your participation.

_Data storage_

All data will be stored for five years, counting from the end of the project. These data can be made available to other scientific practitioners on request.

_Sharing the Results_

The project findings are expected to be published in public reports. The data, for example, will be used in policy notes, at conferences and workshops, and as communication material, according to the procedures and confidentiality statement.

_Identification of investigators_

If you have any questions or concerns about the research, please feel free to contact:

* Primary contact, Coordinator, University of Reggio Calabria (IT): Carmelina Bevilacqua, E-mail: [email protected];
* Contact person at Utrecht University (NL): Pierre-Alexandre Balland, E-mail: [email protected];
* Contact person at Aristotle University of Thessaloniki (GR): Christina Kakderi, E-mail: [email protected];
* Contact person at University of Palermo (IT): Vincenzo Provenzano, E-mail: [email protected]

_Rights of research participants_

You may withdraw your consent at any time and discontinue participation without penalty. You are not waiving any legal claims, rights or remedies because of your participation in this research study. If you have questions regarding your rights as a research participant, contact the Data Protection Officer, Alessandro Andriani, University of Reggio Calabria. The contact details are as follows: Office: Servizio Speciale Affari Legali, Contenzioso del Lavoro ed Attività Negoziali (Legal and Administrative Affairs); Address: Salita Melissari Feo di Vito – Università Mediterranea di Reggio Calabria – Italy; Phone: +39 0965 1691365; Fax: +39 0965 27901; E-mail: [email protected].
**Informed Consent Form**

**TREnD Project Transition with Resilience for Evolutionary Development**

_By signing this letter of consent, you acknowledge that you have been informed of the purpose and nature of the research and that the information you provide will remain anonymous._

<table>
<tr> <th> **I, the undersigned, confirm that (please tick box as appropriate):** </th> <th> **YES** </th> <th> **NO** </th> </tr>
<tr> <td> I have read and understood the information about the TREnD Project, as provided in the Participant Information Sheet </td> <td> </td> <td> </td> </tr>
<tr> <td> I have been given the opportunity to ask questions about the project and my participation </td> <td> </td> <td> </td> </tr>
<tr> <td> I voluntarily agree to participate in the project </td> <td> </td> <td> </td> </tr>
<tr> <td> I understand I can withdraw at any time without giving reasons and that I will not be penalised for withdrawing nor will I be questioned on why I have withdrawn </td> <td> </td> <td> </td> </tr>
<tr> <td> The procedures regarding confidentiality (e.g. use of names, anonymization of data, etc.) have been clearly explained to me </td> <td> </td> <td> </td> </tr>
<tr> <td> The use of the data in sharing, archiving, dissemination and publications has been explained to me </td> <td> </td> <td> </td> </tr>
<tr> <td> I consent to the data gathered being used for this study </td> <td> </td> <td> </td> </tr>
<tr> <td> I agree to sign and date this informed consent form </td> <td> </td> <td> </td> </tr>
</table>

<table>
<tr> <th> **Name:** </th> <th> </th> <th> </th> </tr>
<tr> <td> **Age:** </td> <td> </td> <td> </td> </tr>
<tr> <td> _I confirm I am 18 years of age or over_ </td> <td> **YES** </td> <td> **NO** </td> </tr>
<tr> <td> **Gender identity** </td> <td> **FEMALE** </td> <td> **MALE** </td> </tr>
<tr> <td> **Organization:** </td> <td> </td> <td> </td> </tr>
<tr> <td> **Role:** </td> <td> </td> <td> </td> </tr>
<tr> <td> **Date:** </td> <td> </td> <td> </td> </tr>
<tr> <td> **Signature:** </td> <td> </td> <td> </td> </tr>
</table>

**Informed Consent Form – Participation in an Event of the Project when photographs/videos/tape recordings will be taken**

**TREnD Project Transition with Resilience for Evolutionary Development**

TITLE OF THE EVENT

<table>
<tr> <th> **Name:** </th> <th> </th> <th> </th> </tr>
<tr> <td> **Age:** </td> <td> </td> <td> </td> </tr>
<tr> <td> _I confirm I am 18 years of age or over_ </td> <td> **YES** </td> <td> **NO** </td> </tr>
<tr> <td> **Gender identity** </td> <td> **FEMALE** </td> <td> **MALE** </td> </tr>
<tr> <td> **Organization:** </td> <td> </td> <td> </td> </tr>
<tr> <td> **Role:** </td> <td> </td> <td> </td> </tr>
</table>

Hereby authorizes _The TREnD PARTNER ……_ to video/audio record and/or web-stream the event. _**The TREnD PARTNER ……**_ undertakes that, in respect of any video/audio recordings made, every effort will be made to ensure professional confidentiality, and that any use of the video/audio recordings, or descriptions of them, will be for professional purposes only and in the interest of improving research activities. There will be no reimbursement for video/audio recording and/or web streaming.

<table>
<tr> <th> **Date:** </th> </tr>
<tr> <td> **Signature:** </td> </tr>
</table>

If you have any questions about this event, feel free to ask the Organizer at any time.
Data will be treated in compliance with national and EU legislation (GDPR).

**Third countries**

Should activities undertaken in non-EU countries raise ethics issues, it is confirmed that the research conducted outside the EU is legal in at least one EU Member State.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0295_SMARTFISH_773521.md
# Executive summary

This document presents SMARTFISH H2020 deliverable D11.3, the Data Management Plan (DMP). A data management plan is the key element of good data management. The SMARTFISH H2020 project's datasets will be as open as possible and as closed as necessary, focusing on sound data management for the sake of best research practice, in order to create value and to foster knowledge and technology from the data obtained during the project period. The deliverable describes the data management life cycle for the data to be collected, processed and/or generated by the SMARTFISH H2020 project, accounting also for the necessity of making research data findable, accessible, interoperable and reusable (FAIR). SMARTFISH H2020 partners will be encouraged to adhere to sound data management to ensure that the obtained data are well-managed, archived and preserved. Data preservation is synonymous with data relevance since, when it is achieved: (1) data can be reused by other researchers; (2) data collectors can direct requests for data to the database itself, rather than addressing requests for data individually; (3) preserved data have the potential to lead to new, unanticipated discoveries; (4) preserved data prevent the duplication of scientific studies that have already been conducted; and (5) archiving data insures against loss by the data collector.

The main issues addressed in this deliverable include: (1) the purpose of data collection; (2) data type, format, size and beneficiaries; (3) use of historical data; (4) making data FAIR; (5) data management support; (6) data security; and (7) ethical aspects in terms of human subjects. The ambition of SMARTFISH H2020 is to use data analytically for the fisheries, for example through improved analysis of operational data, tools for planning and operational choices, and crowdsourcing methods for fish stock estimation.

# SMARTFISH H2020 motivation and background

With pressure on marine resource extraction mounting, and resultant calls for sustainability in the sector, SMARTFISH H2020 will develop, test and promote a suite of high-tech systems that will optimize resource efficiency, improve automatic data collection, provide evidence of compliance with fishery regulations and reduce the ecological impact of the sector on the marine environment (Figure 1). SMARTFISH H2020 will exploit and further develop existing technological innovations in machine vision, camera technology, data processing, machine learning, artificial intelligence, big data analysis, smartphones/tablets, LED technology, acoustics and ROV technology. The developments will assist commercial fishers throughout Europe in making informed decisions during the pre-catch, catch and post-catch phases of the harvesting process. SMARTFISH H2020 will also provide new data for stock assessment from commercial fishing and improve the quality and quantity of data that come from traditional assessment surveys. This provides the potential for more accurate assessment of fish stocks and allows for the assessment of stocks that are currently data-poor and therefore difficult to manage. In addition, the project will access automatically collected catch data from the fisheries, which will also allow management regulations to achieve higher compliance rates.
Figure 1: Conceptual structure of SMARTFISH H2020, linking the enabling technologies (machine vision, camera technology, data processing, machine learning, artificial intelligence, big data analysis, smartphones/tablets, LED light technology, acoustic technology, ROV technology), the SMARTFISH systems built on them, the fishery test and demonstration areas (Barents Sea, Norwegian Sea, North Sea, West of Scotland, Kattegat & Skagerrak, Celtic Sea, Bay of Biscay, Mediterranean Sea, Black Sea), and the intended impacts.

## Role of the deliverable

The Data Management Plan presented in this deliverable is a partial fulfilment of Task 11.4: Data Management, where SMARTFISH H2020 emphasizes that it will _"…participate in the Horizon 2020 Open Research Data Pilot, and will therefore aim to make data publicly accessible to the extent possible, while at the same time protecting sensitive data from inappropriate access."_ The task furthermore specifies that _"This task deals with defining the data sets, deciding on standards for data and metadata, as well as sharing, archiving and preserving data. The goal is to make data FAIR (findable, accessible, interoperable and re-usable)"._ The FAIR principle ensures that data, in this case produced during the project period, can be discovered through catalogues or search engines, are accessible through open interfaces, comply with standards for interoperable processing, and can therefore easily be reused. Given that a priority of this task is to define the data sets and decide on standards for data and metadata and on the sharing, archiving and preservation of said data, the role of the current deliverable is to set out the project plan for achieving this aim. The plan will therefore include standards for data formats, metrics, consistency and quality control, as well as procedures for making data available for sharing and for regular backups.

The types of data that will be generated during the project period include, but are not limited to:

1. Test data collected in WPs 6-10
2. R&D test results and analyses of test results
3. Analyses, reviews and reports of the catch equipment engineering literature
4. Software code and algorithms
5. Machine-generated data
6. Patent analyses
7. Design data, including CAD/CAM outputs

We will discuss these in detail in the following sections.

## Relationship with other deliverables

The deliverable presented in this document is related to the following deliverables:

* D11.1 – Title: Project Dissemination and Exploitation Plan (M3)
* D11.6 – Title: Updated Data Management Plan (M26) – This will be an update of the current deliverable.
## Contributors

The following partners have contributed to this deliverable:

* SINTEF Ocean
* AZTI
* ZUNIBAL
* DTU AQUA

# Data Management

The SMARTFISH H2020 project will produce a large amount of data relating to the development, adaptation and implementation of the proposed concept. The project will also generate data on fishing vessels, fish schools, fish stocks, catches and by-catches. Data management is therefore a core aspect of the project, and it will also affect the dissemination and commercialization activities in WP11. The data will be used both throughout and beyond the project period. As such, data management is crucial to both the long-term end-user benefits and the impacts of the SMARTFISH H2020 project. While the partner producing the data will retain the exclusive rights to use the data for dissemination, or to share their use with other partners, the data will be made available among all the partners via the secure data repository, whose login credentials have been shared specifically with the WP leaders and assigned personnel.

## Standards to be used

All data generated and collected by the individual participants will be curated and stored on secure servers according to the individual participant's current processes and standards. Each participant maintains the highest level of security for all their databases, and each has implemented a hierarchical access system to prevent unauthorised, inappropriate or malicious data manipulation or destruction. All data access, sharing and exploitation will be subject to the relevant regulations and controls implemented through the Grant Agreement between the participants and the EC/REA, and through the Consortium Agreement. All databases will continue to be maintained throughout and after the end of the SMARTFISH H2020 project to enable verification and reuse of data by consortium members and third parties as appropriate.

## Background data

An Annex of the Consortium Agreement lists all the Background intellectual property brought to the project by the participants for use in the project and in the post-project exploitation phase. The participants may also specifically exclude background IP from use in the project. This agreement is confidential and limited to project participants.

## Data from project period

All the data arising during the project will likewise be considered project results, and subject to the regulations and controls implemented through the Grant and Consortium Agreements. The data will form part of the project results and will be treated as such with respect to the content of those agreements.

### Storage

Data will be curated and stored first by the individual partners, according to their current processes. The data from tests, etc. will be curated by the Task Leaders and then by the Work Package Leaders. Ultimate responsibility for curation and preservation, however, lies with the Coordinator.

### Open access

Everyone from citizens to civil servants, researchers and entrepreneurs can benefit from open access data. In this respect, the aim is to make effective use of open access data that already exists. This sort of data is already available in the public domain and is not within the control of the SMARTFISH H2020 project. All available data lie on a scale between closed and open access, because there are variances in how information is shared between the two points of the continuum.
Closed data might be shared with specific individuals within a corporate setting, or within a research group, for example. Open access data, on the other hand, though it may require the user to give attribution to the contributing source, is completely available to the end user. Generally, open access data differs from closed data in three key ways, in that it is:

1. Accessible, usually via a data warehouse on the internet;
2. Available in a readable format;
3. Licensed as open source, which allows anyone to use the data or share it for noncommercial (or commercial) purposes.

Closed data, on the other hand, restricts user access to specific sources of information in several potential ways, primarily in that it is:

1. Only available to certain individuals within an organization;
2. Patented or proprietary;
3. Partially restricted to certain groups;
4. Open to the public through a license for a fee or through some other prerequisite;
5. Difficult to access, such as paper records that have not yet been digitized.

Examples of closed data are information that requires a security clearance, health-related information collected by a hospital or insurance carrier, or personal tax returns.

Though open access to data is the norm, data management must also ensure that all project results/items of new knowledge are assessed for their value and then protected, if required/desirable, before their exploitation and dissemination. Therefore, some data, such as those pertaining to patent applications or data having direct commercial value for commercial fishers, will have to be kept confidential until such time as they have been declassified. The rules and timescales for establishing the need for protection and for undertaking that protection are laid out in the Consortium Agreement. They are designed to minimize any embargo, so that timely scientific and technical publications may be made during and following the project with the minimum of delay.

In the SMARTFISH H2020 project, open access data will be published on an OpenAIRE 3.0 compliant data repository, such as Zenodo (www.zenodo.org). All other data will be closed access by default, curated on non-public data repositories owned by the respective partner(s) that generated the data. This closed access data may also be shared between the project consortium members. Open access source code will be made available on the GitHub code repository (www.github.com) as well as on Zenodo.

# Types of data

SMARTFISH H2020 uses and needs different data types, formats and sources in different work packages and regional seas. Below is an overview of the final products to be produced in the project period and the data types, formats and sources needed for each. This data inventory describes the required data. The table includes the type of data, data ownership, management, use, distribution and access rights to the data.

Table 1: Final products to be produced and data types, formats and sources needed.

<table>
<tr> <th> No. </th> <th> Products </th> <th> Data needs (use, distribution) </th> <th> Data format </th> <th> Data sources (open access? Access rights?) </th> </tr>
<tr> <td> 1 </td> <td> **SeinePrecog** </td> <td> R&D and testing use at AZTI, ZUNIBAL, SINTEF Ocean and NTNU, open access publication </td> <td> Acoustic echograms/video data/numerical databases
(EK60/EK80 raw data, *.mp4, *.avi, *.xls) </td> <td> Closed data shared among partners involved in R&D and tests at sea </td> </tr>
<tr> <td> 2 </td> <td> **FishFinder** </td> <td> R&D and testing use at SINTEF Digital, Marport and DTU Aqua, open access publication </td> <td> 2–4D camera images and video, acoustic echograms </td> <td> Closed access to test and R&D data, open access for R&D data used in scientific publication </td> </tr>
<tr> <td> 3 </td> <td> **TrawlMonitor** </td> <td> R&D and testing use at SINTEF Digital, Marport and DTU Aqua, open access publication </td> <td> 2–4D camera images and video, acoustic echograms </td> <td> Closed access to test and R&D data, open access for select test and R&D data used in scientific publication </td> </tr>
<tr> <td> 4 </td> <td> **NephropsScan** </td> <td> R&D use at DTU and SINTEF Ocean, open access publication, data set deposit at OpenAIRE-compliant repository </td> <td> Unlabeled and labeled images and video </td> <td> Closed access to test and R&D data, open access for select test and R&D data used in scientific publication </td> </tr>
<tr> <td> 5 </td> <td> **SmartGear** </td> <td> R&D use at DTU and SINTEF Ocean, open access publication, data set deposit at OpenAIRE-compliant repository </td> <td> Unlabeled and labeled images and calibration parameters </td> <td> Closed access to test and R&D data, open access for select test and R&D data used in scientific publication </td> </tr>
<tr> <td> 6 </td> <td> **CatchScanner** </td> <td> Internal use at Melbu Systems, R&D use at University of East Anglia and SINTEF Ocean </td> <td> Unlabeled and labeled images and calibration parameters </td> <td> Closed access </td> </tr>
<tr> <td> 7 </td> <td> **CatchMonitor** </td> <td> R&D use at Marine Scotland and University of East Anglia, open access publication, data set deposit at OpenAIRE-compliant repository </td> <td> Unlabeled and labeled images and calibration parameters </td> <td> Closed access to test and R&D data, open access for select test and R&D data used in scientific publication </td> </tr>
<tr> <td> 8 </td> <td> **CatchSnap** </td> <td> R&D use at SINTEF Ocean, open access publication, data set deposit at OpenAIRE-compliant repository </td> <td> Unlabeled and labeled images, IMU data and calibration parameters </td> <td> Closed access to test and R&D data, open access for select test and R&D data used in scientific publication </td> </tr>
<tr> <td> 9 </td> <td> **FishData** </td> <td> R&D use at SINTEF Ocean and AZTI, open access publication </td> <td> Time series from on-board systems in NetCDF format. Data from third-party sources (e.g. web services) in various formats (e.g. JSON). </td> <td> Closed access to data from/about individual vessels and companies. Open access to selected aggregate data and results on a fleet level. </td> </tr>
</table>

## Test data collected in WPs 6-10

Two sets of test data will be collected in WP6, in two bottom trawl fisheries in the Barents Sea – Atlantic cod and deep-water shrimp:

1. From the CatchScanner prototype: This data will be used for R&D development and verification of CatchScanner for these fisheries. This data is closed access, and will be curated by Melbu Systems on their local data repositories. For R&D purposes, this data will also be available for use by the University of East Anglia and SINTEF Ocean.
2. From pre-existing on-board equipment, instruments and sensors: This data will be collected through the FishData infrastructure, which stores on-board time series in NetCDF format (see Table 1); a minimal sketch of such a file is given below.
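The sketch below illustrates what such a NetCDF time series could look like (Python with the netCDF4 library; the variable names and values are invented for illustration and are not the actual FishData schema):

```python
import numpy as np
from netCDF4 import Dataset  # assumed tooling; any NetCDF writer would do

# Write a small synthetic on-board time series as NetCDF-4.
with Dataset("fishdata_example.nc", "w", format="NETCDF4") as nc:
    nc.title = "Example on-board time series (synthetic data)"
    nc.createDimension("time", None)              # unlimited record dimension
    time = nc.createVariable("time", "f8", ("time",))
    time.units = "seconds since 2018-01-01 00:00:00"
    speed = nc.createVariable("vessel_speed", "f4", ("time",))
    speed.units = "knots"
    time[:] = np.arange(0, 60, 10)                # six samples, 10 s apart
    speed[:] = [8.2, 8.4, 8.1, 8.3, 8.5, 8.2]     # vessel speed over ground
```

Self-describing units and an unlimited time dimension are what make this format convenient for appending on board and for later reuse.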
This data will be used for R&D purposes within SINTEF Ocean and AZTI, for development, testing and demonstration of the FishData technology.

Test data collected in WP7 will be from the CatchSnap prototype in three fisheries: 1) the Mediterranean Sea multispecies demersal trawl fishery, 2) the Black Sea purse seine fishery targeting anchovy, and 3) the Mediterranean/Aegean multi-species purse seine fishery. Most of this data will be closed access data in the form of unlabelled and labelled images, IMU data and calibration parameters. Selected data will be used in a scientific publication, and made available on the OpenAIRE 3.0 compliant repository Zenodo.

Test data collected in WP8 will involve multiple data sources, including CatchScanner, CatchSnap and CatchMonitor. CatchScanner will be used to gather data in a stock assessment survey in the North Sea. CatchSnap will be used to gather data in the Scottish demersal whitefish fishery and the Moray Firth Nephrops fishery. CatchMonitor will be used to gather data in the Scottish inshore scallop dredge fishery.

Test data collected in WP9 will involve multiple data sources, including CatchSnap and CatchMonitor. CatchSnap will be used to gather data in the shellfish pot fishery, Bay of Biscay purse seine fishery and Bay of Biscay demersal trawl fishery. CatchMonitor will be used to gather data in the Celtic Sea fishery.

Table 2: Collected test data from WPs 6-10 and use area in other WPs

<table>
<tr> <th> **Work package** </th> <th> **Data source** </th> <th> **Use** </th> <th> **Comment** </th> <th> **Partner responsible** </th> </tr>
<tr> <td> **WP1** </td> <td> SeinePrecog </td> <td> Development of SeinePrecog </td> <td> Closed access to outputs from EK60/EK80 raw data </td> <td> AZTI/ZUNIBAL </td> </tr>
<tr> <td> </td> <td> SeinePrecog </td> <td> Development of SeinePrecog </td> <td> Closed access to outputs from video camera files </td> <td> AZTI/SINTEF/NTNU </td> </tr>
<tr> <td> </td> <td> SeinePrecog </td> <td> Analysis of SeinePrecog results </td> <td> Analysis from Excel, CSV or TXT files. Closed access </td> <td> AZTI/ZUNIBAL/SINTEF/NTNU </td> </tr>
<tr> <td> **WP2** </td> <td> FishFinder </td> <td> Internal use, R&D use, scientific publications </td> <td> Closed access to outputs from camera and acoustic files </td> <td> DTU/SINTEF Digital/Marport </td> </tr>
<tr> <td> </td> <td> TrawlMonitor </td> <td> Internal use, R&D use, scientific publications </td> <td> Closed access to outputs from camera and acoustic files </td> <td> DTU/SINTEF Digital/Marport </td> </tr>
<tr> <td> **WP3** </td> <td> SmartGear </td> <td> Internal use, R&D use, scientific publications </td> <td> Closed access </td> <td> DTU/Marine Scotland/CEFAS/Marport </td> </tr>
<tr> <td> **WP6** </td> <td> CatchScanner </td> <td> Internal use, R&D use </td> <td> Closed access </td> <td> Melbu Systems </td> </tr>
<tr> <td> </td> <td> Instruments and sensors on board fishing vessel </td> <td> Internal use, R&D use </td> <td> Closed access </td> <td> SINTEF Ocean </td> </tr>
<tr> <td> **WP7** </td> <td> CatchSnap </td> <td> Internal use, R&D use, scientific publication </td> <td> Select data will be made open access </td> <td> SINTEF Ocean </td> </tr>
<tr> <td> **WP8** </td> <td> CatchScanner </td> <td> Internal use, R&D use </td> <td> Closed access </td> <td> Melbu Systems </td> </tr>
<tr> <td> </td> <td> CatchSnap </td> <td> Internal use, R&D use, scientific publication
</td> <td> Select data will be made open access </td> <td> SINTEF Ocean </td> </tr>
<tr> <td> </td> <td> CatchMonitor </td> <td> Internal use, R&D use, scientific publication </td> <td> Select data will be made open access </td> <td> University of East Anglia </td> </tr>
<tr> <td> **WP9** </td> <td> CatchSnap </td> <td> Internal use, R&D use, scientific publication </td> <td> Select data will be made open access </td> <td> SINTEF Ocean </td> </tr>
<tr> <td> </td> <td> CatchMonitor </td> <td> Internal use, R&D use, scientific publication </td> <td> Select data will be made open access </td> <td> University of East Anglia </td> </tr>
</table>

## Data from software code and algorithms

Data from CatchScanner software code and algorithms, tested in WP6 and WP8, will be closed access and curated by Melbu Systems. In concert with scientific publication of the CatchSnap tests, select CatchSnap software code, tested in WP7, WP8 and WP9, will be made open source on GitHub and also published on Zenodo.

Table 3: Data from software code and algorithms

<table>
<tr> <th> **Regional sea(s)** </th> <th> **Work package(s)** </th> <th> **Instrument** </th> <th> **Data type** </th> <th> **Partner responsible** </th> </tr>
<tr> <td> **Barents Sea, North Sea** </td> <td> WP6, WP8 </td> <td> CatchScanner </td> <td> Software code and algorithms </td> <td> Melbu Systems </td> </tr>
<tr> <td> **Mediterranean Sea, Black Sea, Aegean Sea, Scotland, Bay of Biscay, Moray Firth** </td> <td> WP7, WP8, WP9 </td> <td> CatchSnap </td> <td> Software code and algorithms </td> <td> SINTEF Ocean </td> </tr>
<tr> <td> **Scotland and Celtic Sea** </td> <td> WP8, WP9 </td> <td> CatchMonitor </td> <td> Software code and algorithms </td> <td> University of East Anglia </td> </tr>
<tr> <td> **Bay of Biscay/Mediterranean/Black Sea** </td> <td> WP1 </td> <td> Early and final prototypes of SeinePrecog </td> <td> EK60/EK80 raw data, video camera files, numerical databases, acoustic algorithms </td> <td> AZTI/ZUNIBAL </td> </tr>
</table>

# Human subjects (GDPR)

Given that SMARTFISH will also involve stakeholders for consultation and feedback, the project will adhere to Article 8 of the Charter of Fundamental Rights of the European Union (2000/C 364/01), "Protection of personal data". This article emphasizes that citizens have the right to protection of personal data, and that such data _"...must be processed fairly for specified purposes and on the basis of the consent of the person concerned..."_. Information from stakeholders will be collected through interviews and at workshops, after they have given their informed consent to participate in the project. All data collected from stakeholders will be stored safely and securely. However, the data collected are only to specify functional and technical requirements for the different SMARTFISH H2020 systems to be developed in the project. Therefore, no sensitive personal data are to be collected and, as such, the project is exempt from reporting to the NSD (Norwegian Centre for Research Data). The potential human subjects and their confidentiality are protected by obtaining approval from the Norwegian Centre for Research Data (NSD) in Norway, which will then apply to all SMARTFISH participants regardless of their country, due to project ownership resting with a Norwegian partner (SINTEF Ocean).
All potential human participants and providers of data will only participate and provide data after informed consent is secured, orally or in writing, as per the requirements of the NSD. Information letters covering the participants' right to withdraw from the project at any time, and the confidentiality of their personal data, are a requirement for all projects with human subjects. The approval will be kept on file and submitted by the coordinator to the Commission upon request. This is in accordance with Article 34.2 of the Multi-Beneficiary General Model Grant Agreement (H2020 General MGA – Multi). Further, any personal data collected will also be processed in accordance with the GDPR (EU) 2016/679. However, as clarified above, no sensitive personal data are to be collected. In addition, should the collected stakeholder data be presented to third parties, any identifiers (such as names) that would enable linking specific persons to the data will be removed prior to presentation.

In addition to stakeholder consultancy, data may be collected from CCTV cameras installed on vessels, so identifiable human images of crew members or others could potentially be captured. All involved participants will be required to give their informed consent before the trials on the vessels. In any case, the SMARTFISH systems developed and tested will be focused on collecting and analyzing catch data from fishing; there will be no focus on collecting human data, and certainly not on analyzing any. Further, for most systems and installations there is no risk of humans being visible in the acquisition scene. In any case, all data collected by CCTV cameras will be stored safely and securely, and any potential data sharing will only happen after consent has been obtained from the involved stakeholders. The data collected from stakeholders will consist of responses to questions about what type of functionality, data, data format and data presentation the different SMARTFISH H2020 systems should be able to collect, process and present to meet the needs of the fishing sector. Detailed information on the informed consent procedures that will be implemented with regard to the collection, storage and protection of personal data will be kept on file and submitted on request. We have attached an information letter that is sent out to potential interviewees/workshop participants before the event, where they are informed of their rights; this can be found in Appendix (A).

# A. Letter of Invitation – Stakeholders

In cases where we conduct personal interviews or hold workshops, we will inform the participants in writing with a letter provided either electronically, directly, or indirectly via their organizations. The letter (below) includes the following sections:

* an introduction, including a request for participation;
* which methods are to be used in gathering data, and what these methods entail for the participant;
* which institution is data controller (in this case SINTEF Ocean, which as coordinator determines the purposes, conditions and means of the processing of personal data;
The data controller is a formal position and involves requirements for compliance with a number of duties under the Personal Data Act);
* contact information of the researcher (or alternatively of the student and the supervisor) in the given case area where the survey or interview/workshops will take place – but it will be made explicit that the contact person does not know the identity of the persons invited;
* the purpose of the project and what the information will be used for;
* information that highlights that participation is voluntary and that the participants may withdraw their consent as long as the project is in progress, without stating a reason;
* information that the project is exempt from being reported to the Data Protection Official for Research at NSD – Norwegian Centre for Research Data because it does not collect any sensitive personal data;
* when the project will be completed; and
* who finances the project.

**To Interviewee/Workshop participant in the SMARTFISH H2020 Project**

CONCERNING INTERVIEW ABOUT FISHERIES TECHNOLOGY AND STAKEHOLDER PERCEPTIONS

We would like to express our gratitude that you have agreed to participate as a workshop participant/interviewee in the SMARTFISH H2020 project. In light of this, we extend to you this letter so that you are informed about what your participation consists of. Participation in this workshop/interview is an essential part of the SMARTFISH H2020 project coordinated by Dr. Bent Herrmann from SINTEF Ocean, a research institute located in Trondheim, Norway.

The purpose of your participation in the SMARTFISH H2020 project is to give your expert opinion on some of the suites of technologies being developed during the project period, in terms of applicability, use, relevance, etc. The data gathered for this project will be used in scholarly publications, models and the final end products described at _www.smartfishh2020.eu_.

The workshop/interview will be recorded if you accept, and the recording will afterwards be transcribed for accuracy of results and then deleted. The emphasis is on assuring that your opinions are recorded correctly. When the results of the study are published in scholarly journals or in chronicles/opinion pieces in newspapers or other channels of communication, the material will be anonymous, but your sector affiliation will, if applicable, be included. You will be informed when material related to the study is published, if you wish.

We would like to draw your attention to the fact that you may at any time choose to leave the interview and discontinue your participation.

If you have any questions, please contact________________________________ or the project coordinator Dr. Bent Herrmann at SINTEF Ocean in Norway directly, at +47 92200886 or [email protected]

Sincerely,

Name of facilitator in case area

If you would like to be kept informed via email and digital newsletters of the results of the project, please write your email address on the sheet that is circulating during the workshop/interview.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0297_GRECO_787289.md
# Executive Summary

This document contains the Data Management Plan (DMP) for the GRECO project. It describes the types of data that will be generated or gathered during the project, the standards that will be used, how the data will be exploited and shared for verification or reuse, and how the data will be preserved. In addition, our DMP explains how GRECO ensures its research data are findable, accessible, interoperable and reusable (FAIR), regardless of whether or not they are made open in a repository.

# Glossary- Abbreviations

**FAIR:** FAIR data are data which meet standards of findability, accessibility, interoperability, and reusability. For a detailed explanation of each of these terms visit: _https://www.go-fair.org/fair-principles/_

**PEDRC**: Plan of Exploitation and Dissemination of Results and Communication. Deliverable 5.2 of GRECO.

**ORD:** Open Research Data.

**DMP:** Data Management Plan.

**TRL:** Technology Readiness Level.

**P1.UPM:** Partner nº1. Universidad Politécnica de Madrid.

# Introduction

## GRECO in a nutshell

The two main objectives of GRECO (H2020-787289) are to demonstrate that the application of Open Science practices (and more broadly, Responsible Research and Innovation "RRI" methodologies) is the basis for obtaining research products aligned with current societal challenges, and to become a fundamental reference for how these materialize in all phases of an R&D project.

**GRECO faces the specific challenge of putting Open Science into action** in a three-year research project. To this end, GRECO has designed a model for implementation that will be tested within a typical research project of the societal challenges pillar of the H2020 Programme. Thus, GRECO investigates **six innovative solutions that photovoltaics (PV)** can offer to society. However, the key to the project is that the research process is being carried out using the fresh and innovative approaches to science that researchers in sociology and the humanities have been developing during the last decade.

The project operationalizes tools that enable **public engagement mechanisms** for different research lines (from high TRL to low TRL). In this way, processes such as user-centered Open Innovation, Citizen Science or Mobilization and Mutual Learning (MML) actions are especially relevant for the development of our products. Also, through **Open Science tools** such as Open Access, Open Data, Open Education, Open Notebooks, Open Software and Open Peer-Review, GRECO aims to generate a research process that is more accessible to the rest of the world. And of course, GRECO conscientiously implements ethical, gender and governance principles to guide the execution of the project in its search for socially responsible products.

The aims of the GRECO project are:

* To obtain an inclusive, validated, and understandable rationale pilot for Open Science, ready to be applied to a wide spectrum of research projects.
* To evidence the impact of Open Science and RRI approaches.
* To develop a model to come up with a Responsible Citizen Science Initiative, exemplified for photovoltaics.
* To carry out a Mobilization and Mutual Learning Action Plan as a way of ensuring socially acceptable innovative solutions.
* To operationalize quadruple helix innovation collaborations in research projects.
* To empower citizens in the scientific endeavour.
* To provide highly innovative research solutions using the GRECO Open Science pilot.
A Consortium formed by eleven partners aims at demonstrating to their counterparts how any societal challenge project can be operated in a novel way, that is, by using the mechanisms that Open Science and RRI offer to generate socially acceptable innovative solutions. **GRECO aims at bridging the gap between the SWAFS and Societal Challenges pillars.**

## Data Management context

The European Commission (EC) is running a flexible pilot under Horizon 2020 called the Open Research Data Pilot (ORD pilot). In the 2014-16 work programmes, the ORD pilot was included only in a few selected areas of Horizon 2020. Under the revised version of the 2017 work programme, the Open Research Data pilot has been extended to cover all the thematic areas of Horizon 2020, and GRECO takes part in it.

This pilot is part of the Open Access to Scientific Publications and Research Data Program in H2020 1 . Its core objective is to improve and maximise access to and re-use of research data generated by Horizon 2020 projects. However, it also considers **_the need to balance_** openness and protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns and security, as well as data management and preservation questions. For this reason, the **ORD pilot applies primarily to the data needed to validate the results presented in scientific publications**. Other data can also be provided by the beneficiaries on a voluntary basis, as stated in their Data Management Plans, according to the content of Article 29.3 of the Grant Agreement "Open Access to Research Data":

_The Beneficiaries must deposit their data in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate - free of charge for any user - the following:_

* _**The data, including metadata, needed to validate the results presented in scientific publications, as soon as possible**_
* _**Other data, including associated metadata, as specified and within the deadlines laid down in the data management plan**_

_The Beneficiaries must provide information, via the repository, about the tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and, where possible, provide the tools and instruments themselves)._

_This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply._

_As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex I, would be jeopardised by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access._

The EC provides a document with guidelines 2 for projects participating in the ORD pilot. The guidelines explain that all H2020 projects will be required to develop a Data Management Plan (DMP). The DMP is the document that describes the life cycle of data within a project, i.e. the types of data that will be generated or gathered during the project, the standards that will be used, how the data will be exploited and shared for verification or reuse, and how the data will be preserved.
The DMP is the document that describes how Beneficiaries are going to ensure that those selected research data are findable, accessible, interoperable and reusable (FAIR) 3 .

There are several helpful guidelines for preparing a DMP. The EC guideline in [2], version 3.0 of 26 July, contains a DMP template for helping researchers with its preparation. The Digital Curation Centre (DCC) offers an online tool for preparing a DMP, including an extended version 4 . Through the DMP Tool, users can also create such a plan via a guided questionnaire tailored to the funding agency 5 , although this feature is oriented towards North American agencies. GRECO has decided to follow the recommendations and template defined by the European Commission in [2].

# DATA SUMMARY

Within the Plan of Exploitation and Dissemination of Results and Communication described in deliverable D5.2, GRECO has defined a strategy for the management of research outputs in order to reach maximum impact and to provide a basis for a solid Open Science pilot. Every six months all Beneficiaries complete an Excel file. In that sheet (included also in Annex I of this document), they report all research outputs generated as a result of GRECO activity, regardless of their classification as positive, negative or neutral. Once identified, they have to classify the type of data, the potential users and uses, as well as the impact of each output and the proposed route for either exploitation or dissemination. Moreover, the sheet shows the relation of the data to the project, defining both the work package and/or task and the relationship with the expected impact of the call.

Within this Excel sheet any reader can find a practical implementation of the questions identified in the DMP template 2 :

**Table 1. Correlation between DMP requirements and GRECO implementation**

<table> <tr> <th> **DMP Template Section 1 Questions** </th> <th> **GRECO compliance** </th> </tr> <tr> <td> What is the purpose of the data collection/generation and its relation to the objectives of the project? </td> <td> Purpose is related to the expected impacts of the call. Beneficiaries list them in Column H of the sheet. </td> </tr> <tr> <td> What types and formats of data will the project generate/collect? </td> <td> Column H (types of data). In order to ensure the accessibility of the data, the main formats of electronic data will be those included in the IANA MIME Media Types 6 . </td> </tr> <tr> <td> What is the origin of the data? </td> <td> Column D (owner) and Column O (WP/task) </td> </tr> <tr> <td> What is the expected size of the data? </td> <td> We have not included this feature. GRECO is not going to deal with large data files. </td> </tr> <tr> <td> To whom might it be useful ('data utility')? </td> <td> Column E (uses) and Column F (users) </td> </tr> </table>

In conclusion, **all characteristics that define GRECO data have been properly documented in section 2 of D5.2 PEDRC**. The actual management of research outputs (including data) is done through an Excel file kept up to date by P1.UPM. When dissemination is the route for reaching the final users, GRECO requests that all contributions be made open through the ZENODO Community ( _https://zenodo.org/communities/greco-787289/_ ), and therefore all research data that support the dissemination are also requested to be archived in ZENODO. Conversely, when the exploitation route has been chosen, we must analyse what type of exploitation is proposed.
For those outputs reporting non-economic benefits (e.g. social, environmental, regulatory, educational) that are exploited in further research activities (for education, for improving policies, for setting standards, etc., but not in commercial activities), the exploitation is going to be made through open mechanisms. Then, such research outputs will also be published in ZENODO to enhance their exploitation. On the other hand, in those cases where there is a legitimate interest in exploiting a research output commercially, these results will not be archived in ZENODO.

Why have we decided to use ZENODO as the main repository? The motivations to use this repository are:

* It allows researchers to deposit both publications and data, while providing tools to link them.
* In order to increase the visibility and impact of the project, the GRECO Community has been created in ZENODO, so all beneficiaries of the project can link the uploaded research outputs to the Community.
* The repository has backup and archiving capabilities.
* ZENODO assigns all publicly available uploads a Digital Object Identifier (DOI) to make the upload easily and uniquely citable.
* The repository allows different access rights.
* The repository is in a common language for all partners of the project.

All the above makes ZENODO a good candidate as a unified repository for all foreseen project data (presentations, publications, images, videos and measurement data) from GRECO. Even when journal policy prevents archiving outside the institutional repository, or when specialized repositories such as GitHub are used for software, a record describing the research output will be included in ZENODO with a link to the repository where the data can be found.

# FAIR DATA

## Making data findable, including provisions for metadata

### Discoverability: Metadata Provision

Metadata are created to describe the data and aid discovery. According to the ZENODO repository documentation, all metadata are stored internally in JSON format according to a defined JSON schema (a sketch of such a record is given after Table 2). Metadata are exported in several standard formats such as MARCXML, Dublin Core, and the DataCite Metadata Schema (according to the OpenAIRE Guidelines). Beneficiaries will complete all metadata that are mandatory for the repository, as well as the metadata recommended by the repository but mandatory for the GRECO Consortium, and may provide additional metadata where appropriate. Table 2 outlines a general overview of the metadata.

**Table 2. Information on metadata generated at ZENODO.**

<table> <tr> <th> **Metadata** </th> <th> **Category** </th> <th> **Additional Comments** </th> </tr> <tr> <td> Type of data </td> <td> Mandatory </td> <td> </td> </tr> <tr> <td> DOI </td> <td> Mandatory </td> <td> If not filled in, ZENODO will assign an automatic DOI. Please keep the same DOI if the document is already identified with a DOI. </td> </tr> <tr> <td> Publication Date </td> <td> Mandatory </td> <td> </td> </tr> <tr> <td> Title </td> <td> Mandatory </td> <td> </td> </tr> <tr> <td> Authors </td> <td> Mandatory </td> <td> </td> </tr> <tr> <td> Description </td> <td> Mandatory </td> <td> A description of the dataset, including the procedures followed to obtain the results (e.g., software used for simulations, experimental setups, equipment used, etc.) </td> </tr> <tr> <td> Keywords </td> <td> Mandatory </td> <td> Frequently used keywords, plus GRECO </td> </tr> <tr> <td> Access rights </td> <td> Mandatory </td> <td> Open Access. Other permissions can be considered where appropriate. </td> </tr> <tr> <td> Terms for Access Rights </td> <td> Mandatory </td> <td> Creative Commons licences will be detailed here. GRECO will open the data under Attribution, ShareAlike, NonCommercial and NoDerivatives licences. </td> </tr> <tr> <td> Communities </td> <td> Mandatory </td> <td> Fostering a Next Generation of European Photovoltaic Society through Open Science </td> </tr> <tr> <td> Funding </td> <td> Mandatory </td> <td> European Union (EU), Horizon 2020, Grant Nº 787289, GRECO </td> </tr> <tr> <td> Version </td> <td> Mandatory </td> <td> </td> </tr> </table>
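To make the Table 2 fields concrete, the following is a minimal sketch of how such a record could look when expressed in the JSON shape used by Zenodo's deposit interface. The title, author, date and version are invented placeholders, the field names are our reading of Zenodo's public API documentation and should be verified against it, and the grant identifier follows the funder-DOI convention Zenodo uses for EC grants.

```python
import json

# Minimal sketch of a GRECO deposition record (placeholder values only),
# shaped like the JSON metadata that Zenodo stores for each upload.
record = {
    "metadata": {
        "upload_type": "dataset",                 # "Type of data" in Table 2
        "publication_date": "2019-06-14",         # placeholder date
        "title": "GRECO example dataset",         # placeholder title
        "creators": [{"name": "Doe, Jane",
                      "affiliation": "Universidad Politécnica de Madrid"}],
        "description": "Dataset description, including the procedure "
                       "followed to obtain the results.",
        "keywords": ["photovoltaics", "GRECO"],   # frequent keywords plus GRECO
        "access_right": "open",
        "license": "cc-by-4.0",                   # a Creative Commons licence id
        "communities": [{"identifier": "greco-787289"}],
        "grants": [{"id": "10.13039/501100000780::787289"}],  # H2020 grant 787289
        "version": "1.0",
    }
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```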
### Identifiability of data

Beneficiaries will maintain the Digital Object Identifier (DOI) when the publication/data has already been identified by a third party with this number. Otherwise, ZENODO will provide each dataset with a DOI.

### Naming convention

**GRECO does not establish a naming convention for uploading data to the repository**. Since the mandatory metadata in the ZENODO repository include a description of the dataset, we ensure that third parties can access data easily by describing the dataset properly. Likewise, our policy of not changing data names will allow data to remain consistent and traceable in each author's local back-up devices.

### Approach towards search keywords

ZENODO allows keywords to be introduced for each dataset. Each author will introduce relevant keywords, and **all datasets generated by the Consortium will also be identified with the keyword GRECO**.

## Making data openly accessible

### Types of data made openly available

**GRECO establishes that it is mandatory to make publicly available, by means of ZENODO, the data underlying scientific publications and any other dissemination activity.** This will allow other researchers to make use of that information to validate the results, making it a starting point for their research, as expected by the EC through its open access policy. In addition, we will make public any type of data or research output that, while not disseminated, is intended to be non-commercially exploited: e.g. presentations at training events, videos used for educational purposes, etc. Their identification within the Excel file will facilitate the archiving of these types of data. Therefore, **the Innovation Manager in charge of data management will recommend to partners what other data, in addition to the data underlying publications or dissemination, they should make available in open access mode.**

### Methods or software tools needed to access the data

All our data are openly accessible since we work with standard formats according to the IANA MIME Media Types. If any software is needed for opening a non-conventional file, we will provide that information through the repository. As far as possible, we will try to convert our files into the formats of the most widespread software programmes, and where possible into open software (e.g. Julia instead of Mathematica).

### Deposition of data and associated metadata, documentation and code

As explained, we will use the ZENODO repository for the deposition of data, metadata and documentation. The general policy of this repository is accessible here: _http://about.zenodo.org/policies/_

Other repositories that could be used for some specific data are:

* GitHub ( _https://github.com/collections/policies_ ) for some Open Software developed in the project;
* Institutional repositories, for archiving post-prints if the journal policy does not allow archiving the paper outside the institutional repository:
  * _http://oa.upm.es/eprints/_
  * _https://repositori.upf.edu_
  * _https://dspace.uevora.pt/ri/_
  * _https://www.helmholtz-berlin.de/zentrum/locations/bibliothek/literatur/publikationsserver_de.html_

## Making data interoperable

Interoperability means allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, being as far as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins). The GRECO Consortium ensures the interoperability of the data by using data in standard formats according to the IANA MIME Media Types and by using the ZENODO repository with its standardized JSON schema for metadata.

## Increase data re-use (through clarifying licences)

Data (with accompanying metadata) will be shared no later than publication of the main findings. The maximum time allowed for sharing underlying data is the maximum embargo period established by the EC, six months. GRECO will include a licence to increase the re-use of data. Licences based on the Creative Commons scheme will be attributed by the owner of the data. GRECO researchers received due training on licences on 20 November 2018 and are aware of the options they have. We encourage CC0 licences, but this is not mandatory. Data will be accessible for re-use without limitation during and after the execution of the GRECO project. After the end of the project, data will remain in the repository. Publications and/or other data related to the project but generated after its end will also be uploaded.

## "Open" Notebook and FAIR data

As a part of the Open Science pilot we will also explore the policy of FAIR data INSIDE organizations for non-open data. What we have defined as Open Notebooks in our proposal (T1.4) will be a pilot for carrying out FAIR management of research outputs at P1.UPM, although other partners such as P9.HZB are also interested in exploring the alternative. We intend to establish a routine of FAIR data in our daily research activity, regardless of whether these data are going to be published or not. We believe that FAIR data inside organizations will allow them, firstly, to manage their resources better and avoid losing valuable information over time and, secondly, to realize the importance of FAIR data as a tool for more responsible science rather than just a means of meeting funding agency mandates. So far, we have identified three alternatives for implementing this internal system, but we are at an early stage.

# Allocation of resources

GRECO will use ZENODO (GitHub or institutional repositories) to make data openly available, so there is no cost for the infrastructure. The cost of personnel devoted to the management of the data is considered to be charged under the Programme. Each beneficiary will devote its own personnel resources to upload data to ZENODO and follow the instructions contained in this document. The Coordinator has named two Innovation Managers within the project, and one of them (Prof. Antonio Martí) will be responsible for verifying and controlling the data opened by partners, ensuring that the policy described in this document is fulfilled.

# Data Security

ZENODO has a technical infrastructure that ensures data security and long-term preservation.
The interested reader can check the terms at: _http://about.zenodo.org/infrastructure/_

GitHub data security is described at _https://help.github.com/articles/github-security/_

Data stored in institutional repositories follow the policy of each institution.

# Ethical Aspects

In order to guarantee that no sensitive data are archived without the consent of the Consortium, partners will apply the good practice of communicating any kind of disclosure 30 days beforehand.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0298_AMANDA_825464.md
# Executive summary

This document is a deliverable of the AMANDA project, funded by the European Commission's Directorate-General for Research and Innovation (DG RTD) under its Horizon 2020 Research and Innovation programme (H2020). This report provides a general description of the data management, ethics and standardisation practices that will be applied in the project, and focuses on data collected up to M6 of the project.

The document consists of four Sections. Section 1 provides information about the project scope, the goals of this report and the LEPPI manager nomination. Section 2, Data management plan, describes the needs, reasons and methods of data management, together with examples. Section 3 details ethical aspects of the project. Section 4 provides condensed information about data collected during the period M1 – M6 and can be used to track data collected over the project's duration. The information provided in Section 2 can be used to explain the condensed descriptions of data provided in the tables of Section 4.

# Introduction

## Overall technical objectives

AMANDA is an ambitious project aiming to develop a unique ASSC which will have the size, feel and look of a credit card. It can be ideal for easy deployment in buildings (smart living environments) or as a wearable (bikes, valuable assets and people). The project will cover the triangle of experimentation, development and standardization to optimize the material behaviour, connectivity, miniaturization, power consumption, security, intelligence, design and cost. AMANDA's partners have the expertise and a combination of world-class manufacturing infrastructure and know-how. The partners are using micro- and nanotechnology, new composites, innovative architectures and advanced software.

AMANDA's vision is to overcome the existing technological challenges and achieve the development of a user-friendly wearable platform not only for indoor and outdoor environmental sensing, but also for asset or even people tracking. A combination of developed and existing off-the-shelf technologies will be selected and integrated into the ASSC: innovative PVs (Lightricity PV), a PMIC (e-peas) and batteries (Ilika solid-state battery), all packed into under 3 mm of thickness. It will introduce technical breakthroughs that will boost further miniaturization and offer increased sensitivity, a small footprint and ultra-low power consumption (a maintenance-free lifetime of more than 10 years).

The project execution will require tight cooperation between the partners. That will lead to the generation of a significant amount of information, such as datasheets, specifications, measurement and reporting data, as well as other types of data. The data management task is in place to make sure that all information is categorised and stored in a safe way and can be accessed at any time by authorised personnel.

## Purpose, context and scope of this deliverable

This document relates to the data management and ethics plans within the AMANDA project. As the project progresses, it will be updated if needed. Updates to this document are foreseen at milestones M18 (v2), M30 (v3) and M36 (v4) and will focus on the management of scientific data collected over the whole duration of the project and on making them findable, accessible, interoperable and reusable (FAIR). The document describes the process applied by the consortium to ensure good data management and high ethical standards.
It enables clear tracking of data collected, not only during the project execution but also after its conclusion. External parties can have access to public data. Therefore, this document points to the assigned project member who can provide each dataset. The following aspects are dealt with:

* Collected data
* Information contained in the data
* The data format
* The contact point for the data request

Authorized personnel are able to track the information and access it when it is needed. The DMP is a living document, which will evolve during the lifespan of the project, particularly whenever significant changes arise, such as dataset updates or changes in consortium policies. This document is the first version of the DMP, delivered in M6 of the project. It includes descriptions of the datasets collected by the project until M6 and the specific conditions attached to them. It also provides a framework for future data documentation. Although this report already covers a broad range of aspects related to AMANDA data management, the upcoming versions will go into more detail on particular issues such as data interoperability and the practical data management procedures implemented by the AMANDA project consortium.

## Nomination of the LEPPI manager

There is a need to appoint a LEPPI manager. The LEPPI manager will be responsible for the coordination of all activities related to legal, ethical, privacy and policy issues that may arise during the development and validation phases of the project. In case of issues related to law, ethics and privacy, the LEPPI manager will cooperate with and advise the following decision-making bodies: Plenary Board, Quality Control Board and Ethics Helpdesk. Table 1 shows the assigned LEPPI manager, chosen by the project partners.

<table> <tr> <th> **Partner short name - company** </th> <th> **Name** </th> <th> **Email** </th> </tr> <tr> <td> IMEC </td> <td> Rik van de Wiel </td> <td> [email protected] </td> </tr> </table>

Table 1 LEPPI nomination

Rik van de Wiel is a senior employee at IMEC. He has been working as an R&D Manager in the field of Connected Health Solutions for eight years. Rik van de Wiel has contributed to many data collection trials, including a number of trials in which medical devices were evaluated.

# Data management plan

During the project and as part of WP8, administrative documentation will be created with regard to project coordination. Shared documents within the consortium are processed and managed by the project coordinator. Other data generated in the project (e.g. questionnaire information, specifications of the components, measurement data and others) are managed and stored by the partner responsible for the generation of the data. The data should be stored and be available on request by an authorized associate. An authorized associate can be, for example, another consortium partner or a third party who has received authorisation for data access from the Project Management Board. The project data management should fulfil Directive 2013/37/EU on the re-usability of the generated data.

## Need of DMP

The project involves carrying out data collection (in the context of the piloting and validation phase) and a set of validation tests to assess the technology and effectiveness of the proposed framework in real-life conditions. For this reason, human participants might be involved in certain aspects of the project and data will be collected concerning their biometrics and their travelling information.
Since the project might collect personal data, the consortium must comply with all European and national legislation and directives relevant to the country where the data collection is taking place. That is why it has been decided to create a Data Management Plan (DMP) for the AMANDA project. Data Management Plans are a key element of good data management. A DMP describes the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project. A DMP should include information on:

* The data that will be collected, processed and/or generated
* The methodology and standards that will be applied
* Whether data will be shared/made open access
* The way data will be curated and preserved (including after the end of the project)

## Procedures of data collection

### Data collection process

The data collection consists of two parts: the data collection description and the data collection detail. The data collection description characterizes, in plain text and for each data collection, the types of research data that will be collected during the study. It also describes how information will be collected and why it is needed. This gives a general overview that can be used to fine-tune the data management if needed. The following research data types are possible (but not limited to):

* Observational data: captured in real time, typically cannot be reproduced exactly. Examples: sensor readings, sensory (human) observations, survey results, images
* Experimental data: from labs and equipment, can often be reproduced but may be expensive to do so
* Simulation data: from models, can typically be reproduced if the input data is known. Examples: climate models, economic models, biogeochemical models
* Derived or compiled data: produced after theoretical research, data mining or statistical analysis has been done, can be reproduced if the analysis is documented. Examples: text and data mining, derived variables/parameters, compiled databases, datasheets, 3D models, project reports

The data collector can complete the matrix containing the detailed information of the data collection in the study based upon the following checklist (a sketch of one such matrix entry is given after the list):

* Data collection explains how the data will be collected, e.g. via sensors, interviews and other means
* Data type that will be collected could include text, numbers, images, 3D models, software, audio files, video files, reports, surveys and other types of data
* Data format is the format in which the data will be stored
* Estimated size of the data contains a rough estimation of the size
* Which tools or software are needed to create/process/visualize the data?
* Responsible indicates which partner is responsible for the data collection
* Does the data have a specific character in terms of reproducibility, confidentiality and others? What does this mean for the management of the data?
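A minimal sketch of how one row of this matrix could be represented programmatically is given below. The class and field names are our own illustration of the checklist, not part of any AMANDA tooling, and the example values are taken from the CO 2 sensor entry in Table 2.

```python
from dataclasses import dataclass

# One entry of the data-collection matrix, mirroring the checklist above.
# Class and field names are illustrative only, not AMANDA project tooling.
@dataclass
class DataCollectionEntry:
    ref_nr: int                # reference number of the dataset
    responsible_partner: str   # e.g. "CERTH", "IMEC"
    data_type: str             # e.g. "CO2 sensor reading"
    data_collection: str       # how the data is gathered
    data_format: str           # storage format, e.g. "CSV"
    estimated_size: str        # rough size estimate
    software: str              # tools needed to create/process/visualize
    specific_character: str    # reproducibility, confidentiality, etc.

# Example values taken from the CO2 sensor row of Table 2.
entry = DataCollectionEntry(
    ref_nr=3,
    responsible_partner="IMEC",
    data_type="CO2 sensor reading",
    data_collection="CO2 sensor",
    data_format="CSV",
    estimated_size="0.4 MB",
    software="MS Office, Matlab",
    specific_character="No personal information included",
)
print(entry)
```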
### Data collection description

This Section describes the context and type of collected data. The description can be done based upon the data types: observational, experimental, simulation or derived. Furthermore, it should be described in words how, what and why data is collected. The data collection detail shows what data will be collected in the study. Table 2 describes who collected the data, how the data was generated, the data format and the file size. Additionally, the table contains a column with information about the specific character of the data.

<table> <tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Data collection** </th> <th> **Data format** </th> <th> **Est. size** </th> <th> **Software** </th> <th> **Specific character** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> Gathering information from different stakeholders in the project through interviews, meetings, mails, ... </td> <td> DOC, XLS </td> <td> 10 MB </td> <td> MS Office </td> <td> Only personal information of project-related persons included and no sensitive data </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> Online survey </td> <td> XLS </td> <td> 10 MB </td> <td> MS Office </td> <td> No sensitive data included </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> CO 2 sensor </td> <td> CSV </td> <td> 0.4 MB </td> <td> MS Office, Matlab </td> <td> No personal information included </td> </tr> </table>

Table 2 Data acquisition details

## Data storage and back-up

It is the responsibility of the project partner who collects the data to ensure that the data is regularly backed up and stored securely for the lifetime of the project. The following matrix is filled in to keep an overview of the data collected in the whole project. Storage indicates the medium and location of the backups. We distinguish the following types:

* Network drives - These are secure and backed up regularly. They are ideal for master copies of data. However, due to their online nature they might be a target of hacker attacks.
* Local drives - Data on PCs and laptops can be lost because of technical malfunction or the loss of the device itself. These are convenient for short-term storage and data processing but should only be relied upon for storing master copies when backed up regularly.
* Remote or cloud storage - Commonly used services, such as Dropbox and Google Drive, will not be appropriate for sensitive data. Agreements with providers should be studied before using them to store sensitive data.
* External portable storage devices (external hard drives, USB drives, DVDs and CDs) - These are very convenient, being cheap and portable, but not recommended for long-term storage as their longevity is uncertain and they can be easily damaged.

Backup indicates the location and frequency of the backups. The data in this study will be stored and backed up as described in Table 3.

<table> <tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Storage medium and location** </th> <th> **Backup location and backup frequency** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> Remote and cloud storage using Office365 </td> <td> Automated backup using Microsoft </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> Local drive </td> <td> No backups made </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> Internal IMEC SharePoint </td> <td> Automatic MS Office service backup </td> </tr> </table>

Table 3 Data storage and data backup information

## Data documentation

The data processed in the study is documented and labelled for immediate usage and future reference. The labelling consists of two parts:

* File naming. Files will have naming conventions for each data type. There are many conventions for file naming. It is suggested to follow the well-documented, practical guidance from Purdue University [1]. A naming convention is very helpful for both manual and automatic searches.
* Metadata. Files can have metadata that describe the data stored in the file. The metadata include a description of what the data contain and what each value represents. The reason to use metadata is that the data can then be found easily when looking for information. Wherever possible, existing community standards should be identified and reused. An example of commonly used generic metadata can be found at the Dublin Core Metadata Initiative [2].
The data processed in the study will be documented according to the standards for each data type, as described in Table 4 (a sketch illustrating the naming conventions follows the table).

<table> <tr> <th> **Ref. Nr.** </th> <th> **Responsible partner** </th> <th> **Data type** </th> <th> **Naming convention** </th> <th> **Metadata** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> For facilitating common browsing and storage on different platforms and operating systems, no spaces should be used in document names; the dash character "-" should be used instead. All project document names must start with the prefix "AMANDA-" to facilitate quick identification and indexing. Names of deliverable documents should follow the convention "AMANDA-Dw.n-Title-vX.Y.ext", where: "Dw.n" is the deliverable number ("w" is the WP number; "n" is the numbering within the specific WP); "Title" is the title of the deliverable; "vX.Y" is the version number ("X" is the version; "Y" is the sub-version); "ext" is the file extension pertaining to the format used. </td> <td> n/a </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> Database name: AMANDA_Surveys_2019 </td> <td> n/a </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> Sensor designation + "_" + place of measurement + "_" + participant + "_" + date of trial, e.g. IMEC_CO2_eindhoven_Tom_20190522 </td> <td> Description: Data from CO 2 sensor based upon events like entering and leaving the room; Subject: Sensor information; Created: 14.06.2019; Creator: Jon Smith; Classification: Confidential </td> </tr> </table>

Table 4 Data documentation
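To make the two naming patterns of Table 4 concrete, the sketch below composes and checks such file names. The regular expression and helper functions are our own illustration rather than AMANDA project tooling, and the example names are placeholders.

```python
import re

# Deliverable names follow "AMANDA-Dw.n-Title-vX.Y.ext" (Table 4, row 1).
DELIVERABLE_RE = re.compile(r"^AMANDA-D(\d+)\.(\d+)-(.+)-v(\d+)\.(\d+)\.(\w+)$")

def deliverable_name(wp: int, n: int, title: str,
                     major: int, minor: int, ext: str) -> str:
    """Compose a deliverable file name; spaces become dashes per the convention."""
    return f"AMANDA-D{wp}.{n}-{title.replace(' ', '-')}-v{major}.{minor}.{ext}"

def measurement_name(designation: str, place: str,
                     participant: str, date: str) -> str:
    """Sensor data files: designation_place_participant_date (Table 4, row 3)."""
    return f"{designation}_{place}_{participant}_{date}"

name = deliverable_name(8, 3, "Data Management Plan", 1, 0, "docx")
assert DELIVERABLE_RE.match(name)  # "AMANDA-D8.3-Data-Management-Plan-v1.0.docx"
print(name)
print(measurement_name("IMEC_CO2", "eindhoven", "Tom", "20190522"))
```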
## Data access

This Section describes how authorized access to the data is managed during the project for each dataset. During the project, it is required to keep data safe and secure. The process of data collection will already determine who has access to the data. Data security is needed to prevent unauthorised access; otherwise, the data might be intentionally or unintentionally disclosed, changed or deleted. The storing partners are responsible for ensuring data security. The level of security required depends upon the nature of the data – personal or sensitive data need higher levels of security.

Table 5 shows an example of the data access presentation. The access controller is responsible for the access management of the data. Access management is the description of how access to the data will be managed. Data can be labelled as:

* Public information
* Restricted information
* Confidential information
* Strictly confidential

Access is limited to the appointed persons, functions and groups. It can be extended on demand. Access to the data of the study will be managed by the assigned access controller for each data type. It will be done according to the access management description linked with the data type.

<table> <tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Access controller** </th> <th> **Access management** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> C. Kouzinopoulos </td> <td> Access will be granted to people working on the project after approval by the responsible person from the company they represent and approval from the overall lead of the project. Data will not be made public at any time unless all parties agree to it or the necessary agreements are in place. </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> C. Kouzinopoulos </td> <td> Access is limited to the controller </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> P. Bembnowicz </td> <td> Access is limited to the controller </td> </tr> </table>

Table 5 Data access information

## Data sharing and reuse

This Section describes if and how the data processed in the study can be shared, including:

* What agreements are in place to share the data between consortium partners or with third parties
* What purpose of reuse can be envisioned for the data type in a later phase
* Will the data be shared with limited stakeholders or made publicly available
* How will the sharing and re-use be managed
* What safeguards are implemented for data sharing and re-use (such as anonymisation or scrambling of certain information)

The data processed in the study can be shared or reused as described in Table 6.

<table> <tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Sharing of data** </th> <th> **Reuse of data** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> Only shared with stakeholders of the project </td> <td> For future EU project proposals </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> Only shared with stakeholders of the project </td> <td> Reuse for all work packages development </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> No data sharing </td> <td> No reuse </td> </tr> </table>

Table 6 Data sharing and reusing

## Data retention and archiving

This Section describes how long the data will be stored for this study, what data can be archived and what safeguards are set up for the data archiving. Examples of safeguards are limited access, anonymisation, scrambling and deleting parts of the data. The data processed in this study should have a defined retention period for each data type. The default retention period is set to the end of the AMANDA project activities. However, the retention time should be adjusted to the significance of the collected data. Table 7 shows the suggested retention times for different types of data. Nevertheless, the experiment designer should have the decisive voice on the retention period. Moreover, files can be archived or deleted directly after processing.

<table> <tr> <th> **Nr.** </th> <th> **Data type** </th> <th> **Suggested retention period** </th> </tr> <tr> <td> 1 </td> <td> Voice recording </td> <td> Delete subsequent to project end </td> </tr> <tr> <td> 2 </td> <td> Pure technical information </td> <td> 5 years or more </td> </tr> <tr> <td> 3 </td> <td> Non-anonymised raw measurements </td> <td> Delete subsequent to project end </td> </tr> <tr> <td> 4 </td> <td> Anonymised raw measurements </td> <td> 5 years or more </td> </tr> <tr> <td> 5 </td> <td> Project reports </td> <td> 5 years or more </td> </tr> </table>

Table 7 Suggested retention time with respect to type of data

Table 8 shows the model description of the archiving process.
<table> <tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Retention** </th> <th> **Archiving** </th> </tr> <tr> <td> 1 </td> <td> CERTH </td> <td> Project reports </td> <td> 5 years </td> <td> No archiving after the foreseen 5-year period </td> </tr> <tr> <td> 2 </td> <td> CERTH </td> <td> Survey </td> <td> During the project execution </td> <td> No archiving </td> </tr> <tr> <td> 3 </td> <td> IMEC </td> <td> CO 2 Sensor reading </td> <td> During the project execution </td> <td> No archiving </td> </tr> </table>

Table 8 Data retention and archiving

## Best practice advice for data collection process

During trials where sensitive data is collected, the data collector can consider additional restrictions on internal communication. Sensitive messages can be encrypted, and only a limited set of authorised personnel should receive the encryption keys, so that non-authorised persons do not get access to the data. Gathered data can be anonymised so that only limited personnel are able to trace experiment results back to the personal identity of a volunteer (a sketch of both measures is given below). People on the premises or involved in the trials shall be instructed accordingly prior to the execution of the experiment.
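As one possible way of implementing this advice, the sketch below pseudonymises a participant name with a salted hash and encrypts a measurement record with a symmetric key, using the widely available `cryptography` package. It is an illustration under assumed names and values, not a prescribed AMANDA procedure.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# Pseudonymisation: a salted hash maps a volunteer's name to an opaque id.
# Only personnel holding the salt can re-compute the mapping; the stored
# data alone does not reveal the name.
SALT = b"project-internal-secret"  # assumed secret, held by authorised staff

def pseudonym(participant: str) -> str:
    return hashlib.sha256(SALT + participant.encode("utf-8")).hexdigest()[:12]

# Encryption: a symmetric key protects sensitive records in storage/transit.
key = Fernet.generate_key()        # distribute only to authorised personnel
fernet = Fernet(key)

record = f"{pseudonym('Tom')},CO2,412ppm,2019-05-22".encode("utf-8")
token = fernet.encrypt(record)     # ciphertext safe to store or transmit
assert fernet.decrypt(token) == record  # key holders can recover the data
```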
# Ethical concerns of the AMANDA project

The AMANDA consortium confirms that each partner will check with their national legislation/practice and their local ethics committee, which will provide guidelines on data protection and privacy issues, in terms of both data protection and research procedures, in relation to any of the proposed public engagement and potential volunteer research activities. Any procedures for electronic data protection and privacy will conform to Directive (EU) 2016/680 and Regulation (EU) 2016/679 on the protection of personal data and their enactments in the national legislations. The process of adhering to the applicable regulations begins with a thorough investigation of the EU and national research projects' ethical guidelines as well as the examination of the directives regarding privacy, protection of personal data and the free movement of data. The legislation with which the AMANDA consortium must conform includes:

* The Universal Declaration of Human Rights
* The Convention 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data
* Directive 95/46/EC & Directive 2002/58/EC of the European Parliament regarding issues with privacy and protection of personal data and the free movement of such data
* The Declaration of Helsinki on research involving human subjects
* Greek Law 2472/1997: Protection of Individuals with regard to the Processing of Personal Data
* Greek Law 3471/2006: Protection of personal data and privacy in the electronic telecommunications sector and amendment of law 2472/1997
* The Constitution of the Kingdom of the Netherlands, Article 10 Privacy.

The AMANDA project expects to carry out a set of qualitative information-collecting activities. In particular, interviews and questionnaires (Task 1.3) are planned. The double nature of consent appears again, as both personal data and potentially sensitive information might be collected. Therefore, two issues become crucial from an ethical perspective: the confidentiality of the information and the anonymisation of personal data.

The Code of Ethics of the International Sociological Association reminds researchers that "The security, anonymity and privacy of research subjects and informants should be respected rigorously. The sources of personal information obtained by researchers should be kept confidential, unless the informants have asked or agreed to be cited. Should informants be easily identifiable, researchers should remind them explicitly of the consequences that may follow from the publication of the research data and outcomes." [3]

From this article it is possible to extract some general rules that investigators must apply when designing and conducting their research:

* Information gathered from the participants should be kept confidential, unless specific consent to be cited is given by the participant.
* Information gathered should be anonymised and used only for the purpose for which it was collected.
* Participants must be informed when the investigator believes that some of the information shared may make them identifiable, and of the potential consequences.
* Participants must be given, in a clear and transparent manner, the opportunity to withdraw at any time, especially after being informed of their potential identification and the potential consequences.

In case the collected data contain personal information, the data protection principles and legal requirements extracted from Regulation 2016/679 should be taken into consideration. In particular, the investigator needs to put into practice organizational and technical measures directed at "minimising the processing of personal data, pseudonymising personal data as soon as possible, transparency with regard to the functions and processing of personal data, enabling the data subject to monitor the data processing" [4]. In case the collected data contain personal information, the responsible partner of the AMANDA project should apply the following rules:

* Information collected from the participants should be anonymised. Responsible partners of the Consortium will prepare a summary of the results of the conducted research. The raw information will be kept in local resources by the partners under their own responsibility and according to the data protection policies of their own organisations. Partners should pay special attention to respecting the minimisation principle following Article 89(1) of Regulation 2016/679.
* Each task leader will collect the summaries and send them to the Ethics Helpdesk. The Ethics Helpdesk will verify that no personal or sensitive information is contained in the summary, unless the participant has given specific consent. If needed, the Ethics Helpdesk can consult the LEPPI manager during the process. The summary can be shared within the Consortium once this point is verified.
* The investigator must obtain specific consent from all the participants prior to their involvement in the different activities. An example of the consent template is provided in Annex 1. The responsible partner should adapt the provided template to the experiment.
* The task leader of each of the activities will propose to the Ethics Helpdesk a text containing the specific information concerning the activity. The Ethics Helpdesk will validate the specific Informed Consent Form before it is used with any participants. Informed consent must be obtained in written form.
* Oral informed consent is highly discouraged. Although oral consent is legally valid, the data controller must be able to "demonstrate that the data subject has consented to processing of his or her personal data" (Regulation 2016/679, Article 7.1).
Therefore, investigators should only use this procedure when there is no other possibility and after having consulted the Ethics Helpdesk. The Ethical Body will evaluate the situation, bearing in mind the potential value of the information that could be obtained from the participant. Duly signed Informed Consent forms, whether written or electronic, or proof of the oral consent, should be kept by the controller for a 5-year period to be available for auditing by the Ethics Helpdesk or any competent authority.

The AMANDA project goal is to develop pervasive technology as described in the Section Overall technical objectives. The miniaturized electronic system is going to be safe and non-intrusive. The project goals are technical. Most of the collected datasets are going to describe electronic system behaviour, e.g. power consumption, voltage stability, radio connectivity performance and others. The measurements that are not related to the technical evaluation of the system are foreseen to be related to environmental conditions. An investigated subject is exposed to the measured conditions. There is no intention to gather measurements directly from the bodies of living creatures. Thus, there is no interaction between the electronic system and the body of the subject. The project does not have the aim of performing human trials. However, experiments where the device is placed in a room where people are present are considered. The project is not within the scope of the Access and Benefit Sharing (ABS) rules on the utilisation of genetic resources. This check will be done prior to the data collection in the AMANDA study and the findings will be added to this report in Section 4.

# Documentation of data collected in the AMANDA project during the M1 – M6 period

## Context of the data collection

### Voice of the customer data collection

IoT devices are nonstandard computing devices that connect wirelessly to a network and have the ability to transmit data. The IoT sector involves extending internet connectivity beyond standard devices, such as desktops, laptops and others, to a range of traditionally non-smart or non-internet-enabled physical devices and everyday objects. With applications in residential as well as industrial environments, and given the nonstandard nature of this technology, the need to acquire more data from the end-user point of view is increased. The AMANDA consortium is interested in investigating the variety of use case scenarios in which the ASSC can be implemented and used. Thus, a Voice of the Customer data acquisition plan was created.

In particular, a questionnaire was created by CERTH and PENTA as part of Task 1.2 System Requirements and Needs in order to complete Deliverable D1.3 Voice-of-the-Customer, as part of the Industrial IoT application Section. The objective was to gather the end-user requirements from available industrial stakeholders, such as end users, product providers, suppliers and developers that have previously collaborated with anyone involved in the consortium. The industries were targeted based on their interest in new ways to improve their business through the integration of new technologies.

The chosen platform to host the online questionnaires was Google Forms, due to the previous experience of CERTH in data acquisition via questionnaires in relevant projects, as well as the simplicity of the end reports. Anonymity of the answers was a critical priority. Therefore, no questions that could make the responding employee identifiable were used.
In fact, apart from the company name and a single company contact email address, all questions in the form were strictly related to the IIoT subject and different use case scenarios. The first Section of the questionnaire focused on the solutions currently used by industrial partners. With these questions, important insight was gathered on the type of sensor data that is already being measured in industrial environments, as well as the type of power supply and the autonomy of the utilized monitoring systems. Lastly, recommendations for improvements on the state-of-the-art components were collected. The next Section of the questionnaire was mainly informative, presenting the AMANDA ASSC with a brief description of all available sensors and a link to the project's official website.

The most critical part of the online questionnaire was the third Section on AMANDA use cases. Here, each company was asked to construct an ideal sensor monitoring system, with a selection of the most appropriate and useful sensor types as well as additional hardware options. A brief application description was requested, along with questions concerning the desired power management solution, wireless communication and data transfer specifications, as well as potential size constraints.

Although no questions on personal information or confidential data were included in the online questionnaire, the AMANDA partners decided to request written consent from all companies that participated to share their answers in Deliverable D1.3. Due to time delays in the reply to the consent request, no raw data or company details were included in the Deliverable. The AMANDA consortium collected and merged the answers into a final list to make good use of the acquired data. The list contained state-of-the-art monitoring systems along with proposed use case scenarios. In this way, anonymity was kept through the whole process and, at the same time, useful results and information were shaped to be used for the Deliverable.

<table> <tr> <th> **Use cases** </th> <th> **ASSC functionality** </th> </tr> <tr> <td> Cargo transportation conditions </td> <td> Collect information about temperature, CO 2 /smoke levels, noise levels and distance from a set point to ensure proper conditions and safety for the cargo and its means of transportation. </td> </tr> <tr> <td> Indoor asset tracking </td> <td> Keep track of a company's high-value assets and their condition </td> </tr> <tr> <td> Worker comfort level monitoring </td> <td> Ensure comfort levels for the employees to increase their efficiency and motivation </td> </tr> <tr> <td> Workplace information delivery </td> <td> Keep an overview of the working conditions to ensure the health and safety of the employees </td> </tr> </table>

Table 9 Use cases merged from the distributed questionnaires

### Voice of the customer data collection

The primary purpose of the data collection is to obtain the information needed to create an ASSC architecture. Information is collected from end users. To ensure a certain quality of information, it was necessary to explain the purpose and objectives of the project. The aim of the implemented activities was to collect as much information as possible about the needs and wishes of end users. Besides the survey, interviews were used as well. The interviews were conducted as part of WP1. Results and analyses were published in the completed Deliverable D1.3 Voice of the Customer. The surveys and interviews do not contain ethically sensitive questions and no ethical issues arise.
All surveys and interviews are fully aligned with EU Regulation 2016/679 [5], the General Data Protection Regulation (GDPR).

### Specification of the required components data collection

Components data was collected as part of WP1 "System Specifications, Requirements and Use Cases". This includes data from components being developed as part of the AMANDA project (e.g. temperature, touch, CO2 and imaging sensors, solid-state battery, energy harvester, MCU, PMIC) but also data from state-of-the-art, off-the-shelf electronic components (RF chipsets and modules; additional sensors: accelerometer, Volatile Organic Compounds, Humidity, Light and others) and peripherals (timers, displays, memory and others). Each technological partner within the consortium has contributed to the required data based on its current expertise and on technology scouting (literature survey of patents and datasheets). The collected data was compiled into a spreadsheet document comprising various tables of relevant technical specification parameters (electrical and mechanical), graphs (power consumption profiles) and electronic block diagrams (PMIC). The purpose of the document is to share the same level of information between all partners, regardless of the individual level of expertise, in order to mutually understand the current status of each respective technology and put these into perspective with the intermediate and final project specification targets. As such, it can be considered an internal technical project roadmap. Finally, it is also a comprehensive comparison tool for assessing the various sensors, RF components and loads that will be integrated into the ASSC. This reference document contains sensitive and confidential information and is therefore only accessible by the AMANDA project partners. Any relevant non-confidential information can then be included in the public deliverable documents (for example D1.2 or D1.3). The spreadsheet document will be updated throughout the project, and the latest version is regularly circulated to all partners by email and stored on a local Git repository.

## Data collection

<table>
<tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data Type** </th> <th> **Data collection** </th> <th> **Data Format** </th> <th> **Est. size** </th> <th> **Software** </th> <th> **Specific character** </th> </tr>
<tr> <td> 1 </td> <td> CERTH </td> <td> Industrial IoT use case suggestions and SoA solutions </td> <td> Company name and contact email, currently utilized monitoring systems, suggestions on type of sensors, power management and wireless communication specifications, size constraints, impact of the potential use of the AMANDA ASSC </td> <td> Online questionnaire in a commercial platform. Multiple choice questions as well as text </td> <td> 21 questions in total </td> <td> Google Forms platform </td> <td> n/a </td> </tr>
<tr> <td> 2 </td> <td> PENTA </td> <td> Survey, interview </td> <td> Gathering information through interview, email, project presentation, technical talk…
</td> <td> DOC, XLS </td> <td> 0.5 Mb </td> <td> MS-Office </td> <td> Only personal information related to the occupation </td> </tr>
<tr> <td> 3 </td> <td> Lightricity </td> <td> Technical specification tables, graphs, block diagrams </td> <td> Technical specification of all components on the AMANDA card: electrical and mechanical specifications, including power consumption and footprint </td> <td> PPT </td> <td> 3-4 Mb </td> <td> MS-Office </td> <td> 2 types of information:
* Specific to the innovative technologies developed as part of AMANDA (Sensors, PV, Battery, PMIC, MCU)
* More general information, e.g. related to state-of-the-art and off-the-shelf components (RF, displays, additional sensors, peripherals and others) </td> </tr>
</table>

## Data storage and back-up

<table>
<tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Storage medium and location** </th> <th> **Backup location and backup frequency** </th> </tr>
<tr> <td> 1 </td> <td> CERTH </td> <td> Industrial IoT use case questionnaire answers </td> <td> Original online spreadsheet document, created automatically by the Google Forms platform to gather all answers. The document was destroyed after the merge of the answers </td> <td> No backup of data </td> </tr>
<tr> <td> 2 </td> <td> PENTA </td> <td> Survey, interview </td> <td> Local drive, network drive </td> <td> Daily backup, RAID array disk, external hard drives </td> </tr>
<tr> <td> 3 </td> <td> Lightricity </td> <td> Technical specification tables, graphs, block diagrams </td> <td> Local drive, Git repository </td> <td> Daily backup (local), cloud storage (OneDrive), external hard drives </td> </tr>
</table>

## Documentation

<table>
<tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Naming convention** </th> <th> **Metadata** </th> </tr>
<tr> <td> 1 </td> <td> CERTH </td> <td> Industrial IoT use case questionnaire answers </td> <td> No naming convention needed </td> <td> No metadata </td> </tr>
<tr> <td> 2 </td> <td> PENTA </td> <td> Survey, interviews </td> <td> Database name: AMANDA project; folder name: Surveys; folder name: Interviews </td> <td> n/a </td> </tr>
<tr> <td> 3 </td> <td> Lightricity </td> <td> Technical specification tables, graphs, block diagrams </td> <td> No naming convention required </td> <td> n/a </td> </tr>
</table>

## Access

<table>
<tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Access controller** </th> <th> **Access management** </th> </tr>
<tr> <td> 1 </td> <td> CERTH </td> <td> Industrial IoT use case questionnaire answers </td> <td> Data removed after parsing. No access control required </td> <td> Data removed after parsing. No access control required </td> </tr>
<tr> <td> 2 </td> <td> PENTA </td> <td> Survey, interviews </td> <td> O. Vujičić </td> <td> Access is limited to the controller </td> </tr>
<tr> <td> 3 </td> <td> Lightricity </td> <td> Technical specification of all components on the AMANDA card: electrical and mechanical specifications, including power consumption and footprint </td> <td> M. Bellanger (local access), C. Kouzinopoulos (Git) </td> <td> Access to the Git repository is limited to project partners (registration process and password required) </td> </tr>
</table>

## Sharing and reuse

<table>
<tr> <th> **Ref. nr.** </th> <th> **Responsible Partner** </th> <th> **Data type** </th> <th> **Sharing of data** </th> <th> **Reuse of data** </th> </tr>
<tr> <td> 1 </td> <td> CERTH </td> <td> Industrial IoT use case questionnaire answers </td> <td> No sharing; data merged only for the purpose of D1.3 and deleted immediately after </td> <td> No reuse of data is planned </td> </tr>
<tr> <td> 2 </td> <td> PENTA </td> <td> Survey, interviews </td> <td> Only shared with stakeholders of the project </td> <td> Reuse for the development of all work packages </td> </tr>
<tr> <td> 3 </td> <td> Lightricity </td> <td> Technical specification tables, graphs, block diagrams </td> <td> Only shared with project partners (contains sensitive and confidential information) </td> <td> Reuse for all technical work packages (WP1-6) </td> </tr>
</table>

# Conclusions and future work

This report details the data management and ethics of the project. After a short introduction, Section 2 provides guidance on how to make Data Management Plans. Section 3 puts emphasis on ethical issues and refers to European legislation related to ethics in research. Detailed information on the data collected in the AMANDA project is given in Section 4. This Section can later be used to track all the data generated in the project; it therefore satisfies the reusability requirement for generated data set out in Directive 2013/37/EU.

During this project, no human trials are planned. However, data collection in the form of questionnaires was conducted. Personal or sensitive data must be labelled as such; such data shall then only be stored, analysed and used anonymously. The individuals will be informed comprehensively about the intended use of the information collected from them. Participants shall give their permission for data collection for a scientific purpose, with their active approval in the form of a written consent. There is potential for field tests, if time permits and lab testing is successful. These tests can include the deployment of prototypes at the location of end users for a preliminary evaluation of the ASSC. However, this decision will be made towards the end of the project.

For each dataset, ethical issues are considered separately in Section 4.1. The ethical aspect of each dataset is evaluated in the data description. The data which will be generated during the project is mostly related to the performance of the electronic hardware. Section 4 describes and systematizes the data originating from the project. The data management approach presented in this report mostly consists of labelling and describing the generated data; in this way, the data can be tracked during and after the project's execution.

The data management plan & ethics document is an iterative report which will be updated repeatedly at months M18 (v2), M30 (v3) and M36 (v4). Future versions of the Deliverable will focus on the management of scientific data collected for the project and on making them findable, accessible, interoperable and reusable. As a combined set of reports, it will document the progress of data generation and its storage. Each data description should flag the ethical considerations associated with the collected data, and the data will be treated according to its sensitivity.

# Bibliography

1. S. Brandt, "Data Management for Undergraduate Researchers: File Naming Conventions," Purdue University Libraries. [Online]. Available: http://guides.lib.purdue.edu/c.php?g=353013&p=2378293. [Accessed 24 06 2019].
2. D. Hillmann, "Dublin Core Metadata Initiative," 12 04 2001. [Online].
[Accessed 24 06 2019].
3. U. C. Faculty of Political Sciences and Sociology, "Code of Ethics," ISA Forum of Sociology, International Sociological Association. [Online]. Available: https://www.isasociology.org/en/about-isa/code-of-ethics.
4. The European Parliament and the Council of the European Union, "Regulation (EU) 2016/679 of the European Parliament and of the Council," _Official Journal of the European Union,_ vol. L, no. 119, pp. 1-88, 2016.
5. "https://publications.europa.eu/en/publication-detail/-/publication/3e485e15-11bd11e6-ba9a-01aa75ed71a1/language-en," [Online].
6. "https://www.gotomeeting.com/," [Online].
7. European Commission, "Participant Portal H2020 Online Manual - ethics," European Commission. [Online]. Available: http://ec.europa.eu/research/participants/docs/h2020-fundingguide/cross-cutting-issues/ethics_en.htm. [Accessed 24 06 2019].
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0300_ADASANDME_688900.md
**Executive Summary**

This Deliverable is the second update of Deliverable 10.2 'Data Management Plan', and the additions relative to the previous versions are introduced in **Chapter 1**. **Chapter 2** presents the implementation of the GDPR in the project and the aspects affected, or potentially affected, by the new Regulation. **Chapter 3** includes the Data Privacy Impact Assessments (DPIAs) carried out separately by each UC team in order to identify whether any issues or risks exist because of data collection. No risks were identified in any of the Use Cases, because partners had already taken the necessary steps to anonymize data even before the GDPR was implemented. **Chapter 4** briefly presents the different data clusters and introduces the consolidated data table, an elaborate and in-depth account of all data collected across UCs, as well as an update of the ADAS&ME data privacy policy (section 4.1) that was included in the first version of this Deliverable (submitted M6). Partners have decided that data will remain available only within the Consortium; the reasons are discussed in section 4.2. In addition, sections 4.3 and 4.4 briefly address the open access publications and where they will be available after the end of the project. The Deliverable concludes with **Chapter 5**, where the main outcomes of this version are summarized, considering potential lessons learnt from the implementation of the GDPR halfway through the project.

The existing informed consent form template, adjusted to be GDPR compliant, is added in **Annex 1**. The Data Privacy Impact Assessment (DPIA) template that was distributed to partners for completion can be found in **Annex 2**; the results are presented in Chapter 3. The latter will be circulated and completed twice by the end of the project: the first assessment is included in this final version of the Deliverable (M30), and the final one will appear in the final technical report, if any changes or additions are expected. The updated data privacy policy has been added in **Annex 3**. The update considers the GDPR requirements and is written in a format that addresses the participants in UC Pilots (i.e. the whole duration of the testing period) and user tests (i.e. individual user testing sessions). Finally, partners updated the existing data collection template for the data collected up to this project period. The consolidated table including data for all UCs can be found in **Annex 4**.

**1 Introduction**

This document is the third version of the deliverable; the previous two versions were submitted in M6 and M24, respectively. The Data Management Plan aims to define the processes of data handling during and after the end of the project. In the first version of this Deliverable, data generated by sensors and devices were gathered and annexed, the standards and methodologies were presented, and the data privacy protection procedure and policy were defined and established, along with guidelines on how data can be openly shared in order to comply with ORDP requirements with regard to storage, curation and preservation. The second version included information about all the data types collected for the test requirements of each Use Case (UC), and this information was used to update the existing table of all data gathered during user testing across UCs and sites. A description of the ADAS&ME repository and its functionalities was also included (Georgoulas & Gavrilis, 2018).
This table has been continuously updated because there are two complexity factors that define the data clusters and their treatment: a) the data collected are UC-specific (i.e. data collected for creating the physical fatigue algorithms may differ from those collected during the investigation of distraction and sleepiness), and b) the testing objectives differ across phases (i.e. in the first phase, data are collected for developing/refining the affective state algorithms; in the second phase, data are collected to select the preferred HMI configurations for each affective state; and in the final stage the systems will be tested in real or real-like conditions). The data table annexed in this document refers to the data collected during the first two of these phases. The final stage will produce much less data than the first two phases, and the data will come from categories already addressed; thus, no update to this table is necessary.

The reason for the addition of a third version of this document is that the GDPR was implemented long after the beginning of the project; the processes relevant to the new regulation were therefore applied to the project after that date and are reported in this version. As this is a UC-based project, the objectives and data collection are defined at UC level; therefore, separate Data Privacy Impact Assessment (DPIA) reports were prepared per UC. The following table presents the content enrichment across the three separate versions.

**Table 1.** D10.2 content per version

<table>
<tr> <th> **First version (preliminary; M6)** </th> <th> **First update (intermediate; M24)** </th> <th> **Second update (final; M30)** </th> </tr>
<tr> <td>
* Data processes.
* Data sharing.
* Initial data policy.
* First data collection table, included in Annex I.
* Relevant legislation and guidelines.
</td> <td>
* Data collection per UC pilot, as presented within D7.1 (Cocron et al., 2018; submitted M18). Types of data, along with information about privacy, confidentiality and other characteristics, are presented in Table 3 (Annex 4), following the same format as the table annexed in the first version of this deliverable.
* A short description of the ADAS&ME data repository, based on D4.1, submitted in M18.
</td> <td>
* Results of GDPR implementation in the project: Data Privacy Impact Assessment (DPIA).
* GDPR compliant consent form.
* Final datasets collected and decision about their openness.
* Open access publications: current status and future availability.
</td> </tr>
</table>

**2 Implementation of GDPR in ADAS&ME**

The GDPR was implemented in the middle of the project's lifetime, and the following steps were taken:

1. Defined a GDPR compliant informed consent form template with clear reference to subjects' rights to access, retrieve, and delete their data post-testing (Annex 1).
2. Assessed the potential data privacy issues at UC level, by the teams involved in each Use Case (Chapter 3). As the data collection was small-scale and at pilot level, partners were advised to discuss any data privacy issues with their Data Protection Officer (DPO), where one was already appointed. However, no such issues or risks were identified, due to data anonymity and restricted access.
3. Defined who the data controllers and data processors are in the project, as well as the duration of the data preservation period after the end of the project.

**3 Data Privacy Impact Assessment (DPIA)**

A data privacy impact assessment process was initiated as soon as the GDPR was implemented. As the project's work started long before the GDPR, this impact assessment was not applied before data collection and processing. A template was prepared based on the GDPR requirements (as defined within Art. 35) and was circulated to UC leaders to discuss and complete in communication and agreement with the pilot sites' controllers and processors. Assisting and guiding questions were included in each section of the template to help UC teams complete it with relevant information harmoniously across UCs. The circulated template has been annexed in this update (Annex 2).

Data controllers and processors were the following organizations: SCANIA, VEDECOM, VALEO, VTI, DUCATI, CERTH/HIT, DLR, FORD, FhG, Autoliv and uPatras. The latter is perceived as both controller and processor because it is the partner that created the database and therefore acts as the data manager of the project, and it was also involved in the development of algorithms across affective states. The following partners were data processors only: RWTH Aachen, EPFL, OVG, Continental, and SmartEye.

Sections 3.1-3.5 present the results of each separate UC DPIA. Some parts of the text are similar across UCs, which confirms the harmonious data procedure and policy implemented across the project. An overview of these reports is discussed in the last section of this Chapter.

#### 3.1 Use Case A: Automation behaviour

A DPIA was performed because participants are asked to provide feedback, and data that could be identified as personal are collected, although participants remain anonymous.

##### 3.1.1 Aims

* Development and evaluation of a system to detect driver sleepiness;
* Development and evaluation of a system to detect driver visual distraction;
* Development and evaluation of a system to detect driver resting;
* Development and evaluation of a system to detect driver frustration;
* Development and evaluation of HMI elements and their combinations to inform and warn drivers when they are sleepy, visually distracted, frustrated, or resting.

##### 3.1.2 The need

The need for a PIA is based on the fact that the data collected may be anonymous, but some of them may be classified in the 'sensitive data' category (e.g. heart rate).

##### 3.1.3 Data treatment process

Data is being collected for two purposes: (1) data to support the development of driver state detection algorithms, and (2) data to support HMI development. Data are collected anonymously through sensors and questionnaire completion. Identifying information afforded by the provision of consent to participate is stored separately from all other data collected from the participant. All paper data are anonymous and stored in a locked cabinet.

Data to support the development of the driver state detection algorithms (Sleepiness, Visual Distraction, Frustration, and Rest) were collected (WP4). As per the informed consent obtained from the participant (and as approved by the Stockholm Region Ethical Board), the only source of data in which the participant could be identified (raw camera data used for developing the Sleepiness and Visual Distraction algorithms) was shared with three partners (the data owner Scania, plus Smart Eye and EPFL).
These data were collected prior to GDPR implementation; however, anonymity was protected. This digital information is stored on an isolated data storage device and is not connected to the Scania network. Other camera data not required for algorithm development is stored on an isolated data storage device and is not shared with other partners. All other non-identifying data will be shared with project partners via a secure data repository. Only pre-specified contributors can access the data that is needed by that partner (i.e. data compartmentalization). Additionally, data were collected to support the development of the Human Machine Interaction (HMI; WP5). No data collected from this work is shared with other partners. Identifiable digital information (from eye trackers) will be stored on an isolated data storage device that is not connected to the Scania network. Non-identifiable driver information is stored in a shared directory accessible by all Scania project contributors. Digital and paper data will be destroyed five years after the completion of the project.

##### 3.1.4 Data sources

As shown above, data sources are sensors and questionnaire data. For a complete list, please refer to the data collection spreadsheet completed for UC A (Annex 4).

##### 3.1.5 Data sharing

In addition to the identifiable raw video camera data explained above, anonymized raw data was shared with VTI for the generation of the Rest algorithm. Identifiable driver data (raw camera data from eye tracking cameras) is stored on an isolated memory storage device and was shared with two additional partners beyond the data owner (Scania), consistent with the approved participant consent and the Stockholm region ethics board approval. Non-identifiable (digital) participant data (e.g. heart rate) were stored on the project-hosted data repository and made available to the specific partners requiring access to the data for algorithm development. No high-risk data processing was involved.

##### 3.1.6 Data clusters

No special categories of data are included. The main data clusters are the following and include data collected for algorithm training:

* Demographics Questionnaire
* Background Sleep Questionnaire
* Sleep Diary
* Alertness & Sustained Attention (TAP-M) task performance
* Lane Change Test & N-Back Task performance
* Physical Ergonomics Measurements
* Karolinska Sleepiness Scale
* Stress Scale
* VITAPORT II: EOG, ECG, GSR, EMG
* CAN data (steering wheel, levers, and Instrument Panel buttons and switches)
* Eye Tracker (x 3)
* Optical Cameras (x 3)
* Hypnodyne: EEG data
* Bittium sensor: heart rate parameters

Data collected for HMI development includes:

* Performance associated with handovers/takeovers of control between the participant and the automated vehicle
* Geneva Emotion Wheel
* Human Trust in Automation Scale
* System Usability Scale (SUS)
* AATT
* 10-grade Scania scale
* Open interview questions

Data have so far been collected in two stages, with ten drivers participating for two days each in Stage I and 13 drivers participating for two hours each in Stage II. All data collected were used (i.e. the data minimization principle was followed). A separate HMI driving simulator study will be conducted, with two hours of participation per user, again aiming to use all collected data. Two rounds of data collection to support algorithm training were conducted, and one round of simulator driving testing was conducted. Anonymous datasets will be stored for five years after the end of the project.
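The five-year retention period, which recurs throughout these DPIA reports, can be made auditable with very little tooling. The following is an illustrative sketch only, not part of the actual ADAS&ME repository: the file name, dataset name and project end date are assumptions.

```python
import json
from datetime import date

# Hypothetical values: the real project end date and dataset names differ.
PROJECT_END = date(2019, 12, 31)   # assumed end-of-project date
RETENTION_YEARS = 5                # retention period stated in this DPIA

metadata = {
    "dataset": "uc_a_stage2_physiology",  # hypothetical dataset name
    "anonymized": True,
    "delete_after": PROJECT_END.replace(
        year=PROJECT_END.year + RETENTION_YEARS
    ).isoformat(),
}

# Store the record next to the dataset so retention can be audited later.
with open("dataset_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

def retention_expired(meta, today=None):
    """Return True once the agreed retention period has passed."""
    today = today or date.today()
    return today > date.fromisoformat(meta["delete_after"])
```

A periodic job could then list datasets for which `retention_expired` returns True and schedule their secure deletion.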
Individuals are not affected because data collection and storage are anonymous. The identifiable eye tracker camera information that was shared between three partners is from 23 participants. It covers participants in Stockholm County (län).

##### 3.1.7 Context of processing

All participants are Scania (or Scania subsidiary) employees. Drivers participated voluntarily and were free to stop and leave whenever they wanted; they were also informed that they could ask for their data to be deleted rather than stored. They were informed that their data would be treated anonymously and used for research purposes and publications. No concerns existed because data collection is anonymous. Data collection was not novel. Children or other vulnerable groups were not involved. All data were collected using off-the-shelf technologies. There were no issues related to public concerns. Data collection for algorithm development was approved by the Stockholm region ethics board.

##### 3.1.8 Purposes of processing

As described previously, some data are used for the development of driver state detection algorithms; other data are used as input to the HMI. No intended effect on the person is anticipated. This process ensures a user-defined and accepted system, as it is being developed during the lifetime of the project and, as a whole, did not pre-exist. In addition, it ensures the system is safe, reliable, accurate, valid and usable.

##### 3.1.9 Consultation process

Participants' views will be collected through self-reports and standardized questionnaires. For HMI development, no one but the local (Scania) project team will be involved in the data collection process. The team members' names have been added to both the ethics application and the informed consent form. Data collected from Stage I and Stage II testing has been made available to the relevant partners (see the previous description). Information about data security was received internally from the project.

##### 3.1.10 Necessity and proportionality

Data processing is performed within the framework of the European project ADAS&ME and is part of our contractual obligations. The data processing achieves its goal because it is based on careful and considerate experimental planning, as described in D7.1 of the ADAS&ME project. There was no other way or method to achieve this outcome, especially if the same level of system performance is required. Function creep is prevented by the fact that the system and its respective functions cannot be used for another purpose. The end-product will be a prototype that will not be used by people outside the consortium. Further development and improvements can be made, but they will not involve usage of the data collected at this stage for different applications. Further exploitation of the products generated within this project will not require any further access to participant data.

Data quality is ensured through technical verification and pre-testing sessions taking place before any actual testing. Certain indices were set for technical performance with regard to accuracy, validity, sensitivity and reliability. Data treatment and minimization was controlled by trained data scientists. All participants completed an informed consent form. Stage I and Stage II testing has been completed and was GDPR compliant, even though parts of it were performed before the GDPR was implemented.
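The performance indices mentioned above are standard detection metrics. As a purely illustrative sketch, assuming hypothetical per-epoch labels rather than actual project data, two of them (accuracy and sensitivity) can be computed for a binary state-detection algorithm against annotated ground truth as follows:

```python
def accuracy_and_sensitivity(predicted, ground_truth):
    """Compute accuracy and sensitivity for binary detections."""
    assert len(predicted) == len(ground_truth)
    tp = sum(1 for p, g in zip(predicted, ground_truth) if p and g)
    tn = sum(1 for p, g in zip(predicted, ground_truth) if not p and not g)
    positives = sum(1 for g in ground_truth if g)
    accuracy = (tp + tn) / len(ground_truth)
    sensitivity = tp / positives if positives else float("nan")
    return accuracy, sensitivity

# Hypothetical per-epoch sleepiness detections vs. annotated ground truth.
detections  = [True, False, True, True, False, False]
annotations = [True, False, False, True, False, True]
print(accuracy_and_sensitivity(detections, annotations))  # (0.666..., 0.666...)
```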
Users receive information about data treatment and storage, the anonymization process, their rights during testing, a copy of the informed consent form, the testing duration, a description of the project and contact points. All data are anonymous, and processors have no access to any information about the participants (except as mentioned for the identifiable camera data). Participants have agreed for those data to be shared within the consortium. Identifiable video data were transferred by a wired local connection from one encrypted data storage device to another encrypted data storage device. Non-identifiable data was transferred to a project-hosted data repository.

##### 3.1.11 Identified, assessed and mitigated risks

No data-related risks are envisaged, and none were encountered during the tests already completed or scheduled. Simulator sickness was a potential risk for test conduction, not related to data privacy, that would result in immediate cessation of the test session. Participants were informed beforehand of this possibility through the informed consent process.

#### 3.2 UC B – Range anxiety (VALEO)

##### 3.2.1 Aims

To help drivers mitigate their range-related anxiety and increase the acceptability of electric vehicles, tests were conducted to determine whether anxiety and other emotions can be reliably provoked and detected in electric vehicle drivers. Additionally, these tests aimed to determine which part of the HMI is responsible for the anxiety and stress of the driver of an electric car, and to see whether it is indeed possible (that is, whether the intra-individual variability of the person's behaviour and physiological condition can be detected and interpreted reliably and correctly enough) to develop an adaptive approach to the HMI, whereby a vehicle would progressively amass data and learn to behave in the most appropriate way towards its driver.

Research hypotheses:

* Spontaneously experienced positive emotions and range anxiety can be induced while driving an electric car;
* A discrimination of experienced positive emotions and range anxiety during driving is possible based on video recordings of the face;
* A discrimination of experienced positive emotions and range anxiety during driving is possible based on speech data;
* Electric vehicle range anxiety can be identified among other emotions;
* The efficacy of the various parts of the test plan for reliably provoking emotional responses can be evaluated.

##### 3.2.2 The need

Over the last year, three data collections were carried out to provide the data needed for the audio, video and HR/RR anxiety algorithms. The audio algorithm is developed by OVGU, a university partner within the Consortium; the video algorithm is developed by EPFL, and the HR/RR algorithm by Valeo. After the data collections, each partner trained its algorithm on anonymized data to build a robust algorithm.

##### 3.2.3 Data treatment process

Three data collections took place in 2018. All of them were conducted on open roads in the vehicle demonstrator of VEDECOM. The last stages (II and III) were conducted after the GDPR, and a confidentiality and release form was additionally signed.

1. **Stage I:** During April 2018, with 5 subjects in France, around Paris.
   * Objective: CAN vehicle, audio and face video.
   * Subjective: BFI-10, GEW, KSS, Stress and Gagge scales.
   * Audio-visual capture agreement.
2. **Stage II:** During September 2018, with 20 subjects in France, around Paris.
   * Objective: CAN vehicle, audio, face video, HR/RR signals and the road.
   * Subjective: Personality, AttrakDiff, feedback on the test and HMI.
   * Audio-visual capture agreement.
   * Confidentiality agreement.
3. **Stage III:** During November 2018, with 20 subjects in France, around Paris.
   * Objective: CAN vehicle, audio, face video, HR/RR signals and the road.
   * Subjective: Personality, AttrakDiff and feedback on the test.
   * Audio-visual capture agreement.
   * Confidentiality agreement.

##### 3.2.4 Data sources

In terms of volume, an average of 200 GB of data was collected in Stage II and 670 GB in Stage III. In terms of time, an average of 60 minutes was recorded per participant in Stages II and III. Each partner used all exploitable data to build its algorithm. Some of the collected data, such as the scene camera and the cockpit camera recordings, are used for ground truth annotation; the headset microphone and the BioHarness belt are used for ground truth development. The data are recorded only once per participant, and the recorded data are safely kept until the end of the European project. In total, 45 users participated, all living in Paris and its suburbs.

##### 3.2.5 Data sharing

Personal data handling was and will be performed strictly according to EU regulations, which were laid out to and agreed by the cooperating partners and participants in advance. The consent form signed by all participants of the study allows the sharing of the audio, video and physiological raw data among all relevant driver-state emotion partners (OVGU, VEDECOM, EPFL, SEYE, VALEO). Beyond this, no personalized data may be distributed to other partners. Anonymized data, like extracted features and markers, may be distributed to other partners. Data will be made available to the partners using the repository system provided by WP4. Audio and physiological recordings can be downloaded only by the corresponding partners. The video recordings are distributed by SEYE on request, and only to involved partners.

##### 3.2.6 Data clusters

The data consist of videos (.avi) and audio (.wav); other data (i.e. subjective measures) are stored in comma-separated files (.csv).

##### 3.2.7 Context of processing

Experienced drivers (i.e. at least five years since obtaining their driving licence) participated once in a driving session. Each of them had full control over the possibility to perform the test or not. There were no constraints during driving; however, directions about which road to take were given. Participants completed a separate audio-visual agreement and were fully aware of the data usage, storage, processing and sharing.

##### 3.2.8 Purposes of processing

Range anxiety (or the range paradox) is a concept that emerged in the late 1990s; it is the concern of not reaching the destination or the next charging spot while travelling in an EV (Nilsson, 2011). This is a stressful experience of a present or anticipated range situation, where the range resources and personal resources are in fact available to effectively manage the situation, yet are perceived to be insufficient. Studies show that electric vehicle drivers usually need around 160 km of autonomy per charge. Nevertheless, they often prefer vehicles with considerably higher available range (around 350 km). This demand (which seems to be avertable) comes from the worry of experiencing such a situation in the future or present, worry about what will happen if such a situation emerges, worry about not being able to find a solution to the situation, and further worry about being stranded in this uncomfortable situation (Nguyen, Cahour, Forzy, & Licoppe, 2011).
If manufacturers cannot lower range anxiety, electric vehicles will not be able to compete with gasoline and diesel cars. The four important physical parameters that make a difference in the range anxiety level are the following: the battery size (kWh), the energy consumption, which is mostly affected by the weight of the car (kWh/km), the charging speed (kW) and the minimum state of charge (%). Although each of these parameters could be optimized individually, their effects are inversely related: as an example, implementing a bigger battery will increase the range, but likewise the weight of the vehicle (and thus the energy consumption) and the charging time will increase. Therefore, range anxiety cannot be eliminated by purely quantitative means, and this is the motivation behind this Use Case of the project. Current vehicles do not address this issue, neither through an intelligent vehicle power management system nor through routing and traffic analysis and mitigation.

The ADAS&ME approach is to create a driver monitoring system that reliably detects the driver's emotional and physiological state, in order to detect and mitigate range anxiety both by adapting the vehicle HMI to the situation at hand and by managing the vehicle's operational parameters, so as to reliably and safely provide a technical solution when the remaining range is truly insufficient for the driver to reach the destination. The aim is the creation of a system able to reliably understand and discriminate between different types of emotional states.

##### 3.2.9 Consultation process

Not relevant to the testing conducted so far.

##### 3.2.10 Necessity and proportionality

VEDECOM applied to the respective Ethics committee and ensured that every part of testing complied with French legislation and the relevant Directives. The research team succeeded in observing real range anxiety during Stages II and III. Detecting actual range anxiety during a controlled driving test is very complex, because the participant knows that everything is under control (expectation and testing bias) and an electrical breakdown is therefore perceived as less probable. The alternative would be to record drivers during everyday driving, where an electrical breakdown can actually happen. Through RTMaps, recording status trackers were set per sensor type to ensure that data quality was not affected. Following the data minimization principle, only the necessary data were recorded and shared. Private servers are used to safeguard data, and the datasets' structure and storage ensure deletion of data upon a participant's request.

##### 3.2.11 Identified, assessed and mitigated risks

No risks were identified that are relevant to user data privacy. All necessary steps were taken beforehand to ensure and safeguard data anonymity.

#### 3.3 Use Case C (DLR – automation to manual) & Use Case D (Fraunhofer – during manual driving & DLR – during automation)

##### 3.3.1 Aims

* Development and evaluation of a system to detect driver stress;
* Development and evaluation of a system to detect drivers' emotions;
* Development and evaluation of a system to detect visual distraction in drivers;
* Development and evaluation of HMI elements and their combinations to inform and warn drivers of an upcoming transition of control.

##### 3.3.2 The need

The project includes information about the participants that might be perceived as personal data.
Use Cases C and D of the ADAS&ME project focus on a safe and smooth transition between automated and manual driving. Therefore, different driver states need to be considered to provide the best possible support to the human driver. To achieve this goal, personal data such as physiological data need to be collected and processed to monitor the driver state. Since different project partners are responsible for different driver states, Use Case C/D needed to collect and share data with the involved project partners. This includes sharing personal data with internal project partners involved in Use Case C/D. The overarching purpose is to provide the internal project partners with enough data to develop algorithms for driver state assessment. The benefit of collecting and processing the personal information is to have robust driver state monitoring, which makes it possible to tailor the HMI and automated functions to the needs of the driver. Therefore, DLR collects, in two data collection phases, demographic data, audio data, video data for face recognition, physiological data and driving behaviour data, and shares them with SmartEye, RWTH Aachen, Ford, Continental, EPFL and UPatras. Furthermore, different use case partners collected data in HMI studies (DLR, Fraunhofer). Ford collected physiological data and shared it with RWTH Aachen. The work performed is described in D5.1 'HMI and Automated Functions', a document that is confidential to the Consortium, where the Interaction/Transition framework was fully developed and all basic HMI elements were selected.

The need for a PIA is based on the fact that the data collected may be anonymous, but some of them may be classified in the 'sensitive data' category (e.g. heart rate, gaze data, facial recognition).

##### 3.3.3 Data treatment process

Data in Use Case C/D are mainly collected in "data collection phases" and in HMI studies. In both types of studies, personal data from the participants are collected. The data collection phases focus more on sensor data (ECG, voice, gaze behaviour), while the HMI studies focus on the behaviour of the participants. All data are shared via the ADAS&ME repository in an anonymized format. Data from participants who consented to video recordings (e.g. face recognition) can only be accessed by a limited number of project partners, and the results are used only for scientific purposes, in an anonymised manner.

##### 3.3.4 Data sources

As shown above, data sources are sensors and questionnaire data. For a complete list, please refer to the data collection spreadsheet completed for UCs C and D (included in the overall data sheet in Annex 4).

##### 3.3.5 Data sharing

Anonymized raw data from the data collections are shared internally, within Use Case C/D, with the related use case partners. Data are stored on local servers with access restricted to DLR employees only. Gaze data are stored on a hard disk and sent to SmartEye; anonymization is not possible for the latter data. Data resulting from tests conducted by Ford can be accessed by all project partners, as no high-risk data processing is involved.

##### 3.3.6 Data clusters

No special categories of data are included. The main data clusters are the following:

* Facial recognition data
* Gaze data
* Voice data
* Heart rate
* GSR
* Driver behavioural parameters (steering, braking, acceleration/deceleration, pedal pressure, steering pressure)
* Questionnaire and scale data
* Background data

DLR collected data from 42 participants in two data collection phases.
Ford collected data from 24 participants in March 2018. Both of DLR's data collections were conducted in controlled experimental conditions in 2018. Anonymous datasets will be stored for five years after the end of the project. Individuals are not affected because data collection and storage are anonymous. Testing geographically covers participants in the area of Braunschweig, Germany (DLR employees and externals). Participants of the Ford study were recruited within NRW, Germany.

##### 3.3.7 Context of processing

In the first data collection, participants were citizens of Braunschweig; in the second data collection, all participants were DLR employees. Participants of the Ford study were recruited by an external company, and all informed consent procedures were carried out by Ford. Drivers participated voluntarily and were free to stop and leave whenever they wanted. Furthermore, participants were informed that their data would be stored and used for scientific purposes, and all of them agreed that their data can be stored and used for further scientific analysis. They were informed that their data would be treated anonymously and used for research purposes and publications. No children or other vulnerable groups were involved in the data collections or HMI studies. No concerns existed because data collection is anonymous. The experimental setup was novel, and for this reason a security driver was present in the second data collection; the data collection itself, however, was not novel. The real-time processing of physiological data is not new and has already been done in some series-produced cars (sleepiness detection). Nevertheless, new sensors are used in the current project. All data will only be processed and analyzed in the vehicle, and no communication of these data is performed. Similar systems exist in the respective market for most of the in-house builds. No public-related concerns are relevant or anticipated for the remaining pilot activities.

##### 3.3.8 Purposes of processing

The goal of the processing is to have a robust driver state assessment in automated vehicles. With knowledge of the driver state, new tailored interaction strategies can be performed by the vehicle to achieve the maximum level of driver support when it is needed.

**Ford stress study:** 1. To gather data that will be used for algorithm development in order to determine the driver's state. 2. To identify methods by which the periods and intensity of certain driver states can be detected in a minimally obtrusive way. The goal was to induce stress.

**Phase I:** Collect voice data from participants in different emotional states (happiness, anxiety, neutral, sad), induced in a controlled environment. This data is used to develop a robust algorithm for driver state detection.

**Phase II:** Collect ECG data from participants in different stress states; furthermore, gaze data was collected for the distraction algorithm. Stress and distraction were induced in a controlled environment while driving in manual or automated mode on a test track, with a safety driver always present. This data is used to develop a robust algorithm for driver state detection.

This ensures a user-defined and accepted system, as it is being developed during the lifetime of the project and, as a whole, did not pre-exist. All drivers will benefit from this, since they will get the right level of system support if they need it. Due to this, the safety of the driver and the driving environment can be achieved.
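For algorithm training of this kind, the induced-state annotations must be paired with the anonymized recordings. The snippet below sketches one possible minimal annotation format; the field names, study IDs and values are hypothetical examples, not the project's actual schema:

```python
import csv

# Hypothetical annotation rows: anonymous study ID, data collection phase,
# start time of the annotated epoch, and the state induced at that point.
rows = [
    ("P01", "I",  0.0,   "neutral"),
    ("P01", "I",  120.0, "anxiety"),
    ("P02", "II", 0.0,   "stress"),
]

with open("induced_state_annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["study_id", "phase", "epoch_start_s", "induced_state"])
    writer.writerows(rows)
```

Because such a file contains only anonymous IDs and labels, it can be shared alongside the sensor data without raising additional privacy concerns.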
##### 3.3.9 Consultation process

Participants' views are collected through self-reports and standardized questionnaires comprising mostly closed-ended questions. Only the research team has access to these data. The team members' names have been added to both the ethics application and the informed consent form. The same organization is the data collector and data processor. However, the algorithms are developed by UPatras, which has access to anonymised data only and manages the data storage on the central project repository, as described in the WP4-IR-MS7 document.

##### 3.3.10 Necessity and proportionality

Data processing is performed within the framework of the European project ADAS&ME and is part of our contractual obligations. It is based on careful and considerate experimental planning, as described in D7.1 of the ADAS&ME project. Participant data collection is mandatory for a UCD approach when designing a user-specific safety system. The system and its respective functions cannot be used for another purpose. The end-product will be a prototype that will not be used by people outside the Consortium. Further development and improvements can be made, but they will not involve usage of the data collected at this stage for different applications.

Data quality is ensured through technical verification and pre-testing sessions taking place before any actual testing. Certain indices were set for technical performance with regard to accuracy, validity, sensitivity and reliability. Data treatment and minimization was controlled by trained data scientists. All participants completed an informed consent form. Both data collections are completed, and the informed consent form was GDPR compliant. Users receive information about data treatment and storage, the anonymization process, their rights during testing, a copy of the informed consent form, the testing duration, a description of the project and contact points. All data are anonymous, and processors have no access to any information about the participants. In addition, data processors are all informed about the new GDPR guidelines and their responsibilities.

All data shall be handled in accordance with the ADAS&ME Ethics Manual, D10.3 (Jansson et al., 2017). This manual refers to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (the General Data Protection Regulation, GDPR). The GDPR restricts transfers of personal data outside the EEA unless the rights of the individuals in respect of their personal data are protected in another way. In any case, no transfer of personal data is foreseen in the ADAS&ME project.

##### 3.3.11 Identified, assessed and mitigated risks

The risks identified in the performed risk analysis (e.g. as part of the ethical approval of the DLR stress study) concerned potential accidents while driving on the DLR campus (Phase I) and with the test vehicle while driving automated on the test track (Phase II). These issues were mitigated by conducting tests only at times without heavy traffic, with the safety driver always able to brake (Phase I), and with no other traffic on the closed test track and a safety driver always on board (Phase II). However, these aspects relate to the ethical conduct of user testing and not to data privacy. Therefore, no data privacy related issues and risks have been identified.
#### 3.4 UCs E & F – Physical fatigue and fainting (DUCATI & CERTH/HIT)

##### 3.4.1 Aims

* Development and evaluation of a system to detect rider stress;
* Development and evaluation of a system to detect physical and thermal fatigue;
* Development and evaluation of a system to detect visual distraction in riders;
* Development and evaluation of HMI elements and their combinations to inform and warn riders when they are fatigued and distracted.

##### 3.4.2 The need

The need for a PIA is based on the fact that the data collected may be anonymous, but some of them may be classified in the 'sensitive data' category (e.g. heart rate).

##### 3.4.3 Data treatment process

Data are collected anonymously through sensors and questionnaire completion. Data are stored only in CERTH offline storage and will be deleted five years after the end of the project. Data sources are sensors and questionnaire data; for a complete list, please refer to the data collection spreadsheet completed for UCs E and F (included in the consolidated data collection table that can be found in Annex 4). Anonymized raw data were shared with UPatras, the data manager of the project, for algorithm creation and data repository management. The data flow is presented in the following diagram (Figure 1).

**Figure 1.** Use case E/F HMI data management

No high-risk data processing is involved.

##### 3.4.4 Data sources

No special categories of data are included. The main data clusters are the following:

* Heart rate
* Galvanic skin response (GSR)
* Internal and external temperature
* External humidity
* Rider behavioural parameters (steering, braking, acceleration/deceleration, pedal pressure, steering pressure, body stature)
* Questionnaire and scale data
* Background data

Data from 22 participants were collected across two phases, in controlled experimental and simulated conditions for both phases. Experiments conducted during Phase I aimed at the collection of data for the development of the physical fatigue and visual distraction algorithms (conducted in 2017/2018). Phase II tests aimed to evaluate the HMI configurations developed within the project in order to detect and inform/warn riders when visual distraction or physical fatigue was detected (conducted in 2019). Some riders participated in both phases; they were in their majority CERTH employees, all naïve to the project and testing objectives. Anonymous datasets will be stored for five years after the end of the project. Individuals are not affected because data collection and storage are anonymous. Testing covers participants that are residents of Thessaloniki, Greece.

##### 3.4.5 Data sharing

Anonymous data are shared only with UPatras for data management (i.e. storing in the data repository) and for the development of the distraction and physical fatigue algorithms. Data sharing is anonymous. Pseudonymization was handled offline, and the related documents were deleted after the end of the respective testing phase (i.e. Pilot). This means that, during the Pilot, personal details were kept only for organizational reasons (e.g. arranging appointments) and were destroyed after the end of the testing period.

##### 3.4.6 Data clusters

Data are clustered into three main categories: a) self-reported/perceived questionnaires, b) physiological measures (e.g. ECG, GSR) and c) riding behaviour/performance indicators (e.g. braking behaviour, acceleration/deceleration).
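The offline pseudonymization step described in Section 3.4.5 can be illustrated with a minimal sketch. All file names and the ID scheme below are hypothetical; the point is only that the name-to-ID key lives in a separate file that is destroyed after the Pilot, while the shared datasets carry nothing but random study IDs:

```python
import csv
import secrets

def pseudonymize(names):
    """Assign each participant a random study ID that cannot be reversed."""
    return {name: f"P{secrets.token_hex(4)}" for name in names}

# Contact details live only in this key file, which is kept offline and
# destroyed once the Pilot ends (hypothetical file name).
with open("pilot_contacts.csv", newline="") as f:
    names = [row["name"] for row in csv.DictReader(f)]

key = pseudonymize(names)

# Shared datasets carry only the anonymous study IDs.
with open("participant_ids.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["study_id"])
    for study_id in key.values():
        writer.writerow([study_id])
```

Once the key file is deleted, the measurement data can no longer be linked back to an individual, which is consistent with the anonymity guarantees described above.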
##### 3.4.7 Context of processing

Riders participated voluntarily and were free to stop and leave whenever they wanted; they were also informed that they could ask for their data to be deleted rather than stored. Participants were informed that their data would be treated anonymously and used for research purposes and publications. No children or vulnerable groups were involved and/or recruited, and explicit inclusion and exclusion criteria were set. No concerns existed because data collection is anonymous. The experimental setup was novel, and for this reason a medical practitioner was always present during the thermal fatigue experiments in Phase I; the data collection itself, however, was not novel. Similar systems exist in the respective market for most of the in-house builds. No data privacy concerns exist or were envisaged.

##### 3.4.8 Purposes of processing

The objectives differ per phase:

* **Phase I:** Create algorithms to detect the rider's acute stress, visual (acute) distraction and thermal fatigue. Thermal fatigue, stress and distraction were induced in a controlled environment, with a medical practitioner always present. Participants were also asked to understand the role of the HMI and select the elements (or combinations of elements) they preferred.
* **Phase II:** Select the most appropriate and accepted HMI elements for informing riders when they are distracted and when they are physically tired (three levels: information, warning and critical, plus activation of the recovery and then stabilization modes).

This approach ensures a user-defined and accepted system, as it is being developed during the lifetime of the project and, as a whole, did not pre-exist; it also ensures that the system is safe, reliable, accurate, valid and usable.

##### 3.4.9 Consultation process

Direct subjective feedback is collected from participants through self-reports and standardized questionnaires comprising mostly closed-ended questions. Only the CERTH research team had access to subjective data. The team members' names have been added to both the ethics application and the informed consent form. The same organization is the data collector and data processor. However, UPatras was involved in the development of the algorithms and therefore has access to anonymised data only; it also manages the data storage on the central project repository, as described in the WP4-IR-MS7 document.

##### 3.4.10 Necessity and proportionality

Data processing is performed within the framework of the European project ADAS&ME and is part of our contractual obligations. Data collection is based on careful and considerate experimental planning, as described in D7.1 of the ADAS&ME project. Participant data collection is mandatory for a UCD approach when designing a user-specific safety system. The system and its respective functions cannot be used for another purpose. The end-product will be a prototype that will not be used by people outside the Consortium. Further development and improvements can be made, but they will not involve usage of the data collected at this stage for different applications.

Data quality is ensured through technical verification and pre-testing sessions taking place before any actual testing. Certain indices were set for technical performance with regard to accuracy, validity, sensitivity and reliability. Data treatment and minimization was controlled by trained data scientists. All participants completed an informed consent form. Phase I was conducted before the implementation of the GDPR, and the informed consent form was not GDPR compliant.
For the second phase of user testing (Phase II), the informed consent form was further adapted to be GDPR compliant. Users receive information about data treatment and storage, the anonymization process, their rights during testing, a copy of the informed consent form, the testing duration, a description of the project and contact points. An updated informed consent and release form can be found in Annex 1. The release form can be used in case the research teams wish to collect audio, video and photographs. Updated versions of these forms were annexed to this deliverable primarily because of the final real-life tests that remain to be conducted within WP7.1 at IDIADA, as the material collected during these tests can be used for demonstration of the results of the ADAS&ME project. All data are anonymous, and processors have no access to any information about the participants. In addition, data processors are all informed about the new GDPR guidelines and their responsibilities. A one-day training seminar by a counselling firm was held at CERTH, and data collection and the PIA were discussed with CERTH's DPO. There are no international transfers of data.

##### 3.4.11 Identified, assessed and mitigated risks

No risks related to data privacy were identified, and thus no data-related mitigation strategy is necessary. The only identified risk concerns participants falling from the riding simulator during testing. This potential risk was mitigated by adding padded surfaces around the riding simulator to ensure that participants are not hurt in case of a fall.

#### 3.5 Use Case G – Automating approaching at and departing from bus stops (VTI)

A DPIA was performed because participants are asked to provide feedback, and data that could be identified as personal were collected, although participants remain anonymous.

##### 3.5.1 Aims

The aim was to collect data on the stress, sleepiness and fatigue of bus drivers in relation to repeated tasks like approaching a bus stop and departing from it, i.e. tasks that can be automated and thus alleviate bus drivers' fatigue and stress. These data were used for tuning the algorithms on sleepiness and inattention. In addition, the aim was to test the setting of the final evaluation and to make sure that the integration of sensors and the connection to the HMI are working, but also that the design of the study supports the results needed for the final evaluation.

##### 3.5.2 The need

The need for a PIA is based on the fact that the data collected may be anonymous, but some of them may be classified in the 'sensitive data' category.

##### 3.5.3 Data treatment process

Data is being collected for two purposes: (1) data to support the development of driver state detection algorithms, and (2) data to support HMI development. Data are collected anonymously through sensors and questionnaire completion. Identifying information afforded by the provision of consent to participate is stored separately from all other data collected from the participant. All paper data are stored in a locked cabinet. Data to support the development of the driver state detection algorithms were collected (WP4). Only pre-specified contributors can access the data that is needed by that partner. Data is also being collected to support the development of the Human Machine Interaction (HMI; WP5); no data collected from this work is shared with other partners. Identifiable digital information will be stored on an isolated data storage device that is not connected to the online network.
Non-identifiable driver information is stored in a shared directory accessible by all Scania project contributors. Digital and paper data will be destroyed five years after the completion of the project.

### 3.5.4 Data sources

As shown above, data sources are sensors and questionnaire data. For a complete list, please refer to the data collection spreadsheet completed for UC G (see Annex 4).

### 3.5.5 Data sharing

Non-identifiable (digital) participant data are stored on the project-hosted data repository and are available to specific partners requiring access to the data for algorithm development. No high-risk data processing is involved.

### 3.5.6 Data clusters

No special categories of data are included. The main data clusters are the following and include data collected for algorithm training:

* Demographics Questionnaire
* Background Sleep Questionnaire
* Karolinska Sleepiness Scale (KSS)
* Subjective Stress (SUS)
* ECG, Heart Rate / Variability
* Blink duration, EOG
* Respiratory rate
* Kinematic data of moving base simulator (i.e. velocity/acceleration)
* Closed and Open question items

### 3.5.7 Context of processing

At this stage, three testing steps were taken. The first step was an exploratory study with the aim of understanding bus drivers' working conditions and problems. The second step was a Virtual Reality (VR) simulation study with 10 bus drivers in which a first HMI concept was evaluated. The outcome of the VR study was then modified and integrated in a moving-base driving simulator, where the driver state algorithms for distraction and hands on steering wheel (except sleepiness) were also integrated and evaluated. Algorithms for sleepiness will be added to the final driver state detection system. In total, 6 bus drivers (some of whom also took part in the VR data collection) were invited to drive on two different occasions: once in an alert condition and once in a supposedly fatigued condition. The HMI evaluation was based on a questionnaire that was filled in after the fatigued driving session. This study followed the data collection conducted before GDPR compliance, which focused on the affective state algorithms.

### 3.5.8 Purposes of processing

As described previously, some data is used for the development of driver state detection algorithms, while other data is used as input to the HMI. No intended effect on the person is anticipated. This ensures a user-defined and accepted system, as it is being developed during the lifetime of the project and, as a whole, did not pre-exist. In addition, it ensures that the system is safe, reliable, accurate, valid and usable.

### 3.5.9 Consultation process

Participants' views will be collected through self-reports and a standardized questionnaire for HMI development. Information about data security was received internally from the project.

### 3.5.10 Necessity and proportionality

Data processing is performed within the framework of the European project ADAS&ME and is part of our contractual obligations. The data processing achieves its goal because it is based on careful and considerate experimental planning, as described in D7.1 of the ADAS&ME project. There was no other way or method to achieve this outcome, especially if the same level of system performance is required. Function creep is prevented by the fact that the system and its respective functions cannot be used for another purpose. The end-product will be a prototype that will not be used by people outside the consortium.
Further development and improvements can be made, but they will not involve usage of data collected at this stage for different applications. Further exploitation of the products generated within this project will not require any further access to participant data. Data quality is ensured through technical verification and pre-testing sessions taking place before any actual testing. Certain indices were set for technical performance with regard to accuracy, validity, sensitivity and reliability. Data treatment and minimization were controlled by trained data scientists. All participants completed an informed consent form. Stage I and Stage II testing has been completed, and all data collection was GDPR compliant regardless of whether it was performed before the regulation's implementation. Users receive information about data treatment and storage, the anonymization process, their own rights during testing, a copy of the informed consent form, the testing duration, a description of the project and contact points. All data are anonymous, and processors have no access to any information about the participants.

### 3.5.11 Identified, assessed and mitigated risks

No data-related risks were encountered during the tests already completed, and none are envisaged for the tests scheduled to be carried out. Simulator sickness was a potential risk for test conduction, not related to data privacy, which would have resulted in immediate cessation of the test session. Participants were informed beforehand of this possibility through the informed consent process.

### 3.5.12 Overview of privacy assessment

As is evident from each separate UC data privacy assessment, no major privacy risks were identified, as data collection was largely anonymized and data sharing took place on a need basis and only for anonymized data. Collected audio and video data were stripped of any personal data, stored safely, shared only on a need basis and kept isolated from the other collected data. In addition, in all cases, participants agreed in writing to these data being collected, stored and shared with the Consortium. No recognition was possible, and informed consent was obtained across all stages of testing. The UC leading teams conducting each assessment stated that they are now aware of, and ready for, all steps required for implementing the GDPR, as completing this template was a valuable experience that can be re-used in other projects they are currently working on or will be involved in in the near future.

The most important lesson learnt from this experience is that a DPIA is a time-consuming process and thus needs to be organized early in the project. In addition, roles should be assigned very early in order to ensure that 'privacy by design' is treated as a necessity and is evident in the architecture, the data repositories and the data flows. Within ADAS&ME, because of the addressed data clusters (i.e. utilization of data like heart rate, blink rates, etc.), this process was inherent. This is not the case for other transport-oriented (or other) projects, and the three aspects mentioned above must be addressed at the beginning of the project, along with the data privacy policy, which will in turn be based on these same three aspects. Moreover, the relation between the ethics and data privacy policies also needs to be addressed early in the project through the collaboration of both responsible teams, and reported in the respective Deliverables. This process will ensure that the policies are agreeable and complementary.
As the GDPR was not adopted from the beginning of the project, certain procedures and documents had to be updated and roles had to be assigned that did not exist beforehand. However, as the involved partners are experienced in this field, the positive outcome was that most considerations and requirements were fulfilled even before GDPR implementation.

# 4 Data descriptions and updated policy

## 4.1 Updated data privacy policy

The initial ADAS&ME data privacy policy is included in the first version of this Deliverable, submitted in M6. An update of this data privacy policy was prepared for two reasons:

1. GDPR compliance requires an elaborate description of related processes and actions;
2. It is written in a format that can be shared by partners conducting tests or other activities that require contact with people/users outside the Consortium.

The updated policy governs the collection of information (private or not) during the Pilots (i.e. the whole testing period) and user tests (i.e. dedicated user testing sessions) and can be found in Annex 4.

## 4.2 Final datasets per UC

The datasets collected per UC were updated based on the current evaluation stage. The complete dataset is annexed in this final version of this report, but the table below provides a summary of the qualitative and quantitative characteristics of the data collected during testing within the project. Overall, 141 data categories have been identified across all UCs in the project. It is important to note that the data collected so far (stages I and II) are included in this file. Data collection during the final evaluation activities, as planned within WP7, will utilize data sources from phases I and II and additional subjective (perceived) scales.

**Table 2.** Data description categories and explanatory text

<table>
<tr>
<th> **ASPECT** </th>
<th> **CATEGORY** </th>
<th> **Explanatory text** </th>
</tr>
<tr>
<td> **DATA** </td>
<td> **Collected/Created** </td>
<td> Collected/created </td>
</tr>
<tr>
<td> </td>
<td> **Name** </td>
<td> Name of the data/ metadata/ exploitable result </td>
</tr>
<tr>
<td> </td>
<td> **Description** </td>
<td> Description of the data/ metadata </td>
</tr>
<tr>
<td> </td>
<td> **Category** </td>
<td> FW/ SW/ Algorithm/ Raw data/ Dissemination material/ etc. </td>
</tr>
<tr>
<td> </td>
<td> **Type** </td>
<td> Document/ video/ images/ source code/ etc. </td>
</tr>
<tr>
<td> </td>
<td> **Format** </td>
<td> File extension/ prototype </td>
</tr>
<tr>
<td> </td>
<td> **Size** </td>
<td> Size in MB/GB </td>
</tr>
<tr>
<td> </td>
<td> **Owner** </td>
<td> Partner name/ Consortium/ external stakeholder </td>
</tr>
<tr>
<td> </td>
<td> **Privacy level** </td>
<td> Public/ consortium/ partner/ etc. </td>
</tr>
<tr>
<td> </td>
<td> **Metadata** </td>
<td> Any metadata that are linked to these data and/or describe this data type. </td>
</tr>
<tr>
<td> </td>
<td> **Relevant standards and legislation** </td>
<td> Any standards or laws that need to apply and that are relevant to the systems producing these data. </td>
</tr>
<tr>
<td> **DATA SHARING** </td>
<td> **Repository during the project (for private/public access)** </td>
<td> BAL.PM or other Open access repository/ partner storage (private cloud/ private drop box)/ etc. </td>
</tr>
<tr>
<td> </td>
<td> **Data sharing** </td>
<td> **Open:** Open for public disposal. **Embargo:** It will become public when the embargo period applied by the publisher is over; in this case the end date of the embargo period must be written in DD/MM/YYYY format. **Restricted:** Only for project-internal use. Each data set must have its distribution license. Provide information about personal data and mention whether the data is anonymized or not. State whether the dataset entails personal data and how this issue is taken into account. </td>
</tr>
<tr>
<td> </td>
<td> **Back-up frequency** </td>
<td> daily/ monthly/ yearly/ once </td>
</tr>
<tr>
<td> </td>
<td> **Destroyed at the end of the project?** </td>
<td> NO (1)/ NO (2)/ NO (3)/ Yes/ Unnecessary </td>
</tr>
<tr>
<td> </td>
<td> **Duration of preservation (in years)** </td>
<td> Number of years </td>
</tr>
<tr>
<td> </td>
<td> **Repository after the project (open/embargo or never open)** </td>
<td> BAL.PM or other Open access repository/ partner storage (private cloud/ private drop box)/ etc. </td>
</tr>
<tr>
<td> **GDPR** </td>
<td> **GDPR compliance (Yes/No/In process)** </td>
<td> State if you have completed/initiated a process to be compliant with the new Regulation. </td>
</tr>
<tr>
<td> </td>
<td> **Data role** </td>
<td> Please state if you are a data controller, processor or both within the framework of the project. Further information about these roles can be found at: https://www.gdpreu.org/the-regulation/key-concepts/data-controllers-and-processors/ </td>
</tr>
<tr>
<td> </td>
<td> **DPO** </td>
<td> Present/ have discussed project with/ not appointed yet </td>
</tr>
<tr>
<td> **Publications** </td>
<td> **Open publications** </td>
<td> Please add any papers you have already submitted/presented in open-access journals and/or ones you are planning to submit by the end of the project. In addition, please add any publications in journals that are not open, but which you are planning (or have already paid) to make open. </td>
</tr>
</table>
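To illustrate how the categories in Table 2 combine into a single machine-readable register entry, the following minimal Python sketch encodes one hypothetical record; every field value is an illustrative assumption, not actual ADAS&ME data.

```python
# Minimal sketch of one data description record following the Table 2
# categories. All values are illustrative assumptions, not project data.
example_record = {
    "collected_created": "Collected",
    "name": "UC_G_heart_rate",            # hypothetical dataset name
    "description": "Anonymised heart rate/variability logs from simulator runs",
    "category": "Raw data",
    "type": "Log files",
    "format": ".csv",
    "size": "250 MB",                     # illustrative size
    "owner": "Consortium",
    "privacy_level": "Consortium",
    "metadata": "Session ID, UTC timestamp, sensor model",
    "relevant_standards_legislation": "GDPR",
    "repository_during_project": "BAL.PM",
    "data_sharing": "Restricted",         # project-internal use only
    "backup_frequency": "daily",
    "destroyed_at_end_of_project": "No",
    "preservation_years": 5,
    "repository_after_project": "partner storage (private cloud)",
    "gdpr_compliance": "Yes",
    "data_role": "controller and processor",
    "dpo": "Present",
    "open_publications": [],
}

# A simple completeness check helps UC leaders spot missing fields when
# consolidating records into the aggregated data pool.
missing = [key for key, value in example_record.items() if value in ("", None)]
assert not missing, f"Incomplete record, missing: {missing}"
```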
Therefore, any data privacy issues have already been covered in all versions of the Data Management Plan reported so far in the project. The following overview presents the data clusters across UCs, based on the consolidated information that can be found in the aggregated table in Annex 4.

**Physiological measures** (related mostly to the algorithms' development/refinement): heart rate/variability, eye gaze/blinking, galvanic skin response (GSR), blood pressure, etc.

**Vehicle performance/behaviour measures** (related to both algorithm creation and HMI testing): lateral position, acceleration/deceleration, braking, steering angle, etc.

**Self-reported/perceived questionnaires/interviews:** relevant to all testing taking place within the project.

## 4.3 Decision about data openness

Due to the nature of the data collected, no datasets will be open to the public, but findings will be presented at conferences and in peer-reviewed journals. This decision is reflected in the embargo periods reported per data type by the UC leaders in the consolidated data pool (Annex 4).

## 4.4 Open access publications

So far, two open-access publications by VTI are available (they are listed in the Excel spreadsheet in Annex 4). The open access publications will be made available via the website and potentially Zenodo (https://zenodo.org/) by the end of the project.

# 5 Conclusions

The final version of the Data Management Plan reflects the efforts made by UC teams and leaders to ensure that GDPR guidelines are implemented and that the involved partners become familiarized with the process of data privacy impact assessment.
Overall, the identified risks related more to the actual conduct of testing (e.g. a participant falling from the motorcycle riding simulator) than to actual privacy leaks or the loss or revealing of user profiles. The related mitigation mechanisms were in place at the dedicated pilot sites, and anonymized files with no recognition potential were shared among partners and only within the project. Nevertheless, setting up a data privacy impact assessment methodology early in the project facilitates the actualization of the 'data privacy by design' concept from the start and allows activities to be harmonized with fewer iterations and changes.

The final testing phase will be conducted after the submission of this Deliverable. The results of the final round of DPIAs per UC will therefore be reported in the final technical report only if they differ from the ones already reported in this document, if changes in the data privacy policy become necessary, or if the data collection table changes considerably.
0304_MONOCLE_776480.md
# 1\. Executive Summary

The objective of this data management plan (DMP) is to detail the plan for management of data generated and collected within the MONOCLE project. The DMP describes the data management life cycle for all datasets collected, processed and/or generated by the project. It covers:

* what data will be collected, processed or generated
* how the data will be handled during and after the project
* who is considered the owner of a data set and who it is shared with
* how the sharing of the data within and outside the project is organised
* what formats, metadata and standards the data will adhere to
* how data will be curated and preserved

MONOCLE data will consist of a diverse range of types and formats, and standardization of the data and data flows is one of the main project objectives.

# 2\. Scope

This document, the Data Management Plan, is intended for internal and external use, describing the mechanisms that MONOCLE will put in place to ensure all public data follow the FAIR (Findable, Accessible, Interoperable, Re-usable) data management principles. This is a living document, updated periodically to reflect new data sets made available through MONOCLE. The current document presents the status and planning at month six of the four-year project.

Streamlining data access and interoperability from the sensor to the user is one of the main aspects of the MONOCLE project. Therefore, a number of related reports which detail the methodologies developed in the project will be of interest. At present, D5.2 outlines the data infrastructure and standards that will be implemented in the project. D5.4 will describe the final implementation of the data flow between MONOCLE subsystems.

# 3\. Introduction

The aim of MONOCLE is to implement enabling technologies for the deployment, management and maintenance of sensors and sensor networks. Sound data management is pivotal for fully realising the benefits of MONOCLE. Well curated data will stimulate and ensure smooth collaboration between the project partners and will allow the users to easily evaluate and put to use the data received from the project. For dissemination and exploitation, open access to data generated in the project will help to underpin the credibility and stimulate uptake of MONOCLE results. The MONOCLE project will follow the FAIR (Findable, Accessible, Interoperable, Re-usable) data paradigm, and this is reflected in the data management plan.

Data will be **F**indable through the various user applications that interface to the data services provided by the MONOCLE back-end. A web based geographic information system (GIS) will be publicly available with a data search feature acting on parameter, spatial and temporal coverage or data originator fields. Appropriate datasets will also be registered in public archives such as ZENODO and GEOSS, which will enhance their ability to be found and **R**e-used, even if the MONOCLE back-end should cease to operate. The use of data services designed for system interoperability will guarantee that all open data within the project are widely **A**ccessible now and in the future. The user applications (web based and in the form of source code) will also improve **A**ccessibility with focused information available, including through intuitive tools. **I**nteroperability is also made possible through the use of common data formats and standardised data services.
For instance, it would not matter what format the original data are stored in when requested via a Sensor Observation Service, as the response is documented and standardised.

The DMP will be updated as a "live" document during the lifetime of the project, with four scheduled release dates. Document D5.2 "System architecture and standards report" accompanies the first release of the DMP and describes data sources and interfaces in additional detail.

# 4\. Data summary

## Data purpose and utility

Observation of global coastal and inland water bodies with ocean-colour satellite sensors has reached full operational potential through the latest satellite missions in the Copernicus programme. The global societal demand for water quality information through downstream EO services is increasing and expanding into domains of public health, agriculture, aquaculture, energy and food safety, drinking water, conservation of ecosystems and biodiversity, navigation and recreational use of water resources. Inland and transitional water bodies, however, represent a staggering range of optical and environmental diversity. A dedicated concept for EO-supporting in situ services for optically complex waters is necessitated by the limited ability of present in situ activities to add value to operational EO missions.

To improve in situ components of the GEOSS and Copernicus services in optically complex waters, MONOCLE will introduce new sensor technological development across a range of innovative platforms. MONOCLE will combine high-end reference sensors in a spatially sparse configuration with a complementary, higher-density network of low-cost sensors for smartphones and unmanned aerial vehicles (UAVs or drones). The full MONOCLE sensor suite and the data gathered and processed with MONOCLE sensors and processing means will serve the EO research communities for water and atmosphere with a rapidly replenishing volume of reference observation data, reducing both local-regional (improved atmospheric correction) and global (improved algorithms) observation uncertainty.

The MONOCLE integrated observation service concept, particularly when integrated with EO services, significantly lowers the technology and computing requirements for innovators in environmental observation in general, and water quality management in particular. This reduction is critical for uptake and engagement in developing regions. By making both data and supporting software openly available, the project will boost innovation with app developers, environmental consultants, data analysts and visualisation artists worldwide. The open data strategy of MONOCLE plays a central role in opening opportunities to the EO sector, not merely in Europe but also in supporting downstream users and regional information providers in data-poor regions, particularly developing countries. For the latter, MONOCLE will lower the threshold for computational and technological capacity to actively contribute to the global observation system.

## Data Types

MONOCLE will collect a wealth of data on water quality from multiple sources that can be categorised as one of:

* In-situ data, either:
  * Data collected by non-expert participants (e.g. citizen scientists)
  * Data from automated instruments (e.g. on buoys, ships)
  * Data from manually operated instruments (e.g. hand-held sensors, piloted aircraft)
* Satellite data of inland, transitional and coastal water bodies
* Image data collected with Remotely Piloted Aircraft Systems (RPAS)
* Pre-existing data, accessed in (external) databases or directly contributed by stakeholders
* Research results and derived data sets

Each of these data sources has specific characteristics and challenges, which are summarised below.

### Citizen generated data

MONOCLE will engage with groups of volunteers in citizen science campaigns where the citizen scientists collect water quality data and submit these to the MONOCLE system via a mobile app. A number of different parameters will be collected by the citizen scientists, either ad hoc or during larger campaigns. Such campaigns can deliver a large amount of data in a fixed time period, but are more difficult to plan, as the motivation of the volunteers is pivotal. A fundamental principle of MONOCLE data management is that the apps used to collect data will also have access to stored results, providing immediate feedback where possible. Citizen participation requires additional ethical considerations, which are discussed further below.

Citizen observations are presently foreseen to be collected through the Earthwatch FreshWater Watch app and the iSPEX app (under development). Data exchange formats are still being decided upon, but will likely follow concepts of interoperability using Web Feature Services (WFS) and Sensor Observation Services (SOS), with the MONOCLE back-end communicating with the respective data stores of iSPEX and Earthwatch. Hence, no 'raw' data format is currently considered here. The global FreshWater Watch dataset currently contains > 20,000 datasets, where each contributor is represented as a separate dataset. For iSPEX, the main mode of operation is foreseen to be in dedicated campaigns. Data volumes associated with the FreshWater Watch are modest, as these consist of form entries and occasionally photos, likely to remain in the order of gigabytes or less. The iSPEX collects a range of smartphone camera photos, likely in the order of gigabytes. Data storage at this magnitude is not currently seen as an issue.

### Automated data collection

A variety of automated sensors will be deployed by project partners, such as radiometers, fluorometers and absorption meters. The sensors can be deployed at fixed positions (e.g. on buoys, poles or jetties) or on moving platforms (ships, RPAS). Data will in general be collected at high frequency and immediately transmitted to the MONOCLE system. However, if deployed in remote locations, the sensors can also collect data less frequently, and store measurements if they are not online. Data acquisition, processing and transmission should all be automated with these sensors, as should quality control mechanisms. The aim of MONOCLE is to provide these sensors with interactive interfaces so that measurements can be triggered, sensors turned on and off, or calibrations performed remotely.

During and/or following data collection, most of the optical sensors require data processing, calibration and quality control. The intention is for these processes to be highly automated, with suspect data flagged as not recommended for use and inspected by the data creator/curator. Where existing data stores are considered and an application programming interface (API) is not already in place, one will be created. The SOS interface will be preferred in the development of new communication interfaces from individual sensors.
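As an illustration of what such an interface accepts, the minimal Python sketch below issues an OGC SOS 2.0 GetObservation request over the standard key-value pair (KVP) binding; the endpoint URL, offering and observed-property identifiers are placeholder assumptions rather than actual MONOCLE identifiers.

```python
# Sketch of an OGC SOS 2.0 GetObservation request via the KVP binding.
# The endpoint and identifiers below are placeholders, not MONOCLE values.
import requests

SOS_ENDPOINT = "https://example.org/monocle/sos"  # hypothetical service URL

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "radiometer_station_01",           # hypothetical offering ID
    "observedProperty": "water_leaving_radiance",  # hypothetical property ID
    # O&M temporal filter: all observations within one day
    "temporalFilter": "om:phenomenonTime,2018-08-01T00:00:00Z/2018-08-02T00:00:00Z",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # standardised O&M observation payload
```

Because the request and response are defined by the SOS standard rather than by the instrument, the same call works unchanged whether it reaches a new sensor or a legacy sensor behind a wrapper.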
In addition, an SOS-compliant data wrapper will be made available for legacy sensors. Automated high-frequency data collection for the MONOCLE sensors is estimated in the order of tens of megabytes per observation day. Transfer, storage, and dissemination of these volumes is not currently seen as an operational issue.

### Manual data collection

The project will collect data in field situations using handheld and manually operated instruments. This includes new sensors intended for short-term deployment, and high-end reference sensors operated only during validation campaigns, following described protocols. In all cases, the measurement records will be referenced by geo-location, UTC time-stamp and the measurement protocol used. These measurements are subject to further quality assurance (protocols) and quality control by the operators. Manual data collection during field campaigns is estimated to deliver in the order of several gigabytes of data per campaign. Transfer, storage, and dissemination of these volumes is not currently seen as an operational issue.

### Earth observation data

While the focus of MONOCLE is on providing a network of in situ observations to support Earth observation of optical water quality, satellite-derived data will be produced within the project to demonstrate the use and benefits of the MONOCLE services for Earth observation. For dedicated case studies, high resolution (Sentinel-2 MSI) and medium resolution (Sentinel-3 OLCI) data will be acquired and processed into water quality information products, making use of MONOCLE in situ data for calibration and validation. Data storage needs for EO data for the selected MONOCLE regional use cases (Lake Balaton, Scottish Lochs, Danube Delta, Lake Tanganyika, and several smaller sites) are in the order of hundreds of gigabytes of data per year, which has been costed in the project budget.

### Image data collected with Remotely Piloted Aircraft Systems (RPAS)

The purpose of data collection with RPAS is to construct mosaic maps of waterbodies from which water quality parameters can be derived. The RPAS systems may also serve as a direct reference for satellite data, with the added advantage of detailing fine spatial features, which can explain aberrations in processed satellite data where such features are not directly visible due to a large pixel size. The raw image data are too large (and not useful) to be disseminated beyond the data processing centres, where they are archived on suitable storage media (e.g. tape drives). Processed parameter-specific maps will be disseminated through the MONOCLE data back-end using machine interfaces (WCS). Storage and archiving needs associated with the image data are in the order of terabytes of data and budgeted for in the project.

### Pre-existing data

Pre-existing in situ datasets will consist of collections of optical and biogeochemical measurements contributed by various stakeholders, either as independent data sets where MONOCLE is given a licence to use and distribute these, or as part of curated databases (e.g. LIMNADES for inland water). Access constraints will be recorded and maintained as part of the registration of the data set in the MONOCLE back-end. Pre-existing data will also take the form of large-scale satellite data archives downloaded from space agencies (ESA and NASA), which are then used for further processing to a usable format, in turn integrated with data from MONOCLE sensors.
### Research results and derived data sets

In the process of research and development, outputs will be generated in the form of publications, presentations, tables, datasets and survey results. Such results will be stored in the project management portal for access within the project consortium, with sizes not likely to exceed 100 MB per item. Public reports and deliverables will also be available through the website and OpenAIRE. The methodology is detailed in D9.3 "Open data repositories". The open access requirement for H2020 publications will be honoured through either the green or gold open access route. Each project partner is responsible for delivering publications through their chosen open access route; open access publication fees are an eligible project cost. In addition, these papers will be included in the Zenodo/OpenAIRE repository that has been set up for MONOCLE.

# 5 FAIR data principles

All MONOCLE research data will be curated according to the 'FAIR' principles, i.e. Findable, Accessible, Interoperable and Re-usable. In the following, a short overview is given of the building blocks to reach this goal. It should be noted that, due to the ongoing development of the system, further detail will be provided in future releases of this document. The following are guiding principles; details on each data set will be kept in a central data register, discussed further below.

## General data documentation and guidance

Any documentation such as measurement protocols, system descriptions and use cases will be linked within the data register, and a copy will be kept in the MONOCLE back-end where possible. Users of the MONOCLE front-end will be able to access these documents when accessing a corresponding data set. Where data sets are 'frozen' to create a snapshot of available data at a given point in time, these datasets will be versioned and uploaded to public repositories providing a digital object identifier. By default, all data generated in MONOCLE will be openly available (see Data Access, below), with the exception of unprocessed, uncalibrated data if these have no value to the user. Such data will nevertheless be stored and curated. Data contributed from external sources are the exception to this rule. In such cases, data ownership and licensing will govern whether dissemination beyond MONOCLE is possible.

## Metadata

Initially, the metadata profile ISO 19115 will be used to describe datasets that are made available. As a common ontology, the CF conventions (cfconventions.org) will be followed or extended. These metadata conventions ensure that data are identifiable, usually as part of (live) data streams, using appropriate search terms and keywords. Additional metadata requirements to enable MONOCLE data interoperability developments are described in D5.2 "System architecture and standards report". These requirements concern data ownership, licensing, access restrictions (embargo periods), as well as geospatial parameters. The definition of the minimum and recommended metadata for MONOCLE data sets will be refined during the implementation of MONOCLE WP5. A guiding principle for MONOCLE sensors and platforms is that metadata are injected into the data flow at the point of measurement, either at the sensor or using a dedicated sensor interface.
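As a concrete illustration of this principle, the sketch below writes a single in situ measurement to a netCDF file with CF-1.7 variable attributes and basic discovery metadata; the file name, position and attribute values are illustrative assumptions, not agreed MONOCLE conventions.

```python
# Minimal sketch of CF-compliant metadata injected at the point of measurement.
# File name, coordinates and discovery attributes are illustrative assumptions.
from datetime import datetime
from netCDF4 import Dataset, date2num

nc = Dataset("monocle_station_example.nc", "w")
nc.Conventions = "CF-1.7"
nc.title = "Example in situ chlorophyll-a record"   # discovery metadata
nc.institution = "MONOCLE partner (placeholder)"
nc.license = "CC-BY-4.0 (placeholder)"

nc.createDimension("time", None)                    # unlimited time axis
t = nc.createVariable("time", "f8", ("time",))
t.standard_name = "time"
t.units = "seconds since 1970-01-01 00:00:00"

lat = nc.createVariable("lat", "f4")                # scalar station position
lat.standard_name = "latitude"
lat.units = "degrees_north"
lon = nc.createVariable("lon", "f4")
lon.standard_name = "longitude"
lon.units = "degrees_east"

chl = nc.createVariable("chlor_a", "f4", ("time",))
chl.standard_name = "mass_concentration_of_chlorophyll_a_in_sea_water"
chl.units = "kg m-3"                                # CF canonical units
chl.coordinates = "lat lon"

t[0] = date2num(datetime(2018, 8, 1, 12, 0, 0), t.units)
lat.assignValue(46.9)                               # illustrative position
lon.assignValue(17.9)
chl[0] = 2.0e-6
nc.close()
```

Using standard names and canonical units from the CF standard name table keeps records like this comparable across sensors and directly readable by generic CF-aware tools.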
## Data Access, Interoperability and respecting Intellectual Property

Processed data intended for public access and not subject to ethics limitations will be made available through the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), Web Feature Service (WFS) and Web Coverage Service (WCS) standards initially, with other standard and bespoke data interfaces added as required. MONOCLE aims to apply these standards to communication between sensors, between sensors and data hubs, and between the MONOCLE data back-end, front-end and user applications. Details can be found in D5.2 "System architecture and standards report". Data generated as part of MONOCLE will be free of cost. Data access restrictions and intellectual property rights will, however, remain as set by the dataset creator/owner where applicable. Unless specified, all data will be treated as FAIR open data. In practice, the following data access levels are foreseen:

* open access, not requiring registration, providing access to data identified as open without licence restrictions
* limited access, requiring registration, providing access to open data as well as data sets with a limited licence for use (e.g. non-commercial, accrediting ownership, delayed release, etc.)
* restricted access, requiring registration, providing access to data owned by the user and any data sets this specific user has been granted access to.

Any software tools developed during MONOCLE that are required to access and, to a limited extent, make use of the data will be available free of cost through software repositories such as those already set up on Zenodo and GitHub (see D9.3 "Open data repositories" for details). Essential software tools required to make use of the data have not been defined at this point.

## Data Sharing and Reuse

All data accessible through the MONOCLE back-end data services (Sensor Observation Service, Web Feature Service, Web Coverage Service) will be publicly findable, with accessibility rules based on ownership and licensing drawn from the metadata and data register. Any data set whose producer requires a delayed release will carry that information, such that it can be securely stored and released only when appropriate. In many use cases, however, it will be beneficial to the user to know that embargoed data exist. Such data embargoes will feature in the extended metadata and the data register and can take the following shapes:

* restricted data that are identifiable by measurement parameter, and as collected within a given geographical range and time period
* restricted data identifiable as above, but including exact time and location of observation
* restricted data identifiable as above, but including information about the data owner

## Data Preservation and Archiving

Data will be kept available for a minimum of three years after the end of MONOCLE. Beyond this period, e.g. if the service should no longer be deemed useful or sustainable, data will be archived at a secure open access location, insofar as data licensing permits. The project intends to create links between the MONOCLE data service and large-scale public data archives (e.g. GEOSS) for long-term accessibility. Requests to remove a data set from the MONOCLE services can be submitted to the Coordinator and will be handled in a manner equivalent to the GDPR for personal data.
Within the project, Work Package 8 is dedicated to planning for the long-term sustainability and evolution of MONOCLE from a service concept into an operational in situ service. Each of the development and innovation activities has set deliverables that will be conditioned by the identified end users and stakeholders through early-stage trend and gap analysis. Input from the sensor manufacturing industry (beyond those well represented in the consortium), from primary in situ data producers (e.g. environment agencies), and from primary data consumers (e.g. EO service developers) will provide both a cornerstone and a vision for development. Commercial sensor and service development will be explored to support a 180-degree market perspective for MONOCLE system components and branding as a whole, exploring manufacturing chains and economies of scale, IP licensing, and patent searches, where applicable. Public-private partnerships and corporate sponsorships (providing green credentials) to sustain citizen observatories and the management of 'super sites' will be considered in this work, delivered as an evolving exploitation plan.

## Data Register

The data register will be maintained as a "live" document; a snapshot will be created for each DMP release. A template is included in the Appendix. The data register is based upon information and restrictions supplied by the upstream data provider, matched to the Horizon 2020 guidelines as below (in _italics_):

* _**Data set reference and name**_ _Identifier for the data set to be produced._
* _**Data set description**_ _Descriptions of the data that will be generated or collected, its origin (in case it is collected), nature and scale and to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse._
* _**Standards and metadata**_ _Reference to existing suitable standards of the discipline. If these do not exist, an outline on how and what metadata will be created._
* _**Data sharing**_ _Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related)._
* _**Archiving and preservation (including storage and backup)**_ _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximated end volume is, what the associated costs are and how these are planned to be covered._

# 6 Allocation of resources

The MONOCLE infrastructure has been designed as an open infrastructure from the start; therefore, the effort and cost of making the data FAIR is part of the overall MONOCLE budget. It is the responsibility of each sensor provider within the project to ensure that their sensors adhere to the agreed standards, with support provided through Work Package 5, which is dedicated to data interoperability and accessibility.
The development and maintenance of the MONOCLE back-end are the responsibility of PML, who will continue to maintain access for at least three years beyond the end of the project.

# 7 Data security

To safeguard original data, backups will be made at the site where they are hosted. The nature of the MONOCLE data back-end is such that copies can be stored there, but this is not a requirement – it is designed to function both as a centralized and as a distributed data system. Copies of data will, in general, not be backed up at the MONOCLE back-end, provided that they can be retrieved again from the source. The same applies to the use of Earth observation and auxiliary data. Loss of such data would potentially cause delays due to the need to download them again from the source, but this will not be much different from restoring the data from tape backups. A number of data repositories have been set up to safeguard specific project outputs, such as software, publications, sensitive data and frozen versions of sensor data. These will be accompanied by DOIs and are described in more detail in the document accompanying D9.3 "Open Data Repositories".

# 8 Ethical aspects

Ethical aspects are mainly relevant for data gathered through citizen science initiatives. These data will be treated according to the ethics procedures laid out in D10.1; in summary, these procedures cover the following aspects:

* Details on the procedures and criteria used to identify/recruit research participants
* Details on the informed consent procedures for the participation of humans
* Templates of the informed consent forms
* Information sheets provided to participants
* Procedures regarding the recording of imagery where humans are identifiable

# Appendix

## Data Register Template

The example shows the information collected through the data register. Included are descriptions of the fields and an example covering Earth observation data.

<table>
<tr>
<th> **Organisation** </th>
<th> **Dataset reference & Name** </th>
<th> **Dataset description/outline** </th>
<th> **Coverage (spatial & temporal)** </th>
<th> **Standards & metadata** </th>
</tr>
<tr>
<td> _Name of the organisation providing the data. Also reference any other ownership, i.e. if you have bought commercial data and have rights to use but must attribute, etc._ </td>
<td> _A reference label. Should be unique when combined with your organisation name._ </td>
<td> _Simple description of the dataset; try to include as much information as possible._ </td>
<td> _Spatial resolution & extent; temporal resolution and extent._ </td>
<td> _Any standardised metadata that accompanies the dataset._ </td>
</tr>
<tr>
<td> **PML**, based on satellite data from ESA and NASA </td>
<td> CCI_reference_chlor_a </td>
<td> ESA OC-CCI archive consisting of global 4 x 4 km ocean colour data.
The dataset consists of individual RRS bands and derived chlor_a, with per-pixel bias and RMSD uncertainty. </td>
<td> Resolution: 4 km. Spatial extent: -180,-90,180,90. Temporal extent: 1997-09-04T00:00:00.000Z to 2017-10-01T00:00:00.000Z. </td>
<td> Files contain CF-compliant metadata, but currently no XML/ISO 19115 metadata exist. </td>
</tr>
</table>

<table>
<tr>
<th> **How will data be shared** </th>
<th> **Software/protocol required for sharing** </th>
<th> **Data access policy (open/locked/partial - give details, e.g. embargo time)** </th>
<th> **Stored in MONOCLE back-end (yes/no - if no, please say why/where it will be stored)** </th>
</tr>
<tr>
<td> _List data services or custom websites._ </td>
<td> _List the protocols available for data access._ </td>
<td> _Data policy, such as groups that can use the data, whether it is only accessible to project partners, or whether there is a time-based embargo._ </td>
<td> _Whether the data will be stored in the MONOCLE back-end or not. If not, describe how data will be stored._ </td>
</tr>
<tr>
<td> WMS: https://vortices.npm.ac.uk/thredds/wms/CCI_ALL-v3.1MONTHLY?service=WMS&version=1.3.0&request=GetCapabilities

WCS: https://vortices.npm.ac.uk/thredds/wcs/CCI_ALL-v3.1MONTHLY?service=WCS&version=1.1.0&request=GetCapabilities </td>
<td> OGC WCS, OGC WMS </td>
<td> Fully open data </td>
<td> No. The archive will be proxied through the back-end using the WMS/WCS links provided. </td>
</tr>
</table>
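As a usage illustration for this register entry, the short sketch below retrieves the capabilities document from the WCS endpoint listed above. The URL is quoted from the register; the surrounding code is an illustrative sketch rather than project tooling, and availability of the service outside the project is not guaranteed.

```python
# Sketch: query the WCS endpoint from the register entry above for its
# capabilities document, which lists the coverages (e.g. chlor_a) that a
# subsequent GetCoverage request can subset by space and time.
import requests

WCS_URL = "https://vortices.npm.ac.uk/thredds/wcs/CCI_ALL-v3.1MONTHLY"

params = {
    "service": "WCS",
    "version": "1.1.0",
    "request": "GetCapabilities",
}

response = requests.get(WCS_URL, params=params, timeout=60)
response.raise_for_status()
print(response.text[:500])  # XML capabilities document
```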
0306_PROSEU_764056.md
# Executive summary

This document is the first version of the PROSEU Data Management Plan (DMP). The PROSEU project participates in the Open Research Data (ORD) Pilot, launched by the European Commission (EC), which aims at promoting free access, reuse, repurposing, and redistribution of data generated by Horizon 2020 projects. Therefore, the purpose of this DMP is to provide a first description of the main elements of the data management policy that will be used by the Consortium with regard to the research data generated and collected during the project (excluding deliverables for the EC and other dissemination materials). Specifically, it includes detailed information about:

* The main research data that the project will collect and generate;
* How they will be processed, organised, and made accessible for verification and reuse;
* How they will be curated and preserved during and after the end of the project according to the corresponding ethical requirements (detailed in D9.1 and D9.2).

To ensure that all research data produced are managed and shared properly, according to ethical and technical standards, and to facilitate their reuse by others in an effective, efficient and transparent way, PROSEU team members will adopt a series of criteria to follow the FAIR data principles. Overall, all final versions of research data will be shared using licences that permit the widest reuse possible (e.g. Creative Commons licences). Exceptions are datasets containing personal data, non-final versions of working documents, datasets in which PROSEU members are not primary authors, or cases where a contractual obligation exists between two parties. These documents will remain closed and will not be openly distributed, nor shared with third parties.

Since the DMP is a _living document_, the information provided in this first version describes the plan for managing all research data during the first 15 months of the project (until May 2019). According to the Grant Agreement, the DMP will be periodically updated to include any significant change arising during the development of the project (i.e. the following updates will be delivered in 06/2019 and 05/2020).

The document is structured as follows: Section 1 introduces the Consortium's commitment towards Open Data and Open Access initiatives to contribute to Open Science principles. Section 2 describes the features of the research data expected to be generated and collected in the project. Section 3 is dedicated to the principles of FAIR data, that is, how data will be findable, accessible, interoperable, and reusable. Section 4 details the costs and responsibilities associated with making data FAIR. Next, Section 5 addresses data security issues. Section 6 describes the ethical and legal aspects involving the research data collected. Finally, the appendices include the description and characteristics of the main datasets that are expected for the period covered by this DMP.

# Introduction

The PROSEU project participates in the _Open Research Data (ORD) Pilot_, which aims to make the research data generated by Horizon 2020 projects findable and accessible for others, in order to maximise their reuse with as few restrictions as possible, while protecting personal and sensitive data, as well as intellectual property rights (following the premise "_as open as possible, as closed as necessary_").
Our ambition and responsibility as researchers is to ensure that all research data produced are managed and shared properly, according to ethical and technical standards, and to facilitate their reuse by others in an effective, efficient and transparent way, following open science practices. In this way, by opening up the research data, new knowledge may be easily discovered and scrutinised by a larger community of researchers and stakeholders, such as policy-makers or non-governmental organisations, among other societal actors.

Therefore, the objective of the PROSEU Data Management Plan (DMP) is to provide detailed information concerning the datasets that will be collected and generated during the project, and how these research data will be shared, archived and preserved during the lifespan of the project, as well as after its conclusion. Specifically, PROSEU will collect data through several procedures, based on transdisciplinary and interdisciplinary research methods and approaches, such as: a self-administered (web-based) questionnaire; semi-structured interviews; focus groups; living labs; and quantitative modelling and scenario analysis, in nine EU countries (Belgium, Croatia, France, Germany, Italy, Portugal, Spain, the Netherlands, and the United Kingdom).

Since the project's impacts depend on easy discovery, access and reusability of the research data, these will be available during and after the end of the project. Thus, the DMP describes which data can be shared, including access procedures, storage and long-term preservation. Where exceptions are necessary due to data protection issues, since some data cannot be anonymized and participants are entitled to confidentiality, the DMP states clearly which data will not be shared. In this regard, ethics-related aspects of data protection and research procedures have already been considered in this DMP and are described in more detail in the PROSEU Ethics Requirements documents (i.e. Deliverables 9.1 and 9.2).

This first version of the DMP includes the research datasets that PROSEU Consortium members expect to collect and generate during the first 15 months of the project (until May 2019). This DMP excludes the deliverables for the EC and other dissemination materials, which will be made publicly available through the EC website and the PROSEU project website. In line with the Grant Agreement, the DMP will be updated in deliverable D1.3 (month 16) and deliverable D1.5 (month 27), which is the final version.

The document follows the guidelines on FAIR Data Management for Horizon 2020 projects and is organised in six sections that cover the description of the data (Section 2), the FAIR principles for opening research data (Section 3), resources needed for making data FAIR (Section 4), data security (Section 5), and ethical aspects (Section 6).

# Data summary

The results obtained within the PROSEU project will rely on efficient data collection from _Renewable Energy Sources (RES) Prosumer Initiatives_ and relevant stakeholders involved in _prosumerism_, who will be requested to provide information about the economic, financial, legal, technological and cultural factors that drive or hinder the development and consolidation of RES _prosumerism_ in Europe.
Specifically, this includes factors that facilitate or impede collective energy-responsible behaviour and choices, including data on how prosumer initiatives deal with issues of participation, inclusiveness, gender, and transparency. This will allow us to determine which incentive structures will enable the mainstreaming of RES _prosumerism_ and, in so doing, safeguard citizen participation, inclusiveness and transparency in the Energy Union.

Specifically, PROSEU researchers will collect both quantitative and qualitative data that will be used to: describe the traits of collective RES prosumer initiatives (WP2); analyse European and national-level policies, regulations and governance frameworks (WP3); study and experiment with finance and business models (WP4); model technological solutions for _prosumers_ (WP5); and understand the incentive structures (WP6) and key recommendations and lessons learnt (WP7) for mainstreaming RES _prosumerism_ and the participation of citizens in the Energy Union. To accomplish this, the PROSEU team members will examine the phenomenon of _prosumerism_ in, at least, nine EU Member States (Belgium, Croatia, France, Germany, Italy, Portugal, Spain, the Netherlands, the United Kingdom), by collecting data via a self-administered (web-based) questionnaire (WP2); interviews with experts and focus groups (WP3 and WP4); technological databases (WP5); workshops (WP6); and, finally, through direct input from RES Prosumer collectives and other stakeholders (in the form of interviews or workshops, depending on the interventions conducted in the multiple Living Labs that will take place across Europe), while also drawing on previous research from other WPs (WP7).

Firstly, the project's WP2, WP3, WP4 and WP5 will start with a baseline review of existing scientific studies, including research outcomes and guidelines related to regulatory, social, economic, political and technological aspects of RES prosumer initiatives (considered 'secondary data', since they are based on an evaluation of primary sources). This implies that any research data from previous EU projects that can inform and contextualise the work of these work packages will be reused and thus referred back to published results.
Taken together, these data will allow us to gain a deeper understanding of the sociotechnical dimensions of RES _prosumerism_, and concretely:

* To map and characterise RES prosumer initiatives in Europe, and to develop a typology that accounts for their full diversity, achievements and ambitions, including socio-cultural and socio-economic factors (Objective 1);
* To examine the current regulatory frameworks and policy instruments relevant for RES prosumer initiatives across the EU to produce updated Member State factsheets and policy briefs on challenges, opportunities and incentives of regulations and policies for prosumers in nine EU Member States (Objective 2);
* To identify innovative business and financial models for RES prosumers (Objective 3);
* To develop local, national and EU technology scenarios for 2030 and 2050, and technology recommendations for RES prosumers under different geographical, climatic, and sociopolitical conditions (Objective 4);
* To propose a set of incentive structures and a roadmap for the mainstreaming of prosumers in the Energy Union (Objective 5);
* To develop new methodological tools (based on the co-creation and co-learning methods used in the living labs) to facilitate the mainstreaming of _prosumerism_ (Objectives 6 and 7);
* To create a Prosumers Community of Interest by bringing together relevant stakeholders (Objective 8).

Table 1 and Table 2 (Appendix 1 and Appendix 2) provide a description of the datasets that PROSEU partners expect to collect and generate during the first 15 months of the project (until May 2019), which are directly linked to the above PROSEU objectives. These tables may be modified by the addition, removal or renaming of the datasets included, and will be updated in the following versions of this DMP.

PROSEU Consortium members expect that the research data and outcomes generated through the project (e.g. deliverables, policy briefs, guidelines, etc.) will be useful for other researchers from different fields, not only in the social sciences and humanities but also in the STEM (science, technology, engineering and mathematics) sciences, as well as for current and future RES prosumer initiatives and their potential allies, such as the alternative finance sector, utility and grid operators, and representatives from governments at the local, regional, national and EU level. Thus, to facilitate the widespread reuse of the research data, and in this manner enable the reproducibility of results, PROSEU datasets will use widely accepted formats and standards. These datasets will be provided in text (plain text and/or comma-delimited) and/or in numeric formats. PROSEU will use the most common file extensions to save the data, such as .pdf, .docx, or .csv. Moreover, most of the data will be produced and made available in English.

# FAIR Data

Projects within EU Horizon 2020 are encouraged to provide open access to, and to reuse, digital research data, following the FAIR data principles; that is, all research data should be **Findable, Accessible, Interoperable and Reusable (FAIR)**. The PROSEU project, as a participant in the ORD pilot, will follow the FAIR principles by establishing a series of criteria to make data findable by other users (e.g. using metadata standards, adding keywords and DOIs, etc.); to address which data may be made accessible via an open repository or kept confidential according to ethical requirements; to foster the interoperability of the data by allowing data exchange and reuse (e.g. using standards or open source software); and to establish the licences of the data generated to permit the widest reuse possible.
## Making data findable, including provisions for metadata

Improving the ability of other researchers, policy makers, and stakeholders to find and reuse the PROSEU research data is vital to increase the impact of the project. To do so, metadata will accompany the datasets made available in public repositories in order to improve their discoverability and increase their usability. Thus, PROSEU will follow the **metadata standards** provided by the Data Documentation Initiative (DDI), in particular the DDI Codebook v. 2.5, a widely used international standard for describing data from the social, behavioural, and economic sciences (DDI Alliance, 2014). The use of these standards is recommended to make different data sources comparable and interoperable, and to increase data sharing and reuse (OpenAIRE2020, 2017).

In particular, PROSEU's metadata will provide a full description of each dataset, including at least the following information: dataset reference (file name), a description of the content, authors (i.e. the PROSEU partner/s that collected/processed the data), file version number, publication date, Digital Object Identifier (DOI), licence information, and any technical specification needed to visualise or access the data. Table 3 (Appendix 3) shows an example of the metadata associated with a specific dataset available in public repositories. Furthermore, all PROSEU documents will incorporate basic information and visual elements associated with the project, such as the PROSEU logo, the EU logo, or the GA number, among others, to make them identifiable.

PROSEU will use unique **Digital Object Identifiers (DOI)** for all files uploaded to public repositories. The DOIs of these files will be cited in any published article that uses PROSEU research data, to facilitate their discoverability and reuse.

Additionally, **file-naming conventions** will be adopted to organise the project's research data and make it easy to identify and use by the Consortium partners and by others. Consequently, PROSEU's file names will reflect the contents of the file and include enough information to uniquely identify the data file. In line with established practices for standardising file names, PROSEU will comply with the following general specifications for file naming:

* File names should be short but descriptive (<40 characters);
* Use alphanumeric characters and avoid special characters, such as "/ \ : * < > [ ] $ & ~ ! # ? { } ' ^ %
* Use underscores _ instead of periods or spaces;
* Standardise dates (according to ISO standards: YYYYMMDD);
* Use leading zeros for sequential numbering, e.g. v01.

Moreover, PROSEU file names should contain the following information (illustrated in the examples and the short sketch below):

* Project acronym (i.e. PROSEU) – compulsory for deliverables and openly shared data;
* Document name - content (e.g. Data Management Plan) – compulsory for all documents;
* Researcher/author initials – compulsory only for internal documents;
* Acronym of the institution leading the shared research – recommended for openly shared data;
* Year of study (e.g. 2018) – when necessary;
* File version (e.g. v01) – compulsory for internal documents;
* File type/extension (e.g. .odt) – compulsory for all documents.

Examples:

* For EC deliverables and dissemination materials: PROSEU_DataManagementPlan.odt;
* For public research data (final docs, openly shared): PROSEU_DataManagementPlan_FCID_v01.odt;
* For internal/working documents: 20180518_DataManagementPlan_EM_v01.odt.
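Purely as an illustration, the minimal Python sketch below encodes the conventions above; the `build_filename` helper and its parameters are hypothetical and not part of any PROSEU tooling.

```python
import re
from datetime import date

def build_filename(content, extension, version=None, initials=None, dated=False):
    """Compose a PROSEU-style file name from the conventions above (illustrative)."""
    # Internal working documents start with an ISO date (YYYYMMDD);
    # shared documents start with the project acronym.
    parts = [date.today().strftime("%Y%m%d") if dated else "PROSEU"]
    # Keep the content descriptor alphanumeric: no spaces, periods or special characters.
    parts.append(re.sub(r"[^A-Za-z0-9]", "", content))
    if initials:                        # researcher/author initials (internal documents)
        parts.append(initials)
    if version is not None:             # leading zeros for sequential numbering
        parts.append("v{:02d}".format(version))
    name = "_".join(parts) + "." + extension
    assert len(name) < 40, "file names should be short but descriptive (<40 characters)"
    return name

# build_filename("Data Management Plan", "odt", version=1, initials="EM", dated=True)
# -> "20180518_DataManagementPlan_EM_v01.odt" (for a run on 18 May 2018)
```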
Finally, to optimise possibilities for discoverability and reuse of PROSEU research data, all datasets and files stored and openly shared on repositories such as ZENODO will include **keywords** describing the content of each file. Some potential keywords are shown in Table 4 (Appendix 4).

## Making data openly accessible

In the context of the EC's ORD Pilot, the research outcomes (i.e. journal articles, policy briefs, reports, etc.) are published by default in Open Access. Furthermore, the public research data generated will be made openly accessible via the **online repository _ZENODO_**, as soon as the research is published. ZENODO is a free-of-charge repository developed by CERN within the EU OpenAIRE project to support Open Data management for EC-funded multi-disciplinary research. Data contained in ZENODO are stored in the CERN Data Centres, primarily in Geneva, with multiple replicas in a distributed file system ("About ZENODO", n.d.). Since identification is not needed to download open materials from this repository, the identity of the people (other researchers or stakeholders) accessing the data through ZENODO cannot be ascertained. However, ZENODO allows control of the access rights of the uploaded data (i.e. open, closed, restricted or embargoed information); allows the upload of different types of data (i.e. publications, images, datasets, presentations, software, posters, or videos, among others); and allows adding specific keywords to facilitate access to the information.

No highly specialised software tool is expected to be needed to access the PROSEU public research data shared on ZENODO. As previously described in Table 2, most of the datasets produced within the project will be available in **common text or numeric format files** (e.g. DOCX, ODT, PDF, or CSV). When possible, **open source code** and/or **open source software** (e.g. LibreOffice) will be used.

Although our ambition is to distribute PROSEU research data as openly as possible, **certain datasets cannot be shared, or need to be shared under restrictions**. For instance, datasets containing personal data of participants (such as names or e-mail addresses, which cannot be anonymized), non-final versions of working documents, datasets in which PROSEU members are not the primary authors, or datasets covered by a contractual obligation between two parties, will remain stored on the project's servers and will neither be openly distributed nor shared with third parties. Table 5 (Appendix 5) details PROSEU members' expectations regarding the availability of the project's datasets stored in the open repository ZENODO, as well as the expected access rights to the data.
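Since ZENODO exposes a public REST API for deposits, the upload workflow can be scripted. The following is a minimal sketch in Python using the `requests` library, assuming a valid personal access token; the file name and metadata values are placeholders rather than actual project records.

```python
import requests

ZENODO = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "<personal-access-token>"}  # placeholder

# 1. Create an empty deposition.
dep = requests.post(ZENODO, params=TOKEN, json={}).json()

# 2. Upload the data file into the deposition's file bucket.
fname = "PROSEU_Dataset_FCID_v01.csv"
with open(fname, "rb") as fp:
    requests.put(dep["links"]["bucket"] + "/" + fname, data=fp, params=TOKEN)

# 3. Attach descriptive metadata, including access rights and licence.
meta = {"metadata": {
    "title": "PROSEU example dataset",               # illustrative values only
    "upload_type": "dataset",
    "description": "Anonymized survey data (example).",
    "creators": [{"name": "PROSEU Consortium"}],
    "keywords": ["prosumers", "renewable energy"],
    "access_right": "open",                          # or "restricted" / "embargoed"
    "license": "cc-by",
}}
requests.put("{}/{}".format(ZENODO, dep["id"]), params=TOKEN, json=meta)

# 4. Publish: ZENODO then mints the DOI for the record.
requests.post("{}/{}/actions/publish".format(ZENODO, dep["id"]), params=TOKEN)
```

Publishing is the step that triggers ZENODO to mint the DOI, so the metadata, including the access right and licence, should be complete beforehand.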
## Making data interoperable

Interoperability of the data (i.e. allowing data exchange and re-use of the research data between researchers, institutions, organisations, countries, etc.) will be facilitated by following good practices and standards for research data. This includes, as previously stated, tagging PROSEU files with appropriate metadata based on the standards provided by the DDI Alliance for the social sciences; using self-explanatory file names that follow international file-naming conventions; adding keywords and DOIs; and using common file types and formats (e.g. PDF or CSV), as well as open source software (e.g. LibreOffice), whenever possible. Moreover, PROSEU will use the DDI standards for controlled vocabularies for social science research (DDI Alliance, 2017), which describe specific terms and rules for their use in relation to, for instance, data source types, modes of collection, or general data formats, among others. The use of all of these standards will facilitate the understanding, sharing, and reuse of our datasets.

## Increase data reuse (through clarifying licences)

Research data generated by the project will be made publicly available for reuse according to the type of data and journal embargo policies. In most cases, data will be distributed as soon as the research is published online. Therefore, the reuse of PROSEU research data by third parties (i.e. other researchers, policy-makers, other societal actors) is expected during the project's activities (e.g. through periodic PROSEU deliverables), but most importantly, after the end of the project. As previously mentioned, certain datasets cannot be shared, or need to be shared under restrictions: datasets containing personal data of participants, non-final versions of working documents, datasets in which PROSEU members are not the primary authors, or datasets covered by a contractual obligation between two parties, will remain stored on the project's servers and will neither be openly distributed nor shared with third parties.

Whenever possible, public research data will be licensed in a way that permits the widest possible reuse, for instance, the Creative Commons (CC) Attribution (BY) licence, version 4.0 (CC BY 4.0). Table 6 (Appendix 6) shows some of the datasets, and the types of licences, that PROSEU expects to make publicly available.

# Allocation of resources

The **expected costs** for making PROSEU research data findable, openly accessible, interoperable, and reusable (i.e. FAIR), while securing any personal data collected, are detailed in Table 7 (Appendix 7). These costs will be covered by the financial budget of the project and include Open Access (OA) publications, ICT services such as secure servers and Internet domains, and the development of code for a self-administered (web-based) questionnaire. Any other costs that may relate to data preservation and/or data security will be discussed among the Consortium members.

Once PROSEU data management is fully established, each partner will be responsible for following the ethical and technical requirements described in this DMP to keep the research data within the FAIR principles. Additionally, the PROSEU project coordinator will be responsible for data management decisions, for instance, deciding what data will be kept, for how long, and how and where data will be shared and preserved.

# Data security

Data security is of major importance in the PROSEU project. Although PROSEU will not deal with any sensitive data, **personal data of participants** will be collected and stored during the project and will be destroyed after the completion of the research.

The protection of personal data will be ensured through appropriate procedures for data collection, storage, protection, and destruction, in accordance with current EU data protection regulations and ethical principles. This means written material will be stored in secure locations (institutional servers and hard drives), digitally processed and stored data will be anonymized, and any publication derived from personal data will be presented in a way that makes it impossible to identify individuals. To this end, each participant will receive a code number. Only the person in charge of the study will have a list with names and codes, allowing participants to withdraw their data at any point during the study. Moreover, personal background-related data will be stored separately from research-relevant data.
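A minimal sketch of this code-number procedure might look as follows; the `pseudonymize` helper and its field names are hypothetical.

```python
import csv
import secrets

def pseudonymize(records, key_path, data_path):
    """Split participant records into an identification key file (held only by the
    person in charge of the study) and a research data file that carries nothing
    but code numbers. Field names are hypothetical."""
    with open(key_path, "w", newline="") as key_f, \
         open(data_path, "w", newline="") as data_f:
        key_w, data_w = csv.writer(key_f), csv.writer(data_f)
        key_w.writerow(["code", "name", "email"])        # identifying data only
        data_w.writerow(["code", "country", "answers"])  # research-relevant data only
        for rec in records:
            code = secrets.token_hex(4)                  # random participant code number
            key_w.writerow([code, rec["name"], rec["email"]])
            data_w.writerow([code, rec["country"], rec["answers"]])

# The key file stays on a secured institutional server and is destroyed once the
# research is completed; only the pseudonymized file circulates for analysis.
```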
Concerning **long-term conservation** and **secure storage** of public research data and dissemination materials (e.g. journal articles, reports, policy briefs, etc.), these will be ensured using the open repository ZENODO, which is expected to retain items for the lifetime of the repository. On the other hand, restricted research data (e.g. interview recordings, notes from focus groups and workshops, responses to the self-administered questionnaire, etc.) will be stored on institutional servers (located in Germany) and retained for a period of 3 years after the project ends. To access these data, the PROSEU project has created a _project management platform_ that facilitates the efficient management of the project's files and datasets. The PROSEU platform is only accessible to the Consortium partners, who use an individual username and password to access the system. Thus, no external access to restricted documents is possible. Through the establishment of user roles for each member, the project coordinator has set the level of permission to create, modify or delete the information stored on the system. Therefore, only specific members have the right to delete information contained in the system. Finally, **data recovery** will be ensured through periodic back-ups of PROSEU restricted research data to external hard drives, which will be kept secure by the corresponding WP and/or task leader/s.

# Ethical aspects

In agreement with the participatory basis of the project, which will require the collaboration of multiple stakeholders, the PROSEU Consortium has set out ethical and data protection frameworks that will guide the project's research practices. These are in line with the EU's key ethical principles and research codes of conduct, as well as the current regulations on data protection. Thus, ethical principles and research codes of conduct concerning privacy and personal data protection have been addressed.

All participants will be given an **Informed Consent Form** to be signed, upon voluntary agreement, prior to participating in any project activities. This form will contain, among other information, instructions on how they can ask for their data to be destroyed and/or removed from the project. Each Consortium partner conducting activities that include interaction with research participants/stakeholders is responsible for securing the signed Informed Consent Form and storing it in a secure location for possible future verification and use.
Additionally, participants will receive a **Participant Information Sheet**, which will include detailed information on how the information gathered during the project will be used and what will happen with the results of the research. Moreover, it will describe the procedures adopted to guarantee participants' privacy. Participants must be competent to understand the information they are given (which will be translated into the languages of every participating country) and should be fully aware of the implications of their consent. All participants will be told that they can withdraw from taking part in PROSEU at any time and ask for their data to be destroyed and/or removed.

Participants will be reassured that any **personal data** will remain confidential, will be stored on institutional servers, and will not be shared with third parties nor transferred between countries, in agreement with the General Data Protection Regulation (Regulation (EU) 2016/679), applicable from 25 May 2018. After the completion of the research, all personal data will be destroyed.

A detailed description of the ethical and data protection procedures, as well as the documents mentioned above (Informed Consent Form and Participant Information Sheet), can be found in PROSEU Deliverable 9.1 and Deliverable 9.2 (namely, Ethics Requirements 1 and 2).

# Other issues

The application of other national, institutional, departmental, or group procedures on data management, data sharing and/or data security is not expected at this stage.
0307_FABLABIA_853530.md
# FAIR data

## Making data findable, including provisions for metadata

FABLABIA participates in the Open Research Data Pilot, which requires a policy of FAIR data (findable, accessible, interoperable and re-usable research data). Data collected and created during the project implementation are for internal usage only. They are not research data; they are only survey data collected to obtain information from the users/clients of FabLabs and innovation agencies. No metadata will be included. The data will be used only for project purposes (mainly for the Design option paper), and the collected data themselves are of no value to outside users. Within the internal systems of the coordinator and project partner, the data will be findable according to internal standards and regulations. For the purposes of the project results, the data will be anonymized.

## Making data openly accessible

FABLABIA data will not be openly accessible. The data will be used for analysis to identify the recommendations and best practices that will be part of the project result (the Design option paper). As the estimated interest in the survey data outside the FabLab community is very limited, these data will be available for use outside the FABLABIA consortium only upon request made to the FABLABIA coordinator.

## Making data interoperable

The collected data are interoperable only for internal purposes; the vocabulary used is standard at a basic level. No research data are collected. The data will be stored in a format readable by commonly used data management tools or office software. The data sets are small and specific, so direct automatic interoperability of the data with other external data sets is not sought.

## Increase data re-use (through clarifying licences)

The data are planned to be used only for the purposes of this project and its results. Data might be used internally by project partners for their further development related to the project objectives and aims. No licences are needed for re-use of the data, as long as the data source is acknowledged.

# Allocation of resources

The amount of data produced by FABLABIA is rather small, and therefore no resources are specifically allocated for making the data FAIR. The resources for data collection are allocated in the work packages and tasks collecting the data. The persons responsible for data management are Tomas Mejzlik (project coordinator) and Eliska Matejova (administrative project manager), both from JIC, the project coordinator. No additional resources are foreseen.

# Data security

Data are saved in an internal storage of the project coordinator (with the ability to set access restrictions). The storage is accessible only to the coordinator's employees, and access to the data collected during the project implementation can only be granted to selected employees. The storage is backed up so that the data can be recovered if necessary.

In case of a data breach, the person responsible for the breached data shall notify the coordinator of the FABLABIA project as soon as possible, and at most within 72 hours. The individuals whose personal data were breached shall also be notified without undue delay. It is worth noting that, due to the nature of the personal data collected during the FABLABIA project, the damage that could be caused by a data breach is expected to be limited.

## Right to erasure

If a person wishes his/her personal data to be erased, that can and shall be done. It is easy to do from the contact lists controlled by the FABLABIA coordinator or the WP leaders conducting surveys.
If a person wants his/her personal data to be removed from a survey, the non-personal data shall remain in the analysis of the survey.

## Privacy by design and by default

Personal data collected during FABLABIA will be used only by project partners, including beneficiaries, and only for purposes needed for the implementation of FABLABIA. Even within the project, if someone in the project consortium asks for personal data, the person holding the data should consider whether those data are needed for the implementation of the project. If personal data are provided, the data shall not be distributed further within or outside the project.

# Ethical aspects

Collected data will be processed and stored according to the applicable legislation and the deliverable D.1.1 – Data protection policy and compliance with GDPR.

# Other issues

No other special procedures for data management are used.

**SUMMARY TABLE 1**

**FAIR Data Management at a glance: issues to cover in your Horizon 2020 DMP**

This table provides a summary of the Data Management Plan (DMP) issues to be addressed, as outlined above.

<table>
<tr><th>**DMP component**</th><th>**Issues to be addressed**</th></tr>
<tr><td>**1\. Data summary**</td><td>**Purpose of the data collection/generation:** survey, benchmarking. **Relation to the objectives of the project:** mapping good practices and success stories of FabLabs supporting SMEs around Europe and worldwide; various FabLab business models will be developed. **Types and formats of data generated/collected:** documents, project deliverables, contracts, raw and processed data collected through surveys, others. **Origin of the data:** on-line survey and semi-structured interviews, data from project partners. **Data utility (to whom will it be useful):** project partners, FabLabs and innovation agencies.</td></tr>
<tr><td>**2\. FAIR data** – 2.1. Making data findable, including provisions for metadata</td><td>Data collected and created during the project implementation are **for internal usage only**. **No metadata included.** Data will be findable according to internal standards and regulations. Data will be anonymized.</td></tr>
<tr><td>2.2. Making data openly accessible</td><td>**No openly accessible data.** Data will be used for the project results (Design option paper).</td></tr>
<tr><td>2.3. Making data interoperable</td><td>Only for internal purposes. The vocabulary is standard at a basic level. Data will be stored using commonly used data management tools or office software.</td></tr>
<tr><td>2.4. Increase data re-use (through clarifying licences)</td><td>Data are planned to be used only for the purposes of this project and its results.</td></tr>
<tr><td>**3\. Allocation of resources**</td><td>No additional resources are allocated. Responsible persons are Tomas Mejzlik and Eliska Matejova (both from JIC, the project coordinator), as part of their workload for the project.</td></tr>
<tr><td>**4\. Data security**</td><td>Data are saved in an internal storage of the project coordinator (with the ability to set access restrictions). The storage is regularly backed up.</td></tr>
<tr><td>**5\. Ethical aspects**</td><td>Applicable legislation. Deliverable D.1.1 – Data protection policy and compliance with GDPR.</td></tr>
<tr><td>**6\. Other**</td><td>No other special procedures for data management are used.</td></tr>
</table>

<table>
<tr><th colspan="3">**HISTORY OF CHANGES**</th></tr>
<tr><td>**Version**</td><td>**Publication date**</td><td>**Change**</td></tr>
<tr><td>1.0</td><td>30.9.2019</td><td>Initial version</td></tr>
</table>
0308_ATHOR_764987.md
Taking into account that the first young researchers joined the project only about four months ago, we do not expect any considerable output in terms of their data provision plan at the current stage. Therefore, precise information related to the data produced by ESRs will be given in further versions of the DMP. The expected size cannot be predicted at this stage either, but it is reasonable to assume that it will reach the tens-of-gigabytes range. The main types of data to be generated in ATHOR can be approximately divided into four different groups:

* **Project rules and follow-up data**: Grant and Consortium Agreements, Gantt Chart and Actions Plan, administrative and financial data, templates, surveys, management files
* **Research data**: this covers the data collected within the scope of the project, including analysed data. In a research context, examples of such data include graphs and images, statistical data, parameters, experimental conditions, experimental observations, results of measurements, etc. The focus will be on the availability of this research data in digital form.
* **Data related to training activities**: Personal Career Development Plan (PCDP), Training & Visit Plan, lecture notes, Powerpoint presentations, video records, e-learning tools, MOOC.
* **Data related to dissemination activities**: publications, presentations/posters, seminars/newsletters, dedicated short videos

These types of data have different confidentiality levels, which can be schematically represented as in Figure 1 (inspired by 5G! Pagoda D 1.2 - Open Data Management Plan, p. 8). In this way, a dominant part of the communication data and some of the research data are rendered public, while the project data that ensure the project's functioning are principally kept confidential (on Ucloud). The data generated by ESRs strongly depend on the individual doctoral projects and on the tools and research methods used within these projects. Whenever possible, datasets will be made available online using the following formats:

* Text content: Acrobat PDF/A (.pdf); Comma-Separated Values (.csv); Microsoft Office or Open Office formats (.docx, .xls, .pptx, .odt, .ods, .odp); Plain Text US-ASCII/UTF-8 (.txt); XML (.xml)
* Graphic content (.jpg, .png, .svg, .tif, .tiff)
* Audio content (.aif, .aiff, .wav)
* Video content (.avi, .mp4)
* Modeling data (.mat)

The project will adopt the principle of using commonly used data formats, for reasons of compatibility, efficiency and access.

**Figure 1. Distribution of ATHOR project data in the confidentiality grid**

# FAIR data

Findable, Accessible, Interoperable and Reusable

## Data storage
The overall data produced and/or collected by each consortium member organisation has to be carefully stored and managed by that organisation. In a preliminary stage of production (and/or collection), local storage by the authors is not excluded. When close to the final version, all produced data have to be carefully stored by the authors in the central repository (_https://ucloud.unilim.fr_) dedicated to the ATHOR project by the coordinating University of Limoges. All local and central repositories are secured using the latest security protocols. Access to the central repository is regulated by the project coordinator and project manager. It is provided to the project consortium and to other linked parties upon request from a project team. The Ucloud platform is hosted on UNILIM servers; regular file backups are ensured by the local IT services, and additional archiving is made by designated project members on hard-drive supports. The main Ucloud project repository is structured as presented in Figure 2.

Ucloud, being the central "data bank", feeds other platforms linked to the project. For instance, as mentioned in Deliverable 7.2, ATHOR Principal Investigators, led by the RWTH academic pole, are currently establishing an e-learning programme "Eleonor" within the Modular Object-Oriented Dynamic Learning Environment (Moodle), which is planned to be hosted on a UNILIM server. This Moodle platform will store educational material on the subject of the ATHOR project in written and visual form. It will also have two different access modes: private and public. The confidentiality status of each document deposited on the platform will be defined by the IP owners of the document.

Within project data management, all participants attempt to follow best practices for data generation, storage and sharing, i.e. document changelogs, unified naming and an appropriate repository structure are kept as clear as possible. Documents are preferably shared within the consortium by indicating their location in the database. To facilitate document evaluation and review, all deliverables and official documents are created in agreement with established templates for the main MS Office formats. Each Work Package or task leader is responsible for the timely preparation of the corresponding deliverables and required materials, while the project coordinator assumes the responsibility for management activities and project administration.

**Figure 2. Directory tree of the Ucloud repository of UNILIM:**

* ETN-ATHOR
  * 01 – Proposal-Valuation-GA-CA
  * 02 – Potential Additional Funds
  * 03 – Team
  * 04 – Gantt Chart-Actions Plan
  * 05 – Meetings
  * 06 – Supervisory Board (SB)
  * 07 – Finance Committee (FC)
  * 08 – Industry Advisory Board (IAB)
  * 09 – Recruit. Skill Progress. Committee (RSPC)
  * 10 – ESR Council (ESRC)
  * 11 – Training & Knowledge Transfer Committee (TKTC)
  * 12 – State of the Art
  * 13 – WP1 - Improvement of measurements
  * 14 – WP2 - Advanced characterization
  * 15 – WP3 - Innovative modelling
  * 16 – WP4 - Advanced measurements
  * 17 – WP5 - Training, mobility
  * 18 – WP6 - Knowledge Dissemination
  * 19 – WP7 - Management Activities
  * 20 – ESRs working space
  * 21 – Dissemination
  * 22 – Scientific Publications
  * 23 – Deliverables and Milestones
  * 24 – Image gallery
  * 25 – Public

When a collection of data is ready to be published in the public space, the final version of these data, currently stored on Ucloud, is uploaded to the most pertinent open-access public platform.
Depending on the type of data, this could be:

* **For project description**: ATHOR website (_www.etn-athor.eu_), the YouTube channel dedicated to the ATHOR project, Refractories WorldForum (_www.refractories-worldforum.com_)
* **For research data**: Zenodo platform (_https://zenodo.org_)
* **For data related to training activities**: ATHOR website (_www.etn-athor.eu_), the YouTube channel dedicated to the ATHOR project, the Moodle platform dedicated to the ATHOR project
* **For data related to dissemination activities**: ATHOR website (_www.etn-athor.eu_), the YouTube channel dedicated to the ATHOR project, Refractories WorldForum (_www.refractories-worldforum.com_), CNRS HAL platform (_https://hal.archives-ouvertes.fr_), Zenodo platform (_https://zenodo.org_)

## Making data findable, including provisions for metadata

In order to keep data findable, it is necessary to provide their metadata. Metadata is a systematic method for describing such resources and thereby improving access to them. Author, date created, date modified and file size are examples of very basic document metadata. Considering the strongly interdisciplinary nature of the project, ATHOR's consortium favours the adoption of a broad and domain-agnostic metadata standard that the EU recommends to its member states for recording information about research activity: the Common European Research Information Format (CERIF) standard, described at _http://www.eurocris.org/cerif/main-featurescerif_. An additional advantage of a CERIF-inspired standard is that ATHOR's DMP-managing institution (University of Limoges) currently uses a research information system developed by Elsevier that implements the CERIF standard (PURE).

For publication data, unique identifiers such as Digital Object Identifiers (DOI) will be used. To the authors' knowledge, this is the most common way of identifying data. Repositories such as Zenodo or OpenAIRE (Open Access Infrastructure for Research in Europe), both or one of which are planned to be used for data publishing, already provide persistent identifiers for data sets.

By one of the upcoming reporting periods, as soon as a sufficient amount of data has been produced within the project, the consortium leaders will consider distributing a survey template to all ESRs to collect information on (a sketch of such a record is given at the end of this section):

* Data set reference and name;
* Data set description;
* Data formats;
* Difficulties/risks faced during data collection and analysis;
* Standards and metadata:
  * How are the data created?
  * What standards or methodologies did you use?
  * How did you structure and name your folders and files?
  * How did you track the changelog?
* Data sharing;
* Archiving (storage and backup);
* Ethical issues;
* Other aspects (share of responsibilities within the team related to the data lifecycle).

Currently, all data are stored on the Ucloud platform, with a clear indication of the data subject, authors and change log history where necessary.
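To fix ideas, a record gathered through such a survey could be stored as a simple structured entry like the Python sketch below; all field names and values are hypothetical and would need to be mapped onto CERIF entities in practice.

```python
# Hypothetical dataset record gathered via the ESR survey; the keys mirror the
# survey items above and could later be mapped onto CERIF entities and attributes.
dataset_record = {
    "reference": "ATHOR_WP2_ESR05_CreepTests_v01",   # placeholder name
    "description": "Uniaxial creep measurements on refractory samples (example).",
    "formats": [".csv", ".xlsx", ".tiff"],
    "standards_and_metadata": "CERIF-inspired; DOI assigned upon publication",
    "sharing": "Zenodo (public) once the authorization letter is signed",
    "archiving": "Ucloud central repository with hard-drive backup",
    "ethical_issues": "none",
    "changelog": ["2018-05-02 created", "2018-06-15 revised"],
}
```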
## Making data openly accessible

As mentioned in the previous section, after receiving the authorization of all concerned parties, it is planned to deposit the collected data in well-established repositories like _OpenAIRE_ or _Zenodo_, which allow researchers to deposit both publications and data while providing tools to link them. It is expected that data related to social media, publicity, designated courses, open access publications, survey results and public deliverables will be made openly available by default.

It is agreed within the consortium that information that has to be kept confidential within ATHOR will be marked with a special digital stamp mentioning "keep information private within ATHOR". Before any release of information, the authors of the document in question have to sign an "Authorization letter" clearly indicating their name, the date, their entity and the title of the document. A template of such an authorization letter is available on Ucloud: _https://ucloud.unilim.fr/public/authorization-letter-form_. This procedure was considered compulsory to avoid IP conflicts within the consortium and violations of the rules of good scientific practice and of the protection of personal data.

In some cases, in order to avoid the multiplication of desynchronized versions of data related to the same action and to moderate the use of virtual storage space, the consortium may preserve and make public only the metadata, while removing the raw data themselves. The virtual address of the main and unique dataset (the Ucloud platform) then needs to be provided alongside the metadata. In this way, the collected data remain findable and accessible.

Since the H2020 requirement for Open Access publishing is fully embraced by the ATHOR project, the project will ensure both "green" (in addition to publication in subscription journals, a copy of an article is deposited in an institutional repository such as Research Repository UCD) and "gold" (publications available directly from the publisher after paying author's fees, envisioned in the project's budget) open-access publishing. While ensuring internal data storage and backup, the project undertakes to publish the public results through the following channels:

* The "Open data" section of the project website: _www.etn-athor.eu_.
* The Zenodo (_https://zenodo.org_) central repository recommended by the Horizon 2020 online manual, where public deliverables and publications will be uploaded and connected to the OpenAIRE platform. The advantage of Zenodo is that it is "open in every sense"; hence, there is no need for any kind of arrangement with this repository, nor for documentation or a data access committee, to access the uploaded data. Since no substantial scientific data have been produced within the ATHOR project yet, these data repositories are currently empty.
* Diffusion or publication of the appropriate type of data via ATHOR social channels, i.e. Facebook, LinkedIn, YouTube, Twitter.
* National portals for data publishing, for instance _http://theses.fr/_ in France for depositing doctoral thesis manuscripts.

## Making data interoperable

As mentioned in section 1, "Data Summary", in order to comply with interoperability and re-usability requirements and to facilitate exchange between researchers and institutions, best practices for file formats will be used in the ATHOR project. When possible, data will be made available in formats readable with free-of-charge software (for example, Open Office formats for text documents). The depositors will also strive to use a standard vocabulary for all data types present, to allow interdisciplinary interoperability.

## Increase data re-use (through clarifying licences)

It is possible to license a produced dataset. To do so, it will be necessary to attach a Creative Commons licence, according to the following guidelines: _https://creativecommons.org/choose/_ or _http://ufal.github.io/public-license-selector/_, by integrating the appropriate abbreviation into the shared file.
Since no research data have been produced to date, the specific question of their reusability by third parties, or of the usability period, is not fully developed in the current version of the DMP document.

# Allocation of resources

Generally, it is hard to predict the cost of data management activities, as many of them are an integral part of standard research activities and data analysis. Ideally, it is necessary to estimate the time or cost needed for activities related to data collection, data entry and transcription, data validation and documentation, and the cost of preparing data for archiving and re-use. Those resources that involve time and effort, i.e. search costs, maintenance of technical infrastructure, individual preparation effort needed to use the infrastructure, etc., are so-called non-monetary costs. Since the ESRs and the management team are the main producers of datasets in the project, all these costs are related to them. The ATHOR consortium expects that the monetary costs for FAIR data will be minor and will mainly relate to "gold" publishing of articles, maintenance of the hosting university's servers for Ucloud and Moodle, and the engagement of an external workforce for producing multimedia dissemination material.

Regarding the question of long-term data preservation, no specific arrangements have been made in the consortium yet. However, with a great degree of confidence, it can be stated that the project coordinator, with the help of local UNILIM resources, will play the major role in this task.

# Data security

The security of the central Ucloud (_https://ucloud.unilim.fr/_) repository and of all other partner repositories is provided and guaranteed by the respective centres for information processing of these universities. Access to the Ucloud database is managed by the coordinator and the project manager. It is provided to project members and other parties upon request from a project team. This space is password protected, and the security of this platform is guaranteed by the Informatics Systems Direction (DSI) of Limoges University. Regular backup of all data stored on UNILIM servers is ensured. The backup of Ucloud data is performed by the Limoges University server, and the history of the content can be traced back up to three months. The selected data deposited in the Ucloud space will also remain available for 3 years after the end of the project. In addition, if the project uses the Zenodo repository for data sharing, its safety is guaranteed according to the product description (see _https://zenodo.org/features_).

# Ethical aspects

According to Annex 1 of Grant Agreement 764987 - Part B – p. 32, the ATHOR Consortium has taken into account all requested ethics issues. For example, the most common ethical issues include:

* the involvement of children, patients, vulnerable populations,
* the use of human embryonic stem cells,
* privacy and data protection issues,
* research on animals and non-human primates.

This also includes the avoidance of any breach of research integrity, which means, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct. More precisely, all the activities carried out under the ATHOR project comply with ethical principles and relevant national, EU and international legislation, for example the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. The tasks of ATHOR only concern basic research activities, and the project does not involve humans, animals or cells.
Since the main domain of ATHOR project activity is materials science, with a focus on refractory materials, the risk of ethics issues arising during the project is extremely limited. In any case, within the ATHOR DoA Part A, work package 8 is devoted to ethics issues and sets out the 'ethics requirements' that the ATHOR project must comply with. One deliverable will be provided: D8.1 NEC - Requirement No. 1. In the framework of D8.1, all beneficiaries and partner organisations must confirm that the ethical standards and guidelines of Horizon 2020 will be rigorously applied, regardless of the country in which the research is carried out.

ATHOR's partners are not planning to use any harmful material, or any process likely to emit harmful materials. They do not use elements that may cause harm to the environment, to animals or to plants. In any case, all the partners will follow their internal protocols to treat any material according to national law and EU legislation. Accordingly, all chemical waste is collected and processed by a central university facility at the universities involved in the ATHOR project. All waste is recycled or appropriately deposited. Moreover, their respective research does not deal with endangered fauna and/or flora or protected areas. No tests on humans or animals are planned. ATHOR's partners will not use nanomaterials in their research and will do no harm to the environment.

**This work is supported by the funding scheme of the European Commission, Marie Skłodowska-Curie Actions Innovative Training Networks, in the frame of the project ATHOR - Advanced THermomechanical multiscale modelling of Refractory linings, 764987 Grant.**
0309_EVO-NANO_800983.md
# 1\. DATA SUMMARY

**What is the purpose of the data collection/generation and its relation to the objectives of EVO-NANO?**

The long-term vision of the EVO-NANO project is to create an integrated platform for the artificial evolution and validation of novel drug delivery systems (DDS) for cancer treatment using nanoparticles (NPs). Within EVO-NANO we defined four specific objectives:

**Objective 1**: To develop a new class of open-ended evolutionary algorithms to creatively assess different cancer scenarios and autonomously engineer effective NP-based solutions to them in a novel way.

**Objective 2:** To implement a computational platform for the autonomous generation of new strategies for targeting cancer stem cell (CSC) surface receptors using functionalized NPs. In its final form, our model will simulate all the main aspects of NP dynamics: their travel via the bloodstream, extravasation, tumour penetration and endocytosis.

**Objective 3:** To streamline the synthesis of the functionalized NPs suggested by the computational platform.

**Objective 4:** To develop an integrated platform for validating the efficacy of the artificially evolved nanoparticle designs. It will be composed of (i) tumour microenvironments on microfluidic chips that will mimic major physiological barriers to NP tumour delivery and (ii) in vivo pre-clinical tests.

Reaching any of these objectives will require the generation and/or collection of specific data:

- Collection of available data on physico-chemical NP properties, nonspecific chemical interactions of functionalized NPs in the bloodstream, extravasation of NPs, interaction of NPs with tumour cells, and behaviour of NPs within tumour cells (Objective 1);
- Source code data (Objectives 1 & 2);
- Simulation output files (Objective 2);
- Analysis of simulation output files (Objective 2);
- Characterization of synthesized NPs (Objective 3);
- Results of _in vitro_ and _in vivo_ tests (Objective 4).

In summary, data collection/generation follows different procedures for each of the proposed objectives, and within EVO-NANO we will create at least 7 separate datasets. Since the reusability of datasets between different research groups is essential for the success of the project, the development of this DMP is equally important.

**What types and formats of data will the project generate/collect?**

* Data from computer models, represented as text, binary or graphics files and videos.
* Data from clinical assessment of the treatment.
* Measurements in biological matrices/tissues.
* Molecular and chemical data on the nanoparticles developed, including core and coating.
* Exposure data: internal biomarkers of exposure.

The Open Research Data Pilot applies to two types of data:

1. the data, including associated metadata, needed to validate the results presented in scientific publications, as soon as possible;
2. other data, including associated metadata, as specified and within the deadlines laid down in the data management plan, that is, according to the individual judgement of each project group.

According to the "Guidelines on Data Management in Horizon 2020" (2015), the DMP describes the handling of numerical datasets processed or collected during the EVO-NANO lifetime. The DMP includes clear descriptions of, and the rationale for, the access regimes that are foreseen for the collected data sets.
Thus, the DMP explicitly leaves open the handling, use and curation of products like tools, software and written documents, which could also be subsumed under the generic term "data"; we restrict the focus of our DMP to numerical data products such as produced model data or observation data.

**Formats of the data:**

* Data and metadata will be requested, stored and transferred (across partners and in EVO-NANO) in a comma-separated values (CSV) format.
* To facilitate the data exchange, MS Excel compatible files, including comma-separated and .xls(x) formats, will also be accepted.
* For statistical purposes, other formats include .sas7bdat (SAS), .RData (R), .SAV (SPSS), .mat (Matlab).
* Where applicable, data formats may be migrated when new technologies become available and have proved robust enough to ensure digital continuity and continued availability of data.

We will follow the following guidelines:

* Guidelines on Data Management in Horizon 2020, Version 2.0, 30 October 2015: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf_
* Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020, Version 2.0, 30 October 2015: _http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-pilot-guide_en.pdf_
* Webpage of the European Commission regarding Open Access: _http://ec.europa.eu/research/science-society/open_access_

**Will you re-use any existing data and how?**

To develop the NP simulations we will re-use available published data on:

- physico-chemical NP properties,
- nonspecific chemical interactions of functionalized NPs in the bloodstream,
- extravasation of NPs,
- interaction of NPs with tumour cells,
- behaviour of NPs within tumour cells,

and use them to define model boundary conditions.

**What is the origin of the data?**

- Written source code;
- Published scientific articles;
- Outputs of _in silico_ experiments;
- Outputs of tools for mathematical analysis;
- Experimental _in vivo_ and _in vitro_ tests;
- Characterization results of synthesized NPs.

**What is the expected size of the data?**

* Numerical data related to optimisation: c. 250 GB over the project's lifetime;
* Full history of the 3D models of the nanoparticle swarm: c. 5 TB;
* Videos of the computer models and animations of results: c. 10 TB.

The above are estimates, to be re-evaluated during the course of the project. The expected size depends on the extent and the nature of the data that are made available.

**To whom might it be useful ('data utility')?**

* The EVO-NANO consortium;
* European Commission services and European Agencies;
* EU national bodies;
* The general public, including the broader scientific community;
* Manufacturers of nanoparticle-based treatments;
* Clinicians using the nanoparticles.

# 2\. FAIR DATA

## 2.1. MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA

**Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?**

Yes.
All EVO-NANO generated data are stored in the Zenodo repository (_https://zenodo.org/_). Zenodo closely follows the FAIR principles:

- Each uploaded dataset gets a DOI;
- Metadata for individual records are retrievable by their identifier using a standardized communications protocol;
- Metadata are publicly accessible and licensed under the public domain; no authorization is ever necessary to retrieve them;
- Metadata use a formal, accessible, shared, and broadly applicable language for knowledge representation.

All data will be discoverable via metadata provision. All data will be identifiable and referable via a standard identification mechanism (DOI). Unique naming conventions are adopted. The data will be searchable by keywords. Clear versioning will be in place.

**What naming conventions do you follow?**

To be able to clearly distinguish and identify data sets, each data set is assigned a unique name. To construct the data set names, we use the following procedure (illustrated in the sketch at the end of this section):

_FieldIdentifier.CountryCode.PartnerName.DatasetName_, where

1. _FieldIdentifier_ defines the subfield within which the data are produced;
2. The _CountryCode_ part represents the country associated with the dataset, using ISO Alpha-3 country codes:
   1. FIN for Finland
   2. POL for Poland
   3. SRB for Serbia
   4. ESP for Spain
   5. GBR for the United Kingdom
3. The _PartnerName_ part represents the name of the organization associated with the dataset:
   1. UNSPF for Univerzitet u Novom Sadu, Poljoprivredni fakultet Novi Sad
   2. UNIVBRIS for University of Bristol
   3. UWEBRISTOL for University of the West of England
   4. AAU for Abo Akademi
   5. IMDEANANO for Fundacion IMDEA Nanociencia
   6. PCS for Prochimia Surfaces SP. ZO.O.
   7. VHIR for Fundacio Hospital Universitari Vall D'Hebron – Institut de Recerca
4. The _DatasetName_ represents the full name of the dataset.

**Will search keywords be provided that optimize possibilities for re-use?**

Yes. The dataset information reported in the metadata record will be published in EVO-NANO, where specific filters based on the metadata elements will allow the search to be refined across datasets (e.g. searching for datasets by chemical or chemical group, by temporal or spatial coverage of the data, by keywords, etc.).

**Do you provide clear version numbers?**

Yes. Version management of the data, the metadata template and, in general, the files stored in the Repository will be applied at two levels:

1. Via the naming convention and the use of the date as a suffix, indicating the last version of the file uploaded to the Repository;
2. As a capability of the Zenodo Repository set up for the project, since the solution supports a simple version control system for the uploaded files.

**What metadata will be created?**

We will adopt a metadata scheme used to describe chemical monitoring data collections, e.g.: chemical occurrence data collected as a result of legal obligations on an ad-hoc or regular basis for reporting/monitoring at European or national levels; data generated as a result of targeted research on the presence of known or unknown chemical substances in specific media in a European country/region. The metadata are compliant with two European metadata standards, namely:

1. The INSPIRE metadata elements for spatial data sets and services (see these elements in the INSPIRE Metadata Regulation: http://data.europa.eu/eli/reg/2008/1205/oj#d1e600-14-1);
2. The "DCAT application profile for European data portals" (DCAT-AP), developed in the framework of the EU ISA Programme. The European Data Portal is implementing the DCAT-AP as the common vocabulary for harmonising descriptions of datasets harvested from several data portals of 34 countries. The DCAT-AP specification is available at: https://joinup.ec.europa.eu/asset/dcat_application_profile/
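As an illustration, the naming procedure can be expressed as a small helper function; the sketch below is hypothetical, with the country mapping inferred from the partner and country lists above.

```python
# Illustrative helper composing data set names from the convention above; the
# country mapping is inferred from the partner and country lists in this section.
COUNTRY = {
    "UNSPF": "SRB", "UNIVBRIS": "GBR", "UWEBRISTOL": "GBR", "AAU": "FIN",
    "IMDEANANO": "ESP", "PCS": "POL", "VHIR": "ESP",
}

def dataset_name(field_identifier, partner_name, dataset):
    """FieldIdentifier.CountryCode.PartnerName.DatasetName"""
    return ".".join([field_identifier, COUNTRY[partner_name], partner_name, dataset])

# dataset_name("NPSimulation", "UNIVBRIS", "ExtravasationRuns")
# -> "NPSimulation.GBR.UNIVBRIS.ExtravasationRuns"
```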
The "DCAT application profile for European data portals" (DCAT-AP), developed in the framework of the EU ISA Programme. The European Data Portal is implementing the DCAT-AP as the common vocabulary for harmonising descriptions of datasets harvested from several data portals of 34 countries. The DCAT-AP specification is available at: https://joinup.ec.europa.eu/asset/dcat_application_profile/ ### 2.2. MAKING DATA OPENLY ACCESSIBLE **Which data produced and/or used in the project will be made openly available as the default?** When no embargo period applies and a data package related to a case study has been marked as public, it will be made openly available. Only data gathered by partners outside of the project work plan and protected by IPR, or inside the work plan but containing confidential information (e.g. patent application) will be kept closed until those results are necessary to protect, in accordance to Articles 27, 29, 36, 37 and 39 of the EVO-NANO Grant Agreement number 800983\. These principles will apply to the following inclusive but not an exhaustive list of sets of data produced by the EVONANO: * Material science data on nanoparticles * Results of computer modelling * Optimisation analysis and results * Results of experimental laboratory trials * Results of clinical studies **How will the data be made accessible (e.g. by deposition in a repository)?** All generated datasets within EVO-NANO will be uploaded to Zenodo repository ( _https://zenodo.org/_ ).The data sharing should occur in a timely fashion.This means that the data resulted from the research conducted in the project should become available close to the project results themselves. Furthermore, it is reasonable to expect that the data will be released in waves as they become available or as main findings from waves of the data are published. **What methods or software tools are needed to access the data?** Since Zenodo stores data as publicly accessible, the only requirement is internet access. With regards to open software, all the data needed to create and maintain the marketplace is being made openly accessible through the GitHub repository, along with the corresponding technical documentation. **Is documentation about the software needed to access the data included?** No documentation is required. Online help provided with existing browser is sufficient. **Is it possible to include the relevant software (e.g. in open source code)?** Software will be shared via GitHub, which is directly linked with Zenodo. **Where will the data and associated metadata, documentation and code be deposited?** _Preference should be given to certified repositories which support open access where possible._ The consortium agreed to deposit the data generated by the project in Zenodo, publications in arXiv, and software in GitHub unless for a specific project there is a subject specific repository that is considered more relevant. **Have you explored appropriate arrangements with the identified repository?** Yes, the arrangement are tested by the Partners in their other projects. **If there are restrictions on use, how will access be provided?** There are not restrictions to us. To access data no registration at arXiv, Zenodo or GitHub is required. **Is there a need for a data access committee?** There is no need for a data access committee because sharing of data is agreed straightforwardly. **Are there well described conditions for access (i.e. 
Potential users will find out about the data through publications and the website. Data will be made available on publication of the associated paper and will be made accessible on request, under conditions agreed on a case-by-case basis, and after agreement of the project consortium.

**How will the identity of the person accessing the data be ascertained?**

The identity of the person accessing the data will not be ascertained, because access is anonymous.

## 2.3. MAKING DATA INTEROPERABLE

**Are the data produced in the project interoperable, that is, allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?**

All data produced will have transparent formats: publications in PDF, computer code in Python/C++/Processing/Java, results of numerical simulations in CSV, animations and videos in AVI/MP4.

**What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?**

Other types of data have been registered following internal codifications, clearly specified within the file.

**In case it is unavoidable that you use uncommon or generate project-specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?**

Not applicable.

## 2.4. INCREASE DATA RE-USE (THROUGH CLARIFYING LICENCES)

**How will the data be licensed to permit the widest re-use possible?**

The deliverables associated with the dataset are licensed under an All Rights Reserved licence, as they are working papers not intended to be re-used. Nevertheless, the database should be shared as a possibly reusable dataset. For this reason, when deposited in the repository, an Attribution-NonCommercial licence (CC BY-NC) will be requested. The data is currently available for re-use from the project website and will also be findable and reusable through the final depositing repository (the institutional one or Zenodo) and through OpenAIRE, at the latest by the end of the project.

**When will the data be made available for re-use?**

_If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible._

The data will remain re-usable after the end of the project by anyone interested in it, with no access or time restrictions.

**Are the data produced and/or used in the project useable by third parties, in particular after the end of the project?**

_If the re-use of some data is restricted, explain why._

Each archived data set will have its own permanent repository ID and will be easily accessible. We expect most of the data generated to be made available without restrictions; only data sets subject to IPR and confidentiality issues will be restricted. Where this is the case, agreements will be made for the individual data sets. Requests for the use of the data by external parties will be approved by the project consortium.

**How long is it intended that the data remains re-usable?**

Data and metadata will be retained for the lifetime of the Zenodo repository. This is currently the lifetime of the host laboratory, CERN, which has an experimental programme defined for at least the next 20 years.
**Are data quality assurance processes described?** Data quality is ensured by different measures. These include validation of the sample, replication, comparison with the results of similar studies, and control of systematic distortion.

# 3. ALLOCATION OF RESOURCES

**What are the costs for making data FAIR in your project?** Exact cost estimates will be established and adjusted dynamically during the project's lifetime.

**How will these be covered?** _Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions)._ The costs for depositing the dataset within the project, and the subsequent resources required to make the dataset publicly available, have been included within specific WPs of the project.

**Who will be responsible for data management in your project?** The project coordinator has the ultimate responsibility for data management in the project and thus for the management of the Marketplace platform.

**Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?** Since the data are shared via public repositories, preservation beyond the lifetime of the project does not involve any costs.

# 4. DATA SECURITY

**What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?** Due to the data volume, the partners also hold copies of their own processed data in addition to the Zenodo deposit, effectively acting as a second distributed database and additional backup. Locally, within each partner, all data will be stored on backup external hard disks.

**Is the data safely stored in certified repositories for long term preservation and curation?** The digital signature of the whole dataset, or the storage of the dataset in a git repository, could provide support for correct duplication and preservation. In addition, Zenodo operates a 12-hourly backup cycle, with one backup sent to tape storage once a week.

# 5. ETHICAL ASPECTS

**Are there any ethical or legal issues that can have an impact on data sharing?** _These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA)._ The ethical aspects related to the personal data collected in this dataset are addressed in the Ethics Requirements document of the original proposal. Regarding the protection of the personal data of research participants, the Consortium will meet the following conditions:

* To submit to the REA copies of the ethical approvals for the collection of personal data, issued by the competent University Data Protection Officers or National Data Protection Authorities.
* To justify (if necessary) the collection and/or processing of personal sensitive data.
* To follow and comply with national and EU legislation on the procedures that will be implemented for data collection, storage, protection, retention and destruction.

**Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?** Not applicable in this project.

# 6. OTHER ISSUES

**Do you make use of other national/funder/sectorial/departmental procedures for data management?
If yes, which ones?** Through the use of institutional repositories, we also follow the data management procedures of the partner countries; in particular, the partner countries' Research Councils' common principles on data policy provide an overarching framework for the individual Research Councils' data policies.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0310_ROSSINI_818087.md
# EXECUTIVE SUMMARY / ABSTRACT

According to task T9.3 of WP9, a Data Management Plan (DMP) should be outlined and delivered by M06. The present report is focused on the preparation of the Data Management Plan (DMP) for ROSSINI and aims to define the guidelines that the project consortium will follow to manage the data generated within the project, to protect them and to guarantee their availability to other researchers. In particular, the DMP will define how these data will be managed and shared by the project partners, and also how this information will be updated and preserved during and after the project duration. The building of the DMP started at the proposal stage, when good research data management was addressed under the impact criterion. All partners will be responsible for updating the knowledge management system periodically and will characterise the data they produce to ensure that the data are: discoverable (by means of an identification mechanism such as a Digital Object Identifier), accessible (with defined modalities, scope, licences and Intellectual Property Rights (IPR)), assessable and intelligible (allowing third parties to make assessments about their reliability and the competence of the creators), usable beyond the original purpose for which they were collected and by third parties for long periods after collection (repositories, preservation and curation), and interoperable.

# SCOPE

The ROSSINI DMP describes the observed data that are collected and processed during the lifetime of the project, while providing an overview of the available research data, access, data management and terms of use. The DMP reflects the current state of the discussions, plans and ambitions of the partners. It includes the preliminary scenario of data set definitions and expected results, and will be updated and extended with new datasets and results during the lifespan of ROSSINI. Moreover, the ROSSINI project takes part in the Open Research Data Pilot (ORDP) in Horizon 2020, which aims to improve access to and re-use of research data, with a special focus on the need to balance openness and protection of scientific information.

# ABBREVIATIONS

<table>
<tr> <th> Data Management Plan </th> <th> DMP </th> </tr>
<tr> <td> Intellectual Property Rights </td> <td> IPR </td> </tr>
<tr> <td> Open Research Data Pilot </td> <td> ORDP </td> </tr>
<tr> <td> Key Performance Indicators </td> <td> KPI </td> </tr>
<tr> <td> European Commission </td> <td> EC </td> </tr>
<tr> <td> Findable, accessible, interoperable and re-usable data </td> <td> FAIR </td> </tr>
<tr> <td> European Union </td> <td> EU </td> </tr>
<tr> <td> General Data Protection Regulation </td> <td> GDPR </td> </tr>
<tr> <td> European Factories of the Future Research Association </td> <td> EFFRA </td> </tr>
<tr> <td> Simple Web-service Offering Repository Deposit </td> <td> SWORD </td> </tr>
<tr> <td> Human Robot Collaboration </td> <td> HRC </td> </tr>
</table>

# 1. Introduction

Good data management is not a goal in itself, but rather the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and re-use by the community after the data publication process [1].
The amount of data generated by scientific research and research projects continuously increases; however, re-using the data for further research purposes, and thus maximising the benefit deriving from the research investments, still represents a hard challenge. The data need to be properly collected, annotated and filed in such a way that they will be available in the long term and can be re-used for downstream investigations, either alone or in combination with newly generated data.

## 1.1 The Data Management Plan in H2020

According to the guidelines of the EC, the DMP is "the key element of good data management. A DMP describes the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project". The DMP is a document that outlines the procedures and methodologies of data treatment during the project and how the data will be used and shared after the project ends. It deals with the generation and discovery of the data, their collection, evaluation by quality assurance, classification, organization and documentation. Dissemination and sharing policies are also addressed within the DMP. The process of creating the DMP respects some golden rules that ensure the successful management of the research data arising from the project. W. K. Michener, in his article "Ten Simple Rules for Creating a Good Data Management Plan" [2], outlines the building process of the DMP: it starts with the identification of the data to be collected (1), followed by the definition of how the data will be organized (2) and documented (3), how data quality will be assured (4), the strategy adopted to preserve and store the collected data (5), and the data policies adopted (6), ending with the rules for the dissemination of the data and the appointment of the roles and responsibilities of data management (7).

Even if having a clear vision of the nature of the data can be quite challenging in the early phases of the project, when the ROSSINI DMP is released the plan will include the characterization of the data collected:

* the types of data: text, spreadsheets, software, algorithms, pictures, videos, audio files;
* the volume of data: data management activities might be influenced by the amount of data to be handled;
* sources: stakeholders or research centres that wish to re-use the data generated by the project are interested in the source of the data, i.e. whether they are proprietary, derive from other research studies, are subject to restrictions or can be freely used;
* data and file formats: non-proprietary formats are preferred as they ensure long-term accessibility to the data; an example is Comma Separated Values (CSV).

The DMP is a living document that must be revised during the project. Therefore, if some of the above-mentioned information is not yet clear and known at M06, the contents will be updated with the missing data. The definition of the way the data will be organized should be addressed as well; it is strongly influenced by the volume of the data generated and used within the project.
Larger data volumes and usage constraints may require the use of relational database management systems (RDBMS) for linked data tables, like Oracle or MySQL, or a Geographic Information System (GIS) for geospatial data layers, like ArcGIS, GRASS or QGIS, while small amounts of data can be effectively managed with commercial or open-source spreadsheet programs like Excel and OpenOffice Calc [2]. The data organization will be defined once a clear scenario of the types and volume of data collected within the project emerges.

The documentation of the data by means of proper **metadata** is fundamental to ensure that the data will be discoverable, usable and properly cited by those who will look for them and use them for further research purposes. Indeed, metadata means "data about data": metadata is defined as the data providing information about one or more aspects of the data, and it is used to summarize basic information about data, which can make tracking and working with specific data easier [3]. In this project, structural and descriptive metadata will be deployed to define formats, types, versions and relationships between digital data, and to identify the data through some of their characteristics, like date of collection, volume, name, authors/owners and keywords.

Once the overall picture of the collected data is clear, the project will state the guidelines and the approaches to be followed to perform quality control and quality assurance checks, which should guarantee the measurement and improvement of the quality of the results of the project.

The **data storage and preservation strategy** will be an important topic that the DMP will address, by answering the following questions:

* How long will the data be accessible?
* How will data be stored and protected over the duration of the project?
* How will data be preserved and made available for future use?

The answers to those questions are expected to vary from one partner to another due to several factors. The internal data policy of each partner will influence whether some of the collected research data can be made public or not. Moreover, if the partners make the data available for consultation by other researchers after the end of the project, the storage may depend on the type of data generated: some data may need to be kept only for a short time as they are extremely repeatable, while data that vary strongly from one experiment to another might need to be stored for a very long time. The data that the owners authorize for publication and re-use will be stored on online collaborative platforms accessible to the project consortium and will also be uploaded to online repositories like Zenodo ( _http://zenodo.org/_ ).

The DMP will also include explicit **policy statements** about how data will be managed and shared. Such policies include the licensing or sharing arrangements that pertain to the use of pre-existing materials; the plans for licensing, sharing and embargoing (i.e., limiting use by others for a period of time) data, code and other materials; and the legal and ethical restrictions on access to and use of human-subject and other sensitive data. A sketch of how such descriptive metadata and policy attributes could be recorded alongside each dataset is given below.
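As an illustration only (the actual metadata schema will be fixed once the data scenario is clear), the following Python sketch shows how a descriptive metadata record carrying the characteristics and policy attributes listed above could be written as a JSON sidecar file next to a dataset; every field name and value is hypothetical.

```python
import json
from pathlib import Path

# Hypothetical descriptive metadata for a single ROSSINI dataset.
# The fields mirror the characteristics named above (date of collection,
# volume, name, authors/owners, keywords) plus the sharing policy.
record = {
    "name": "sensor_sequence_042",        # illustrative dataset name
    "version": "1.0",
    "format": "CSV",
    "date_of_collection": "2019-05-20",
    "volume_mb": 250,
    "owners": ["Partner A"],              # placeholder partner name
    "keywords": ["HRC", "safety", "sensor data"],
    "license": "CC-BY-NC-4.0",            # assumed licence choice
    "confidential": False,
    "embargo_until": None,                # ISO date if an embargo applies
}

# Store the record as a sidecar file next to the dataset it describes, so
# the data remain self-describing when moved to a repository such as Zenodo.
Path("sensor_sequence_042.metadata.json").write_text(json.dumps(record, indent=2))
```

Keeping the policy attributes (licence, confidentiality, embargo) in the same record as the descriptive metadata makes it straightforward to filter which datasets may be deposited openly.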
The policies will be in line with what has been stated in the Consortium Agreement with respect to Intellectual Property Rights (IPR) and the joint ownership of the results and the data behind them, and with what the Ethics Requirements deliverables report with regard to the treatment of personal data and of data deriving from the involvement of human beings in the project experiments. The IPR and non-disclosure statements will be very important for the dissemination actions taken by the project partners. The DMP will be compliant with the statements of the Consortium Agreement (CA) and the Ethics Deliverables in defining when, how and which data will be made available. Dissemination actions will range from posting the data on social media and the project website to publishing the data in online repositories (like Zenodo) and in scientific journals.

## 1.2 Open Research Data Pilot

Since 2017, all the thematic areas of the H2020 programme have been included in the Open Research Data Pilot (ORD Pilot), a flexible pilot on open access to research data and scientific publications in H2020. The pilot considers the need to balance the possibility of making the data open with the protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns and security, as well as questions of data management and preservation. The Europe 2020 strategy for a smart, sustainable and inclusive economy underlines the central role of knowledge and innovation in generating growth. The reason for the creation of the ORD Pilot is that free access to scientific publications and data should improve the quality of the results on which further research will hopefully be built, encourage collaboration and avoid duplication of efforts, speed up the entrance of the results into the market, and underline the benefits of public investments in research funded under the H2020 programme among citizens and society [4].

From a high-level perspective, open access (OA) consists of providing online access to scientific information free of charge to the users, to promote the re-usability of the data. Within the context of R&D actions there are two main categories of data that OA addresses: scientific papers and the research data collected from the experiments conducted in the laboratories. Research data refer in particular to facts or numbers collected to be examined and considered as a basis for reasoning and discussion on project results; examples include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that are available in digital form. Indeed, the ORD Pilot run by the European Commission applies to the datasets at the basis of scientific publications and to the peer-reviewed papers released within the context of H2020 projects. According to the Budapest Declaration (2002) and the Berlin Declaration (2003), within the context of the ORD Pilot, open access means not only the basic rights to download, save and print a document, but also to copy, distribute, search, link, crawl and mine data. The DMP will be useful to state which types of data will be made available and which restrictions may apply to some data.
The two main routes to open access are [5]:

* **Self-archiving / 'green' open access** – the author, or a representative, archives (deposits) the published article or the final peer-reviewed manuscript in an online repository before, at the same time as, or after publication. Some publishers request that open access be granted only after an embargo period has elapsed.
* **Open access publishing / 'gold' open access** – an article is immediately published in open access mode. In this model, the payment of publication costs is shifted away from subscribing readers. The most common business model is based on one-off payments by authors. These costs, often referred to as Article Processing Charges (APCs), are usually borne by the researcher's university or research institute, or by the agency funding the research. In other cases, the costs of open access publishing are covered by subsidies or other funding models.

According to Article 29.2 of the Grant Agreement, under H2020 "each beneficiary must ensure open access to all peer-reviewed scientific publications relating to its results". Following the recommendation of the EC, the ROSSINI project is participating in the Open Research Data Pilot and the DMP is considered a deliverable (D9.3) due in M06. To ensure the visibility and openness of ROSSINI resources, some platforms have been considered where the general public, researchers and other investigators can discover and download information, data and documents on the project's results. The platforms and widely used research data repositories considered for ROSSINI, which allow research stakeholders to search and freely retrieve all types of open data uploaded by other researchers, are Zenodo and the EC's OpenAIRE platform. Furthermore, the EFFRA Innovation Portal is also taken into consideration since, being provided by the European Factories of the Future Research Association (EFFRA), it is a unique resource database combining information from various project databases about the latest project outputs, together with reports and demo materials.

## 1.3 Objectives of the Data Management Plan

The overall objective of the ROSSINI project is to develop a disruptive, inherently safe hardware-software platform for the design and deployment of human-robot collaboration (HRC) applications in manufacturing. The research lines of the ROSSINI project will address several points: sensing (smart sensors for quicker sensor response), control (a safety-aware control architecture to reduce robot task execution time), actuation (collaborative robot manipulation), human factors (human-robot interactions) and risk assessment. The research lines will be combined into 3 demonstrators (white goods, electronic equipment, and food packaging). Thus, the purpose of the ROSSINI DMP is to define the management of the project research data that are collected from the smart sensors, from the logs of IT systems and from the metrics that will be used to evaluate job quality in the use cases. The plan is intended as a roadmap illustrating how data arising from the project research lines will be treated throughout the project lifetime and beyond, once it is finished. The ROSSINI DMP will provide a vehicle for conveying information to, and setting expectations for, the project team during the different stages of the project, especially when the project is underway.
The plan will be a living document that is periodically reviewed and revised as necessary according to the new data gathered, the needs, and any changes in protocols (e.g., metadata, QA/QC, storage) and policies.

## 1.4 ROSSINI Data Management Plan: General description

The present report is focused on the preparation of the Data Management Plan (DMP) for the ROSSINI project, to be delivered at M06. The DMP provides an analysis of the main elements of the data management policy that will be used throughout the project with regard to all the datasets that will be generated. In particular, the DMP will define how these data will be managed and shared by the project partners, and also how this information will be updated and preserved during and after the project duration and made available after the end of the project. The DMP of the ROSSINI project has been prepared following the template provided in the European Commission's _Guidelines on Data Management in H2020_. This document will be updated and augmented with new datasets and results according to the progress of the activities of the ROSSINI project. Also, the plan will be updated to include changes in consortium composition and policies over the course of the project. The procedures that will be implemented for data collection, storage and access, sharing policies, protection, retention and destruction will follow the requirements of the national legislation of each partner and will be in line with EU standards.

The first part of the DMP will assess the types of data collected, their origin, their size, the purpose for gathering them and how they are going to be re-used. Moreover, as part of making research data findable, accessible, interoperable and re-usable (FAIR), the ROSSINI DMP should include information on: (i) how data will be collected, processed and/or generated: which metadata will describe them, and which identifiers and keywords will make them findable; (ii) which methodology and standards will be applied; (iii) whether data will be shared/made open access: which methods or software tools will be needed to access the data; (iv) how data will be curated and preserved (including after the end of the project); and (v) how data will allow exchange and re-use: the standard vocabularies used, common ontologies, mappings to more common ontologies, the definition of which data might be used by third parties after the project, and which quality assurance strategies and processes will be followed. Finally, the plan will focus on the costs to be incurred for making data FAIR and on the roles and responsibilities for data management within the project.

The M06 version of the DMP is only a preliminary plan, since the effective phase of the project has just started and the scenario of the types, formats and dimensions of the data is still not clear and complete. Moreover, some strategies for the collection, classification and organization of the data are still under discussion among the partners. The DMP is intended as a living document that will be kept constantly updated with a more detailed breakdown as the project progresses and new knowledge and results are achieved.

# 2. Data Sets identification

## 2.1 Register on numerical data sets generated or collected in ROSSINI

The intention of the DMP is to describe the numerical model or observation datasets collected or created by the ROSSINI project.
Since the project is at its very beginning, no dataset has been generated or collected up to the delivery date of this DMP. However, the register of numerical data sets has to be understood as a living document, which will be updated regularly during the project lifetime. The information listed below reflects the conception and design of the individual partners in the different work packages at the beginning of the project. The data register will deliver information according to the details given in Annex 1 (Part A) of the Grant Agreement (GA):

* Data set reference and name: identifier for the data set to be produced.
* Data set description: description of the data that will be generated or collected, their origin or source (in case they are collected), nature and scale, to whom they could be useful, and whether they underpin a scientific publication. Information on the existence (or not) of similar data and on the possibilities for integration and re-use.
* Partners' activities and responsibilities: the partner owning the device, in charge of the data collection, data analysis and/or data storage, and the WPs and tasks involved.
* Standards and metadata: reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created. Format and estimated volume of data.
* Data exploitation and sharing: description of how data will be shared, including access procedures and policy, embargo periods (if any), outlines of technical mechanisms for dissemination, necessary software and other tools for enabling re-use, and the definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.) and whether this information will be confidential (only for members of the Consortium and the Commission Services) or public. In case a dataset cannot be shared, the reasons should be mentioned (e.g. ethical, rules on personal data, intellectual property, commercial, privacy-related, security-related).
* Archiving and preservation (including storage and backup): description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what their approximate end volume is, what the associated costs are and how these are planned to be covered.

Such data can be anonymised for statistical or other purposes and shared with open access, so that they can be further analysed to extract information and knowledge. Each dataset can be accompanied by several metadata (e.g. type, gender, age, etc.) which can support various kinds of historical data analysis.

### 2.1.1 Data set per partner

All partners have identified the data that will be produced in the different project activities and have provided an overview of the nature and details of each dataset, as listed below.

# Table 1, Datalogic dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Besides the project documentation (mainly made of SW source code, schematics, drawings and MS Office documents), we will collect sequences of sensor data including 2D or 3D images, safety laser scanner measurements and safety radar traces.
The datasets will be used exclusively for the development of our data processing algorithms. Stored datasets will allow both off-line development and non-regression tests. IRIS may need some datasets for the same purpose. Each set will contain from one to many tens of acquisitions, i.e. from a few MBytes to some hundreds of MBytes. As the images will contain pictures of human operators (typically a Datalogic developer), they will be considered personal data and be, by default, confidential. Laser scans and radar traces will not be considered personal data even if capturing human beings. </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> All the data will be acquired using 2D or 3D cameras (both commercial and our prototypes), Datalogic safety laser scanners and Pilz safety radar prototypes. </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> A dataset will be a sequence of images and/or scans and/or radar traces saved in the same directory. 2D images will be saved in a standard lossless format like BMP or TIFF. 3D images will be saved in a standard format too, e.g. PCL. Data coming from the laser scanners or from the radar prototypes will be saved in a raw binary format. All data will be tagged with an acquisition timestamp. </td> </tr>
<tr> <td> **New data set value** </td> <td> The safety and robot control algorithms that will be developed in the Rossini project will start from the data acquired by a set of safe (RS4) and non-safe sensors. Developing the algorithms using real sensors in a real-world setting has the advantage of generating a huge variety of cases, but requires the continuous availability of a real (or, at least, realistic) environment. Saving sequences of acquisitions will allow the developers to work off-line and to use repeatable sequences for non-regression tests. </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> Considering that each image has a size of 1-3 MBytes, we plan to acquire up to 1-2 TBytes of data. </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> Datalogic (2D and 3D images and laser scans) and, maybe, Pilz (radar traces) </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> Datalogic (2D and 3D images and laser scans) and, maybe, Pilz (radar traces) </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> Datalogic and, on request, IRIS </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> Datalogic and, on request, IRIS </td> </tr>
</table>
<table>
<tr> <th> **WPs and tasks** </th> <th> Data will be collected using sensors from WP3, and will be used for the development of the algorithms of WP3 and WP4.1 </th> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> Only a textual description of each dataset will be provided. The description will include the working cell, the environmental data (e.g. lighting conditions), the type of sensors and their setup. </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> Datalogic (or Pilz) will collect the data by recording brief sequences of acquisitions of a realistic work situation. The data will be stored on a company network server and will be available to the developers for future use. Datalogic will collect the datasets and store them in an internal network space with limited access.
</td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> When a Datalogic developer needs a dataset, they will download it from the network and use it to feed their algorithms. On request, some datasets will be transferred to IRIS using secured channels. </td> </tr>
<tr> <td> **Data ownership** </td> <td> The owner of the data will be the company that acquired it. No pre-existing data will be used. </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> All images will be considered confidential by default. Other types of data can potentially be shared, at least inside the Consortium. </td> </tr>
<tr> <td> **Data utility** </td> <td> Datalogic developers working on Rossini will have full access to the datasets. </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> No </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> N.a. </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> A dataset will be a sequence of images and/or scans and/or radar traces saved in the same directory together with their description. Datasets will be saved on a company fileserver and periodically backed up as per Datalogic IT policy. In case of need, copies of the datasets will be downloaded onto the developer PCs for faster access and removed when no longer useful. Access to developer PCs requires personal logins as per Datalogic IT policy. Should some datasets be shared with IRIS, they will be transferred via a password-protected link, e.g. FTP or an encrypted archive on a large file-sharing platform to be defined. </td> </tr>
<tr> <td> **Data Storage** </td> <td> Only a selection of the acquired datasets will be stored on the network servers. Most of the datasets will be deleted at the end of the project. The most peculiar datasets might be maintained as a reference for future product developments. </td> </tr>
</table>

# Table 2, Pilz dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> The information and state of the environment will be captured by both the safe and non-safe sensors of T3.1 and T3.3 and transformed into a data set that can be sent over the bus (T3.4) to the system (T3.5), which will be able to understand and manage it. </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Vision sensors (T3.1).
Capacitive, tactile and radar sensors (T3.3) </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> The protocol for the bus is still to be discussed, and so is the nature of the data. </td> </tr>
<tr> <td> **New data set value** </td> <td> n/a </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> No video planned for the sensors of T3.3 </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> Partner of the project owning the device/software producing the data </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> WP3 T3.3, T3.4, T3.5 </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> To be defined: the data documentation will define the protocol for the communication between the sensors of T3.1 and T3.3 and the system designed in T3.5 </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> To be discussed </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> The data of the sensing layer will be sent to the perception layer, where a decision will be made about safety-relevant issues </td> </tr>
<tr> <td> **Data ownership** </td> <td> Who is the owner of the data? To be discussed. Is another organization contributing to the data development? No. Are you re-using some pre-existing data? No. </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Public/confidential/limited access: all data from the sensors are only relevant for the next layer. </td> </tr>
<tr> <td> **Data utility** </td> <td> How will this data be shared/made accessible for verification and re-use? The data from the sensors have to be checked before being sent to the next layer. </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> Can data be uploaded in an open research data pilot? When? No </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> Please describe the modality of: * storage; * backup; * transmission; * data processing in the short and medium term, with references to practices, standards and regulations where applicable. </td> </tr>
<tr> <td> </td> <td> Transmission via the bus to the next layer </td> </tr>
<tr> <td> **Data Storage** </td> <td> Please indicate: * where data will be stored; * if the conservation concerns the whole collected data or only part of them; * for how long data will be stored. Not relevant for this type of data </td> </tr>
</table>

# Table 3, Unimore dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Data about position/velocity and about the nature of the objects detected (e.g. human/object). The data will be quantitative. The data will be needed for implementing a dynamic controller for the robot.
</td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> (Processed) sensor data </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> Describe nature and format of data: structured data (HTML, JSON, TEX, XML, RDF); tables (CSV, ODS, TSV, XLS, SAS, Stata, SPSS portable). </td> </tr>
<tr> <td> **New data set value** </td> <td> No new data set will be created </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> Videos will be produced for dissemination purposes. The duration of the video is that of the project. </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> Partner of the project owning the device/software producing the data </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> The data will be collected by sensors, post-processed and then sent to the scheduler/planner/controller for deciding the inputs to provide to the robot </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> Use for control of the robot. </td> </tr>
<tr> <td> **Data ownership** </td> <td> The data will be produced and used in our labs </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Public </td> </tr>
<tr> <td> **Data utility** </td> <td> Plots in technical documents/scientific publications </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> No </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> Data will be collected by sensors, processed and used for computing the input to provide to the robot. Data will be transmitted either by cabled or wireless communication. The processing happens through scheduling/planning/control algorithms. </td> </tr>
<tr> <td> **Data Storage** </td> <td> </td> </tr>
</table>

# Table 4, IRIS dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> In the case of our Task 4.1 (Semantic Scene Map), the purpose is to merge the data collected by the sensors in order to build the semantic scene map. The main purpose of the semantic scene map is to produce a map of the working environment including the safety areas available from the environment representation. The purpose of the data collection is solely to process the information coming from the sensors to build the semantic scene map. Active learning algorithms can be used to reduce the amount of labelling. Synthetic data can be generated through simulation software to compensate for the possible lack of real data. The results may be stored in order to be available for other layers of the architecture developed by the partners.
In the case of Task 7.2 (Implementation of the design tool), the scope is to develop a desktop-based design tool starting from an existing one, integrating libraries, tools, algorithms and models specific to the ROSSINI project. The data used for this task will be mainly generated by the software itself and will be used for testing purposes. The size of the data depends on the source. A 3D camera can generate a considerable volume of data, while a proximity sensor does not produce high data rates. Since the data will be merged, after the merging process we can expect a high volume of data. The data will be mostly quantitative: sensor readings and x, y, z coordinates, as well as movement measurements (speed, acceleration). The magnitude will be determined once we have real data, but as it will represent coordinates and movement in the work cell it will be neither very large nor very small. </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Mainly data from the sensors. Since the sensors implemented can be of different types, potentially many kinds of data can be produced. In our case the data do not come directly from the sensors, but are pre-processed and merged. </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> a) text documents, mainly in PDF, ODT, DOC and DOCX; b) images: JPG, SVG, PNG; c) audio recordings: MP3, WAV, OGG; d) structured data: HTML, JSON, TEX, XML; e) tables: CSV, ODS, XLS, XLSX; f) source code: C, C++, CSS, Python; g) configuration data: CONF; h) database: MySQL. The expected data formats are types d), e) and h). </td> </tr>
<tr> <td> **New data set value** </td> <td> We will add semantic labels to the sensor data, especially the "point clouds" of the LiDAR data, in order to identify the objects with human-meaningful identifiers as well as behaviour identifiers (esp. directional movement). This will be passed to the "cognitive layer" of the Rossini platform, which will use this information to plan actions for the robot. </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> No videos are considered at this stage </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> DATALOGIC </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> The hardware manufacturers (robot, sensors, …) will each have their standards, especially with respect to safety, and the relevant ones will be established during the project. </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> 1. Who collects the data, and how: the sensor data are collected by the sensor owners (Pilz, Datalogic, …) and will be captured by activating the sensors and storing their streaming data onto a server. 2. Who structures and stores them, and how: each sensor may have different formats, so these will be unified into a common structure. 3. Who processes them, and how: the WP3 and WP4 partners. 4. Who distributes them, and how:
N/A </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> The data will be used to plan and execute a safe work cell for human-robot collaboration in an industrial working environment. </td> </tr>
<tr> <td> **Data ownership** </td> <td> Data produced are stored and accessible to the whole consortium. External access can be granted to third parties upon identification and signature of an NDA. </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Data not marked as confidential will be public </td> </tr>
<tr> <td> **Data utility** </td> <td> The data that are not supposed to be used or visible outside of the system will be in a specific format to satisfy efficiency and safety requirements. Data that are supposed to interact with external systems will be easy to access and saved in standard formats </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> To be defined. </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> Backups of the datasets exist in IRIS' internal repository. Access to those is restricted to authorized personnel or to project beneficiaries needing access rights. The data will be available for an indefinite time and curated until 2 years after project conclusion. </td> </tr>
<tr> <td> **Data Storage** </td> <td> Backups of the datasets exist in IRIS' internal repository. Access to those is restricted to authorized personnel or to project beneficiaries needing access rights. </td> </tr>
</table>

# Table 5, SUPSI dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Project technical documents: CAD files, data sheets, bills of materials, Gantt diagrams, operating manuals, software packages, source code, electrical diagrams, data logs. Other documents: pictures, videos, meeting minutes, contact lists, posters, flyers and presentations, papers, deliverables. Nature of data: digital. Order of magnitude: ≥40 GB (prediction) </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Project technical document sources: engineers' workstations, machine control systems, Internet. Other document sources: internal production for documentation or promotional purposes. </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> Describe nature and format of data: a) text documents (DOC, PDF, TXT); b) images (JPG, GIF, PNG, TIFF); c) video (MPEG, AVI, WMV, MP4); d) structured data (HTML, XML); e) tables (CSV, XLS); f) source code (C, C#, JavaScript, Java); g) configuration data (INI, CONF); h) database (MySQL) </td> </tr>
<tr> <td> **New data set value** </td> <td> \- </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> Max 5 min each.
</td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> SUPSI </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> \- </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> \- </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> \- </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> WP5 – Collaborative by Birth Robot Arm, T5.2 – T5.3; WP6 – Human-Robot Mutual Understanding, T6.4 – T6.5 </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> The data are stored in SUPSI Instory (SUPSI INSTitutional repositORY), the online institutional archive of publications related to the research and didactic work conducted by the University of Applied Sciences and Arts of Southern Switzerland. All data in Instory are provided with metadata. </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> 1. Who collects the data, and how: the WP responsible, collecting them in local folders. 2. Who structures and stores them, and how: the Project Manager, regularly uploading the collected data to the Instory repository. </td> </tr>
<tr> <td> </td> <td> 3. Who processes them, and how: SUPSI researchers and project partners, obtaining access to SUPSI Instory. 4. Who distributes them, and how: SUPSI researchers, giving external access to Instory. </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> SUPSI-collected data will be used for: * the engineering of a collaborative "by birth" robot arm; * the development of new technologies for human-robot and robot-human communication </td> </tr>
<tr> <td> **Data ownership** </td> <td> The data owner is SUPSI. Project partners can contribute to the data development. Pre-existing data coming from other projects can be re-used. </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Limited access </td> </tr>
<tr> <td> **Data utility** </td> <td> Academic researchers, project partners, scientific community </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> Data can be uploaded in an open research pilot, normally at the end of the project. </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> Data are encrypted and stored on a server. A regular backup of the data is automatically executed on the server. Data are transmitted by giving password-protected access to the repository. </td> </tr>
<tr> <td> **Data Storage** </td> <td> All the collected data are stored in the SUPSI repository for an unlimited period. </td> </tr>
</table>

# Table 6, TNO dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Observational and experimental data from the experimental setup in each use case. </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Informed consent, questionnaires and objective measurements.
</td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> Observational and experimental data in TXT, CSV, JPEG, AVI </td> </tr>
<tr> <td> **New data set value** </td> <td> New dataset to be analyzed in WP06 (Human-Robot Mutual Understanding) </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> In case of video, indicate the duration: <30 min </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> Partner of the project owning the device/software producing the data </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> n/a </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> n/a </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> n/a </td> </tr>
<tr> <td> **Data ownership** </td> <td> n/a </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> n/a </td> </tr>
<tr> <td> **Data utility** </td> <td> n/a </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> n/a </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> n/a </td> </tr>
<tr> <td> **Data Storage** </td> <td> n/a </td> </tr>
</table>

# Table 7, Fraunhofer IFF dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Fraunhofer IFF expects to gather quantitative data from collision tests with robots. The tests will be necessary to evaluate the proposed method of transforming results from collision tests. The amount of data will probably be in the range of 10 to 20 GB, which corresponds to data sets of 500 to 1000 individual measurements. </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> The data come from collision tests with robots. </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> The test data may include drawings (PDF), images (JPG, PNG), footage (MP4), tables (MAT, CSV, XLSX), and reports (PDF, DOCX). It is also likely that source code from robot programs will belong to the test data (depending on the confidentiality of the data). </td> </tr>
<tr> <td> **New data set value** </td> <td> Fraunhofer IFF requires the data to evaluate its contribution to the project. However, the data also have a benefit for the community. To a certain extent, they describe the robot behavior during a collision. This knowledge is relevant for the development of verified collision models, for instance as part of a simulation environment used to plan and design applications featuring human-robot collaboration. </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> Fraunhofer IFF cannot estimate the amount and duration of footage at this point of the project.
</td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> No partners outside the project will be involved in producing the data. For the tests, Fraunhofer IFF will cooperate with Pilz. Hence, it is very likely that Pilz will also record and gather data from the tests. </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> There are no applicable standards for the data which Fraunhofer IFF will record in the ROSSINI project. </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> Fraunhofer IFF will collect the data from collision tests with collaborative robots. An investigator will structure and store the data. For data processing, Fraunhofer IFF will use tools like MATLAB. It does not plan to distribute the data. There are no applicable regulations for such data. </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> Not applicable </td> </tr>
<tr> <td> **Data ownership** </td> <td> Fraunhofer IFF will be the owner of the data from tests done in its laboratory. It has no exclusive ownership of data from tests in which other partners were involved. It is planned to re-use data from former experiments with volunteers. Fraunhofer IFF requires these data when developing the desired method that transforms results from a collision test with a fixed measurement device to those of a transient contact. </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Limited access </td> </tr>
</table>
<table>
<tr> <th> **Data utility** </th> <th> By making a copy of the files. </th> </tr>
<tr> <td> **Open research data pilot** </td> <td> Data can be uploaded, if necessary. </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> No </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> Fraunhofer IFF will store the data on a server with a fully automated backup system. Only employees of the business unit Robotic Systems who are involved in the ROSSINI project will have access to the data. There are no applicable standards or regulations for the data. </td> </tr>
<tr> <td> **Data Storage** </td> <td> Fraunhofer IFF plans to keep the data for an indefinite period after the end of the ROSSINI project. </td> </tr>
</table>

# Table 8, Whirlpool dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Worker description. Qualitative dataset </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Interviews and assessment </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> a) text documents (DOC, ODF, PDF, TXT, etc.); b) images (JPG, GIF, SVG, PNG, TIFF); c)
video / film (MPEG, AVI, WMV, MP4); f) tables (CSV, ODS, TSV, XLS, SAS, Stata, SPSS portable) </td> </tr>
<tr> <td> **New data set value** </td> <td> Accurate description of the worker to enable enhanced human-robot interactions </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> 10' </td> </tr>
<tr> <td> **Partners Activities & Responsibilities** </td> </tr>
<tr> <td> **Partner owner of the device producing the data** </td> <td> WHR </td> </tr>
<tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr>
<tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr>
<tr> <td> **WPs and tasks** </td> <td> WP8 </td> </tr>
<tr> <td> **Standards and Metadata** </td> </tr>
<tr> <td> **Metadata standards and data documentation** </td> <td> NONE </td> </tr>
<tr> <td> **Methodology for data collection/generation** </td> <td> Data gathering and collection will be performed by WHR personnel under GDPR policy. Data will be stored on a specific Team Drive created on the Whirlpool G-Drive cloud and accessed only by ROSSINI project members </td> </tr>
<tr> <td> **Data exploitation & sharing** </td> </tr>
<tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> To implement HRC </td> </tr>
<tr> <td> **Data ownership** </td> <td> WHR </td> </tr>
<tr> <td> **Suitability for sharing** </td> <td> Confidential </td> </tr>
<tr> <td> **Data utility** </td> <td> Access to data will be granted to ROSSINI partners </td> </tr>
<tr> <td> **Open research data pilot** </td> <td> NO </td> </tr>
<tr> <td> **Embargo periods (if any)** </td> <td> NO </td> </tr>
<tr> <td> **Archiving & preservation (including storage and backup)** </td> </tr>
<tr> <td> **Managing, storing and curating data** </td> <td> GDPR internal policy </td> </tr>
<tr> <td> **Data Storage** </td> <td> WHR Team Drive (private Google Drive) </td> </tr>
</table>

# Table 9, Schindler dataset overview

<table>
<tr> <th> **Data Identification** </th> </tr>
<tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Data may include: production and internal logistics layout; production orders; configuration data; others to be defined </td> </tr>
<tr> <td> **Provenance of data: sources** </td> <td> Data come from interviews, archives, databases and/or other projects, devices and machines. </td> </tr>
<tr> <td> **Nature and formats of data** </td> <td> All the above formats and, in addition, DWG and Visio formats are possible. </td> </tr>
<tr> <td> **New data set value** </td> <td> To be defined. </td> </tr>
<tr> <td> **Audio-visual material** </td> <td> The duration of each video should be between 1 and 10 minutes.
</td> </tr> <tr> <td> **Partners Activities & Responsibilities ** </td> </tr> <tr> <td> **Partner owner of the device producing the data** </td> <td> To be defined </td> </tr> <tr> <td> **Partner in charge of the data collection (if different)** </td> <td> To be defined </td> </tr> <tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> To be defined </td> </tr> <tr> <td> **Partner in charge of the data storage (if different)** </td> <td> \- </td> </tr> <tr> <td> **WPs and tasks** </td> <td> </td> </tr> <tr> <td> **Standards and Metadata** </td> </tr> <tr> <td> **Metadata standards and data documentation** </td> <td> n/a </td> </tr> <tr> <td> **Methodology for data collection/generation** </td> <td> To be defined </td> </tr> <tr> <td> **Data exploitation & sharing ** </td> </tr> <tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> n/a </td> </tr> <tr> <td> **Data ownership** </td> <td> The owner of the data is Schindler Supply Chain Europe. No other organisation is contributing to the data development. Some pre-existing data will be re-used. </td> </tr> <tr> <td> **Suitability for sharing** </td> <td> Confidential/limited access </td> </tr> <tr> <td> **Data utility** </td> <td> How the data will be shared/made accessible for verification and re-use is still to be defined. </td> </tr> <tr> <td> **Open research data pilot** </td> <td> No, data cannot be uploaded to an open research data pilot. </td> </tr> <tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr> <tr> <td> **Archiving & preservation (including storage and backup) ** </td> </tr> <tr> <td> **Managing, storing and curating data** </td> <td> Data will be stored and backed up on Schindler servers or in Schindler-certified cloud services. Information appears in various forms, such as: · printed out or (hand-)written on paper · stored on hard disks · transmitted through networks · spoken in conversation or over the telephone. In any case, information will be exchanged according to the Schindler ON 0-08001 Information Security Group Policy. </td> </tr> <tr> <td> **Data Storage** </td> <td> Data will be stored in a dedicated MS Teams repository called LOC Rossini Project, where they will be kept for 5 years from the date of project closure. </td> </tr> </table>

# Table 10, IMA dataset overview

<table> <tr> <th> **Data Identification** </th> </tr> <tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Data collection: * Quantitative, coming mainly from the IMA tea bag machines. * Quantitative, coming from the IMA robotic applications already in operation. * Qualitative, to describe the different tasks the robotic platform has to perform. Data generation: * Quantitative, coming from the Rossini sensing system. * Quantitative and qualitative, for KPI evaluation. </td> </tr> <tr> <td> **Provenance of data: sources** </td> <td> IMA tea bag machine C24-E. IMA robotics projects. Rossini robotic platform. </td> </tr> <tr> <td> **Nature and formats of data** </td> <td> 1. text documents (DOC, ODF, PDF, TXT, etc); 2. images (JPG, GIF, SVG, PNG, TIFF); 3. video / film (MPEG, AVI, WMV, MP4); 4. audio recordings (MP3, WAV, AIFF, OGG, etc); 5. structured data (HTML, JSON, TEX, XML, RDF); 6. tables (CSV, ODS, TSV, XLS, SAS, Stata, SPSS portable); 7. source codes (C, CSS, JavaScript, Java, etc); 8. configuration data (INI, CONF, etc);
9. database (MS Access, MySQL, Oracle, etc) </td> </tr> <tr> <td> **New data set value** </td> <td> New data sets will come from: * the Rossini sensing system; * the mobile manipulator. New data sets will be created about task completion, to allow a quantitative evaluation of the Rossini platform. </td> </tr> <tr> <td> **Audio-visual material** </td> <td> The duration of the video (if recorded) will be decided in the future. </td> </tr> <tr> <td> **Partners Activities & Responsibilities ** </td> </tr> <tr> <td> **Partner owner of the device producing the data** </td> <td> The owner of the device is the partner that provides it (e.g. Datalogic for the vision system developed by Datalogic), even if the device is used in the IMA facility. </td> </tr> <tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr> <tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr> <tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr> <tr> <td> **WPs and tasks** </td> <td> </td> </tr> <tr> <td> **Standards and Metadata** </td> </tr> <tr> <td> **Metadata standards and data documentation** </td> <td> The hardware manufacturers (robot, sensors, …) will each have their own standards, especially with respect to safety, and the relevant ones will be established during the project. </td> </tr> <tr> <td> **Methodology for data collection/generation** </td> <td> The sensor data will be collected by the sensor owners (Pilz, Datalogic, …) and by IMA, which will use the sensors. The data will be processed by the WP3 and WP4 partners. The data will probably be stored on the Pilz Rossini platform. </td> </tr> <tr> <td> **Data exploitation & sharing ** </td> </tr> <tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> The data will be used to plan and execute a safe work cell for human-robot collaboration in an industrial working environment. </td> </tr> <tr> <td> **Data ownership** </td> <td> Data produced are stored and accessible to the whole consortium. External access can be granted to third parties upon identification. </td> </tr> <tr> <td> **Suitability for sharing** </td> <td> Data not marked as confidential will be public. </td> </tr> <tr> <td> **Data utility** </td> <td> The data that are not supposed to be used or visible outside of the system will be in a specific format to satisfy efficiency and safety requirements. Data that are supposed to interact with external systems will be easy to access and saved in standard formats. </td> </tr> <tr> <td> **Open research data pilot** </td> <td> To be defined. </td> </tr> <tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr> <tr> <td> **Archiving & preservation (including storage and backup) ** </td> </tr> <tr> <td> **Managing, storing and curating data** </td> <td> IMA IT procedures are based on best practices to guarantee data security and integrity: data is stored in an enterprise-level storage system locked inside access-restricted data center rooms. Logical data security procedures define the authorization level needed to access data, i.e. authentication methods and access profiling based on the role of the technician, and who is accessing the data, i.e. logging of activities on the file systems where data is stored. Sensitive data is transferred securely between users and systems using transfer protocols that include encryption technology.
Backup systems are based on the best technologies available on the market and are guaranteed and certified by enterprise-class hardware vendors. Data recoverability is guaranteed by backup systems based on best-of-breed enterprise backup software as well as an enterprise-level backed-up data repository (virtual libraries and hardware repositories with data deduplication technology enabled). </td> </tr> <tr> <td> **Data Storage** </td> <td> Backup systems are locked inside access-restricted backup rooms. IMA backup policies and procedures define backup data retention and frequency, as well as periodic copies of backed-up data on magnetic tapes stored in offsite vaults for long-term retention. The conservation of collected data, and how long the data is maintained, is detailed in the IMA backup and retention policies; it depends on the type of data collected, and the retention is compliant with the EU data protection regulation. </td> </tr> </table>

# Table 11, IMA Machinebouw dataset overview

<table> <tr> <th> **Data Identification** </th> </tr> <tr> <td> **Data set description Type of data: qualitative or quantitative?** **Order of magnitude** </td> <td> Data will be generated by installed sensors. Data will be used to define the severity of a defect and decide future actions. The suggestions generated by the RSC component can be sent as notifications (SMS or email) to the addressed operators, so that they can execute the appropriate actions. </td> </tr> <tr> <td> **Provenance of data: sources** </td> <td> Possible sources could be: sensors, cameras, lasers, and other measurement instruments installed at different points of the production line. </td> </tr> <tr> <td> **Nature and formats of data** </td> <td> Depending on the implemented type of sensor/camera this could be: 1. text documents (DOC, ODF, PDF, TXT, etc); 2. images (JPG, GIF, SVG, PNG, TIFF); 3. video / film (MPEG, AVI, WMV, MP4); 4. audio recordings (MP3, WAV, AIFF, OGG, etc); 5. structured data (HTML, JSON, TEX, XML, RDF); 6. tables (CSV, ODS, TSV, XLS, SAS, Stata, SPSS portable); 7. source codes (C, CSS, JavaScript, Java, etc); 8. configuration data (INI, CONF, etc); 9. database (MS Access, MySQL, Oracle, etc) </td> </tr> <tr> <td> **Partners Activities & Responsibilities ** </td> </tr> <tr> <td> **Partner owner of the device producing the data** </td> <td> The device will be owned by the industry where the data collection is going to be performed. </td> </tr> <tr> <td> **Partner in charge of the data collection (if different)** </td> <td> Multiple partners will be in charge (there are various partners related to the specific incident and/or operation). </td> </tr> <tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> Multiple partners will be in charge (there are various partners related to the specific incident and/or operation). </td> </tr> <tr> <td> **Partner in charge of the data storage (if different)** </td> <td> Multiple partners will be in charge (there are various partners related to the specific incident and/or operation). </td> </tr> <tr> <td> **WPs and tasks** </td> <td> </td> </tr> <tr> <td> **Standards and Metadata** </td> </tr> <tr> <td> **Metadata standards and data documentation** </td> <td> The dataset might be accompanied by documentation. Possible metadata include: location, date, etc. and the production process that led to the defect generation; the defect detection event in the production line; the cause, origin and value of the defect; thresholds; the current production stage.
</td> </tr> <tr> <td> **Methodology for data collection/generation** </td> <td> The methodologies of data collection and production will be defined during the research process (to be defined). </td> </tr> <tr> <td> **Data exploitation & sharing ** </td> </tr> <tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> The responsible project partner will use the data to decide whether or not a defective part/product should return to a previous production stage. </td> </tr> <tr> <td> **Data ownership** </td> <td> The dataset is confidential and available only to the members of the consortium. </td> </tr> <tr> <td> **Suitability for sharing** </td> <td> Data sharing is possible through web services and the control platform. </td> </tr> <tr> <td> **Data utility** </td> <td> Data sharing is possible through web services and the control platform. </td> </tr> <tr> <td> **Open research data pilot** </td> <td> The dataset is confidential and available only to the members of the consortium. </td> </tr> <tr> <td> **Embargo periods (if any)** </td> <td> </td> </tr> <tr> <td> **Archiving & preservation (including storage and backup) ** </td> </tr> <tr> <td> **Managing, storing and curating data & Data Storage ** </td> <td> Data is going to be persisted to a relational database system. A regular backup service will run in the background, and an aging algorithm will decide which records are too old and need to be removed. </td> </tr> </table>

# Table 12, CORE dataset overview

<table> <tr> <th> **Data Identification** </th> </tr> <tr> <td> **Data set description** **Type of data: qualitative or quantitative? Order of magnitude** </td> <td> The dataset will collect general information about partners and project activities, as well as information about news, events and publications relevant for the project.
</td> </tr> <tr> <td> **Provenance of data: sources** </td> <td> The dataset originates from the project's social media accounts (Twitter & LinkedIn) </td> </tr> <tr> <td> **Nature and formats of data** </td> <td> Data collected will be processed in CSV and XLSX format. Twitter data includes: Name, Location, Created Date, Number of Favorites, URL, Profile Image URL, Language, Protected, Description, Verified, Tweet Count, Time Zone. Twitter also gives direct access to individual Tweets: Text, URL, Retweet Count, Date, Source, Favorite Count, Hash Tags, Mentioned Users, In Reply to Screen Name, Geo Data. LinkedIn data includes (depending on the profile privacy preferences of the user): direct link to LinkedIn profile, email address, phone number, website, instant messenger accounts, birthday, 1st-degree connections. </td> </tr> <tr> <td> **New data set value** </td> <td> n/a </td> </tr> <tr> <td> **Audio-visual material** </td> <td> n/a </td> </tr> <tr> <td> **Partners Activities & Responsibilities ** </td> </tr> <tr> <td> **Partner owner of the device producing the data** </td> <td> CORE </td> </tr> <tr> <td> **Partner in charge of the data collection (if different)** </td> <td> </td> </tr> <tr> <td> **Partner in charge of the data analysis (if different)** </td> <td> </td> </tr> <tr> <td> **Partner in charge of the data storage (if different)** </td> <td> </td> </tr> <tr> <td> **WPs and tasks** </td> <td> WP9, T9.1 </td> </tr> <tr> <td> **Standards and Metadata** </td> </tr> <tr> <td> **Metadata standards and data documentation** </td> <td> n/a </td> </tr> <tr> <td> **Methodology for data collection/generation** </td> <td> n/a </td> </tr> <tr> <td> **Data exploitation & sharing ** </td> </tr> <tr> <td> **Data exploitation (purpose/use of the data analysis)** </td> <td> Data will be used for the communication and dissemination of the project activities. Moreover, CORE will monitor users' access to the website and social network pages to evaluate stakeholders' interest in the project activities and for KPI definition. </td> </tr> <tr> <td> **Data ownership** </td> <td> CORE </td> </tr> <tr> <td> **Suitability for sharing** </td> <td> Public/limited access </td> </tr> <tr> <td> **Data utility** </td> <td> Core Innovation collects this data as a KPI of the Communication Supports and Channels. The data collected from the project's social media accounts will only circulate within the consortium and the Commission Services. </td> </tr> <tr> <td> **Open research data pilot** </td> <td> n/a </td> </tr> <tr> <td> **Embargo periods (if any)** </td> <td> Yes, for the scientific papers. </td> </tr> <tr> <td> **Archiving & preservation (including storage and backup) ** </td> </tr> <tr> <td> **Managing, storing and curating data** </td> <td> </td> </tr> <tr> <td> **Data Storage** </td> <td> Data is stored on a dedicated cloud for the whole duration of the project </td> </tr> </table>

<table> <tr> <th> **3** </th> <th> **Data Summary** </th> </tr> </table>

It has already been mentioned that the DMP released at M06 of the ROSSINI project is only a preliminary version of the document, which will be updated and augmented with new datasets and results during the lifespan of the ROSSINI project. For completing the DMP deliverable, all ROSSINI partners have provided input following the Horizon 2020 template 6 as well as the Data Set Template. At the proposal phase of the project, an initial DMP was outlined; its results are summarised and presented in Table 13 below.
# Table 13, ROSSINI Expected research data

<table> <tr> <th> **Research Data** </th> <th> **Related Partners** </th> </tr> <tr> <td> RS4 system specification and performance indicators </td> <td> DATALOGIC, PILZ </td> </tr> <tr> <td> Preliminary assessment data for external certification of RS4 </td> <td> DATALOGIC, PILZ </td> </tr> <tr> <td> Safety Aware Control Architecture algorithms experimentation results </td> <td> UNIMORE, IRIS </td> </tr> <tr> <td> Collaborative Robotic Arm simulation and validation outcomes </td> <td> SUPSI, PILZ </td> </tr> <tr> <td> OECD metrics and datasets </td> <td> TNO </td> </tr> <tr> <td> Actual collision values algorithms and model </td> <td> FRAUNHOFER </td> </tr> <tr> <td> KPIs from use cases </td> <td> IMA, WHIRLPOOL, IMA Machinebouw, SCHINDLER </td> </tr> </table>

In the next months of the project, the table above will be further developed, as consortium partners will be requested to investigate and provide more detailed information on their produced data and on whether these are discoverable, accessible, assessable and intelligible, useable (beyond the original purpose) and interoperable.

## 3.1 The purpose of the data collection/generation and its relation to the objectives of the project

There are several purposes for data collection within the ROSSINI project. In principle, data will be generated/gathered for:

* Testing and research purposes: the data collected will help in characterising, from a qualitative and quantitative point of view, the results of the experiments carried out within the project, giving the technology providers of the ROSSINI project a view on the progress of their research and the information needed to set up new tests and build the architecture of the ROSSINI collaborative platform.
* Scientific data collection and analysis to define the metrics that will be used to evaluate job quality in the use cases, before and after the implementation of the Rossini framework.
* Communication, dissemination and exploitation purposes: to determine the degree to which the dissemination objectives have been reached, and the relationship between the outcomes and the efforts made to reach the goals. The data collected will also be used to support the exploitation activities to maximise the impact of the project.

Each partner has listed its own purposes, which strongly depend on the partner's role within the project and the work package it is involved in:

* Fraunhofer IFF is responsible for the development of a method that allows for transforming results from collision tests with a fixed measurement device into results for transient contact. For the evaluation of the method, Fraunhofer IFF requires data from collision tests with an appropriate measurement device and a collaborative robot. Those data are the only data which Fraunhofer IFF will create during the project.
* Datalogic, besides the project documentation mainly made of SW source codes, schematics, drawings and MS Office documents, will collect temporary datasets of images and of laser and radar scanner measurements. The datasets will be used exclusively for the development of Datalogic data processing algorithms.
* IMA Machinebouw will collect case-identifying information, respondent information, technical information, a description of the actual work situation and information concerning the future work situation. The data will give insight into the actual work situation and how this will evolve in the future. Both qualitative and quantitative data types will be used.
* IRIS will merge the data collected by sensors in order to build the semantic scene map in the case of Task 4.1. The main purpose of the semantic scene map is to produce a map of the working environment, including the safety areas available from the environment representation. Synthetic data can be generated through simulation software to compensate for a possible lack of real data. In the case of Task 7.2, the scope is to develop a desktop-based design tool starting from an existing one, integrating libraries, tools, algorithms and models specific to the ROSSINI project. The data used for this task will be mainly generated by the software itself and will be used for testing purposes.
* SUPSI: the purpose of data collection is to share data with project partners and with the scientific community, and to disseminate the project's results.
* IMA/WHR/SCHINDLER will collect data to provide input for the requirements and validation strategy necessary for the ROSSINI platform development. The data generation is related to demonstration activities and to the verification of the robotic cell.

**3.2 Types and formats, origin and size of data the project will generate/collect**

The project will generate the following types and formats of data:

<table> <tr> <th> − </th> <th> text documents (DOC, ODF, PDF, TXT, etc); </th> </tr> <tr> <td> − </td> <td> images (JPG, GIF, SVG, PNG, TIFF); </td> </tr> <tr> <td> − </td> <td> video / film (MPEG, AVI, WMV, MP4); </td> </tr> <tr> <td> − </td> <td> audio recordings (MP3, WAV, AIFF, OGG, etc); </td> </tr> <tr> <td> − </td> <td> structured data (HTML, JSON, TEX, XML, RDF); </td> </tr> <tr> <td> − </td> <td> tables (CSV, ODS, TSV, XLS, SAS, Stata, SPSS portable); </td> </tr> <tr> <td> − </td> <td> source codes (C, CSS, JavaScript, Java, etc); </td> </tr> <tr> <td> − </td> <td> configuration data (INI, CONF, etc); </td> </tr> <tr> <td> − </td> <td> database (MS Access, MySQL, Oracle, etc); </td> </tr> <tr> <td> − </td> <td> DWG; </td> </tr> <tr> <td> − </td> <td> Visio </td> </tr> </table>

Moreover, SUPSI will also generate CAD files, data sheets, bills of materials, Gantt diagrams, operating manuals, software packages, source code, electrical diagrams and data logs. The listed data formats are justified by their origin; some ROSSINI partners have already identified the origin of the generated/collected data, as illustrated in Table 14:

# Table 14, Origin of the data

<table> <tr> <th> PARTNER </th> <th> ORIGIN OF DATA </th> </tr> <tr> <td> **FFI** </td> <td> Experimental collision tests </td> </tr> <tr> <td> **WHR** </td> <td> Questionnaires and interviews with experts </td> </tr> <tr> <td> **DATALOGIC** </td> <td> 3D cameras of different types, including the Datalogic 3D safety camera, Datalogic safety laser scanners and Pilz radar sensor prototypes </td> </tr> <tr> <td> **IMA Machinebouw** </td> <td> Interviews, project meetings, field visits, conference calls, net meetings, emails, shared data platforms (cloud), projects, existing machines </td> </tr> <tr> <td> **IMA** </td> <td> Tea bag packaging machines, IMA robotics projects, mobile platform, Rossini sensing system </td> </tr> <tr> <td> **IRIS** </td> <td> Safety and non-safety sensors; software design tool </td> </tr> <tr> <td> **SUPSI** </td> <td> Project financial and administration data sources: EU portal, employees reporting.
Project technical document sources: engineers' workstations, machine control systems, the Internet. Other document sources: internally produced documentation or promotional material. </td> </tr> <tr> <td> **CORE** </td> <td> The dataset originates from the project's social media accounts (Twitter & LinkedIn) </td> </tr> </table>

Some partners have also provided an estimation of the expected size of the data; this information is outlined in Table 15 below:

# Table 15, Size of the data

<table> <tr> <th> **PARTNER** </th> <th> **SIZE OF DATA** </th> </tr> <tr> <td> **FFI** </td> <td> From 10 to 20 GB </td> </tr> <tr> <td> **WHR** </td> <td> Less than 10 GB </td> </tr> <tr> <td> **DATALOGIC** </td> <td> Each image will be 1-3 MB; the final size of the datasets will probably exceed 1 TB </td> </tr> <tr> <td> **IMA Machinebouw** </td> <td> Around 5 GB </td> </tr> <tr> <td> **IMA** </td> <td> Depends on the source </td> </tr> <tr> <td> **IRIS** </td> <td> Depends on the source </td> </tr> <tr> <td> **PILZ** </td> <td> Not available yet </td> </tr> <tr> <td> **SCHINDLER** </td> <td> Not available yet </td> </tr> <tr> <td> **SUPSI** </td> <td> ≥ 40 GB </td> </tr> </table>

**3.3 Existing data, re-use and how?**

Some partners will re-use existing data, such as FFI (from a study with volunteers), IMA (IMA robotics applications already in operation and data from the IMA tea bag packaging machines) and SUPSI (data coming from other projects); other partners, such as DATALOGIC, IRIS and PILZ, will not re-use existing data.

## 3.4 Data utility

Data from the project will be mostly useful to project partners, for design, implementation, simulation and testing purposes within the project. The main addressees will be requirements engineers, design engineers and system integrators. In particular, data will be useful to the project consortium for the design of the use cases, performed by IMA Machinebouw (WP2); the ROSSINI Safe Sensing System, performed by DATALOGIC (WP3); the definition, implementation, simulation and testing of the semantic scene map, performed by IRIS (WP4); the "collaborative by birth" robot arm, performed by SUPSI (WP5); a system that implements human-robot mutual understanding, performed by TNO (WP6); and the integration layer, performed by PILZ (WP7). Outside of the project consortium, the data from scientific publications could be useful for academic researchers and the scientific community to carry out further research.

<table> <tr> <th> **4** </th> <th> **FAIR data** </th> </tr> </table>

The principles of FAIR data have been established by a set of different stakeholders, including academia, industry, funding agencies and scholarly publishers. According to Wilkinson et al. 1 , the FAIR Data Principles are a set of guiding principles to make data findable, accessible, interoperable and reusable 7 . The FAIR Data Principles provide a concise and measurable set of parameters that should be respected to ensure the availability and reusability of the data for further research purposes by third parties that were not part of the project. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
The mentioned principles do not prescribe a specific technology, standard or implementation solution; they forego implementation choices and promote the maximum use of data.

7 Association of European Research Libraries, "Implementing FAIR Data Principles", Ligue des Bibliothèques Européennes de Recherche

#### 4.1 Making data findable, including provisions for metadata

Data will be made **findable** by means of appropriate metadata and taking into account internal project conventions. That said, they will be made available mainly to project partners for their use and re-use. The conventions that will be followed will be further discussed in the upcoming months: some partners already rely on their internal company system for data identification, and their model could be adopted by the whole consortium as the common project methodology. For example, SUPSI stores its data (including data deriving from the project) in SUPSI Instory (SUPSI INSTitutional repositORY), the online institutional archive of publications related to the research and didactic work conducted by the University of Applied Sciences and Arts of Southern Switzerland. All data in Instory are provided with metadata. Instory automatically assigns a DOI (Digital Object Identifier) to every submitted record and each of its versions, for research data as well as for publications. Research data may be linked to the corresponding publications and vice versa via their DOIs.

The data naming convention must be further discussed as well; however, the solution currently proposed by SUPSI and likely to be adopted is the following:

**yyyymmdd_ROS_WPx.t_Tyy_Description_Rzz.eee**

Where:

* **yyyy** = year, **mm** = month, **dd** = day
* **x** = WP number
* **t** = task number
* **zz** = version number
* **eee** = file extension

Example: 20190228_ROS_WP5.2_RobotJoint_R01.stp

Such a solution represents a good option, as it provides the user with the most important information about the data: the date of creation, the keyword associated with the ROSSINI project name, the number of the WP and task that originated the data, a versioning number and the file extension (a small illustrative sketch of how such names can be generated and checked is given below). If new keywords are defined, beyond the name of the project and the numbers of the WP and task, the DMP will be updated accordingly.

As previously mentioned, the consortium will make use of metadata to describe the data collection and to facilitate the finding and re-use of data. Generally speaking, metadata will have a twofold nature: descriptive, giving information for data discovery and identification (title, author, keywords), and administrative, outlining when and how the data was created, the file type and other technical information, and who can access it. Some standard metadata schemas will also be used. The Qualified Dublin Core standard will be used by SUPSI and may apply to the rest of the consortium. However, this will be assessed in the next months and the DMP will be updated accordingly.

#### 4.2 Making data openly accessible

Most of the data collected during the project are for confidential use and accessible to project partners only. However, some data will be open and available to the public, especially those included in scientific papers and public deliverables of the ROSSINI project.
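As an illustration of the naming convention proposed in Section 4.1, the following is a minimal sketch in Python (purely illustrative; the helper names are hypothetical, and the optional `_Tyy` task part is treated as an assumption, since the example file name omits it) of how a file name could be built and checked against the convention:

```python
import re
from datetime import date
from typing import Optional

# Pattern for yyyymmdd_ROS_WPx.t_Tyy_Description_Rzz.eee; the "_Tyy" part is
# made optional here (assumption), as the example name in Section 4.1 omits it.
NAME_PATTERN = re.compile(
    r"^(?P<date>\d{8})_ROS_WP(?P<wp>\d+)\.(?P<task>\d+)"
    r"(?:_T(?P<t>\d{2}))?_(?P<desc>[A-Za-z0-9]+)_R(?P<rev>\d{2})\.(?P<ext>\w+)$"
)

def build_name(wp: int, task: int, description: str, revision: int,
               ext: str, day: Optional[date] = None) -> str:
    """Build a file name following the proposed project convention."""
    d = (day or date.today()).strftime("%Y%m%d")
    return f"{d}_ROS_WP{wp}.{task}_{description}_R{revision:02d}.{ext}"

def is_valid(name: str) -> bool:
    """Check whether a file name matches the convention."""
    return NAME_PATTERN.match(name) is not None

# Reproduces the example given in Section 4.1:
assert build_name(5, 2, "RobotJoint", 1, "stp", date(2019, 2, 28)) \
    == "20190228_ROS_WP5.2_RobotJoint_R01.stp"
assert is_valid("20190228_ROS_WP5.2_RobotJoint_R01.stp")
```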
Most of the non-confidential data will be uploaded to online repositories like Zenodo or SUPSI Instory (SUPSI INSTitutional repositORY), the online institutional archive of publications related to the research and didactic work conducted by the University of Applied Sciences and Arts of Southern Switzerland. All metadata in Instory are made publicly available in the sense of Open Access.

The data generated within the project should be easily accessible without the use of any sophisticated software. Indeed, the data that will be made available only to the partners of the ROSSINI project will be accessible via the ROSSINI design tool or via standard software. The data valuable for the stakeholders and the public outside of the project will be made accessible using standard software and methods. A typical format may be .csv, so Excel would be a typical tool to read the data; alternatively, data may be accessed via a MySQL database interface if it is loaded into a common repository rather than into many "flat" files. If needed, clear documentation will be produced, especially for the ROSSINI design tool. Well-known data access tools such as Excel and MySQL are already well documented; however, further provisions could be undertaken in the future.

At the current stage of the project, the data and associated metadata, documentation and code will be deposited on the servers of each partner's organisation, while some data that will be valuable for the whole consortium will be stored in the Rossini collaborative platform provided by PILZ. Generally, access to the servers of the organisation to which each partner belongs, if provided, will be given only to people directly involved in the project, through an identification process (i.e. username and password). In the specific case of SUPSI, anyone may access the metadata describing items in the SUPSI Instory repository free of charge. The metadata may be re-used in any medium without prior permission for not-for-profit purposes.

#### 4.3 Making data interoperable

The data that are not supposed to be used or visible outside of the system will be in a specific format to satisfy efficiency and safety requirements. However, in the case of non-common ontologies, they will be explained and mapped. Generally, data will use a formal, standard, accessible, shared and broadly applicable language for knowledge representation, vocabularies that follow FAIR principles, and qualified references to other data. The data and metadata vocabularies, standards or methodologies used to make data interoperable are standard vocabularies or OAI-PMH and SWORD (Simple Web-service Offering Repository Deposit).

#### 4.4 Increase data re-use (through clarifying licences)

In the event that some project data will be licensed to permit their re-use by external stakeholders, Creative Commons or GNU licences will be used. As previously mentioned, the data that will be made available to individuals external to the project will be included in the project deliverables marked as public. Such data will be made public as soon as the project officer approves the deliverables on the Funding and Tenders Portal of the EC. With regard to access rights for third parties, the consortium will follow the rules established in the Rossini Consortium Agreement (Chapter 9, par. 9.8).
<table> <tr> <th> **5** </th> <th> **Allocation of resources** </th> </tr> </table>

The estimation of the costs to be incurred when making the data FAIR according to the principles of the EC, and the proper allocation of the resources to do so, are two topics addressed by the DMP. Moreover, the plan identifies the responsibilities for data management in the project, describing the costs and potential value of long-term preservation. Even though this is an early stage of the project and not all points can yet be described exhaustively, this chapter of the DMP addresses the costs for making data FAIR in the ROSSINI project, how these costs will be covered, who will be responsible for the data management, and who will decide what data will be kept and for how long.

#### 5.1 Estimation of cost

The costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions). Costs are eligible for reimbursement during the duration of the project under the conditions defined in the H2020 Grant Agreement.

#### 5.2 Responsibilities for data management

The partner responsible for data management within the ROSSINI project is CRIT. CRIT will be in charge of periodically updating the DMP with further details and information provided by all project partners.

<table> <tr> <th> **6** </th> <th> **Data security** </th> </tr> </table>

The ethics deliverables of the ROSSINI project released so far outline in detail the procedures and strategies to ensure the secure storage and preservation of the data. The project partners will respect the procedures implemented for data collection, storage, access, sharing policies, protection, retention and destruction according to the requirements of the national legislation and EU standards 7 . The ROSSINI consortium will also be compliant with the European Union (EU) 2016/679 General Data Protection Regulation (GDPR) 8 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.

#### 6.1 Data confidentiality and integrity

Generally, provisions for data security include best practices to guarantee both physical data security (i.e. availability of locked data rooms accessed only by authorised personnel) and logical data security (i.e. using firewalls, intrusion detection systems, authentication methods and access profiling based on the role of the technician, and logging of activities on the file systems where data is stored). Sensitive data is transferred securely between users and systems using transfer protocols that include encryption technology. Data recoverability is guaranteed by backup systems based on best-of-breed enterprise backup software as well as an enterprise-level backed-up data repository (virtual libraries and hardware repositories with data deduplication technology enabled), which can be accessed only by authorised personnel of project beneficiaries needing access rights.

#### 6.2 Data availability

Usually, data are protected and stored according to the organisation's directive for safeguarding good scientific practice.

<table> <tr> <th> **7** </th> <th> **Conclusion and next steps** </th> </tr> </table>

The initial ROSSINI DMP, due at M06, is presented in this document, describing how acquired data and knowledge will be shared and/or made open, as well as how data will be maintained and preserved during and after the project lifetime.
Indeed, it describes the updated procedures and the infrastructure implemented by the project to efficiently manage the produced data. The DMP is identified as a starting point for the discussion with the community about the ROSSINI data management strategy and reflects the procedures planned in the work plan at the beginning of the project. The information presented here has been collected from the answers provided by the ROSSINI project partners to the EC's H2020 survey template. This is only a preliminary version of the plan, which will, however, be updated regularly by CRIT, thanks to the contribution of all partners, to ensure the proper management of the data.
0313_BionicVEST_801127.md
# _1.2. Reused data and its origin_

The only data that will be taken from previous records will be the clinical records of the patients included in the study. The data originate from the different tests, questionnaires and interviews conducted with the study patients.

# _1.3. Estimated size of the data_

The data that will be collected will mainly be the clinical data per patient. It is estimated that the total size is around 2 GB.

# _1.4. Usefulness_

The data extracted from the project are expected to benefit mainly:

* The Bionic **VEST** consortium
* European Commission services and European Agencies
* EU national bodies
* The general public, including the broader scientific community

# FAIR Data

The data produced and/or used in the project are detectable with metadata, and identifiable and locatable by means of a standard identification mechanism (PIDs), such as unique Digital Object Identifiers, which will allow them to be searched within the repository of the Bionic **VEST** project.

## Naming conventions

For metadata, dataset and template names we will define a naming convention consisting of three mandatory parts:

* A prefix, indicating whether it is a dataset, a metadata record or a template
* A root composed of:
  * the short and meaningful name of the dataset/template
  * the acronym/short name of the data provider organisation(s) (Bionic **VEST** by default for templates)
* A suffix indicating the date of the last upload into the Repository, in YYYYMMDD format.

Each of these elements is separated by an underscore (_). In addition to this, search keywords will be provided that optimise the possibilities of reuse. The naming convention will be applied to the versioning of the data, metadata and templates and, in general, of the files stored in the Repository, with the date suffix indicating the last version of the file uploaded into the Repository.

## Making data openly accessible

The data will be directly accessible to the Bionic **VEST** consortium partners via the Bionic **VEST** repository. To share data with Bionic **VEST** consortium partners, a repository has been set up. This provides password-protected access to the data through a web interface, allowing users to easily view, upload and download files. The Bionic **VEST** repository has been set up for this project as it:

* Is a platform that facilitates sharing of data, intermediate results, and results
* Is developed by ULPGC, one of the partners of the Bionic **VEST** project, so changes can be made to improve the outcomes
* Is needed to enable the analysis of vestibular implant data, but also to meet the Bionic **VEST** dissemination goals
* Enables Data Owners/Data Providers to choose which research they will contribute their data to; the use of the data can be detailed, diversified and flexible according to the purposes and interests of the Data Owners/Data Providers
* Will reach GDPR compliance by: o Logging the user identity during data access, download and upload, including version control (a sketch of such a log record is given below). This enables the availability of, and access to, the data to be restored in a timely manner in the event of a physical or technical incident.
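To illustrate the access-logging mechanism described above, the following is a minimal sketch in Python (the record fields, example values and function name are hypothetical illustrations, not the actual repository implementation); note that the example file name also follows the naming convention defined earlier (prefix, root with provider acronym, YYYYMMDD suffix):

```python
import json
from datetime import datetime, timezone

def log_access(user_id: str, file_name: str, action: str, version: int) -> str:
    """Return one audit-log record as a JSON line.
    action is one of: "view", "upload", "download"."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,      # identity of the authenticated repository user
        "file": file_name,    # pseudonymised dataset, named per the convention
        "action": action,
        "version": version,   # version control: which upload the action refers to
    }
    return json.dumps(record)

# Hypothetical example: a partner downloads version 3 of a dataset.
print(log_access("researcher01", "dataset_clinicalrecords_ULPGC_20240115",
                 "download", 3))
```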
For data generated with Bionic **VEST** co-funding during the course of Bionic **VEST**, the Data Owner/Data Provider shall agree that these data are transferred at a high level of granularity to the Bionic **VEST** repository (single measurement data in an anonymised or pseudonymised way), and that the accompanying variables of the study that are needed to address the envisaged research purpose(s) are also provided as single measurement data.

## Making data interoperable

All the data generated and the software applications are standard elements in the field of treatment and diagnosis of vestibular pathology. For this we follow the guidelines published by the Bárány Society. Likewise, the vocabulary, standards and methodologies for metadata and data follow those described by the Bárány Society, so that the data are interoperable. In addition, standard vocabularies will be used for all types of data present in the data set, to allow interdisciplinary interoperability.

## Increase data re-use

This section will be compiled throughout the course of the project, as we obtain more information on the datasets that are made available for Bionic **VEST**.

# Allocation of resources

The maintenance of the project website, including the annual maintenance of its repository, will cost around €500/year. Maintenance costs are covered by the direct costs of the website for the Bionic **VEST** project. The management of the data is the responsibility of the partners ULPGC and SCS. The data will be maintained in the Bionic **VEST** repository for 4 years after the project has been completed. Once this term has expired, and if the repository is no longer maintained, a copy of the data will be stored in raw form by each of the partners of the Bionic **VEST** consortium.

# Data security

The repository is under SSL encryption. Likewise, the data loaded into the repository will be anonymised. On a daily basis, the repository generates an incremental backup of the data, the database structures and the users.

# Ethical aspects

The transfer of data on human subjects to the Bionic **VEST** repository is only considered when the following approvals are in place:

* informed consents,
* ethics approval, and
* approval by local data protection authorities, if applicable.

Such approval will cover the purpose for which the data are to be used within Bionic **VEST** and allows for the transfer of individual or aggregated data to the Bionic **VEST** repository. The informed consents will be stored by the partner centre from which the patient comes.
0314_Residue2Heat_654650.md
# Introduction

## Scope

The _Residue2Heat_ Data Management Plan (DMP) constitutes one of the outputs of the work package dissemination, communication and exploitation, dedicated to raising awareness and promoting the project and its related results and achievements. The present deliverable is prepared at an early project stage (Month 12), in order to establish a data management strategy from the project onset. It is also envisaged that the Data Management Plan will be implemented during the entire project lifetime and updated on a yearly basis. The main focus of the _Residue2Heat_ data management framework is to ensure that the project's generated and gathered data can be preserved, exploited and shared for verification or reuse in a consistent manner. The main purpose of the Data Management Plan (DMP) is to describe _Research Data_ with the metadata attached, to make them _discoverable_, _accessible_, _assessable_, _usable beyond the original purpose_ and _exchangeable_ between researchers. Research data is defined in the "Guidelines on Open Access to Scientific Publication and Research Data in Horizon 2020" (2015) as: "_Research data_ refers to information, in particular facts or numbers, collected to be examined and considered and as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form."

According to the EC-provided documentation 1 for data management in H2020, aspects like research data access, sharing and security should also be addressed in the DMP. This document has been produced following these guidelines and aims to provide a policy for the project partners to follow.

## Objectives

The generated and gathered research data need to be preserved in line with the EC requirements. They play a crucial role in the exploitation and verification of the research results and should be effectively managed. This Data Management Plan (DMP) aims at providing a timely insight into the facilities and expertise necessary for data management both during and after the project, to be used by all _Residue2Heat_ partners. The most important reasons for setting up this DMP are:

* Embedding the _Residue2Heat_ project in the EU policy on data management. The rationale is that the Horizon 2020 grant consists of public money and therefore the data should be accessible to other researchers;
* Enabling verification of the research results of the _Residue2Heat_ project;
* Stimulating the reuse of _Residue2Heat_ data by other researchers;
* Enabling the sustainable and secure storage of _Residue2Heat_ data in repositories.

This second version of the Data Management Plan is submitted to the EU in December 2016. It is important to note, however, that the document will evolve and further develop during the project's life cycle. It can be identified by a version number and a date. Updated versions will be uploaded by project partner OWI, which has primary responsibility for data management.

# Findable, accessible, interoperable and reusable (FAIR) data

This document takes into account the latest "Guidelines on FAIR Data Management in Horizon 2020". The _Residue2Heat_ project partners should make their research data **findable, accessible, interoperable and reusable** (**FAIR**) and ensure that they are soundly managed.
Good research data management is not a goal in itself, but rather the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and reuse 1 .

## Data Management Plan

Data Management Plans (DMPs) are a key element of good data management. A DMP describes the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project. As part of making research data findable, accessible, interoperable and re-usable (FAIR), a DMP should include information on 2 :

* the handling of research data during and after the end of the project;
* what data will be collected, processed and/or generated;
* which methodology and standards will be applied;
* whether data will be shared/made open access;
* how data will be curated and preserved.

# Residue2Heat implementation of FAIR Data

## Data Summary

It is a well-known phenomenon that the amount of data is increasing while the use and re-use of data to derive new scientific findings is more or less stable. This does not imply that the data currently unused are useless - they can be of great value in the future. The prerequisite for meaningful use, re-use or recombination of data is that they are well documented according to accepted and trusted standards. Those standards form a key pillar of science because they enable the recognition of suitable data. To ensure this, agreements on standards, quality levels and sharing practices have to be defined. Strategies have to be established to preserve and store the data over a defined period of time in order to ensure their availability and re-usability after the end of the _Residue2Heat_ project.

Data considered for open access would include items such as fuel properties, energy flows and balances, modelling calculations, etc. For example, the consortium expects that the following data will be obtained and made available:

* Physico-chemical characterization of FPBO from different biomass resources (WP3);
* Data underpinning the mass and energy balances for fast pyrolysis (WP6);
* Emission data and actual measurements obtained during combustion of FPBO (WP5);
* Data on combustion reaction mechanism modelling (WP4);
* Data on spray modelling (WP4);
* Background data on screening LCA calculations (WP6).

The data will be documented in 4 types of datasets:

1. **Core datasets** – datasets related to the main project activities.
2. **Produced datasets** – datasets resulting from _Residue2Heat_ applications, e.g. sensor data.
3. **Project related datasets** – datasets resulting from the documentation of the progress of the _Residue2Heat_ project. They are a collection of deliverables, dissemination material, training material and scientific publications.
4. **Software related datasets** – datasets resulting from the development of the combustion reaction mechanisms. These can be used for various purposes in the combustion area, including research tasks or the development of new appliances.

Generally, the datasets will be stored in file formats which have a high chance of remaining usable in the far future (see Annex 1). In particular, the datasets which will be available for open access will be stored in these selected file formats. In principle, the OpenAIRE 3 platform is selected to ensure open access to the datasets, persistent identifiers, data discovery and long-term preservation of the data. The open access data is useful for different stakeholder groups, from the scientific community and industry to socioeconomic actors.
For example:

* **Industry and potential end users of the residential heating systems.** To implement FPBO residential heaters in society, the potential end-users need to be aware of their options. The end users will have certain demands, such as cost and comfort levels, which the industry needs to accommodate. This will be addressed by the datasets generated in WP6 and WP7.
* **Social and environmental impacts of the _Residue2Heat_ value chain on the population.** The proposed value chain has the potential to influence the daily life of many EU residents, not only in heating their homes, but also in terms of environmental impact, social aspects such as job security, and the economic development of rural communities. The positive (and, if present, negative) effects will be documented in WP6.
* **Social and environmental impacts of the _Residue2Heat_ value chain on the regulatory framework.** To allow commercial use of FPBO in residential heating systems, both the fuel and the heating systems need to comply with numerous regulations. Examples are CE certification of the heating system (EU), the EN standard for FPBO, emission limits for both FPBO production and the heating system (national), and local development plans that need to accommodate construction and operation of the FPBO production plant (regional). In WP6 the regulatory framework on the different levels will be documented.

## FAIR Data

### Making data findable, including provisions for metadata

In order to support the discoverability of data, the OpenAIRE platform has been selected. This platform supports multiple unique identifiers (DOI, arXiv, ISBN, ISSN, etc.) which are persistent over the long term. The platform is currently being tested to determine how it can best support the _Residue2Heat_ project. This requires additional documentation of best practices with respect to:

* the discoverability of data (metadata provision);
* the identifiability of data, with reference to standard identification mechanisms such as persistent and unique Digital Object Identifiers (DOIs);
* the naming conventions used;
* the approach towards search keywords;
* the approach for clear versioning;
* specification of standards for metadata creation (if any). If there are no relevant standards available, the type of metadata that will be created, and how it is created, will be documented.

### Making data openly accessible

The consortium will store the research data in a format which is suited for long-term preservation and accessibility. To prevent file format obsolescence, some precautions have been taken. One such measure is to select file formats which have a high chance of remaining usable in the far future (see Annex 1). Furthermore, in a future update of this deliverable the following issues will be addressed:

* Specification of the data which will be made openly available; if some data is kept closed, a rationale for doing so will be given;
* Specification of where the data and associated metadata, documentation and code are deposited;
* Specification of how access will be provided in case there are any restrictions.

### Making data interoperable

In order to support the interoperability of the _Residue2Heat_ project data, a list of standards and metadata vocabularies needs to be defined. Additionally, it will be checked whether the data types present in our data sets allow inter-disciplinary interoperability. If necessary, mappings to more commonly used ontologies will be made available; a sketch of such a mapping is given below.
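As an illustration of such a mapping, the following is a minimal sketch in Python (the project-specific field names are hypothetical examples, not an agreed project vocabulary) showing how local metadata fields could be translated to the widely used Dublin Core element set:

```python
# Hypothetical mapping from project-specific metadata fields to Dublin Core
# elements (http://purl.org/dc/elements/1.1/), to support interoperability.
PROJECT_TO_DUBLIN_CORE = {
    "dataset_title": "dc:title",
    "author":        "dc:creator",
    "partner":       "dc:publisher",
    "creation_date": "dc:date",
    "file_format":   "dc:format",
    "keywords":      "dc:subject",
    "licence":       "dc:rights",
}

def to_dublin_core(record: dict) -> dict:
    """Translate a project metadata record into Dublin Core terms,
    keeping any unmapped fields under their original names."""
    return {PROJECT_TO_DUBLIN_CORE.get(key, key): value
            for key, value in record.items()}

example = {
    "dataset_title": "Physico-chemical characterization of FPBO",
    "author": "WP3 partner",
    "creation_date": "2016-12-01",
    "file_format": "text/csv",
}
print(to_dublin_core(example))
```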
The present version of the Data Management Plan does not include the actual metadata about the data being produced in the _Residue2Heat_ project. Access to this project-related metadata will be provided in an updated version of the DMP.

### Increase data re-use

In order to support data re-use, the data will be given a proper licence to permit the widest re-use possible. Most likely, the best licence to publish under will be a Creative Commons licence 3 . Other items which have to be addressed are:

* when data will become available for re-use; if applicable, it is mentioned whether a data embargo is necessary;
* the data produced and/or used in the project which is useable by third parties, in particular after the end of the project, is listed; if the re-use of some data is restricted, it is explained why this is necessary;
* data quality assurance processes;
* the length of time for which the data will remain re-usable.

## Allocation of resources

The lead for this data management task lies with OWI, with co-lead by RWTH, though all partners are involved in complying with the DMP. The partners deliver datasets and metadata produced or collected in _Residue2Heat_ according to the rules described in Annex 1. The project coordinator, and in particular the technical coordinator, are central players in the implementation of the DMP and will track compliance with the rules as documented in this DMP. The _Residue2Heat_ project partners have covered the costs for making data FAIR in their budget estimations. The long-term preservation of datasets has been secured via our internal communication platform EMDESK for up to eight years after the project is finished.

## Data security

In this project, various types of experimental and numerical data will be generated. The raw data will be stored by each partner according to their own standard procedures for a minimum of ten years after the end of the project. The processed data will become available in the form of project reports and open access publications. These data will be further exploited in webinars, articles in professional journals, and conference presentations. The OpenAIRE platform 4 has been selected for secure long-term storage of and access to these datasets. The research data used for communication, dissemination and exploitation will also be stored on the internal communication platform EMDESK ( _http://www.emdesk.com_ ) for up to 8 years after the project is finished. This internal platform is only accessible to the project partners. Access to research data which is not marked as confidential will be granted via a repository.

### Rights to access and re-use research data

Open access to research data refers to the right to access and re-use digital research data under the terms and conditions set out in the Grant Agreement. Openly accessible research data can typically be accessed, mined, exploited, reproduced and disseminated free of charge for the user. The present data management plan is set up building on the proposed Consortium Agreement of the _Residue2Heat_ partnership. The Consortium Agreement describes general rules on how data will be shared and/or made open, how it will be curated and preserved, and the proper licences to publish under, e.g. the Creative Commons licence. In an updated version of this DMP, the rights to access and re-use research data will be documented in detail.
## Ethical aspects and Legal Compliance

The legal compliance related to copyright, intellectual property rights and exploitation has been agreed on in the Consortium Agreement, which is also applicable to access to research data. It is unlikely that the _Residue2Heat_ project will produce research which is sensitive to personal and ethical concerns.

# Conclusions

This Data Management Plan (DMP) focuses on supporting the use and re-use of research data to validate or derive new scientific findings. The prerequisite for meaningful use, re-use or recombination of data is that they are well documented according to accepted and trusted standards. Such standards form a key pillar of science because they enable the recognition of suitable data. To ensure this, agreements on standards, accessibility and sharing practices have been defined. Strategies must be established to preserve and store the data over a defined period of time in order to ensure their availability and re-usability after the end of _Residue2Heat_. In particular, the metadata vocabularies and the licences permitting the widest reuse possible need to be addressed in more detail in a future update of this deliverable.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0315_PROSEU_764056.md
# Executive summary

This document corresponds to the first update of the current PROSEU Data Management Plan (DMP), submitted in August 2018 (D1.2). The PROSEU project participates in the Open Research Data (ORD) Pilot, launched by the European Commission (EC), which aims at promoting free access, reuse, repurpose, and redistribution of data generated by Horizon 2020 projects. Therefore, the purpose of the PROSEU DMP is to provide a description of the main elements of the data management policy that will be used by the Consortium with regard to the research data generated and collected during the project (excluding deliverables for the EC and other dissemination materials). Specifically, it includes detailed information about:

* The main research data that the project will collect and generate;
* How it will be processed, organised, and made accessible for verification and reuse;
* How it will be curated and preserved during and after the end of the project according to the corresponding ethical requirements (detailed in D9.1 and D9.2).

To ensure that all research data produced is managed and shared properly, according to ethical and technical standards, and to facilitate their reuse by others in an effective, efficient and transparent way, PROSEU team members have adopted a series of criteria to follow the FAIR data principles. Overall, all research data that are final versions will be shared using licences that permit the widest reuse possible (i.e. Creative Commons licences). Exceptions are those datasets containing personal data, non-final versions of working documents, datasets in which PROSEU members are not primary authors, or datasets for which a contractual obligation exists between two parties. These documents will remain closed and will not be openly distributed, nor shared with third parties. Since the DMP is a _living document_, this version includes updated information on all research data generated from month 1 to month 26 of the project (i.e. March 2018 to April 2020). According to the Grant Agreement, the DMP has to be periodically updated to include any significant change arising during the development of the project; thus, the following update (i.e. the final version) will be delivered in May 2020. This document introduces the Consortium's commitment towards Open Data and Open Access initiatives to contribute to Open Science principles, and is structured as follows: Section 1 describes the features of the research data expected to be generated and/or collected in the project. Section 2 is dedicated to the principles of FAIR data, that is, how data will be findable, accessible, interoperable, and reused. Section 3 details the costs and responsibilities associated with making data FAIR. Next, section 4 addresses data security issues. Section 5 describes the ethical and legal aspects involving the research data collected. Finally, the appendices include the description and characteristics of the main datasets that have been generated (or are expected to be generated) within the period covered by this version of the DMP (until April 2020).

# Introduction

The PROSEU project participates in the _Open Research Data (ORD) Pilot_, which aims to make the research data generated by Horizon 2020 projects findable and accessible for others, in order to maximise their reuse with as few restrictions as possible, while protecting personal and sensitive data, as well as intellectual property rights (following the premise "_as open as possible, as closed as necessary_").
Our ambition and responsibility as researchers is to ensure that all research data produced is managed and shared properly, according to ethical and technical standards, and to facilitate their reuse by others in an effective, efficient and transparent way, following open science practices. In this way, by opening up the research data, new knowledge may be easily discovered and scrutinised by a larger community of researchers and stakeholders, such as policy-makers or non-governmental organisations, among other societal actors. Therefore, the objective of the PROSEU Data Management Plan (DMP) is to provide detailed information concerning the datasets that will be generated during the project, and how these research data will be shared, archived and preserved during the lifespan of the project, as well as after its conclusion. Specifically, PROSEU collects data through several procedures, based on transdisciplinary and interdisciplinary research methods and approaches, such as: literature review; self-administered questionnaires (web-based); semi-structured interviews; focus groups; living labs; or quantitative modelling and scenario analysis, in nine EU countries (Belgium, Croatia, France, Germany, Italy, Portugal, Spain, the Netherlands, and the United Kingdom). Since the project's impacts are dependent on easy discovery, access and reusability of the research data, these will be available during and after the end of the project. Thus, the DMP describes which data can be shared, including access procedures, storage and long-time preservation. Where exceptions are necessary due to data protection and confidentiality issues, since some data cannot be anonymized and participants are entitled to confidentiality, the DMP states clearly which data will not be shared. In this regard, ethics-related aspects of data protection and research procedures have already been considered in this DMP and are described in more detail in the PROSEU Ethics Requirements documents (i.e. Deliverables 9.1 and 9.2). This document details the research datasets that PROSEU Consortium members have generated from the beginning of the project until now (M1 to M15), as well as their expectations regarding the data that will be generated within the period covered by this second version of the DMP (M16 to M26). The PROSEU DMP excludes the deliverables for the EC and other dissemination materials, which will be made publicly available through the EC website, the PROSEU project website, and the open repository ZENODO. In line with the Grant Agreement, the DMP will be updated again in deliverable D1.5 (month 27), corresponding to its final version. The document follows the guidelines on FAIR Data Management for Horizon 2020 projects and is organised in six sections that cover the description of the data (Section 1), the FAIR principles for opening research data (Section 2), resources needed for making data FAIR (Section 3), data security (Section 4), ethical aspects (Section 5), and other issues (Section 6). A detailed description and characteristics of all datasets covered by this DMP can be found as an appendix to this report.

# Data summary

The results obtained within the PROSEU project rely on efficient data collection from several sources, mainly primary and secondary, which allow us to determine what incentive structures may enable the mainstreaming of RES _prosumerism_ and, in so doing, safeguard citizen participation, inclusiveness and transparency in the Energy Union.
_Collective RES Prosumer Initiatives_ and other relevant stakeholders involved in _RES prosumerism_ (such as business model experts; FinTech companies; local, national and EU regulators and policy-makers; technology experts, etc.) will be requested to provide information about the economic, financial, legal, technological and cultural factors that drive or hinder the development and consolidation of RES _prosumerism_ in Europe; specifically, about factors that facilitate or impede collective energy-responsible behaviour and choices, including data on how prosumer initiatives deal with issues of participation, inclusiveness, gender, and transparency. PROSEU researchers collect and/or generate both quantitative and qualitative data that are used to:

1. develop a typology of prosumer experiences in Europe that will provide a framework for a meaningful comparison of RES prosumer drivers, barriers, challenges, opportunities and incentive structures across Europe (WP2);
2. produce a document with concrete policy options for regulatory frameworks that will be directed towards policy makers at EU, national, regional and local levels, as well as a policy brief on the models of participatory governance to inform the development and revision of Member States' National Energy and Climate Plans (WP3);
3. illustrate the business model archetypes and innovation potential for European prosumers, and produce a detailed policy brief on the changes needed to energy market regulations and policy to allow new business models for the Energy Union (WP4);
4. create EU, national and local technology scenarios to assess the impact of prosumers, as well as the potential drivers and barriers in various political and economic frameworks and climate conditions (WP5);
5. provide an overview of current financial and non-financial (dis)incentive structures for prosumerism in Europe (WP6).

To accomplish this, the PROSEU team members are collecting data via interviews with experts and focus groups (WP3 and WP4); technological databases (WP5); workshops (WP6); and, finally, through direct input from RES Prosumer collectives and other stakeholders (in the form of interviews or workshops, depending on the interventions conducted in the multiple Living Labs that will take place across Europe), but also drawing on previous research from other WPs (WP7).
The research data generated will allow us to gain a deeper understanding of the sociotechnical dimensions of RES _prosumerism_, and concretely:

* To map and characterise RES prosumer initiatives in Europe, and to develop a typology that accounts for their full diversity, achievements and ambitions, including sociocultural and socio-economic factors (Objective 1);
* To examine the current regulatory frameworks and policy instruments relevant for RES prosumer initiatives across the EU to produce updated Member State factsheets and policy briefs on challenges, opportunities, and incentives of regulations and policies for prosumers in nine EU member states (Objective 2);
* To identify innovative business and financial models for RES prosumers (Objective 3);
* To develop local, national and EU technology scenarios for 2030 and 2050, and technology recommendations for RES prosumers under different geographical, climatic, and socio-political conditions (Objective 4);
* To propose a set of incentive structures and a roadmap for the mainstreaming of prosumers in the Energy Union (Objective 5);
* To develop new methodological tools (based on the co-creation and co-learning methods used in the living labs) to facilitate the mainstreaming of _prosumerism_ (Objectives 6 and 7);
* To create a Prosumers Community of Interest by bringing together relevant stakeholders (Objective 8).

Table 1 and Table 2 (Appendix A and Appendix B) provide the updated description of the datasets that PROSEU partners have collected/generated (or will collect/generate) within the period covered by this DMP (until April 2020), which are directly linked with the above PROSEU objectives. These tables may undergo further modifications through the addition, removal or renaming of the datasets included, and will be updated in the following (and final) version of the DMP. PROSEU Consortium members expect that the research data and outcomes generated through the project (e.g. deliverables, policy briefs, guidelines, etc.) will be useful for other researchers from different fields, not only in the social sciences and humanities, but also in the STEM (science, technology, engineering and mathematics) sciences, as well as for current and future RES prosumer initiatives, and their potential allies such as the alternative finance sector, utility and grid operators, and representatives from governments at the local, regional, national and EU level. Thus, to facilitate the widespread reuse of the research data, and in this manner enable the reproducibility of results, PROSEU datasets use widely accepted formats and standards. These datasets are provided in text (plain text and/or comma delimited) and/or in numeric formats. PROSEU uses the most common file extensions to save the data (such as .pdf, .docx, or .csv), and follows standard file-naming conventions and keywords. The data is made available on the online repository ZENODO, a free of charge repository developed by CERN within the EU project OpenAIRE to support Open Data management for EC-funded multi-disciplinary research, and shared under the Creative Commons (CC) Attribution (BY) version 4.0 (CC BY 4.0) licence. All of the above aims to follow the FAIR principles for research data established by the EC, i.e. data should be Findable, Accessible, Interoperable and Reusable (FAIR).

# FAIR Data

The PROSEU project, as a participant in the ORD pilot, follows the FAIR principles for research data, i.e. data should be **Findable, Accessible, Interoperable and Reusable (FAIR)**.
To do so, PROSEU Consortium members have agreed on establishing a series of criteria to make data findable by other users (e.g. using metadata standards, adding keywords, DOIs, etc.); to address which data may be accessible via an open repository or should remain confidential according to ethical requirements; to foster the interoperability of the data by allowing data exchange and reuse (e.g. using standards, or open source); and to establish the licences of the data generated to permit the widest reuse possible.

## Making data findable, including provisions for metadata

Improving the ability of other researchers, policy makers, and stakeholders to find and reuse the PROSEU research data is vital to increase the impact of the project. The standards and good practices detailed in the previous version of the DMP (Deliverable 1.2) have been followed (i.e. use of metadata, DOIs, file-naming conventions and keywords) (see examples shown in Tables 3 and 4, Appendix C and D). **There are no significant changes with respect to that version.**

## Making data openly accessible

As stated in the previous version of the DMP (Deliverable 1.2), public research data generated is openly accessible via the **online repository _ZENODO_**, a free of charge repository developed by CERN within the EU project OpenAIRE to support Open Data management for EC-funded multidisciplinary research. PROSEU is already storing research data produced within the project (i.e. databases and deliverables) on that repository. The currently available documents are detailed in Table 5 (Appendix E). The direct links to those documents are the following:

* **Survey questionnaire (WP2):** _https://zenodo.org/record/3238181#.XPZhPS1Dnq0_ (Self-administered online questionnaire);
* **Country factsheets (WP2)**: _https://zenodo.org/record/3247376#.XQdbIC1DnMI_ (ZIP file, 9 country fact sheets)
* **National Energy and Climate Plans – Preliminary analysis on Prosumerism for 9 EU Member States (WP3)**: _https://zenodo.org/record/2650933#.XOKCRy9DnMI_ (Dataset, policy analysis)
* **Assessment of existing EU-wide and Member State-specific regulatory and policy frameworks of RES Prosumers (WP3)**: _https://zenodo.org/record/2607940#.XOKEGi9DnMI_ (Report; Deliverable 3.1)
* **Prosumer technology database (WP5)**: _https://zenodo.org/record/2611147#.XOJ_3S9DnMI_ (Database)
* **RES Living Labs Operational Plan (WP7):** _https://zenodo.org/record/3236049#.XPE28i1Dnq0_ (Working document; management of Living Labs)
* **Co-creating models for Prosumerism in Portugal: Our experience working with Living Labs (WP7):** _https://zenodo.org/record/3236219#.XPE3Hi1Dnq0_ (Conference contribution; abstract)

No highly specialised software tool is needed to access the PROSEU public research data shared on ZENODO. As described in Table 2 (Appendix B), most of the datasets produced within the project are available in **common text or numeric format files** (e.g. DOCX, XLSX, ODT, PDF, etc.). When possible, **open source code** and/or **open source software** (e.g. LibreOffice) will be used. Although our ambition is to distribute PROSEU research data as widely and openly as possible, **certain datasets cannot be shared or need to be shared under restrictions**.
For instance, datasets containing personal data of participants (such as names or e-mail addresses, which cannot be anonymized), non-final versions of working documents, datasets in which PROSEU members are not primary authors, or datasets for which a contractual obligation exists between two parties, will remain stored on the project's servers and will not be openly distributed, nor shared with third parties. Table 6 (Appendix F) details PROSEU members' expectations regarding the availability of the project's datasets stored in the open repository ZENODO.

## Making data interoperable

Interoperability of the data (i.e. allowing data exchange and reuse of the research data between researchers, institutions, organisations or countries) is ensured by following the good practices and standards for research that were detailed in the previous version of the DMP (Deliverable 1.2). **There are no significant changes with respect to that version.**

## Increase data reuse (through clarifying licences)

The reuse of PROSEU research data by third parties (i.e. other researchers, policy-makers, and other societal actors) is expected during the project's activities. Whenever possible, public research data is shared under the licence **Creative Commons (CC) Attribution (BY) version 4.0 (CC BY 4.0)** in order to allow the widest reuse possible. Datasets containing personal data of participants (such as names or e-mail addresses, which cannot be anonymized), non-final versions of working documents, datasets in which PROSEU members are not primary authors, or datasets for which a contractual obligation exists between two parties, remain stored on the project's servers.

# Allocation of resources

The costs for making PROSEU research data findable, openly accessible, interoperable, and reusable (i.e. FAIR), while securing any personal data collected, were detailed in Table 7 (Appendix G) of the previous version of the DMP (Deliverable 1.2). **There are no significant changes with respect to that version.**

# Data security

The procedures to ensure the correct management (i.e. data collection, long-term conservation, secure storage, protection, and destruction of data) of all PROSEU research data as well as any personal data collected from participants were detailed in the previous version of the DMP (Deliverable 1.2). **There are no significant changes with respect to that version.**

# Ethical aspects

The ethical and data protection frameworks that guide the project's research practices are in line with the EU key ethical principles and research codes of conduct, as well as the current regulations on data protection (the General Data Protection Regulation, Regulation (EU) 2016/679, applicable from 25 May 2018). A detailed description of the ethical and data protection procedures can be found in the PROSEU Deliverable 9.1 and Deliverable 9.2 (namely, Ethics Requirements 1 and 2), as well as in the previous version of the DMP (Deliverable 1.2). **There are no significant changes with respect to those documents.**

# Other issues

The application of other national, institutional, departmental, or group procedures on data management, data sharing and/or data security is not expected at this stage.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0317_HyPhOE_800926.md
Metadata content and format will be further developed in future versions of the DMP, and an example of a metadata file will be produced and appended to this plan after the first scientific publication has been made available.

# 3.2. Making data openly accessible

According to Article 29.3 of the HyPhOE Grant Agreement, beneficiaries of HyPhOE must deposit research data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible, unless a decision has been taken to protect the results. At Linköping University, research data will be stored in the institutional repository DiVA (Digitala Vetenskapliga Arkivet) via the _Dataset_ publication type. Each file may be up to 16 GB, and multiple files can be uploaded, including in compressed formats. Each dataset receives a unique identifier (Digital Object Identifier) and a persistent link. From the dataset it is possible to link to the publication. In addition, a document containing a description of the research data (e.g. an overview of the research results, the various variables included in the data material and what they mean, and how the material is to be interpreted) will be uploaded together with the dataset. At SLU, research data will be stored in the university's repository TILDA via the _Dataset_ publication type. Each dataset receives a unique identifier (Digital Object Identifier) and a persistent link. From the dataset it is possible to link to the publication. In addition, a document containing a description of the research data (e.g. an overview of the research results, the various variables included in the data material and what they mean, and how the material is to be interpreted) will be uploaded together with the dataset. Bordeaux INP will use the open data repository HAL (_Hyper Articles en Ligne_, https://hal.archives-ouvertes.fr/). HAL is an open archive implemented by the _Centre pour la Communication Scientifique Directe_ (CCSD) of the _French National Centre for Scientific Research_ (CNRS). It is a platform built in accordance with open access principles for archiving and disseminating scientific publications and data. HAL is compatible with the European OpenAIRE project. When uploading a publication, HAL can automatically retrieve the metadata to complete the filing, either directly from the PDF file (Grobid – GeneRation Of BIbliographic Data) or from the DOI number, using the CrossRef database. When uploading unpublished data, each dataset will first receive a unique identifier (a Digital Object Identifier) and will then be uploaded to HAL. In addition, a document containing a description of the research data (e.g. an overview of the research results, the various variables included in the data material and what they mean, and how the material is to be interpreted) will be uploaded together with the dataset. UNIBA will use the data repository IRIS – Institutional Research Information System (_https://ricerca.uniba.it/_). IRIS is a password-protected archive designed for collecting and handling research products, including publications. Similarly, CNR uses the data repository PEOPLE (https://www.cnr.it/people/), which is also a password-protected archive designed for collecting and handling research products, including publications. Data deposited in these databanks can be retrieved via specific fields, including Author, DOI and funding grant. UPDiderot will use the French HAL open data repository (_https://doc.archives-ouvertes.fr/_), also used by Bordeaux INP.
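The DOI-based metadata retrieval mentioned above for HAL deposits can be illustrated with a short, hypothetical sketch that queries the public CrossRef REST API; the DOI used here is a placeholder.

```python
# Hypothetical lookup of bibliographic metadata for a DOI via CrossRef.
import requests

doi = "10.1000/xyz123"  # placeholder DOI
record = requests.get(f"https://api.crossref.org/works/{doi}").json()["message"]

title = record.get("title", ["(no title)"])[0]
authors = [f"{a.get('given', '')} {a.get('family', '')}".strip()
           for a in record.get("author", [])]
print(title, "-", ", ".join(authors))
```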
The multidisciplinary open archive HAL has existed since 2000. HAL is based on the principle of self-archiving: the direct deposit by the authors of the full text of publications of any kind, but also theses, research reports, data with metadata, etc.

# 3.3. Making data interoperable

The beneficiaries of the HyPhOE project aim to collect and document data in a standardised way to enable exchange and re-use between researchers of HyPhOE. Even though the HyPhOE consortium is highly interdisciplinary, standard protocols and methods will be used to generate research data experimentally. Thus, the formats for research data will be similar across the consortium. Generated data will be preserved on institutional platforms at minimum until the end of the HyPhOE project and at maximum according to national guidelines and legislation for archiving.

# 3.4. Increase data re-use (through clarifying licences)

To enable reuse of the HyPhOE datasets and make the data available to the widest audience possible, licensing will be made through Creative Commons. At this stage of the project, all seven Creative Commons licences remain possible, and considerations for choosing a licence will be made continuously throughout the project.

# Allocation of resources

The costs foreseen to make HyPhOE datasets openly available are primarily personnel costs related to HyPhOE beneficiaries managing and storing datasets according to this plan. Prof. Gianluca Farinola and Prof. Massimo Trotta at Universita Degli Studi di Bari Aldo Moro (UNIBA) are responsible for data management within HyPhOE. The implementation of the data management plan is the responsibility of the PIs of the HyPhOE beneficiaries.

# Data security

No sensitive data, such as personal data, will be generated within HyPhOE. During the course of HyPhOE, datasets will be stored locally by the responsible beneficiary, as detailed in the table below.

Table 2. Data Storage

<table> <tr> <th> Short name </th> <th> Data Storage </th> </tr> <tr> <td> LiU </td> <td> Data generated at LiU will be stored on internal unit servers at the Laboratory of Organic Electronics, LiU. All data is backed up automatically by the central IT department at LiU (network file storage at LiU). Backups are regularly taken, which means that if data is lost for any reason, it can in most cases be recreated. </td> </tr> <tr> <td> SLU </td> <td> Data generated at SLU will be stored on internal unit servers at the Umea Plant Science Centre. Backups are regularly taken, which means that if data is lost for any reason, it can in most cases be recreated. </td> </tr> <tr> <td> Bordeaux INP </td> <td> Data generated at Bordeaux INP will be stored on internal unit servers at the Laboratoire de Chimie des Polymères Organiques (LCPO). A network-attached storage (NAS) server is used for this purpose, which is a file-level computer data storage server. The NAS installed at the LCPO contains four storage drives and is connected to the internal computer network of the LCPO. All data is backed up automatically by the NAS, ensuring that if data is lost for any reason, it can in most cases be recreated. </td> </tr> <tr> <td> UNIBA </td> <td> Data generated at UNIBA/CNR will be stored on internal unit servers at the Chemistry Department - Uniba. All data are backed up automatically. Backups are regularly taken, which means that if data is lost for any reason, it can in most cases be retrieved.
</td> </tr> <tr> <td> UPDiderot </td> <td> Data generated at UPDiderot will be stored on internal computers and on an internal unit server at the ITODYS laboratory. All data are backed up automatically each day by the server, which means that if data are lost by the users at their working computer, they can be restored within a short time. This server cannot be accessed from outside the laboratory network. </td> </tr> </table>

# Ethical aspects

No ethical or legal issues can be foreseen within the scope of the proposed project. All partners will comply with Article 34 of the Grant Agreement, and the HyPhOE consortium will use techniques and methodologies (including for data collection and management) that are appropriate for the field.

# Other issues

None of the HyPhOE beneficiaries currently has an official policy for managing and storing research data.

# Timetable for updates of the DMP

Updates of the HyPhOE DMP will be made over the course of the project in conjunction with significant changes, such as the generation of new data (not previously covered in the DMP), changes in consortium policies (such as new IPR strategies) or changes in the consortium composition. There will also be updates in conjunction with the periodic reporting.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0318_SAMS_780755.md
The expected size of the data is hard to predict in this first version of the Initial Data Management Plan; e.g., raw audio data can be expected to amount to many terabytes (TB), while temperature and weight data will amount to several gigabytes (GB). SAMS will produce several datasets during the lifetime of the project. The data will be both qualitative and quantitative in nature and will be analysed from a range of methodological perspectives for project development and scientific purposes. In addition, the data will be useful for internal project use, for beekeepers and for other researchers working in the field of Precision Beekeeping.

# FAIR data

## Making data findable, including provisions for metadata

The data generated and/or used in the project is neither identifiable by metadata nor identifiable and localizable by a standard identification mechanism. The SAMS naming conventions have been agreed internally between the developers of the data storage system (UNILV and UNIKAS). Search keywords will be provided when a dataset is uploaded. SAMS data will be stored in a closed database. If access to raw data is necessary, there is the possibility to provide specific interfaces. It is planned to give open access to data summaries and charts. In SAMS there is no possibility to provide clear version numbers, because there are no data versions; the reason for this is that each data row is supported by a timestamp. There is no plan to create metadata in SAMS.

## Making data openly accessible

All bee colony data produced by sensors within the project will be summarised and made accessible online using the developed web system. Raw data will be used internally, but if needed, access to it can be granted via specific interfaces. To make the data accessible, a dedicated web system will be developed to access the data summaries. Within the SAMS project, no special software or methods are needed to access the data: only a web browser is needed to see the data summaries and charts. If access to the raw data is granted (via specific interfaces), no specific software will be needed during the export stage, but spreadsheet software may be needed to inspect the exported data. As no special software is needed, no documentation is included. SAMS data will be stored in a database located on a server at the Latvia University of Life Sciences and Technologies (LLU). The appropriate arrangements with the identified repository were made by the Latvia University of Life Sciences and Technologies. There are no restrictions on use, so a data access committee is not needed. The conditions for data access are not yet described; nevertheless, a link to the developed system will be published within the community. No personal identification is needed within the SAMS project.

## Making data interoperable

The SAMS data produced in the project is planned to be interoperable. It is also possible to add data from other sources to the developed system. The data vocabulary and methodologies followed to make the data interoperable are still in progress and will be finalised in Version 2.0 of the Data Management Plan. Standard vocabularies will be used for all data types in our dataset. If it is unavoidable to use unusual or project-specific ontologies or vocabularies in SAMS, mappings to more commonly used vocabularies will be provided.

## Increase data re-use (through clarifying licences)

The SAMS data collected within the project will be open for everyone.
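To illustrate how the timestamped raw rows described above could be aggregated into the open summaries and charts, the following hypothetical sketch computes daily means per hive; the column names and values are assumptions chosen for illustration, not the actual SAMS database schema.

```python
# Hypothetical hive sensor rows: each raw row carries a timestamp instead of a version.
import pandas as pd

raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2018-06-01 00:00", "2018-06-01 12:00",
                                 "2018-06-02 00:00", "2018-06-02 12:00"]),
    "hive_id": [1, 1, 1, 1],
    "temperature_c": [34.1, 35.0, 34.6, 35.2],
    "weight_kg": [52.3, 52.4, 52.6, 52.8],
})

# Daily mean per hive: the kind of aggregation behind the planned summary charts.
daily_summary = raw.groupby(
    ["hive_id", pd.Grouper(key="timestamp", freq="D")]
)[["temperature_c", "weight_kg"]].mean()
print(daily_summary)
```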
SAMS data will be made available for re-use from the moment the SAMS web site (_www.sams-project.eu_) is published. Data produced in the project will be available for as long as the web system is operable; at this moment, it is planned that data remains re-usable until the project end date. In Version 1.0 of the Initial Data Management Plan, the data quality assurance processes are not described. If datasets are updated, the partner that processes the data has the responsibility to manage the different versions and to make sure that the latest version is available in the case of publicly available data.

# Allocation of resources

There are no immediate costs anticipated to make the datasets produced FAIR. Costs for data management and data storage are not defined separately; they are included in the person months for researchers, and any costs are covered from the project's direct costs. The Latvia University of Life Sciences and Technologies will be responsible for data management. At this stage of the project, the resources for long-term preservation have not yet been discussed.

# Data security

For the duration of the project, data backups are made automatically by the server. Within the project, there are no sensitive data transfers. At this stage of the project, there is no certified repository for long-term preservation and curation.

# Ethical aspects

No ethical or legal issues that could have an impact on data sharing have been identified at the moment. The SAMS project does not involve the use of human participants or personal data in the research on the bee colony data, and therefore there is no requirement for ethical review.

<table> <tr> <th> </th> <th> </th> <th> HISTORY OF CHANGES </th> <th> </th> </tr> <tr> <td> Version </td> <td> Publication date </td> <td> </td> <td> Change </td> </tr> <tr> <td> 1.0 </td> <td> 30.06.2018 </td> <td> ▪ Initial version </td> <td> </td> </tr> </table>

**Project website:** _www.sams-project.eu_

**Project Coordinator contact:** Angela Zur, Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH, An der Alster 62, 20999 Hamburg, Germany, [email protected]

_DISCLAIMER_ The sole responsibility for the content of this document lies with the SAMS project; it does not necessarily reflect the opinion of the European Union, which is not responsible for any use that may be made of the information it contains. Neither GIZ nor any other consortium member nor the authors will accept any liability at any time for any kind of damage or loss that might occur to anybody from referring to this document. In addition, neither the European Commission nor the Agencies (or any person acting on their behalf) can be held responsible for the use made of the information provided in this document.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0321_GRACE_821270.md
# Introduction

## **Data Management Plan Internal Consortium Policy**

The GRACE Data Management Plan (DMP) gives an overview of the data and information collected throughout the project and shows the interaction and interrelation of the data collecting activities within and between the work packages. The DMP will also link these activities to the GRACE partners and discuss their responsibilities with respect to all aspects of data handling. Furthermore, the GRACE DMP will lay out the procedures for data collection, consent, storage, protection, retention and destruction of data, and confirm that they comply with national and EU legislation. The DMP will ensure that the exchange of data of companies and industries is in full compliance with the participating companies' and industries' internal data protection strategies. This DMP aims at providing an effective framework to ensure comprehensive collection and handling of the data used in the project. Thereby, and wherever trade secrets of the participating companies and industries are not violated, GRACE strives to comply with the open access policy of Horizon 2020. The DMP is intended to be a living document which will be adjusted to the specific needs of GRACE throughout the project's runtime and will be adapted whenever appropriate. This is the first version of the DMP, to be revised during the course of the project. This plan will establish the measures for promoting the findings during GRACE's lifecycle and will set the procedures for the sharing of data of the project.

## **Data Management Responsible**

The primary project Data Contact person is the coordinator of the project.

<table> <tr> <th> Project Data Contact (PDC) </th> <th> **Dan Kuylenstierna** </th> </tr> <tr> <td> PDC Affiliation </td> <td> **Chalmers University of Technology** </td> </tr> <tr> <td> PDC mail </td> <td> **[email protected]** </td> </tr> <tr> <td> PDC telephone number </td> <td> **+46 723 52 61 05** </td> </tr> </table>

## **FAIR data**

The GRACE project will follow the principles of FAIR data management. Section 2 describes how data is made Findable (F), Accessible (A), Interoperable (I), and Re-usable (R).

# GRACE DATA SUMMARY

The purpose of all data being collected in the GRACE project is to meet the project objectives. In this respect, the types of data being collected are measurement data, graphical data, layouts, text documents, and reports. In line with the EU's guidelines regarding the DMP, this document should address, for each data set collected, processed and/or generated in the project, the following characteristics: dataset description, reference and name, standards and metadata, data sharing, archiving and preservation. At this point in time, an estimation of the size of the data cannot be given. To this end, the consortium has developed a number of strategies that will be followed in order to address the above elements. This section provides a detailed description of these elements in order to ensure their understanding by the partners of the consortium. For each element, we also describe the strategy that will be used to address it.

## **Data set description, reference and name**

In order to be able to distinguish and easily identify data sets, each data set will be assigned a unique name. This name can also be used as the identifier of the data set.
All data files produced, including emails, include the term “GRACE”, followed by a file name which briefly describes the content, followed by a version number (or the term “FINAL”), followed by the short name of the organisation which prepared the document (if relevant). Each data set that will be collected, processed or generated within the project will be accompanied by a brief description.

## **Standards and metadata**

This version of the GRACE DMP does not include a compilation of all the metadata about the data being produced in the GRACE project, but several domains considered in the project already follow different rules and recommendations. This is a very early stage identification of standards:

<table> <tr> <th> **For text-based documents Microsoft Office 2010 (or any other compatible version) will be primarily used: .doc, .docx, .xls, .xlsx, .ppt, .pptx. Further, approved documents will also be made available as .pdf documents.** </th> </tr> <tr> <td> **Larger datasets will be stored as either .csv or .txt file formats.** </td> </tr> <tr> <td> **Illustrations and graphic design will make use of Microsoft Visio (Format: .vsd) or Photoshop (Format: different types possible, mostly .png), and will be made available as .jpg, .psd, .tiff or .ai files.** </td> </tr> <tr> <td> **Electrical data, e.g., simulated or measured scattering parameters, will be stored using the touchstone file format (.s2p), citifiles or another equivalent format.** </td> </tr> <tr> <td> **Circuit layouts will be stored using the GDSII file format as a main rule.** </td> </tr> </table>

These file formats have been chosen because they are accepted standards and in widespread use. Files will be converted to open file formats where possible for long-term storage.

## **Data sharing, access and preservation**

The digital data created by the project will be diversely curated depending on the sharing policies attached to it. For both open and non-open data, the aim is to preserve the data and make it readily available to the interested parties for the whole duration of the project and beyond. A public Application Programming Interface (API) will be provided to registered users, allowing them access to the platform. Database compliance aims to ensure the correct implementation of the security policy on the databases by checking for vulnerabilities and incorrect data. The target is to identify excessive rights granted to users and overly simple passwords (or even the lack of a password), and finally to perform an analysis of the entire database. At this point, we can assure that at least the following measures will be considered for assuring a proper management of data:

* Dataset minimisation. The minimum amount of data needed will be stored so as to prevent potential risks.
* Access control list for user and data authentication. Depending on the dissemination level of the information, an Access Control List will be implemented, reflecting for each user the data sets that can be accessed.
* Monitoring and logging of activity. The activity of each user in the project platform, including the data sets accessed, is registered in order to track and detect harmful behaviour of users with access to the platform.
* Liability. Identification of a person who is responsible for keeping the stored information safe.
* When possible, the information will also be made available via the initiative that the EC has launched for open data sharing from research, i.e. ZENODO.ORG.

The mechanisms explained in this document aim to reduce as far as possible the risks related to data storage.

### Non-Open research data

The non-open research data will be archived and stored long-term in the BOX portal administered by Chalmers. The BOX platform is currently being employed to coordinate the project's activities and to store all the digital material connected to GRACE. If certain datasets cannot be shared (or need restrictions), the legal and contractual reasons will be explained.

### Open research data

The open research data will be archived, when deemed appropriate, on the Zenodo platform (https://zenodo.org/). Zenodo is an OpenAIRE trusted repository hosted by CERN, enabling researchers from all disciplines to share and preserve their research outputs, regardless of size or format. Free to upload and free to access, Zenodo makes scientific outputs of all kinds citable, shareable and discoverable for the long term.

# Allocation of resources

Data management in GRACE will be done as part of WP5, and Chalmers, as project coordinator, will be responsible for data management in the GRACE project. Each partner is responsible for their own data management costs within their own organisation. Chalmers has allocated a part of the overall WP5 budget and person months to these activities. For the time being, the project coordinator is responsible for FAIR data management. Costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions). Resources for long-term preservation, associated costs and potential value, as well as how data will be kept beyond the project and for how long, will be discussed by the whole consortium during General Assembly (GA) meetings.

# Data Security

For the duration of the project, datasets will be stored on the responsible partner's storage system. Every partner is responsible for ensuring that the data are stored safely and securely and in full compliance with European Union data protection laws. After the completion of the project, all the responsibilities concerning data recovery and secure storage will go to the repository storing the dataset. All data files will be transferred via secure connections and in encrypted and password-protected form (for example with the open source 7-zip tool providing full AES-256 encryption: http://www.7-zip.org/, or the encryption options implemented in MS Windows or MS Excel). Passwords will not be exchanged via e-mail but in personal communication between the partners.

# Ethical Aspects

This section deals with ethical and legal compliance issues, such as the consent for data preservation and sharing, the protection of the identity of individuals and companies, and how sensitive data will be handled to ensure it is stored and transferred securely. Data protection and good research ethics are major topics for the consortium of this project. Good research ethics means taking great care in all actions and preventing any situation in which sensitive information could be misused. This is what the consortium wants to guarantee for this project. Research data which contains personal data will only be disseminated for the purpose specified by the consortium.
Furthermore, all processes of data generation and data sharing have to be documented and approved by the consortium to guarantee the highest standards of data protection. GRACE partners have to comply with the ethical principles as set out in Article 34 of the Grant Agreement, which states that all activities must be carried out in compliance with:

* ethical principles (including the highest standards of research integrity, as set out, for instance, in the European Code of Conduct for Research Integrity, including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and
* applicable international, EU and national law (in particular, EU Directive 95/46/EC).

## Informed Consent

An Informed Consent Form will be handed out to any individual participating in GRACE interviews, workshops or other activities which may lead to the collection of data which will subsequently be used in the project. An example of the Informed Consent Form is shown in the Annex of this document.

## Confidentiality

GRACE partners must treat any data, documents or other material as confidential during the implementation of the project. Further details on confidentiality can be found in Article 36 of the Grant Agreement, along with the obligation to protect results in Article 27.

## Management of ethical issues

Personal data collected within this project will only be stored, analysed and used anonymously. The individuals will be informed comprehensively about the intended use of the information collected from them and have to agree to the data collection for this scientific purpose with their active approval in the form of a written consent. The identity of any individual interviewed or otherwise engaged in the project (e.g. by email correspondence) will be protected by this anonymization of the data. The anonymization process guarantees that no particular individual can be identified anymore. Statistics and tables of quantitative research will be published in a manner such that it will not be possible to identify any person. The legal experts of this project will guarantee that this process, including the information for the individuals about data protection issues, fully complies with national and EU laws.

# Timetable for updates

The document will be updated at each General Assembly (GA) meeting according to the time plan in the grant agreement.

# List of datasets

This section will list the datasets produced within the GRACE project. For each partner involved in the collection or generation of research data, a short technical description is given, stating the context in which the data has been created.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0322_E2DATA_780245.md
# Executive Summary

E2Data's vision is to develop a novel Big Data software stack that will help Big Data practitioners to exploit, in a transparent and efficient manner, the available underlying diverse heterogeneous hardware resources without requiring software re-engineering by the programmers. To achieve its goals, E2Data development will be driven by the requirements of industrial partners, having the role of use case providers. This document describes the application use cases which will be experimented with in E2Data. The background of each use case is described in order to identify the underlying challenges in terms of business requirements. The E2Data project needs to respond to diverse and strict requirements, in terms of performance and infrastructure costs. Such requirements are driven by the four participating use cases, in the following domains:

**Health Use Case –** It is necessary to improve the predictive capability of a hospital readmission risk prediction algorithm. To achieve this, the patient discharge profile is enhanced with profiles of highly correlated patients (in terms of recent hospital activity). The patient correlations are established based on their medical conditions. The algorithmic solution enters into time-sensitive matrix calculations which need to be accelerated appropriately.

**Fintech (Natural Language Processing) -** Processing of unstructured data (text) is a powerful tool to extract knowledge from articles and messages, including social media. Processing of such online streams within the financial sector is useful when one needs to correlate financial news with trade facts; it is also useful in several other business domains where sentiment analysis needs to be applied (e.g. tourism). In E2Data, we focus on processing large amounts of messages from social media, such as Twitter, in order to perform semantic information extraction, sentiment analysis, summarization, interpretation and organization of their contents. Critical language processing algorithms fall on the critical path of the knowledge extraction process; therefore, acceleration is considered as a solution towards enhanced performance.

**Green Building Infrastructure.** Management and monitoring of buildings employs an Internet-of-Things (IoT) Framework cloud platform with high scalability both in terms of users, number of connected devices and volume of data processed. It accommodates real-time processing of information collected from mobile sensors and smartphones and offers fast analytic services. The Cloud Services offer real-time processing and analysis of unlimited IoT data streams with minimal delay and processing costs.

**Security and biometric recognition.** Biometric authentication, using facial recognition, is fast becoming a mainstream method of authenticating customers for high value transactions, such as the creation of bank accounts, the issuing of travel visas and unmanned border crossing by preregistered users. Such processes are coupled with a tight SLA to ensure user experience is maintained, even when a single transaction involves the processing of 30 images and the execution of deep neural networks. The nature of security means that the complexity of processing is continuously increasing. E2Data will be asked to both optimize the cost base of the platform and automate the performance optimization of code which is currently undertaken by highly skilled engineers.
From the business level, the document moves to lower, code-level details, where critical computation takes place and acceleration is needed to improve performance and eliminate bottlenecks. Candidate code kernels for acceleration are described. A first set of code includes:

* a matrix inversion algorithm
* an algorithm for lexicographical approximate matching in vocabularies using distances between words
* approximate matching in dictionaries stored as Directed Acyclic Word Graphs
* cosine similarity applied on words or q-grams, and a variation of the algorithm in order to rank documents
* fuzzy matching of terms in terminological dictionaries stored in Compressed Tries
* computing the sum / maximum / minimum / average of sensor measurements
* carrying out a large number of Pearson r correlation coefficient calculations in a very small time period
* an algorithm that takes an array of RGB images containing faces together with a set of feature locations for the faces, and 'morphs' each individual image to a standard face which is finally returned

Apart from the application use case context and the specific computation field requirements, the document includes a data management framework that the project will follow in order to manage the use cases' datasets. Results of the project action include scientific publications, research data results and datasets that are available for processing and experimentation. The Data Management Plan and the relevant principles are set in the project in order to assure access, respect for personal data, re-usability, sharing, archiving and preservation. A common framework is set by the project that is to be instantiated by each use case. The main pillars of the plan include:

* Open access to scientific publications;
* Open access to research data;
* Standards and metadata;
* Data access;
* Access and sharing policies;
* Re-usability and distribution;
* Archiving and preservation.

# Introduction

E2Data's vision is to develop a novel Big Data software stack that will help Big Data practitioners to exploit, in a transparent and efficient manner, the available underlying diverse heterogeneous hardware resources without requiring software re-engineering by the programmers. While in the de-facto scale-out or homogeneous scale-up model the applications are partitioned and sent for execution on CPU nodes, E2Data will intelligently identify which parts of the applications can be hardware accelerated and, **dynamically**, based on the current hardware resources, will send tasks for execution on the appropriate nodes. The scheduling, compilation, and execution of the tasks will happen "on-the-fly" without requiring Big Data practitioners to write non-portable, low-level code for each specific device or accelerator. This will ultimately translate to: a) **higher performance** of Big Data execution, b) **energy efficient execution**, c) **significant cost improvements** for cloud providers and end-users, and d) **enhanced scalability and performance portability** of Big Data applications. In order to achieve its ambitious goals, E2Data will follow an **application-driven approach** in which the development of the proposed solutions will be guided by the SLAs of the industrial partners. Enabling current Big Data software stacks to perform transparent and efficient heterogeneous execution is a challenging task in which pre-defined objectives and performance Key Performance Indicators (KPIs) must be satisfied in order to assess success.
The selected real-world and complex scenarios of the industrial partners are guiding the technological advancements. Since E2Data is not aiming to provide a new Big Data software stack, but rather to break the barriers of existing deployments to exploit heterogeneity, its high-impact objectives and deliverables will be immediately available to the industrial partners for exploitation. The aim of this document is to present and describe the application use cases which will be experimented with in E2Data. The background of each use case is described in order to identify the underlying challenges in terms of business requirements. Moreover, indicative code segments are identified and presented in detail. These code segments will be the focus for acceleration, as being critical in the overall computation process. More code kernels will also be identified as the project progresses. These will be the main requirements to drive the E2Data architecture design choices. Apart from the application use case context and the specific computation field requirements, the document includes a data management framework that the project will follow in order to manage the use cases' datasets.

# Use Cases Overview

The E2Data project needs to respond to diverse and strict requirements, in terms of performance and infrastructure costs. Such requirements are driven by the four participating use cases, in the following domains:

**Health Use Case –** It is necessary to improve the predictive capability of a hospital readmission risk prediction algorithm. To achieve this, it is proposed to enhance the patient discharge profile with profiles of highly correlated patients (in terms of recent hospital activity). It is proposed to establish the patient correlations based on their medical conditions. The number of unique codes identifying reasons for patient hospitalization runs into tens of thousands. Further, for a sufficiently large number of National Health Service (NHS) trusts (i.e. organisations in the UK that serve different aspects of a patient in a specific geographic area), the number of patients can easily run into millions over the span of a few years. The algorithmic solution enters into time-sensitive matrix calculations which need to be accelerated appropriately.

**Fintech (Natural Language Processing) -** Processing of unstructured data (text) is a powerful tool to extract knowledge from articles and messages, including social media. Processing of such online streams within the financial sector is useful when one needs to correlate financial news with trade facts; it is also useful in several other business domains where sentiment analysis needs to be applied (e.g. tourism). In E2Data, we focus on processing large amounts of messages from social media, such as Twitter, in order to perform semantic information extraction, sentiment analysis, summarization, interpretation and organization of their contents. This analysis occurs by extracting from each tweet phrases with specific syntactic forms. The process uses a number of different dictionary types storing a diverse range of information, from word lists (vocabularies) to complex network structures expressing syntactic patterns (kanon rules). These dictionaries provide hints with which each tweet is going to be marked. The execution involves critical and complex algorithms (word proximity, fuzzy matching, etc.) that are invoked upon each new text, thus requiring their acceleration in order to become as efficient and scalable as possible.
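To make two of the kernel families listed earlier more concrete (lexicographical approximate matching via word distances, and cosine similarity over q-grams), the following illustrative Python sketch shows straightforward reference versions. It is not E2Data code, and the vocabulary and inputs are placeholders; in the project, such routines are invoked once per incoming term, which is why offloading them to accelerators pays off at stream rates.

```python
# Reference implementations (for illustration only) of two text-matching kernels.
from collections import Counter
from math import sqrt

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def qgrams(word: str, q: int = 2) -> Counter:
    """Multiset of overlapping q-grams of a word."""
    return Counter(word[i:i + q] for i in range(len(word) - q + 1))

def cosine_similarity(a: str, b: str, q: int = 2) -> float:
    """Cosine similarity between the q-gram profiles of two words."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    dot = sum(ga[g] * gb[g] for g in ga)
    norm = sqrt(sum(v * v for v in ga.values())) * sqrt(sum(v * v for v in gb.values()))
    return dot / norm if norm else 0.0

vocabulary = ["finance", "financial", "fiancee"]                   # placeholder vocabulary
print(min(vocabulary, key=lambda w: edit_distance("finanse", w)))  # -> "finance"
print(round(cosine_similarity("finance", "financial"), 3))
```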
**Green Building Infrastructure.** Management and monitoring of buildings employs an Internet-of-Things (IoT) Framework cloud platform with high scalability in terms of users, number of connected devices and volume of data processed. It accommodates real-time processing of information collected from mobile sensors and smartphones and offers fast analytic services. The Cloud Services offer real-time processing and analysis of unlimited IoT data streams with minimal delay and processing costs. The current setup uses averaging to minimize the required storage space. For obtaining near real-time information on the buildings' status, it is necessary to shorten the averaging period, an approach that will lead to a significant data volume increase. In this context, the sensors and the platform will generate, handle, transfer and store a tremendous amount of data, which cannot be processed in an efficient manner using current platforms and techniques. Therefore, it is necessary to utilize the E2Data stack for improving the IoT platform in order to rapidly process the constantly accumulated data and tackle the issues generated by thousands of parallel deployments of sensors, each generating enormous data chunks that need to be processed almost in real time.

**Security and biometric recognition.** Biometric authentication, using facial recognition, is fast becoming a mainstream method of authenticating customers for high-value transactions, such as the creation of bank accounts, the issuing of travel visas and unmanned border crossing by pre-registered users. This has been enabled by the introduction of methods that can detect presentation attacks, such as photos, masks or replay attacks [1]. iProov is a leader in this space, and the service offers both multi-tenant and dedicated cloud-based Software-as-a-Service (SaaS) offerings to both government and commercial organisations. The data volumes involved are large; for example, the image analysis rate for one government customer recently tested at 12% of the global image upload rate of Facebook. This is coupled with a tight SLA to ensure that user experience is maintained: a single transaction, which typically requires the processing of 30 images, the execution of 10+ deep neural networks, and the execution of a large number of compute-intensive algorithms, must be completed in under 5 seconds. The nature of security is that this is an ongoing arms race, which results in the complexity of processing continuing to increase. E2Data will both optimize the cost base of the platform and automate the performance optimization of code, which is currently undertaken by highly skilled engineers, typically in a CUDA environment.

# Health Use Case - Fast Collaborative Filtering

## Description of Use Case and Requirements

To improve the predictive capability of a hospital readmission risk prediction algorithm, it is proposed to enhance the patient discharge profile with data from highly correlated patients (in terms of recent hospital activity). In particular, the aim is to establish the patient correlations based on their medical conditions. The International Classification of Diseases (ICD), maintained by the World Health Organisation (WHO), provides a system of diagnostic codes for classifying diseases. The number of unique ICD codes, identifying reasons for patient hospitalization, is on the order of tens of thousands.
Further, for a sufficiently large number of NHS trusts, the number of patients can easily run into millions over the span of a few years.

_Technical description:_ The patient's medical condition matrix is a highly sparse matrix, because most patients are typically unlikely to suffer from most medical conditions. Furthermore, over the span of a few years, this matrix, for a normal-size hospital, is likely to expand to the order of 10^10 elements. It is proposed to extract patient correlations from this matrix, for the purpose of enhancing a readmission risk prediction model, through the use of Model-Based Collaborative Filtering – a method that has been shown to be extremely successful for identifying correlations [2].

### Description of the Framing Environment

Model-based Collaborative Filtering has received significant attention, mainly as an unsupervised learning method for latent variable decomposition and dimensionality reduction. Two approaches to achieve the matrix factorization required in Collaborative Filtering are stochastic gradient descent and alternating least squares (ALS). ALS is favorable in two cases: a) when the system can use parallelization, and b) when the data is implicit [3]. Collaborative Filtering can be formulated as approximating a matrix via ALS. Once the decomposed matrices (corresponding to patients and medical conditions) have been obtained, various operations can be carried out on these matrices to calculate pairwise patient similarity scores. ALS is an iterative algorithm and can be very slow and computationally expensive, but it lends itself to parallel implementations. The ALS algorithm is as follows:

1. Initialize the two target matrices X, Y
2. repeat
3. for u = 1...n do
4. $x_u = \left( \sum_{r_{ui} \in r_{u*}} y_i y_i^{T} + \lambda I_k \right)^{-1} \sum_{r_{ui} \in r_{u*}} r_{ui} \, y_i$
5. end for
6. for i = 1...m do
7. $y_i = \left( \sum_{r_{ui} \in r_{*i}} x_u x_u^{T} + \lambda I_k \right)^{-1} \sum_{r_{ui} \in r_{*i}} r_{ui} \, x_u$
8. end for
9. until convergence

The key matrix operations required to implement the algorithm (ridge regression) are: matrix inversion, dot product calculation, and a sum of squared errors calculation to check when the convergence criterion has been met. In a single-machine (non-parallel) implementation, the matrix inversion subroutine is the most time-consuming. It is proposed to develop the matrix factorization in Java, with Tornado APIs used to provide the acceleration for the matrix inversion subroutine.

### List of Critical Code Parts

The kernels which are candidates for acceleration are enumerated in the Table below. Specifically for the Health Use Case, the kernel is the one that performs matrix inversion.

<table> <tr> <th> **Title** </th> <th> **Description** </th> </tr> <tr> <td> 1\. **Minv** </td> <td> Matrix inversion </td> </tr> </table>

## Code Kernel "Minv"

**Description -** ALS factors a matrix A into two component matrices - X (for example representing patient-feature space) and Y (for example representing medical condition-feature space). The most compute-intensive part of the ALS algorithm is the calculation of the inverse of a matrix. The proposed kernel should speed up the overall ALS by dealing with this most time-consuming part. The following Figure shows the time (in seconds) taken by the three most time-consuming methods in the ALS process: matrix inversion (Invert), dot product (Dot) and sum of squared errors (Error). The times were measured for datasets of two sizes: small (2000x5000 elements) and medium (6040x3952 elements) [4], with each experiment running for 10 iterations.
It can be seen that the time spent calculating the matrix inverse is far greater than that of the others and increases significantly as the number of elements in the dataset increases.

_**Figure:** Stacked bar graph showing processing times (seconds) of the three most time consuming subroutines in the ALS implementation for different sized datasets_

Further, as stated, ALS is an iterative algorithm in which the two factor matrices (X, Y) are alternately calculated such that the sum of squared errors is minimized. Thus, for a given size of dataset, the number of iterations needed to reach an acceptable error level varies. In general, the higher the number of iterations permitted, the more likely it is that the desired error level (smaller is better) will be reached. In the following Figure, we compare the times taken by the subroutines as the number of iterations is increased. It is clear that the time spent calculating the inverse matrix increases very rapidly (almost exponentially) with the number of iterations permitted for the optimization process.

_**Figure:** Stacked bar graph showing processing times (seconds) of the three most time consuming subroutines in the ALS implementation for different iteration counts_

Input: float matrix A (m x m)
Output: float matrix A' (m x m)

**Data involved -** The main input data should be received as a two-dimensional array of floating-point values. The input matrix is expected to be square.

**Accelerating Code Kernel -** In order to support daily execution of the factorization routines, it is important to accelerate the ALS task, which can be very time-consuming for large matrices; matrix inversion is the most time-consuming operation in the algorithm. The above analysis clearly highlights the importance of the matrix inversion routine in speeding up the overall matrix factorization process.

**About the code** – The existing code is in Java, utilizing libraries [5] _https://github.com/grafosml/okapi/blob/master/src/main/java/ml/grafos/okapi/cf/als/Als.java_. The EXUS ALS code will use this (or an equivalent implementation) as the base and utilize the Tornado APIs for acceleration. The specific Java implementation of ALS that will be used has not been finalized; however, the central process of matrix inversion is expected to be common to all of them.
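To illustrate the shape of the computation targeted by Minv, the following is a minimal Gauss-Jordan inversion in plain Java. This is a sketch under stated assumptions: class and method names are illustrative, no special handling of ill-conditioned matrices is shown, and the project will actually start from the Okapi ALS code referenced above rather than this snippet:

```java
public final class Minv {
    /** Gauss-Jordan inversion with partial pivoting.
        Returns a new m x m matrix A^-1; throws if A is singular. */
    public static float[][] invert(float[][] a) {
        int m = a.length;
        // Build the augmented matrix [A | I].
        float[][] aug = new float[m][2 * m];
        for (int i = 0; i < m; i++) {
            System.arraycopy(a[i], 0, aug[i], 0, m);
            aug[i][m + i] = 1.0f;
        }
        for (int col = 0; col < m; col++) {
            // Partial pivoting: pick the row with the largest absolute value.
            int pivot = col;
            for (int r = col + 1; r < m; r++) {
                if (Math.abs(aug[r][col]) > Math.abs(aug[pivot][col])) pivot = r;
            }
            if (aug[pivot][col] == 0.0f) throw new ArithmeticException("singular matrix");
            float[] tmp = aug[col]; aug[col] = aug[pivot]; aug[pivot] = tmp;
            // Normalize the pivot row.
            float p = aug[col][col];
            for (int j = 0; j < 2 * m; j++) aug[col][j] /= p;
            // Eliminate the column from all other rows. Rows are independent
            // once the pivot row is fixed, so this loop is the natural
            // data-parallel region for Tornado-style acceleration.
            for (int r = 0; r < m; r++) {
                if (r == col) continue;
                float f = aug[r][col];
                for (int j = 0; j < 2 * m; j++) aug[r][j] -= f * aug[col][j];
            }
        }
        // Extract the right half as the inverse.
        float[][] inv = new float[m][m];
        for (int i = 0; i < m; i++) System.arraycopy(aug[i], m, inv[i], 0, m);
        return inv;
    }
}
```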
# NLP Use Case

## Description of Use Case and Requirements

Mnemosyne is a corpus processing and information extraction platform developed by Neurolingo (_www.neurolingo.com_). The platform has been designed with the ambition to support and manipulate most of the branches of the Natural Language Processing (NLP) field, namely:

1. _Phonetics – Phonology_: Phonetics examines the physiology of speech and describes sounds in the way they are produced by the human phonetic system. Phonology looks at the operation of these sounds in a specific linguistic system (morpho-phonological relations). In each language, sounds can be classified into a finite set of phonemes. Traditionally, they include vowels: a, e, i, o; and consonants: p, f, r, m. Phonemes are assembled into syllables: pa, pi, po, to build up words.
2. _Morphology_: This second level concerns words. The word set of a language is called a lexicon. Words can appear under several forms, for instance the singular and the plural forms. Morphology is the study of the structure and the forms of a word. Usually a lexicon consists of root words. Morphological rules can modify or transform the root words to produce the whole vocabulary. Morphology concerns morphemes, i.e. the smallest word segments that can carry semantic information.
3. _Syntax_ is a third discipline, in which the order of words in a sentence and their relationships are studied. Syntax defines word categories and functions. Subject, verb, object is a sequence of functions that corresponds to a common order in many European languages, including English and French. However, this order may vary, and the verb is often located at the end of the sentence (for example in German). Parsing determines the structure of a sentence and assigns functions to words or groups of words. Syntax deals with the way and the rules according to which words are combined into larger units, creating phrases and clauses.
4. _Semantics_ is a fourth domain of linguistics. It considers the meaning of words and sentences. The concept of "meaning" or "significance" can be controversial. Semantics is understood differently by different researchers and is sometimes difficult to describe and process. In a general context, semantics could be envisioned as a medium of our thought. In applications, semantics often corresponds to the determination of the sense of a word or the representation of a sentence in a logical format. Semantics is either the study of the meaning of language elements or the study of the combination of these elements.
5. _Pragmatics_ is the fifth discipline of linguistics. While semantics is related to universal definitions and understandings, pragmatics restricts it – or complements it – by adding a contextual interpretation. Pragmatics refers to the study of linguistic production in specific cases. This means that the language is studied in its context (the participating members, the timing, the place, etc.).

Mnemosyne incorporates a large number of text processing technologies developed by Neurolingo and its scientists over a period of more than twenty years. Characteristic technologies used in the NLP process are:

1. Spelling Checking & Fuzzy Matching
2. Lexicon development, e.g. Morphology Lexicons, Thesauri, Terminology Lexicons, etc.
3. Indexing and Searching
4. Syntax Checking & Grammar Checkers
5. Named Entity Recognition
6. Information Extraction

#### Mnemosyne Features

In the following figure, we present the top-level architecture of the Mnemosyne platform.

_**Figure:** Mnemosyne Architecture_

The main features of the Mnemosyne platform are:

1. It processes collections of texts stored in various media (files, websites, databases) and in various formats (DOC, XML, HTML, PDF, TXT).
2. It uses a large number of lexical resources, such as alphabets, spelling dictionaries, morphological dictionaries, gazetteers, thesauri, etc.
3. It creates different analyses of a text. Each analysis is expressed as a sequence of annotations on parts (text spans) of the input text. The input text remains unmodified, as the annotations are stored in different parallel layers referencing parts of the input text. This architecture gives great flexibility because it permits different layers of annotations that may refer to interleaved text spans.
4. Analyzers are the software components that produce analyses. They are interconnected into flows, where the output of one analyzer constitutes the input of another. These flows can represent parallel computation tasks, increasing the throughput of the system.
5. The heart of the system is the "Kanon" formalism. The Kanon formalism is close to Unification Grammars [6]. The formalism uses grammar rules that describe the syntax of the processed language.
The execution of these rules creates the analyses described above.
6. Filtering selects which of the produced annotations are transferred to the next processing phases. This way, we minimize the information we pass on and need to handle from one phase to the next. Apart from the storage and transmission gains, we reduce the processing complexity of the following phases, because there are fewer annotations to handle and also less ambiguity to resolve.
7. Fuzzy matching mechanisms are used for named entity recognition, i.e. persons, organizations, toponyms, etc. The recognition of a named entity is not always enough. Many times we also need to identify it, i.e. to match it in a set of similar entities and return a unique id. Usually these sets are stored in legacy databases, which also provide a unique id for each entity. Mnemosyne provides two categories of fuzzy matching mechanisms: the lexicographic mechanisms, which use spelling correction techniques and distance functions in order to match two strings; and the statistical techniques, which split the strings into words or parts of words (q-grams) and use statistical formulas in order to evaluate the similarity of an input string (entity) with the entity strings in the database.
8. Specialized analyzers are responsible for exporting and/or transferring the extracted structural information to the desired formats and destinations (e.g. XML, database tables, etc.).
9. There are special mechanisms for monitoring and logging of the process flow. The mechanisms are extensible and permit the concurrent storage of logged data in different formats and media (files, databases, network, etc.). The level of detail of the logged information is configurable and is very useful in case we want to debug the process. The schema of the logged information constitutes a generic schema for information extraction projects and can be used for the quick implementation of information extraction and text analytics projects.
10. The results of the information extraction process can be viewed, verified and corrected with a specialized Annotation Editor application. This is a GUI application with functionality that permits the editing of the annotations of a text.
11. The large amounts of corpus texts, alongside the annotations produced by the information extraction process, can be semantically searched with a web application. The users can use both non-structural (text) and structural (annotations) search criteria in order to search and retrieve the information they are interested in. The results are also presented in a mixed non-structural/structural way based on a concordance view.

_Use Case Application -_ In the E2Data project, we focus on processing large amounts of messages from social media, such as Twitter, in order to perform semantic information extraction, sentiment analysis, summarization, interpretation and organization of their contents. This analysis occurs by extracting from each tweet phrases with specific syntactic forms expressed with Kanon. The process uses a number of different dictionary types storing a diverse range of information, from word lists (vocabularies) to complex network structures expressing syntactic patterns (Kanon rules). These dictionaries provide hints with which each tweet is going to be marked. After finishing execution, Mnemosyne creates a large output file, stored either in Lucene or in the local file system.

_NLP Kernels -_ Three of the most important Mnemosyne engine types, which incorporate many functions, have been chosen to be accelerated.
The common characteristics of these engines are: a) they use a type of dictionary that is static and constant, and b) they work in stream mode, i.e. we feed the engine with input (words or texts) and it returns answers. The engine types that will be accelerated are:

1. Lexicographical fuzzy matching search in vocabularies using either:
   * Directed Acyclic Graph Words (DAWG), a deterministic acyclic finite state automaton [7] that can be accessed with regular expressions.
   * Levenshtein distances between the dictionary words and the input words [8].
2. Statistical fuzzy matching and classification applied to multiword expressions and/or documents using:
   * Cosine similarity or TFIDF [9] applied on words or q-grams.
   * Okapi BM25 [10], an algorithm similar to TFIDF for ranking documents.
3. Fuzzy matching of multiword expressions using Compressed Tries [11]. Compressed Tries are used as indexes to various types of lexicons (morphological, terminological, syntactical, etc.).

### Description of the Framing Environment

To scale on a single server, Mnemosyne has been designed as a multithreaded application. It uses a number of threads to process incoming tweets. Each tweet is assigned to a different thread that performs all processing associated with the tweet. Processing may involve multiple analysis steps, depending on the type of application. There is a central coordinator that dispatches tweets to available threads for processing, while each thread accesses private data structures, both to read and to update information. Each thread creates a private version of the required dictionaries and updates its dictionary based on the tweets it processes. Although cross-tweet updates may be required in other applications, this is not the case in the applications we examine in this report. Finally, the output, which is typically longer than each tweet, is written to a file or database. In our experiments, we direct output to files, since this approach has lower overhead. There are two flavors of multi-threading within Mnemosyne: _hand-crafted thread management_ and _Java streaming_. In the first case, threads are manually started and managed by application code. In the second case, Mnemosyne uses Java 8 streams. Java 8 offers a new abstraction, called streams, that supports functional-style operations (e.g. map-reduce) on a set of elements. Streams aim to provide parallel processing of their elements without the programmer having to explicitly write parallel code; the parallelization is handled by the Java VM. The difference between collections and streams is that collections focus on giving direct access to their elements for operations such as update, whereas streams focus on the source of the stream's elements and the type of computational operations that are going to be performed on that source. Mnemosyne takes advantage of these characteristics of streams. It uses a split iterator for processing the input data, which is split into 1.5 × #cores streams. Hence, this version dynamically adapts to the capability of the server it runs on. Both concurrent versions exhibit similar performance and scaling characteristics.
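As a rough illustration of the Java streaming flavor, the following sketch shows how per-tweet work can be fanned out across cores with a parallel stream. It is a hedged example: the analysis step is a placeholder standing in for Mnemosyne's internal analyzer flows, and all names are hypothetical:

```java
import java.util.List;
import java.util.stream.Collectors;

public final class TweetPipeline {
    // Hypothetical per-tweet analysis; stands in for the real analyzer chain.
    static String analyze(String tweet) {
        return tweet.toLowerCase(); // placeholder for the actual annotation steps
    }

    public static List<String> process(List<String> tweets) {
        // parallelStream() lets the JVM distribute the map step across the
        // available cores; no explicit thread management in application code.
        return tweets.parallelStream()
                     .map(TweetPipeline::analyze)
                     .collect(Collectors.toList());
    }
}
```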
### List of Critical Code Parts

The kernels which are candidates for acceleration are enumerated in the Table below:

<table> <tr> <th> </th> <th> **Title** </th> <th> **Description** </th> </tr> <tr> <td> 1\. </td> <td> **Word Distance Kernel (WDK)** </td> <td> Lexicographical approximate matching in vocabularies using distances between words (Levenshtein) </td> </tr> <tr> <td> 2\. </td> <td> **Directed Acyclic Graph Words Kernel (DAWGK)** </td> <td> Approximate matching in dictionaries stored as Directed Acyclic Graph Words (DAWG) </td> </tr> <tr> <td> 3\. </td> <td> **TFIDFK** </td> <td> Cosine similarity (TFIDF) applied on words or q-grams </td> </tr> <tr> <td> 4\. </td> <td> **Best Match Kernel (BM25K)** </td> <td> Okapi BM25, a variation of the TFIDF algorithm for ranking documents </td> </tr> <tr> <td> 5\. </td> <td> **Compressed Tries Kernel (CTK)** </td> <td> Fuzzy matching of terms in terminological dictionaries stored in Compressed Tries </td> </tr> </table>

## Code Kernel "Word Distance Kernel" (WDK)

**Description -** The edit distance [12] between two strings of characters is defined as the minimum number of edit operations needed to transform the first string into the second. The permitted edit operations can be: insertion of a character anywhere in the first string, deletion of a character from the first string, and substitution of a character of the first string with another one. The best-known and most widely used algorithm that computes the edit distance of two strings is the Levenshtein algorithm, a typical example of dynamic programming. Despite the fact that the algorithm itself is not appropriate for acceleration, due to its recursive nature, we are going to accelerate the application of the algorithm to a large number of pairs. Given a set of words L (lexicon) and an input word w, we will apply the Levenshtein algorithm between every word of L and the input word w. Then, we will output the words with the smallest distances as candidates for the input word. There are two variations of the algorithm: one that uses a matrix with dimensions equal to the lengths of the words that we work on, and one that uses only two row vectors of that matrix. We will implement the second variation, which needs less space. The algorithm is presented below:

function LevenshteinDistance(char s[1..m], char t[1..n]):
    // create two work vectors of integer distances
    declare int v0[n + 1]
    declare int v1[n + 1]
    // initialize v0 (the previous row of distances)
    // this row is A[0][i]: edit distance for an empty s
    // the distance is just the number of characters to delete from t
    for i from 0 to n:
        v0[i] := i
    for i from 0 to m-1:
        // calculate v1 (current row distances) from the previous row v0
        // first element of v1 is A[i+1][0]
        // edit distance is delete (i+1) chars from s to match empty t
        v1[0] := i + 1
        // use formula to fill in the rest of the row
        for j from 0 to n-1:
            // calculating costs for A[i+1][j+1]
            deletionCost := v0[j + 1] + 1
            insertionCost := v1[j] + 1
            if s[i] = t[j]:
                substitutionCost := v0[j]
            else:
                substitutionCost := v0[j] + 1
            v1[j + 1] := minimum(deletionCost, insertionCost, substitutionCost)
        // copy v1 (current row) to v0 (previous row) for next iteration
        swap v0 with v1
    // after the last swap, the results of v1 are now in v0
    return v0[n]

**Data Involved -** For the testing of the kernel, we will use lexicons from the Moby Project [13].
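For concreteness, the two-row formulation above maps directly to Java; the following is a minimal sketch (illustrative class and method names). As noted in the description, the distance function itself is sequential; the outer loop over the lexicon is the embarrassingly parallel part that the WDK kernel targets:

```java
public final class WordDistance {
    /** Two-row Levenshtein distance between s and t. */
    public static int levenshtein(String s, String t) {
        int m = s.length(), n = t.length();
        int[] v0 = new int[n + 1]; // previous row of distances
        int[] v1 = new int[n + 1]; // current row of distances
        for (int i = 0; i <= n; i++) v0[i] = i;
        for (int i = 0; i < m; i++) {
            v1[0] = i + 1;
            for (int j = 0; j < n; j++) {
                int deletionCost = v0[j + 1] + 1;
                int insertionCost = v1[j] + 1;
                int substitutionCost = (s.charAt(i) == t.charAt(j)) ? v0[j] : v0[j] + 1;
                v1[j + 1] = Math.min(deletionCost, Math.min(insertionCost, substitutionCost));
            }
            int[] tmp = v0; v0 = v1; v1 = tmp; // swap rows
        }
        return v0[n];
    }

    /** Applies the distance against every lexicon word; this independent
        per-word loop is the actual acceleration target of the kernel. */
    public static int[] distancesToLexicon(String[] lexicon, String w) {
        int[] d = new int[lexicon.length];
        for (int k = 0; k < lexicon.length; k++) d[k] = levenshtein(lexicon[k], w);
        return d;
    }
}
```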
## Code Kernel "Directed Acyclic Graph Words Kernel" (DAWGK)

**Description -** The DAWG constitutes the basic structure for vocabulary handling. DAWGs are directed graphs whose edges are labeled by symbols, and each represents the set of all words "spelled" by its paths from the root to the sink. Searching a DAWG for a word is very efficient and similar to traversing a deterministic automaton to find a path. It is independent of the size of the vocabulary (i.e. the size of the DAWG) and depends only on the length of the input word. The problem arises when we search for similar words, i.e. words with a small distance from the input word. In that case, we have to traverse many paths of the automaton in order to find the optimal (minimum distance) alternatives; the DAWG is then used as a non-deterministic automaton. Mnemosyne uses a modification of the KMP (Knuth-Morris-Pratt) pattern matching algorithm as the basis of the correction functionality (fuzzy matching) for most of its lexicon engines. In this kernel, we will implement parallel versions of the search engines that Mnemosyne uses, incorporating algorithms such as BNDM [14].

## Code Kernel "TFIDFK"

**Description -** TFIDF [9] is another pattern matching technology used in Mnemosyne. It belongs to the statistical matching algorithms and can also be used as a classification and clustering mechanism. Mnemosyne uses two alternatives of the statistical approximate matching algorithms (TFIDF and BM25). These algorithms can be applied to texts ranging from one or a few words to large documents with thousands of words. The strings compared are called documents, and the basic entity compared is the word. In case the documents consist of one or a few words, Mnemosyne can also use q-grams as the basic entity of indexing and comparison; q can be 2, 3 or 4. A parallel version of the algorithm that will be implemented in the kernel is presented below:

**procedure** TFIDF(documents[0..n-1]) {
    // term frequency per document
    associative_container(string -> **int**) term_freq[n];
    // document freq. and ID
    associative_container(string -> (**int**, **int**)) doc_freq;
    **parallel_for** (i : 0..n-1) {
        // Calculate term frequency in i-th document
        **parallel_for** (term : documents[i])
            modify(term_freq[i], term, +, 1)
        // Update document frequencies for term in i-th document
        // Increment counter for each term ignoring term frequency
        // Value of ID is irrelevant at this time
        merge(doc_freq, term_freq[i],
              f = (k, (dfl, idl), tfr) -> (k, (dfl+1, idl)),
              g = (k, tfr) -> (k, (1, 0)))
    }
    // Assign unique IDs to each term. The terms can be optionally
    // sorted alphabetically. Sorting here affects the order of
    // terms in the TF-IDF matrix and output.
    // Store IDs in second element of value pair in doc_freq.
    sort-by-key(doc_freq)
    ID = 0;
    **parallel_for** (term : doc_freq) {
        modify(doc_freq, term, f = ((tf, old_ID), ID) -> (tf, ID))
        ID += 1
    }
    // Construct TF-IDF (sparse) matrix
    **parallel_for** (i : 0..n-1) {
        **for** ((term, tf) : term_freq[i]) {
            // Calculate TF-IDF score for term in i-th document
            (df, id) := lookup(doc_freq, term)
            tfidf[i, id] := tf * log((df+1)/(n+1))
        }
    }
    **return** tfidf
}

## Code Kernel "Best Match Kernel" (BM25K)

**Description -** This kernel will be a parallel version of the Okapi BM25 algorithm, which is a variation of the TFIDF algorithm. It is mostly used in query processing systems, where the system must return the set of documents most relevant to the query (itself a document). The kernel will follow the same characteristics as the previous ones. The dictionary, which constitutes the indexed documents, will be loaded into the kernel memory, and the queries will be streamed to the kernel. The kernel will respond with a ranked set of relevant document ids. There are many implementations of this algorithm using GPU acceleration.
Below, we provide links to two example implementations:

_https://link.springer.com/chapter/10.1007%2F978-3-319-64471-4_10_
_http://nbjl.nankai.edu.cn/Lab_Papers/2017/ICPADS2017.pdf_

We will investigate and experiment with these implementations and choose the most appropriate for use in Mnemosyne.

## Code Kernel "Compressed Tries Kernel" (CTK)

**Description -** Compressed Tries, or C-Tries, are used as index structures for more complex dictionary types, such as morphological, terminological, thesaurus, etc. They are also used as the basis for storing the compiled form of Kanon rules. This kernel will be a parallel implementation of approximate searching on C-Tries. Mnemosyne uses a modified KMP (Knuth-Morris-Pratt) algorithm to search C-Tries in order to find similar terms. A trie, or digital tree [11], is a tree type used as an index structure when the keys are strings. The nodes of the tree have edges labeled with characters from the alphabet. This way, searching for a string of length M needs M node visits in order to either find the indexed object or return failure. The C-trie optimizes the space needed by each node by using a bit mask of the alphabet used and a length member holding the number of children (nodes of the next level) of the left siblings of the current node. The kernel will follow the same architecture as the previous ones, i.e. it will load the dictionary data into memory and will be streamed with keys (strings) that will be checked against the dictionary. The kernel will reply with the ID (a number) of the object having the key. In case the key is not stored in the dictionary, the kernel will reply with a set of approximate keys together with their distances from the input key.

**Data Involved -** The kernel will be tested with a modified version of a morphological Greek dictionary developed by Neurolingo [15].

# Green Buildings Use Case

## Description of Use Case and Requirements

The SparkWorks IoT Framework cloud platform is designed to enable an easy and fast implementation of applications that utilize an Internet-of-Things infrastructure. It offers high scalability in terms of users, number of connected devices and volume of data processed. The platform accommodates real-time processing of information collected from mobile sensors and smartphones and offers fast analytic services. The Cloud Services offer real-time processing and analysis of unlimited IoT data streams with minimal delay and processing costs. In its current deployment, over 400MB of data are produced daily, resulting in a yearly data volume of approximately 140GB. However, the current setup uses averaging to minimize the required storage space. For obtaining near real-time information on the building status, it is necessary to shorten the averaging period (now set to 5 minutes), an approach that will lead to a significant data volume increase. In this context, the deployed IoT infrastructure will generate, handle, transfer and store a tremendous amount of data, which cannot be processed in an efficient manner using current platforms and techniques. SparkWorks will utilize the E2Data stack for improving its IoT Platform in order to rapidly process the constantly accumulated data and tackle the issues generated by thousands of parallel deployments of sensors, each generating enormous data chunks that need to be processed almost in real time.

### Description of the Framing Environment

The SparkWorks platform receives streaming data from multiple types of sensors and provides real-time analytics over them.
In general, the Sparks Engine is a processing engine that provides the analytics, together with a storage system that is used for storing those results. The processing engine receives events from multiple sensors and executes aggregate operations on these events. The output of the engine is stored in the storage system. Sensors produce (periodically or asynchronously) events that are sent to the Sparks Processing Engine via RabbitMQ. Those events are usually tuples of values: a value and a timestamp. All data received are collected and forwarded to a queue. From there, they get processed in real time by an Apache Storm cluster. The Storm cluster has a number of topologies for processing based on the data type. Each topology is responsible for a unique type of sensor, such as general measurement sensors (temperature, humidity, wind speed, etc.), power measurement sensors, etc. The produced analytics are output as summaries, which are stored permanently in a NoSQL (MongoDB) database. The processing engine is composed of topologies for every type of sensor. Each topology can easily be modified in order to accommodate aggregation operations. The engine, which is implemented with Apache Storm, consists of topologies. As already mentioned above, each topology is responsible for a specific type of sensor and consists of a chain of aggregators, which we call process blocks or process levels. The process blocks can aggregate data for specific time intervals. Events which enter the Storm cluster are processed consecutively. First, the Storm topology performs aggregation operations on the streaming data; e.g., for a temperature sensor, Storm calculates the average values of the 5-minute interval and stores them to memory and disk for further processing (when the topology receives more than one event for the same 5-minute interval, it calculates the average of those events). Every consecutive 5-minute-interval aggregate is kept in memory (the topology keeps 48 interval values 'k' for the 5-minute, hour, day and month intervals of each device) or stored on disk. The next step is to update the hour intervals. For that reason, the topology updates the 5-minute intervals inside the buffer of the hour processor and stores the average of those 5-minute intervals. The process is the same for the daily processor, but the topology also stores the max/min of the day (based on the hour intervals), and likewise for the monthly and yearly processors. For power consumption sensors, the scheme (topologies inside Storm) is the same, with the difference that the topology has to calculate and store the power consumption.

**Aggregators** are used to perform aggregation operations on input streaming data. The topologies use aggregation for **Power Consumption calculation** (calculate the power consumption of the stream values), **Sum calculation** (summarize the streaming values) and **Average calculation** (calculate the average of the streaming values).

_**Figure 2:** Aggregation topology_
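To make the process-block idea concrete, the following is a minimal sketch of a 5-minute averaging bolt, assuming Apache Storm's BaseBasicBolt API. The field names, windowing policy and class name are illustrative, not the actual SparkWorks topology code:

```java
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

/** Hedged sketch of a 5-minute averaging process block. */
public class AverageBolt extends BaseBasicBolt {
    private static final long WINDOW_MS = 5 * 60 * 1000;
    private double sum = 0;
    private long count = 0;
    private long windowStart = -1;

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        double value = tuple.getDoubleByField("value");
        long ts = tuple.getLongByField("timestamp");
        if (windowStart < 0) windowStart = ts;
        // Close the current 5-minute window and emit its average downstream
        // (e.g. to an hourly processor), then start a new window.
        if (ts - windowStart >= WINDOW_MS && count > 0) {
            collector.emit(new Values(windowStart, sum / count));
            sum = 0; count = 0; windowStart = ts;
        }
        sum += value;
        count++;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("windowStart", "average"));
    }
}
```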
### High Level Specification

The high-level indicators that need to be achieved are:

1. _Execution time_: 50% reduction in metadata categorization process time and query response time.
2. _Computational resources_: 70% reduction of the computational resources needed to perform Big Data processing with respect to the homogeneous scale-out approach.
3. _Data volume_: 40% reduction of the data volume traversing between the Network Edge and the Big Data processing tool, based on dynamic information regarding resource availability provided by E2Data's GUI.

### List of Critical Code Parts

The kernels which are candidates for acceleration are enumerated in the Table below:

<table> <tr> <th> **Title** </th> <th> **Description** </th> </tr> <tr> <td> 1\. **compute sum** </td> <td> Computes the sum of the sensor measurements. </td> </tr> <tr> <td> 2\. **compute max** </td> <td> Computes the max of the sensor measurements. </td> </tr> <tr> <td> 3\. **compute min** </td> <td> Computes the min of the sensor measurements. </td> </tr> <tr> <td> 4\. **compute average** </td> <td> Computes the average of the sensor measurements. </td> </tr> </table>

## Code Kernel "compute sum"

**Description -** Returns the sum of the readings stored up to a requested index.

Method signature: getTotal(int until)
Input (int): the index up to which the sum will be computed
Output (double): the computed sum value

#### Data involved

* Type / Structure: Array of Double
* Data source: Stream
* Volume: Up to hundreds of events
* Rate: up to 1 event/30 secs
* How accessed: AMQP

**Code-level SLA –** We need to achieve:

* **Latency:** 0.373 ms
* **Throughput:** 5833.33 events/sec

**Critical to accelerate -** This method is crucial for the efficient operation of the SparkWorks analytics engine, since it contains the computation of analytics results and is invoked every time a new sensor reading arrives on the system.

**About the code –** It is implemented in Java, utilizing the Apache Storm stream engine.

## Code Kernel "compute max"

**Description -** Returns the maximum value of the readings stored up to a requested index.

Method signature: getMax(int until)
Input (int): the index up to which the maximum value will be computed
Output (double): the computed maximum value

#### Data involved

* Type / Structure: Array of Double
* Data source: Stream
* Volume: Up to hundreds of events
* Rate: up to 1 event/30 secs
* How accessed: AMQP

**Code-level SLA –** We need to achieve:

* **Latency:** 0.373 ms
* **Throughput:** 5833.33 events/sec

**Critical to accelerate -** This method is crucial for the efficient operation of the SparkWorks analytics engine, since it contains the computation of analytics results and is invoked every time a new sensor reading arrives on the system.

**About the code –** It is implemented in Java, utilizing the Apache Storm stream engine.

## Code Kernel "compute min"

**Description -** Returns the minimum value of the readings stored up to a requested index.

Method signature: getMin(int until)
Input (int): the index up to which the minimum value will be computed
Output (double): the computed minimum value

#### Data involved

* Type / Structure: Array of Double
* Data source: Stream
* Volume: Up to hundreds of events
* Rate: up to 1 event/30 secs
* How accessed: AMQP

**Code-level SLA –** We need to achieve:

* **Latency:** 0.373 ms
* **Throughput:** 5833.33 events/sec

**Critical to accelerate -** This method is crucial for the efficient operation of the SparkWorks analytics engine, since it contains the computation of analytics results and is invoked every time a new sensor reading arrives on the system.

**About the code –** It is implemented in Java, utilizing the Apache Storm stream engine.

## Code Kernel "compute average"

**Description -** Returns the average value of the readings stored up to a requested index.

Method signature: getAvg(int until)
Input (int): the index up to which the average value will be computed
Output (double): the computed average value

#### Data involved

* Type / Structure: Array of Double
* Data source: Stream
* Volume: Up to hundreds of events
* Rate: up to 1 event/30 secs
* How accessed: AMQP

**Code-level SLA –** We need to achieve:

* **Latency:** 0.373 ms
* **Throughput:** 5833.33 events/sec

**Critical to accelerate -** This method is crucial for the efficient operation of the SparkWorks analytics engine, since it contains the computation of analytics results and is invoked every time a new sensor reading arrives on the system.

**About the code –** It is implemented in Java, utilizing the Apache Storm stream engine.
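Taken together, the four kernels share one access pattern: an aggregate over the readings stored up to a requested index. A minimal plain-Java sketch of that shared pattern is given below (illustrative only; the class and field names are hypothetical, the method signatures follow the descriptions above, and the production code lives inside Apache Storm topologies):

```java
/** Illustrative sketch of the shared aggregation pattern behind
    getTotal/getMax/getMin/getAvg. */
public final class ReadingsBuffer {
    private final double[] readings;

    public ReadingsBuffer(double[] readings) { this.readings = readings; }

    public double getTotal(int until) {
        double sum = 0.0;
        for (int i = 0; i < until; i++) sum += readings[i];
        return sum;
    }

    public double getMax(int until) {
        double max = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < until; i++) max = Math.max(max, readings[i]);
        return max;
    }

    public double getMin(int until) {
        double min = Double.POSITIVE_INFINITY;
        for (int i = 0; i < until; i++) min = Math.min(min, readings[i]);
        return min;
    }

    public double getAvg(int until) {
        return until == 0 ? 0.0 : getTotal(until) / until;
    }
}
```

Each loop is a straightforward reduction, which is exactly the shape of computation that maps well onto the E2Data acceleration path.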
# Security Use Case

## Description of Use Case and Requirements

The iProov cloud platform is a multi-tenant platform running on Microsoft Azure. The General Data Protection Regulation (GDPR) defines biometric profiles as "Special Category Personal Data". This, combined with the high-value nature of the transactions the platform runs, which include the authentication of users for the creation of European bank accounts in multiple EU countries, as well as the automated issuing of Visa documents, results in the requirement for a highly secure, as well as highly scalable and resilient, infrastructure. The current platform encompasses a number of clusters of both CPU and GPU servers to provide both the scalability and the reliability required. Peak transaction volumes for one customer result in excess of 3 Gb of image data to be processed per second.

### Description of the Framing Environment

The existing environment runs as a series of CPU and GPU clusters deployed in Microsoft Azure [16], combined with a distributed (Netherlands, UK and Eire) MongoDB [17] NoSQL database which holds account information, customer information and biometric profiles. The CPU and GPU clusters are configured similarly, as follows:

* Each virtual machine runs either one (GPU cluster) or more than one (CPU cluster) Docker [18] containers.
* Each Docker container runs a single micro-service, which is typically coordinated by Python software. However, for performance reasons, the actual execution of the kernel will be either in compiled multi-threaded C/C++ on CPUs or in C compiled to CUDA [19] to execute on Nvidia GPUs.
* The Docker containers are orchestrated by Kubernetes [20] to allow ease of management.
* Task distribution is carried out by Celery [21] running on a RabbitMQ AMQP infrastructure [22].
* Due to the nature of the payload (images), the overhead of distribution over RabbitMQ is excessive, so instead the images are encoded using Google Protobuf 2 [23] and stored in an active Memcached cluster [24]. This allows any worker to load any image in < 1ms at negligible CPU cost.
* The entire infrastructure sits behind a pair of Active/Standby Pfsense [25] firewalls, as well as the Azure firewalls. This allows the introduction of both an Intrusion Detection System (IDS) on the Pfsense servers, as well as the use of HAPROXY [26] to allow load distribution at the edge of the network.

### List of Critical Code Parts

The kernels which are candidates for acceleration are enumerated in the Table below:

<table> <tr> <th> **Title** </th> <th> **Description** </th> </tr> <tr> <td> 1\. **ColourMorph** </td> <td> This takes an array of RGB images containing faces, together with a set of feature locations for the faces. Each image is individually 'morphed' to a standard face, which is returned.
</td> </tr> <tr> <td> 2\. **PearsonR** </td> <td> This Kernel is required to carry out a large number of PearsonR calculations in a very small time period. </td> </tr> </table> ## Code Kernel ColourMorph The kernel takes as input a series (n) of equal sized source RGB images loaded from Memcached where the RBG uint8 images have been encoded by Protobuf 2. Each image contains a single face. A set of landmarks for each image is also read from Memcached. The kernel also has a single reference image again complete with a set of landmarks. The value of each pixel in the reference image is set to that of the closest pixel in each of the source images as described below. #### Description The feature points within the reference image are triangulated using Delauney Triangulation [27] as an one-off exercise. The location of each point within each triangle is geometrically calculated relatively to the 3 - vertices of the surrounding triangle and stored. For each pixel within the reference image: \- For each colour channel for each input image: The colour channel value for the corresponding pixel in the input image is stored onto the reference image.This means that some pixels in the input image may be used more than once whereas some may not be used at all depending on the relative sizes of the images. The output of the process is a uint8 array of (3n, w, h). Where: n is the number of input images; w is the width of the reference image and h is the height of the reference image. #### Data Involved \- The inputs are: N x 3 channel uint8 RGB images. A JSON array containing 68 feature points for each input image with each feature represented as a uint8 coordinate pair. A reference model All transient data will be accessed from Memcached where the results will also be stored. Static data such as the model, will be loaded once from disk and then stored in memory. **Code-level SLA –** We need to achieve: #### ● Latency: <500ms * **Throughput:** 240 images per second with typical images size being 220x240x3 * **Execution Time:** <200ms for each set of 30 images **Programming Efficiency:** HIGH. This is one of the key benefits for this use case which can be massively parallelized. The goal of this use case is to reduce engineering effort for code optimization. **Critical to accelerate -** Acceleration is needed as the cost, both financial and in terms of resources, of processing this quantity of data is high. I/O can be a limiting factor. Significant work has been carried out to optimize the efficiency of image load. This process is an input to one of the Presentation Attack Detection (PAD) defenses of the iProov system. It is critical it is completed for each transaction or the transaction will fail. Acceleration so far - Significant work has been put into optimizing this function using hand crafted CUDA. This performs well but the cost in person effort of implementation is high. This is currently run on GPU. It parallelizes well and is a good candidate for both GPUs and Xeon Phis. #### About the code - * Language: CUDA * Libraries utilised: Memcached and Protobuf 2 for data loading. None for execution. **Hints on Parallelisation -** The calculation for every pixel of every colour channel of every image can be carried out in parallel. ## Code Kernel PearsonR This kernel takes as input a JSON structure containing two two-dimensional arrays. The first input is of shape (8192, 13) and the second of shape (4096, 13) with both input are of type float64. 
# Generic Data Management Plan

Results of the project action include scientific publications, research data results and datasets that are available for processing and experimentation. The Data Management Plan and the relevant principles are set in the project in order to assure access, respect for personal data, re-usability, sharing, archiving and preservation. A common framework is set by the project that is to be instantiated by each use case. Main pillars of the plan include:

* Open access to scientific publications;
* Open access to research data;
* Standards and metadata;
* Data access;
* Access and sharing policies;
* Re-usability and distribution;
* Archiving and preservation.

## Open Access to Scientific Publications

Scientific publications will be given open access, that is, free-of-charge online access for users. Open access will be achieved through the following steps:

1. Any paper presenting project results will acknowledge the Action: "_The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 780245-E2Data_" and display the EU emblem.
2. Any paper presenting results will be deposited, at least by the time of publication, in a formal repository for scientific papers. If the organisation does not support a formal repository [29], the paper can be uploaded to the European sponsored repository for scientific papers, Zenodo [30].
3. Authors can choose to pay "author processing charges" to ensure open access publishing, but they still have to deposit the paper in a formal repository for scientific papers (step 2).
4. Authors will ensure open access via the repository to the bibliographic metadata identifying the deposited publication. More specifically, the following will be included:
   * The terms "_European Union (EU)_" and "_Horizon 2020_";
   * "_E2Data – European Extreme Performing Big Data Stacks_", Grant agreement number 780245;
   * Publication date, length of embargo period if applicable; and
   * A persistent identifier.
5. Each case will be examined separately in order to decide on self-archiving or paying for open access publishing.

## Open Access to Research Data

Open access to research data refers to the right to access and re-use digital research data generated by project actions.
Management and sharing of research data aim to maximise opportunities for future research and to comply with best practices in the relevant subject domain. That is:

* The dataset has clear scope for wider research use;
* The dataset is likely to have long-term value for research or other purposes;
* The dataset has broad utility for reference and use by research communities;
* The dataset represents a significant output of the research project.

Openly accessible research data generated during E2Data will be accessed, mined, exploited, reproduced and disseminated free of charge for the user. Specifically, the "_Guidelines on Data Management in Horizon 2020_" clarify that the beneficiaries must:

_"Deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:_

i. _The data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;_
ii. _Other data, including associated metadata."_

The datasets to be provided will be described, and updated as they might change with the evolution of the project, within future versions of this document. For each dataset that we are going to share in the project's lifetime, policies for access and sharing, as well as policies for re-use and distribution, will be defined and applied. A generic guideline is provided in Section 6.4 "_Policies for Data Sharing and Access_", Section 6.5 "_Policies for Re-Use and Distribution_" and Section 6.6 "_Archiving and Preservation_".

## Standards and Metadata

**Metadata** is needed on the datasets. This will provide transparency and traceability, making security and auditing possible. Such data can include timestamps for various actions on the data (generation, exchange, modification), ownership, log access and action details, identifiers, etc.

**Standards** can be selected to help the data exchange, but the variation in the employed systems across Europe is very large, and there will be hundreds of standards and even large variations within the same standards and/or versions of these. Datasets should be as standardized as possible, but exchanging data purely through predefined standards will be very hard. To use standards, one does not only need to invent new standards or develop current ones, but also to implement them in the actual systems, which is a very difficult task. The reason for this is that the technological support alone is not enough; the adoption also really needs to happen. It is a rare thing that an organisation can adapt to a new standard with an existing system, since that system will likely be integrated with other systems. Also, the systems/organisations asking others to adapt to a standard will not have the budget or the governance in the other organisation. Experience shows that one needs to be able to handle a multitude of standards in order to integrate a large number of systems, and E2Data is aiming to potentially integrate a very large number of systems. A standard for integrating with E2Data should be defined, but it is a must that it can co-exist with the current standards and interfaces in the system. Even on a European level, it is hard to imagine all countries moving at the same pace.

## Policies for Data Sharing and Access

The data sharing policy is an ongoing process and is expected to be finalised later in the course of the project, as it is closely related to the definition and the requirements of the E2Data use cases.
The issues identified so far concern:

* The definition of the data owner(s);
* The definition of incentives concerning the data providers;
* The identification of user groups and the access policies concerning the data;
* The definition of access procedures and embargo periods;
* The compliance with the corresponding legal and ethical issues.

Open access to research data will be achieved in E2Data through the following steps:

1. Prepare the "_Data Management Plan_" (current document) and update it as needed;
2. Select what data we will need to retain to support validation of the project findings;
3. Deposit the research data into an online research data repository. While deciding where to store project data, the following choices will be considered, in order of priority:
   * An institutional research data repository, if available;
   * An external data archive or repository already established in the E2Data research domain (to preserve the data according to recognised standards);
   * The European sponsored repository: _http://zenodo.org/_;
   * Other data repositories (searchable here: _http://www.re3data.org_), if the aforementioned ones are ineligible.
4. License the data for re-use (the Horizon 2020 recommendation is to use CC0 or CC BY);
5. Provide information on the tools needed for validation, i.e. everything that could help a third party in validating the data (workflow, code, etc.).

Independent of the selected repository, the authors will ensure that the repository:

* Gives the submitted dataset a persistent and unique identifier, to make sure that research outputs in disparate repositories can be linked back to particular researchers and grants;
* Provides a landing page for each dataset, with metadata;
* Helps track whether the data has been used, by providing access and download statistics;
* Keeps the data available in the long term, if desired;
* Provides guidance on how to cite the data that has been deposited.

Even following the previously described steps, each case will be examined separately in order to select the most suitable online repository. As suggested by the European Commission, the partners will deposit, at the same time, the research data needed to validate the results presented in the deposited scientific publications. This timescale applies to data underpinning the publication and the results presented. Research papers written and published during the funding period will be made available with a subset of the data necessary to verify the research findings. The consortium will then make a newer, complete version of the data available within 6 months of Action completion. This embargo period is requested to allow time for additional analysis and further publication of research findings. Other data (not underpinning a publication) will be shared during the project's lifetime following a granular approach to data sharing, releasing subsets of data at distinct periods rather than waiting until the end of the Action, in order to obtain feedback from the user community and refine the data as necessary. An important aspect to take into account is who is allowed to access the data. It could be the case that part of a dataset should not be publicly accessible to everyone. In this case, control mechanisms will be established, including:

* Authentication systems that limit read access to authorised users only;
* Procedures to monitor and evaluate access requests one by one.
A user must complete a request form stating the purpose for which they intend to use the data;
* Adoption of a Data Transfer Agreement that outlines the conditions for access and use of the data.

Each time a new dataset is deposited, the consortium will decide who is allowed to access the data. Generally speaking, anonymised and aggregate data will be made freely available to everyone, whereas sensitive and confidential data will only be accessed by specific authorised users.

## Policies for Re-Use and Distribution

A key aspect of data management is to define policies so that users can learn of the existence of the data and the content it contains. People will not be interested in a set of unlabeled files published on a website. To attract interest, the partners will describe accurately the content of published datasets and, each time a new dataset is deposited, disseminate the information using the appropriate means (e.g., mailing list, press release, Facebook, website news) based on the type of data and the interested target audience. Research data will be made available in a way that can be shared and easily re-used by others. That means:

1. Sharing data using open file formats (whenever possible), so that they can be processed by both proprietary and open source software;
2. Using formats based on an underlying open standard;
3. Using formats which are interoperable among diverse internal and external platforms and applications;
4. Using formats which do not contain proprietary extensions (whenever possible).

Documenting the datasets, the data sources and the methodology used for acquiring the data establishes the basis for the interpretation and appropriate usage of the data. Each generated/collected and deposited dataset will include documentation to help users re-use it. As recommended, the license that will be applied to the data is CC0 or CC BY. If limitations exist for the generated data, these restrictions will be clearly described and justified. Potential issues that could affect how data can be shared and used may include the need to protect participant confidentiality, comply with informed consent agreements, protect Intellectual Property Rights, submit patent applications and protect commercial confidentiality. Possible measures that may be applied to address these issues include encryption of data during storage and transfer, anonymisation of personal information, development of Data Transfer Agreements that specify how data may be used by an end user, specification of embargo periods, and development of procedures and systems to limit access to authorised users only.

## Archiving and Preservation

Datasets will be maintained for 5 years following project completion. To ensure high-quality long-term management and maintenance of the datasets, the consortium will implement procedures to protect information over time. These procedures will permit a broad range of users to easily obtain, share and properly interpret both active and archived information, and they will ensure that information is:

* Kept up-to-date in content and format, so that it remains easily accessible and usable;
* Protected from catastrophic events (e.g., fire and flood), user error, hardware failure, software failure or corruption, security breaches, and vandalism.
Regarding the second aspect, solutions dealing with disaster risk management and recovery, as well as with regular backups of data and off-site storage of backup sets, are always integrated when using the official data repositories (i.e., Zenodo [30]); the partners will ensure the adoption of similar solutions when choosing an institutional research data repository.

Partners are encouraged to claim costs for resources necessary to manage and share data; these will be clearly described and justified. Arrangements for post-action data management and sharing must be made during the life of the Action. Services for long-term curation and preservation, such as POSF (Pay Once, Store Forever) storage, will be purchased before the Action ends.

# Conclusions

The use case providers (EXUS, Neurocom Luxembourg, iProov and SparkWorks/CTI) present an interesting set of code kernels that will be exercised. It includes:

* a matrix inversion algorithm;
* an algorithm for lexicographical approximate matching in vocabularies, using distances between words;
* approximate matching in dictionaries stored as directed acyclic word graphs (DAWGs);
* cosine similarity applied to words or q-grams, and a variation of the algorithm used to rank documents;
* fuzzy matching of terms in terminological dictionaries stored in compressed tries;
* computing the sum/maximum/minimum/average of sensor measurements;
* carrying out a large number of Pearson correlation coefficient (PearsonR) calculations in a very small time period;
* an algorithm that takes an array of RGB images containing faces, together with a set of feature locations for the faces, and 'morphs' each individual image to a standard face, which is finally returned.
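By way of illustration, the following is a minimal sketch of the cosine-similarity kernel applied to q-gram profiles of two words. It is written for clarity, not as the optimised E2Data implementation; the '#' padding scheme and the default q value are assumptions.

```python
from collections import Counter
from math import sqrt

def qgram_profile(word: str, q: int = 2) -> Counter:
    """Count the q-grams of a word, padded with '#' at both ends (assumed scheme)."""
    padded = f"#{word}#"
    return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

def cosine_similarity(a: str, b: str, q: int = 2) -> float:
    """Cosine of the angle between the q-gram count vectors of two words."""
    va, vb = qgram_profile(a, q), qgram_profile(b, q)
    dot = sum(count * vb[gram] for gram, count in va.items())
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("lexicon", "lexical"))  # 5 shared bigrams of 8 each: 0.625
```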
The description of the code kernels has given significant feedback to the core development team, which needs to advance the E2Data architecture to address the requirements. The computations feature different performance behaviours and computation profiles: some perform better on GPUs than on FPGAs, and vice versa. During the identification of the use case requirements, code examples have been provided for experimentation, setting well-defined targets. The accomplished work triggered the desirable and valuable collaboration between the use cases and the E2Data core development teams.

Apart from the application use case context, the project sets out a Data Management Plan that aims to ensure access, respect for personal data, re-usability, sharing, archiving and presentation. A common framework is set by the project, which will sufficiently guide each use case in treating each dataset with respect to the following:

* Open access to scientific publications;
* Open access to research data;
* Standards and metadata;
* Data access;
* Access and sharing policies;
* Re-usability and distribution;
* Archiving and presentation.
# Data Summary

## Purpose of the Data Collection / Relation to the Project Objectives

The data collected by UPWARDS consists of technical data on new wind turbines, required to parameterize the integrated simulation models, as well as social data.

<table>
<tr> <th> **Major objective 1** </th> <th> The UPWARDS project will establish a high-fidelity multi-physics, mechatronic and multi-scale simulation framework for wind turbines that enables integrated modelling of wind flow, mechanical movements, structural/control dynamics and stresses with a level of detail that today is only achievable in a sequential fashion, meaning a comprehensive holistic vision is not possible. The collected technical data will be required to parameterize the model. </th> </tr>
<tr> <td> **Major objective 2** </td> <td> UPWARDS will define a virtual prototype of a 15 MW horizontal-axis wind turbine, including descriptions of the aerodynamic design, structural design, transmission and generator, and control system. The purpose of the virtual prototype is to serve as a study case to ensure that the developed simulation tools perform as required, and to enable generation of realistic and relevant simulation results for knowledge extraction and further exploitation. </td> </tr>
<tr> <td> **Major objective 3** </td> <td> Using the innovative resources that will be developed, as well as feedback from the public and stakeholder opinions and needs that will be gathered, the UPWARDS project will perform high-fidelity simulations of important wind turbine related phenomena to exploit and increase the understanding of their physical behavior and interaction. State-of-the-art data mining methods will be used to extract and structure relevant information from the data, which will inform new wind turbine designs. </td> </tr>
</table>

To fulfill the above-mentioned objectives, a clear data management strategy and methodology, enabling data to be easily shared between partners, are both necessary. The large volume of data that will be generated during this project poses significant challenges for the implementation of such a methodology. In order to determine the needs and preferences, a questionnaire based on the "Guidelines on FAIR Data Management in Horizon 2020" of the European Commission was prepared and distributed to all partners (cf. Annex C).

To ensure easy availability of key data sets to key stakeholders and audiences, UPWARDS data will be classified into three specific categories as follows.

<table>
<tr> <th> **GOLD** Analysed data that has clear scientific significance and provides interesting results that have led to new understanding in the field, possibly supporting high-impact journal publications. Such data will be made available online using a reliable, indexable repository that is compatible with the European Commission's OpenAIRE platform, i.e. Zenodo. </th> <th> **GREEN** Analysed data that is relevant to the UPWARDS partners and can be used to develop new models within the project work plans. Such data will be shared in a searchable repository accessible by all partners, the UPWARDS website intranet. </th> <th> **WHITE** Raw data, or data that has been analysed but is not thought at this stage to be significant to either the project or the wider community. Such data may still have unseen value and will be stored in a local, searchable archive. </th> </tr>
</table>

All data generated during the project will be tracked and recorded in a project database. The stored information will describe the category of the data, the type of data, a brief description of the data and where it can be located.
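A minimal sketch of what such a tracking record could look like is given below; the field names are illustrative assumptions, not the project's actual database schema.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """One entry in a hypothetical UPWARDS data tracking database."""
    category: str     # "GOLD", "GREEN" or "WHITE"
    data_type: str    # e.g. "Raw model output data"
    description: str  # brief free-text description of the data
    location: str     # where the data can be found (repository, server, archive)

record = DatasetRecord(
    category="GREEN",
    data_type="Raw model output data",
    description="Simulation output for load case 01.",  # placeholder
    location="UPWARDS intranet repository",
)
```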
## Types, Formats, and Utility of the collected Data

The types and formats of data are described and agreed in the Grant Agreement, pp. 37-38. An update is given in the table below:

<table>
<tr> <th> **Type of data** </th> <th> **WP** </th> <th> **Standards** </th> <th> **Accessibility** </th> <th> **Curation/preservation** </th> </tr>
<tr> <td> Progress, interim and final reports </td> <td> 1 </td> <td> EC templates </td> <td> Restricted to the project partners and the EC </td> <td> Website’s intranet </td> </tr>
<tr> <td> Raw model output data </td> <td> 2 </td> <td> Model dependent </td> <td> Restricted to the project partners </td> <td> Depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Processed model output data (adapted to interface with the following models in the chain) </td> <td> 2 </td> <td> Model dependent </td> <td> Restricted to the project partners </td> <td> Depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Flow results around the wind turbine; strains, stresses, velocities, accelerations, local forces & moments in mechanisms; sensor and actuator data </td> <td> 3 </td> <td> Model dependent; curves, 3D graphical views and realistic animations </td> <td> Restricted to the project partners </td> <td> Depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Wind turbine CFD database </td> <td> 4 </td> <td> Model dependent </td> <td> Restricted to the project partners </td> <td> Depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Wind turbine near-field noise database </td> <td> 4 </td> <td> Model dependent </td> <td> Restricted to the project partners </td> <td> Depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Report and data on the effect of fatigue loading history on damage development </td> <td> 5 </td> <td> Templates established by partners </td> <td> Public domain </td> <td> Website’s intranet </td> </tr>
<tr> <td> Simulation model and data of blade substructure </td> <td> 5 </td> <td> Model dependent </td> <td> Public domain </td> <td> Website’s intranet </td> </tr>
<tr> <td> Experimental fatigue material </td> <td> 5 </td> <td> Templates established by partners </td> <td> Restricted to the project partners and the EC </td> <td> Website’s intranet </td> </tr>
<tr> <td> Integrated system simulation </td> <td> 6 </td> <td> As in WP 2-5 </td> <td> As in WP 2-5 </td> <td> Raw data: local server. Results: depending on data status (cf. section 1.3) </td> </tr>
<tr> <td> Business cases/market studies </td> <td> 7, 8 </td> <td> Templates established by partners </td> <td> Restricted to the project partners and the EC. All data that are the subject of publications or related to the open pilot strategy will be made available </td> <td> Dedicated project data management system. Open data for publications and open pilot strategies will be followed. </td> </tr>
<tr> <td> Dissemination data </td> <td> 7, 8 </td> <td> Defined in dissemination plan </td> <td> Public </td> <td> Public domain (website and intranet) </td> </tr>
</table>

In WP6, an integrated system simulation model is generated. The data depend on the models of WP2-WP5, and the data accessibility is defined in the corresponding WPs. Raw data and intermediate results of the simulations will be stored at partners’ premises.
The impact of the UPWARDS project outcomes, within and beyond its lifespan, will be maximized through a systematic set of communication, dissemination, market analysis and business planning actions that transmit the project results to the relevant stakeholders, including policy makers, industry, and society.

## UPWARDS data management methodology

The roadmap for data management in UPWARDS is shown in Figure 1.1. There are three key aspects that will be considered:

1. Architecture. The structure and function of the data archiving system: the physical locations of the data storage repositories and the tools required to manage and recall data.
2. Process. The process for adding to the UPWARDS data and ensuring that data is in a standardized format and easily searchable.
3. Reporting. The process of monitoring the data archives, and the ability to extract Key Performance Indicators that can inform decision making for further improvements in the Data Management Plan.

### Architecture

Due to the high volume of data that is expected to be generated during the project, all data will be stored locally at each partner's site, but results of significance will be shared internally using a common UPWARDS repository and externally using standard online repositories, as shown in Figure 1.1. The database listing the available data will be stored in the UPWARDS intranet and kept up to date.

A contribution of UPWARDS is to provide the integration system, with all components, as an openly accessible framework. Hence, each simulation step is packaged as a software container using Docker, and a concrete format has been defined that each container has to follow in order to be used in UPWARDS and integrated within the workflow:

* The container can be created by executing a provided build script (i.e., a Dockerfile).
* The container executes the encapsulated software by means of a single command line script (here, a Bash script named runAll.sh). This script takes input data from files and writes simulation results into files. These files are stored on the host system and mounted into the container runtime.
* Licenses of proprietary or commercial software programs (e.g., StarCCM+, Samcef) and databases (e.g., ERA5) are injected via configurable license server paths, using environment variables.
* Commercial software binaries are injected by placing the files in dedicated directories, which are then applied and integrated by the build script.
* All resources and documentation are managed in the web-based software management and version control system GitLab, which is hosted and provided by Fraunhofer ITWM and is accessible to each consortium partner.

Furthermore, for large data sets with green status (e.g. simulation results with significant relevance), Fraunhofer ITWM provides access to a large-scale file sharing server.
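As an illustration of this container format, the sketch below builds and runs one such simulation container from Python. It is a sketch under stated assumptions: the image name, data directory, environment variable name and license server address are placeholders, not the project's actual values.

```python
import subprocess

IMAGE = "upwards/flow-simulation"  # hypothetical image name
DATA_DIR = "/srv/upwards/case01"   # host directory holding input and result files

# Build the container image from the provided build script (Dockerfile);
# run this from the directory containing the Dockerfile.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# Run the encapsulated solver: the host data directory is mounted into the
# container, and the license server path is injected as an environment variable.
subprocess.run([
    "docker", "run", "--rm",
    "-v", f"{DATA_DIR}:/data",
    "-e", "LICENSE_SERVER=1999@license.example.org",  # placeholder license path
    IMAGE, "./runAll.sh",
], check=True)
```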
### Process

The management of UPWARDS data requires standardized and timely reporting of data which can be shared within the project and with external stakeholders. The preliminary standard process for handling new data generated during the project is defined in Figure 1.2 and will be the backbone of the definitive data management plan. This process will ensure that all data generated is captured and can be made available to both internal and external stakeholders if required. UPWARDS partners will also be able to see that data has been recorded in advance of processing and analysis, and can prepare activities in advance of the publication of results.

**Figure 1.2 Process**

### Reporting

Database monitoring tools will be used to extract KPIs from the extensive data sets in the archive.

# FAIR Data

UPWARDS will use the Zenodo repository (zenodo.org) to store all data which are released for free access. Zenodo is a free, large-capacity platform for the exchange and curation of research data, managed by CERN and established as a result of the OpenAIRE project. Zenodo has built-in functionality to meet most FAIR criteria and to generate searchable metadata (see section 7.2). In Zenodo, data is stored in records with associated metadata. All data must be associated with a community. For that purpose, an "UPWARDS H2020 Project" community has been established, to which all data from UPWARDS will be associated. In addition, data will be associated with other communities, such as the "Wind Energy" community.

## Making data findable, including provisions for metadata

To make the open access data findable, they will be assigned unique Digital Object Identifiers, descriptive names, keywords and metadata. Relevant search keywords will be assigned to all data sets and included in the metadata.

### Naming conventions

Data will be named using the following naming conventions:

Deliverables: [DT] UPWARDS_[DN]_[UDN].[VN]
Publications: [DT] UPWARDS_[PN]_[UDN].[VN]

* [DT] Descriptive Text
* [DN] Deliverable Number
* [PN] Publication Number
* [UDN] Unique Data Number
* [VN] Version Number

### Digital Object Identifiers (DOI)

DOIs for all datasets will be reserved and assigned with the DOI functionality provided by Zenodo. DOI versioning will be used to assign unique identifiers to updated versions of the data records.

### Metadata

The metadata associated with each published dataset will by default comprise:

* Digital Object Identifiers and version numbers
* Bibliographic information
* Keywords
* Abstract/description
* Associated project and communities
* Associated publications and reports
* Grant information
* Access and licensing information
* Language

## Making data openly accessible

As described above, the open data is exchanged via the Zenodo repository. Metadata, including licenses for individual records and data collections, can be harvested using the OAI-PMH protocol by the record identifier and the name of the collection. Metadata can also be retrieved via the public REST API. The data can be accessed on the Internet at www.zenodo.org and is therefore available via any web browser application. The data is freely searchable, and the identity of the persons accessing the data is not determined.
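For example, records could be retrieved through Zenodo's public REST API as sketched below; the community identifier and query string are assumptions made for illustration.

```python
import requests

# Query Zenodo's public REST API for records in the (assumed) UPWARDS community.
response = requests.get(
    "https://zenodo.org/api/records",
    params={"communities": "upwards", "q": "wind turbine", "size": 10},
    timeout=30,
)
response.raise_for_status()

# Each hit carries the DOI and descriptive metadata of one published record.
for hit in response.json()["hits"]["hits"]:
    print(hit["doi"], "-", hit["metadata"]["title"])
```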
## Making data interoperable

All files use standard scientific notations, such as S.I. units, and vocabulary as in ISO test standards. The Zenodo repository, where shared data will be stored, uses JSON schemas as the internal representation of metadata and offers export to other popular formats such as Dublin Core, MARCXML, BibTeX, CSL and DataCite, as well as export to Mendeley. The data record metadata will utilize the vocabularies applied by Zenodo. For certain terms these refer to open, external vocabularies, e.g. license (Open Definition), funders (FundRef) and grants (OpenAIRE). References to any external metadata are made with a resolvable URL.

## Increase data re-use

The data will be licensed under different levels of Creative Commons licenses (https://creativecommons.org/licenses/). As default, the CC-BY-SA license will be applied to open UPWARDS data. This license lets others remix, tweak, and build upon the data, even for commercial purposes, as long as they credit UPWARDS and license their new creations under identical terms. This license is often compared to "copyleft" free and open source software licenses. All new works based on CC-BY-SA licensed data will carry the same license, so any derivatives will also allow commercial use. This does not preclude the use of less restrictive licenses, such as CC-BY, or more restrictive licenses, such as CC-BY-NC, which does not allow commercial usage. This will be assessed in each case.

For data published in scientific journals, the data are made available at the same time as open access is granted for the paper or preprint; the data will accompany the paper. For data associated with public deliverables, the data will be shared after approval of the deliverable by the EC. Open data will be reusable as defined by their licenses. Data defined as confidential will not be reusable, by default, due to commercial exploitation (see the table in section 1.2). The data re-usability is only limited by the lifetime of the Zenodo repository. This is currently the lifetime of the host laboratory CERN, which has an experimental program defined for at least the next 20 years. Should Zenodo expire, its policy is to transfer data and metadata to other appropriate repositories.

# Allocation of Resources

The cost is only the person-month (PM) cost required to organize and upload the data, and it will be covered by the project grants. Fraunhofer ITWM will be responsible for the data management, with Dr. Andreas Wirsen as technical manager. Wavestone will be in charge of updating the project-related databases, i.e. Zenodo, the monitoring database in the UPWARDS intranet, and the continuous reporting feature in the EC's participant portal.

Gold data stored in the Zenodo repository: since an external, freely usable, already financed repository is used, there are no costs for long-term archiving for the project. The longevity of the data curation is only limited by the lifetime of the Zenodo repository, which currently has an experimental program defined for at least the next 20 years.

Self-archiving, or so-called 'green' open access, will also be applied through the Zenodo repository. As required, open access to the publications will be ensured within a maximum delay of 6 months. The differences between gold and green open access (including related fees) can be found in the table below.

<table>
<tr> <th> </th> <th> **Gold open access** </th> <th> **Green open access** </th> </tr>
<tr> <td> **Definition** </td> <td> Open access publishing (also called 'Gold' open access) means that an article is immediately provided in open access mode by the scientific publisher. The associated costs are shifted away from readers, and instead to (for example) the university or research institute to which the researcher is affiliated, or to the funding agency supporting the research. </td> <td> Self-archiving (also called 'Green' open access) means that the published article or the final peer-reviewed manuscript is archived by the researcher, or a representative, in an online repository before, after or alongside its publication. Access to the article is often, but not necessarily, delayed (an 'embargo period'), as some scientific publishers may wish to recoup their investment by selling subscriptions and charging pay-per-download/view fees during an exclusivity period. </td> </tr>
<tr> <td> **Options** </td> <td> • Publish in an open access journal • Or in a journal which supports open access </td> <td> • Link to the article • Select a journal that features an open archive • Self-archive a version of the article </td> </tr>
<tr> <td> **Access** </td> <td> • Public access is to the final published article • Access is immediate </td> <td> • Free access to a version of the article • A time delay may apply (embargo period) </td> </tr>
<tr> <td> **Fees** </td> <td> • An open access fee is paid by the author • Fees range between $500 and $5,000 USD, depending on the journal </td> <td> • No fee is payable by the author, as publishing costs are covered by library subscriptions </td> </tr>
<tr> <td> **Use** </td> <td> • Authors can choose between a commercial and a non-commercial user license </td> <td> • Accepted manuscripts should attach a Creative Commons licence • Authors retain the right to reuse their articles for a wide range of purposes </td> </tr>
</table>
# Data Security

Data security is as specified by the Zenodo repository (see section 8.4):

1. **Versions:** Data files are versioned; records are not versioned. The uploaded data is archived as a Submission Information Package. Derivatives of data files are generated, but original content is never modified. Records can be retracted from public view; however, the data files and record are preserved.
2. **Replicas:** All data files are stored in CERN Data Centres, primarily Geneva, with replicas in Budapest. Data files are kept in multiple replicas in a distributed file system, which is backed up to tape on a nightly basis.
3. **Retention period:** Items will be retained for the lifetime of the repository. This is currently the lifetime of the host laboratory CERN, which has an experimental program defined for at least the next 20 years.
4. **Functional preservation:** Zenodo makes no promises of usability and understandability of deposited objects over time.
5. **File preservation:** Data files and metadata are backed up nightly and replicated into multiple copies in the online system.
6. **Fixity and authenticity:** All data files are stored along with an MD5 checksum of the file content. Files are regularly checked against their checksums to ensure that file content remains constant.
7. **Succession plans:** In case of closure of the repository, best efforts will be made to integrate all content into suitable alternative institutional and/or subject-based repositories.

If the file-sharing server for large-scale data with green status expires, ITWM will inform all partners at an early stage in order to secure the data with significant relevance.
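The fixity check described in point 6 amounts to recomputing a file's MD5 digest and comparing it with the value stored on deposit; a minimal sketch (not Zenodo's internal code) is shown below.

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def fixity_ok(path: str, stored_checksum: str) -> bool:
    """True if the file content still matches the checksum recorded on deposit."""
    return md5sum(path) == stored_checksum
```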
# Ethical Aspects

No sensitive personal data will be collected (see Grant Agreement, Annex 1 (part B), pp. 97-98).

# Other Procedures

No other national, funder, sectorial, or departmental procedures for data management are planned.
# 1. Executive Summary

This data management plan (DMP) details the plan for management of data generated and collected within the MONOCLE project. The DMP describes the data management life cycle for all datasets collected, processed and/or generated by the project. It covers:

* what data will be collected, processed or generated
* how the data will be handled during and after the project
* who is considered as an owner of a data set and who it is shared with
* how the sharing of the data within and outside the project is organised
* what formats, metadata and standards the data will adhere to
* how data will be curated and preserved

MONOCLE data sets consist of a diverse range of types and formats, and standardisation of the data and data flows is one of the main project objectives.

# 2. Scope

This document, the Data Management Plan, is intended for internal and external use, describing the mechanisms that MONOCLE will put in place to ensure all public data follow the FAIR (Findable, Accessible, Interoperable, Re-usable) data management principles. This is a living document, updated periodically to reflect new data sets made available through MONOCLE. The current document presents the status and planning at month 18 of the four-year project.

Streamlining data access and interoperability from the sensor to the user is one of the main aspects of the MONOCLE project. Therefore, a number of related reports which detail the methodologies developed in the project will be of interest. At present, D5.2 already outlines the data infrastructure and standards that will be implemented in the project. D5.3 (expected September 2020) will take the form of a handbook describing the final implementation of the data flow between MONOCLE subsystems and how to use these.

# 3. Introduction

The aim of MONOCLE is to implement enabling technologies for the deployment, management and maintenance of sensors and sensor networks. Sound data management is pivotal for fully realising the benefits of MONOCLE. Well curated data will stimulate and ensure smooth collaboration between the project partners and will allow users to easily evaluate and put to use the data received from the project. For dissemination and exploitation, open access to data generated in the project will help to underpin the credibility and stimulate uptake of MONOCLE results. The MONOCLE project will follow the FAIR (Findable, Accessible, Interoperable, Re-usable) data paradigm, and this is reflected in the data management plan.

Data will be **F**indable through the various user applications that interface to the data services provided by the MONOCLE back-end. A web-based geographic information system (GIS) will be publicly available, with a data search feature acting on parameter, spatial and temporal coverage, or data originator fields. Appropriate datasets will also be registered in public archives such as ZENODO and GEOSS; this will enhance their ability to be found and **R**e-used, even if the MONOCLE back-end should cease to operate. The use of data services designed for system interoperability will guarantee that all open data within the project are widely **A**ccessible now and in future. The user applications (web based and in the form of source code) will also improve **A**ccessibility, with focused information available, including through intuitive tools. **I**nteroperability is also made possible through the use of common data formats and standardised data services. For instance, it would not matter what format the original data are stored in when requested via a Sensor Observation Service, as the response is documented and standardised.
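As a concrete illustration, a client can retrieve the capabilities document of any OGC Sensor Observation Service with a plain key-value request; the endpoint below is a placeholder, not an actual MONOCLE service URL.

```python
import requests

# Hypothetical SOS endpoint; any OGC-compliant service answers the same request.
ENDPOINT = "https://sos.example.org/service"

response = requests.get(
    ENDPOINT,
    params={"service": "SOS", "request": "GetCapabilities"},
    timeout=30,
)
response.raise_for_status()
print(response.text[:400])  # XML capabilities document describing the offerings
```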
The DMP will be updated as a "live" document during the lifetime of the project, with four scheduled release dates. Document D5.2, "System architecture and standards report", accompanied the first release of the DMP and describes data sources and interfaces in additional detail.

# 4. Data summary

## Data purpose and utility

Observation of global coastal and inland water bodies with ocean-colour satellite sensors has reached full operational potential through the latest satellite missions in the Copernicus programme. The global societal demand for water quality information through downstream EO services is increasing and expanding into the domains of public health, agriculture, aquaculture, energy and food safety, drinking water, conservation of ecosystems and biodiversity, navigation, and recreational use of water resources. Inland and transitional water bodies, however, represent a staggering range of optical and environmental diversity. A dedicated concept for EO-supporting in situ services for optically complex waters is necessitated by the limited ability of present in situ activities to add value to operational EO missions.

To improve in situ components of the GEOSS and Copernicus services in optically complex waters, MONOCLE will introduce new sensor technological development across a range of innovative platforms. MONOCLE will combine high-end reference sensors in a spatially sparse configuration with a complementary, higher-density network of low-cost sensors for smartphones and unmanned aerial vehicles (UAVs or drones). The full MONOCLE sensor suite, and the data gathered and processed with MONOCLE sensors and processing means, will serve the EO research communities for water and atmosphere with a rapidly replenishing volume of reference observation data, reducing both local-regional (improved atmospheric correction) and global (improved algorithms) observation uncertainty.

The MONOCLE integrated observation service concept, particularly when integrated with EO services, significantly lowers the technology and computing requirements for innovators in environmental observation in general, and water quality management in particular. This reduction is critical for uptake and engagement in developing regions. By making both data and supporting software openly available, the project has the potential to boost innovation with app developers, environmental consultants, data analysts and visualisation artists worldwide. The open data strategy of MONOCLE plays a central role in opening opportunities to the EO sector, not merely in Europe but also in supporting downstream users and regional information providers in data-poor regions, particularly developing countries. For the latter, MONOCLE will lower the threshold of computational and technological capacity needed to actively contribute to the global observation system.

## Data Types

MONOCLE will collect a wealth of data on water quality from multiple sources that can be categorised as one of:

* In-situ data, either:
  * Data collected by non-expert participants (e.g. citizen scientists)
  * Data from automated instruments (e.g. on buoys, ships)
  * Data from manually operated instruments (e.g. hand-held sensors, piloted aircraft)
* Satellite data of inland, transitional and coastal water bodies
* Image data collected with Remotely Piloted Aircraft Systems (RPAS)
* Pre-existing data, accessed in (external) databases or directly contributed by stakeholders
* Research results and derived data sets

Each of these data sources has specific characteristics and challenges, which are summarised below.

### Citizen generated data

MONOCLE will engage with groups of volunteers in citizen science campaigns, where the citizen scientists collect water quality data and submit these to the MONOCLE system via a mobile app. A number of different parameters will be collected by the citizen scientists, either ad hoc or during larger campaigns. Such campaigns can deliver a large amount of data in a fixed time period, but are more difficult to plan, as motivation of the volunteers is pivotal. A fundamental principle of MONOCLE data management is that the apps used to collect data will also have access to stored results, providing immediate feedback where possible. Citizen participation requires additional ethical considerations, which are discussed further below.

Citizen observations are collected through the Earthwatch FreshWater Watch app and the iSPEX app (under development). Data exchange formats are based on Sensor Observation Services (SOS), for which the app-specific data servers have set up connections to the MONOCLE back-end, communicating with the respective data stores of iSPEX and Earthwatch. Hence, no 'raw' data format is currently considered here. The global FreshWater Watch dataset currently contains more than 20,000 datasets, where each contributor is represented as a separate dataset. For iSPEX, the main mode of operation is foreseen to be in dedicated campaigns. Data volumes associated with FreshWater Watch are modest, as these take the form of forms and occasionally photos, and are likely to remain in the order of gigabytes or less. The iSPEX collects a range of smartphone camera photos, likely to range in the order of gigabytes. Data storage at this magnitude is not currently seen as an issue.

### Automated data collection

A variety of automated sensors are being deployed by project partners, such as radiometers, fluorometers and absorption meters. The sensors can be deployed at fixed positions (e.g. on buoys, poles or jetties) or on moving platforms (ships, RPAS). Data will in general be collected at high frequency and immediately transmitted to the MONOCLE system. However, if deployed in remote locations, the sensors can also collect data less frequently and store measurements while they are offline. Data acquisition, processing and transmission should all be automated with these sensors, as should quality control mechanisms. The aim of MONOCLE is to provide these sensors with interactive interfaces, so that measurements can be triggered, sensors turned on and off, or calibrations performed remotely.

During and/or following data collection, most of the optical sensors require data processing, calibration and quality control. The intention is for these processes to be highly automated, with suspect data flagged as not recommended for use and inspected by the data creator/curator. Where existing data stores are considered and an application programming interface (API) is not already in place, one will be created. The SOS interface will be preferred in the development of new communication interfaces for individual sensors.
In addition, an SOS-compliant data wrapper will be made available for legacy sensors. The Sensor Planning Service is being tested to task individual sensors, e.g. to coordinate synchronous data collection between multiple sensors or with satellite overpasses. Automated high-frequency data collection for the MONOCLE sensors is estimated in the order of tens of megabytes per observation day. Specifically, the HSP-1 transfers only calibrated and interpreted data, which marks an order-of-magnitude data volume reduction compared to its 'raw' data, which are not considered useful to end users. The same holds true for WISP-M, which stores uncalibrated data within its own data servers. So-Rad systems transfer uncalibrated data, for which calibration routines are kept on the MONOCLE back-end. Transfer, storage, and dissemination of these volumes are not currently seen as an operational issue.

### Manual data collection

The project will collect data in field situations using hand-held and manually operated instruments. This includes new sensors intended for short-term deployment, and high-end reference sensors operated only during validation campaigns, following described protocols. In all cases, the measurement records will be referenced by geo-location, UTC time-stamp and the measurement protocol which was used. These measurements are subject to further quality assurance (protocols) and quality control by the operators. Manual data collection during field campaigns is estimated to deliver in the order of several gigabytes of data per campaign. Transfer, storage, and dissemination of these volumes are not currently seen as an operational issue.

### Earth observation data

While the focus of MONOCLE is on providing a network of in situ observations to support Earth observation of optical water quality, satellite-derived data will be produced within the project to demonstrate the use and benefits of the MONOCLE services for Earth observation. For dedicated case studies, high resolution (Sentinel-2 MSI) and medium resolution (Sentinel-3 OLCI) data will be acquired and processed into water quality information products, making use of MONOCLE in situ data for calibration and validation. These procedures will not be developed from scratch but will use the Copernicus Land Monitoring Service (CLMS) and Copernicus Marine Environment Monitoring Service (CMEMS) data streams where feasible. Data storage needs for EO data for the selected MONOCLE regional use cases (Lake Balaton, Scottish Lochs, Danube Delta, Lake Tanganyika, and several smaller sites) are in the order of hundreds of gigabytes of data per year, which has been costed in the project budget.

### Image data collected with Remotely Piloted Aircraft Systems (RPAS)

The purpose of data collection with RPAS is to construct mosaic maps of waterbodies from which water quality parameters can be derived. The RPAS systems may also serve as a direct reference for satellite data, with the added advantage of detailing fine spatial features, which can explain aberrations in processed satellite data where fine features are not directly visible due to a large pixel size. The raw image data are too large (and not useful) to be disseminated beyond the data processing centres, where they are archived on suitable storage media (e.g. tape drives). Processed parameter-specific maps will be disseminated through the MONOCLE data back-end using machine interfaces (WCS).
Storage and archiving needs associated with the image data are in the order of terabytes of data and are budgeted for in the project.

### Pre-existing data

Pre-existing in situ datasets will consist of collections of optical and biogeochemical measurements contributed by various stakeholders, either as independent data sets, where MONOCLE is given a licence to use and distribute these, or as part of curated databases (e.g. LIMNADES for inland water). Access constraints have been recorded according to the new (2019) data licenses for LIMNADES, and are being maintained as part of the registration of the data set in the MONOCLE back-end. Pre-existing data will also take the form of large-scale satellite data archives downloaded from space agencies (ESA and NASA), which are then used for further processing to a usable format, in turn integrated with data from MONOCLE sensors.

### Research results and derived data sets

In the process of research and development, outputs will be generated in the form of publications, presentations, tables and datasets, and survey results. Such results will be stored in the project management portal for access within the project consortium, with the size not likely to exceed 100 MB per item. Public reports will also be available through the website. Public deliverables will be available through the website and OpenAIRE. The methodology is detailed in D9.3, "Open data repositories". The open access requirement for H2020 publications will be honoured through either the green or gold open access route. Each project partner is responsible for delivering publications through their chosen open access route; open access publication fees are an eligible project cost. In addition, these papers will be included in the Zenodo/OpenAIRE repository that has been set up for MONOCLE.

# 5 FAIR data principles

All MONOCLE research data will be curated according to the 'FAIR' principle, i.e. to be Findable, Accessible, Interoperable and Re-usable. In the following, a short overview is given of the building blocks to reach this goal. As the system is under active development, further detail will be added as design decisions are made, for future releases of this document. The following are guiding principles; details on each data set will be kept in a central data register, discussed further below.

## General data documentation and guidance

Any documentation such as measurement protocols, system descriptions and use cases will be linked within the data register, and a copy will be kept in the MONOCLE back-end where possible. Users of the MONOCLE front-end will be able to access these documents when accessing a corresponding data set. Where data sets are 'frozen' to create a snapshot of available data at a given point in time, these datasets will be versioned and uploaded to public repositories providing a digital object identifier. By default, all data generated in MONOCLE will be openly available (see Data Access, below), with the exception of unprocessed, uncalibrated data if these have no value to the user. Such data will nevertheless be stored and curated. Data contributed from external sources are the exception to this rule. In such cases, data ownership and licensing will govern whether dissemination beyond MONOCLE is possible. Reference to existing FAIR data sources is to be preferred over duplication.

## Metadata

Initially, the metadata profile ISO 19115 will be used to describe datasets that are made available. As a common ontology, the CF conventions (cfconventions.org) will be followed or extended. These metadata conventions ensure that data are identifiable, usually as part of (live) data streams, using appropriate search terms and keywords. Additional metadata requirements to enable MONOCLE data interoperability developments are described in D5.2, "System architecture and standards report". These requirements concern data ownership, licensing, access restrictions (embargo periods), as well as geospatial parameters. The definition of the minimum and recommended metadata for MONOCLE data sets will be refined during the implementation of MONOCLE WP5. A guiding principle for MONOCLE sensors and platforms is that metadata are injected into the data flow at the point of measurement, either at the sensor or using a dedicated sensor interface.
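To illustrate what CF-convention metadata look like in practice, the sketch below writes a small netCDF file whose variable carries a CF standard name and units; the file name and values are placeholders, not MONOCLE data.

```python
import numpy as np
from netCDF4 import Dataset

# Create a small netCDF file with CF-convention attributes (illustrative only).
ds = Dataset("chlor_a_example.nc", "w")
ds.Conventions = "CF-1.8"
ds.createDimension("time", None)  # unlimited time dimension

chl = ds.createVariable("chlor_a", "f4", ("time",))
chl.standard_name = "mass_concentration_of_chlorophyll_a_in_sea_water"  # CF name
chl.units = "mg m-3"
chl.long_name = "Chlorophyll-a concentration"

chl[:] = np.array([0.42], dtype="f4")  # placeholder measurement
ds.close()
```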
## Data Access, Interoperability and respecting Intellectual Property

Processed data intended for public access, and not subject to ethics limitations, will be made available through the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), Web Feature Service (WFS) and Web Coverage Service (WCS) standards initially, with other standard and bespoke data interfaces being added as required. MONOCLE is in the process of applying these standards to communication between sensors, between sensors and data hubs, and between the MONOCLE data back-end, front-end and user applications. Details can be found in D5.2, "System architecture and standards report".

Data generated as part of MONOCLE will be free of cost. Data access restrictions and intellectual property rights will, however, remain as set by the dataset creator/owner where applicable. Unless specified, all data will be treated as FAIR open data. In practice, the following data access levels are foreseen:

* open access, not requiring registration, providing access to data identified as open without license restrictions;
* limited access, requiring registration, providing access to open data as well as data sets with a limited license for use (e.g. non-commercial, accrediting ownership, delayed release, etc.);
* restricted access, requiring registration, providing access to data owned by the user and any data sets this specific user has been granted access to.

Any software tools developed during MONOCLE that are required to access and, to a limited extent, make use of the data will be available free of cost through software repositories such as those already set up on Zenodo and GitHub (see D9.3, "Open data repositories", for details). Essential software tools required to make use of the data have not been defined at this point.

## Data Sharing and Reuse

All data accessible through the MONOCLE back-end data services (Sensor Observation Service, Web Feature Service, Web Coverage Service) will be publicly findable, with accessibility rules based on ownership and licensing drawn from the metadata and data register. Any data producer that requires data to be delayed in its release will carry that information, such that the data can be securely stored and released only when appropriate. In many use cases, however, it will be beneficial to the user to know that embargoed data exist.
Such data embargoes will feature in the extended metadata and the data register and can take the following shapes:

* restricted data that are identifiable by measurement parameter, as collected within a given geographical range and time period;
* restricted data identifiable as above, but including the exact time and location of observation;
* restricted data identifiable as above, but including information about the data owner.

## Data Preservation and Archiving

Data will be kept available for a minimum of three years after the end of MONOCLE. Beyond this period, e.g. if the service should no longer be deemed useful or sustainable, data will be archived at a secure open access location, insofar as data licensing permits. The project intends to create links between the MONOCLE data service and large-scale public data archives (e.g. GEOSS) for long-term accessibility. Requests to remove a data set from the MONOCLE services can be submitted to the Coordinator and will be handled in a manner equivalent to the GDPR for personal data.

Within the project, Work Package 8 is dedicated to planning for long-term sustainability and the evolution of MONOCLE from a service concept into an operational in situ service. Each of the development and innovation activities has produced an initial set of deliverables conditioned by identified end users and stakeholders through early-stage trend and gap analysis. Further input from the sensor manufacturing industry (beyond those well represented in the consortium), from primary in situ data producers (e.g. environment agencies), and from primary data consumers (e.g. EO service developers) will continue to be a cornerstone and vision for development. Commercial sensor and service development will be explored to support a 180-degree market perspective for MONOCLE system components and branding as a whole, exploring manufacturing chains and economies of scale, IP licensing, and patent searches, where applicable. Public-private partnerships and corporate sponsorships (providing green credentials) to sustain citizen observatories and management of 'super sites' will be considered in this work, delivered as an evolving exploitation plan.

## Data Register

The data register will be maintained as a "live" document; a snapshot will be created for each DMP release. A template is included in the Appendix. Datasets that meet the criteria for dissemination have not yet been generated for this version (1.2) of the DMP. The data register is based upon information and restrictions supplied by the upstream data provider, matched to the Horizon 2020 guidelines as below (in _italics_):

* _**Data set reference and name**_ _Identifier for the data set to be produced._
* _**Data set description**_ _Description of the data that will be generated or collected, its origin (in case it is collected), nature and scale, to whom it could be useful, and whether it underpins a scientific publication. Information on the existence (or not) of similar data and the possibilities for integration and reuse._
* _**Standards and metadata**_ _Reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created._
* _**Data sharing**_ _Description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination and necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. Identification of the repository where data will be stored, if already existing and identified, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, rules of personal data, intellectual property, commercial, privacy-related, security-related)._
* _**Archiving and preservation (including storage and backup)**_ _Description of the procedures that will be put in place for long-term preservation of the data. Indication of how long the data should be preserved, what its approximate end volume is, what the associated costs are, and how these are planned to be covered._

# 6 Allocation of resources

The MONOCLE infrastructure has been designed as an open infrastructure from the start; therefore, the effort and cost of making the data FAIR is part of the overall MONOCLE budget. It is the responsibility of each sensor provider within the project to ensure that their sensors adhere to the agreed standards, with support provided through Work Package 5, which is dedicated to data interoperability and accessibility. The development and maintenance of the MONOCLE back-end are the responsibility of PML, who will continue to maintain access for at least three years beyond the end of the project.

# 7 Data security

To safeguard original data, backups will be made at the site where they are hosted. The nature of the MONOCLE data back-end is such that copies can be stored there, but this is not a requirement; it is designed to function both as a centralised and a distributed data system. Copies of data will, in general, not be backed up at the MONOCLE back-end, provided that they can be retrieved again from the source. The same applies to the use of Earth observation and auxiliary data. Loss of such data would potentially cause delays due to the need to download them again from the source, but this will not be functionally different from having to restore data from tape backups. A number of data repositories have been set up to safeguard specific project outputs, such as software, publications, sensitive data and frozen versions of sensor data. These will be accompanied by DOIs and are described in more detail in the document accompanying D9.3, "Open Data Repositories".

# 8 Ethical aspects

Ethical aspects are mainly relevant for data gathered through citizen science initiatives. These data will be treated according to the ethics procedures laid out in D10.1; in summary, these procedures cover the following aspects:

* Details on the procedures and criteria used to identify/recruit research participants
* Details on the informed consent procedures for the participation of humans
* Templates of the informed consent forms
* Information sheets provided to participants
* Procedures regarding the recording of imagery where humans are identifiable

# Appendix

## Data Register Template

The example below shows the information collected through the data register. Included are descriptions of the fields and an example covering Earth observation data.
<table>
<tr> <th> **Organisation** </th> <th> **Dataset reference & name** </th> <th> **Dataset description/outline** </th> <th> **Spatial & temporal resolution and extent** </th> <th> **Standards & metadata** </th> </tr>
<tr> <td> _Name of the organisation providing the data. Also reference any other ownership, i.e. if you have bought commercial data and have rights to use but must attribute, etc._ </td> <td> _A reference label; should be unique when combined with your organisation name._ </td> <td> _Simple description of the dataset, with as much information as possible._ </td> <td> _Spatial resolution & extent; temporal resolution and extent._ </td> <td> _Any standardised metadata that accompanies the dataset._ </td> </tr>
<tr> <td> **PML**, based on satellite data from ESA and NASA </td> <td> CCI_reference_chlor_a </td> <td> ESA OC-CCI archive consisting of global 4 x 4 km ocean colour data. Consists of individual RRS bands and derived chlor_a; the dataset has per-pixel bias and RMSD uncertainty. </td> <td> Resolution: 4 km; extent: -180,-90,180,90; 1997-09-04T00:00:00.000Z to 2017-10-01T00:00:00.000Z </td> <td> Files contain CF compliant metadata, but currently no XML/ISO 19115 metadata exist </td> </tr>
</table>

<table>
<tr> <th> **How will data be shared** </th> <th> **Software/protocol required for sharing** </th> <th> **Data access policy (open/locked/partial - give details, e.g. embargo time)** </th> <th> **Stored in MONOCLE back-end (yes/no - if no, say why/where it will be stored)** </th> </tr>
<tr> <td> _List data services or custom websites._ </td> <td> _List the protocols available for data access._ </td> <td> _Data policy, such as groups that can use the data, whether it is only accessible to project partners, or whether there is a time-based embargo._ </td> <td> _Whether the data will be stored in the MONOCLE back-end or not. If not, describe how the data will be stored._ </td> </tr>
<tr> <td> WMS: **_https://vortices.npm.ac.uk/thredds/wms/CCI_ALL-v3.1MONTHLY?service=WMS&version=1.3.0&request=GetCapabilities_** WCS: **_https://vortices.npm.ac.uk/thredds/wcs/CCI_ALL-v3.1MONTHLY?service=WCS&version=1.1.0&request=GetCapabilities_** </td> <td> OGC WCS, OGC WMS </td> <td> Fully open data </td> <td> No. The archive will be proxied through the back-end using the WMS/WCS links provided </td> </tr>
</table>
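For reference, the WMS endpoint listed in the register example can also be explored programmatically, for instance with the OWSLib client library; this is a sketch, and the attributes printed are simply whatever the service advertises.

```python
from owslib.wms import WebMapService

# The THREDDS WMS endpoint from the register example above.
URL = "https://vortices.npm.ac.uk/thredds/wms/CCI_ALL-v3.1MONTHLY"

wms = WebMapService(URL, version="1.3.0")
print(wms.identification.title)

# List the advertised layers and their WGS84 bounding boxes.
for name, layer in wms.contents.items():
    print(name, layer.boundingBoxWGS84)
```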
# INTRODUCTION

The main objective of this deliverable is to provide the data management policy with regard to the data sources that the ENTROPY project collects, processes, generates and makes available. This deliverable is a 'living document' which will be further developed over the course of the project, so as to define the research data that will be collected and to specify how these data collections will be processed and managed in accordance with relevant standards and methodologies. Finally, it will also incorporate the decisions on data sharing and archiving.

#### Document Composition

This document is composed of the following six (6) chapters, each one covering a specific area as specified by the EC guidelines in its Data Management Plan template:

* Chapter 1: Introduction
* Chapter 2: Initial naming of the datasets
* Chapter 3: Description of the minimum datasets to be collected for each pilot
* Chapter 4: Standards and metadata
* Chapter 5: Data access and sharing mechanisms
* Chapter 6: Archiving and preservation of the data

# DATASETS: REFERENCE AND NAME

The ENTROPY project is still identifying a set of heterogeneous data sources through a series of interviews with project end users. Following an iterative process, data sources are being established, encapsulating the project end-user requirements. A refinement process will take place continuously throughout the lifetime of the project, as new data sources become available to the consortium.

The ENTROPY project is driven by three different pilots in three different sites:

1. Pilot A: Navacchio Technology Park (NTP)
2. Pilot B: University of Murcia Campus (UMU)
3. Pilot C: Technopole in Sierre (HES-SO)

The teams working on these pilots have formed user interaction scenarios with the system, which include data for overall assessment and behavioural analysis, providing a minimum set of data. The data sets required by each pilot differ from each other, since the pilots have different types of data sources. However, a similar naming methodology will be followed. The partners will receive one or more files (.xls/.csv) containing data. The name of each file should follow a specific structure, namely PL_DS_FT_ND_V_D:

* PL: PiLot, the name of the pilot, given as the first letters (three max) of the pilot's responsible partner (UM for UMU, NT for NTP, HE for HES-SO)
* DS: DataSet, the set of data related to the pilot. It may take the value "ALL" if the file contains all the sets of data.
* FT: FormaT, the format of the file of the data
* ND: The name of the original document
* V: The version of the document
* D: The date of receiving the document or the date of creating the document (dd-mm-yyyy)

The respective folders may follow a similar structure, PL_FT_ND_D:

* PL: PiLot, the first letters (three max) of the pilot's responsible partner (UMU, NTP, HESSO)
* FT: FormaT, the format of the file of the data
* ND: The name of the original document
* D: The reception or creation date of the document

Additionally, some of the pilots could expose their datasets by means of public web services to make data access easier and more reliable.
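A minimal sketch of a helper that builds file names following this convention is shown below; the example values are illustrative, and the exact handling of separators within the ND field is an assumption.

```python
from datetime import date
from typing import Optional

def dataset_filename(pl: str, ds: str, ft: str, nd: str, v: str,
                     d: Optional[date] = None) -> str:
    """Build a data file name following the PL_DS_FT_ND_V_D convention."""
    stamp = (d or date.today()).strftime("%d-%m-%Y")
    return f"{pl}_{ds}_{ft}_{nd}_{v}_{stamp}.{ft.lower()}"

# Illustrative values only; 'EnergyMeter' is a hypothetical document name.
print(dataset_filename("UM", "ALL", "CSV", "EnergyMeter", "v1", date(2016, 6, 1)))
# -> UM_ALL_CSV_EnergyMeter_v1_01-06-2016.csv
```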
# DATASET DESCRIPTION

For each of these pilots, several parameters have been identified. The data to be received from external sources will fill in these parameters. The pilots have identified some identical parameters; however, some differ.

For Pilot A and Pilot B, there are five basic types of parameters, which are also shared by Pilot C:

* Demographics: Data concerning the demographics of the users
* Building Data: Data describing the buildings’ characteristics
* Psychographics: Data concerning the personality of the users
* Room Sensor Data: Data concerning the sensors’ technical characteristics and measurements per room
* Building Sensor Data: Data concerning energy consumption in the building

**Table 1** Variables corresponding to the common parameter types of all three pilots

<table>
<tr> <th> **Parameter** </th> <th> **Type** </th> <th> **Unit** </th> <th> **Mandatory** </th> </tr>
<tr> <td> DEMOGRAPHICS </td> </tr>
<tr> <td> User ID </td> <td> String </td> <td> \- </td> <td> N (NO) </td> </tr>
<tr> <td> Age </td> <td> Numeric </td> <td> Years </td> <td> N </td> </tr>
<tr> <td> Gender </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Ethnicity </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Function (ex. Manager, professor, student etc.) </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Educational level </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Studies </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Hours at university/campus </td> <td> Numeric </td> <td> hours </td> <td> N </td> </tr>
<tr> <td> Kids at home (yes/no) </td> <td> Boolean </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Working hours </td> <td> Numeric </td> <td> hours </td> <td> N </td> </tr>
<tr> <td> Reported health issues </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> BUILDING DATA </td> </tr>
<tr> <td> Date </td> <td> String </td> <td> dd-mm-yyyy HH:mm:ss </td> <td> </td> </tr>
<tr> <td> Building ID </td> <td> String </td> <td> \- </td> <td> Y (YES) </td> </tr>
<tr> <td> Construction year </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Building type </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Building size </td> <td> Numeric </td> <td> m (meters) </td> <td> Y </td> </tr>
<tr> <td> Windows percentage </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Building regulations </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Consumption baseline </td> <td> Numeric </td> <td> kWh </td> <td> Y </td> </tr>
<tr> <td> Sensor ID (link with sensor data) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Total number of sensors </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Internal temperature </td> <td> Numeric </td> <td> °C </td> <td> Y </td> </tr>
<tr> <td> Internal humidity level </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Occupants per room/building </td> <td> Numeric </td> <td> </td> <td> N </td> </tr>
<tr> <td> PSYCHOGRAPHICS </td> </tr>
<tr> <td> Personality test </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Curtailment behaviour </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> “Hassle factor” </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Comfort level </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> The impact of incentives (Questionnaire) </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Interest in renewable energy sources </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Intrinsic interest in efficiency (Questionnaire) </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> ROOM SENSOR DATA </td> </tr>
<tr> <td> **HVAC** </td> </tr>
<tr> <td> Sensor ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Location </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Automated system (Yes/No) </td> <td> Boolean </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> State (ON/OFF) </td> <td> Boolean </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Operation mode (heating/cooling) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Fan speed </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy efficiency label </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy (Electricity, Gas, Fuel oil) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Energy Meter** </td> </tr>
<tr> <td> Date (timestamp) </td> <td> Date </td> <td> dd-mm-yyyy HH:mm:ss </td> <td> Y </td> </tr>
<tr> <td> Meter ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy consumption </td> <td> Numeric </td> <td> kWh </td> <td> Y </td> </tr>
<tr> <td> Energy from renewable sources </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Type of energy source </td> <td> String </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Building/Room ID (link with building/room data) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Indoor Lighting System Management/Luminosity Sensors** </td> </tr>
<tr> <td> Sensor ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Location </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Automated system (Yes/No) </td> <td> Boolean </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Light status (ON/OFF) </td> <td> Boolean </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Light regulation (0-100%) </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Hours of lighting per day </td> <td> Numeric </td> <td> Hours </td> <td> Y </td> </tr>
<tr> <td> Type of lighting (ex. CFL, LED etc.) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Number of lights on </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Luminous flux </td> <td> Numeric </td> <td> lm (lumen) </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> W </td> <td> Y </td> </tr>
<tr> <td> **Humidity Sensors** </td> </tr>
<tr> <td> Sensor ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Location </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Humidity level (internal) </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> **Presence sensor** </td> </tr>
<tr> <td> Sensor ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Location </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Number of attendees </td> <td> Numeric </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> User ID </td> <td> String </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Enter timestamp </td> <td> Date </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> Exit timestamp </td> <td> Date </td> <td> \- </td> <td> N </td> </tr>
<tr> <td> BUILDING SENSOR DATA </td> </tr>
<tr> <td> **Energy Meter** </td> </tr>
<tr> <td> Date (timestamp) </td> <td> String </td> <td> dd-mm-yyyy HH:mm:ss </td> <td> Y </td> </tr>
<tr> <td> Meter ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy consumption </td> <td> Numeric </td> <td> kWh </td> <td> Y </td> </tr>
<tr> <td> Electrical consumption (Active and reactive power) </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy from renewable sources </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Type of energy source </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Water Meter** </td> </tr>
<tr> <td> Meter ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Water meter type (Mass/Volumetric) </td> <td> Boolean </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Water consumption </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Environmental conditions monitoring (Weather station)** </td> </tr>
<tr> <td> Weather station ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Temperature (external) </td> <td> Numeric </td> <td> °C </td> <td> Y </td> </tr>
<tr> <td> Barometric pressure </td> <td> Numeric </td> <td> hPa </td> <td> Y </td> </tr>
<tr> <td> Humidity (external) </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Wind speed </td> <td> Numeric </td> <td> m/s </td> <td> Y </td> </tr>
<tr> <td> Wind direction </td> <td> Numeric </td> <td> ° </td> <td> Y </td> </tr>
<tr> <td> Precipitation </td> <td> String </td> <td> mm </td> <td> Y </td> </tr>
<tr> <td> Outside sun duration (luminosity) </td> <td> Numeric </td> <td> h/day (hours per day) </td> <td> Y </td> </tr>
<tr> <td> Outside radiation </td> <td> Numeric </td> <td> W/m²/day (daily radiation average) </td> <td> N </td> </tr>
</table>

Further, Pilot C concerns three more types of parameters:

* Environment: Data concerning environmental characteristics of the building
* Energy: Data describing the building’s characteristics per energy type used
* Price: Data concerning the price and its updates for gas and fuel oil, along with the consumption characteristics of the city
**Table 2** Variables corresponding to three more parameter types of Pilot C

<table>
<tr> <th> **Parameter** </th> <th> **Type** </th> <th> **Unit** </th> <th> **Mandatory** </th> </tr>
<tr> <td> ENVIRONMENT </td> </tr>
<tr> <td> User ID </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Age </td> <td> Numeric </td> <td> Years </td> <td> Y </td> </tr>
<tr> <td> Number of people at work </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Week-end work </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Planning production </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Studied surface </td> <td> String </td> <td> m² </td> <td> Y </td> </tr>
<tr> <td> Location </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Mountain mask/Building environment </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> ENERGY </td> </tr>
<tr> <td> **Heating System** </td> </tr>
<tr> <td> Technology </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> W </td> <td> Y </td> </tr>
<tr> <td> Output/energy efficiency </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Energy (Electricity, Gas, Fuel oil) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy (Electricity, Gas, Fuel oil) </td> <td> Numeric </td> <td> kWh </td> <td> </td> </tr>
<tr> <td> Local energy control command </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Devices system** </td> </tr>
<tr> <td> Clusters definition (Cold, Heating,…) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Hours at use per day (or per period) </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Number </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Lighting System** </td> </tr>
<tr> <td> Type of lighting (ex. CFL, LED etc.) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> W </td> <td> Y </td> </tr>
<tr> <td> Luminous flux </td> <td> Numeric </td> <td> lm </td> <td> Y </td> </tr>
<tr> <td> Number </td> <td> Numeric </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> **Hot Water** </td> </tr>
<tr> <td> Technology </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> W </td> <td> Y </td> </tr>
<tr> <td> Output/energy efficiency </td> <td> Numeric </td> <td> % </td> <td> Y </td> </tr>
<tr> <td> Energy (Electricity, Gas, Fuel oil) </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Energy (Electricity, Gas, Fuel oil) </td> <td> Numeric </td> <td> kWh </td> <td> N </td> </tr>
<tr> <td> **Ventilation** </td> </tr>
<tr> <td> Technology </td> <td> String </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Nominal power </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Output/energy efficiency </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> PRICE </td> </tr>
<tr> <td> **Gas** </td> </tr>
<tr> <td> Taxes percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Price evolution year 2015 - n </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> **Fuel oil** </td> </tr>
<tr> <td> Taxes percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Price evolution year 2015 - n </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> **Electricity** </td> </tr>
<tr> <td> Name of supplier </td> <td> String </td> <td> \- </td> <td> Y </td> </tr>
<tr> <td> Taxes percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Power peak value for price shifting </td> <td> Numeric </td> <td> W </td> <td> Y </td> </tr>
<tr> <td> Power peak percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Reactive power percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Mean power peak percentage </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
<tr> <td> Price evolution year 2015 - n </td> <td> Numeric </td> <td> </td> <td> Y </td> </tr>
</table>

# STANDARDS AND METADATA

The ENTROPY project relates to different pillars, e.g. green energy, environment, etc. Several existing standards address interoperability, adaptability and dynamicity issues of data in each of these specific fields. This section presents the required standards, along with the methodologies and technical documents that will be taken into account in order for the project to produce aligned data structures and data services.
## ISO/TR 16344:2012

ISO/TR 16344:2012 provides a coherent set of terms, definitions and symbols for concepts and physical quantities related to the overall energy performance of buildings and their components, including definitions of system boundaries, to be used in all standards elaborated within ISO on the energy performance of buildings. These terms and definitions are applicable to energy calculations in accordance with the Technical Report and standards on the overall energy performance of buildings and their components, to provide input to the Technical Report or to use output from it. They are based on existing terms and definitions from standards and other documents referenced in the bibliography.

## ISO 16346:2013

ISO 16346:2013 defines the general procedures to assess the energy performance of buildings, including technical building systems, and defines the different types of ratings and the building boundaries. The purpose of ISO 16346:2013 is to (a) collate results from other international standards that calculate energy use for specific services within a building, (b) account for energy generated in the building, some of which may be exported for use elsewhere, (c) present a summary of the overall energy use of the building in tabular form, (d) provide energy ratings based on primary energy, carbon dioxide emission, or other parameters defined by a national energy policy, and (e) establish general principles for the calculation of primary energy factors and carbon dioxide emission coefficients.

ISO 16346:2013 defines the energy services to be taken into account for setting energy performance ratings for planned and existing buildings and provides (1) a method to compute the standard calculated energy rating, a standard energy use that does not depend on occupant behaviour, actual weather, and other actual (environment or indoor) conditions, (2) a method to assess the measured energy rating, based on the delivered and exported energy, (3) a method to improve confidence in the building calculation model by comparison with actual energy use, and (4) a method to assess the energy effectiveness of possible improvements. ISO 16346:2013 is applicable to a part of a building (e.g. a flat), a whole building, or several buildings.

## Methods to assess environmental impacts of ICT

**ETSI Working Group DTR/EE-00008**: This work defines the methods to assess the environmental impact of ICTs, including the positive impact of using ICT services. These impacts have two aspects: (a) the negative impact caused by the energy consumption or CO2 emissions of operators of ICT equipment and sites, including telecom networks, users’ terminals and datacentres for residential and business services, and (b) the positive impact caused by energy savings or CO2 emission savings achieved by using ICT services. The work proposes methods to quantify these impacts at the national level.

## Technical documents

**IETF**: IETF’s mission is to improve the Internet by producing high-quality technical documents that influence the way people design, use, and manage the Internet.

## Metadata

Each data file will be accompanied by uniquely specified metadata, in order to allow ease of access and re-usability. Below, we present the metadata form we will adopt.
**Table 3**: Metadata form

<table>
<tr> <th> **Parameter** </th> <th> **Description** </th> </tr>
<tr> <td> Document version </td> <td> The version of this document </td> </tr>
<tr> <td> Document format </td> <td> The format of this document </td> </tr>
<tr> <td> Description </td> <td> A description of the data included in the document </td> </tr>
<tr> <td> Date </td> <td> The date of the creation of the document (yyyy-mm-dd) </td> </tr>
<tr> <td> Keywords </td> <td> Some keywords describing the content </td> </tr>
<tr> <td> Subject </td> <td> Small description of the data source </td> </tr>
<tr> <td> **Creator (Name of the creator of the data source)** </td> </tr>
<tr> <td> Sector of the provider </td> <td> Information on the sector that this provider belongs to </td> </tr>
<tr> <td> Permissions </td> <td> The permissions of this document (mandatory to be mentioned here) </td> </tr>
<tr> <td> **Name of the Partner (The name of the partner that collected the data and is responsible for it)** </td> </tr>
<tr> <td> Responsible person </td> <td> The name of the person within the partner who is responsible for the data </td> </tr>
<tr> <td> Pilot </td> <td> For which pilot the data will be used </td> </tr>
<tr> <td> Scenario of data usage </td> <td> How the data are going to be used in this scenario </td> </tr>
<tr> <td> **Description of the Data Source** </td> </tr>
<tr> <td> File format </td> <td> The format of the data source provided </td> </tr>
<tr> <td> File name/path </td> <td> The name of the file </td> </tr>
<tr> <td> Storage location </td> <td> In case a URI/URL exists for the data provider </td> </tr>
<tr> <td> Data type </td> <td> Data type and extension of the file; e.g. Excel sheet, .xlsx; standard if possible </td> </tr>
<tr> <td> Standard </td> <td> Data standard, if existent </td> </tr>
<tr> <td> Data size </td> <td> Total data size, if possible </td> </tr>
<tr> <td> Time references of data </td> <td> Start date </td> <td> End date </td> </tr>
<tr> <td> Availability </td> <td> Start date </td> <td> End date </td> </tr>
<tr> <td> Data collection frequency </td> <td> The time frequency at which the data is collected; e.g. hourly, every 15 minutes, on demand, etc. </td> </tr>
<tr> <td> Data quality </td> <td> The quality of the data; is it complete, does it have the right collection frequency, is it available, etc. </td> </tr>
<tr> <td> **Raw data sample** </td> </tr>
<tr> <td> Textual copy of data sample </td> </tr>
<tr> <td> **Number of Parameters included:** </td> </tr>
<tr> <td> **Parameter #1:** </td> </tr>
<tr> <td> **Variables** </td> <td> **Name** </td> <td> **Type** </td> <td> **Mandatory** </td> </tr>
<tr> <td> </td> <td> … </td> <td> … </td> <td> … </td> </tr>
<tr> <td> **Parameter #2:** </td> </tr>
<tr> <td> **Variables** </td> <td> **Name** </td> <td> **Type** </td> <td> **Mandatory** </td> </tr>
<tr> <td> </td> <td> **…** </td> <td> **…** </td> <td> **…** </td> </tr>
</table>
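As a purely illustrative example (all values below are hypothetical), a filled-in record following the form in Table 3 could be represented as follows:

```python
# Hypothetical metadata record following Table 3 (illustrative values only)
metadata = {
    "document_version": "1.0",
    "document_format": "CSV",
    "description": "Room sensor measurements for the UMU pilot",
    "date": "2016-03-14",
    "keywords": ["energy", "sensors", "HVAC"],
    "subject": "Room-level HVAC and lighting sensor readings",
    "creator": {
        "name": "UMU",
        "sector": "Academia",
        "permissions": "Consortium only",
    },
    "data_source": {
        "file_format": "CSV",
        "file_name": "UM_ALL_CSV_RoomSensors_v2_14-03-2016.csv",
        "data_collection_frequency": "every 15 minutes",
    },
}
```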
The Energy-Infrastructure Monitoring Parameters’ Semantic Model and the Citizens Environment Friendly Behavioural Semantic Model form the semantic metadata in an integrated manner. This document contains the initial versions of both models. More detailed information about the models will be presented in their own deliverables.

Figure 1 represents the concepts and parameters, along with their relationships, that we will adopt for collecting information from the different types of sensors from an energy efficiency perspective. The types of sensors refer mainly to energy consumption, production and storage meters. This semantic model is fully extensible; its evolution through the addition of new concepts, or even the refinement of part of the existing concepts, will be an ongoing process.

**Figure 1**: ENTROPY Energy Efficiency Semantic Model

A building space (BuildingSpace) in the ENTROPY Energy Model defines the physical spaces of the building. A building space contains devices (DeviceInstance) or building objects (BuildingObject). A building object is an object in the building, such as a room (Room) or a floor (Floor), that can contain one or more devices. A device instance (DeviceInstance) implements a device (Device), has a unique id and geolocation coordinates (latitude, longitude), and is located in a specific building object. Each device instance supports a set of measurements (Measurement) that are also associated with a set of quantitative and qualitative (Quality) characteristics (e.g. frequency, accuracy) and units of measurement (e.g. kWh, bar, m, °C) (UnitofMeasure). The values of the monitored parameters are included in the observation values (ObservationValue). Each observed value is also associated with a timestamp (Timezone). A building space may be frequented by individuals (Agent), which could be one person (Person) or a group of people (Group).

Figure 2 shows the basic concepts and their relationships in the behavioural model. This model will be utilized for representing extracted behavioural data of users. It includes concepts representing demographic data, as well as activity data and the user’s context information. The ontology borrows concepts from external ontologies and vocabularies when appropriate, in order to increase reusability. For instance, the initial model uses the Agent, Person and OnlineAccount concepts from the FOAF vocabulary. As stated in deliverable D1.1, reuse of other existing ontologies will also be considered in later versions of this model.

**Figure 2**: Initial concepts of the behavioural model

# DATA ACCESS AND SHARING

The data access and sharing plan includes several aspects that have to be identified regarding the data resulting from the project. Below, these issues are presented in more detail.

## IPRs and Privacy Issues

Data access and sharing activities will be rigorously implemented in compliance with the privacy and data collection rules and regulations, as they are applied nationally and in the EU, as well as with the H2020 rules. Concerning the results of the project, these will become publicly available based on the IPRs as described in the Consortium Agreement. Due to the nature of the data involved, some of the results generated by each project phase will be restricted to authorized users, while other results will be publicly available.

One possibility would be to ask users to pre-register for the purpose of using the system; they would then be authenticated against a user database. If authentication is successful, the users will have roles associated with them. These roles will determine the level of access that a user will be given and what they will be permitted to do.
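A minimal sketch of such role-based access control is shown below; the role names and permissions are purely hypothetical and serve only to illustrate the approach:

```python
# Hypothetical role-to-permission mapping for the pre-registration approach above
ROLE_PERMISSIONS = {
    "project_partner": {"read_raw", "read_anonymised"},
    "external_stakeholder": {"read_anonymised"},
}

def can_access(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Only authenticated partners may reach the commercially sensitive raw data
assert can_access("project_partner", "read_raw")
assert not can_access("external_stakeholder", "read_raw")
```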
## Methods for Data Sharing

As the raw data included in the data sources will be gathered from sensor nodes and information management systems, they could be seen as highly commercially sensitive. Therefore, access to raw data can only take place between the specific end users and the partners involved in the analysis of the data. For the models to function correctly, the data will have to be included in the ENTROPY repository. The results of the data analytics in the orient phase are set to be anonymised and made available to the subsequent layers of the framework, which will then allow external industry stakeholders to use the results of the project for their own purposes. Publications will be released and disseminated through the project dissemination and exploitation channels to make these parties aware of the project as well as of the appropriate access to the data.

Additionally, data that are eligible for public distribution may be disseminated through:

* Scientific papers
* Lectureships, in the case of universities
* Dissemination via the appropriate channels of the project
* Interest groups created by the project’s partners

Rather than the raw data, what will be published is the knowledge obtained by applying analytics processes to low-level information in order to extract behavioural information about users. Such behavioural data collected throughout the project, or a fragment of it, will be published following the linked data principles (a short sketch is given at the end of this plan):

* Use URIs to name (identify) things.
* Use HTTP URIs so that these things can be looked up (interpreted, "dereferenced").
* Provide useful information about what a name identifies when it is looked up, using open standards such as RDF, SPARQL, etc.
* Refer to other things using their HTTP URI-based names when publishing data on the Web.

Open access to the anonymized behavioural data will be provided by means of periodic data dumps and read-only SPARQL endpoints.

# ARCHIVING AND PRESERVATION

#### _Short Term_

All original raw data files and respective processing programs will be versioned over time and maintained in a date-stamped file structure. Access to the datasets will be given only after request, and during the design phases of the project, by the responsible person. These datasets will be automatically backed up on a nightly and monthly basis. Respectively, the data generated by the system during the pilots of the project will be stored in the database of the ENTROPY platform, whose DB schema will reflect the aforementioned pilot parameters. Back-ups of the DB will be performed and stored on a monthly basis.

#### _Long Term_

The project consortium is committed to making the high-quality final data generated by ENTROPY available for use by the research community, as well as by industry peers. We will identify appropriate platform solutions (e.g. _https://joinup.ec.europa.eu/_ and _http://ckan.org/_) that will allow the sustainable archiving of all the ENTROPY datasets after the life span of the project.
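As a brief illustration of the linked data principles listed under “Methods for Data Sharing” above, the following sketch (assuming the Python `rdflib` library; all URIs and property names are hypothetical) builds and serialises a tiny anonymised behavioural graph of the kind that could back a periodic data dump or a read-only SPARQL endpoint:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace for anonymised behavioural data (illustrative URIs only)
ENTROPY = Namespace("http://example.org/entropy/")

g = Graph()
g.bind("entropy", ENTROPY)

# Name things with HTTP URIs and describe them in RDF (principles 1-3)
agent = ENTROPY["agent/0042"]
g.add((agent, RDF.type, ENTROPY.Agent))
g.add((agent, ENTROPY.curtailmentBehaviour, Literal("moderate")))

# Serialise for a data dump; a SPARQL endpoint would expose the same graph
print(g.serialize(format="turtle"))
```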
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0330_MEMERE_679933.md
## 1\. EXECUTIVE SUMMARY

### 1.1. Description of the deliverable content and purpose

This deliverable reports the first version of the Data Management Plan for the project MEMERE. The deliverable is based on a document prepared by the federation of the three technical universities of the Netherlands. The document has been shared with all partners, and each partner has described the way its research data will be managed. To keep the data management plan clear, the document reports the information partner by partner.

### 1.2. Brief description of the state of the art and the innovation brought

n/a

### 1.3. Deviation from objectives

n/a

### 1.4. If relevant: corrective actions

n/a

### 1.5. If relevant: Intellectual property rights

n/a

## 2\. Partner TUE

<table>
<tr> <th> Name of student/researcher(s) </th> <th> Prof. Fausto Gallucci, Dr. Jose Medrano, Dr. Solomon Wassie, Mr. Aitor Cruellas, Prof. Martin van Sint Annaland </th> </tr>
<tr> <td> Name of group/project </td> <td> Eindhoven University of Technology, Chemical Process Intensification </td> </tr>
<tr> <td> Description of your research </td> <td> Development of novel multifunctional reactors integrating reaction and separation. Development and use of detailed models and novel non-invasive monitoring techniques. </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> MEMERE consortium </td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **04-01-2016** </td> </tr>
<tr> <td> Date last update </td> <td> **04-01-2016** </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _See above_ </td> </tr>
</table>

### 2.1. Data Collection

⮚ Data to be created/collected includes:

* Process parameters for membranes and membrane reactors.
* Characterization of the membranes at TUE: measurements
* Modelling data, both detailed models and Aspen models
* Catalyst characterization
* High-temperature fluidization with membranes, PIV data

⮚ Sources per type of data are described below:

<table>
<tr> <th> Type of data </th> <th> Possible source of data </th> </tr>
<tr> <td> Process parameters </td> <td> Experimental data. Data will be generated along the project life. Literature data will also be considered. </td> </tr>
<tr> <td> Characterization of the membranes </td> <td> Experimental data. Data will be generated along the project life. </td> </tr>
<tr> <td> Modelling </td> <td> In-house codes for fluidized bed membrane reactors and Aspen models will be used. </td> </tr>
<tr> <td> Catalyst characterization </td> <td> Experimental data. Data will be generated along the project life. </td> </tr>
<tr> <td> High-temperature fluidization with membranes </td> <td> Experimental data, including images. Data will be generated along the project life. </td> </tr>
</table>

Data will be collected either by means of data collection templates (mostly Excel or Word files and TIF images), with the version number explicitly stated in the filename as well as in the file itself.

### 2.2. Data Storage and Back-up

Data are stored in several locations: first, on the researchers’ computers and on lab computers.
The data from experimental facilities are also generally saved on the PLC of the setup. SPI has adopted a cloud application, and each student saves the data on the cloud; data are thus stored on several computers and accessible by staff members of SPI. Images are also stored on external HDs (generally 4 TB each). TUE is also discussing the possibility of adopting an electronic lab notebook; if adopted, it will also be used for MEMERE.

### 2.3. Data Documentation

In the MEMERE project no specific data format will be used unless specifically required in a given situation (to be decided along the project). As for the naming convention, the project naming convention will be used if there is one. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 2.4. Data Access

Copyright or Intellectual Property Rights are applicable to the main types of data generated by TUE. The data stored at TUE are fully secure, as the system has been designed for this purpose. Data access is provided in the Project Folder by the Project Manager (name by name).

### 2.5. Data Sharing and Reuse

Once any copyright or Intellectual Property Rights are protected, part of the data could be shared outside the consortium. Approval of the consortium (according to the Grant Agreement / Consortium Agreement) will be required for any publications outside the consortium. Publications will be carried out in different journals and on websites. Open Access following the Green or Golden Route will be used. As per TUE policy, all published data should be made available through the repository of TUE. When possible, raw data will be published alongside the main articles as supplementary material. All data generated by TUE (after IP protection) will also be made available on the MEMERE website.

### 2.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed (at least 5 years after the formal end of the project: five years after the payment of the balance, see the Grant Agreement). In the absence of such a project-specific policy, the data shall be archived and stored like the data from any other TUE project. All data related to PhD theses should be stored by TUE for 5-10 years after the PhD has been granted.

## 3\. Partner TECNALIA

<table>
<tr> <th> Name of student/researcher(s) </th> <th> </th> </tr>
<tr> <td> Name of group/project </td> <td> TECNALIA / MEMERE (WP3 Membrane development) </td> </tr>
<tr> <td> Description of your research </td> <td> Development of oxygen supported membranes and membrane characterization (Leader of WP on membrane development, Technical Manager, membrane developer) </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> </td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **30-03-2016** </td> </tr>
<tr> <td> Date last update </td> <td> </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors.
</td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Alfredo Pacheco Tanaka, Ekain Fernández, José Luis Viviente </td> </tr>
</table>

### 3.1. Data Collection

* Data to be created/collected includes:
  * Process parameters for the development of oxygen membranes at TECNALIA.
  * Characterization of the membranes at TECNALIA: measurements
  * State of the art on oxygen membranes
* Sources per type of data are described below:

<table>
<tr> <th> Type of data </th> <th> Possible source of data </th> </tr>
<tr> <td> Process parameters for the development of oxygen membranes </td> <td> Experimental data. Data will be generated along the project life. Initially, it will be collected mostly from experience from past projects and running FP7 projects. Literature data will also be considered. </td> </tr>
<tr> <td> Characterization of the membranes </td> <td> Observational data. Data will be generated along the project life. </td> </tr>
<tr> <td> State of the art on oxygen membranes </td> <td> From the literature, including patents. It will be updated along the project. </td> </tr>
</table>

Data will be collected either by means of data collection templates (mostly Excel or Word files), with the version number explicitly stated in the filename as well as in the file itself.

### 3.2. Data Storage and Back-up

Both the collected and the produced data are stored initially on TECNALIA staff and/or individual testing computers (lab testing notebook). In addition, each project at TECNALIA has a folder on the central server where all the data and documents generated in relation to the project are stored. Access to this folder is allowed to the staff working on this project (access granted by the Project Manager on a case-by-case basis). This server is backed up every day, thereby limiting the loss of data to a minimum.

### 3.3. Data Documentation

Data documentation is inherent to the activities at TECNALIA. TECNALIA holds several accreditations and certificates, such as the Certification of the Quality Management System as per UNE-EN-ISO 9001:2008 for the Management of Projects of Research, Technological Innovation and Development, Tests and Assays, and Client Technological Assessment, as well as ISO 14001:2004 (Environmental Management System). In the MEMERE project no specific data format will be used unless specifically required in a given situation (to be decided along the project). As for the naming convention, the project naming convention will be used if there is one. If not, TECNALIA’s naming convention will be applied. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 3.4. Data Access

Copyright or Intellectual Property Rights are applicable to the main types of data generated by TECNALIA (process parameters for the development of oxygen membranes and characterization of the membranes: measurements). The data stored at TECNALIA are fully secure, as the system has been designed for this purpose. Process parameters for manufacturing the oxygen membranes will only be accessible by the TECNALIA team. Reader access to data coming from the characterization of membranes may, however, be provided to specific project partners upon request. Data access is provided in the Project Folder by the Project Manager (name by name).
### 3.5. Data Sharing and Reuse

Once any copyright or Intellectual Property Rights are protected, part of the data on membrane characterisation could be shared outside the consortium. Approval of the consortium (according to the Grant Agreement / Consortium Agreement) will be required for any publications outside the consortium. Detailed information on process manufacturing will not be shared in any case, to avoid the loss of any intellectual property rights or copyright.

Publication will mainly be carried out in different journals and on websites. Open Access following the Green or Golden Route will be considered. TECNALIA has recently developed its own institutional repository: TECNALIA Publications ( _http://dsp.tecnalia.com/_ ). All scientific publications produced in MEMERE in which TECNALIA participates as an author will be deposited in this repository for scientific publications, regardless of whether they are also deposited in other repositories. TECNALIA Publications has been developed following RECOLECTA directions and facilities in order to fulfil international interoperability standards and protocols and to gain long-term sustainability. RECOLECTA (Open Science Harvester) is a platform that gathers all the Spanish scientific repositories in one place and provides services to repository managers, researchers and decision-makers ( _http://recolecta.fecyt.es/_ ). This platform is the result of the collaboration between the Spanish Foundation for Science and Technology (FECYT) and the Network of Spanish University Libraries (REBIUN) run by the Conference of Vice-Chancellors of Spanish Universities (CRUE), with the aim of creating a nationwide infrastructure of Open Access scientific repositories. TECNALIA Publications is indexed by Google and harvested by Recolecta (FECYT) and OpenAIRE ( _https://www.openaire.eu/_ ), the open access infrastructure of the EC.

### 3.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed (at least 5 years after the formal end of the project: five years after the payment of the balance, see the Grant Agreement). In the absence of such a project-specific policy, the data shall be archived and stored like the data from any other TECNALIA project.

## 4\. Partner VITO

<table>
<tr> <th> Name of student/researcher(s) </th> <th> Vesna Middelkoop, Bart Michielsen </th> </tr>
<tr> <td> Name of group/project </td> <td> _Group_: VITO-Sustainable Materials Management. _Project_: “MEthane activation via integrated MEmbrane Reactors” - MEMERE </td> </tr>
<tr> <td> Description of your research </td> <td> Development and characterisation of catalytic structures and membranes. Characterisation will include acquiring, storing and processing a large volume of data through spectroscopic and X-ray measurements (ranging from 5 MB per run for ex situ measurements to terabytes of data for operando/in situ measurements) </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> Eindhoven University of Technology, Tecnalia, Berlin University of Technology, Marion Technology, Hygear Technology and Services B.V., Quantis sarl, Finden Ltd, Johnson Matthey PLC, Rauschert Heinersdorf-Pressig GmbH, Ciaotech s.r.l. (100% PNO Group B.V.)
</td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **06-01-2016** </td> </tr>
<tr> <td> Date last update </td> <td> **11-01-2016** </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _See above_ </td> </tr>
</table>

The partner VITO will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 5\. Partner Berlin University of Technology

The partner Berlin University of Technology will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 6\. Partner Marion Technologies

The partner Marion Technologies will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 7\. Partner Hygear Technology and Services B.V.

<table>
<tr> <th> Name of student/researcher(s) </th> <th> Leonardo Roses </th> </tr>
<tr> <td> Name of group/project </td> <td> HyGear Technology & Services B.V. </td> </tr>
<tr> <td> Description of your research </td> <td> _Pilot scale design and construction. Oxygen generator_ </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> </td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **06/06/2016** </td> </tr>
<tr> <td> Date last update </td> <td> </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _Leonardo Roses_ </td> </tr>
</table>

### 7.1. Data Collection

Test data. I/O to PLC (flows, temperatures, pressures, levels, concentrations). Gas analysis. Generally in summary form, such as Excel graphs, PowerPoint presentations and tables/plots in deliverables.

### 7.2. Data Storage and Back-up

Data stored on the company server. Storage and back-up managed at corporate level and implemented by the IT department. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 7.3. Data Documentation

Periodic (interim) reports (deliverables D1.3-D1.7) and RTD deliverables in WP3-WP9.

### 7.4. Data Access

Internal access limited by control of password-protected network access with restricted access. IPR managed by the CTO.

### 7.5. Data Sharing and Reuse

Data access restricted outside of the consortium by the agreed publication process within the MEMERE partner consortium agreement. Approval for publication to be authorized by the project manager.

### 7.6. Data Preservation and Archiving

Long-term data archiving of electronic data managed centrally by the IT department.
## 8\. Partner Quantis sarl

<table>
<tr> <th> Name of student/researcher(s) </th> <th> Angela Adams, Christopher Zimdars, and Arnaud Dauriat </th> </tr>
<tr> <td> Name of group/project </td> <td> Quantis / MEMERE (WP8) </td> </tr>
<tr> <td> Description of your research </td> <td> In the frame of WP8, Quantis will be responsible for the environmental life cycle assessment (LCA) and life cycle costing (LCC). The objectives of the LCA and LCC are to i) evaluate the environmental and cost performance of the investigated novel OCM technology, and ii) guide the design and development of the novel OCM technology towards more sustainable solutions. </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> WP8 partners: Quantis, Johnson Matthey, HyGear, PNO </td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **10-01-2016** </td> </tr>
<tr> <td> Date last update </td> <td> **06-06-2016** </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Angela Adams, Quantis </td> </tr>
</table>

### 8.1. Data Collection

_Data to be collected_ includes:

* **Ethylene manufacturing**
  * Characteristics of the investigated production facility (incl. annual feed capacity, annual production capacity, location, etc.)
  * Yield (kg ethylene per Nm³ natural gas)
  * Input of raw materials (e.g. natural gas, compressed air, steam), chemicals, consumables, etc.
  * Output of main product (ethylene) and co-products (e.g. syngas, ethane, heat)
  * Direct process emissions
  * Electricity and fuel/net thermal energy use
  * Water use
  * Generation of solid wastes and liquid effluents
* **Infrastructure data**
  * Characteristics of the reactor, membranes, catalysts
  * Reactor data, incl. materials and mass
  * Membrane data, incl. design area, type of membrane, composition, number of tubes, lifetime, etc.
  * Catalyst data, incl. catalyst amount and composition, filler amount and composition, lifetime, etc.
* **Ethylene application**
  * Definition of the application
  * Characterisation of the distribution stage
  * Characterisation of the use stage
  * Characterisation of the end-of-life stage
* **Cost data for LCC**
  * Market prices of all exchanges (input to/output from the system)
* **Emission factors (EFs) for LCA**
  * EFs of all exchanges (input to/output from the system)

Sources per type of data are described in the table below:

<table>
<tr> <th> **Type of data** </th> <th> **Possible sources of data** </th> </tr>
<tr> <td> Ethylene manufacturing </td> <td> Data to be collected mostly from process modelling partner TU/e, based on strong experience from past and running FP7 projects, in the form of e.g.
ASPEN export data sheets including complete mass and energy balance </td> </tr>
<tr> <td> Infrastructure data </td> <td> Data to be collected from membrane/catalyst project partners, building upon the experience from the DEMCAMER project (data collection template based on the existing DEMCAMER template) </td> </tr>
<tr> <td> Ethylene application </td> <td> Literature data, life cycle inventory databases, expert judgment from technical partners and end-users, experience from former related projects (e.g. DEMCAMER) </td> </tr>
<tr> <td> Cost data for LCC </td> <td> Market prices to be obtained from specialised literature or cost databases, or from partners </td> </tr>
<tr> <td> Emission factors (EFs) for environmental LCA </td> <td> Emission factors to be obtained from available LCI databases (standard processes), or developed by Quantis when not available in an LCI database or not specific enough (custom processes) </td> </tr>
</table>

Data will be collected either by means of data collection templates (mostly Excel or Word files), with the version number explicitly stated in the filename as well as in the file itself. The data from the templates will then be implemented in the LCA/LCC models in our LCA software Quantis SUITE 2.0, thereby preventing any risk of losing the data. Pre-existing data will be used as much as possible, e.g. from DEMCAMER.

_Data to be produced_ includes:

* LCA/LCC model in professional LCA software (accessible only by Quantis, although reader access to the results may be provided to specific project partners upon request)
* LCA/LCC results in the form of graphs and tables (Excel file, PowerPoint for reporting)

### 8.2. Data Storage and Back-up

Both the collected and the produced data are stored on Quantis staff computers. All Quantis computers are backed up on a weekly basis. In addition, most projects at Quantis are stored on Quantis’s Google Drive (which is an additional backup). The data in our LCA software Quantis SUITE 2.0 are backed up every day, thereby limiting the loss of data to a minimum.

### 8.3. Data Documentation

Data documentation is inherent to the activities in LCA. All the data in the LCA/LCC models are fully documented and related to the data collection templates. This allows other colleagues to determine where the data come from and/or how the data were obtained/calculated. LCI/LCA data are generally based on the EcoSpold data format. The latter may be useful for sharing data across LCA software or platforms, but it is not so relevant in the context of the present project. In the MEMERE project, no specific LCI/LCA data format will be used unless specifically required in a given situation (to be decided throughout the project). As for the naming convention, the project naming convention will be used if there is one. If not, Quantis’s naming convention will be applied.

### 8.4. Data Access

Copyright or Intellectual Property Rights are not applicable to the data generated by Quantis. Access to project data within Quantis will be limited to the project team. The data stored in Quantis SUITE 2.0 are fully secure, following the security policy of our professional LCA software. As stated previously, the LCA/LCC model in Quantis SUITE 2.0 shall be accessible only by the Quantis team. Reader access to the project in Quantis SUITE 2.0 may, however, be provided to specific project partners upon request.
### 8.5. Data Sharing and Reuse

Whether the results of the LCA/LCC can be shared or disclosed outside the project shall be discussed and agreed upon at the consortium level. Unless a specific decision has been made, Quantis will not share any of the LCA/LCC results outside of the consortium.

### 8.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed. In the absence of such a project-specific policy, the data and LCA models shall be archived and stored like the data from any other Quantis project (i.e. on the Quantis SUITE 2.0 server and on Quantis’s Google Drive).

## 9\. Partner Finden Ltd

<table>
<tr> <th> Name of student/researcher(s) </th> <th> Simon Jacques, Andrew Beale, Dorota Matras </th> </tr>
<tr> <td> Name of group/project </td> <td> Finden / MEMERE (WP2 & WP4) </td> </tr>
<tr> <td> Description of your research </td> <td> In the frame of WP2 and WP4, Finden will be responsible for the static, in situ and operando characterisation of catalyst materials and catalytic membrane reactors, predominantly by X-ray diffraction methods. </td> </tr>
<tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr>
<tr> <td> Grant number </td> <td> **679933** </td> </tr>
<tr> <td> Partner organisations </td> <td> WP2 partners: TUE, VITO, TUB, MARION, JM. WP4 partners: TUE, TUB, TECNALIA, VITO, HYG </td> </tr>
<tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr>
<tr> <td> Date written </td> <td> **05-02-2016** </td> </tr>
<tr> <td> Date last update </td> <td> </td> </tr>
<tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr>
<tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Simon Jacques </td> </tr>
</table>

### 9.1. Data Collection

_Data (and sizes) to be collected includes:_

* **Standard laboratory XRD (typically 100 kB per set)**
  * Static 1D powder patterns of powdered catalysts
  * Static 1D powder patterns of powdered membranes
  * In situ 1D powder pattern series of powdered catalysts under imposed conditions
  * In situ 1D powder pattern series of powdered membranes under imposed conditions
* **Laboratory micro-CT (typically 20 GB per set)**
  * Static micro-CT of structured catalysts
* **Laboratory fluorescence / diffraction imaging (typically 100 MB per set)**
  * Static hyperspectral (3D data cubes) images
  * In situ hyperspectral (3D data cubes) series under imposed conditions
* **Synchrotron based micro-CT (typically 20 GB per set)**
  * Static micro-CT of structured catalysts (3D images)
  * Static micro-CT of CMR catalysts (3D images)
* **Synchrotron based XRD-CT (typically 0.1 TB per set)**
  * In situ and in operando XRD-CT yielding very large volume raw data sets, each XRD-CT comprising many thousands of 2D images

_Data processing will yield new data:_

* **Standard laboratory XRD (typically 100 kB per set)**
  * Crystallographic phases will be identified and quantified. Physical parameters such as peak width will also be extracted. This will yield tabulated data (stored in Excel or an equivalent file format). Various stack plots will be generated, yielding graphs (stored as PNG or an equivalent format).
* **Laboratory micro-CT (typically 20 GB per set)**
  * Data will be reconstructed, yielding volume images (stored as vol files, which are a binary file format), with processing steps along the way stored in Matlab file format
* **Laboratory fluorescence / diffraction imaging (typically 100 MB per set)**
  * Data will be reconstructed, yielding volume images (stored in hxt file format, also binary), with processing steps along the way stored in Matlab file format
* **Synchrotron based micro-CT (typically 20 GB per set)**
  * Data will be reconstructed, yielding volume images (stored as vol files, which are a binary file format), with processing steps along the way stored in Matlab file format
* **Synchrotron based XRD-CT (typically 250 MB per set)**
  * Raw files will be pre-processed to yield reduced XRD-CT data sets stored in hxt and/or Matlab file format, with processing steps along the way stored in Matlab file format

_Reproducibility:_ Raw data will not be reproducible without repeat measurement. Processed data can be regenerated from the raw data. Crucially, processing scripts (stored as .m files) will be needed to quickly regenerate processed data.

_Storage:_ We estimate we will need ca. 100 TB for raw data, 50 TB for temporary storage of processed data and 10 TB for final processed data.

_Version control:_ We will date-stamp processing scripts on creation and modification. For reasons of disc space usage we will, in most cases, overwrite processed data where the processing is to be modified.

_Software tools:_ Almost all processing will be done using Matlab. We will also use a variety of image processing and standard XRD software programs, including ImageJ, Avizo, X'Pert HighScore Plus, GSAS and TOPAS.

_Pre-existing data:_ We will use various powder diffraction databases, including the ICSD and ICDD databases.

### 9.2. Data Storage and Back-up

All raw XRD-CT and synchrotron micro-CT data will be copied to portable NAS discs. All raw XRD-CT and synchrotron micro-CT data will also be archived in up to 100 TB of data space (this data backup is itself mirrored). Processing of XRD-CT and synchrotron micro-CT will take place in a 50 TB space. Laboratory micro-CT, fluorescence and imaging data and laboratory XRD data will be stored on a distributed NAS RAID system, which is itself mirrored. A small amount of cloud storage (ca. 10 TB) will be used for processing and exchanging data. Backup frequency will be weekly on local PCs; this will cover written scripts etc. Mirrors should operate nightly or over weekends for most raw data, where the mirrors are in different physical locations. All raw data will be read-only. Archives will be restricted to Finden personnel and any associated Finden/MEMERE research students.

### 9.3. Data Documentation

Data documentation is yet to be fully determined. Processing scripts will use soft links to be more versatile. Data inside Matlab files will be stored in structures. Electronic log books (including Excel sheets and OneNote or equivalent formats) will be used to store the locations, indexes and descriptions of archived and processed files. For processed data, we will use dates within file names as an additional level of version control, and associated text or html files as file descriptions.
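As a minimal sketch of this convention (the file names, fields and values below are hypothetical), a processed result could be written together with its description file as follows:

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical example of the convention above: a date within the processed file
# name as an extra level of version control, plus an associated description file.
stamp = date.today().strftime("%Y%m%d")
processed = Path(f"xrdct_reconstruction_{stamp}.mat")   # processed volume (placeholder name)
sidecar = processed.with_suffix(".json")                # description file alongside the data

sidecar.write_text(json.dumps({
    "source": "raw XRD-CT scan 0042 (hypothetical)",
    "script": "reconstruct_v3.m",
    "created": stamp,
}, indent=2))
```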
Access to processed data within Finden will be limited to the project team unless further dissemination is agreed/required by the project partners. Access to the raw large volume data will be restricted for data security reasons but can be shared upon request; this data unless copied will be read only. ### 9.5. Data Sharing and Reuse Processed data will be shared with project partners via a secure cloud storage system. For smaller files, data can be shared via email. Publishing of data will be in accordance with the Memere policies and subservient to this the Journal/Repository policy. We will seek permission for reuse of data or sharing with outside parties from the project partners. Shared multi- dimensional imaging data will be visualised using Matlab and/or imagej and using inhouse developed software (that we can share with the partners). All other graphs etc can be shared as standard formats such as xls, png, tif, pdf. ### 9.6. Data Preservation and Archiving During the course of the project all reduced raw and processed data will be stored and archived; the archive will be maintained for a minimum of 12 months after termination of the project. We can, at this time, guarantee the archive of the large volume raw data for a minimum of 3 months after data collection and are currently looking into ways to extend this archive period. ## 10\. Partner Johnson Matthey PLC <table> <tr> <th> Name of student/researcher(s) </th> <th> Stephen Poulston </th> </tr> <tr> <td> Name of group/project </td> <td> Johnson Matthey Technology Centre </td> </tr> <tr> <td> Description of your research </td> <td> _Catalyst preparation and testing._ </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **04/02/2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> </td> </tr> </table> **10.1. Data Collection** Experimental data. Generally in summary form such as excel graphs and powerpoint presentations **10.2. Data Storage and Back-up** Data stored on company server. Storage and back-up managed at corporate level by IT department. ### 10.3. Data Documentation Periodic reports; monthly, 6 monthly etc. Experiments recorded electronically in ‘electronic notebook’ with links to additional data files. Each experiment recorded in this way has a unique identification code assigned. ### 10.4. Data Access Internal access limited by control of password protected network access with restricted access. IPR managed by relevant group business manager and group legal. ### 10.5. Data Sharing and Reuse Data access restricted outside of consortium by agreed publication process within MEMERE partner consortium agreement. Internal approval for publication within JM controlled by internal approval process agreed at group level. **10.6. Data Preservation and Archiving** Long term data archiving of electronic data managed centrally by group IT. ## 11\. 
Partner Rauschert Heinersdorf-Pressig GmbH <table> <tr> <th> Name of student/researcher(s) </th> <th> Ulrich Werr Ralph Weckel </th> </tr> <tr> <td> </td> <td> Violetta Prehn Dr. Ralf Girmscheid Egbert Martin Rainer Thoma </td> </tr> <tr> <td> Name of group/project </td> <td> Rauschert Heinersdorf-Pressig GmbH (RHP)/ MEMERE-Project </td> </tr> <tr> <td> Description of your research </td> <td> _Development and Production of porous ceramic supports for MIEC-membranes based on Y-FSZ and/or MgO. Also includes dense feed tubes and seals to metal tubes suitable at elevated temperatures_ </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP3: TUE, TECNALIA, VITO, HYGEAR, MARION </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **02-05-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _Ulrich Werr (Project Manager RHP for MEMERE)_ </td> </tr> </table> ### 11.1. Data Collection RHP will collect data during development and manufacturing of porous ceramics substrates. It will be handwritten observations in Lab journals, results from Analysis and pictures (photos) Excel-files, Word-files, pdf-files, jpg-files Important data will be summarised in reports and printed out. So they will be available in paper form for long periods as well. Estimated size of the project folder: 2 GB Windows based software is used as preferred software to create/process and visualise data. Pre-existing data will be used from DEMCAMER (FP7) ### 11.2. Data Storage and Back-up All data, no matter if raw date or processed date will be stored at a shared folder on the Rauschert file server. This is back up daily and stored externally in a different location by a service company on hard disks. It is not allowed to safe any data only on hard discs on PCs or laptop-computers (Acc. To the ITuser-rules any RHP-employee has signed). ### 11.3. Data Documentation Data created are easy to understand, they will be summarised in regular reports that help to understand them. The convention is to name the files with self-explaining names and to use folders to bring a logical structure into the data storage. The project manager in RHP is responsible to set up the general structure of the folder and to share it with the other collaborators. ### 11.4. Data Access Copyright and intellectual properties will be handled acc. to German law (especially: Arbeitnehmererfindergesetz) and the rules of the EC, Grant Agreement etc.. Data are only accessible for co-workers in the project and the IT-staff. It is controlled by the project manager for MEMERE in RHP. ### 11.5. Data Sharing and Reuse All members of MEMERE-project are allowed to use data within the project. Any further distribution, use or share of the data requires the written approval of RHP. RHP will deny the external use in case, the data are commercial important data or know-how, that will endanger the commercial use of the new development made. 
Scientific important data will that cannot be converted into commercial products can be shared upon request and approval through RHP to external parties. Windows-based software is as a standard tool to share such data. ## 11\. 6. Data Preservation and Archiving The full data will be stored for 5 years minimum, after that period it may be allowed to reduce the date to only relevant data. Written reports will be kept for 10 years min in paper form. ## 12\. Partner Ciaotech s.r.l. (100% PNO Group B.V.) <table> <tr> <th> Name of student/researcher(s) </th> <th> Marco Molica Colella </th> </tr> <tr> <td> Name of group/project </td> <td> CiaoTech/PNO / MEMERE (WP9) </td> </tr> <tr> <td> Description of your research </td> <td> In the frame of WP9 CiaoTech/PNO will have a major role in carrying on a Stakeholder analysis to be used as a reference point for the following exploitation and dissemination activities </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP9 partners: ALL </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **10-01-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Marco Molica Colella, CiaoTech </td> </tr> </table> ### 12.1. Data Collection _Data to be collected_ include: * Sets of Keywords related to project topics needed to search for * Partners’ own – already known - stakeholder lists * Dedicated survey forms * EU projects - related to similar topics - data Sources per type of data are described in the table below: <table> <tr> <th> **Type of data** </th> <th> **Possible sources of data** </th> </tr> <tr> <td> Keywords and stakeholders list </td> <td> These data will be collected through discussion among partners and put on dedicated text files on Excel and Word documents format. </td> </tr> <tr> <td> Dedicated Surveys form </td> <td> Word Document and, possibly, standard on-line platforms to assemble the survey (to be chosen through a shared set of possibilities among partners) and provide easier access </td> </tr> <tr> <td> EU project data </td> <td> Literature data, CORDIS database and CiaoTech/PNO own databases and Open Innovation Platform _‘Innovation Place’_ </td> </tr> </table> Data are to be collected mostly by Excel or Word files, provided with version numbers in the name as well as into the file. A final version to be used in the following project deliverables will be opportunely identified and also marked by the finalization date. _Main Data to be produced_ include: * A _survey form_ to be spread among stakeholders * A _Power Point_ presentation and a _Word_ summary document for the complete Stakeholder analysis, detailing activities, methodologies and extracted results ### 12.2. Data Storage and Back-up All the collected data will be stored on CiaoTech/PNO hardware, accessible only by consultants dedicated to the activities, which are standardly protected by partners. **12.3. Data Documentation** The name standard will be sufficient to keep trace of different versions and release dates. **12.4. Data Access** No IPR will be claimed. 
Data will be kept on CiaoTech/PNO own hardware, as said before. ### 12.5. Data Sharing and Reuse No sharing will be allowed outside project consortium or unless the consortium allows it. Sharing and Reuse among partners will be available continuously. It will be done by standard e-mail, if not specifically decided. ### 12.6. Data Preservation and Archiving The consortium policy for data preservation and archiving shall be followed. In the absence of such a project-specific policy, the data will be kept on CiaoTech/PNO hardware. A specific folder can be loaded on shared archives only accessible to Ciaotech/PNO consultants’ network. For the sake of simplicity, only the final version of each file will be retained at project end.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0331_MEMERE_679933.md
## 1\. EXECUTIVE SUMMARY

### 1.1. Description of the deliverable content and purpose

This deliverable reports the first version of the Data Management Plan for the project MEMERE. The deliverable is based on a document prepared by the federation of the 3 technical universities of the Netherlands. The document has been shared with all partners and each partner has described the way the research data will be managed. To keep the data management plan clear, the document reports the information partner by partner.

### 1.2. Brief description of the state of the art and the innovation brought

n/a

### 1.3. Deviation from objectives

n/a

### 1.4. If relevant: corrective actions

n/a

### 1.5. If relevant: Intellectual property rights

n/a

## 2\. Partner TUE

<table> <tr> <th> Name of student/researcher(s) </th> <th> Dr. Fausto Gallucci, Dr. Vincenzo Spallina, Mr. Aitor Cruellas, Dr. Martin van Sint Annaland </th> </tr> <tr> <td> Name of group/project </td> <td> Eindhoven University of Technology, Chemical Process Intensification </td> </tr> <tr> <td> Description of your research </td> <td> Development of novel multifunctional reactors integrating reaction and separation. Development and use of detailed models and novel non-invasive monitoring techniques. </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> MEMERE consortium </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **04-01-2016** </td> </tr> <tr> <td> Date last update </td> <td> **04-01-2016** </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _See above_ </td> </tr> </table>

### 2.1. Data Collection

Data to be created/collected includes:

* Process parameters for membranes and membrane reactors
* Characterization of the membranes at TUE: measurements
* Modelling data, both detailed models and Aspen models
* Catalyst characterization
* High temperature fluidization with membranes, PIV data

Sources per type of data are described below:

<table> <tr> <th> Type of data </th> <th> Possible source of data </th> </tr> <tr> <td> Process parameters </td> <td> Experimental data. Data will be generated along the project life. Literature data will also be considered. </td> </tr> <tr> <td> Characterization of the membranes </td> <td> Experimental data. Data will be generated along the project life. </td> </tr> <tr> <td> Modelling </td> <td> In-house codes for fluidized bed membrane reactors and Aspen models will be used. </td> </tr> <tr> <td> Catalyst characterization </td> <td> Experimental data. Data will be generated along the project life. </td> </tr> <tr> <td> High temperature fluidization with membranes </td> <td> Experimental data including images. Data will be generated along the project life. </td> </tr> </table>

Data will be collected either by means of data collection templates (mostly Excel or Word files and TIF images), with the version number explicitly stated in the filename as well as in the file itself; a minimal sketch of such a naming convention is given below.
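To make this convention concrete, here is a minimal Python sketch of how such version-stamped template filenames could be generated and parsed. The stem, the `_vNN_` tag and the date stamp are assumptions made for this example, not a format the consortium has formally fixed.

```python
# Minimal sketch of a version-stamped filename convention for data
# collection templates. The stem, the "_vNN_" tag and the date stamp are
# illustrative assumptions, not a format fixed by the consortium.
import re
from datetime import date

def versioned_filename(stem: str, version: int, ext: str) -> str:
    """e.g. versioned_filename('membrane_characterisation', 3, 'xlsx')
    -> 'membrane_characterisation_v03_2016-06-01.xlsx'"""
    return f"{stem}_v{version:02d}_{date.today().isoformat()}.{ext}"

def extract_version(filename: str) -> int:
    """Read the version number back out of a filename."""
    match = re.search(r"_v(\d+)_", filename)
    if match is None:
        raise ValueError(f"no version tag in {filename!r}")
    return int(match.group(1))

name = versioned_filename("membrane_characterisation", 3, "xlsx")
assert extract_version(name) == 3
```

Stating the version both in the filename and inside the file, as prescribed above, makes it easy to spot when the two drift apart.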
### 2.2. Data Storage and Back-up

Data are stored in different locations: firstly on the computers of the researchers and on lab computers. The data from experimental facilities are also generally saved on the PLC of the setup. SPI has adopted a cloud application and each student saves the data on the cloud; the data are thus stored on several computers and accessible by staff members of SPI. Images are also stored on external HDs (generally 4 TB each). TUE is also discussing the possibility of adopting an electronic lab notebook; if adopted, it will also be used for MEMERE.

### 2.3. Data Documentation

In the MEMERE project no specific data format will be used unless specifically required in a given situation (to be decided along the project). As far as naming conventions are concerned, the project naming convention will be used if there is one. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 2.4. Data Access

Copyright or Intellectual Property Rights are applicable to the main types of data generated by TUE. Data storage at TUE is fully secure, as the system has been designed for this. Data access is provided in the Project Folder by the Project Manager (name by name).

### 2.5. Data Sharing and Reuse

Once any copyright or Intellectual Property Rights are protected, part of the data could be shared outside the consortium. Approval of the consortium (according to the Grant Agreement / Consortium Agreement) will be required for any publications outside the consortium. Publication will be carried out in different journals and websites. Open Access following the Green or Golden Route will be used. As policy of TUE, all data published should be made available through the repository of TUE. When possible, raw data will be published along with the main articles as supplementary material. All data generated by TUE (after IP protection) will also be made available on the MEMERE website.

### 2.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed (at least 5 years after the formal end of the project: five years after the payment of the balance, see the Grant Agreement). In the absence of such a project-specific policy, the data shall be archived and stored like the data from any other TUE project. All data related to PhD theses should be stored by TUE for 5-10 years after the PhD has been granted.

## 3\. Partner TECNALIA

<table> <tr> <th> Name of student/researcher(s) </th> <th> </th> </tr> <tr> <td> Name of group/project </td> <td> TECNALIA / MEMERE (WP3 Membrane development) </td> </tr> <tr> <td> Description of your research </td> <td> Development of oxygen supported membranes and membrane characterization (Leader of WP on membrane development, Technical Manager, membrane developer) </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **30-03-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors.
</td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Alfredo Pacheco Tanaka, Ekain Fernández, José Luis Viviente </td> </tr> </table>

### 3.1. Data Collection

Data to be created/collected includes:

* Process parameters for the development of oxygen membranes at TECNALIA
* Characterization of the membranes at TECNALIA: measurements
* State of the art on oxygen membranes

Sources per type of data are described below:

<table> <tr> <th> Type of data </th> <th> Possible source of data </th> </tr> <tr> <td> Process parameters for the development of oxygen membranes </td> <td> Experimental data. Data will be generated along the project life. Initially, they will be collected mostly from experience from past projects and running FP7 projects. Literature data will also be considered. </td> </tr> <tr> <td> Characterization of the membranes </td> <td> Observational data. Data will be generated along the project life. </td> </tr> <tr> <td> State of the art on oxygen membranes </td> <td> From the literature, including patents. It will be updated along the project. </td> </tr> </table>

Data will be collected either by means of data collection templates (mostly Excel or Word files), with the version number explicitly stated in the filename as well as in the file itself.

### 3.2. Data Storage and Back-up

Both the collected and produced data are stored initially on TECNALIA staff and/or individual testing computers (lab testing notebook). In addition, each project at TECNALIA has a folder on the central server where all the data and documents generated in relation to the project are stored. Access to this folder is granted to the staff working on this project (access provided by the Project Manager case by case). This server is backed up every day, thereby limiting the loss of data to a minimum.

### 3.3. Data Documentation

Data documentation is inherent to the activities at TECNALIA. TECNALIA holds several accreditations and certificates, such as the Certification of the Quality Management System as per UNE-EN-ISO 9001:2008 for the Management of Projects of Research, Technological Innovation and Development, Tests and Assays, and Client Technological Assessment, as well as ISO 14001:2004 (Environmental Management System). In the MEMERE project no specific data format will be used unless specifically required in a given situation (to be decided along the project). As far as naming conventions are concerned, the project naming convention will be used if there is one. If not, TECNALIA's naming convention will be applied. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 3.4. Data Access

Copyright or Intellectual Property Rights are applicable to the main types of data generated by TECNALIA (process parameters for the development of oxygen membranes and characterization of the membranes: measurements). Data storage at TECNALIA is fully secure, as the system has been designed for this. Process parameters for manufacturing the oxygen membranes will only be accessible by the TECNALIA team. Reader access to data coming from the characterization of membranes may however be provided to specific project partners upon request. Data access is provided in the Project Folder by the Project Manager (name by name).

### 3.5. Data Sharing and Reuse
Once any copyright or Intellectual Property Rights are protected, part of the data on membrane characterisation could be shared outside the consortium. Approval of the consortium (according to the Grant Agreement / Consortium Agreement) will be required for any publications outside the consortium. Detailed process-manufacturing information will not be shared in any case, to avoid the loss of any intellectual property rights or copyright.

Publication will mainly be carried out in different journals and websites. Open Access following the Green or Golden Route will be considered. TECNALIA has recently developed its own institutional repository: TECNALIA Publications ( _http://dsp.tecnalia.com/_ ). All the scientific publications produced in MEMERE in which TECNALIA participates as author will be deposited in this repository for scientific publications, regardless of whether they are also deposited in other repository(s). TECNALIA Publications has been developed following RECOLECTA directions and facilities in order to fulfil international interoperability standards and protocols and gain long-term sustainability. RECOLECTA (Open Science Harvester) is a platform that gathers all the Spanish scientific repositories in one place and provides services to repository managers, researchers and decision-makers ( _http://recolecta.fecyt.es/_ ). This platform is the result of the collaboration between the Spanish Foundation for Science and Technology (FECYT) and the Network of Spanish University Libraries (REBIUN) run by the Conference of Vice-Chancellors of Spanish Universities (CRUE), with the aim of creating a nationwide infrastructure of Open Access scientific repositories. TECNALIA Publications is indexed by Google, and harvested by Recolecta (FECYT) and OpenAIRE ( _https://www.openaire.eu/_ ), the open access infrastructure of the EC.

### 3.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed (at least 5 years after the formal end of the project: five years after the payment of the balance, see the Grant Agreement). In the absence of such a project-specific policy, the data shall be archived and stored like the data from any other TECNALIA project.

## 4\. Partner VITO

<table> <tr> <th> Name of student/researcher(s) </th> <th> Vesna Middelkoop, Bart Michielsen </th> </tr> <tr> <td> Name of group/project </td> <td> _Group_ : VITO-Sustainable Materials Management _Project_ : "MEthane activation via integrated MEmbrane Reactors" - MEMERE </td> </tr> <tr> <td> Description of your research </td> <td> Development and characterisation of catalytic structures and membranes. Characterisation will include acquiring, storing and processing a large volume of data through spectroscopic and X-ray measurements (ranging from 5 MB per run for ex situ measurements to terabytes of data for operando/in situ measurements) </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> Eindhoven University of Technology, Tecnalia, Berlin University of Technology, Marion Technologies, Hygear Technology and Services B.V., Quantis sarl, Finden Ltd, Johnson Matthey PLC, Rauschert Heinersdorf-Pressig GmbH, Ciaotech s.r.l. (100% PNO Group B.V.)
</td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **06-01-2016** </td> </tr> <tr> <td> Date last update </td> <td> **11-01-2016** </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _See above_ </td> </tr> </table>

The partner VITO will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 5\. Partner Berlin University of Technology

The partner Berlin University of Technology will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 6\. Partner Marion Technologies

The partner Marion Technologies will make available all data produced in relation to MEMERE on their repository and on the MEMERE website.

## 7\. Partner Hygear Technology and Services B.V.

<table> <tr> <th> Name of student/researcher(s) </th> <th> Leonardo Roses </th> </tr> <tr> <td> Name of group/project </td> <td> HyGear Technology & Services B.V. </td> </tr> <tr> <td> Description of your research </td> <td> _Pilot scale design and construction. Oxygen generator_ </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **06/06/2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _Leonardo Roses_ </td> </tr> </table>

### 7.1. Data Collection

Test data: I/O to the PLC (flows, temperatures, pressures, levels, concentrations) and gas analysis. Generally in summary form, such as Excel graphs, PowerPoint presentations and tables/plots in deliverables.

### 7.2. Data Storage and Back-up

Data are stored on the company server. Storage and back-up are managed at corporate level and implemented by the IT department. Data will be stored for at least 5 years after the formal end of the project: five years after the payment of the balance (see the Grant Agreement).

### 7.3. Data Documentation

Periodic (interim) reports (deliverables D1.3-D1.7) and RTD deliverables in WP3-WP9.

### 7.4. Data Access

Internal access is limited by password-protected network access with restricted access. IPR is managed by the CTO.

### 7.5. Data Sharing and Reuse

Data access outside the consortium is restricted by the publication process agreed in the MEMERE partner consortium agreement. Approval for publication is to be authorised by the project manager.

### 7.6. Data Preservation and Archiving

Long-term archiving of electronic data is managed centrally by the IT department.

## 8\. Partner Quantis sarl
<table> <tr> <th> Name of student/researcher(s) </th> <th> Angela Adams, Andrea Del Duce and Arnaud Dauriat </th> </tr> <tr> <td> Name of group/project </td> <td> Quantis / MEMERE (WP8) </td> </tr> <tr> <td> Description of your research </td> <td> In the frame of WP8, Quantis will be responsible for the environmental life cycle assessment (LCA) and life cycle costing (LCC). The objectives of the LCA and LCC are to i) evaluate the environmental and cost performance of the investigated novel OCM technology, and ii) guide the design and development of the novel OCM technology towards more sustainable solutions. </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP8 partners: Quantis, Johnson Matthey, HyGear, PNO </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **10-01-2016** </td> </tr> <tr> <td> Date last update </td> <td> **06-06-2016** </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Angela Adams, Quantis </td> </tr> </table>

### 8.1. Data Collection

_Data to be collected_ includes:

* **Ethylene manufacturing**
  * Characteristics of the investigated production facility (incl. annual feed capacity, annual production capacity, location, etc.)
  * Yield (kg ethylene per Nm3 natural gas)
  * Input of raw materials (e.g. natural gas, compressed air, steam), chemicals, consumables, etc.
  * Output of main product (ethylene) and co-products (e.g. syngas, ethane, heat)
  * Direct process emissions
  * Electricity and fuel/net thermal energy use
  * Water use
  * Generation of solid wastes and liquid effluents
* **Infrastructure data**
  * Characteristics of the reactor, membranes, catalysts
  * Reactor data, incl. materials and mass
  * Membrane data, incl. design area, type of membrane, composition, number of tubes, lifetime, etc.
  * Catalyst data, incl. catalyst amount and composition, filler amount and composition, lifetime, etc.
* **Ethylene application**
  * Definition of the application
  * Characterisation of the distribution stage
  * Characterisation of the use stage
  * Characterisation of the end-of-life stage
* **Cost data for LCC**
  * Market prices of all exchanges (input to/output from the system)
* **Emission factors (EFs) for LCA**
  * EFs of all exchanges (input to/output from the system)

Sources per type of data are described in the table below:
<table> <tr> <th> **Type of data** </th> <th> **Possible sources of data** </th> </tr> <tr> <td> Ethylene manufacturing </td> <td> Data to be collected mostly from process modelling partner TU/e, based on strong experience from past and running FP7 projects, in the form of e.g. ASPEN export data sheets including the complete mass and energy balance </td> </tr> <tr> <td> Infrastructure data </td> <td> Data to be collected from membrane/catalyst project partners, building upon the experience from the DEMCAMER project (data collection template based on the existing DEMCAMER template) </td> </tr> <tr> <td> Ethylene application </td> <td> Literature data, life cycle inventory databases, expert judgment from technical partners and end-users, experience from former related projects (e.g. DEMCAMER) </td> </tr> <tr> <td> Cost data for LCC </td> <td> Market prices to be obtained from specialised literature or cost databases, or from partners </td> </tr> <tr> <td> Emission factors (EFs) for environmental LCA </td> <td> Emission factors to be obtained from available LCI databases (standard processes), or developed by Quantis when not available in an LCI database or not specific enough (custom processes) </td> </tr> </table>

Data will be collected either by means of data collection templates (mostly Excel or Word files), with the version number explicitly stated in the filename as well as in the file itself. The data from the templates will then be implemented in the LCA/LCC models in our LCA software Quantis SUITE 2.0, thereby preventing any risk of losing the data. Pre-existing data will be used as much as possible, e.g. from DEMCAMER.

_Data to be produced_ includes:

* LCA/LCC model in professional LCA software (accessible only by Quantis, although reader access to the results may be provided to specific project partners upon request)
* LCA/LCC results in the form of graphs and tables (Excel file, PowerPoint for reporting); the sketch after this list illustrates how the collected exchanges and emission factors combine into such results
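The sketch below makes the aggregation step behind those results concrete: each inventory exchange is multiplied by its emission factor and the products are summed into an impact score. All numbers are invented placeholders, not project data; the real models are built in Quantis SUITE 2.0 from the data collection templates.

```python
# Schematic LCA aggregation step: impact = sum over exchanges of
# (exchange amount x emission factor). All values below are invented
# placeholders, not project data.
inventory = {            # exchanges per functional unit (e.g. 1 kg ethylene)
    "natural gas [Nm3]": 1.8,
    "electricity [kWh]": 0.9,
    "steam [kg]": 2.1,
}
emission_factors = {     # kg CO2-eq per unit of exchange
    "natural gas [Nm3]": 2.2,
    "electricity [kWh]": 0.45,
    "steam [kg]": 0.2,
}
climate_impact = sum(amount * emission_factors[flow]
                     for flow, amount in inventory.items())
print(f"Climate change score: {climate_impact:.2f} kg CO2-eq per functional unit")
```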
### 8.2. Data Storage and Back-up

Both the collected and produced data are stored on Quantis staff computers. All Quantis computers are backed up on a weekly basis. In addition, most projects at Quantis are stored on Quantis's Google Drive (which is an additional backup). The data in our LCA software Quantis SUITE 2.0 are backed up every day, thereby limiting the loss of data to a minimum.

### 8.3. Data Documentation

Data documentation is inherent to the activities in LCA. All the data in the LCA/LCC models are fully documented and related to the data collection templates, so that other colleagues are able to determine where the data come from and/or how they were obtained/calculated. LCI/LCA data are generally based on the EcoSpold data format. The latter may be useful for sharing data across LCA software or platforms, but it is not so relevant in the context of the present project. In the MEMERE project, no specific LCI/LCA data format will be used unless specifically required in a given situation (to be decided throughout the project). As for the naming convention, the project naming convention will be used if there is one. If not, Quantis's naming convention will be applied.

### 8.4. Data Access

Copyright or Intellectual Property Rights are not applicable to the data generated by Quantis. Access to project data within Quantis will be limited to the project team. The data stored in Quantis SUITE 2.0 are fully secure, following the security policy of our professional LCA software. As stated previously, the LCA/LCC model in Quantis SUITE 2.0 shall be accessible only by the Quantis team. Reader access to the project in Quantis SUITE 2.0 may however be provided to specific project partners upon request.

### 8.5. Data Sharing and Reuse

Whether the results of the LCA/LCC can be shared or disclosed outside the project shall be discussed and agreed upon at the consortium level. Unless a specific decision has been made, Quantis will not share any of the LCA/LCC results outside of the consortium.

### 8.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed. In the absence of such a project-specific policy, the data and LCA models shall be archived and stored like the data from any other Quantis project (i.e. on the Quantis SUITE 2.0 server and on Quantis's Google Drive).

## 9\. Partner Finden ltd

<table> <tr> <th> Name of student/researcher(s) </th> <th> Simon Jacques, Andrew Beale, Dorota Matras </th> </tr> <tr> <td> Name of group/project </td> <td> Finden / MEMERE (WP2 & WP4) </td> </tr> <tr> <td> Description of your research </td> <td> In the frame of WP2 and WP4 Finden will be responsible for the static, in situ and operando characterisation of catalyst materials and catalytic membrane reactors, predominantly by X-ray diffraction methods. </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP2 partners: TUE, VITO, TUB, MARION, JM. WP4 partners: TUE, TUB, TECNALIA, VITO, HYG </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **05-02-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Simon Jacques </td> </tr> </table>

### 9.1. Data Collection

_Data (and sizes) to be collected includes:_

* **Standard laboratory XRD (typically 100 Kb per set)**
  * Static 1D powder patterns of powdered catalysts
  * Static 1D powder patterns of powdered membranes
  * In situ 1D powder pattern series of powdered catalysts under imposed conditions
  * In situ 1D powder pattern series of powdered membranes under imposed conditions
* **Laboratory micro-CT (typically 20 Gb per set)**
  * Static micro-CT of structured catalysts
* **Laboratory fluorescence / diffraction imaging (typically 100 Mb per set)**
  * Static hyperspectral (3d data cubes) images
  * In situ hyperspectral (3d data cubes) series under imposed conditions
* **Synchrotron based micro-CT (typically 20 Gb per set)**
  * Static micro-CT of structured catalysts (3d images)
  * Static micro-CT of CMR's catalysts (3d images)
* **Synchrotron based XRD-CT (typically 0.1 Tb per set)**
  * In situ and in operando XRD-CT yielding very large volume raw data sets, each XRD-CT comprising many thousands of 2D images

_Data processing will yield new data:_

* **Standard laboratory XRD (typically 100 Kb per set)**
  * Crystallographic phases will be identified and quantified. Physical parameters such as peak width will also be extracted. This will yield tabulated data (stored in XL or equivalent file format). Various stack plots will be generated, yielding graphs (stored as png or equivalent format).
* **Laboratory micro-CT (typically 20 Gb per set)**
  * Data will be reconstructed, yielding volume images (stored as vol files, a binary file format), with processing steps along the way stored in Matlab file format
* **Laboratory fluorescence / diffraction imaging (typically 100 Mb per set)**
  * Data will be reconstructed, yielding volume images (stored in the hxt file format, also binary), with processing steps along the way stored in Matlab file format
* **Synchrotron based micro-CT (typically 20 Gb per set)**
  * Data will be reconstructed, yielding volume images (stored as vol files, a binary file format), with processing steps along the way stored in Matlab file format
* **Synchrotron based XRD-CT (typically 250 Mb per set)**
  * Raw files will be pre-processed to yield reduced XRD-CT data sets stored in hxt and/or Matlab file format. Processing steps along the way will be stored in Matlab file format

_Reproducibility:_ Raw data will not be reproducible without repeat measurements. Processed data can be regenerated from the raw data. Crucially, processing scripts (stored as .m files) will be needed to quickly regenerate processed data.

_Storage:_ We estimate we will need ca. 100 TB for raw data, 50 TB for temporary storage of processed data and 10 TB for final processed data.

_Version control:_ We will record creation and modification dates on processing scripts. For reasons of disc space usage we will, in most cases, overwrite processed data where the processing is to be modified.

_Software tools:_ Almost all processing will be done using Matlab. We will also use a variety of image processing and standard XRD software programs, including ImageJ, Avizo, X'Pert HighScore Plus, GSAS and TOPAS.

_Pre-existing data:_ We will use various powder diffraction databases, including the ICSD and ICDD databases.

### 9.2. Data Storage and Back-up

All raw XRD-CT and synchrotron micro-CT data will be copied to portable NAS discs. All raw XRD-CT and synchrotron micro-CT data will also be archived in up to a 100 TB data space (this data backup is itself mirrored). Processing of XRD-CT and synchrotron micro-CT will take place in a 50 TB space. Laboratory micro-CT, fluorescence and imaging data and laboratory XRD data will be stored on a distributed NAS RAID system which is itself mirrored. A small amount of cloud storage (ca. 10 TB) will be used for processing and exchanging data. Backup frequency will be weekly on local PCs; this will cover written scripts etc. Mirrors should operate nightly or over weekends for most raw data, with the mirrors in different physical locations. All raw data will be read-only. Archives will be restricted to Finden personnel and any associated Finden/MEMERE research students.

### 9.3. Data Documentation

Data documentation is yet to be fully determined. Processing scripts will use soft links to be more versatile. Data inside Matlab files will be stored in structures. Electronic log books (including XL sheets and OneNote or equivalent formats) will be used to store the locations, indexes and descriptions of archived and processed files. For processed data, we will use dates within file names as an additional level of version control, and associated text or html files as file descriptions; a sketch of this scheme is given below.
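As an illustration of this scheme, the sketch below shows a processed-data record kept together with its provenance and saved under a date-stamped name. Finden's actual workflow uses Matlab structs in .mat files; Python and the field names here are stand-ins chosen for the example.

```python
# Sketch of the documentation scheme above: processed data kept in a
# structure together with its provenance, saved under a date-stamped file
# name. The real workflow uses Matlab structs; Python and these field
# names are illustrative stand-ins.
from dataclasses import dataclass
from datetime import date

@dataclass
class ProcessedVolume:
    sample: str        # e.g. "catalyst_A"
    technique: str     # e.g. "XRD-CT"
    raw_source: str    # archive location of the raw data it was derived from
    script: str        # the .m processing script needed to regenerate it
    notes: str = ""    # free-text description, mirroring the electronic log book

def processed_filename(rec: ProcessedVolume) -> str:
    """Date in the name provides the extra level of version control."""
    return f"{rec.sample}_{rec.technique}_{date.today():%Y%m%d}.mat"

rec = ProcessedVolume("catalyst_A", "XRD-CT",
                      raw_source="NAS01:/raw/run_042", script="recon_xrdct.m")
print(processed_filename(rec))   # e.g. catalyst_A_XRD-CT_20160601.mat
```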
### 9.4. Data Access

Copyright or Intellectual Property Rights are not applicable to the data generated by Finden, but certain scripts, algorithms and process flows will remain the property of Finden. Access to processed data within Finden will be limited to the project team unless further dissemination is agreed/required by the project partners. Access to the raw large-volume data will be restricted for data security reasons, but the data can be shared upon request; this data, unless copied, will be read-only.

### 9.5. Data Sharing and Reuse

Processed data will be shared with project partners via a secure cloud storage system. Smaller files can be shared via email. Publishing of data will be in accordance with the MEMERE policies and, subordinate to these, the journal/repository policy. We will seek permission from the project partners for reuse of data or sharing with outside parties. Shared multi-dimensional imaging data will be visualised using Matlab and/or ImageJ and using in-house developed software (which we can share with the partners). All other graphs etc. can be shared in standard formats such as xls, png, tif and pdf.

### 9.6. Data Preservation and Archiving

During the course of the project all reduced raw and processed data will be stored and archived; the archive will be maintained for a minimum of 12 months after termination of the project. We can, at this time, guarantee the archiving of the large-volume raw data for a minimum of 3 months after data collection and are currently looking into ways to extend this archive period.

## 10\. Partner Johnson Matthey PLC

<table> <tr> <th> Name of student/researcher(s) </th> <th> Stephen Poulston </th> </tr> <tr> <td> Name of group/project </td> <td> Johnson Matthey Technology Centre </td> </tr> <tr> <td> Description of your research </td> <td> _Catalyst preparation and testing._ </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **04/02/2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> </td> </tr> </table>

### 10.1. Data Collection

Experimental data, generally in summary form such as Excel graphs and PowerPoint presentations.

### 10.2. Data Storage and Back-up

Data are stored on the company server. Storage and back-up are managed at corporate level by the IT department.

### 10.3. Data Documentation

Periodic reports (monthly, 6-monthly, etc.). Experiments are recorded electronically in an 'electronic notebook' with links to additional data files. Each experiment recorded in this way is assigned a unique identification code.

### 10.4. Data Access

Internal access is limited by password-protected network access with restricted access. IPR is managed by the relevant group business manager and group legal.

### 10.5. Data Sharing and Reuse

Data access outside the consortium is restricted by the publication process agreed in the MEMERE partner consortium agreement. Internal approval for publication within JM is controlled by an internal approval process agreed at group level.

### 10.6. Data Preservation and Archiving

Long-term archiving of electronic data is managed centrally by group IT.

## 11\. Partner Rauschert Heinersdorf-Pressig GmbH
<table> <tr> <th> Name of student/researcher(s) </th> <th> Ulrich Werr, Ralph Weckel, Violetta Prehn, Dr. Ralf Girmscheid, Egbert Martin, Rainer Thoma </th> </tr> <tr> <td> Name of group/project </td> <td> Rauschert Heinersdorf-Pressig GmbH (RHP) / MEMERE project </td> </tr> <tr> <td> Description of your research </td> <td> _Development and production of porous ceramic supports for MIEC membranes based on Y-FSZ and/or MgO. Also includes dense feed tubes and seals to metal tubes suitable at elevated temperatures_ </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP3: TUE, TECNALIA, VITO, HYGEAR, MARION </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **02-05-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> _Ulrich Werr (Project Manager RHP for MEMERE)_ </td> </tr> </table>

### 11.1. Data Collection

RHP will collect data during the development and manufacturing of porous ceramic substrates. These will be handwritten observations in lab journals, results from analyses and pictures (photos), stored as Excel, Word, pdf and jpg files. Important data will be summarised in reports and printed out, so they will also be available in paper form for long periods. Estimated size of the project folder: 2 GB. Windows-based software is the preferred software to create, process and visualise data. Pre-existing data from DEMCAMER (FP7) will be used.

### 11.2. Data Storage and Back-up

All data, whether raw or processed, will be stored in a shared folder on the Rauschert file server. This is backed up daily and stored externally in a different location on hard disks by a service company. It is not allowed to save any data only on the hard discs of PCs or laptops (according to the IT user rules that every RHP employee has signed).

### 11.3. Data Documentation

The data created are easy to understand and will be summarised in regular reports that help to interpret them. The convention is to name files with self-explanatory names and to use folders to bring a logical structure to the data storage. The project manager at RHP is responsible for setting up the general structure of the folder and sharing it with the other collaborators.

### 11.4. Data Access

Copyright and intellectual property will be handled according to German law (especially the Arbeitnehmererfindergesetz) and the rules of the EC, Grant Agreement, etc. Data are only accessible to co-workers in the project and the IT staff. Access is controlled by the project manager for MEMERE at RHP.

### 11.5. Data Sharing and Reuse

All members of the MEMERE project are allowed to use data within the project. Any further distribution, use or sharing of the data requires the written approval of RHP. RHP will deny external use in cases where the data are commercially important data or know-how whose release would endanger the commercial use of the new developments made.
Scientifically important data that cannot be converted into commercial products can be shared with external parties upon request and with the approval of RHP. Windows-based software is the standard tool used to share such data.

### 11.6. Data Preservation and Archiving

The full data will be stored for a minimum of 5 years; after that period the data may be reduced to only the relevant data. Written reports will be kept in paper form for a minimum of 10 years.

## 12\. Partner Ciaotech s.r.l. (100% PNO Group B.V.)

<table> <tr> <th> Name of student/researcher(s) </th> <th> Marco Molica Colella </th> </tr> <tr> <td> Name of group/project </td> <td> CiaoTech/PNO / MEMERE (WP9) </td> </tr> <tr> <td> Description of your research </td> <td> In the frame of WP9 CiaoTech/PNO will have a major role in carrying out a stakeholder analysis to be used as a reference point for the subsequent exploitation and dissemination activities </td> </tr> <tr> <td> Funding body(ies) </td> <td> H2020 </td> </tr> <tr> <td> Grant number </td> <td> **679933** </td> </tr> <tr> <td> Partner organisations </td> <td> WP9 partners: ALL </td> </tr> <tr> <td> Project duration </td> <td> Start: **10-01-2015** End: **09-30-2019** </td> </tr> <tr> <td> Date written </td> <td> **10-01-2016** </td> </tr> <tr> <td> Date last update </td> <td> </td> </tr> <tr> <td> Version </td> <td> V0.1. A new version of the DMP will be created whenever important changes to the project occur due to inclusion of new data sets, changes in consortium policies or external factors. </td> </tr> <tr> <td> Name of researcher(s) with roles/responsibilities for data management </td> <td> Marco Molica Colella, CiaoTech </td> </tr> </table>

### 12.1. Data Collection

_Data to be collected_ include:

* Sets of keywords related to project topics needed for searches
* Partners' own (already known) stakeholder lists
* Dedicated survey forms
* Data on EU projects related to similar topics

Sources per type of data are described in the table below:

<table> <tr> <th> **Type of data** </th> <th> **Possible sources of data** </th> </tr> <tr> <td> Keywords and stakeholder lists </td> <td> These data will be collected through discussion among partners and put in dedicated files in Excel and Word format. </td> </tr> <tr> <td> Dedicated survey forms </td> <td> Word documents and, possibly, standard on-line platforms to assemble the survey (to be chosen from a shared set of possibilities among partners) and provide easier access </td> </tr> <tr> <td> EU project data </td> <td> Literature data, the CORDIS database, and CiaoTech/PNO's own databases and Open Innovation Platform _'Innovation Place'_ </td> </tr> </table>

Data are to be collected mostly in Excel or Word files, with version numbers in the filename as well as inside the file. A final version to be used in the subsequent project deliverables will be clearly identified and also marked with the finalisation date.

_Main data to be produced_ include:

* A _survey form_ to be spread among stakeholders
* A _PowerPoint_ presentation and a _Word_ summary document for the complete stakeholder analysis, detailing activities, methodologies and extracted results

### 12.2. Data Storage and Back-up

All the collected data will be stored on CiaoTech/PNO hardware, accessible only to the consultants dedicated to the activities and protected by the partner's standard security measures.

### 12.3. Data Documentation

The naming standard will be sufficient to keep track of different versions and release dates.

### 12.4. Data Access

No IPR will be claimed.
Data will be kept on CiaoTech/PNO's own hardware, as stated above.

### 12.5. Data Sharing and Reuse

No sharing will be allowed outside the project consortium unless the consortium allows it. Sharing and reuse among partners will be available continuously, and will be done by standard e-mail if not specifically decided otherwise.

### 12.6. Data Preservation and Archiving

The consortium policy for data preservation and archiving shall be followed. In the absence of such a project-specific policy, the data will be kept on CiaoTech/PNO hardware. A specific folder can be loaded onto shared archives accessible only to the CiaoTech/PNO consultants' network. For the sake of simplicity, only the final version of each file will be retained at project end.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0333_CAREGIVERSPRO-MMD_690211.md
# 3 Data, Materials, Resources Collection Information

The purpose of this section is to provide a full description of the data that will be generated and stored during this project. The information provided here might be adapted or updated in further versions of this document.

## 3.1 Description of the data

All data will be generated through the use of the CAREGIVERSPRO-MMD online platform by several categories of users, _i.e._ health professionals, caregivers and patients. Each category of user will have access to specified content and will be able to generate different types of information according to the permissions granted. For each user of the platform, the different datasets described in this section may be generated. Additional datasets may be generated in the future. The data will be collected before and after the pilot phase of the project. The platform will also provide means to assess and store data not directly produced by users, _i.e._ the interaction among users and the evolution of their activity in the social network, which will also be subject to further analysis.

### 3.1.1 Personal Dataset

_Table 2 Personal Dataset_

<table> <tr> <th> **Data set reference and name** </th> </tr> <tr> <td> C-MMD-Personal </td> </tr> <tr> <td> **Data set description** </td> </tr> <tr> <td> This data set contains all the personal data captured through the registration tools integrated in the C-MMD platform for the dyad (patient and caregiver) and the health professionals. The registration tool collects standard personal information, i.e. as described in the EU Data Protection Directive (95/46/EC): _"Personal data" shall mean any information relating to an identified or identifiable natural person ('Data Subject'); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity_ . Therefore, the nature of the data corresponds to the values used to represent such concepts (e.g. text, integers). At this moment the registration tool has not been implemented in its final version; further details will be given in future versions. </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> Data will be stored each time a user (be it patient, caregiver or health professional) registers to the platform or modifies their profile. Although at this moment the registration tool and profile management tool have not been defined yet, it is expected that data will be stored in a MySQL database, using a noSQL database for complementary purposes. Records will also be related (and identified) with other datasets and the date when the data was recorded. Metadata will include information about the profile creation time, range of possible values, etc. This metadata will be associated to each table and will follow the Common European Research Information Format (CERIF) metadata standard. </td> </tr> <tr> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will not be shared outside of the Consortium boundaries for ethical and security reasons. Each dataset record belongs to the user and to the Consortium partner responsible for the user. Only the user, people authorised by him/her ( _e.g._ caregiver) and authorised personnel of the Consortium partner responsible for the user can access the record. Data will be available to users and people authorised by them through the C-MMD platform. Authorised personnel of the pilot partner generating the data will be able to access aggregated data in periodic reports and will also be able to access raw data dumped from the database in _csv_ files or through a web service. Each access will be identifiable and traceable. Dataset records will be shared among defined Consortium partners, anonymised for research purposes, in order to be used for the tasks of the project. Anonymisation is the standard procedure followed to preserve confidentiality of participants. Each participant ( _e.g._ patient, caregiver, doctor) will sign an informed consent at the recruitment phase authorising access to all his/her data (raw, aggregated, anonymised). Users will agree to the anonymised and aggregated data being used for research and possibly commercial exploitation. The data repository will be in the C-MMD host on the UPC premises (more details are given in section 6). </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> See sections 6 and 7. </td> </tr> </table>
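Since the final database design is still open, the following sketch only illustrates one possible shape of such a record and its CERIF-style table-level metadata. The platform foresees MySQL; sqlite3 is used here purely so the sketch is self-contained, and all table and column names are assumptions, not the eventual C-MMD schema.

```python
# One possible shape of a personal-data record plus table-level metadata.
# MySQL is foreseen by the platform; sqlite3 is used here only to keep
# the sketch self-contained. All names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id  INTEGER PRIMARY KEY,
    role       TEXT NOT NULL,   -- 'patient' | 'caregiver' | 'professional'
    created_at TEXT NOT NULL    -- profile creation time, ISO 8601
);
-- Table-level metadata record, in the spirit of CERIF-style descriptions
CREATE TABLE table_metadata (
    table_name  TEXT,
    column_name TEXT,
    description TEXT,
    value_range TEXT
);
""")
conn.execute(
    "INSERT INTO table_metadata VALUES (?, ?, ?, ?)",
    ("person", "role", "category of platform user",
     "patient|caregiver|professional"))
conn.commit()
```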
### 3.1.2 Screening Dataset

_Table 3 Screening Dataset_

<table> <tr> <th> **Data set reference and name** </th> </tr> <tr> <td> C-MMD-Screening </td> </tr> <tr> <td> **Data set description** </td> </tr> <tr> <td> This data set contains all the clinical and social data captured through the screening tools integrated in the C-MMD platform for the dyad (patient and caregiver). The screening tools implement standard evaluation scales for different conditions (physical, psychosocial, neurological, functional, _etc._ ). Therefore, the nature of the data corresponds to the values used to evaluate such scales. At this moment the screening tool has not been implemented yet; further details will be given in future versions. </td> </tr> <tr> <td> **Standards and metadata** </td> </tr> <tr> <td> The data will be stored following the standard numeric scales defined by each screening tool, each time a user (be it patient, caregiver or health professional) uses one of the screening tools. Although at this moment the screening tool has not been defined, it is expected that data will be stored in a MySQL database, using a noSQL database for complementary purposes. Records will also be related (and identified) with the user to whom the recorded data belong and the date when the data was recorded. Metadata will include information about the scale recorded, range of possible values, etc. This metadata will be associated to each table and will follow the Common European Research Information Format (CERIF) metadata standard. </td> </tr> <tr> <td> **Data sharing** </td> </tr> <tr> <td> This dataset will not be shared outside of the Consortium boundaries for ethical and security reasons. Each dataset record belongs to the user and to the Consortium partner responsible for the user. Only the user, people authorised by him/her ( _i.e._ caregiver) and authorised personnel of the Consortium partner responsible for the user can access the record. Data will be available to users and people authorised by them through the C-MMD platform. Authorised personnel of the pilot partner generating the data will be able to access aggregated data in periodic reports and will also be able to access raw data dumped from the database in csv files or through a web service. Each access will be identifiable and traceable. Dataset records will be shared among the Consortium partners, anonymised for research purposes, in order to be used in the tasks of the project. Anonymisation is the standard procedure followed to preserve confidentiality of participants. Each participant will sign an informed consent at the recruitment phase authorising access to all his/her data (raw, aggregated, anonymised). Users will agree to the anonymised and aggregated data being used for research and possibly commercial exploitation. The data repository will be allocated in the C-MMD host on the UPC premises (more details in section 6). </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> See sections 6 and 7. </td> </tr> </table>
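To illustrate what anonymising such records before sharing can involve in practice, the sketch below drops direct identifiers and replaces the user ID with a salted hash (strictly speaking, pseudonymisation). The field names, example scale and salt handling are assumptions made for the example, not the procedure the project has adopted.

```python
# Sketch of anonymising a screening record before sharing with research
# partners: direct identifiers are dropped and the user ID is replaced by
# a salted hash (strictly, pseudonymisation). Field names and salt
# handling are illustrative assumptions, not the project's procedure.
import hashlib

SECRET_SALT = b"held-by-the-data-controller-only"   # never shared
DIRECT_IDENTIFIERS = {"name", "email", "address"}

def anonymise(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256(SECRET_SALT + str(record["user_id"]).encode())
    out["user_id"] = digest.hexdigest()[:16]   # stable pseudonym, not reversible without the salt
    return out

shared = anonymise({"user_id": 1042, "name": "A. Person",
                    "scale": "MMSE", "score": 24,
                    "recorded_at": "2017-03-01"})
```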
Dataset records will be shared, anonymised, among the Consortium partners for research purposes in order to be used in the tasks of the project. Anonymisation is the standard procedure followed to preserve confidentiality of participants. Each participant will sign an informed consent at the recruitment phase authorising access to all his/her data (raw, aggregated, anonymised). Users will agree to the anonymised and aggregated data being used for research and possibly commercial exploitation. The data repository will be allocated in the C-MMD host in the UPC premises (more details in section 6). </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> See sections 6 and 7. </td> </tr>
</table>

## 3.1.3 Treatment Dataset

_Table 4 Treatment Dataset_

<table>
<tr> <th> **Data set reference and name** </th> </tr>
<tr> <td> C-MMD-Treatment </td> </tr>
<tr> <td> **Data set description** </td> </tr>
<tr> <td> This dataset contains all the treatment information for each dyad. The treatment information will come from: (1) a specific toolset integrated in the platform for that purpose, and (2) the API connecting with national healthcare systems, where possible. The nature of the data corresponds to medication descriptions, doses, schedules and follow-up of the adherence. At this moment the data-capturing tool has not been implemented; further details will be given in future versions. </td> </tr>
<tr> <td> **Standards and metadata** </td> </tr>
<tr> <td> The data will be stored following the numeric/text standards each time that a user (be it patient, caregiver or health professional) uses the treatment management interface to introduce or modify information about the pharmacological treatment being followed and the adherence regime to the treatment. Although at this moment the treatment management tool has not been defined, it is expected that data will be stored in a MySQL database, using a noSQL database for complementary purposes. Records will also be related (and identified) with the user to which the recorded data belong and the date when the data was recorded. Metadata will include information about the data recorded, range of possible values, etc. This metadata will be associated to each table and will follow the Common European Research Information Format (CERIF) metadata standard. </td> </tr>
<tr> <td> **Data sharing** </td> </tr>
<tr> <td> This dataset will not be shared outside of the Consortium boundaries for ethical and security reasons. Each dataset record belongs to the user and to the Consortium partner responsible for the user. Only the user, people authorised by him/her ( _e.g._ the caregiver) and authorised personnel of the Consortium partner responsible for the user can access the record. Data will be available to users and people authorised by them through the C-MMD platform. Authorised personnel of the pilot partner generating the data will be able to access aggregated data in periodic reports and will also be able to access raw data dumped from the database in _csv_ files or through a web service. Each access will be identifiable and traceable. Dataset records will be shared, anonymised, among the Consortium partners for research purposes in order to carry out the tasks of WP6. Anonymisation is the standard procedure followed to preserve confidentiality of participants. All described accesses to data (raw, aggregated, anonymised) will be authorised through an informed consent signed by the participant at the recruitment phase.
Users will agree to the anonymised and aggregated data being used for research and possibly commercial exploitation. The data repository will be allocated in the C-MMD host in the UPC premises (more details in section 6). </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> See sections 6 and 7. </td> </tr>
</table>

## 3.1.4 Intervention Dataset

_Table 5 Intervention Dataset_

<table>
<tr> <th> **Data set reference and name** </th> </tr>
<tr> <td> C-MMD-Intervention </td> </tr>
<tr> <td> **Data set description** </td> </tr>
<tr> <td> This data set contains all the intervention contents created by the consortium members during the lifetime of the project. These intervention contents include posts, articles, tips, multimedia, tutorials, webinars and any kind of educational content produced to support the caregiving process and the healthy ageing lifestyle. These intervention contents will be introduced in the platform through specific tools designed for that purpose ( _e.g._ the ones available in Wordpress to edit blog posts). Standards in multimedia and text post storage will be followed. At this moment the editor tools have not been implemented; further details will be given in future versions. </td> </tr>
<tr> <td> **Standards and metadata** </td> </tr>
<tr> <td> The data will be stored in the standard text/media formats following best practices for data management (see section 6). Although at this moment the editing tool has not been defined, it is expected that data will be stored in a MySQL database, using a noSQL database for complementary purposes. Records will also be related (and identified) with the user authoring the contents and the date when the data was recorded. As explained in section 5.1 of the DoA, and later in this document in section 4, all contents created will follow the HONCode. Metadata will include information about the intervention recorded and a list of tags or keywords that relate the content to the specific symptoms, conditions or problems that the content refers to ( _e.g._ a video about Alzheimer's disease could have the tags _Alzheimer_, _dementia_, _cognitive decline_, etc.). This metadata will be associated to each table and will follow the Common European Research Information Format (CERIF) metadata standard. </td> </tr>
<tr> <td> **Data sharing** </td> </tr>
<tr> <td> Each dataset record belongs to the Consortium partner responsible for creating it. All the Consortium and suitable users are authorised to access the recorded contents. Data will be available to users and people authorised by them through the C-MMD platform. Aggregated data about the amount of contents generated and specific metadata ( _e.g._ tags) will be available, as well as access to raw data dumped from the database in files, to selected Consortium members. Dataset records, particularly aggregated data, will be shared among the Consortium partners for research purposes in order to be used in the tasks of the project. Users will agree to the anonymised and aggregated data being used for research and possibly commercial exploitation. The data repository will be allocated in the C-MMD host in the UPC premises (more details in section 6). </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> See sections 6 and 7.
</td> </tr>
</table>

## 3.1.5 Dissemination Dataset

_Table 6 Dissemination Dataset_

<table>
<tr> <th> **Data set reference and name** </th> </tr>
<tr> <td> C-MMD-Dissemination </td> </tr>
<tr> <td> **Data set description** </td> </tr>
<tr> <td> This data set contains all the dissemination contents created by the consortium members during the lifetime of the project. These dissemination contents include scientific papers, newsletters, multimedia, press articles, conferences and any kind of dissemination content produced to support the communication activities of the project and the dissemination of results. These contents, created from different sources, will be stored in a database/filesystem. </td> </tr>
<tr> <td> **Standards and metadata** </td> </tr>
<tr> <td> The data will be stored in the standard text/media formats following best practices for data management (see section 6). Records will also be related (and identified) with the user authoring the contents and the date when the data was recorded. Metadata will include information about the dissemination data recorded, the target audience, identifier ( _e.g._ DOI, URI), authors, title of the publication, time of publication, related event ( _e.g._ conference, forum, _etc._ ) and a list of tags or keywords that relate the content to specific topics or results. This metadata will be associated to each table and will follow the Common European Research Information Format (CERIF) metadata standard. </td> </tr>
<tr> <td> **Data sharing** </td> </tr>
<tr> <td> Each dataset record belongs to the Consortium partner/s responsible for creating it. These contents are open for access. The data repository will be allocated in the C-MMD host in the UPC premises (more details in section 6). </td> </tr>
<tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr>
<tr> <td> See sections 6 and 7.
</td> </tr>
</table>

## 3.1.6 Dataset Summary

_Table 7 Dataset Summary_

<table>
<tr> <th> Dataset </th> <th> Who </th> <th> Ownership </th> <th> Access </th> </tr>
<tr> <td> Personal Dataset </td> <td> User </td> <td> Yes </td> <td> Yes, full </td> </tr>
<tr> <td> </td> <td> Partner (recruiting) </td> <td> Yes </td> <td> Yes, full to authorised personnel </td> </tr>
<tr> <td> </td> <td> Rest of Consortium </td> <td> No </td> <td> Yes, only anonymised and aggregated data </td> </tr>
<tr> <td> </td> <td> World </td> <td> No </td> <td> No </td> </tr>
<tr> <td> Screening Dataset </td> <td> User </td> <td> Yes </td> <td> Yes, full </td> </tr>
<tr> <td> </td> <td> Partner (recruiting) </td> <td> Yes </td> <td> Yes, full to authorised personnel </td> </tr>
<tr> <td> </td> <td> Rest of Consortium </td> <td> No </td> <td> Yes, only anonymised and aggregated data </td> </tr>
<tr> <td> </td> <td> World </td> <td> No </td> <td> No </td> </tr>
<tr> <td> Treatment Dataset </td> <td> User </td> <td> Yes </td> <td> Yes, full </td> </tr>
<tr> <td> </td> <td> Partner (recruiting) </td> <td> Yes </td> <td> Yes, full to authorised personnel </td> </tr>
<tr> <td> </td> <td> Rest of Consortium </td> <td> No </td> <td> Yes, only anonymised and aggregated data </td> </tr>
<tr> <td> </td> <td> World </td> <td> No </td> <td> No </td> </tr>
<tr> <td> Intervention Dataset </td> <td> User </td> <td> No </td> <td> Yes, depending on their needs </td> </tr>
<tr> <td> </td> <td> Partner (authoring) </td> <td> Yes </td> <td> Yes, full to authorised personnel </td> </tr>
<tr> <td> </td> <td> Rest of Consortium </td> <td> No </td> <td> Yes, full to authorised personnel </td> </tr>
<tr> <td> </td> <td> World </td> <td> No </td> <td> Limited and depending on project needs and exploitation policies </td> </tr>
<tr> <td> Dissemination Dataset </td> <td> User </td> <td> No </td> <td> Yes </td> </tr>
<tr> <td> </td> <td> Partner (authoring) </td> <td> Yes </td> <td> Yes </td> </tr>
<tr> <td> </td> <td> Rest of Consortium </td> <td> No </td> <td> Yes </td> </tr>
<tr> <td> </td> <td> World </td> <td> No </td> <td> Yes </td> </tr>
</table>

### 3.2 Quality Assurance Process

Every data gathering process is susceptible to contamination in the absence of adequate preventive measures. Data contamination results from a process or phenomenon, other than the one of interest, that affects the variable values and produces erroneous values in the data set.

In general, there are two types of errors that can occur in a data set. The first are errors of commission, which result from incorrect or inaccurate data being included in the data set. This may happen because of a malfunctioning instrument that produces faulty results, data that are mistyped during entry, or other problems. Errors of omission are the second type. These result from data or metadata being omitted. Situations that result in omission errors occur when data are inadequately documented, when there are human errors during data collection or entry, or when there are anomalies in the field that affect the data.

Quality assurance/quality control (QA/QC) activities should be an integral part of any inventory development process, as they improve transparency, consistency, comparability, completeness and accuracy.

**Quality control (QC)** is defined as a system of checks to assess and maintain the quality of the data inventory being compiled. Quality control procedures are designed to provide routine technical checks to measure and control the data consistency, integrity, correctness and completeness, and to identify and address errors and omissions.
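As a minimal illustration of what such a routine check can look like in practice, the sketch below validates a single record against acceptable value ranges. The field names and ranges (e.g. an MMSE-like 0–30 screening score) are assumptions made purely for the example, since the project's QC protocols are still to be defined.

```python
# Minimal sketch of routine QC checks; field names and ranges are assumed
# for illustration only -- the project's QC protocols are still to be defined.
ACCEPTABLE_RANGES = {
    "mmse_score": (0, 30),   # hypothetical screening scale range
    "age": (18, 120),        # hypothetical demographic range
}

def qc_check(record: dict) -> list:
    """Return the list of QC problems found in one data record."""
    problems = []
    for field, (low, high) in ACCEPTABLE_RANGES.items():
        value = record.get(field)
        if value is None:
            # error of omission: required data is missing
            problems.append(f"omission: '{field}' is missing")
        elif not low <= value <= high:
            # error of commission: value outside the acceptable range
            problems.append(f"commission: '{field}'={value} not in [{low}, {high}]")
    return problems

print(qc_check({"mmse_score": 35}))
# -> ["commission: 'mmse_score'=35 not in [0, 30]", "omission: 'age' is missing"]
```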
Quality control checks should cover everything from data acquisition and handling to the application of approved procedures and methods, and documentation. Examples of general quality control checks include:

* checking for transcription errors in data input;
* checking that scale measures are within the range of acceptable values;
* checking that proper conversion factors are used.

In future versions of this document we will provide more details on the QC protocols to be adopted during the project lifetime.

**Quality assurance (QA)** is a planned system of review procedures conducted outside the actual inventory compilation by personnel not directly involved in the inventory development process. It is a non-biased, independent review of methods and/or data summaries that ensures that the inventory continues to correctly incorporate the scientific knowledge and data generated. Quality assurance procedures may include expert peer reviews of data summaries and audits to assess the quality of the inventory and to identify where improvements could be made. If deemed necessary, selected members of the Advisory Board may perform this task in the course of the project lifecycle.

# 4 Ethics, Intellectual Property, Citation

### 4.1 Ethics

The lack of standardisation of ethical principles at the international level may lead to the abuse of data collection, use and storage by exploiting differences between societies with regard to established ethical standards. The ethics of data collection, use and storage in medical applications is of growing importance, since the quality and quantity of medical data usage are growing quickly both in Europe and worldwide. Great concerns are raised about data protection and privacy in the area of biometric and health applications, whose growing markets might be affected by insufficiently protected sensitive information. The healthcare providers that are involved in the project follow strict ethical codes. All ethical, legal and regulatory issues will be studied in detail in T8.6 and presented in the incremental versions of D8.3. The most relevant findings will be included in the final version of this document.

### 4.2 Intellectual Property

With regard to property and ownership of medical data and records, there are two distinct views. From the standpoint of practitioners (i.e., healthcare providers, hospitals), patient medical records are their property because they are the ones who write, compile and produce the records (data producers). At the same time, patients tend to believe that medical records belong to them, as they provide the relevant information. Nevertheless, the project will produce data assets that do not correspond to medical records. For instance:

* Intervention contents and guidelines;
* Gamification reports;
* Treatment adherence reports;
* Aggregated medical data reports; and
* Reports and statistics of platform usage.

The ownership and IPR of these assets will be detailed in future versions of this document. The resulting agreements will be compliant with the corresponding legislation ( _e.g._ Data Protection Act, Copyright, Freedom of Information Act, _etc_.).

### 4.3 Citation

An article, paper or presentation that refers to, or draws information from, a data set should cite the data set, just as it would cite other sources such as books and articles.
A citation gives appropriate credit to the data set creator(s), and allows interested readers to find the data set so they can confirm the data is being correctly represented, or can use it in their own work. There is no universal standard for formatting a data set citation. There are many different styles for formatting citations, such as APA and the Chicago Manual of Style. In addition, most scientific publications have their own style, either unique to themselves or based on an existing style. A few of these styles, such as APA 6th edition, specify how to cite data sets. However, most citation style manuals do not currently cover citing data sets. Consequently, the general format of an existing style can be adapted to the needs of data sets. At this early stage, the information used to cite C-MMD data sets could be:

* Author(s) ( _the principal investigator can be used as the "author" of a data set_ )
* Title
* Year of Publication
* Publisher ( _partner producing the dataset_ )
* Version
* Access information ( _doi or url_ )

A purely illustrative record combining these fields could read: Doe, J. (2018). _C-MMD Screening Dataset_ (Version 1.0) [Data set]. UPC. doi:10.xxxx/xxxxx.

# 5 Access and Use of Information

One of the objectives of the CAREGIVERSPRO-MMD project is to develop the solution into a commercial product. This is the main reason why the Consortium has decided that potentially publishable data will not be available for open access until the end of the project, once the exploitation paths have been defined. However, results of the pilot execution and platform evaluation will be made publicly available through the deliverables **D6.1 – Mid-Pilot preliminary analysis report, D6.2 – Final Pilot analysis report** and **D6.3 – User feedback and usability report.** More details on specific dataset access regimes are defined in section 3.1.

# 6 Storage and Backup of Data

In order to safeguard the appropriate preservation of the data, a portion of the budget has been allocated to data storage and backups during the lifespan of the project and for at least the following two years. The data will be stored in databases installed on the same server that holds the CAREGIVERSPRO-MMD platform. These databases are only accessible locally (i.e. only available to the server itself) in order to prevent any connection from outside.

The system and server configuration have been arranged to support local data encryption, in order to prevent physical access to the hard disk drive. This measure would prevent access to the data if the physical storage were stolen or accessed directly. The server has a local firewall that only allows secure web connections to the Internet and verified IP addresses for development/updates of the C-MMD application. A local log file records every access to the server. The server is located in the UPC campus Data Center. This data center is a dedicated 250 m² facility with controlled access, personal ID cards for authorized staff and 24x7 video surveillance. The server has dedicated bandwidth and a backup power system in order to guarantee availability.

A daily backup procedure has been designed in order to ensure data integrity and recovery. This backup procedure has three main components:

1. File system backup: a daily copy of every file in the file system is stored in compressed format.
2. Database backup: a daily dump of every database/table is stored in a single file.
3. Daily encryption and compression of log files.

Optionally, this backup can be physically moved to a safe location outside the UPC Data Center if the personal data requires this level of protection. A specific budget is reserved for this task.
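A minimal sketch of how such a daily routine could be implemented is given below; the paths, the use of mysqldump, and the choice of Python's tarfile and cryptography packages are illustrative assumptions, not the project's actual backup scripts.

```python
"""Minimal sketch of the daily backup routine (illustrative only).
Paths, credentials and tooling choices are assumptions, not project scripts."""
import datetime
import subprocess
import tarfile
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

STAMP = datetime.date.today().isoformat()
BACKUP_DIR = Path("/var/backups/c-mmd")    # hypothetical backup location
PLATFORM_FILES = Path("/srv/c-mmd/files")  # hypothetical platform file system
LOG_DIR = Path("/var/log/c-mmd")           # hypothetical log directory

def backup_filesystem() -> None:
    """1) Daily copy of every file, stored in compressed format."""
    with tarfile.open(BACKUP_DIR / f"files-{STAMP}.tar.gz", "w:gz") as tar:
        tar.add(PLATFORM_FILES, arcname="files")

def backup_databases() -> None:
    """2) Daily dump of every database/table, stored in a single file."""
    dump = subprocess.run(["mysqldump", "--all-databases"],
                          check=True, capture_output=True).stdout
    (BACKUP_DIR / f"db-{STAMP}.sql").write_bytes(dump)

def protect_logs(key: bytes) -> None:
    """3) Daily compression and encryption of log files."""
    archive = BACKUP_DIR / f"logs-{STAMP}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(LOG_DIR, arcname="logs")
    # keep only the encrypted copy of the compressed log archive
    archive.with_suffix(".enc").write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    archive.unlink()

if __name__ == "__main__":
    backup_filesystem()
    backup_databases()
    protect_logs(Fernet.generate_key())  # in practice the key is stored securely
```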
A 30-day backup window has been programmed and enough disk space has been reserved for a month of operation.

### 6.1 Best Practices for File Formats

The file formats used have a direct impact on the ability to open those files at a later date and on the ability of other people to access those data.

## 6.1.1 Proprietary vs Open Formats

Data should be saved in a non-proprietary (open) file format when possible. If conversion to an open data format would result in some data loss from the files, saving the data in both the proprietary format and an open format should be considered. Having at least some of the information available in the future is better than having none. When it is necessary to save files in a proprietary format, a readme.txt file will be included that documents the name and version of the software used to generate the file, as well as the company who made the software.

## 6.1.2 Guidelines for Choosing Formats

When selecting file formats for archiving, the formats should ideally be:

* Non-proprietary;
* Unencrypted;
* Uncompressed;
* In common usage by the research community;
* Adherent to an open, documented standard:
  * Interoperable among diverse platforms and applications
  * Fully published and available royalty-free
  * Fully and independently implementable by multiple software providers on multiple platforms without any intellectual property restrictions for necessary technology
  * Developed and maintained by an open standards organization with a well-defined inclusive process for evolution of the standard

## 6.1.3 Some Preferred File Formats

* Containers: TAR, GZIP, ZIP
* Databases: XML, CSV
* Geospatial: SHP, DBF, GeoTIFF, NetCDF
* Moving images: MOV, MPEG, AVI, MXF
* Sounds: WAVE, AIFF, MP3, MXF
* Statistics: ASCII, DTA, POR, SAS, SAV
* Still images: TIFF, JPEG 2000, PDF, PNG, GIF, BMP
* Tabular data: CSV
* Text: XML, PDF/A, HTML, ASCII, UTF-8
* Web archive: WARC

# 7 Archiving and Future Proofing of Information

The national legislation (compliant with European law) of the server site (Spain) compels UPC to preserve all data and access records for **two years** after the project completion. The server will remain in the same safe location in order to preserve physical and logical access. Consequently, the data will be kept on the server and will be accessible under the same terms that will be agreed among partners during the project lifespan. All public project deliverables will be available for at least **five years** after the project completion at the project portal.

Selected datasets, databases, standalone documents, and even software may be made public or open for exploitation at the end of the project. These resources may prove useless without explanatory notes (metadata) accompanying them. Metadata will be clearly linked to the materials so that they can adequately inform any future user about the material. For example, a published dataset will typically be accompanied by a metadata document that explains the various fields, their usefulness, and summarises the purpose of the dataset in general. These documents will be stored along with the dataset and made accessible in the same manner as the dataset ( _e.g._ online, or download). Contact information will be provided accordingly in case the future user needs further clarification.

# 8 Resourcing of Data Management

This section outlines the staffing and financial details of the data management within the CAREGIVERSPRO-MMD project.
The former aspect provides information about the roles and responsibilities of the partners that generate the data and those who control it. The latter aspect describes the financing process for data management and data storage.

### 8.1 Roles in Data Management

Each pilot partner (HUL, COO, FUB, CHU) is responsible, as **data producer**, for the data generated in their own pilots by the different stakeholders of the platform. Each pilot partner will assign a responsible person from his or her institution for this task, to be designated in the next version of this document.

The UPC is responsible, as **data processor**, for all the aspects related to data storage and backup. MDD and CERTH, as the main developers of the C-MMD platform, will be responsible, as **data processors** and **service providers**, for all the aspects related to data gathering, data integrity, access logging, _etc_.

As specified in section 5.1.3 of the DoA, specific agreements will be signed among partners in order to grant access to the different datasets for the different uses (data storage, data processing, service provision).

### 8.2 Financial Data Management Process

As mentioned before, the Consortium has reserved a portion of the project budget for data hosting and backup.

# 9 Review of Data Management Process

The follow-up of this plan will be reported in future versions of this document, where detailed protocols and measures will be described to ensure compliance with the plan, along with preliminary results on the observed evolution. UPC, as the main contributor to this plan, supported by the roles described in section 8.1, will perform the follow-up. External reviewers of the Consortium as well as selected members of the Advisory Board will support the peer-review process.

# 10 Statements and Personnel Details

### 10.1 Statement of Agreement

The Consortium agrees to the specific elements of the plan as outlined.

#### Project Coordinator

<table>
<tr> <th> Title </th> <th> </th> </tr>
<tr> <td> Designation </td> <td> </td> </tr>
<tr> <td> Name </td> <td> </td> </tr>
<tr> <td> Date </td> <td> </td> </tr>
<tr> <td> Signature </td> <td> </td> </tr>
</table>

#### Project Manager

<table>
<tr> <th> Title </th> <th> </th> </tr>
<tr> <td> Designation </td> <td> </td> </tr>
<tr> <td> Name </td> <td> </td> </tr>
<tr> <td> Date </td> <td> </td> </tr>
<tr> <td> Signature </td> <td> </td> </tr>
</table>

**Management Board** (one table for each member)

<table>
<tr> <th> Title </th> <th> </th> </tr>
<tr> <td> Designation </td> <td> </td> </tr>
<tr> <td> Name </td> <td> </td> </tr>
<tr> <td> Date </td> <td> </td> </tr>
<tr> <td> Signature </td> <td> </td> </tr>
</table>
0336_OPRECOMP_732631.md
concurrently develop the applications, micro-benchmarks, software tools and transprecision algorithms. This repository will remain private during the first phase of development, but as soon as the work in the project is mature enough to be shared with the public community, we will create a branch for openly releasing a public version of our work on GitHub. GitHub is a web application that is well suited to providing free, FAIR access to software code, and the following table summarises its characteristics:

<table>
<tr> <th> **Fair Data** </th> <th> </th> </tr>
<tr> <td> **How data is findable** </td> <td> GitHub allows source code to be easily found through a search engine, as:

* GitHub is very well referenced on Google Search because it is the largest host of source code in the world for open-source and private software projects, with 20 million users and 57 million repositories.
* Each project page comes with a short description (metadata) that describes the project on a search engine (see Fig 1).
* As GitHub takes care of naming all releases, we do not use any naming convention other than common sense for naming all source code files and libraries.
* A list of search keywords specific to the project can be specified for each project underneath the metadata description, making the project easily findable through a search engine.
* GitHub has a clear version control system that records changes to a file or a set of files over time, so that specific versions can be recalled later; this can be done for nearly any type of file. Of course, the whole project team will make use of this system.
* We do not have any metadata standard, but for each file stored by the team on GitHub, a short textual description of the file content will be entered each time a new file is uploaded. </td> </tr>
<tr> <td> **How data is openly accessible** </td> <td> Most of the source code and its associated metadata generated during the lifespan of the project will be on a GitHub repository that allows data to be openly accessible. The source code will be accessible in two ways:

* Via a link to our OPRECOMP GitHub repository on our OPRECOMP website.
* Via a search engine, when typing the right keywords.

Once on our GitHub repository, the data can be downloaded by clicking on the green button of the main repository page (see Fig 1). In case of any restriction, GitHub allows access to some repositories or sub-repositories to be restricted to a group of people that will be invited. </td> </tr>
<tr> <td> **How data is interoperable** </td> <td> Source code is written in well-known standard software languages (C/C++, Python, etc.) using standard libraries (STD, etc.) or open-source platforms (TensorFlow, for example). Comments inside source code, as well as the metadata describing the files and the associated documents (reports, user's manuals, etc.), will all be written in English. We will not force the partners of this project to use a standard vocabulary for all data types present in our data set, and rely on each member's common sense to name things. </td> </tr>
<tr> <td> **How data is reusable** </td> <td> GitHub is structured to foster interoperability and reusability:

* On the main repository page, a longer description of the project will explain how to install the code, use it, and contribute to the project (see Fig 2).
* A "contributors" widget leads to a list of project contributors.
* A "releases" widget leads to documentation about all the different project releases.
* A "branch" widget leads to all the currently opened or closed branches of the project.
* The core of the project page lists all the folders/files of the project, with their short descriptions (see Fig 3).
* Consortium partners will use an appropriate licensing model for their contributions that will be hosted on GitHub. In some cases, where our contributions are built on existing packages, we may be forced to use a specific licensing model (i.e. additions to a GPL package will have to remain GPL). For contributions where there are no limitations, the contributing partner will choose an appropriate licensing model (GPL, LGPL, BSD, Solderpad license).
* During the project lifespan, we plan to deliver regular code releases, and the code will be released as soon as a release is ready (source code working for the targeted benchmarks). In any case, the code will be released at the end of the project. As of now, no code embargo is envisioned.
* Source code will be reusable by third parties by simply downloading the GitHub repository dedicated to our project. This repository will contain the necessary documentation to compile and run the source code on the provided benchmarks.
* Each development group in the project will be responsible for its own code quality. We consider that code quality is a development issue, not a testing issue. Thus, we are embedding testing practices inside development through an incremental process consisting of writing some code along with its unit testing functions, testing it, having it reviewed by some peers, checking it in along with the test functions, writing some new code, and so on. Additionally, as soon as enough code has been developed to run our targeted benchmarks, we will test it against them.
* IBM and ETH are committed to keeping the infrastructure for hosting the data, as well as its backups, well past the end of the project. </td> </tr>
</table>

**Table 1:** FAIR data management characteristics for software code and benchmark/test bench results

**Fig 1:** An example of a GitHub project webpage (TensorFlow project)

**Fig 2:** Example of a project description with source code installation and contribution guidelines (TensorFlow project)

**Fig 3:** GitHub web page: list of all project folders and files (TensorFlow project)

<table>
<tr> <th> **Other DMP components** </th> <th> </th> </tr>
<tr> <td> **Allocation of resources** </td> <td> GitHub is free to use for public and open-source projects. It would cost us something ($9 per user/month) only if we decided to restrict access to some part of the code. Data management is a task of the WP9 work package, and GreenWaves Technologies is responsible for managing this task, but IBM and ETH are responsible for the current and long-term storage of the source code. </td> </tr>
<tr> <td> **Data Security** </td> <td> ETH Zurich and IBM are responsible for making periodic backups of the OPRECOMP GitHub repository and will follow the procedures in place in their organizations.
Currently, the GitLab server hosted at ETH Zurich makes snapshots every three hours. Incremental backups are made daily, and full backups every other week. All backup data is stored in two different physical locations. As we plan to freely release our software code on the internet, we do not need to encrypt the data before any transfer of it. But for some sensitive data that we do not want to divulge, we will encrypt it before the transfer. </td> </tr>
<tr> <td> **Ethical aspects** </td> <td> Nothing to add to the DoA </td> </tr>
<tr> <td> **Other** </td> <td> Most of the partners are already using GitHub for their software code management. </td> </tr>
</table>

**Table 2:** Other DMP components for software code and benchmark/test bench results

# 2.2 Benchmarks and Test Bench Results

We plan to do regular runs of benchmarks or test benches during the lifespan of the project for all work packages dedicated to scientific developments. The format of the data generated in the log files can differ (measurements, plots, simulation results, etc.), but it will all be stored the same way in a GitHub repository that will be publicly available. This will allow tracking the history of the changes/improvements.

The FAIR data management characteristics are almost the same as the ones described in paragraph 2.1, Table 1, except that once the benchmarks are defined, their specification will include a naming convention that the different partners will be required to follow. Other DMP components will also have the same characteristics (see Table 2).

<table>
<tr> <th> **Work Package** </th> <th> **Data Set Description** </th> </tr>
<tr> <td> **WP1** </td> <td> No such data </td> </tr>
<tr> <td> **WP2** </td> <td> Simulation results for the error-energy relation. The data will be presented in the form of tables and figures. </td> </tr>
<tr> <td> **WP3** </td> <td> Simulation results (time, power, energy) about different heterogeneous memory architectures. These results are fundamental to characterize the baseline as well as the gain due to future transprecision techniques (approximate storage). Data will be generated in log files mostly containing tables and figures. Regular runs of typical test benches or benchmarks are scheduled during the project lifespan, and the generated data will be compared against previous runs to measure modeling improvements. </td> </tr>
<tr> <td> **WP4** </td> <td> Compiled results of hardware performance numbers for different implementations and different design parameters. This data will be presented in tables and figures. </td> </tr>
<tr> <td> **WP5** </td> <td> Micro-benchmarks </td> </tr>
<tr> <td> **WP6** </td> <td> Log files and plots from benchmark results </td> </tr>
<tr> <td> **WP7** </td> <td> Log files and plots from benchmark results </td> </tr>
<tr> <td> **WP8** </td> <td> Log files from benchmark results </td> </tr>
<tr> <td> **WP9** </td> <td> No benchmarks delivered </td> </tr>
</table>

# 2.3 Reports

Data will also be produced in the form of a considerable number of reports (deliverables) compiled from the studies and insights gained from the work on the different work packages. Those reports are regular documents with text, figures and tables, currently stored in our Box repository. They will be openly accessible from the EU website according to the dissemination level indicated in the project grant agreement.
<table>
<tr> <th> **Work Package** </th> <th> **Data Set Description** </th> </tr>
<tr> <td> **WP1** </td> <td> Reports for deliverables </td> </tr>
<tr> <td> **WP2** </td> <td> Reports describing the main results obtained for each WP2 task. The data will be presented in the form of written documents containing text, tables and figures. </td> </tr>
<tr> <td> **WP3** </td> <td> Deliverable reports </td> </tr>
<tr> <td> **WP4** </td> <td> Deliverable reports </td> </tr>
<tr> <td> **WP5** </td> <td> Reports from the studies and insights gained from the work on the WP5 package. Written documents containing mostly tables and figures. </td> </tr>
<tr> <td> **WP6** </td> <td> Deliverable reports </td> </tr>
<tr> <td> **WP7** </td> <td> Deliverable reports </td> </tr>
<tr> <td> **WP8** </td> <td> Deliverable reports after testing the approximate computing mathematical library </td> </tr>
<tr> <td> **WP9** </td> <td> Deliverable reports (DMP, website design report) and Google Analytics reports of the website traffic every two months. </td> </tr>
</table>

# 2.4 Publications

All work packages will deliver publications. Those publications will be written using the LaTeX document preparation system. LaTeX includes features designed for the production of technical and scientific documentation; it is the de facto standard for the communication and publication of scientific documents. LaTeX source documents are stored in a format that can be treated like source code. Thus, we will store those files, like the software source code, in a GitLab repository at ETH. PDF versions of those documents will be publicly available in our GitHub repository. Additionally, they will be published on different open access platforms (paper repositories) commonly used by researchers, such as arXiv, Peer Evaluation, Figshare, etc. We will choose the ones on which to publish our papers later.

The FAIR data management characteristics are strictly the same as the ones described in paragraph 2.1, Table 1. Other DMP components will also have the same characteristics (see Table 2 in paragraph 2.1).

# 2.5 Media Data: Videos

During the lifespan of the project, any project member might be interviewed or create videos for tutorials or demonstrations of benchmarks. These videos will be published on a YouTube channel owned by a member of the project (the WP9 leader) and also embedded on the "Videos" page of our OPRECOMP website. On both YouTube and the OPRECOMP website, videos will be stored in different folders depending on the nature of the video:

* Interviews
* Tutorials
* Demonstrations

They will be publicly available. The YouTube account and channel for OPRECOMP will be created as soon as we have videos to publish on it.

<table>
<tr> <th> **Fair Data** </th> <th> </th> </tr>
<tr> <td> **How is data findable?** </td> <td> All videos will be openly available on the OPRECOMP YouTube channel on the World Wide Web.
Each video will come with its corresponding metadata: a title, a description of its content, search keywords, a category (Science & Technology) and an accessibility level (see Fig 4). </td> </tr>
<tr> <td> **How is data openly accessible?** </td> <td> All videos created during the lifespan of the project will be openly accessible in two ways:

* Through a search engine (via metadata and search keywords)
* Embedded on a page of the website under the menu "Videos" </td> </tr>
<tr> <td> **How is data interoperable?** </td> <td> Through YouTube </td> </tr>
<tr> <td> **How do you increase data re-use?** </td> <td> The video data will be licensed under the standard YouTube license. The videos remain re-usable as long as we keep them on YouTube, and they can remain there forever, as it is free. </td> </tr>
</table>

<table>
<tr> <th> **Other DMP components** </th> <th> </th> </tr>
<tr> <td> **Allocation of resources** </td> <td> YouTube is free, so making the videos FAIR costs only the time for somebody to put the video online and add the requested metadata and optimum search keywords (about 30 minutes). </td> </tr>
<tr> <td> **Data Security** </td> <td> Videos will first be stored on YouTube, which has its own security system (basically two copies of each element, stored on three different servers located in different facilities). We also plan to keep the videos in a repository of our GitHub account, which will also be regularly backed up by ETH or IBM staff members. </td> </tr>
<tr> <td> **Ethical aspects** </td> <td> Nothing to add to the DoA </td> </tr>
<tr> <td> **Other** </td> <td> Most of the partners are already using GitHub for their software code management. </td> </tr>
</table>

**Fig 4:** The video manager editor of YouTube with video title, description, search keywords and accessibility level (here: Public)

# OPRECOMP Website: the core of our data management system

The OPRECOMP website is part of the WP9 package and is hosted by WP Engine, a sophisticated worldwide WordPress hosting platform that handles website security and regular backups and provides solid customer service in English. The website has been designed by GreenWaves Technologies using the best-selling theme AVADA, whose particularity is a page editor with 50 shortcodes offering 100+ options, controlled easily with Fusion Builder's drag-and-drop system, which does not require any coding knowledge. This website design architecture is thus well suited to letting anyone in the OPRECOMP team write their own web pages or posts.

The website will be regularly updated with information about the OPRECOMP development and links to the data resources described in the previous paragraphs, making it the primary platform for accessing our data system. Of course, finding our website through a search engine is of primary importance. Thus, we have put the following measures in place to increase our SEO (search engine optimization) ranking:

* We have referenced our OPRECOMP website from all partners' websites and from our Twitter account.
* We have added the Yoast SEO plugin to this website for analyzing and optimizing web page content on different criteria.
* We have linked our website to Google Analytics for tracking its traffic and improving our website's visibility on the Internet.
## Yoast SEO Plugin

While the Yoast SEO plugin goes the extra mile to take care of all the technical optimization, it first and foremost helps us write better content and makes sure the content we write is optimized to be easily findable by search engines. The first focus is the readability of the content, and Yoast displays a readability report of the website content with points to optimize (see the red points in Fig 5):

**Fig 5:** Yoast SEO readability report for the OPRECOMP landing page before optimization

Yoast also provides help to edit/preview the website snippet (how the website will look in the search results):

**Fig 6:** Snippet editor/viewer

This way, the Yoast plugin will help us not only increase rankings but also increase the click-through rate for organic search results. This plugin also performs some page analysis by checking whether the page has images, whether there is an alt tag containing the focus keyword of the page, whether a post is long enough, whether a meta description has been written, whether subheadings have been used, etc. The following is the SEO analysis of the OPRECOMP landing page:

**Fig 7:** SEO analysis for the OPRECOMP landing page

Some improvement still has to be done, but the overall rating of the page is already green:

**Fig 8:** Overall readability and SEO rating of the OPRECOMP landing page

## Google Analytics

All along the project, we will track the website traffic and the behavior of our visitors within the website and make the necessary corrections to improve the visibility and usability of our website, trying to provide the best possible information about our project.

As an example, Fig 9 represents the audience overview of the OPRECOMP website for the period of June 8-22, 2017. We can already see that the share of new visitors is 67% and that our site has been visited by people in the EU community (Switzerland, France, Italy, etc.) but also by visitors from Canada (24%), the USA (2.7%) and India (1.35%). We can also see that visitors spent an average of 3:40 minutes on the website, which is already quite good.

**Fig 9:** OPRECOMP website audience overview given by Google Analytics

# Conclusions

Thanks to modern and popular data management tools (GitHub for source code and benchmarks, YouTube for videos, WP Engine and WordPress for the website posts and news, and open access platforms for the publications) that are all in the cloud and accessible through an internet connection, all European partners of this project can use the same data management system, which is the key to an easy and FAIR delivery of our common work to the public and potential contributors.

Additionally, we will take great care of our visibility on the internet through SEO optimization of any page or news post published on our website via the Yoast SEO plugin, and we will closely monitor the website traffic and its visitors' behavior using Google Analytics, the web analytics service of Google.

# Deviations from work plan

None
0338_PrimeFish_635761.md
# Introduction

Through participation in the Open Research Data initiative, and through the creation of a data management plan (DMP), PrimeFish aims to make research data FAIR:

* Findable
* Accessible
* Interoperable
* Reusable

The Data Management Plan summarises general information regarding each data set, covering several subjects all relevant to data management and the possible reuse of data sets. This applies both to data extracted from existing sources and to data developed over the course of the PrimeFish project. The data management plan forms outline how research data will be handled during and after the project, what type of data has been collected/generated, whether the data will be made openly accessible after project end, access procedures, barriers to using it such as language and software requirements, potential ethical issues, and costs associated with research data archiving and storage.

All submitted data management plans are included in the appendix of this document. They are grouped according to the corresponding work package (WP). The data management plans within are listed first according to sub-task where this has been provided. If no task number has been listed, they have been placed in no particular order. Where possible, the corresponding WP and task number is listed for each DMP along with the institution. In cases where two or more identical data management plans were submitted for different tasks, these have been included as a single form.

# Methods

A DMP template (see "Original template" in Appendix) was distributed among all project participants, along with instructions on how to fill out the form. Instructions were given both by e-mail and through a separate, more detailed explanation describing the required information. The information requested in the original DMP template formed the basis for deliverables 1.1 and 1.2 ("Guidelines on data collection methods" and the "Data Management Plan", respectively). The DMP form was later updated (see "New template" in Appendix) to better reflect the FAIR data principles, as well as to better showcase any issues related to topics such as ethics and data security, in those instances where they are relevant. The deliverable has been circulated among all participants once per 18-month reporting period and has been updated based on the feedback provided by the participants.

# Conclusion

The Data Management Plans provided in this document give a detailed overview of the essential elements in the corresponding data sets. Because the datasets contain different types of data, i.e. both quantitative (e.g. catch data, statistical data, etc.) and qualitative (interview data), not all forms contain the same level of detail regarding the information provided. Certain topics, such as interoperability, the use of standards, licensing issues, etc., are therefore not applicable for all datasets.

There are no extra costs associated with making the data available to the public or with ensuring long-term storage for any of the datasets, regardless of whether they are original datasets available on public databases or datasets generated within the project and made available on in-house repositories.

Regarding ethics, PrimeFish applies and adheres to ethical standards and guidelines associated with proper scientific conduct. For the majority of datasets, there are no extraordinary ethical concerns.
For certain datasets, such as those containing interview data, measures have been taken to ensure anonymity for both companies and individuals, including not releasing raw interview data. For more information on ethics, see the relevant sections in the DoA and deliverables.

Most datasets are based on either pre-rendered datasets or datasets generated from public statistics databases. For these datasets, only a link to the original data source has been provided, along with instructions on how to obtain access. Certain datasets generated within the PrimeFish project have been uploaded to open access repositories hosted by the different institutions and given a permanent digital object identifier (DOI). For datasets generated within the project where the responsible institution does not have an in-house open access repository, a PrimeFish community has been created on Zenodo. The community can be accessed from _https://zenodo.org/communities/primefish/_. Not all datasets have been uploaded to the PrimeFish community on Zenodo, but these are still findable through the DOI. Deliverables and reports from the project, which also display the data that underpins the scientific work, are available on the PrimeFish homepage _http://www.primefish.eu/_.
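As a pointer for contributors, the sketch below shows how a dataset could be deposited in the PrimeFish community through Zenodo's public REST API, which mints the permanent DOI upon publication. The access token, file name and metadata values are placeholders, and the snippet is illustrative rather than an official project script.

```python
# Illustrative deposit of a dataset to the PrimeFish Zenodo community via
# Zenodo's REST API; the token, file name and metadata are placeholders.
import requests

API = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder personal token

# 1) Create an empty deposition.
resp = requests.post(API, params=TOKEN, json={})
resp.raise_for_status()
dep = resp.json()

# 2) Upload the data file into the deposition's file bucket.
with open("example_dataset.csv", "rb") as fh:  # placeholder file
    requests.put(f"{dep['links']['bucket']}/example_dataset.csv",
                 data=fh, params=TOKEN).raise_for_status()

# 3) Attach minimal metadata (including the community) and publish;
#    publishing mints the permanent DOI.
metadata = {"metadata": {
    "title": "Example PrimeFish dataset (placeholder)",
    "upload_type": "dataset",
    "description": "Illustrative deposit only.",
    "creators": [{"name": "Doe, Jane"}],
    "communities": [{"identifier": "primefish"}],
}}
requests.put(dep["links"]["self"], params=TOKEN, json=metadata).raise_for_status()
requests.post(dep["links"]["publish"], params=TOKEN).raise_for_status()
```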
0339_CLAIM_774586.md
# Deliverable Number: D6.2

### Plan for Exploitation & Dissemination of Results including a Data Management Plan

# 1. Plan for Exploitation and Dissemination – main objective and scope

The main objective of CLAIM's Plan for Exploitation and Dissemination of Results is to identify and organise the project's dissemination and exploitation activities in order to reach out to the widest possible range of stakeholders and to promote further exploitation of the project results. Another major goal is to set the foundation of a Data Management Plan (DMP) featuring the main data and information sharing principles that will be followed. Exploitation efforts within the project will follow the step-wise guidelines developed within the COLUMBUS project methodology 1.

To ensure exploitation and dissemination objectives are met professionally, effectively and in a timely manner, the following ten basic principles are adopted as the backbone of dissemination and communication:

1. Open access to CLAIM results to the greatest extent possible, while considering IPR (see D7.1);
2. Multi-targeted dissemination of results, based on identifying all relevant target groups;
3. Adjusted and targeted communication messages reflecting the needs of each target group;
4. Multivalent modes of dissemination based on traditional (scientific papers, leaflets, posters, factsheets, policy briefs, press releases, newsletters) and innovative (online broadcasting, blogs, open access journals, data publishing) methods;
5. Extensive use of social networks and content sharing environments (Twitter, Facebook, LinkedIn, Instagram, YouTube) and Search Engine Optimisation (SEO);
6. Translating the scientific results into comprehensive and more understandable forms, such as best practices, recommendations, factsheets and policy briefs;
7. Widest integration of CLAIM results into international networks, professional organisations and NGOs, and their popularisation at large symposia;
8. Regular press releases and news announcements posted through the world's leading (Eurekalert.org) and EU-based (Science for Environment newsletter, Horizon Magazine, Community Research and Development Information Service (CORDIS)) distributors of science news;
9. Feedback from stakeholders used to improve the usability of results;
10. Sustainability of CLAIM results by maintaining the portal and website for at least 5 years after expiration of the funding phase of the project.

# 2. Target groups and stakeholder integration

Key to successful communication and dissemination is identifying the right target groups and tailoring the message to be communicated according to their specific needs and characteristics. Prior to choosing the right message to be delivered, identifying the relevant target audiences, mapped to the identified knowledge outputs (Annex 1), is crucial. The pre-identified CLAIM target audiences are provisionally divided into four main groups, acting on municipal, local, regional, national, European and international levels:

* **Government –** politicians, policy-makers, experts, advisors, regulators;
* **Economy/enterprise –** practitioners, technical experts, industry regulators and lobbyists, public utilities;
* **Research –** scientists, research groups, graduate and post-graduate students;
* **Civic society –** advocacy/lobbying groups, NGOs, citizen scientists, journalists, special-interest private persons and the general public.
Stakeholder identification will be done in two stages: 1) general data collection and 2) defining groups within the database, based on both sector (see preliminary groups above) and scope of work – local, national, European and global. This segmentation of the stakeholder database will allow the dissemination leader and all CLAIM partners to have an overview of the most relevant stakeholders in accordance with the requirements of each specific output. The segmentation will be of particular use for local engagement events, planned within Task 6.2 in this work package, and also for the business consultations planned within WP5.

# 3. CLAIM communication message

CLAIM will produce a wide range of communication messages from its research and deliverables. An important guiding principle of dissemination will be to use the same key output and core message to produce various dissemination materials for a variety of channels in order to maximise uptake of the project outcomes (Fig. 1). This communication message will be clearly defined depending on the objective for raising awareness and the specifics of the target group and the channel chosen. CLAIM's dissemination messages will all be unified under the common project slogan "Clean is the Aim". However, messages can be divided into two semantic levels:

1. General messages conveying the need for action with regard to reducing and tackling the existing problem of marine litter.
2. Project-specific messages, defining how project outputs (technologies, databases, concentration maps, etc.) will help in achieving cleaner seas and oceans.

While messages falling under the first category will mostly aim to continue the successful public communication of marine plastic pollution and the growing concerns over microplastics concentration levels, the second group of messages will be targeted towards industry, policy- and decision-makers with the aim of ensuring the uptake and exploitation of project results.

It is essential to "translate" the scientific terms into easy-to-understand language when addressing stakeholders other than the scientific community. Research projects are usually long-lasting and complex, but the messages to transmit should be simplified as much as possible. The focus will be on clear and simple messages that are easily understood and sent to the right stakeholders through the information source they trust. If the same message has to be sent to different audiences, an appropriate language will be used for each of them. As it is important to think about what we say and how we say it in order to provoke interest in our dissemination activities, we will aim at communicating different types of messages, in line with the specific needs of the identified knowledge output target user:

* **Government level –** scientific research will be translated into concise and easy-to-use policy recommendations and guidelines.
* **Economy/enterprise level –** the materials should be user-friendly and easy to read, to enable the visualisation and translation of information for practical use.
* **Research level –** straight to the point, with a focus on using appropriate scientific terminology and language.
* **Civic society level –** despite the heterogeneity of this group, key to approaching it will be appropriate wording of the messages, adjusted to be translatable and understandable for the wider public.
Tailoring the communication and dissemination activities and the relevant messages will be organised at three different semantic levels:

* **Awareness** – for those who do not need detailed knowledge, but for whom it is useful to be aware of the project activities (e.g. the general public);
* **Understanding** – this type of dissemination will be directed to those who need a deeper understanding of the project because they are interested, work in the same field and/or can benefit from the project outcomes (e.g. project-relevant stakeholders, the scientific community);
* **Action** – this type of dissemination will be targeted at those having the power to influence the achievement of a real change (e.g. policy-makers).

Figure 1: Flow chart showing how multiple uses will be made of project results for dissemination and knowledge transfer purposes.

# 4. Dissemination actors

Within the consortium of partners, WP6 has the responsibility for coordinating communication and dissemination activities and reports progress to the CLAIM coordination team. All other CLAIM partners are expected to actively contribute to popularising CLAIM's results; for roles and responsibilities, see the guidelines below:

## 1.1. Dissemination leader

Pensoft, as the leader of WP6, will be the dissemination leader during the CLAIM project lifetime and is expected to:

1. Coordinate and monitor all dissemination activities;
2. Organise dissemination activities on all project levels;
3. Encourage partners to initiate, and participate in, dissemination activities;
4. Reach out and establish working contacts with relevant activities; and
5. Ensure regular, quality content for the various dissemination channels within the strategy (see sections 5 and 7).

## 1.2. Dissemination and Exploitation Group (DissG)

The Dissemination and Exploitation Group (DissG) is in charge of setting up, updating and implementing the dissemination and exploitation strategy of the project. More details on the work of this group are given in WP6 as well as in Chapter 2.2, Measures to Maximise the Impact. The group is chaired by HMCR and includes representatives of WP2, WP4, WP5, WP6 and SMEs.

## 1.3. Dissemination at the partner level

To ensure the broadest impact and highest level of dissemination, all partners will be actively engaged in the dissemination process by:

* Providing scientific content to the dissemination team (see dissemination forms in D6.1);
* Using their own personal and/or institutional networks and websites to promote the project;
* Taking advantage of relevant conferences to present the project results and distribute dissemination materials.

Communication within the project consortium will be in English. However, most partners will be communicating with local stakeholders and disseminating project results and conclusions in their native languages. To assist reporting, dedicated forms have been designed for partners to use when sharing their activities with the dissemination leader (for more details, see D6.1):

* Symposia and Meetings Form – designed to allow partners to easily report activities from meetings, workshops, conferences, etc.
* General Dissemination Form – designed to allow partners to report all sorts of media participation and promotion of the project, such as in newspapers, magazines and web publications; TV and radio broadcasts; policy briefs; press releases; teaching sessions; PhD and Masters theses, etc.
* Scientific Publications Form – designed to allow partners to report CLAIM-derived journal publications.
# 5\. Overview of communication and dissemination channels

Various means of communication and dissemination will be applied to reach different target groups. The main channels to be used by CLAIM are specified below. The specific use and implementation of each channel is outlined in detail within Section 7 of this strategy.

Communication and dissemination channels created and maintained by CLAIM:

■ Project website
■ Newsletter
■ Promotional materials: brochures, posters, policy briefs, etc.
■ Social networks
■ Mailing list
■ Events

Dissemination channels managed outside of the CLAIM consortium:

■ Journals
■ Mass media
■ Partnering projects’ websites, social networks, events, newsletters

To achieve the main objective of the Plan for Exploitation and Dissemination of Results, CLAIM will work with various selectively targeted groups through formal and informal mechanisms. Once target audiences are identified, it is of foremost importance to select the most appropriate channels to reach them. This can be sub-divided into two main target groups and related communication channels:

1. Scientific audience – the most widely used channels will be articles published in scientific journals and various scientific newspapers, as well as presentations at meetings, workshops, conferences, etc.

2. Other (non-scientific) stakeholders – the most widely used channels will be publications in popular newspapers, journals and magazines, web publications, TV and radio interviews and broadcasts, presentations at information days, policy briefs, social media, email blasts for important results and newsletters, stakeholder workshops, etc.

Open access to CLAIM results will be adopted as a general procedure in the dissemination process. Traditional methods of dissemination (publications in journals, printed materials) will be combined with advanced technologies (online open access publications, e-books, e-journals, email newsletters, the CLAIM Online Library, etc.).

# 6\. CLAIM dissemination and communication

## 6.1. Internal project communication

The internal communication is aimed at better coordination of the communication and dissemination activities. It is organised in a very consistent manner in order to ensure the effectiveness of the communication among the CLAIM participants:

■ Email is the primary tool for internal communication.
■ Skype and/or telephone meetings are regularly used for discussion of various issues.
■ Physical meetings are organised periodically, when intense exchanges and a large number of people are needed. More specifically, General Assembly meetings are held annually, and Steering Committee meetings are held twice a year in person and online every 2 months.
■ Small workshops/WP meetings are organised _ad-hoc_ when deemed necessary.

The Project Management Platform EMDESK, a web-based collaboration and project management application providing a comprehensive set of tools to make proposal writing, project administration and reporting easier for the entire consortium, has been established. It is used for the exchange of data, results, coordination decisions, timetables and information material, and for reporting among partners. It allows each partner to regularly monitor progress in data collation, methodological development, analysis, and deliverables by checking the latest updates in a results section. Regularly updated time schedules for the work within WPs are placed in a prominent location of the intranet pages.
## 6.2. Application for the H2020 Common Dissemination Booster (CDB)

The Common Dissemination Booster (CDB) encourages research and innovation projects to come together in a project group and increase their impact collectively. The CDB shows projects how best to disseminate results to end users, while monitoring exploitation opportunities.

CLAIM partnered with the project GoJelly to apply for the CDB. GoJelly develops, tests and promotes a gelatinous solution to microplastic pollution by developing a TRL 5-6 prototype microplastics filter made of jellyfish mucus. Two environmental issues are addressed: the commercially and ecologically destructive sea and coastal pollution caused by both jellyfish and microplastics. The project is aiming for less plastic in the ocean and more jobs for commercial fishers in off-seasons to harvest the jellyfish. While the two projects decided not to submit the application in the first call, plans are in place to re-visit this common project when another opportunity arises at a more developed stage of the two projects.

# 7\. Aim and use of dissemination tools within the CLAIM project

## 7.1. Development of the project image

Developing the CLAIM logo (Fig. 2) was one of the first steps taken by the CLAIM consortium in order to create a recognisable project identifier. For more information on logo and branding see D6.1.

Figure 2: Horizontal version of the CLAIM project logo.

**_The CLAIM website_** (Fig. 3) acts as a principal means of dissemination of information. The website has been designed so that it is attractive to the different target groups, user-friendly and interactive. It has two distinct areas (public and private), each aimed at a different audience:

Public area – this informs interested parties about the project and its development, allowing easy access to extensive information about CLAIM and its activities, including its aims/objectives, methodological approach, news and events announcements, jobs and articles alerts, contact details, etc. Project deliverables will also be made available on the public website, as well as other published materials that the project has created.

Private area – available to project partners only after login, where project templates, dissemination report forms etc. will be stored.

The website will be regularly updated by placing interesting items on the home page, not only to keep the audience informed, but also to maintain the continued interest of already attracted visitors. The website will be publicised via newsletters and brochures. In addition, it will be submitted to key search engines to acquire traffic. Websites on similar topics will be asked to link to the CLAIM website. A usage log counter is foreseen in order to verify that users are actively searching and using the website. To ensure the long-term sustainability of CLAIM results, the website will be maintained for at least five years after the end of the project. A dedicated manual was created for the website to allow easy use by the dissemination leader and assist in uploading and managing content.

**The CLAIM Online Library** will host (scientific) publications and other information (deliverables) on all project activities that are open for access/download by the external users of the website. All consortium members will be able to upload files in the Online Library. While uploading external documents, the following basic information should be given: Title / Subtitle, Author(s) (of the publication/deliverable), year of publication and standard bibliographic information varying according to the type of the document (e.g. for journal papers: journal's name, volume, pages, etc.), including a web link to the document if stored on an external web platform.
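As a purely hypothetical illustration of such an upload record (the field names below are assumptions made for the sketch, not a prescribed CLAIM schema), the basic information could be captured and checked for completeness like this:

```python
# Minimal sketch of an Online Library upload record for a journal paper.
# Field names and values are illustrative assumptions, not a prescribed
# CLAIM schema.
library_record = {
    "title": "Example title of a CLAIM-derived paper",
    "subtitle": None,                       # optional
    "authors": ["A. Author", "B. Author"],
    "year": 2018,
    "document_type": "journal_paper",       # or: deliverable, policy_brief, ...
    "journal": "Journal Name",              # type-specific bibliographic fields
    "volume": "12",
    "pages": "1-15",
    "external_link": "https://example.org/paper",  # if stored externally
}

# Basic completeness check before upload: the fields named in the text
# above (title, author(s), year) must always be present.
required = ["title", "authors", "year", "document_type"]
missing = [field for field in required if not library_record.get(field)]
assert not missing, f"Missing required metadata: {missing}"
```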
**The CLAIM Media** area hosts all communication materials produced by the project that are likely to be of interest to non-academic stakeholders. This includes leaflets/flyers, policy briefs, posters, videos, brochures, press releases and newsletters. All are freely available for download.

Figure 3: CLAIM’s website homepage.

## 7.2. Outreach materials

Outreach materials, such as posters, brochures, leaflets, newsletters and policy briefs, will be used to advertise the project and provide relevant information. Before producing any Public Relations (PR) material, its purpose will be clearly defined in order to choose the most suitable format for influencing the target audience. All CLAIM PR materials and presentations will have a corporate design and the EU flag will be prominently placed. Several outreach materials have already been developed and others are being planned, as follows:

■ The CLAIM overview poster, produced at the beginning of the project implementation, has an eye-catching design, communicating the CLAIM message.

■ The CLAIM flyer has been designed in a way to capture the attention of the different target groups and increase awareness of the project. It explains the rationale behind the project – its objectives, the activities and main tasks planned, the expected results as well as the organisations involved. It is produced in English.

■ A short video introducing the project and giving a project overview is being created, for which key partners have been interviewed. CLAIM will create a dedicated YouTube channel to ensure wider reach of its video materials.

■ Results and major outcomes of CLAIM will be made available through electronic newsletters.

■ Policy briefs will be created to present a concise summary of the CLAIM research.

The marketing materials will be disseminated in both electronic and printed form as appropriate. The electronic format will be preferred due to its environmental friendliness and economic efficiency. However, printed copies of selected publicity materials will be made available for distribution at relevant (inter)national meetings, workshops and conferences. Electronic versions of relevant materials will be circulated to subscribers to the newsletter list and can be used as a communication tool when approaching target groups via email. For more information on dissemination materials see Deliverable 6.1.

All dissemination materials should be presented to the communication and dissemination leader (Pensoft) and the coordination team (HCMR) for approval before publication; for timelines and responsibility, please see the Consortium Agreement.

## 7.3. Electronic newsletter

A news bulletin will be produced in electronic format, containing and highlighting news of interest for the CLAIM stakeholders. The CLAIM newsletters will be issued approximately twice a year, depending on when relevant outcomes from the project become available.
All CLAIM partners are expected to actively contribute to the newsletter by providing the WP6 dissemination team with relevant information on news, details on upcoming events, project results, publications and any other activities which could be of interest to the project stakeholders and the general public and can help increase the project’s visibility. Information on interviews given for local media, published articles, public lectures, and presentations given at seminars or workshops could also be included. To maximise the impact of the newsletters they will be combined with any relevant photographs and/or multimedia if, and when, possible.

The news digests will be disseminated to all people subscribing to them via the CLAIM website. They will also be available for free download on the news page of the website.

## 7.4. Press releases

Throughout the project implementation, and especially when important project milestones and deliverables are met, press releases will be issued to disseminate the results. Press releases for major scientific results published in peer-reviewed papers will be used as a main communication route to reach science journalists and other mass media. The responsibility for preparation of a press release usually lies with the first author, who should report interesting and newsworthy results and publications to WP6 (the dissemination leader Pensoft and the coordinating institution HCMR). The authors, together with WP6, will prepare the press materials and discuss distribution and expected impact.

CLAIM will use the channels of EurekAlert!, one of the world’s largest online distributors of science news, which distributes press releases to more than 5,000 mass media outlets and independent science journalists. Personal and institutional channels will also be utilised to ensure as wide a coverage as possible.

“Cleaning marine litter in the Mediterranean and the Baltic Sea”

■ EurekAlert! – https://www.eurekalert.org/pub_releases/2017-10/pp-cml102417.php (1,859 views)
■ CORDIS Wire – https://cordis.europa.eu/news/rcn/141802_en.html

“Ways to reduce ocean plastic pollution is the focus of a workshop hosted by European experts”

■ EurekAlert! – https://www.eurekalert.org/pub_releases/2018-05/pp-wtr051018.php
■ CORDIS Wire – https://cordis.europa.eu/event/rcn/146079_en.html?WT.mc_id=emailNotification

Addressed to a wide range of stakeholders and written in simple language, avoiding scientific terminology and unnecessary details, the press releases received good coverage, securing a couple of publications in related media for CLAIM.

## 7.5. Social networks and sharing platforms

News and announcements on the website will be disseminated using technologies, such as social networks, to address a range of users. The CLAIM project will take full advantage of social media communication. A social media strategy has been designed to define clear and specific goals and outline a detailed and systematic plan of actions for social media use.

An analysis of the project’s specificities and the functionalities and specifics of five main social networks (Twitter, Facebook, Instagram, LinkedIn and YouTube) showed that each social network offers different benefits and can have a potential unique use within the CLAIM project (Tab. 1). The project already owns accounts in these networks and their current status is shown in Table 2.
As a result of the social media analysis and outline of social media to be used within the project, a social media strategy has been drafted. This aims to adapt the content and the features used within each social medium, taking account of target users and intended messages. A specific action plan has been outlined to increase membership and to generate content, as well as to strengthen the existing weak points within CLAIM’s social media visibility.

Target groups within each network:

**_Twitter:_** all stakeholders, environmental organizations and initiatives, bloggers/media accounts, general public interested in the project themes, professionals.

**_Facebook:_** all stakeholders, environmental organizations and initiatives, general public interested in the project themes.

**_Instagram:_** all stakeholders, general public, relevant organizations, bloggers and influencers, academia/students.

**_LinkedIn:_** all stakeholders, environmental industry/projects/initiatives, professionals from the field.

**_YouTube:_** all stakeholders, environmental industry/projects/initiatives, general public and academia/students.

Getting the message across for each network:

**_Twitter:_** stakeholders can contribute with short, to-the-point messages using suitable hashtags (#) and connecting to the right accounts (@), following the right initiatives and using lists for re-tweeting.

**_Facebook:_** creating events; relevant posts, images, videos, and albums can be uploaded from workshops, meetings and conferences.

**_Instagram:_** images, videos, and albums can be uploaded from meetings and conferences; the focus is on visual communication, captions are short and hashtags (#) can be used.

**_LinkedIn:_** posting relevant discussion topics; these can be generated through project-related news, or by choosing relevant topics from other initiatives.

**_YouTube:_** posting relevant videos, created through project-related news (test sites, workshops, interviews with project partners and stakeholders), or by choosing relevant topics from other initiatives.

Table 1: Comparison of the pros and cons of five social networks and sharing platforms for use in CLAIM.
<table>
<tr>
<th> </th>
<th> **Functionalities and features – pros and cons** </th>
<th> **In the context of CLAIM** </th>
</tr>
<tr>
<td> **Twitter** </td>
<td> **Pros:** Short, fast, easy communication; popular and with a high number of users; Twitter lists are an easy way to follow news and interact; event back-channelling. **Cons:** Rather limited in space and media sharing; tweets have a short searchability lifetime. </td>
<td> ■ Generate interest and share on-going news and activities through posts/tweets ■ Build a community around the project and get relevant news ■ Conference live streams/post-conference reviews </td>
</tr>
<tr>
<td> **Facebook** </td>
<td> **Pros:** Useful for sharing media (pictures, videos); high number of users; create events and invite users; community-like feel. **Cons:** Less professional and used mainly for personal social activities; scientific content might not enjoy vast reach. </td>
<td> ■ Generate interest and share on-going news and activities through posts ■ Share relevant multimedia (in posts, or as separate albums) ■ Event creation and promotion: strengthening the sense of community around the project ■ Create groups to share group messages ■ Insights: provide useful analytics for the development of the page </td>
</tr>
<tr>
<td> **Instagram** </td>
<td> **Pros:** Growing network; provides visual reference; connects to a relevant community through hashtag usage. **Cons:** Not popular in scientific fields; limited textual content; not all of the targeted audience has Instagram. </td>
<td> ■ Generate interest and share on-going news and activities through posts ■ Convey the brand message through visual communication ■ Generate interest in a younger target group </td>
</tr>
<tr>
<td> **LinkedIn** </td>
<td> **Pros:** A predominantly professional network; creates potential for professional networking across members; participation in group discussions. **Cons:** More popular in business than in academia; seen more as an opportunity to professionally showcase yourself, rather than as a social tool. </td>
<td> ■ Form a more professionally meaningful discussion, disseminating news and developments around the project in an engaging discussion form ■ Facilitate networking among the members ■ Job advertising for CLAIM posts </td>
</tr>
<tr>
<td> **YouTube** </td>
<td> **Pros:** Wide reach; provides visual reference; SEO advantage; popular in the scientific field. **Cons:** Content-saturated platform; hard to engage viewers. </td>
<td> ■ Share information in an engaging way ■ Facilitate networking among the members ■ Storytelling </td>
</tr>
</table>

_Figure 4: CLAIM social media strategy workflow._

**This will be implemented using the following action plan:**

**1\. Create and share**

_At the partner level:_ All partners should join the project social media groups; each partner is encouraged to launch regular postings or discussions on social networks.

_At the project level (PENSOFT):_
_Online promotion:_

■ Add links to the social media groups in all online tools and communication;
■ Newsletters;
■ Emails and blast emails.

_Offline promotion:_

■ Add the social media groups’ URLs in all communication materials:
■ Leaflets, brochures, posters;
■ Reports;
■ PowerPoint presentations at other events.

**2\. Increase visibility and membership**

_At the partner level:_ Partners should invite their contacts to join the project’s groups, sending promotional messages and using the features (such as #s) through the social networks; partners should send promotional emails to their related professional networks.

_At the project level (PENSOFT):_

■ Join other related groups (marine litter cleaning initiatives, news agencies etc.) and promote the CLAIM groups;
■ A Twitter engagement list containing the followed accounts will be created and constantly updated to provide an overview of the Twitter project community (Annex 2);
■ Invite people/organisations to join the CLAIM groups;
■ Send an email blast;
■ Online and offline promotion.

## 7.6. Policy briefs

Policy briefs describing the major outcomes of CLAIM will be produced to target decision- and policy-makers:

■ Research results will be made accessible to policy-makers using accurate, timely and reliable evidence in order to engage them and sustain their interest.

■ The language will be non-technical but professional, highlighting the project’s policy relevance in order to capture the interest of policy-makers by explaining the project’s significance in a concise way and outlining the main policy problem addressed.

■ Special focus will be given to the policy implications of the information, and recommendations for concrete actions will be suggested.

While WP5 will be responsible for the conceptual framework and content of the policy briefs, WP6 will assist the scientists by providing a template setting length limitations (normally 4 and at most 6 pages) and offering professional design and editing services.

## 7.7. Scientific papers

One of the most effective ways to target the scientific community is by publishing results in scientific journals. Therefore, where appropriate, the findings of the project will be elaborated as research papers and submitted for publication in peer-reviewed academic journals.

In all scientific papers, authors must clearly acknowledge CLAIM as a project and the European Union as the funding source by adding the following sentence:

_“The research leading to these results has received funding from the European Union’s Horizon 2020 Programme, under grant agreement no 774586, CLAIM Project (CLAIM – Cleaning Litter by developing and Applying Innovative Methods in European seas, www.claim-h2020project.eu).”_

If possible, the following sentence should also be added:

_“This project has been funded with support from the European Commission. This publication [communication] reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.”_

The CLAIM acknowledgment will allow the work to be considered as an official dissemination activity. Partners are required to provide information on any scientific paper by reporting its status (submitted, accepted, in press, published) in the dissemination report form available in the CLAIM internal website area. An electronic copy of the paper will be sent to the CLAIM dissemination leader for publication on the project website.
The PDF of the article will be made available in the public part of the website when the paper is open access, or in the private section when it comes to a restricted-access article.

# 8\. Exploitation of project results through stakeholder engagement

CLAIM follows the basic principles and the five general steps of the knowledge transfer and impact methodology developed by the Horizon 2020 research project COLUMBUS, as outlined in its Deliverable D2.2 Guidelines on carrying out COLUMBUS Knowledge Transfer and Impact Measurement. Within its Exploitation and Dissemination plan, CLAIM will adjust and adopt these good practices to provide stepwise instructions to the partners involved in the Knowledge Pathways.

Figure 5: Basic steps of the COLUMBUS Methodology (COLUMBUS D2.2).

Overall, there will be two types of stakeholder engagement events planned within CLAIM, focusing respectively on:

1. The co-creation of business models and cases together with stakeholders through 5 workshops (within WP5), which will aim to ensure the maximum uptake, impact and sustainability of the cleaning technologies developed within the project, and

2. A series of 5 local engagement events that will ensure a wider audience is reached, aiming at transferring knowledge to policy, industry, science and also civic society or the public at large.

While the steps within the two event types will be overall similar, strategies will be adjusted within each event type to ensure users are targeted correctly and knowledge is transferred successfully towards anticipated immediate and eventual impacts.

As a first step towards designing the various knowledge transfer pathways within CLAIM, the knowledge outputs were mapped on an overall project basis (see Annex 1). This initial table summarises the project’s main knowledge outputs, identified from the DoA and refined through consultation with partners and the coordination team during the development of various introductory project marketing and PR materials. Step one will be repeated with a focus on prioritising and picking outputs in accordance with the needs of each workshop.

Annexes 2 and 5 are also provided for the benefit of partners and will serve to guide them through the process of planning an event (Annex 2) and building the necessary communication outputs (Annex 5). However, partners should be in touch with the communication leader (WP6) at least 5 months before an event to allow for efficient stakeholder mapping and building of the corresponding communication messages and outputs.

To evaluate impact, special surveys (developed with WP5) will be created and distributed to participants before and after each event in an attempt to measure the effectiveness of knowledge transfer by measuring competence on key topics before and after CLAIM’s communication.

#### A: Co-creation of business models

The five workshops envisaged within WP5 can be roughly divided into two groups: 2 internal workshops and 3 business model consultations. The first two workshops will be used to go through steps 1–4 of the COLUMBUS methodology, which will ensure that before the first business model consultation the partners involved will have a clear Knowledge Transfer Plan to be executed at Consultation 1 and evaluated and re-designed accordingly for the consecutive two consultations.

#### B: Local engagement events

The local engagement events within CLAIM will aim to transfer knowledge outputs and create impact through a wider range of stakeholders.
The events will take place within the project’s case study areas, and within step one of planning each event a set of knowledge outputs will be picked from the general table in Annex 1. A detailed step-by-step guidance and suggested event forms have been developed to assist partners when planning the events (see Annex 2). The dissemination and communication leader will work closely with each local organizer to assist them in designing their knowledge output pathways at each step of the process. Possible forms of the local engagement events are:

■ Summer schools (20–30 persons). Target: young researchers and practitioners.
■ Co-creation workshops (30–40 persons). Target: researchers, policy makers, practitioners.
■ Demonstrations & exhibitions (unlimited). Target: researchers, policy makers, practitioners and the general public.

Additionally, WP6 will work closely with all local organizers to assist them with the creation of relevant dissemination and communication materials for the events, as well as with raising awareness about the events and attracting local media coverage.

The impact of each event will be analysed through comparative surveys distributed at the beginning and end of each event (WP5) to assess the baseline knowledge on key issues (marine litter status, microplastics in drinking water, current policies etc.) and measure the achieved improvements in participants’ awareness.

#### C: Presentations of CLAIM results at international conferences

Another very effective means of targeting the scientific community is through presentations at scientific symposia. The CLAIM members are encouraged to participate and present the project and disseminate its results at relevant national and international meetings, workshops, conferences and congresses. A workflow has been established at the beginning of the project and outlined in the Consortium Agreement to ensure that participation at events is approved by WP6 and the coordination team and is in line with the best interests of both CLAIM as a project and each individual partner as part of the consortium. All plans for participation and abstract submission should be announced well in advance to allow communication with the parties involved if a potential conflict of interest arises.

A list of the most relevant forthcoming symposia has been developed based on contributions from project partners (Annex 4). The list will be regularly updated during the project lifetime. It will aim for good geographical coverage (national, European and worldwide), discipline coverage, as well as scientific and non-scientific events. Presentations of interest given by partners at important events will also be uploaded on the website and shared via social media and mailing lists to increase visibility and outreach. CLAIM-relevant events are regularly published on the project website in order to assist partners in selecting the most suitable event in which to present their results to the wider scientific community and interested parties.

# 9\. Matching the exploitation and dissemination channels/tools and target groups

Table 3 brings together the information presented in Section 2 on target groups with the dissemination channels and tools described in Section 7. This shows the interactions between the different components of the Dissemination and Communication Strategy:

Table 3: Interaction between the CLAIM dissemination and communication components.
<table>
<tr>
<th> **Dissemination tool** </th>
<th> **Target groups** </th>
<th> **Contribution to the project dissemination objectives** </th>
<th> **Verification of use** </th>
</tr>
<tr>
<th colspan="4"> **Project website** </th>
</tr>
<tr>
<td> General </td>
<td> All target groups </td>
<td> Inform and engage interested parties through provision of general information about the project and its main outcomes </td>
<td rowspan="2"> Number of visits, number of requests, unique visitors and document downloads </td>
</tr>
<tr>
<td> Online document library (public) </td>
<td> All interested stakeholders, academics </td>
<td> Open access to papers, reports and deliverables </td>
</tr>
<tr>
<td> News </td>
<td> All interested parties </td>
<td> Increase awareness of, and feedback on, project outcomes </td>
<td> Number of visits and comments </td>
</tr>
<tr>
<th colspan="4"> **Other dissemination channels** </th>
</tr>
<tr>
<td> Social networks and sharing platforms: Facebook, Twitter, Instagram, LinkedIn, YouTube </td>
<td> Academics and students, stakeholders, general public, including potential unforeseen users </td>
<td> Inform on key project events and outcomes; active dialogue within networks; discovery of unforeseen users and stakeholders </td>
<td> Number of posts; number of re-tweets (Twitter); number of followers, views and “likes” </td>
</tr>
<tr>
<td> Scientific publications, CLAIM special journal issues </td>
<td> Scientific community </td>
<td> Presentation of research findings and evaluation of their scientific quality through feedback from the scientific community </td>
<td> List of publications </td>
</tr>
<tr>
<td> Presentations at scientific conferences </td>
<td> Scientific community </td>
<td> Presentation of research findings and evaluation of their scientific quality through feedback from the scientific community </td>
<td> List of international or national conferences where the project results are presented </td>
</tr>
<tr>
<td> Poster </td>
<td> All target groups </td>
<td> Promotion of the project </td>
<td rowspan="3"> Number of downloads of electronic copies or handouts at conferences </td>
</tr>
<tr>
<td> Leaflets/flyers </td>
<td> Project stakeholders, academics and students, generally interested public </td>
<td> Increase awareness about the topics dealt with by the project </td>
</tr>
<tr>
<td> Policy factsheets/briefs </td>
<td> Policy and decision-makers, practitioners, NGOs </td>
<td> Knowledge transfer from the project to policy-makers for key issues; engagement of scientists in the policy-making process </td>
</tr>
<tr>
<td> Newsletter </td>
<td> Project stakeholders, academics and students, general public </td>
<td> Provision of information about ongoing events, project outcomes and related activities </td>
<td> Number of subscribers receiving the newsletter </td>
</tr>
<tr>
<td> Concise final brochure, translated into several EU languages </td>
<td> All stakeholders at international/national/regional level </td>
<td> Provision of a concise summary of the CLAIM project outcomes to stimulate decision-making, policy implementation and raise awareness </td>
<td> Number of distributed printed copies of the final brochure; number of downloads of electronic copies </td>
</tr>
<tr>
<td> External blogs, e-newsletters, websites </td>
<td> Various project stakeholders, related projects and networks </td>
<td> Dissemination and discussion of specific topics of interest; facilitate collaboration/uptake </td>
<td> Number of posts </td>
</tr>
<tr>
<td> Project-relevant mailing lists and networks </td>
<td> Scientific community and CLAIM’s specific stakeholders </td>
<td> Dissemination and discussion of specific topics of interest; facilitate collaboration/uptake </td>
<td> Account of mailing lists and networks </td>
</tr>
<tr>
<td> Training events </td>
<td> Graduate and postgraduate students, researchers </td>
<td> Increase/transfer of knowledge, skills and/or competences </td>
<td> List of training events and number of trainees </td>
</tr>
<tr>
<td> Stakeholder interviews and workshops </td>
<td> Project stakeholders </td>
<td> Stakeholder engagement and evaluation of stakeholder needs </td>
<td> Interview and workshop reports and summaries of recommendations </td>
</tr>
<tr>
<td> Email alert </td>
<td> Stakeholders and generally interested public </td>
<td> Semi-automated dissemination of news and announcements to increase the user base </td>
<td> Number of subscribed users </td>
</tr>
<tr>
<th colspan="4"> **Mass media** </th>
</tr>
<tr>
<td> Press releases </td>
<td> Journalists, mass media, project stakeholders, general public </td>
<td> Announcement of significant project results </td>
<td> Number of press releases issued; number of visits of particular press releases </td>
</tr>
<tr>
<td> Publications in newspapers and popular magazines </td>
<td> General public </td>
<td> Raising public awareness on project aims, methods and outcomes </td>
<td> List of publications </td>
</tr>
<tr>
<td> Interviews </td>
<td> General public </td>
<td> Raising public awareness on project aims, methods and outcomes </td>
<td> List of interviews </td>
</tr>
<tr>
<td> Broadcasts (TV and radio) </td>
<td> General public </td>
<td> Raising public awareness on project aims, methods and outcomes </td>
<td> List of broadcasts </td>
</tr>
<tr>
<td> Multimedia clip </td>
<td> General public </td>
<td> Communication of project key messages </td>
<td> Number of visits and comments on the YouTube video; number of downloads from the website </td>
</tr>
</table>

# 10\. Access to the information

CLAIM’s website will ensure easy and fast access to progress updates and results. Closed mailing lists or password-protected webpages will only be used when there is a good reason for restricting or limiting access.

## 10.1. Evaluation of the effectiveness of the communication and dissemination activities

In order to ensure that the different target groups will get the right messages using the best methods at the right time, communication and dissemination activities are being prepared well in advance and have indeed started with the project launch. Suitable mechanisms (regular reviews, Google Analytics for the website, analytics for social media, baseline scores – Tab. 4) will be used to review progress and the extent to which the General Communication and Dissemination Strategy and Implementation Plan meets its objectives.

Once stakeholders are identified, messages are defined and dissemination methods are chosen, the effectiveness of the communication and dissemination activities will be measured in order to learn from and/or improve them. This evaluation will help reveal if the communication and dissemination activities have influenced the knowledge, opinion and/or behaviour of the target group. In order to review and measure the progress and the effectiveness of the communication and dissemination activities we have established the targets shown in Table 4.
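Progress against these targets can also be checked programmatically. The sketch below uses made-up counts (not real project figures) and an assumed three-value structure mirroring the baseline and target columns of Table 4:

```python
# Sketch: compare current dissemination indicator counts against
# baseline and target values (the "current" numbers below are
# placeholders, not real project figures).
indicators = {
    # name: (current, baseline, target)
    "website_visits_per_year": (9500, 8000, 12000),
    "newsletter_subscribers": (110, 100, 150),
    "press_releases_total": (2, 8, 12),
}

for name, (current, baseline, target) in indicators.items():
    progress = (current - baseline) / (target - baseline)
    status = "on track" if current >= baseline else "below baseline"
    print(f"{name}: {current} ({status}, {progress:.0%} of the way to target)")
```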
To guarantee the continued effectiveness of the General Communication and Dissemination Strategy and Implementation Plan as the project progresses, it will be regularly updated, building on experience so far. To guarantee this, the following guidelines will be followed:

■ The implementation plan will be adjusted continuously;
■ Communication and dissemination activities will be subject to evaluation;
■ The focus will be on the stakeholders;
■ The dissemination will be focused on quality and not just quantity;
■ The communication and dissemination activities will be considered effective when the target audience is engaged.

Table 4: Table of the effectiveness measurement indicators.

<table>
<tr>
<th> **Objective** </th>
<th> **Indicator** </th>
<th> **Baseline** </th>
<th> **Target** </th>
</tr>
<tr>
<td rowspan="13"> **Raised public awareness** </td>
<td> Number of website visits (per year) </td>
<td> >8,000 </td>
<td> >12,000 </td>
</tr>
<tr>
<td> Number of people registered for the project dissemination list to receive the newsletter (in total) </td>
<td> >100 </td>
<td> >150 </td>
</tr>
<tr>
<td> Number of press releases issued (in total) </td>
<td> 8 </td>
<td> 12 </td>
</tr>
<tr>
<td> Number of views accumulated per press release (in total) </td>
<td> 700 </td>
<td> 1,050 </td>
</tr>
<tr>
<td> Number of policy briefs written (in total) </td>
<td> 3 </td>
<td> 5 </td>
</tr>
<tr>
<td> Number of outreach materials distributed to stakeholders (e.g. posters, brochures, newsletters, fact sheets) (in total) </td>
<td> 1,000 </td>
<td> >1,500 </td>
</tr>
<tr>
<td> Number of CLAIM project meetings </td>
<td> 10 </td>
<td> 15 </td>
</tr>
<tr>
<td> Number of participants at all CLAIM stakeholder workshops (in total) </td>
<td> 100 </td>
<td> >150 </td>
</tr>
<tr>
<td> Number of international conferences where CLAIM results are presented (in total) </td>
<td> 5 </td>
<td> 10 </td>
</tr>
<tr>
<td> Number of people present at conferences/large meetings where CLAIM has been presented orally to raise awareness </td>
<td> 50 </td>
<td> 100 </td>
</tr>
<tr>
<td> Number of news posts on the website (per year) </td>
<td> 50 </td>
<td> 100 </td>
</tr>
<tr>
<td> Number of new followers in the social networks (per year) </td>
<td> 30 </td>
<td> 60 </td>
</tr>
<tr>
<td> Number of posts in the social networks (these vary in the different social media channels) (per year) </td>
<td> 70 </td>
<td> >140 </td>
</tr>
</table>

# 11\. Data Management Plan

## 11.1. Open access statement and data sharing

In the context of the Horizon 2020 programme, the European Commission published a document titled “Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020”. The document clearly describes the need that led to the mandate for open access to scientific publications, research data and their associated metadata that have been produced under funding from the Horizon 2020 programme. At the same time, the document states the European Commission’s view on this aspect: “information already paid for by the public purse should not be paid for again each time it is accessed or used, and that it should benefit European companies and citizens to the full”. In this context, this chapter provides the plan for the management of research outcomes (and more specifically, the research publications and data) that will be produced during the CLAIM project lifetime, as well as those that will be collected from the CLAIM partners for the respective use cases.
It aims to ensure that the research activities of the project are compliant with the H2020 Open Access policy and the recommendations of the Open Research Data pilot. In this context, the project’s Data Management Plan (DMP) described in this chapter outlines how research data will be collected, processed or generated within the project, and how this data will be curated and preserved during and after the project.

#### What is the Open Research Data Pilot?

Open data is data that is free to use, reuse, and redistribute. The Open Research Data Pilot aims to make the research data generated by selected Horizon 2020 projects open. It will be carefully monitored and used to inform future EC policy. As a Horizon 2020 project participating in the pilot, CLAIM must:

■ Develop (and keep up-to-date) a Data Management Plan (DMP).
■ Deposit its data in a research data repository.
■ Make sure third parties can freely access, mine, exploit, reproduce and disseminate it.
■ Make clear what tools will be needed to use the raw data to validate research results (or provide the tools themselves).

The pilot applies to (1) the data (and metadata) needed to validate results in scientific publications, and (2) other curated and/or raw data (and metadata) specified in the DMP.

#### Where to store the data after the project?

A data repository is a digital archive collecting and displaying datasets and their metadata. Many data repositories also accept publications, and allow linking between publications and their underlying data. If there is no disciplinary or institutional repository available, researchers are welcome to use the Zenodo repository (www.zenodo.org), provided by OpenAIRE and hosted by CERN.

CLAIM will collect different data types, both generated from the project and obtained from existing external sources. Data collection templates will be created within WP1 and WP5 respectively, reflecting the specific necessities of the work within the two WPs. The templates will ensure that collected data are interoperable both within CLAIM’s context and in the context of outside data aggregators and archives. Data types will include: plastic litter data from existing sources, new plastic litter data from experiments and technology testing within CLAIM, modelled plastic litter data, mapped plastic litter source data, socio-economic data and efficiency data for cleaning devices (see also D7.1).

In its data publishing and dissemination policies, CLAIM will follow the basic postulates of the **Open Knowledge/Data Definition and the Panton Principles for Open Data in Science**; CLAIM will strengthen and develop various Open Source databases and tools, providing also access to the data sets underlying the published research. Database Rights will be generated with the collation of data collections, the addition of new data and new ways of displaying and archiving datasets. The partners should, as much as possible, manage the Database Rights with the aim of using, to the maximum possible extent, copyright licenses allowing free distribution of data. Such a license is the **Open Data Commons Attribution License (ODC-By)**, which allows users to freely share, modify, and use the published data(bases) provided that the data authors are acknowledged (cited in academic articles or acknowledged when used for other purposes). Data that are deemed to be commercially sensitive will be clearly identified and treated in accordance with D7.1 Quality Assurance Procedure Report and IPR strategy.
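For datasets deposited on Zenodo, the deposit can be scripted through Zenodo’s REST API. The following is a minimal sketch only: the token, file name and metadata values are placeholders, and the current Zenodo API documentation should be checked before relying on it.

```python
# Minimal sketch of depositing a CLAIM dataset on Zenodo via its REST API.
# ZENODO_TOKEN and all field values are placeholders; check the current
# Zenodo API documentation before relying on this.
import requests

API = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "ZENODO_TOKEN"}

# 1) Create an empty deposition.
dep = requests.post(API, params=params, json={}).json()

# 2) Upload the data file to the deposition's file bucket.
with open("litter_observations.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/litter_observations.csv",
                 data=fh, params=params)

# 3) Attach metadata, including an open license (here CC0, as discussed above).
metadata = {
    "title": "Example CLAIM marine litter dataset",
    "upload_type": "dataset",
    "description": "Placeholder description of the dataset.",
    "creators": [{"name": "Surname, Name", "affiliation": "CLAIM partner"}],
    "license": "cc-zero",
}
requests.put(f"{API}/{dep['id']}", params=params, json={"metadata": metadata})

# 4) Publish, making the dataset citable and openly accessible.
requests.post(f"{API}/{dep['id']}/actions/publish", params=params)
```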
Authors should explicitly inform the project coordination if they want to publish data associated with a journal article under a license that is different from the Open Data Commons Attribution License (ODC-By), as outlined within the Consortium Agreement. In case data have been previously deposited or published elsewhere under a license different from the above, the author should explicitly mention that in the text of the manuscript, cite the respective license and link to it.

Some of the data, at the discretion of the data collectors and in coordination with the project leaders, could also be published under Creative Commons CC0 (also cited as “CC-Zero” or “CC-zero”) or the Open Data Commons Public Domain Dedication and License (ODC-PDDL). According to the CC0 license, _“the person who associated a work with this deed has dedicated the work to the public domain by waiving all of his or her rights to the work worldwide under copyright law, including all related and neighbouring rights, to the extent allowed by law. You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.”_ Publication of data under a non-attribution waiver such as CC0 avoids potential problems of “attribution stacking” when data from several sources are aggregated for re-use, particularly if this re-use is undertaken automatically. In such cases, while there is no legal requirement to provide attribution to the data creators, the norms of academic citation best practice for fair use still apply, and those who re-use the data should reference the data source, as they would reference other research articles.

The attribution/share-alike Open Data Commons Open Database License (ODbL) is NOT recommended for use for CLAIM data, although it may be used as an exception in particular cases. The ODbL license assumes that “_if one publicly uses any adapted version of the database, or works produced from an adapted database, he or she must also offer that adapted database under the ODbL_”.

The following methodology and reporting steps are suggested to partners managing data:

#### DATASET CONTENT, PROVENANCE AND VALUE

What type of data has been collected or created? Data can be derived from one or more datasets that relate to each use case. The DMP will explain the background (retaining provenance) of the described dataset. Imported data can then be combined, processed and analysed, generating additional data. A description of the operations leading to this data should also be included.

What is the value for other researchers? Candidates for reuse are identified in the respective user-driven requirements and use cases deliverables.

#### STANDARDS AND METADATA

Which data standards will the data conform to? The consortium will strive to comply with or reuse existing standards whenever possible. Although original data sources may conform to different formats and standards, data processed by the CLAIM data layer will likely have been transformed into formats complying with a set of well-known standards.

What documentation and metadata will accompany the data? In addition to the data collection activities, CLAIM will also generate its own valuable data assets in terms of metadata that will improve the description, interlinking, normalization, unification, and quality assessment of the collected datasets.
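To make this concrete, a tabular litter dataset could ship with a lightweight machine-readable descriptor, loosely modelled on the Frictionless Data “data package” format. All names, units and the litter-category comment below are illustrative assumptions, not a mandated CLAIM standard:

```python
import json

# Sketch of a machine-readable descriptor for a tabular litter dataset,
# loosely modelled on the Frictionless Data "data package" format.
# All names, units and values are illustrative assumptions.
descriptor = {
    "name": "example-beach-litter-counts",
    "licenses": [{"name": "ODC-By-1.0"}],   # cf. the licensing section above
    "resources": [{
        "path": "litter_observations.csv",
        "format": "csv",
        "schema": {
            "fields": [
                {"name": "site_id", "type": "string"},
                {"name": "survey_date", "type": "date"},
                {"name": "item_category", "type": "string"},   # e.g. an MSFD litter category
                {"name": "items_per_100m", "type": "number"},  # beach-survey density
            ]
        },
    }],
}

# Serialise as a datapackage-style JSON file accompanying the CSV.
with open("datapackage.json", "w") as fh:
    json.dump(descriptor, fh, indent=2)
```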
#### DATA ACCESS AND SHARING

Which data is open and re-usable, and what licenses are applicable? It is envisaged that most of the datasets resulting from project activities will be of an open nature, i.e., data which is freely accessible and protected by minimally restrictive or unrestricted licenses. However, some data could also be obtained via private access. In both cases, the consortium will ensure that any imported data conforms to existing or indicated licenses. In particular, the attachment of the Open Data Commons Open Database License (ODbL) to open datasets could be adopted, promoting the three core requirements of attribution, share-alike and the retention of its open nature. Additional usage and sharing restrictions on a dataset will be defined through additional licences or modifications of existing alternatives. Justifications for restrictions to dataset access or re-use should be explained clearly in data reports.

How will open data be accessible and how will such access be maintained? Data access will vary depending on the storage location. Measures will be taken to enable third parties to freely access, re-use, analyse, exploit and disseminate the data (bound by the license specifications). Different access procedures will be implemented, enabling the export of an entire dataset as well as the provision of a querying interface for the retrieval of relevant subsets. Access mechanisms will also be supported as much as possible by metadata enabling search engines and other automated processes to access the data using standard Web mechanisms.

Which privacy protocols are implemented? In the case that a dataset contains sensitive corporate or personal data, privacy protocols need to be established and followed throughout the aggregation, processing and publishing stages. The anonymisation of personal information should precede the processing stage. Additional data pre-processing measures may need to be taken to safeguard individuals or groups. If the data processing results still produce sensitive data, access controls will be enforced and described.

#### DATA ARCHIVING, MAINTENANCE AND PRESERVATION

Where will each dataset be physically stored? Depending on the nature of the data, a dataset might eventually be moved to an external repository, e.g. the European Open Data Portal. Data generated via other means can have additional hosting arrangements.

What physical resources are required to carry out the plan? During the pilot project phase, hosting, persistence and access will be managed by the project partners’ infrastructure. Partners with the most suitable hosting and processing capabilities have been identified early in the project lifetime.

What are the physical security protection features? Once a dataset is published and access enabled, security will be addressed to ensure that the data cannot be tampered with and its veracity can be guaranteed.

How will each dataset be preserved to ensure long-term value? Since the majority of data integrated and generated within the CLAIM infrastructure will abide by the Linked Data principles, the consortium will follow the best practices for supporting the life cycle of Linked Data. This includes its curation, repair and evolution, thus also increasing the likelihood that machine-readable structured datasets (and associated metadata) resulting from project efforts can also be of long-term use for third parties.
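Returning to the privacy protocols above, the sketch below shows one way personal identifiers could be pseudonymised before the processing stage, using only the Python standard library. The column names and the keyed-hash approach are illustrative assumptions, not a mandated CLAIM procedure:

```python
# Sketch: pseudonymise personal identifiers before the processing stage.
# Column names and the keyed-hash approach are illustrative assumptions.
import csv, hashlib, hmac

SECRET_KEY = b"project-held secret"  # kept separately from the published data

def pseudonym(value: str) -> str:
    # Keyed hash: yields a stable pseudonym, not reversible without the key.
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

with open("survey_raw.csv", newline="") as src, \
     open("survey_anonymised.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["respondent_id", "answer"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "respondent_id": pseudonym(row["email"]),  # drop the direct identifier
            "answer": row["answer"],
        })
```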
Who is responsible for delivering the plan? Consortium members collecting and re-using data will be tasked to follow the data management steps and make sure all aspects are considered when producing reports and products within WP1 and WP4.

#### Data publishing

Data publishing is the act of making data available on the Internet, so that it can be downloaded, analysed, re-used and cited by people and organisations other than the creators of the data. This can be achieved in various ways. In the broadest sense, any upload of a dataset onto a freely accessible website could be regarded as “data publishing”. There are, however, several issues to be considered during the process of data publication, including:

■ Data hosting, long-term preservation and archiving
■ Documentation and metadata
■ Citation and credit to the data authors
■ Licenses for publishing and re-use
■ Data interoperability standards
■ Format of published data
■ Software used for creation and retrieval
■ Dissemination of published data

CLAIM data could be published in Pensoft’s open access journal Research Ideas and Outcomes (RIO) under a project collection, such as, e.g., the EU BON project collection: https://riojournal.com/collection/2/

The journal accepts various types of unconventional research outputs, such as data management plans, software descriptions, project reports, grant proposals, etc.

#### Intellectual property rights

The procedures to disseminate, protect and exploit the IPR are covered in D7.1 Quality Assurance Procedure Report and IPR strategy.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0341_DESTINATIONS_689031.md
# Executive Summary

Throughout the whole work programme, the CIVITAS DESTINATIONS project embeds the process of data management and the procedure for compliance with the ethical/privacy rules set out in the Ethics Compliance Report (D1.1). The data management procedures within the CIVITAS DESTINATIONS project arise from the detail of the work, not from the overall raison d’être of the project itself, which is part of the EC Horizon 2020 programme, Mobility for Growth sub-programme.

D1.6 represents the second edition of the Project Data Management Plan (PDMP) related to the data collected, handled and processed by the CIVITAS DESTINATIONS project in the horizontal (HZ) WPs until February 2018. According to the guidelines and indications defined in the Ethics Compliance Report (D1.1), the overall approach to data management issues adopted by the CIVITAS DESTINATIONS project is described in section 2.2.

The Project Data Management Plan is structured as follows:

* Section 2 provides the introduction to the role of the Project Data Management Plan (PDMP) in the project.
* Section 3 identifies the different typologies of data managed by the whole CIVITAS DESTINATIONS project (the data described are cumulative from the beginning of the project until M18).
* On the basis of the data typologies identified in section 3, section 4 details the specific data collected and generated by CIVITAS DESTINATIONS (the data described are cumulative from the beginning of the project until M18).
* Section 5 focuses on the Horizontal (HZ) WPs and specifies the data managed/processed and the procedures adopted (when applicable) at this level.

# Introduction

## Objectives of the CIVITAS DESTINATIONS project

The CIVITAS DESTINATIONS project implements a set of mutually reinforcing and integrated innovative mobility solutions in six medium-small urban pilot areas in order to demonstrate how to address the lack of a seamless mobility offer in tourist destinations. The overall objective of the CIVITAS DESTINATIONS project is articulated in the following operational goals:

* Development of a Sustainable Urban Mobility Plan (SUMP) for residents and tourists, focusing on the integrated planning process that forms the basis of a successful urban mobility policy (WP2);
* Development of a Sustainable Urban Logistics Plan (SULP) targeted on freight distribution processes, to be integrated into the SUMP (WP5);
* Implementation and demonstration of pilot measures to improve mobility for tourists and residents (WP3-WP7);
* Development of guidelines for the sites on stakeholder engagement (WP2-WP8);
* Development of guidelines for the sites on the definition of business models to sustain the site pilot measures and the future implementation of any other mobility actions/initiatives designed in the SUMP (WP8);
* Development of guidelines for the sites on the design, contracting and operation of ITS (WP8);
* Evaluation of results both at the project level and at site level (WP9);
* Cross-fertilization of knowledge and best practice replication, including cooperation with Chinese partners (WP10);
* Communication and Dissemination (WP11).
## Role of PDMP and LDMP in CIVITAS DESTINATIONS

The role and the positioning of the PDMP within the whole CIVITAS DESTINATIONS project (in particular with respect to the Ethics Compliance Report, D1.1) is detailed in the following:

* The PDMP specifies the project data typologies managed in CIVITAS DESTINATIONS;
* Based on the identified data typologies, the PDMP details the data which are collected, handled, accessed, and made openly available/published (where applicable). The PDMP provides the structure (template) for the entire Data Management reporting both at Horizontal (WP8, WP9, WP10) and Vertical (from WP2 to WP7) level;
* The LDMP (D1.9) describes the procedures for data management implemented at site level.

## PDMP lifecycle

The CIVITAS DESTINATIONS project includes a wide range of activities, spanning from the users’ needs analysis of the demonstration measures, including SUMP/SULP (surveys for data collection, assessment of the current mobility offer which could include the management of data coming from previous surveys and existing data sources, personal interviews/questionnaires, collection of requirements through focus groups and co-participative events, etc.), to the operation of the measures (data of users registered to use the demo services, management of images for access control, management of video surveillance images in urban areas, infomobility, bookings of mobility services, payment data/validation, data on the use of services for promotion purposes: green credits, etc.) and to data collection for ex-ante and ex-post evaluation.

Data can be grouped into some main categories, but the details vary from WP to WP (in particular the demonstration ones) and from site to site. Due to the duration of the project, the data to be managed will also evolve during the project lifetime. For the abovementioned reasons, the approach used for the delivery of the PDMP and LDMP is to restrict the data reporting to each six-month period: this will also allow the project partners, in particular Site Managers, to keep track of and control the data to be provided. This version of the PDMP covers the period of project activities until February 2018.

# Data collected and processed in CIVITAS DESTINATIONS

The CIVITAS DESTINATIONS project covers different activities (identified in section 2.1) and deals with an extended range of possible data to be considered. The term “data” can relate to different kinds/sets of information (connected to the wide range of actions taking place during the project). A specification of the “data” collected/processed in DESTINATIONS is therefore required, together with a first comprehensive classification of the different main typologies involved. In particular, data in DESTINATIONS can be divided between the two following levels:

1. Data collected by the project;
2. Data processed/produced within the project.

**Data collected** by the project can be classified in the following main categories:

* Data for SUMP-SULP elaboration (i.e. baseline, current mobility offer, needs analysis, etc.);
* Data required to set up the institutional background to support SUMP-SULP elaboration, design and operation of demo measures;
* Data for the design of mobility measures in demo WPs (i.e. baseline, current mobility offer, needs analysis, etc.);
* Data produced in the operation of demo mobility measures (i.e.
users' registration to the service, validation, transactions/payment, points for green credits, etc.);
* Data collected to carry out the ex-ante and ex-post evaluation;
* Data required to develop guidelines supporting the design/operation of demo measures;
* Data used for knowledge exchange and transferability;
* Data used for dissemination.

Data collected by the CIVITAS DESTINATIONS project are mainly related to local activities of demonstration measure design, setup and implementation. This process falls mostly under the responsibility of Site Managers. This is reflected in the production of the LDMP, to which each site provides its contribution.

**Data processed/produced** by the project are mainly:

* SUMP/SULP;
* Demonstration measures in the six pilot sites;
* Outputs coming from WP8 (business principles and scenarios, ITS contracting documents, etc.), WP9 (evaluation) and WP10 (transferability).

For these data, the data management process falls mostly under the responsibility of Horizontal WP Leaders/Task Leaders, and the procedures are described in this Deliverable. The activities which have taken place since the beginning of the CIVITAS DESTINATIONS project are the following (here the reporting is restricted to the activities of interest for the data management process):

* **WP2** – collection of information on the SUMP baseline.
* **WP3**, **WP4**, **WP6**, **WP7** – user needs analysis and design/implementation of demonstration services and measures.
* **WP5** – collection of information on the SULP baseline; user needs analysis and design of demonstration services and measures.

### • WP8

* Task 8.1 – Stakeholder mapping exercise detailing the organisations in each of the six sites which have differing levels of power and interest in the site measures. This included the collection of the names, email addresses and phone numbers of key individual contacts in these organisations. Development of guidelines on how to engage the identified stakeholders.
* Task 8.2 – Elaboration of the documents for the call for tender for subcontracting professional expertise on business model training and coaching activities to be provided to the project sites. Launch of the tender, collection of participants' offers, evaluation of the offers and awarding of the tender to META Group srl. Coordination of sub-contracting activities by ISINNOVA.
* Task 8.3 – Provision of guidelines for the design of ITS supporting demo measures, provision of guidelines for tendering/contracting ITS, provision of guidelines for ITS testing.

### • WP9

* Task 9.1 and 9.3: Identification of indicator categories for ex-ante/ex-post evaluation. Continuous coordination activity in order to support LEMs (Local Evaluation Managers) and discuss the definition of their measures' impact indicators (in accordance with the guidelines distributed in December 2016), the preparation of the local Gantt charts and the setting of the ex-ante impact evaluations. Close and continuous cooperation with the SATELLITE project.
* Task 9.2: Preparation and delivery of the draft evaluation report (delivered 4 July 2017).

### • WP10

* Participation in ITB-China 2017
* Launch of the platform of followers

# Detail of data categories

In the following, the typologies of "sensitive" data produced, handled or managed by these activities are identified. The description of the data management procedures is provided in section 5 (for Horizontal WPs) and in D1.9 (for demo WPs and site activities).
### WP2

_Task 2.2 – Task 2.3 Mobility context analysis and baseline_

Data collection/survey for SUMP elaboration:

* Census/demographic data;
* Economics data;
* Tourist flows;
* Accessibility in/out;
* O/D matrix;
* Data on network and traffic flow (speed, occupancy, incidents, etc.);
* Emissions/energy consumption;
* Pollution;
* Questionnaires on travel behaviour, attitudes, perceptions and expectations;
* On-field measuring campaign carried out during the data collection phase.

_Task 2.6 Smart metering and crowdsourcing_

Automatic data collection supporting SUMP development:

* Traffic flow;
* Passenger counting.

### WP3

_Task 3.2 User needs analysis, requirements and design_

Data collection/survey for safety problem assessment at local level and design of demo measures:

* Data about the network, cycling lanes, walking paths, intersections, crossing points, traffic lights;
* Traffic data (combined with WP2);
* Road safety statistics (number of incidents on the network, etc.) (combined with WP2);
* Emissions/energy consumption (combined with WP2);
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups;
* Statistics produced by the Traffic Management System, Traffic Supervisor or similar.

### WP4

_Task 4.2 User needs analysis, requirements and design_

Data collection/survey for extension/improvement of sharing services and design of demo measures:

* Data on sharing/ridesharing service demand;
* Data on sharing/ridesharing service offer;
* Statistics produced by the management platform of the bike sharing service already in operation (registered users, O/D trips, etc.);
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

Data collection/survey for take-up of electrical vehicles and design of demo measures:

* Data on the demand for electrical vehicles and recharge points;
* Data on the offer of electrical vehicles and recharge points;
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

_Task 4.4/Task 4.5/Task 4.6 Demonstration of demo services_

Data collection during service demonstration:

* Registered service users and related info;
* Data collected during service operation.

### WP5

_Task 5.2 Logistics context and user needs analysis for piloting services on freight logistics_

Data collection/surveys for SULP elaboration:

* Network/traffic data (combined with WP2);
* Data on shops, supply processes, logistics operators, etc.;
* Energy/emissions consumption (combined with WP2);
* On-field measuring campaign carried out during the data collection phase;
* Questionnaires/survey on the supply/retail process;
* Reports coming from stakeholder and target user focus groups.

Data collection/surveys for demo logistics services:

* Data related to the used cooking oil collection process currently adopted;
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

_Task 5.6/Task 5.7 Demonstration of demo services_

Data collection during service demonstration:

* Registered service users and related info;
* Data collected during service operation.
### WP6

_Task 6.2 User needs analysis, requirements and design_

Data collection for the design of demo measures for increasing awareness on sustainable mobility:

* Network/traffic data (combined with WP2);
* Energy/emissions consumption (combined with WP2);
* Data on mobility and tourist "green services", green labelling initiatives and promotional initiatives already under operation;
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

Data collection for the design of demo measures for mobility demand management:

* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

_Task 6.4/Task 6.5/Task 6.6 Demonstration of demo measures_

Data collection during service demonstration:

* Registered service users and related info;
* Data collected during service operation.

### WP7

_Task 7.2 User needs analysis, requirements and design_

Data collection for the design of demo measures for Public Transport services:

* Data on PT service demand;
* Data on PT service offer;
* Statistics produced by the systems already in operation (e.g. ticketing);
* Survey on users' needs and expectations;
* Reports coming from stakeholder and target user focus groups.

_Task 7.4/Task 7.5/Task 7.6 Demonstration of demo measures_

Data collection during service demonstration.

### WP8

_Task 8.1_

* Data on stakeholders:
  * Contact names of individuals working at the stakeholder organisations
  * Email addresses of the individuals
  * Phone numbers of the stakeholder organisations

_Task 8.2_

* Information provided by tender participants in their offer:
  * General information on the tender participants (contact details and address, authorized signature and subcontracting, declarations)
  * Information to prove the professional and technical capability to carry out the activities requested in the tender (description of the proposed methodology, curricula vitae of the experts)
  * Information supporting CANVAS development for relevant measures in the sites

_Task 8.3_

N/A – The data collected in this WP in the reference period are not included in the list of "sensitive" data identified in D1.1.

### WP9

_Task 9.2 – Task 9.3 – Task 9.4 Evaluation Plan, Ex-ante/Ex-post evaluation_

* Baseline (BAU): baselines are calculated in different ways, including surveys, according to the measures the baselines refer to. The data used are listed below:
  * Economic impacts (operating revenues, investment costs, operating costs)
  * Energy consumption (fuel consumption, energy resources)
  * Environmental impacts (air quality, emissions, noise)
  * Sustainable mobility (modal split, traffic level, congestion level, vehicle occupancy, parking, public transport reliability and availability, opportunity for walking, opportunity for cycling, bike/car sharing availability, road safety, personal safety, freight movements)
  * Societal impacts (user acceptance, awareness and satisfaction, physical accessibility towards transport, car availability, bike availability)
  * Health impacts.
### WP10

_Task 10.4 – Cross-fertilisation among consortium members and beyond_

* Information provided by tender participants in their offer
* Management of personal data required to register to the platform

_Task 10.5 – International cooperation in research and innovation in China_

* Data to prepare a collective brochure in Mandarin per site (as detailed below)
* Contacts (name, phone, contact address, email, …) and business cards collected from visitors to the 2017 ITB-China DESTINATIONS booth (in English, but mostly in Mandarin)

### WP11

N/A – The data collected in this WP in the reference period are not included in the list of "sensitive" data identified in D1.1.

# Data Management Plan

## WP2-WP7

The Data Management Plan for the demonstration measures (WP2-WP7) is detailed in Deliverable D1.9 – Local Data Management Plan (LDMP) – second edition (M1-M18).

## WP8

For each of the data categories identified in section 4, the following table describes the management procedures.

<table> <tr> <th> **WP8 – Task 8.1** </th> </tr> <tr> <td> **Stakeholder mapping** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 8.1.1 </td> <td> How are the data collected by sites stored? </td> <td> Data have been entered by the six cities into proforma Excel files issued by Vectos (electronic format). </td> </tr> <tr> <td> 8.1.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> <td> Information provided by the six cities is stored on the Vectos internal server in electronic format. </td> </tr> <tr> <td> 8.1.3 </td> <td> How are the data used (restricted use/public use)? Are they made publicly available? </td> <td> Email addresses and individuals' names are restricted and are only for the use of the sites when liaising with stakeholders. </td> </tr> <tr> <td> 8.1.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> Vectos; Paul Curtis is overall responsible for the collation of the data and its central storage on the Vectos server. The six site managers are responsible for the storing of their respective stakeholder data, with the following variances:

* Andreia Quintal, HF (individual names, individual email addresses stored internally by Vectos)
* Antonio Artiles Del Toro, GUAGUAS (organisation phone numbers and individual email addresses stored internally by Vectos)
* Maria Stylianou, LTC (data stored internally only)
* Alexandra Ellul, TM (individual names, individual email addresses stored internally by Vectos)
* Stavroula Tournaki, TUC (data held internally only)
* Renato Bellini, Elba (data held internally only) </td> </tr> <tr> <td> 8.1.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data are accessible to Vectos via the internal server. They are also accessible by each site partner, who provided the details, via their own servers. </td> </tr> </table>

**Table 1: Description of WP8 (Task 8.1) data management procedures (stakeholder mapping)**

<table> <tr> <th> **WP8 – Task 8.2** </th> </tr> <tr> <td> **Management of the call for tender for the selection of expert support for business development for the more relevant site measures** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 8.2.1 </td> <td> How are the data collected from tender participants stored?
</td> <td> Tender participants have sent their offers in electronic format. </td> </tr> <tr> <td> 8.2.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> <td> Information provided by the participants is stored in the ISINNOVA archive in electronic format. Details of the awarded participant (META Group srl) have also been forwarded to the ISINNOVA accounting system for the management of payment procedures. </td> </tr> <tr> <td> 8.2.3 </td> <td> How are the data used (restricted use/public use)? Are they made publicly available? </td> <td> Information is restricted and is managed in accordance with the confidentiality rules required for tender management. Information will not be made publicly available. </td> </tr> <tr> <td> 8.2.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> ISINNOVA, Ms. Loredana MARMORA </td> </tr> <tr> <td> 8.2.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data have been accessed by the ISINNOVA team involved in the tender management and awarding, and by the members of the evaluation board (3 people from ISINNOVA and 2 people from Madeira). Data related to the awarded participant (META Group srl) are also available to ISINNOVA accounting staff for payment management. </td> </tr> </table>

**Table 2: Description of WP8 (Task 8.2) data management procedures – call for tender**

## WP9

For each of the data categories identified in section 4, the following table describes the management procedures.

<table> <tr> <th> **WP9** </th> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 9.7.1 </td> <td> How are the data collected by sites for ex-ante evaluation stored? </td> <td> Ex-ante and ex-post data collected by the Local Evaluation Managers (LEMs) and Site Managers are stored in an ad hoc Excel file according to a structured data collection template. </td> </tr> <tr> <td> 9.7.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> </tr> <tr> <td> 9.7.3 </td> <td> How will the data be used? </td> <td> These data will then be transposed to the Measures Evaluation Report according to the format provided by the SATELLITE project. They will be used in an aggregated format. </td> </tr> <tr> <td> 9.7.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> ISINNOVA </td> </tr> <tr> <td> 9.7.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data are accessible by the ISINNOVA evaluation manager (Mr. Stefano Faberi) and his colleagues. </td> </tr> </table>

**Table 3: Description of WP9 data management procedures**

## WP10

For each of the data categories identified in section 4, the following table describes the management procedures.

<table> <tr> <th> **WP10** </th> </tr> <tr> <td> **Participation in ITB-China 2017** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 10.1.1 </td> <td> How are the data collected by sites stored? </td> <td> Data collected from the sites are included in a promotional brochure in Mandarin. The business cards collected by GV21 during the ITB-China trade fair (and collateral events) have been used to send a follow-up email and to identify follow-up actions that could be conducted by the sites (possibly outside the project, as no budget for ITB-China 2017 follow-up actions is allocated in the DESTINATIONS project).
No specific archive has been created to store the data from those business cards. </td> </tr> <tr> <td> 10.1.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> </tr> <tr> <td> 10.1.3 </td> <td> How will the data be used? </td> </tr> <tr> <td> 10.1.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> GV21 </td> </tr> <tr> <td> 10.1.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data are accessible by GV21 (Mrs. Julia Perez Cerezo) and her colleagues. </td> </tr> </table>

**Table 4: Description of WP10 data management procedures (ITB-China 2017 participation)**

<table> <tr> <th> **WP10** </th> </tr> <tr> <td> **Management of the call for tender for the selection of the IT provider in charge of the setup of the platform of followers** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 10.2.1 </td> <td> How are the data collected from tender participants stored? </td> <td> Tender participants have sent their offers in electronic format. </td> </tr> <tr> <td> 10.2.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> <td> Information provided by the participants is stored by the Project Dissemination Manager (PDM) and the CPMR Financial services. Data are stored in electronic format on the CPMR server. </td> </tr> <tr> <td> 10.2.3 </td> <td> How are the data used (restricted use/public use)? Are they made publicly available? </td> <td> The stored data were used to evaluate and select the successful bidder. They are available in case of an INEA audit. </td> </tr> <tr> <td> 10.2.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> CPMR </td> </tr> <tr> <td> 10.2.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data are accessible by CPMR (Mr. Panos Coroyannakis) and his colleagues. The offers have been shared with the Project PCO & PM teams via email. </td> </tr> </table>

**Table 5: Description of WP10 data management procedures (tender for platform for followers)**

<table> <tr> <th> **WP10** </th> </tr> <tr> <td> **Follower registration to the DESTINATIONS platform** </td> </tr> <tr> <td> **Data management and storing procedures** </td> </tr> <tr> <td> 10.3.1 </td> <td> How are the collected data stored? </td> <td> The data are collected and stored on the platform's administration site. The site is on the server of the platform designer INEVOL. The data will only be used to invite the followers to join the platform by sending a password to qualified followers. </td> </tr> <tr> <td> 10.3.2 </td> <td> Please detail where the data are stored and in which modality/format (if applicable) </td> </tr> <tr> <td> 10.3.3 </td> <td> How will the data be used? </td> </tr> <tr> <td> 10.3.4 </td> <td> Who is the organization responsible for data storing and management? </td> <td> CPMR and the platform designer organisation INEVOL </td> </tr> <tr> <td> 10.3.5 </td> <td> By whom (organization, responsible) are the data accessible? </td> <td> Data are accessible by CPMR personnel (Mr. Panos Coroyannakis, Mr. Stavros Kalognomos) and the platform designers INEVOL. </td> </tr> </table>

**Table 6: Description of WP10 data management procedures (operation of platform for followers)**
0346_CORBEL_654248.md
# Introduction

As a major funder of research and infrastructure, the European Commission has a clear commitment to open science. This is exemplified by the Commission's 2012 recommendation on access to and preservation of scientific information [1], recognising the societal and economic benefits that open access to publications and other research outputs can provide. These principles are subsequently reflected in its Horizon 2020 2016-2017 Work Programme (Part 16: Science & Society) [2].

As the representatives of Research Infrastructures that have been established to support the ecosystem of modern collaborative science, the CORBEL participants are fully committed to this ethos and are actively working towards the same agenda. As a large and highly distributed project, it is essential that the CORBEL consortium establish robust working practices that will maximise the value of the project's outcomes. As part of this goal, the consortium is committed to enacting effective data management policies, both to safeguard data during the project and to ensure responsible stewardship following its conclusion. This applies not only to any experimental research data generated by funded activities, but also to the tools and knowledge we expect to generate. In so doing, the consortium can maximise the utility of the project's deliverables in the communities of the participating Research Infrastructures, and thus maximise the return on the Commission's investment.

All of the Research Infrastructures participating in CORBEL take central roles in the provision of infrastructure to safeguard data for their respective scientific communities - facilitating access, re-use and reproducibility. Key to this is not only to ensure that research data is made available, but that it is published in such a way as to help researchers make use of it. These requirements are exemplified in the FORCE11 group's FAIR Principles [3,4] - that data should be Findable, Accessible, Interoperable and Re-usable. Indeed, improving the infrastructure that allows scientists to make research data conform to these guiding principles is a key aim of the CORBEL project itself, including the management of data across infrastructure boundaries. This builds upon and strengthens the data bridges laid down between the Research Infrastructures during the BioMedBridges project, whose successful conclusion has begun to tackle the challenges of cross-disciplinary data sharing and integration.

# General Approach

As a cluster of Research Infrastructures, the CORBEL consortium's approach to ensuring effective data management and stewardship relies upon the existing policies and expertise of the infrastructures. The Research Infrastructures have established data strategies appropriate to the type of data and - having the relevant expertise - are best placed to implement those strategies [5,6]. Common across these strategies is a commitment to open science, pursuing the goal of sharing the outputs of publicly funded research whilst respecting the varying ethical and legal contexts of these outputs and taking account of the technical and social challenges of data sharing. For example, much of the data managed by ELIXIR is routinely made available openly, and similarly the INSTRUCT community has a long-standing tradition of depositing data in established domain archives as a matter of course.
Euro-BioImaging is specifically seeking to encourage a similar ethos of data sharing by developing standards and technical solutions, by addressing data flow in tandem with access to the imaging infrastructure, and by identifying reference datasets. On the other hand, sharing of data and resources in BBMRI or ECRIN requires a specialised approach due to the sensitive nature of the data. The Research Infrastructures are working to improve access to human research data through technical solutions for managed access. This includes secure computational infrastructure allowing specific individuals to access data consented for research, but also exposing metadata to allow for discovery.

Within CORBEL, the handling and deposition of research outputs will not be managed centrally. Instead, responsibility is delegated to the Research Infrastructures, who will retain control over and accountability for each dataset, service, publication and software tool produced in the context of CORBEL. By adopting this strategy, CORBEL can better ensure efficient and effective management by clearly and unambiguously assigning responsibility for the project's outputs to the investigators producing them. This allows the project to leverage the significant data management expertise of the partners implementing each task.

# Data

Many of the research infrastructures participating in CORBEL, such as BBMRI, ELIXIR, ECRIN and Euro-BioImaging, fulfil key roles in managing access to scientific data or metadata, with varying approaches to data sharing as appropriate. CORBEL is thus part of the effort to build Europe's data management and data stewardship infrastructure, and this is reflected in its tasks and deliverables. In WP3, ECRIN will lead task 3.3, focussed on the sharing of clinical trial data at multiple levels, from registration of trials through to access to patient-level data. Similarly, task 3.4 seeks to enable efficient biomarker research through the necessary sharing of data between hospitals, which may individually lack sufficient patient numbers. Both are challenging areas for data sharing, but are also of crucial importance in clinical research and thus in addressing the grand challenges of improving human health.

Data management and data sharing infrastructure is itself built upon data - for example vocabularies, ontologies and schemas that are themselves datasets that must be sustainably managed. Interoperability of data is a core focus of CORBEL, and work done in WP6 will deliver standards and schemas that will address issues such as cross-domain identifier use and provenance. In addition, during the CORBEL project it is anticipated that new experimental or informatics datasets may be generated as part of the joint research work packages. For example, in WP3 (Medical), subtask 3.5.3 "Data generation and analysis" involves developing biomarker profiles for pancreatic cancer samples within BBMRI. The early stages of WP4 (Bioscience) involve the selection of a number of pilot projects through a competitive call; after proposals are evaluated, details of the data to be generated can be identified. Thus at this stage in the project it is not possible to describe precisely the datasets involved and how they will be handled. However, provision for the management and stewardship of these datasets will be made in accordance with the General Approach (see above), i.e. responsibility rests with the work package leadership and follows the procedures of the Research Infrastructures.
Work package 4 also includes deliverables focused on user access to the underlying infrastructure.

# Tools

In addition to data, software tools are potential research outputs of CORBEL that require careful management and stewardship. Indeed, provision of core infrastructure for interoperability is a key component of WP6 (Data access, management and integration). This includes good quality, reliable software services. Examples of services that will be developed in CORBEL include those for identifier management and mapping, led by ISBE; ELIXIR will develop semantic interoperability tools such as those for accessing ontologies and mapping data to ontologies.

ELIXIR and INSTRUCT are also developing policies for software sustainability. This includes collaborating with the Software Sustainability Institute on developing best practices for software development. Both infrastructures are working to establish policies or recommendations around open development and open source, which provides a powerful mechanism not only to disseminate the results of publicly funded software development but also to drive software quality. Although not part of CORBEL, these outputs will be available to project partners and can benefit the implementation of CORBEL. A strong adoption of open source already exists within the biomedical science community but, as with data, the approach of research infrastructures and CORBEL must respect the commercial and operational context of the participants.

# Knowledge

Wherever appropriate, CORBEL deliverables will be published in an open fashion and deposited in repositories such as Zenodo. Likewise, peer-reviewed articles will be published under open-access terms. The project may also result in the production of guidelines, standards and other documents, which will likewise be published openly. Partners retain responsibility for choosing the appropriate publication channel (e.g. the journal), but infrastructures can implement mechanisms to facilitate the publication process. For example, ELIXIR has established a gateway within F1000 Research [7], using its post-publication transparent review process for peer-reviewed articles, and channels for other types of publications such as reports, posters and slide sets.

# **Delivery and schedule**

The delivery is delayed: Yes. It took more time than anticipated to get an overview of the existing, actual policies from all RIs and to collect the necessary feedback.

# **Adjustments made**

N/A
0347_MarketPlace_760173.md
1. **Executive summary**

The present document is a **living document that is continuously updated** and summarizes the actions taken to create awareness about the results being generated in the MarketPlace project and to create an open research dissemination infrastructure. In view of the future sustainable operation of the MarketPlace, a list of key exploitable results (KER) has been compiled and is presented. This list of KER is on the one hand based on the MarketPlace infrastructure and its commercial exploitation. On the other hand, it considers the interests of stakeholders in potentially using the MarketPlace for their business. A detailed first business plan for a sustainable operation of the MarketPlace is subject to deliverable D 6.6 of WP6, where first rough estimates of possible revenues, costs and market size are given and a preliminary "SWOT" analysis identifying the strengths, weaknesses, opportunities and threats for a potential commercial MarketPlace operation is summarized. Based on extensive lists of software owners (SWO) and end-users available e.g. in the EMMC, the present document provides an initial list of SWO which have been personally contacted and informed in full detail about the possibilities and options of the MarketPlace.

2. **Introduction and Objectives**

The MarketPlace project is expected to generate a considerable amount of intellectual property rights (IPR), which is open to exploitation by all partners in the consortium, with the contractual basis for this exploitation being the consortium agreement (CA). Details for the exploitation of project results are tackled in subtask 6.2.1, while exploitation of third party data in an open research data management is addressed in subtask 6.2.2. From these subtasks, a Key Exploitable Results (KER) register and metadata for their preliminary assessment are compiled. The business plan itself - as a culmination of the assessment of the KER - is tackled in a separate task and its respective deliverable (D 6.6). The scope of this task is to develop, to agree and to implement a strategy for the exploitation of the combined body of IPR. This strategy eventually - together with D 6.6 - will be condensed into a report compiled by the Dissemination and Exploitation (DE) Board.

3. **Open Research Data Management**

The DE board keeps track of the generated data and advises partners on which data can be open and how to manage the curation. This task results in the present first draft of a Data Management Plan (DMP) for the results of the project. Key outcomes are a Data Management Plan for project results (M12) and eventually an inventory of openly accessible results being made publicly available (M54). To date no data has been generated, as these are expected to emanate from WP5, where the data-generating applications have not yet started.

1. **Data Management Plan**

Before any journal article, software code or data is published, the DE (Dissemination & Exploitation) board will consult all relevant partners to resolve potential IP issues. Upon mutual agreement, the DE then gives clearance for the data to be disclosed. It is expected that at least all extensions to the e-CUDS and e-CUBA as well as the APIs will be open and freely available. The same holds for all non-mission-critical data from WP5, which will become openly available as use cases and tutorials on the MarketPlace platform.
An account for the user "MarketPlace" has been created at the Open Research Data Pilot (ORDP), _https://www.openaire.eu/_ ; future publications will be registered there by the DE board after clearance by the partners.

2. **Inventory of project results**

The inventory of project results in the first place consists of documents, publications and presentations.

#### 3.2.1 Publications (until 30.6.2018)

1. Schmitz, G.J.: "Entropy and Geometric Objects"; Entropy 2018, 20(6), 453; https://doi.org/10.3390/e20060453 (spin-off publication from EMMO discussions)
2. Arpit Singhal, Adham Hashibon, "OpenFoam Interactive (OFI) an interface to control solvers in OpenFOAM", Proceedings of Particles 2019.

#### 3.2.2 Publications (under preparation)

1. Arpit Singhal, Yoav Nahshon, Pablo De Andres, A. Hashibon. "A Universal Open Simulation Platform for Materials Modelling and Data Management".
2. A. Hashibon. "Interoperability and practical ontology for materials modelling and data spaces".
3. Heinz Presig et al. "From ontology to workflows" (tentative title)
4. Jesper Friis, Pablo de Andres, A. Hashibon. "EMMO CUDS and OWL" (tentative title).
5. L. Koschmieder, S. Hojda, M. Apel, R. Altenfeld, Y. Bami, H. Farivar, C. Haase, A. Vuppala, G. Hirt, G.J. Schmitz: "AixViPMaP® - an operational platform for microstructure modelling workflows", submitted to: Integrating Materials and Manufacturing Innovation;
6. G. J. Schmitz: „Computerbasierte Materialentwicklung damals, heute und morgen - Von der Werkstoffkunde zur Materialwissenschaft", RWTH Themen, Ausgabe November 2018 (for the general public)

#### 3.2.3 Presentations of MarketPlace activities (until 30.06.2018)

Only presentations comprising MarketPlace-related content and given at events with participants beyond the EMMC-CSA and MarketPlace partners are listed in the following:

<table> <tr> <th> Date </th> <th> Place </th> <th> by </th> <th> Event </th> <th> Audience # /type </th> <th> Remarks/ download from </th> </tr>
<tr> <td> 17.-21.6.19 </td> <td> Quebec/Canada </td> <td> G.Goldbeck & G.J.Schmitz </td> <td> NAFEMS World meeting </td> <td> </td> <td> </td> </tr>
<tr> <td> 12.-14.06.2019 </td> <td> Bucharest </td> <td> A.Singhal </td> <td> EuroNanoForum 2019 </td> <td> 50-60 </td> <td> </td> </tr>
<tr> <td> 14.-15.5.2019 </td> <td> Freiburg </td> <td> A. Hashibon </td> <td> Material Digital </td> <td> 60 (scientific, industry) </td> <td> A presentation about the MarketPlace and the DSMS system. </td> </tr>
<tr> <td> 6.-7.5.2019 </td> <td> Lausanne </td> <td> G. Goldbeck </td> <td> EMMC Expert Group Meeting: Business Models of MarketPlaces </td> <td> 15 </td> <td> Joint EMMC-MarketPlace-VIMMP talk: Marketplace Personas and Market Segments </td> </tr>
<tr> <td> 6.-7.5.2019 </td> <td> Lausanne </td> <td> G. J. Schmitz </td> <td> EMMC Expert Group Meeting: Business Models of MarketPlaces </td> <td> 15 </td> <td> Business models </td> </tr>
<tr> <td> 6.-7.5.2019 </td> <td> Lausanne </td> <td> A. Hashibon </td> <td> EMMC Expert Group Meeting: Business Models of MarketPlaces </td> <td> 15 </td> <td> General presentation on the goals of the MarketPlace </td> </tr>
<tr> <td> 8.4.-13.4.19 </td> <td> Korea </td> <td> G.J.Schmitz </td> <td> along with ACCESS specific product presentations </td> <td> several companies </td> <td> </td> </tr>
<tr> <td> 26.2.-28.2.19 </td> <td> Vienna </td> <td> Arpit Singhal, Adham Hashibon </td> <td> EMMC International Workshop </td> <td> expected 150 (invited specialists) </td> <td> Poster </td> </tr>
<tr> <td> 26.2.-28.2.19 </td> <td> Vienna </td> <td> multiple presenters </td> <td> EMMC International Workshop </td> <td> expected 150 (invited specialists) </td> <td> </td> </tr>
<tr> <td> 23.11.18 </td> <td> Pune/India </td> <td> G.J.Schmitz </td> <td> company seminar at Bharat Forge </td> <td> 35 (scientific/engineering) </td> <td> </td> </tr>
<tr> <td> 6.-7.11.18 </td> <td> Freiburg </td> <td> Konstandin </td> <td> IntOP workshop </td> <td> 80 (science and engineering) </td> <td> </td> </tr>
<tr> <td> 6.-7.11.18 </td> <td> Freiburg </td> <td> G.J.Schmitz </td> <td> IntOP workshop </td> <td> 80 (science and engineering) </td> <td> </td> </tr>
<tr> <td> 1.-7.11.18 </td> <td> Freiburg </td> <td> A.Hashibon </td> <td> IntOP workshop </td> <td> 80 </td> <td> </td> </tr>
<tr> <td> 1.-7.11.18 </td> <td> Freiburg </td> <td> A.Singhal, A.Hashibon </td> <td> IntOP workshop </td> <td> 80 </td> <td> Poster presentation </td> </tr>
<tr> <td> 25.-26.10.18 </td> <td> Paris/France </td> <td> G.J.Schmitz </td> <td> SWO workshop </td> <td> 15 (SWO) </td> <td> entire workshop was discussions only </td> </tr>
<tr> <td> 17.-18.9.18 </td> <td> Cambridge/UK </td> <td> G.J.Schmitz </td> <td> EMMO Workshop </td> <td> 20 (scientific/engineering) </td> <td> </td> </tr>
<tr> <td> 6.-7.9.18 </td> <td> Aachen </td> <td> G.J.Schmitz </td> <td> ThermoCalc user meeting/Germany </td> <td> 25 (scientific/engineering) </td> <td> </td> </tr>
<tr> <td> 28.8.18 </td> <td> Aachen </td> <td> G.J.Schmitz </td> <td> German-Chinese student seminar </td> <td> 50 (students/PhD candidates) </td> <td> </td> </tr>
<tr> <td> 4.7.18 </td> <td> Cambridge/UK </td> <td> G.J.Schmitz </td> <td> SWO workshop </td> <td> 15 (SWO) </td> <td> </td> </tr>
<tr> <td> 29.6.18 </td> <td> Brussels </td> <td> G.J.Schmitz </td> <td> Ontology workshop </td> <td> 70 (scientific/engineering) </td> <td> </td> </tr>
<tr> <td> 14.-15.6.18 </td> <td> Solna/Sweden </td> <td> G.J.Schmitz </td> <td> Thermo-Calc user meeting </td> <td> 50 (scientific/engineering) </td> <td> no dedicated presentation, but contributions to discussions </td> </tr>
<tr> <td> 11.-13.6.18 </td> <td> Uppsala/Sweden </td> <td> G.J.Schmitz </td> <td> Uppsala </td> <td> 80 (scientific/engineering) </td> <td> </td> </tr>
<tr> <td> 25.-27.4.18 </td> <td> Technion Haifa/Israel </td> <td> G.J.Schmitz </td> <td> Umbrella Symposium </td> <td> 40 (scientific) </td> <td> </td> </tr>
<tr> <td> 12.3.-16.3.18 </td> <td> Phoenix/USA </td> <td> G.J.Schmitz </td> <td> TMS/ICME committee meeting </td> <td> 60 (scientific/engineering) </td> <td> no dedicated presentation, but contributions to discussions; interviews with SWO </td> </tr>
<tr> <td> 19.-22.11.18 </td> <td> Trivandrum/India </td> <td> G.J.Schmitz </td> <td> ICSSP 7 </td> <td> 100 (scientific) </td> <td> </td> </tr>
</table>
#### 3.2.4 Presentations (scheduled for 2019 and beyond)

<table> <tr> <th> Date </th> <th> Place </th> <th> by </th> <th> Event </th> <th> Audience # /type </th> <th> Remarks </th> </tr>
<tr> <td> 21.-26.7.19 </td> <td> Indianapolis/USA </td> <td> G.J.Schmitz </td> <td> 5th ICME World Congress </td> <td> expected 200 (scientists/engineers) </td> <td> </td> </tr>
<tr> <td> 15.-19.07.2019 </td> <td> Valencia </td> <td> A.Singhal </td> <td> ICIAM 2019 </td> <td> expected 100 </td> <td> </td> </tr>
</table>

#### 3.2.5 Inventory of MarketPlace owned software codes / data

The software being developed is available in a project folder on GitLab called "MarketPlace", to which every project partner has access. The GitLab is complemented by https://mattermost.cc-asp.fraunhofer.de/ - a free Slack-type communication channel hosted by Fraunhofer for EU-MarketPlace, facilitating discussions on software in the project. The MarketPlace project folder on GitLab currently holds 21 software code projects, most under heavy development. These current repositories (code projects) include:

1. _MarketPlace / EMMO_
2. _MarketPlace / ontology-MarketPlace_
3. _MarketPlace / UserCases-Apps_
4. _MarketPlace / simlammps_
5. _MarketPlace / wrappers_sdk_
6. _MarketPlace / Web_ **(the platform)**
7. _MarketPlace / file-io_
8. _MarketPlace / simopenfoam_
9. _MarketPlace / osp-core_
10. _MarketPlace / osp-rest-api_
11. _MarketPlace / ExpertAPP_
12. _MarketPlace / unittest_
13. _MarketPlace / ontology-tools_
14. _MarketPlace / h5cuds_
15. _MarketPlace / MODA_
16. _MarketPlace / OF_
17. _MarketPlace / Knowledge_
18. _MarketPlace / MRDB_ **(materials relations DB)**
19. _MarketPlace / simliggghts_

A first example of a code entering into practical use is the EMMO-CUDS data containers, based on a Python API for EMMO and the core for OSP-CORE. Additional interfaces are the Ontology Tools for representing EMMO classes in Python based on Owlready2, allowing for mapping to and from EMMO and from EMMO to CUDS. These are made available for MarketPlace together with the current owl sources for EMMO at gitlab.cc-asp.fraunhofer.de:MarketPlace/ontology.git, where the Python API itself can be found in the python/ folder in the "rooted_relations" branch. A minimal usage sketch of these Owlready2-based tools is given at the end of this subsection.

Currently all projects have an MIT business-friendly license; however, they are not publicly available. As the development proceeds, the licensing of each of the components will be evaluated with respect to exploitation plans. Ideally all codes are open and then transferred to the entity running the MarketPlace as per the conditions laid out in the GA and CA. Starting from M20, the DE and all involved partners will update the status of each code they produce or contribute to with respect to potential exploitation within and outside the project.
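The sketch below loads a local copy of the EMMO owl sources with Owlready2 and inspects the resulting classes. It is a minimal illustration only: the file path and the queried label are assumptions, not a prescription of the actual repository layout.

```python
from owlready2 import get_ontology

# Load a local copy of the EMMO owl sources (the path is illustrative; the
# actual sources live in the MarketPlace/ontology GitLab repository).
emmo = get_ontology("file:///path/to/emmo.owl").load()

# Enumerate the classes defined in the ontology together with their labels.
for cls in emmo.classes():
    print(cls.iri, list(cls.label))

# Retrieve a single class by its annotated label (the label is illustrative)
# and print its direct superclasses.
matter = emmo.search_one(label="Matter")
if matter is not None:
    print(matter.iri, matter.is_a)
```

Mapping such Owlready2 class objects to and from CUDS objects is precisely what the ontology-tools repository listed above targets.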
#### 3.2.6 Inventory of open accessible project results

The few results that are by now available, openly accessible and documented are indicated in the above sections, including their access details such as their DOI or download links. These openly accessible results are complemented by results being disclosed in a bilateral way. Such results especially relate to ontologies (owl files) developed together with the VIMMP project, like the European Virtual Market Place Ontology (EVMPO), or sub-ontologies being developed by the MarketPlace consortium and shared with the VIMMP consortium (production_processes.owl, material.owl, models.owl, software.owl, …). For details of these ontologies see Deliverable 3.1 on "Software Solutions Service (First Version)".

4. **Handling of project outcomes**

**4.1 IPR exploitation strategy**

The IPR exploitation strategy (M18 first draft, M48 final) is based on the assumption that the MarketPlace will start from scratch, meaning the MarketPlace a priori does not own any resources or tradable objects. In principle it thus cannot provide/offer any products/services, but can only broker existing products/services. A minimum requirement of resources for operating a MarketPlace is a minimum of hardware hosting a MarketPlace SW architecture that allows at least steering the brokering process. What can then be brokered is:

* Hardware ("Hardware as a Service"). This is not really the aim but may perhaps become part of the game. Other players are around or emerging here, like e.g. the European Grid Infrastructure (EGI).
* Software (agent-type distribution of temporary licenses via the MarketPlace). The software is eventually run by the end users on their own computers. Strategies like this are followed e.g. in the Fortissimo project. In a typical setting, finder's fees of about 10% of the sales price can be expected as income for the MarketPlace.
* "Software as a Service" (SaaS), meaning to temporarily run third party software on MarketPlace-owned hardware or alternatively on brokered hardware. This requires extensive negotiations with the owners of the software and the owners of the brokered hardware, as well as an accounting system. Fees for the software use have to be agreed and might lead to revenue of 5-10% of the costs charged by the SWO, in case such a type of usage is covered by their portfolio at all.
* "Data as a Service". This will need strong negotiations in the case of commercially available datasets. The providers of such datasets in general do not see any business model here, as the data - once released - can hardly be sold again. Academic data could be brokered, but the question is how revenues can be generated for such basically free and open data.
* Simulation "Workflow manager": a tool allowing the orchestration of a configurable variety of other apps. "Apps" here are software in combination with the hardware they are executed on, in a SaaS-type approach. Besides the revenues for the individual apps, revenues could be generated by charging for the use of the Workflow manager and especially for its configuration.

The MarketPlace SW architecture accordingly must at least allow for:

* Communication between distributed resources/apps and the MarketPlace backend
* Orchestrating workflows
* Communication between end-users and the MarketPlace (FrontEnd)
* Monitoring of the brokerage procedure: which resources (hardware/software) were used by a specific customer? For which time? etc.
* Authentication of users and their rights to use specific apps
* Billing end users and paying HW/SW providers for the brokered services

An alternative approach may be an architecture where "Applications" (Apps) register with the MarketPlace. Once registered with the MarketPlace, such Applications can be used in a Workflow. An Application here is any kind of software that can communicate with the MarketPlace through a REST API; therefore, the Application can be run on MarketPlace hardware, on Provider hardware or on cloud services. A purely illustrative sketch of what such a registration call could look like is given below.
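Since the registration API is not yet specified, every name in this sketch - the endpoint URL, the payload fields and the credential - is an assumption made for illustration only, not the project's actual interface:

```python
import requests

# Hypothetical endpoint: the actual MarketPlace registration API is not yet
# fixed, so the URL, payload fields and token below are all assumptions.
MARKETPLACE_API = "https://marketplace.example/api/applications"

app_manifest = {
    "name": "my-cfd-solver",
    "description": "OpenFOAM-based solver exposed as a MarketPlace App",
    "callback_url": "https://provider.example/cfd",  # where the App is actually hosted
    "capabilities": ["simulation"],                  # e.g. simulation, data, training
    "pricing": {"model": "pay-per-use", "currency": "EUR"},
}

# Register the Application with the MarketPlace backend.
response = requests.post(
    MARKETPLACE_API,
    json=app_manifest,
    headers={"Authorization": "Bearer <provider-token>"},  # placeholder credential
    timeout=30,
)
response.raise_for_status()
print("Registered application id:", response.json().get("id"))
```

The key design point such an approach illustrates is that the MarketPlace only needs to know how to reach and describe an Application, not where it physically runs.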
This means that the MarketPlace can offer the following services to Providers (of software and/or hardware), for example:

* Building the required wrappers enabling SWO to build Applications out of their existing software or database services that they would like to offer through the MarketPlace;
* Brokering hardware (preferably cloud services, as these significantly reduce investment/maintenance needs and risks for the MarketPlace) for Providers to run their Applications on;
* Providing competence on how to best define, use and exploit metadata in a semantic interoperability setting, such as the workflow, so that a Provider's software or data can be used in Workflows: "Bringing your model to the MarketPlace!"

The present MarketPlace architecture does not distinguish between software and data. Both can be sold as Products if contained in an Application that is (i) registered at the MarketPlace and (ii) connected to the MarketPlace by a Provider.

5. **Key exploitable results**

Besides the publications and possible patents described in the section on open research data management, the following list of exploitable results - essentially based on software and data structures resp. ontologies - can be expected from the activities within the MarketPlace project:

1. **Hardware and computational grid**

The hardware initially procured in the frame of the MarketPlace project forms the basis for any activity of a future MarketPlace consortium. In case this hardware should not be accessible after termination of the project, measures have to be taken to allow at least for a minimum of operations. The minimum exploitation of this hardware is its use:

* as host for the MarketPlace website
* for all contractual matters
* for customer relations
* as repository for information catalogues like
  * event calendar
  * consultants and translators directory
  * trainer directory
  * software directory
  * expertise directory
  * software solutions directory

The next exploitation level is its use:

* as "Hardware as a Service" to run individual third party simulations
* as a central server managing and distributing jobs on a grid of distributed computational resources to be established
* as a central unit to store and retrieve data
* to run third party software as demos in an SW-agent type of operation
* to run third party software as simulations in a SaaS type of operation
* to test, configure and run simulation workflows (if these are not run on a grid)

2. **Web-Front-End**

The central element of the MarketPlace is its web front end and especially the "Entry Portal". Besides its basic functionalities, like the MarketPlace corporate ID, navigating the website, management of user login etc., the "Entry Portal" especially offers space for _**commercial exploitation by advertisements**_. Such advertisements may be classified into different value propositions depending e.g.:

* on specially featured content
* on size of advertisements
* on duration of display
* on frequency of display
* on graphical elements
* on possible animations
* on possible video clips
* on provision of hyperlinks
* (further options would be sound, 3D animations etc.)

3. **Scientific Networks**

A major scope and benefit of the MarketPlace is to serve as a one-stop shop for the materials modelling community.
To cope with this role, a number of services have to be provided making the MarketPlace sufficiently attractive. Most important in this context seem to be directories and calendars. Most of this information is considered as "nice to have" by the potential MarketPlace user.

#### 5.3.1 Event Calendar

The event calendar comprises a searchable list of future events of interest to the materials modelling community. It will especially compile data on conferences and workshops. Although such a calendar can be operated in a quite automated way, allowing external users to add their events, there is still a need for curating and checking the entries (avoid spam, keep ethics). Operation of such a calendar can be expected to be a source of costs rather than of revenues.

#### 5.3.2 Directory of Consultants/Translators

The business model for establishing, maintaining and commercialising a directory of trainers/consultants is still to be defined. Brokering such services requires a contractual network of numerous agreements between the consultants/trainers and the MarketPlace entity, as well as standardized contracts between the MarketPlace entity and the end-users. Issues to be solved especially relate to the liability of the brokering MarketPlace entity for the results offered by the consultant. Possible revenues could be:

* a fee paid by the consultant for being listed
* a royalty (percentage) for each contract mediated between a consultant and an end-user
* a fee paid by the end user for the identification of a suitable consultant

**5.3.3 Directory of Trainers**

The business model would be similar to the one above for consultants.

#### 5.3.4 Job opportunities

Maintaining a list of "job opportunities" is similar to maintaining an event calendar, and there is also a need for curating and checking the entries (avoid spam, keep ethics). Operation of such a list can thus also be expected to be a source of costs rather than a source of revenues.

**5.4 Software Catalogue**

The basic, searchable catalogue of modelling software is one of the key services offered by the MarketPlace. The respective part of the website may be commercially exploited by:

* charging the users a basic fee for the use of all basic MarketPlace services
* charging software providers for registering their software at all

Both of these options would however create barriers to the widespread use of the MarketPlace in general and thus should only be considered if really necessary. An option to still commercially exploit the Software Catalogue could however be a gradual increase in the result display characteristics, as follows:

* Name of software, website address (not hyperlinked), short description (free), user rating

Additional information (to be charged with increasing content):

* website address (hyperlinked)
* logo of the software
* contact details of the vendor (name, phone, e-mail, hyperlinked)
* extended description
* further information (pictures)
* further information (animations)
* further information (videos)
* …

A typical example of such an advertisement strategy is the "TMS Marketplace", see http://tmsmarketplace.com/. Typical costs for such a listing (according to own experience) are around 500 to max. 1000 $ per annum. The direct impact on sales is moderate only. Customers placing advertisements have to be regularly contacted and convinced to continue placing their ads. These customer relations and administration consume a non-negligible share of the income. An illustrative sketch of how such tiered catalogue entries could be represented is given below.
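The following data structure is a purely hypothetical model of such a listing, with the free tier mandatory and the charged extras optional; none of the field names are prescribed by the project:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CatalogueEntry:
    # Free (basic) tier: always displayed.
    name: str
    website: str                      # shown without hyperlink in the free tier
    short_description: str
    user_rating: Optional[float] = None
    # Paid tiers: populated only for charged listings.
    hyperlinked: bool = False
    logo_url: Optional[str] = None
    vendor_contact: Optional[str] = None
    extended_description: Optional[str] = None
    media: List[str] = field(default_factory=list)   # pictures, animations, videos

# A free-tier listing only fills the mandatory fields.
entry = CatalogueEntry(
    name="ExampleSolver",
    website="www.example-solver.example",
    short_description="Continuum-scale materials modelling code",
    user_rating=4.2,
)
print(entry)
```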
**5.5 Software as a Service**

Software as a Service, or SaaS, is a software maintenance and distribution model in which a service provider (the MarketPlace in our case) hosts applications and makes them available through a certain access model to customers (end users) over the Internet. SaaS is one of four main paradigms of cloud computing, alongside infrastructure as a service (IaaS), platform as a service (PaaS) and hardware as a service (HaaS). SaaS available on the MarketPlace can be summed up under the following categories.

#### 5.5.1 Third party Software

This section addresses individual third-party software codes. The combination of tools in workflows is handled in the section on workflows. To provide third party software, a number of contractual and technical issues have to be tackled. Following the establishment of a contract between the third party SWO and the MarketPlace entity, the commercial exploitation of individual codes may proceed by:

* an initial installation charge
* a handling fee for regular maintenance updates
* a commission per catalysed and paid use
* a commission for short term license sales (see also software sales)
* fees for use on MarketPlace hardware
* fees for using the MarketPlace accounting & billing system (may be part of the commission)

As an alternative to installing the S/W itself on the MarketPlace infrastructure, the registration of Apps being downloadable and executable on/from third party resources is an interesting option. The revenue for such a type of use would be lower and hardly controllable. It could be generated by providing …

#### 5.5.2 MarketPlace owned Software

The major software tools owned by the MarketPlace are tools related to:

* the website including its search services
* the user management
* the workflow management
* the data handling and retrieval
* the management of hardware resources
* inventories of software/events/experts etc. (the knowledge Apps)
* the OSP-CORE, semantic services and EMMO-CUDS

According to the grant agreement, the core software tools developed within the MarketPlace and needed for its operation are open and freely distributed. Thus, there is no way of commercial exploitation by sales of this software. As is often the case for open source software codes (e.g. the Linux operating system), there remains however some space for commercial activities in the areas of:

* support for installation
* support for operation
* extended documentation
* fees for their use on the MarketPlace infrastructure
* customisation of apps on top of the MarketPlace framework
* customised entire MarketPlace installations on site (private cloud computing)

## 5.6 Software Sales

Commercial exploitation may further proceed by sales of software licenses, with the MarketPlace acting similarly to an agent for the SWO. This will need a bilateral agreement between the MarketPlace and the SWO. Exploitation then may proceed via:

* a finder's fee for the identification of new customers (with further handling of the sale by the SWO in this case)
* a commission for handling the sale as an agent for the SWO

This kind of business requires strong knowledge of the individual tools and correspondingly skilled manpower.

## 5.7 Scientific Data

The MarketPlace will result in a bulk of scientific datasets, mainly from the WP5 work on demonstrators. To date no data has been produced, as most WP5 activities have not yet started. However, there may be options to exploit the bulk of data generated by MarketPlace partners as well as by third parties using the MarketPlace to generate valuable scientific data.
## 5.7 Scientific Data

The MarketPlace will produce a bulk of scientific datasets, mainly from the WP5 work on demonstrators. To date no data has been produced, as most WP5 activities have not yet started. However, there may be options to exploit the bulk of data generated by MarketPlace partners, as well as by third parties using the MarketPlace to generate valuable scientific data. This can be utilised in various ways:

1. Users declare their data to be open. This allows the MarketPlace to provide search functionality and, furthermore, integration of the data via APIs into third party applications. This can of course be offered for a fee (the data is free, but access to it through dedicated interfaces is not).
2. Users may declare their data to be closed but open for exploitation. This is similar to option 1, but the MarketPlace will share the fee with the users (owners) of the data.

Other options are shown below with respect to the type of access, public or commercial:

### 5.7.1 Open Data

Open data can be exploited in multiple ways. The MarketPlace data services may harvest such open data, whether generated on the MarketPlace or on third party repositories linked with the MarketPlace. The value proposition of the MarketPlace is the tools and interfaces that allow streamlined, rapid, semantic data management. Hence, while the data is open and may be free, the access is the main product.

### 5.7.2 Commercial datasets

Commercial data sets are completely proprietary and require licenses to access them. These licenses are often complex and require specific bilateral legal agreements between the owner and the user of the data. For the MarketPlace it may be most beneficial to allow commercial owners of datasets to host their own access application as SaaS, whereby they handle access and licensing themselves. The MarketPlace model here is one of the following:

1. Charge a fee from the owners for hosting their applications, e.g. per traffic used (which is proportional to sales of data)
2. Charge a flat rate for the use of the MarketPlace
3. Charge a fee per user or contract

An important aspect here is that the MarketPlace must guarantee that no data leak is possible, e.g. by allowing the commercial users to host their own data servers that are discoverable on the MarketPlace but otherwise provide direct access (using the MarketPlace API to enable interoperability). Another avenue for the MarketPlace is, as with public data, to offer interoperable semantic interfaces to the data.

## 5.8 Workflows

The MarketPlace will establish simulation workflows for the integrative simulation of processes for different products and materials in the use-case work packages (WP5). Each of these configured workflows is a priori specific and useful only for that particular situation. The partners in the different use cases will surely further exploit their workflows for their own purposes after the end of the project. Further commercial exploitation of these use-case workflows is not straightforward. Their reconfiguration to other materials, other processes and other products is a non-trivial task. This reconfiguration requires profound expertise and will most probably be carried out by experts/consultants/translators only. For these stakeholders, pre-configured and validated workflow templates might however be highly interesting as a starting point for tackling similar challenges. Thus, a specialist market for the sale of simulation workflows may be expected, especially if the number and variety of existing workflows increases. The commercial exploitation of workflows will thus best proceed on a case-by-case basis. An illustrative sketch of such a workflow template is given below.
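As a purely illustrative aid, a pre-configured workflow template of the kind discussed above could be captured as a small, serialisable description of the simulation chain. The model names below (a DFT-to-phase-field coupling, echoing the user quote in Section 5.10) and all field names are hypothetical, not a project specification.

```python
# Illustrative representation of a reusable, pre-configured workflow template.
# A validated template like this could be sold or licensed as a starting point
# for reconfiguration to new materials/processes by experts or translators.
WORKFLOW_TEMPLATE = {
    "name": "electronic-to-microstructure chain",
    "steps": [
        {"model": "DFT code", "produces": "thermodynamic data"},
        {"model": "phase-field code",
         "consumes": "thermodynamic data",
         "produces": "microstructure"},
    ],
    "validated_for": "original WP5 use-case material and process",
    "reconfiguration_notes": "non-trivial; expert/translator support expected",
}

for step in WORKFLOW_TEMPLATE["steps"]:
    print(step["model"], "->", step["produces"])
```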
## 5.9 Human Capital / Skills / Expertise

"Today high effort for contract set-up and agreements on requirements"

In the current landscape of materials development, industry struggles not only to find the right contacts (the experts) but usually also needs to tackle many contractual agreements, including NDAs. The MarketPlace, through the operating entity, can offer a value proposition by means of sets of pre-negotiated agreements with major industries and consultants. These agreements can then form a basis on which the effort for contract set-up and agreements on requirements can be substantially reduced. A prerequisite for this is that the legal entity has proper legal advice.

"Today high effort to research information, purchase and installation, often inadequately high when only tested within a study"

Numerous SMEs lack the expertise or experts, as well as the software and hardware resources, to actually conduct materials modelling. The MarketPlace by definition provides these resources. In particular, it should:

1. Provide pay-per-"use" models for software (commercial and proprietary) for a fee. This requires agreements with SWOs.
2. Host software and provide hardware as a service for end users, including support for updates, upgrades, maintenance, and help on demand.

## 5.10 Potential sources of revenues

In addition to the EMMC survey, MarketPlace conducted another survey and a study on the types of services needed and the potential for revenue. Exemplarily, the view of an industrial prospective platform user answering an EMMC survey is documented below. Solutions to the posed questions may become unique selling points for the MarketPlace:

\- Cite -

1\. Three biggest problems:

* Manage complex simulation workflows (Today in PFPs: cross-institutional computations w/ manual data transfer create high effort for performing linked/coupled workflows.)
* Access to computational resources when not available internally (Today high effort for contract set-up and agreements on requirements)
* Access to new software (Today high effort to research information, purchase and installation, often inadequately high when only tested within a study)

2\. Key difference to current practice for the three biggest problems:

* Platform manages simulation processes / chains by offering pre-configured workflows and / or the possibility for user-defined workflows (shared w/ other users if specified).
* Platform has contracted cloud resources / computational clusters that can be used on demand (pay per use, highly welcome solution, optional), or platform provides links to cluster providers for individual assignment
* Platform provides overviews / introductions to simulation software for material topics, provides links to vendors (mandatory) and / or assists with / offers purchase (optional).

3\. --

4\. Security requirements: No expert on this field, but mandatory: encrypted protocols (https), no network traffic tracing. Data privacy: "Private data" (according to CMU classification) as mandatory, "Public data" when specified by the data provider (e.g. via retrieval during uploading)

5\. For everyday use, the platform should provide pre-configured workflows for linking / coupling of software that is often used in a simulation chain, e.g. DFT + phase field, microscopic + continuum simulation, data analytics + data-based prediction methods etc.

6\. Further, it should provide overviews/introductions to simulation software for self-study in addition to training services. Third, an overview of computer clusters in the EU which can be used as external capacity for academic and commercial computations and/or cloud computing would be welcome. For software and computing resources, a link to the provider for the further process would be mandatory, handling of the assignment / purchase over the MarketPlace platform highly welcome (i.e. optional).

7\. I would pay for training services individually.
For translation services, I am open to pay per use or to pay a flat rate (e.g. via an annual MarketPlace utilization fee).

Summarizing the above, I personally would be open to invest in the MarketPlace, since I would profit from it in the sense of accessibility of data / services if all mandatory requirements stated are fulfilled. If even the optional requirements could be fulfilled, the benefit would be even stronger.

\- End cite -

This example clearly indicates a demand for MarketPlace-type services and also the willingness of prospective users to pay for such services. However, at the same time, it also indicates the necessity of the very wide range of infrastructure which has to be generated, operated and maintained by the MarketPlace company. The current "high effort for contract set-up and agreements on requirements" and the "effort to research information, purchase and installation, often inadequately high when only tested within a study" to set up a simulation environment just for a single software code, as mentioned by the prospective user, will be shifted to the MarketPlace entity and has to be multiplied by the number of software codes to be installed at the MarketPlace. _This number has to be sufficiently high_ in order to have a high probability of matching the potential user demand for just the one single code a user has in mind.

A list of potential sources of revenues originating from different stakeholders is exemplarily compiled below. The quantification of these revenues in terms of estimated turnover and market size is the subject of Deliverable 6.6.

* Revenues from users of the MarketPlace (use of the general infrastructure):
  - one-time registration fees
  - annual subscription fees
  - flat rate subscriptions
* Revenues from users of the MarketPlace for _specific_ services (translation/consultancy/SaaS/HaaS/DaaS/PaaS):
  - refund of hardware cost (no net revenue)
  - refund of software cost (no net revenue)
  - commissions on hardware use
  - fees for use of MarketPlace-owned hardware
  - commissions on software use
  - fees for use of MarketPlace-owned software
  - commissions on database use
  - fees for use of MarketPlace-owned databases
  - handling fees
* Revenues from public bodies:
  - institutional support for providing public (free) information services
* Revenues from SWOs offering their software (both as software and/or as a service):
  - commissions for brokered sales
  - commissions for brokered SaaS
  - annual base membership fees
  - handling fees
  - advertisements (SW companies)
* Revenues from experts offering their services (translation/training/consultancy):
  - commissions for contact matchmaking
  - annual base fees for being listed as an expert
  - advertisements (e.g. consultancy companies)
  - handling fees

This preliminary list of potential sources of revenues will be regularly updated in future versions of this document.

## 5.11 Sources of Costs

Sources of costs are detailed in Deliverable 6.6 and a related Excel file.

## 5.12 Estimated Market Size

Estimates for the market size are given in Deliverable 6.6. A recent study conducted by Goldbeck Consulting focused on a cloud-based IaaS market size estimate. It found that a relevant reference for cloud computing market size estimates for materials modelling marketplaces is R&D spending in the relevant industries, in particular chemicals and materials as well as aerospace and defence (plus a few others):

* Chemicals and materials global R&D: $40-50bn; aerospace and defence: about $30bn; industrial energy: about $20bn. In comparison, life science: about $180bn.
* Relevant "materials science R&D" is thus about $90bn, i.e. about half of life science R&D.
* The global cloud computing IaaS in life science market was valued at USD 946.1 million in 2017.

The first-level estimate would be that the materials science cloud computing market is about half that of life sciences, i.e. about $500m. Hence the Total Accessible Market (TAM) for marketplaces would be a 5% brokerage fee of that, i.e. about $25m. Assuming that in the first place we only deal with Europe, the serviceable addressable market is perhaps a quarter of that, or about $8m. If we base the figure on servicing primarily chemicals and materials R&D, the TAM is €4m. A realistic market share is 5%, hence: €200,000\. This rough arithmetic is reproduced in the sketch below.
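The back-of-the-envelope calculation above is reproduced here as a short Python sketch; all figures are the order-of-magnitude estimates quoted in this section, not validated market data.

```python
# Reproduces the rough market-size arithmetic of Section 5.12.
life_science_cloud_iaas = 946.1e6                        # USD, 2017 market value
materials_cloud_market = 0.5 * life_science_cloud_iaas   # "about $500m"
tam_global = 0.05 * materials_cloud_market               # 5% brokerage -> "about $25m"
sam_europe = tam_global / 4                              # Europe only -> quoted as "about $8m"
tam_chem_materials = 4e6                                 # EUR, chemicals & materials focus
realistic_annual_revenue = 0.05 * tam_chem_materials     # 5% share -> EUR 200,000
print(round(realistic_annual_revenue))                   # 200000
```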
## 5.13 Summarizing analysis and assessment

A qualitative comparison of the exemplary demands of a prospective user with the effort necessary to fulfil these demands already indicates the need for a sufficiently large market, and also for a respective share of this market for the MarketPlace entity. This is essentially due to the fact that a reasonably large amount of infrastructure, especially in terms of a high number of software codes, must be made available and accordingly be installed at, and maintained by, the MarketPlace entity. Besides technical efforts, this also includes a substantial administrative and contractual effort for establishing and maintaining business relations/contracts with numerous software providers.

The situation might perhaps be compared with a library providing access to a host of books, as compared to those on the shelf at home. Such libraries have in the past been publicly installed and maintained by cities and universities. They provided free, or minimum-charge, access to expensive books and a wealth of information. Software, and especially modelling software, is the future source of information. In contrast to former books, it can be re-used and modified. It integrates all types of experience and validation efforts and thus, beyond being a source of information, becomes a source of knowledge.

In spite of an obvious demand for a MarketPlace and its benefits, a sustainable private/commercial operation of such an entity might turn out to be hardly feasible. In this case, the establishment of publicly funded, e.g. European or national, "software farms/software hubs", similar to the European experimental CERN or ESRF facilities but probably less costly, should be considered.

## 5.14 Pretest Academic Survey

* Aim of the study:
  - Identify latent needs, wishes, requirements and expectations
  - Prioritize the planned features
  - Evaluate customer satisfaction
* Execution:
  - PhD student at Fraunhofer IWM
  - Cooperation with the Faculty of Economics and Behavioral Sciences of the Albert-Ludwig-University Freiburg
  - Contact: [email protected]_
* Topic:
  - Dynamics of customer preferences and satisfaction with products, related to the Kano model. The product under investigation is the MarketPlace platform.
* Method:
  - Two queries, one before introduction of the product or at an early stage, and at least one some time after.
* Benefit for MarketPlace:
  - Insights on what customers would like to have on the MarketPlace: content, features, etc.
* Current status (M12): first query designed, compiled and sent out (_https://ec.europa.eu/eusurvey/runner/Survey1MarketPlace_)
* Additional questions were posed at the Vienna EMMC Workshop (M14)

The survey addresses levels of Involvement, Extrinsic and Intrinsic Motivation, and Passive Innovation Resistance (see the tables below for the questions).
The end users were 21% female and 79% male, with an age distribution mostly in the 40s. Most users had previous experience with Nanohub, JMatPro, LAMMPS or Total Materia. The results include a prioritization of attributes by the end users, distinguishing features that must be implemented before release from those that can follow after. These attributes included online training, data conversion, contact to experts, software offerings, and others (see the figure below). The analysis of the results and user feedback is summarised as follows: it clearly sheds light on the importance of the knowledge services to the entire marketplace. Moreover, it shows that software and data come next. In particular, data conversion (interoperability and translation of formats) is a prominent attribute. The next step of the study is to analyse willingness to pay.

<table>
<tr> <th> **Involvement** </th> <th> **Digitalization of materials** </th> <th> **Seven-point Likert scale** </th> <th> **Source** </th> </tr>
<tr> <td> I1 </td> <td> I am very interested in the digitalization of materials. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> Zaichkowsky (1985) </td> </tr>
<tr> <td> I2 </td> <td> I know a lot about the digitalization of materials. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> Bloch (1981) </td> </tr>
<tr> <td> I3 </td> <td> The digitalization of materials is important to me. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> Bloch (1981) </td> </tr>
<tr> <td> I4 </td> <td> I am happy to advise others on questions about the digitalization of materials. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> Summers (1970) </td> </tr>
<tr> <td> I5 </td> <td> I like to inform myself about the digitalization of materials in magazines, at conferences or in the media. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> Corey (1971) </td> </tr>
</table>

<table>
<tr> <th> **UTAUT2 Model** </th> <th> **Online materials modelling and collaboration platforms** </th> <th> </th> <th> </th> </tr>
<tr> <td> **Extrinsic Motivation** </td> <td> </td> <td> </td> <td> Yoo et al. (2012) </td> </tr>
<tr> <td> **Performance Expectancy** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> Venkatesh (2012) </td> </tr>
<tr> <td> P1 </td> <td> I would find the MarketPlace useful in my daily and professional life. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> P2 </td> <td> Using the MarketPlace will enable me to accomplish things more quickly. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> P3 </td> <td> Using the collaboration platform will probably increase my productivity. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> **Facilitating Conditions** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> Venkatesh (2012) </td> </tr>
<tr> <td> F1 </td> <td> I have the resources necessary to use the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> F2 </td> <td> I have the knowledge necessary to use the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> F3 </td> <td> Open web-based services are in general compatible with other technologies I use. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> F4 </td> <td> I can get help from others when I have difficulties using the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> **Social Influence** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> Venkatesh (2012) </td> </tr>
<tr> <td> S1 </td> <td> People who are important to me think that I should use the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> S2 </td> <td> People who influence my behaviour think that I should use the materials modelling and collaboration platform. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> S3 </td> <td> People whose opinions I value prefer that I use the open web-based platform. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> S4 </td> <td> In general, my organization supports the use of the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> **Intrinsic Motivation** </td> <td> </td> <td> </td> <td> Yoo et al. (2012) </td> </tr>
<tr> <td> **Effort Expectancy** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> Venkatesh (2012) </td> </tr>
<tr> <td> E1 </td> <td> Learning to use the MarketPlace is easy for me. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> E2 </td> <td> The interaction with the online platform is clear and understandable for me. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> E3 </td> <td> I find online collaboration platforms easy to use. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> E4 </td> <td> It is easy for me to become skillful at using the materials modelling and collaboration platform. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> **Hedonic Motivation** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> Venkatesh (2012) </td> </tr>
<tr> <td> HM1 </td> <td> Using the MarketPlace is fun. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> HM2 </td> <td> Using open web-based modelling and collaboration platforms is enjoyable. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> HM3 </td> <td> Using open web-based modelling and collaboration platforms is entertaining. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> **Anxiety (Question 2)** </td> <td> </td> <td> **Seven-point Likert scale** </td> <td> </td> </tr>
<tr> <td> A1 </td> <td> I feel apprehensive about using the MarketPlace. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> A2 </td> <td> I hesitate to use online platforms for fear of making a mistake. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
<tr> <td> A3 </td> <td> Using online platforms is somewhat intimidating to me. </td> <td> strongly disagree (1) - strongly agree (7) </td> <td> </td> </tr>
</table>

# Directory of contacts / network of MarketPlace stakeholders

The following software owners exhibiting during the TMS annual meeting in Phoenix, Arizona, from the 11th through the 15th of March 2018 were _personally_ contacted (by Dr. G.J. Schmitz, ACCESS), informed and interviewed w.r.t.
possibly offering their services on the future MarketPlace platform:

<table>
<tr> <th> Thermo-Calc </th> <th> Anders Engstrom </th> <th> President </th> <th> [email protected] </th> </tr>
<tr> <td> Computherm </td> <td> Fan Zhang </td> <td> President </td> <td> [email protected] </td> </tr>
<tr> <td> SenteSoftware </td> <td> Jean Philippe Schille </td> <td> Managing director </td> <td> [email protected] </td> </tr>
<tr> <td> SCM </td> <td> Fedor Goumans </td> <td> Business developer </td> <td> [email protected] </td> </tr>
<tr> <td> Bluequartz </td> <td> Michael Jackson </td> <td> Owner / Software architect </td> <td> [email protected] </td> </tr>
<tr> <td> Kitware </td> <td> Marcus D Hanwell </td> <td> Technical leader </td> <td> [email protected] </td> </tr>
<tr> <td> CNS Software </td> <td> Andreas Pilgrim </td> <td> CEO / co-owner </td> <td> [email protected] </td> </tr>
<tr> <td> Ohio State University </td> <td> Yunzhi Wang </td> <td> Professor </td> <td> [email protected] </td> </tr>
<tr> <td> Citrine Informatics </td> <td> Bryce Meredig </td> <td> CSO </td> <td> [email protected] </td> </tr>
</table>

They were all very positive and highly interested, and want to be kept informed about the developments in order to possibly influence them with respect to their needs.

# Deviations from the objectives, corrective actions

None so far. The data management plan and IPR exploitation strategy, along with a table of key exploitable results (KER), are summarized in the present document. The initial table of established **direct** contacts to end user communities and SWOs depicted in this document is complemented by a large list of possible future contacts (see e.g. D3.1 for a comprehensive list of SWOs). Continuous updates of this document will be reported until M60.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0351_5G-CARMEN_825012.md
# Executive Summary

The purpose of D6.1 is to define a Data Management Plan (DMP) for 5G-CARMEN and to situate the various aspects of that DMP within the wider H2020 DMP framework and the European Commission's Pilot on Open Research Data.

In Section 1 we introduce some core features of the 5G-CARMEN project along with a general framework for the concept of data management.

In Section 2 we elaborate on the management of knowledge and define a knowledge protection strategy. We briefly analyse the information flow within the project, focusing on the project structure as split into different Work Packages and on each WP's mission and co-dependencies. Following that, we make an extended analysis of the outward information flow, indicating communication targets that could display an interest in the data generated by the 5G-CARMEN project. Moreover, we define the main channels of communication along with the phases in which the sharing of the project's data will occur. Finally, we refer to the Ethics and IPR frameworks that will be used by 5G-CARMEN.

In Section 3 we address the Open Access Policy (OAP) framework by first defining it and then referring to its benefits and structure. The basis of this discussion is the European Commission's wider H2020 framework and the relevant OA policies. We discuss the distinction between basic and applied research and their thin boundaries in the Age of Information. We then identify the need for research organizations and companies to collaborate on research projects and the benefits acquired from that collaboration. Furthermore, we deepen the analysis by elaborating on open-source licensing and open access policies for research data and publications.

In Section 4 we define the Data Management Plan of 5G-CARMEN. First, we set out the basic principles and guidelines according to EU directives and define the requirements and limitations on FAIR data. Following that, we present the DMP structure of 5G-CARMEN by identifying the information data categories. We then analyse the flow, the storage, the sharing and the disposal of project data.

# Introduction

## Preamble

The 5G-CARMEN project is part of the 5G-PPP initiative and its objective is to promote the merging of roads and vehicles with the digital world, creating always-connected, automated and intelligent "Mobility Corridors". The Bologna-Munich road network will be the first deployment field for the 5G-CARMEN innovative architecture. 5G-CARMEN will advance the use of C-V2X based on 3GPP LTE and NR radio access, extending the activities foreseen by the European Automotive Telecom Alliance (EATA) and the 5G Automotive Association (5GAA) on connected and automated vehicles. The 5G-CARMEN architecture will optimize road mobility with regard to security, connectivity and autonomous driving, and will employ the following synergistic applications of enabling technologies:

1. Hybrid radio access network for connected vehicles
2. Distributed and multi-layer network-embedded cloud
3. MEC-assisted range extension and interworking between C-V2X and C-ITS
4. Service-oriented predictive quality of service through end-to-end slicing
5. Precise positioning and time synchronization
6. Secure, multi-domain and cross-border service orchestration
7. 5G New Radio and new frequency bands
The project is expected to attract a number of stakeholders, including car manufacturers, road operators, vehicle owners and local authorities, since it will add further value to their business models and is aligned with the agenda of the European 5G Action Plan, which foresees uninterrupted 5G coverage on major roads.

## General framework for data management

The search for knowledge, the need to answer even the toughest questions about the world surrounding us, was always one of mankind's greatest quests; thus science was born. The scientific community has always strived to generate research results and make them accessible to the world, thus promoting a culture of inquiry and knowledge. From Archimedes, Einstein, Newton and Tesla to today's institutions, large corporations and SMEs involved in research, sharing the acquired knowledge is still a task of great importance. Today, in the Information Age, the rate at which data is exchanged is tremendous. The internet has become a global and interactive database of knowledge where research results can be accessible to everyone and can be used for the collective progress of mankind, aligned with the concept of the "public good".

Sharing research knowledge within the concept of the "public good" dictates that there is a worldwide online distribution of the peer-reviewed journal literature, with free online access for every interested party, from scientists to people with inquiring minds. Access barriers should be removed to provide a fertile ground for further research, help to further improve our education systems and ensure that all humanity has equal opportunities in exploiting the results of that research.

According to the 5G-CARMEN Grant Agreement (GA ID 825012), all involved partners must implement the project effort as described in the respective GA and in compliance with the provisions of the GA and all legal obligations under applicable EU, international and national law. The current document provides a Data Management Plan for 5G-CARMEN following the template recommended by the EC [ _1_ ]. This DMP describes how data will be handled in terms of collection, organization, management, storage, security, back-up, preservation and sharing (where applicable). Proper data management is a necessary component of the responsible conduct of research, since it ensures the value of the research results and assists in sustaining that value for the years to come. The purpose of the 5G-CARMEN DMP is to make 5G-CARMEN data easy to discover and access, intelligible and interoperable, so that it may be used beyond the core objectives of 5G-CARMEN.

Please note the distinction between open access to scientific peer-reviewed publications and open access to research data:

* publications - open access is an obligation in Horizon 2020.
* data - the Commission is running a flexible pilot which has been extended and is described below.

5G-CARMEN will follow two different strategies for a self-archiving repository: (1) the project website will host the repository, granting visibility at the end of the project as well; and (2) the consortium will consider the case of a centralised repository, such as the Zenodo repository. In either case, both options enable third parties to access, exploit, reproduce and disseminate the content at no cost. An illustrative sketch of depositing an output to such a repository follows.
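As an illustration of the second strategy, the sketch below deposits a file on Zenodo through its public REST API (documented at https://developers.zenodo.org). The access token, file name and metadata values are placeholders, and the snippet omits the error handling a production script would need.

```python
# Sketch: deposit a project output on Zenodo via its REST API.
import requests

BASE = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "ACCESS_TOKEN"}  # placeholder personal token

# 1. Create an empty deposition.
deposition = requests.post(BASE, params=params, json={}).json()

# 2. Upload the file into the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("dataset.zip", "rb") as fp:
    requests.put(f"{bucket}/dataset.zip", data=fp, params=params)

# 3. Attach minimal metadata, then publish (publishing mints a DOI).
metadata = {"metadata": {
    "title": "5G-CARMEN example dataset",
    "upload_type": "dataset",
    "description": "Illustrative deposit only.",
    "creators": [{"name": "Doe, Jane", "affiliation": "5G-CARMEN consortium"}],
}}
requests.put(f"{BASE}/{deposition['id']}", params=params, json=metadata)
requests.post(f"{BASE}/{deposition['id']}/actions/publish", params=params)
```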
# Knowledge Management and Protection Strategy

## Information flow within the project

The general description of the information flow within the project is depicted in Figure 1. WP2 provides requirements and specifications to steer the technical work packages WP3 and WP4. Moreover, WP2 also provides input to the pilot work package WP5 in the form of refined scenarios and KPIs. Additionally, WP3 and WP4 will jointly cooperate to provide WP5 with the prototypes to be piloted over the Bologna-Munich corridor. WP6 will receive inputs from WP5 and will use them to analyse the project impact and perform use-case cross-validation and business modelling studies. The Project Management (WP8) and the Dissemination and Communications (WP7) work packages will run throughout the entire project, interacting with the other work packages with the aim of ensuring smooth project execution and proper dissemination of the results.

Such cross-information exchanges and dependencies need to be monitored efficiently in order to have the project partners leverage the available and developed knowledge within the project. To ensure good communication among project partners, an online file repository system is being used for internal information exchange, facilitating daily collaboration. Additionally, the project leverages WP-wise mailing lists allowing daily discussion, and a video conferencing tool to host monthly project management meetings and weekly WP-wise meetings, ensuring partner coordination and alignment in the technical work.

Communication within the consortium is one of the main objectives of the project management, whose structure, roles and responsibilities were described in the project management handbook deliverable [ _2_ ]. Importantly, within the 5G-CARMEN management structure, a Knowledge and Innovation Management (KIM) team has been appointed since the beginning of the project. The KIM team will ensure effective innovation management, developing and constantly updating both a market analysis and a business plan for the results achieved by 5G-CARMEN. The KIM team also ensures the monitoring of IPR issues as regulated in the Consortium Agreement and explores related opportunities: opportunities for patenting will be considered and analysed, and market relationships will be created and reinforced. The Consortium Agreement will provide rules for handling confidentiality and IPR to the benefit of the Consortium and its partners. Classified documents will be handled according to proper rules with regard to classification, numbering, locked storage and distribution limitations. This team is chaired by the Innovation Manager and includes the Project Coordinator, the Technical Manager and the WP Leaders.

## Outward information flow

### Communication targets

5G-CARMEN's Communication Strategy identifies the following eight groups of entities as the target audience for the project. Open access to the project's results for these groups is critical to the success and further usage of the output of 5G-CARMEN. For each target group, a broad description is provided along with the benefits and goals that each group of stakeholders will be focusing on in terms of content.
**_Table 1: Communication targets_**

<table>
<tr> <th> **ID** </th> <th> **Target Group** </th> <th> **Description** </th> <th> **Stakeholder Interest** </th> </tr>
<tr> <td> **A** </td> <td> Industry, SMEs and Entrepreneurs </td> <td> Stakeholders operating in fields related to the project, such as automotive companies, telecoms and network operators, and SMEs and entrepreneurs operating in the 5G domain </td> <td> * Project results which will enable development of new products * Complementing the findings of 5G-CARMEN with internal knowledge to increase impact </td> </tr>
<tr> <td> **B** </td> <td> 5G Infrastructure PPP Programme Stakeholders </td> <td> Participants as well as any stakeholder involved in the 5G Infrastructure PPP </td> <td> * Finding synergies in tackling common issues * Enhancing benefit by combination of results * Co-organising events </td> </tr>
<tr> <td> **C** </td> <td> Road operators and road infrastructure-related stakeholders </td> <td> Road operators and related international entities, involved in the provision of physical and digital infrastructure to the activities of the project </td> <td> * Cross-validation of solutions through the output of 5G-CARMEN * Support of findings related to the transition phase of CCAM * Definition of non-technical requirements </td> </tr>
<tr> <td> **D** </td> <td> Technology Clusters </td> <td> European initiatives and clusters, and research-focused organisations </td> <td> * Leverage of the project's results in own research activities * Knowledge exchange and building through project events </td> </tr>
<tr> <td> **E** </td> <td> Researchers and Academics </td> <td> Stakeholders from universities, research centres and R&D departments of industry entities </td> <td> * Advancing research * Benefits in training personnel and students * Use cases provide real-life demonstrations of theoretical findings </td> </tr>
<tr> <td> **F** </td> <td> Policy Makers </td> <td> Policy-makers at any level, such as EC Directorates and Units, Ministries and Agencies </td> <td> * Evaluation of existing or proposed legislation through the perspective of the project's innovations * Definition of future research requirements </td> </tr>
<tr> <td> **G** </td> <td> Standards bodies and fora </td> <td> Organisations focused on standardisation, and industry fora </td> <td> Input for standardisation activities </td> </tr>
<tr> <td> **H** </td> <td> General Public </td> <td> Any other stakeholder group or individual interested in the project </td> <td> * Stimulate innovation in society as a whole * Understand and support European research activities </td> </tr>
</table>

### Communication channels

The dissemination approach has been designed to ensure open access for the stakeholder groups listed above. Several means of communication have been adopted for this purpose, ranging in technological effort and in type and depth of reach; these can be broadly categorized into two areas: events and publications. A Web Portal and social media platforms have also been used to maximise the impact of the dissemination channels.

#### Publications

5G-CARMEN dissemination activities are aimed at reaching several target audiences and are therefore varied and extensive. The following types of publications are part of the dissemination strategy:

* Articles in international peer-reviewed magazines and journals.
* White Papers published in synergy with the European Technology Platform Networld2020 and relevant industry fora. These are particularly targeted at facilitating stakeholder understanding of the project's approach and decision-making.
* Promotional material such as brochures, leaflets and flyers.
* Project documentation: deliverables and technical reports will be made publicly available through the Web Portal.
* Logo and templates to support the identity of the project throughout its outputs.
* Two electronic newsletters per year will be released.

#### Events

Participation in events will allow project consortium members with direct experience in the project and related knowledge to directly reach target audiences. The following types of events are targeted for dissemination of the project's activities:

* Organisation of conferences and informative workshops
* Participation in industrial exhibitions
* Participation in and contribution to international conferences
* Interactions with worldwide fora, institutes, and standardization organizations

#### Online Presence

The 5G-CARMEN dissemination approach leverages the growing worldwide digitalisation trends to allow open access to its generated content. In particular, a dedicated Web Portal has been designed and social media are being used to maximise reach. The Web Portal provides content generated by the project, such as reports and presentations, for open access by the target audience. Moreover, it includes links to the project's social media channels to enable further content sharing. Also included on the Web Portal are dates and information about upcoming events, as well as general information about the project itself. Through the use of the Web Portal, the partners of the project will also be able to freely exchange data. 5G-CARMEN's website adopts modern design principles which, among other features, present users with the most relevant material first and ensure an optimal viewing experience.

The main social media platforms selected for the project are Twitter and LinkedIn. These have been deemed the most appropriate in terms of targeted reach and structure of content for the purpose of sharing the project's material. Social media participation is expected to rise significantly once the outputs of the project's activities start to be shared through the dedicated profiles. The social media profiles used by the project are the following:

* Twitter: _https://twitter.com/5g_carmen_
* LinkedIn profile: _https://www.linkedin.com/company/5g-carmen/_

### Communication Phases

Four specific phases have been defined to enable the sharing of the project's contents. Each phase is related to specific groups of contents and a related target audience, and as such will exploit a bespoke range of dissemination channels. In order to positively impact the standardization and ease of access of the contents, the phases are aligned to those of other projects within the 5G PPP Programme.

* Phase 1 - Create Awareness
* Phase 2 - Increase the potential impact
* Phase 3 - Results
* Phase 4 - Valorisation

During Phase 1, the objective of the dissemination effort is to raise awareness about the project in general. As many 5G stakeholders as possible are targeted during this period. The awareness already raised by the 5G PPP will be leveraged to increase the outreach of the project. A workshop will be held following the start of the project's activities to communicate the roadmap of the ICT-18-2018 call. In Phase 2, four use cases will be used to begin analysing the potential impact. The outcomes of this activity will be communicated to the target audiences reached during Phase 1.
The main material expected to be communicated in Phase 2 is the facility usage and testing of the pilot. This phase will be significantly impacted by a series of dedicated workshops with key stakeholders of the project. Phase 3 will be used to emphasise the results obtained by external entities as a result of the outputs of the project. The goal of this phase is to highlight the commercial viability resulting from the project and to attract further external users. Finally, in Phase 4, demonstrations will take place for selected target audiences to highlight the final scientific and business findings of 5G-CARMEN. Due to its content, this phase may take place after the project is concluded. This phase has the goal of attracting investors on the basis of the results achieved with external customers.

## IPR Management

5G-CARMEN is a project with participants from many different countries; therefore a strategy to properly handle data management and IP ownership is an essential process to secure and protect the value generated by the software tools, considering that the contractual formalization of software takes the form of license agreements. Such agreements impose specific usage rules on third parties that intend to make use of the software. Developing software can be defined as a creative task, relying on the coding and development abilities of a developer and also on the ability to translate the functional and operational requirements defined during the design phase into a sequence of instructions. Figure 4 below describes a typical software development process and the multiple IPRs that can be generated.

The research data generated or created under the project may include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, and images. The obligations to disseminate results (Article 29.1 of the GA) and to provide open access to scientific publications (Article 29.2 of the GA) do not, in any way, change the obligation of consortia to protect results, ensure confidentiality and security obligations, or the obligations to protect personal data, all of which still apply.

The project CA defines all the guidelines that regulate the access rights to the different IPRs, both for the contributed background and for the foreground or results. These guidelines describe the rights any of the actors have on the IPRs involved in the project. The actors that come into play can essentially be distinguished into two categories: project partners and external parties. As the project evolves and the research progresses, datasets may be created and may be subject to changes or updates in terms of the types, formats, and origins of the data. Furthermore, the way the data is named or made accessible may change according to consortium policy changes and/or the identification of potential for exploitation by project partners.

# Open Access Policy

## OAP: Definition, benefits, and general framework

Open access (OA) refers to the practice of providing online access to scientific information that is free of charge to the end-user and reusable. "Scientific" refers to all academic disciplines. In the context of research and innovation, "scientific information" can mean [ _3_ ]:

1. Peer-reviewed scientific research articles (published in scholarly journals), or
2. Research data (data underlying publications, curated data and/or raw data).

There are two main categories that can be identified within the general context of research:
1. **Basic** research, which refers to academic research focusing on providing results of scientific interest. Basic research is intended to be accessible, and access to it is usually provided via publications.
2. **Applied** research, which refers to research supported by companies to provide results that will increase their value and make them more competitive in their industry. Companies expect a return on their investment in applied research; thus they tend to protect the value of that research by using patents and trade secrets.

In Horizon 2020, universities and companies are encouraged to collaborate on research projects towards producing higher-value research results. Combining those two worlds creates insight diversity connected with a strong background of both academic and industry-related knowledge. Through that collaboration, universities are able to expand their scientific interests into more industry-specific sectors like telecoms, ICT and biotechnology. On the other hand, companies get to collaborate with high-calibre academic institutions that can provide scientific insights, high-quality research techniques and methodologies and, eventually, innovative solutions to industry problems. Small and medium companies usually do not have the knowledge base and the resources to radically innovate. Those actors can benefit from participating in consortia that engage in research projects, since those partnerships provide them with the required capacity to best leverage their own resources towards innovation.

As a result of the academia-industry partnership, the borders between basic and applied research have become quite thin. Research Organizations (ROs) are moving from the context of basic to that of applied research; thus publishing research results is being replaced by elaborating patent opportunities. Through that process, ROs display an active interest in extracting the optimum amount of value from their research. However, making research results available to the public is still of the essence. The two mainstream vehicles used for that process are patent applications and journal publications. With the advancement of information technology, which provided easier and broader access to the Internet, defensive publications and the open access model were added as alternative vehicles for knowledge sharing.

The Open Access Model provides free Internet access to research articles and is considered to be a very effective system for broad dissemination of, and access to, research data and publications, thus accelerating scientific progress worldwide. It is a revolutionary policy on access to scientific information that can be structured to also include access policies for private companies. An open access model can be beneficial for those companies in the context of facing scientific fraud, enhancing data quality, increasing the value added by the research results and reducing the resources spent on duplicate research.

The process of establishing open access requires that every discrete or individual producer of scientific knowledge commits to that purpose by contributing research results, source materials, digital representations, raw data and metadata. Open access contributions have to meet two conditions [4]:
1. The authors and right holder(s) of such contributions grant(s) to all users a free, irrevocable, worldwide right of access to, and a license to copy, use, distribute, transmit and display the work publicly, and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship (community standards will continue to provide the mechanism for enforcement of proper attribution and responsible use of the published work, as they do now), as well as the right to make small numbers of printed copies for their personal use.

2. A complete version of the work and all supplemental materials, including a copy of the permission as stated above, in an appropriate standard electronic format, is deposited (and thus published) in at least one online repository using suitable technical standards (such as the Open Archive definitions) that is supported and maintained by an academic institution, scholarly society, government agency, or other well-established organization that seeks to enable open access, unrestricted distribution, interoperability and long-term archiving.

The concept under which the Open Access Model operates is to ensure free, barrier-free access to scientific literature for readers. There are two access options provided by the European Commission [ _3_ ]; Commission Recommendation 2018/790 of 25.04.2018 sets out "a complete policy framework for improving access to, and preservation of, scientific information" [ _5_ ]:

1. Gold open access: articles are immediately made accessible online by the publisher. Up-front publication costs can be eligible for reimbursement by the European Commission.
2. Green open access: researchers make their articles available through an open access repository no later than six months after publication.

The European Commission has suggested that Member States take a similar approach to the results of research funded under their own domestic programs. That step aims to empower the innovation capacity of the EU and provide its citizens with fast, free and unobstructed access to scientific research results. 5G-CARMEN aligns with the Open Access Model and, more specifically, with the directives of a hybrid open access model, considering on a case-by-case basis whether green or gold open access is to be used.

## OA to Open-Source licensing

The output of several deliverables can include contributions to open-source projects. Such contributions could involve contributions in code, documentation, operational management and other processes. A number of partners maintain and develop open-source projects under permissive licenses such as MIT, BSD or the Mozilla Public License. Open-source projects in the 5G domain are exceptionally beneficial to the community, since they bring telecommunications technology closer to the public. SMEs especially benefit, since they use such projects as a development platform to test and deploy services. The following table is a comparison of the open-source licenses that will be used throughout the 5G-CARMEN project.
**Table 2: Comparison of Open Source Licenses.**

<table>
<tr> <th> **Term** </th> <th> **MIT** </th> <th> **Mozilla Public License** </th> <th> **Apache** </th> <th> **GNU** </th> </tr>
<tr> <td> **Popular** </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> </tr>
<tr> <td> **License Type** </td> <td> Permissive </td> <td> Permissive </td> <td> Permissive </td> <td> Strong Copyleft </td> </tr>
<tr> <td> **Jurisdiction** </td> <td> Not specified </td> <td> Not specified </td> <td> Not specified </td> <td> Not specified </td> </tr>
<tr> <td> **Grant patent rights** </td> <td> X </td> <td> ✔ </td> <td> ✔ </td> <td> X </td> </tr>
<tr> <td> **Patent retaliation clause** </td> <td> X </td> <td> ✔ </td> <td> ✔ </td> <td> X </td> </tr>
<tr> <td> **Modification** </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> </tr>
<tr> <td> **Distribution** </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> </tr>
<tr> <td> **Liability** </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> </tr>
<tr> <td> **Warranty** </td> <td> X </td> <td> X </td> <td> X </td> <td> X </td> </tr>
<tr> <td> **Private Use** </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> <td> ✔ </td> </tr>
<tr> <td> **Disclose source** </td> <td> X </td> <td> ✔ </td> <td> X </td> <td> ✔ </td> </tr>
</table>

## OA Management of Research Data

Open access to research data refers to the right to access and reuse digital research data under the terms and conditions set out in the Grant Agreement. Research data refers to information, in particular facts or numbers, collected to be examined and considered as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form. Users can normally access, mine, exploit, reproduce and disseminate openly accessible research data free of charge. Open access provides a number of benefits that come with broader access to research data, such as: 1) building on previous research results; 2) encouraging collaboration; 3) speeding up innovation; and 4) involving citizens and society.

Currently, the consortium expects to make most of the 5G-CARMEN research data and datasets openly available. The consortium expects to make the datasets available through the project's repository (which is foreseen to support versioning). 5G-CARMEN will follow two different strategies for a self-archiving repository: (1) the project website will host the repository, granting visibility at the end of the project as well; and (2) the consortium will analyze the case of a centralized repository, such as the Zenodo repository. In either case, both options enable third parties to access, exploit, reproduce and disseminate the content at no cost. The repository is maintained by the project coordinator and access to it is authenticated. Access to the repository will be enabled through a web interface that only allows download of the dataset (i.e. it will not be possible to delete, upload, check out or commit other files). Figure 5 displays open access to research data as part of the dissemination and exploitation plan of H2020 projects.
**Figure 5: Open access to research data** [ _6_ ]

# Data Management Plan

## Principles-Guidelines

The European Union enables open innovation by encouraging projects funded under the European Union Framework Programme for Research and Innovation Horizon 2020 to provide open access (free-of-charge online access for any user) to research data generated in the context of H2020 projects. Starting from projects funded under the 2017 Work Programme, all H2020 projects are encouraged to be part of the Open Research Data Pilot (ORD pilot) in an effort to provide even wider access to scientific facts and knowledge, while also improving and maximizing access to, and re-use of, the research data generated therein. At the same time, the ORD pilot takes into account the need to balance openness with the protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns and security, as well as data management and preservation questions. In this direction, the possibility for a project to opt out of the ORD pilot is available before project submission or during project execution. The 5G-CARMEN consortium has decided to opt out of the ORD pilot process.

## FAIR data: Requirements and limitations

The data generated during and after all projects should follow the FAIR data principles, which require that data are Findable, Accessible, Interoperable and Reusable. These requirements do not affect implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution. In this direction, H2020 projects shall adopt methodologies for data generation, collection and sharing so as to ensure the following:

* _data are findable_ through the exploitation of metadata for convenient data discovery and of standard persistent and unique identifiers (such as DOIs).
* _data are openly accessible_, where this is possible; adequate justification is required to be provided otherwise. Towards this, projects shall use methods and tools for providing access to data along with any required complementary pieces of information, such as guidelines for repository access and use.
* _data are interoperable_ and allow for data exchange and re-use among researchers through the extended and targeted exploitation of standardized data representation formats, vocabularies, etc., or mappings when the former is not possible.
* _data re-use is promoted_ through clarifying licenses.

To ease the application of FAIR data and, therefore, maximise research data openness, the EC suggests various standards and standardised processes that can be exploited towards the adoption of the FAIR principles. Indicatively, several standardised metadata vocabularies covering a wide set of domains are listed in the Metadata Standards Directory, while EUDAT B2SHARE provides a built-in license wizard that facilitates the selection of an adequate license for research data. The FAIR principles have been formulated with the purpose of improving best practices for data management and data curation. On top of this, FAIR aims to describe principles that can be applied to a wide range of data management purposes, whether for data collection or for the data management of larger research projects, regardless of scientific discipline.
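As an illustration of the findability requirement above, a dataset record could carry a persistent identifier plus a small set of standard descriptive fields. The sketch below uses DataCite-style field names, which underpin most DOI registrations; all values are placeholders rather than real 5G-CARMEN records.

```python
# Illustrative, DataCite-style metadata record making a dataset findable
# through a persistent identifier (DOI) and standard descriptive fields.
dataset_metadata = {
    "identifier": {"identifier": "10.5281/zenodo.0000000", "identifierType": "DOI"},
    "creators": [{"creatorName": "Doe, Jane", "affiliation": "5G-CARMEN consortium"}],
    "titles": [{"title": "Example 5G-CARMEN measurement dataset"}],
    "publisher": "Zenodo",
    "publicationYear": "2020",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    "subjects": ["5G", "C-V2X", "connected and automated mobility"],
    "rightsList": [{"rights": "Creative Commons Attribution 4.0",
                    "rightsURI": "https://creativecommons.org/licenses/by/4.0/"}],
}
```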
With the endorsement of the FAIR principles by H2020 and their implementation in the guidelines for H2020, the FAIR principles serve as a template for lifecycle data management and ensure that the most important components of the lifecycle are covered. This is intended as an implementation of the FAIR concept rather than a strict technical implementation of the FAIR principles. The FAIR concept implementation of each project is documented in a Data Management Plan (DMP), which is a key element of good data management. DMPs help shape the data management lifecycle principles to be followed by an H2020 project, as described above. Such documents are created during the first six months of a project and are appropriately refined throughout its course so as to fulfil evolving requirements. The 5G-CARMEN consortium is expected to adhere to the conditions laid out in the 5G-CARMEN Data Management Plan below, in which all details related to the management of 5G-CARMEN research data are specified.

## DMP structure

The use of innovative information technologies raises many questions concerning the right of individuals to determine how their personal information may be used. We, as 5G-CARMEN, consider this right to be of immense importance. Data protection issues arising when handling the personal data of test drivers, collaborators and partners will be taken into account. Personal details will only be recorded, processed or used if this is permitted by law or if the person involved has given permission. We are committed to the principles of sparing use of personal data and transparency in data processing. In order to accomplish this, we have a detailed Data Management Plan (DMP) in place. This approach ensures a consistent and appropriate level of data protection throughout the 5G-CARMEN project.

Our DMP is a living document which describes the data management life cycle for all data collected, processed and generated in an H2020 project. This plan outlines how data will be created, managed, shared and preserved throughout the project, providing arguments for any restrictions which apply to any of these steps. Data protection covers data in either digital or physical form. This DMP aims to prevent unauthorized disclosure of information, which can occur in many different forms: release, transfer, dissemination, or other communication in an oral, written, electronic, or any other way. A potential recipient of unauthorized information could be any person or entity. This DMP will also provide a set of guidelines to minimize the impact and provide auditability in case this is required.

There are five technocentric categories that define the life cycle of critical information data: 1) creation; 2) storage; 3) usage; 4) transmission; and 5) deletion. Figure 6 illustrates the life cycle of critical data inside an organization. The processes which will be implemented in relation to data protection are divided into the following categories:

* Storage of digital data
* Storage of physical data
* Sharing of data
* Data disposal, deletion and destruction

### Storage of digital data

Securing stored data involves preventing unauthorized people from accessing it, as well as preventing accidental or intentional destruction, infection or corruption of information. While data encryption is a popular mechanism, it is just one of many techniques and technologies that can be used to implement a tiered data-security strategy.
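As a minimal illustration of the encryption layer mentioned above, the sketch below encrypts a confidential payload at rest using the third-party Python `cryptography` package. Key management is deliberately simplified here; in practice, keys would be held in a separately access-controlled store such as a vault or HSM.

```python
# Minimal sketch: encrypting confidential data at rest.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # 32-byte key, base64-encoded
cipher = Fernet(key)

plaintext = b"strictly confidential project data"
token = cipher.encrypt(plaintext)     # authenticated encryption (AES-CBC + HMAC)

# Store 'token' on disk; keep 'key' separately under strict access control.
assert cipher.decrypt(token) == plaintext
```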
Steps to secure data involve understanding applicable threats, aligning appropriate layers of defence, and continually monitoring activity logs, taking action as needed. This means that a multi-tier approach needs to be adopted by all the partners. The proper method of storage, along with appropriate levels of access for privileged users, is an important consideration for comprehensive protection. Improperly stored information and overly permissive accounts are a recurring theme in many high-profile breaches. Partners within 5G-CARMEN will follow a specific set of guidelines to comply with the project's main requirement for the storage of digital data.

<table>
<tr> <th> **Term** </th> <th> **Description** </th> </tr>
<tr> <td> **Requirement** </td> <td> **Data-in-storage must be protected from unauthorized access, modification and loss.** </td> </tr>
<tr> <td> **Measure to be implemented by all partners** </td> <td>
* Data availability must be guaranteed.
* Confidential data must be stored using access protection.
* Strictly confidential information must only be stored in an encrypted mode.
* Confidential data must not be stored in online services that are not approved by the 5G-CARMEN Consortium.
* Any exception from this measure must be explicitly approved.
* Modifications to data with high integrity requirements must be documented and approved by the partners.
</td> </tr>
</table>

### Storage of physical data

Physical data refers to data assets which are physically manifested, such as paper documents, or the physical manifestation of digital assets, such as printed copies of emails. Physical data poses unique challenges when it comes to its protection. An important security factor for physical data is where it is physically located. Locations with poor physical security greatly increase the likelihood of data compromise. A significant challenge is that physical data cannot be accessed in a controlled and encrypted way, in the same manner as its digital counterparts. Physical data usually displays information in plain text, which can be deciphered by any malicious onlooker. Therefore, it is important to implement a number of security processes when accessing and modifying it, in order to comply with 5G-CARMEN's security requirements.

<table>
<tr> <th> **Term** </th> <th> **Description** </th> </tr>
<tr> <td> **Requirement** </td> <td> **Storage of physical data must be protected from unauthorized access, modification and loss.** </td> </tr>
<tr> <td> **Measure to be implemented by all partners** </td> <td>
* Physical access to confidential data must be access controlled.
* Physical access to confidential data must be recorded.
* Physical data, when replicated or copied, must be clearly indicated as a copy.
* A record of copies of physical data must be kept.
</td> </tr>
</table>

### Sharing of data

Data sharing in the context of 5G-CARMEN refers to the process of making confidential data available to authorized partners. To prevent impact on the confidentiality and integrity of data while it is being shared, a set of processes will need to be adopted between all partners. These processes not only improve the integrity and confidentiality of data, but also the audit trail in case of compromise. Shared confidential data is often copy-protected to prevent the creation of unauthorized copies by malicious actors.
<table>
<tr> <th> **Term** </th> <th> **Description** </th> </tr>
<tr> <td> **Requirement** </td> <td> **Data must only be exchanged in the context of a legal framework and/or research need, and while ensuring confidentiality.** </td> </tr>
<tr> <td> **Measure to be implemented by all partners** </td> <td>
* For the exchange of confidential data, only services approved by 5G-CARMEN must be used. This applies in particular to online services.
* Strictly confidential data sent by email must be encrypted.
* Encryption/decryption keys and other access mechanisms need to be communicated between the partners in a secure manner.
* A process will be implemented to rotate keys and access controls in case of compromise.
</td> </tr>
</table>

### Data disposal, deletion and destruction

Protecting confidential and sensitive data from accidental disclosure is of paramount importance. A key area in data security is the disposal of confidential data, in both electronic and paper formats. Confidential information discarded in the trash or recycling bin is legally and effectively open to anyone. The same applies to any data stored on discarded or donated computer technology, such as hard drives and thumb drives, once the devices are thrown away or donated to charity. Electronic data kept beyond its usefulness invites mischief or accidental breach. The secure disposal, deletion, and destruction of data aims to make data unrecoverable by other parties.

<table>
<tr> <th> **Term** </th> <th> **Description** </th> </tr>
<tr> <td> **Requirement** </td> <td> **Data which is no longer needed* must be disposed of, deleted or destroyed** (*"Not needed" means not needed for research processes and not subject to the time period for storage). </td> </tr>
<tr> <td> **Measure to be implemented by all partners** </td> <td>
* Confidential paper documents which are no longer needed must be disposed of using data protection boxes or shredded.
* Confidential data which is no longer needed must be (securely) erased.
* Data storage of mobile end devices and data carriers which are no longer needed must be (securely) erased.
* If it is not possible to erase the data storage of a mobile end device or data carrier, then the end device or the carrier must be destroyed.
* Tamper-resistant hardware platforms such as secure elements, secure enclaves, SIMs, etc., which are used to store confidential data, must be destroyed.
</td> </tr>
</table>

## Ethics

The 5G-CARMEN consortium is to respect the framework that is structured by the joint provision of:

1. The European General Data Protection Regulation (Regulation (EU) 2016/679) on the "protection of natural persons with regard to the processing of personal data and on the free movement of such data" [_10_]
2. The Horizon 2020 Ethics guidelines [_11_]

The 5G-CARMEN project partners will abide by professional ethical practices and comply with the Charter of Fundamental Rights of the European Union. An extended analysis regarding data protection and ethical issues and requirements can be found in D1.1 and D1.2, which address the following points relative to the DMP:

* Processing of personal data (D1.1 section 2.2)
* Data protection of voluntarily interviewed participants (D1.1 section 3.1), which includes a description of the processes of:
  * Obtaining consent
  * Recording and storing consent
* Particular categories of personal data (D1.2 section 3)
* Data protection officer (D1.2 section 5)
* Protection of personal data in 5G-CARMEN (D1.2 section 6)
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0352_EHRI_654164.md
# Introduction

Deliverable 13.3 "Data management planning (DMP) for long-term preservation" is described in the DOW as follows: "_The report and corresponding workshop is for archives that wish to develop knowledge and capabilities in regard to the preservation of digital resources based on the EHRI data management policies. It will focus on data management planning, preservation policy, and access policy. It will be delivered in coordination with WP4_".

This deliverable reports on the activities carried out in relation to Task 13.2 (Secure Long-term Access Infrastructure for the Preservation of Holocaust Research Objects). It includes details on the workshop that was held on 31 July and 1 August 2017 in The Hague. The outcomes of the workshop are included in _Chapter 3_.

The data and information assets of archives consist of a wide range of different types of digital objects. Examples are databases, text files, websites, social media collections, digital images and multimedia files. These objects can be either digital surrogates of analogue originals (e.g. digitized photographs) or "born digital" (e.g. documentation of archival collections or digitally recorded oral histories). In a practical sense, long-term preservation of data and information assets is related to questions like:

* What are the features of durable digital objects?
  * How to avoid file format obsolescence?
  * How to be sure that future users can use digital objects in a correct way?
  * How to be sure that digital objects remain authentic?
* What kind of digital repository is required?
  * How to assess the quality of a digital repository?
  * How to implement a "Trusted Digital Repository"?
* What kind of services are required to provide long-term access to digital objects?
  * How to protect data objects?
  * How to share data objects?
  * How to archive data objects?

The aim of the activities carried out is to help answer these kinds of questions, taking into consideration the specific characteristics and requirements of the archives within the EHRI consortium. Within the consortium, the number and variety of digital Holocaust objects varies significantly, as do the expertise and resources available to manage the digital assets.

# Context and coherence

Task 13.2 "Secure Long-term Access Infrastructure for the Preservation of Holocaust Research Objects" is related to three deliverables:

* D13.3 Data management planning for long-term preservation
* D13.4 Trusted Digital Repository workshop
* D13.2 Long-term access infrastructure for preserving Holocaust research objects

Data management refers to the development, execution and supervision of (research) plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets. Data should be archived in a repository that complies with international standards and guidelines of trustworthiness: a certified "Trusted Digital Repository". Thus, the mission of the task (13.2) and deliverables (13.2, 13.3 and 13.4) can be characterized as: "Preserving digital Holocaust evidence for the future". The outcomes of both activities, on "data management planning" (D13.3) and on "Trusted Digital Repositories" (D13.4), are input sources for the "Long-term Access Infrastructure for Preserving Holocaust Research Objects" (D13.2). The three deliverables are the outcome of the activities carried out in Task 13.2 "Secure Long-Term Access Infrastructure for the Preservation of Holocaust Research Objects".
This secure long-term access infrastructure will consist of a set of guidelines, principles and services that enable organisations to provide durable access to digital Holocaust resources.

**Figure 1**: EHRI project Deliverables in relation to Task 13.2, "Secure Long-term Infrastructure for the Preservation of Holocaust Research Objects".

## Target audience

Data management planning, trustworthy digital archiving and the issues relevant to realising a long-term access infrastructure are obviously relevant for archives in general, and not specific to archives that curate Holocaust research objects. What is specific to Holocaust archival institutes, though, is the subject or theme of the archival records, photographs, documentation and other objects they curate, which requires specific procedures (e.g. in the field of privacy protection). The activities in this task and its current deliverables are targeted at representatives of archives within the EHRI consortium who wish to develop and extend knowledge and capabilities in regard to the management and long-term preservation of digital resources. Ideally, representatives of archives are involved in managing digital data objects or in formulating an appropriate strategy, such as policy makers. Archives curating Holocaust objects (both inside and outside the EHRI consortium) have a couple of specific characteristics, such as:

1. _Heterogeneous level of IT-savviness_. The EHRI consortium contains 21 partners that curate documentation and/or archival material on the Holocaust, in digital and/or analogue form. The level of sophistication concerning the application of information technology to curate digital assets ranges from basic to advanced. This determines to what extent an archive might be able to contribute to the workshop or learn from its results. Two groups of archives within the EHRI consortium are distinguished: (1) "IT-savvy" archives that have a policy on data management / long-term archiving (or intend to define a policy), and (2) archives that do not yet have a data management / long-term archiving policy. A representation of the first group will be actively involved in this task (to discuss data management planning issues).
2. _A lot of the curated archival material contains personal data._ This brings data management issues such as "privacy protection" and "user authorisation and authentication" to the forefront.
3. _Analogue archival material is very vulnerable._ The majority of the archival material originates from the period of the Second World War and the paper quality of this material is low. Digitization can be used to preserve the vulnerable originals. Issues like preservation imaging and long-term access to the images will influence the data management policies applied by the archive.
4. _The archive collection can contain copies of originals curated by other archives._ Also specific to collection-holding institutes curating Holocaust records is that their collections can contain copies of records. This can include copies of original archive sources, both in analogue (e.g. photocopy) and digital (e.g. digital images) format, as well as copies of archival finding aids. This "copy-original" issue has specific implications for data management. The original and the copy, for instance, can be described in different ways and often do not have the same level of detail of description.
## Strategy to achieve the goals of the task

The strategy chosen to ultimately arrive at a roadmap for a long-term access infrastructure for preserving Holocaust Research Objects (D13.2) consisted of three steps:

1. Principles, standards, procedures, etc. (in relation to a long-term access infrastructure to preserve data) from the research data community were selected.
2. These were presented to policy makers (concerning the information architecture) in the EHRI consortium.
3. We assessed to what extent the standards etc. are relevant / of value for the curators of CHIs (in the EHRI consortium).

The workshop in summer 2017 (31 July - 1 August) played an important role in this assessment process. The summary of this workshop can be found in Chapter 3. The next section discusses the input from the research data community.

## Input from the research data community

Management of digital assets is discussed in several communities, such as the cultural heritage community, the records management community and the research data community. Although each community uses methods, standards, terminology and governance models specific to its needs, there are a number of lessons to be learned and applied to the development of a data management policy for EHRI. Since the assets managed by EHRI partners are primarily aimed at scholarly users, it is particularly useful for this work package to take a closer look at data management aspects provided by the research data community. The following data management principles, standards and procedures from the research data community serve as input:

1. FAIR data principles, aimed at the quality of _data objects_.
2. Guidelines of Certification for Trustworthy Digital Repositories, aimed at the quality of _repositories_ that curate digital objects.
3. Services provided by European Research Infrastructures (e.g. EUDAT), aimed at _services_ that support data management.

Each of the above aspects (data, repositories, services) acts as a reference for the assessment of data management issues relevant to CHIs curating Holocaust data objects.

### FAIR data principles

The FAIR data principles are aimed at making data Findable, Accessible, Interoperable, and Reusable. Each principle is clarified below:

1. The _Findable_ data principle. The findable principle concerns the assignment of persistent identifiers to digital objects, the provision of rich metadata and the registration of the data in a searchable resource. To be findable:
   * F1. (meta)data are assigned a globally unique and persistent identifier
   * F2. data are described with rich metadata (defined by R1 below)
   * F3. metadata clearly and explicitly include the identifier of the data they describe
   * F4. (meta)data are registered or indexed in a searchable resource
2. The _Accessible_ data principle. The accessible principle is related to the retrieval of objects by their identifier and the availability of metadata. To be accessible:
   * A1. (meta)data are retrievable by their identifier using a standardized communications protocol
   * A1.1 the protocol is open, free, and universally implementable
   * A1.2 the protocol allows for an authentication and authorization procedure, where necessary
   * A2. metadata are accessible, even when the data are no longer available
3. The _Interoperable_ data principle. Interoperability is realised by using formal, broadly applicable languages for knowledge representation and qualified references. To be interoperable:
   * I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation
   * I2. (meta)data use vocabularies that follow FAIR principles
   * I3. (meta)data include qualified references to other (meta)data
4. The _Reusable_ data principle. The reusable principle involves the application of rich, accurate metadata, clear licenses, provenance and the use of community standards. To be re-usable:
   * R1. meta(data) are richly described with a plurality of accurate and relevant attributes
   * R1.1. (meta)data are released with a clear and accessible data usage license
   * R1.2. (meta)data are associated with detailed provenance
   * R1.3. (meta)data meet domain-relevant community standards

### Guidelines for Certification for Trustworthy Digital Repositories

A Trusted Digital Repository (TDR) has the mission to provide reliable, long-term access to managed digital resources to its so-called "designated community" 2. A designated community is an identified group of potential consumers who should be able to understand a particular set of information. A number of criteria and guidelines have been established regarding the long-term sustainability of digital data. A European Framework for Audit and Certification of Digital Repositories was set up to help organisations obtain appropriate certification as a trusted digital repository 3. It has established three increasingly demanding levels of assessment: _Basic Certification_, consisting of a self-assessment and an external review of the criteria that are part of the Data Seal of Approval (DSA); _Extended Certification_, including the Basic Certification and additionally an externally reviewed self-assessment against a more fine-grained ISO standard (ISO 16363); and _Formal Certification_, the validation of the self-assessment through a third-party official audit based on the ISO standard.

The workshop presented the TDR assessment frameworks with an emphasis on the Data Seal of Approval (_http://datasealofapproval.org_) 4. Fundamental to the DSA guidelines are five criteria, which together determine whether or not digital research data may be qualified as sustainably archived:

* The research data can be found on the Internet.
* The research data are accessible, while taking into account relevant legislation with regard to personal information and intellectual property of the data.
* The research data are available in a usable format.
* The research data are reliable.
* The research data can be referred to.

### Services provided by the European Research Infrastructure EUDAT

The EUDAT initiative aims at developing and supporting research data services for all scientific disciplines, covering the whole data lifecycle. The Humanities are an important target group for EUDAT. The EUDAT "B2Service Suite" consists of services to exchange, synchronize, store, share, replicate, protect and find data (see: _http://eudat.eu_). The EUDAT services suite or Collaborative Data Infrastructure (CDI) consists of seven services. They are briefly described below.

The B2DROP service can be characterized as a personal cloud storage service. It is a secure and trusted data exchange service 5. The next service of the EUDAT Services Suite is the B2SHARE service, to store and share small-scale research data from diverse contexts 6. The service automatically assigns persistent identifiers to records. The B2SHARE service assigns handle PIDs 7. Depositors can document their data objects and give the data a usage license, preferably an open access license.
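Handle PIDs such as those minted by B2SHARE resolve through the global Handle System, so they can be dereferenced with a plain HTTP call. The Python sketch below queries the public REST resolution API of the Handle.Net proxy; the handle value is a made-up placeholder, and the prefix merely mimics the style of a B2SHARE handle.

```python
import requests

HANDLE = "11304/example-record"  # hypothetical handle, for illustration only

# The hdl.handle.net proxy exposes resolution as a REST call returning JSON.
resp = requests.get(f"https://hdl.handle.net/api/handles/{HANDLE}", timeout=30)
resp.raise_for_status()
for value in resp.json().get("values", []):
    if value.get("type") == "URL":
        print("handle resolves to:", value["data"]["value"])
```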
The third service of the EUDAT CDI is the B2SAFE service. This service allows community and department repositories to implement data management policies on research data across multiple administrative domains. The B2STAGE service enables the movement of large amounts of data between data stores and high-performance computing resources. The B2FIND service can be characterized as a simple, user-friendly metadata catalogue of research data collections stored in EUDAT data centres and other repositories. The service provides access to resources that are also available in the EHRI portal. This is because a repository can be harvested both by the B2FIND service and by the EHRI portal 8. B2HANDLE provides an abstraction layer between a globally unique persistent identifier and the physical location of a data object, allowing researchers to reliably cite and refer to data in the long term. B2ACCESS provides an easy-to-use and secure authentication and authorization platform integrated in all other services. It provides various methods of authentication through the home organisation identity provider, but also allows social IDs like Google and Facebook as well as the EUDAT ID. Managers can specify authorisation decisions in the dedicated interface. Figure 2 gives an overview of the services of the EUDAT Collaborative Data Infrastructure (CDI).

The EUDAT project ends early 2018, and this of course obstructs the realisation of a sustainable trustworthy infrastructure. EUDAT, however, will be part of the "European Open Science Cloud" (EOSC) that is planned to emerge on the basis of several European data infrastructure initiatives. Concerning data management services, EHRI should keep an eye on this development as it can play a role in the long-term access to EHRI databases.

**Figure 2:** The services of the EUDAT Collaborative Data Infrastructure

## Towards a long-term access infrastructure

The challenge in Task 13.2, Secure Long-term Access Infrastructure for the Preservation of Holocaust Research Objects, is to collect, formulate and disseminate knowledge and expertise on the management and long-term preservation of digital objects of value to the 20 archives in the consortium and beyond. The sources of information come both from the EHRI partners and from the community that develops and maintains services in the field of research data management. This process is described below:

1. The collection phase. The main idea is to assess the services, systems and procedures of the EHRI partners to curate digital objects in relation to reference models and data infrastructure services that have their origin in the research data management field. As not all EHRI partners have the same capability with regard to data management and long-term preservation, a consultation is carried out. An important part of the collection phase is the workshop on data management planning and becoming a trusted digital repository.
2. The formulation phase. State-of-the-art information on research data management and long-term archiving is provided in the present report, presented in Chapter 3 as the outcomes of the workshop. This report is available to the EHRI consortium.
3. The dissemination phase. The main focus of the dissemination phase will be the formulation of a long-term access infrastructure for preserving digital Holocaust objects, which is scheduled to be delivered near the end of the EHRI project. The long-term access infrastructure for preserving Holocaust research objects is the subject of Deliverable 13.2.
The formulation of this infrastructure will be based on input from the research data community. See Figure 1.

# Workshop Outcomes

This chapter contains a report of an EHRI workshop on data management organised on 31 July and 1 August 2017.

## Program & Content Workshop

The workshop consisted of two days: the first day's topic was Data Management Planning, whereas the second day's topic was Long Term Access to Holocaust Data. Three presentations were given on the first day, according to the topics that are described in Section 2.3: FAIR Principles (by Peter Doorn), Certification of Trusted Digital Repositories (by Heiko Tjalsma), and Data Infrastructure Services (by René van Horik). In addition, a presentation on digital information management at the Dutch National Archives was given by Margriet van Gorsel. For the second day, participants were asked to prepare slides about their view on Archiving, Access, and Policies. A summary of these discussions can be found in Section 3.4.

**Figure 3:** Program of the Workshop

## Workshop participants

Within the EHRI consortium, partners were approached that have experience with managing data assets, e.g. because they operate information management systems or work on data management policies. The workshop participants are thus able to evaluate the value that data management policy issues have for the whole EHRI consortium and beyond. Six EHRI partners were represented: CDEC, WL, DANS, USHMM, Yad Vashem and NIOD. The following persons / EHRI partners contributed to the workshop:

<table>
<tr> <th> Laura Brazzo </th> <th> Fondazione Centro di Documentazione Ebraica Contemporanea (CDEC) </th> </tr>
<tr> <td> Jessica Green </td> <td> The Wiener Library (WL) </td> </tr>
<tr> <td> René van Horik </td> <td> Data Archiving and Networked Services (DANS) </td> </tr>
<tr> <td> Tonke de Jong </td> <td> Data Archiving and Networked Services (DANS) </td> </tr>
<tr> <td> Michael Levy </td> <td> United States Holocaust Memorial Museum (USHMM) </td> </tr>
<tr> <td> Effi Neumann </td> <td> Yad Vashem (YV) </td> </tr>
<tr> <td> Annelies van Nispen </td> <td> NIOD Institute for War, Holocaust and Genocide Studies (NIOD) </td> </tr>
<tr> <td> Frank Uiterwaal </td> <td> NIOD Institute for War, Holocaust and Genocide Studies (NIOD) </td> </tr>
</table>

## Workshop Day 1: Data Management Planning Introduction

The workshop started with an introduction in which the context and strategy of the activities related to this workshop were described. This introduction to a large extent contains the information given in Chapter 2. A couple of remarks were made by the workshop participants:

1. Data management planning (DMP) is in the first instance directed towards the researchers, who elaborate on how they deal with the data they use and create. In EHRI the data provider perspective is more prominent than this data user perspective. In EHRI a great number of historians are active who work in a traditional way. DMP in EHRI must be directed at the archives rather than at the users.
2. Data re-use and the management of licenses should also be added to the topics (see page 6).
3. Participants have experienced that certification of repositories can be very expensive. It can, however, play an important role in educating / training the people in the organisation on policies with regard to long-term access to digital assets.
## DMP aspect 1: FAIR data principles

Presenter: Peter Doorn (Data Archiving and Networked Services (DANS))
Title of presentation: "FAIR Data Assessment of Datasets in Trusted Digital Repositories"
Link to slides: _https://b2drop.eudat.eu/s/atw1lonNKULP9yp_

Remarks and comments by the workshop participants:

1. The assignment and management of persistent identifiers turns out to be a very important component of data that is "FAIR". Several practical questions concerning this were exchanged, e.g. on how to get PIDs for objects. A solution for getting PIDs for publications is to become a member of DataCite 9 (or join a national member of DataCite). The importance of PIDs for the EHRI infrastructure was confirmed a couple of times.
2. Although the complete implementation of the FAIR principles is considered too much for the EHRI consortium as a whole, the principles can still be considered good guidelines. The FAIR data assessment tool (currently a prototype) might be relevant for EHRI at a later stage.
3. Some EHRI partners manage / create Linked Open Data. This type of data is by definition of high quality in terms of FAIR criteria.
4. As privacy protection / license issues are very important in the EHRI consortium, several FAIR criteria (e.g. Findable) will not be fully supported.
5. For EHRI the "data scope" is archival collections rather than research data sets. This perspective is probably new in the FAIR data community and EHRI might consider putting this perspective more to the forefront.

## DMP aspect 2: Certification of TDRs

Presenter: Heiko Tjalsma (Data Archiving and Networked Services (DANS))
Title of presentation: "Certification of TDRs"
Link to slides: _https://b2drop.eudat.eu/s/Lsdi8LMuFSpPUMr_

Remarks and comments by the workshop participants:

1. Outsourcing of services (e.g. archival storage) does not influence the certification situation for an archive. The service provider now has to "prove" that the certification requirements are met. So the certification guidelines might have to be assessed across several service providers.
2. It might be the case that not all aspects of the certification are public (e.g. for security reasons). The reviewer, obviously, must have access to all relevant information in order to assess the quality of the repository. This issue is certainly applicable in EHRI.
3. The certification requirements are of value in a "self-assessment" process. Repositories can evaluate several aspects of the organisation of the data without any consequence.
4. EHRI is in the process of becoming an ERIC. In other ERICs (e.g. CLARIN) the certification of repositories plays a role in improving the quality of the repositories involved. Still, the majority of the repositories in CLARIN do not have a certification. The way other ERICs engage with repository certification should be taken into consideration by EHRI.
5. The EHRI Content Provider Agreement (CPA) was discussed in relation to some certification rules. The CPA is not signed by all EHRI partners, and the formulation of the agreement has also undergone several versions. An important issue is the ownership of the data, especially after the transfer of the data from the archive to the EHRI portal. This issue is not unambiguously settled in the EHRI project.
6. The assessment of certification guidelines is not a trivial activity. Carrying out a full certification process is not possible for most EHRI partners.
Within EHRI, in the first instance, raising awareness of the importance of assessing the features of the data management infrastructure is of value, as it will provide an overview of the aspects that are relevant for the management of digital objects.

## Information Management at the National Archives

Presenter: Margriet van Gorsel (Dutch National Archives)
Title of presentation: "Digital Information Management"
Link to slides: _https://b2drop.eudat.eu/s/xCXBzYytcpqpgOt_

Remarks and comments by the workshop participants:

1. The design and management of an "information chain" as presented is seen as relevant and bears some resemblance to the EHRI project (several data providers and one service provider). The process is based on standards, but exceptions do occur.
2. The National Archives repositories have received the Data Seal of Approval (to be replaced by the "CoreTrustSeal" in the future).

## DMP aspect 3: Data infrastructure services

Presenter: René van Horik (Data Archiving and Networked Services (DANS))
Title of presentation: "Data infrastructure services of the EUDAT CDI"
Link to slides: _https://b2drop.eudat.eu/s/OclvzCfpts7nAgJ_

Remarks and comments by the workshop participants:

1. It seems several tools and services are available for data management (e.g. the ones provided by the EUDAT CDI). The threshold to use some of them is quite high: one must be trained. The sustainability of the services is also an issue: what will happen with the services once the EUDAT project is over? A follow-up initiative has started (European Open Science Cloud - EOSC), but it is not clear what its potential value is for EHRI.
2. It is important to make a distinction between a data infrastructure and a research infrastructure. Some services of EUDAT are of value. B2DROP is already used in the EHRI project as an exchange service for sample data sets.
3. The discussion of the store, share, archive, etc. services of EUDAT also gives insight into the EHRI information architecture, e.g. on issues such as the updating of records and the long-term perspective of the EHRI portal. We should start with an inventory of what is needed, prioritise, and then plan the next steps.
4. Services that deal with privacy protection and authorized access (identity provision by B2ACCESS) are relevant for EHRI.

## Workshop day 2: Archiving, Access and Policies at the EHRI institutes

In relation to long-term access to digital objects, three aspects were discussed: (1) archiving, (2) access and (3) policies. Each workshop participant provided input on each aspect.

The "archiving" aspect concerns issues related to the storage of digital objects by organisations that curate digital Holocaust objects. Questions related to this aspect are:

1. What kind of digital objects do you manage?
2. Where do you store these objects?
3. How do you monitor the quality of the digital objects?
4. Details on the information systems you use

The "access" aspect concerns issues related to the management of access to digital objects. Questions related to this aspect are:

1. How do you manage the access to the digital objects?
2. How do you protect the objects?
3. Details on licenses / legal issues

The "policies" aspect concerns issues related to the formulation of policies to provide long-term access to digital objects. Questions related to this aspect are:

1. Which stakeholders are involved in managing the digital collection?
2. Details on the business model to manage digital assets
3. With whom do you cooperate?
4. How do you check / monitor the quality of your assets?
Each workshop participant (representing an organisation that manages digital Holocaust objects) provided input on the long-term access aspects. See below.

## The Wiener Library (Jessica Green)

**Concerning _archiving_:**

The Wiener Library is currently undertaking an ambitious digital transformation, as we move towards more sustainable and efficient processes for creating, managing, preserving and accessing digital records. Over the last few decades, the Library's digital holdings have grown to include approximately 18 TB of digital material, including audio and video files, databases, photographs, document scans, and more. Some of this material has been digitised in-house or by third parties from the original analogue or paper formats, while a growing percentage of it is born-digital. In addition to being a digital copy holder of the International Tracing Service (ITS), the Library has accepted a number of large digital collections over the last year, including, most notably, a digital copy of the UN War Crimes Commission Archive. The Library expects a growth of digital collection donations over the next few years in a range of different formats, including large databases, audiovisual collections, and sets of PDF/TIFF files with a corresponding Excel spreadsheet of metadata. In order to support this type of growth, the Library is taking steps towards improving and standardising methods for accessioning, cataloguing, making accessible, and preserving these digital collections.

One of the Library's short-term goals is to finalise a digital preservation policy to ensure long-term preservation of digitised materials, born-digital material, and large digital collections. There is currently a range of different file formats stored in a number of different locations on our shared servers, individual email accounts, a few hard drives and digital tape. The Library is currently working towards gathering all scans of collection items, born-digital material, and digital collections into a separate shared drive. Files in this drive will then be organised by collection type and renamed according to their collection ID number. This will make linking between catalogue records and their digital objects more straightforward. In addition, the files will be moved to a dedicated server in a secure data centre this October.

In order to ensure files are accessible for the long term and to mitigate the effects of obsolescence, work is being done to forward-migrate existing files to file formats that are considered better for long-term preservation. This includes converting JPEGs to TIFFs and VLC media files to MP4s. The digital preservation policy will also cover future forward-migration of preservation files, multiple methods for backups, and running checksums. The Library recognises that digital preservation is an ongoing activity that is never complete; taking these steps and continuously reviewing/updating our policy will help the Library bring itself in line with best practice for long-term digital preservation. Since the Library can learn from others and help others learn, we are using guidelines from the Digital Preservation Coalition (DPC) and other professional bodies to inform our decisions, and we plan to share our policies and procedures with EHRI and other interested bodies. The end goal is for digital preservation practices to be as embedded into the Library's daily work as the physical preservation of its collections.
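As a sketch of what the JPEG-to-TIFF forward-migration step mentioned above might look like in practice, using the Pillow imaging library (the folder names are hypothetical, and conversion cannot restore detail already lost to JPEG compression; it only moves the content into a more preservation-friendly container):

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("collections/jpeg_access_copies")        # hypothetical layout
DST = Path("collections/tiff_preservation_masters")
DST.mkdir(parents=True, exist_ok=True)

for jpg in sorted(SRC.glob("*.jpg")):
    with Image.open(jpg) as im:
        # LZW compression is lossless, so nothing further is discarded.
        im.save(DST / (jpg.stem + ".tif"), format="TIFF", compression="tiff_lzw")
```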
**Concerning _access_:**

The Wiener Library currently provides access to our digital holdings through a range of different information systems and is working to improve the searching, retrieval, and display of these digital objects for its staff and users. Access to ITS is provided to researchers on two dedicated terminals in our Reading Room using the OusArchiv database. Before using these terminals, researchers have to attend a training session and make appointments in advance. Our ITS Archive Researcher is on hand to help people use and search this database, as well as to conduct research for people unable to visit the Library themselves. Three other dedicated terminals provide researchers with read-only access to some of our digital collections (including the UNWCC archive), as well as video and audio recordings via segmented drives on our server. To prevent people from downloading files, the Library has disabled internet access and USB ports on the Reading Room terminals that have access to this segmented drive.

A small number of photographs are available to view on our online catalogue, Soutron. Currently the only way to attach images to our catalogue records is to upload thumbnail versions of the images into the catalogue database directly. This is an unsustainable model for providing online access to our digital materials, since the more files, and the larger the files, the slower the entire catalogue becomes. In order to provide more materials online and at our dedicated terminals, the Library is exploring the implementation of a digital viewer that would link high-resolution digitised images of documents and photographs to their relevant catalogue records. This would allow for a richer user experience, including zooming in and out of a photograph, and would also help to comply with copyright and data protection regulations by restricting access to digital files based on IP address or logins.

The main system Library staff currently use to access our digital objects is Adobe Bridge. As materials were digitised, descriptive information was added directly into the embedded metadata of the images. The main method for finding a digital object is to search Bridge for keywords and search terms that might be included in this embedded metadata. The Photo Archivist is currently working on turning this embedded metadata into ISAD(G)-compliant catalogue records in Soutron, so that our users can access, search and find these images as well. All Library staff have recently undergone a mandatory copyright training session. Our Collections Team is evaluating our policies around granting licenses for the use of digital images as part of the follow-up to this. In developing our access policies, the Library aims to make accessible as many of our digital objects as possible, taking into account financial, technical, and copyright/data protection restrictions. This is based on the belief that digital users are just as valid and important to the Library as physical users; as such, they should have access to rich material and metadata online as well as onsite.

**Concerning _policies_:**

The Wiener Library is currently developing its policies around the management of, and long-term access to, digital objects. The requirements for digital objects are just as important as for physical objects, but more challenging to embed into our daily practices due to technological and budgetary constraints, as well as a deficit of digital skills.
Although this shift towards managing and providing access to more digital objects brings with it certain challenges, it is something that benefits our staff and users greatly and is becoming more and more expected over time. Policies are being developed with both the needs of our users and staff in mind, as well as keeping an eye towards future trends and best practices. Sharing policies and business models for managing digital assets across like organisations would be helpful to this process.

## USHMM (Michael Levy)

**Concerning _archiving_:**

USHMM digital archiving practice:

* Intensive digitization began in 2006 with magnetic media and oral history. International copy archive projects are more and more digital. Digitization of microfilm, historical film, paper archives, photos
* 50 million+ digital files, growing
* ~800 TB, growing
* State-of-the-art NAS with erasure coding
* Tape backup. Offsite secure storage of backup
* Inventories, checksums. Currently using open source tools to crawl and store. md5 and sha1 are utilized. The process takes many months (see the fixity sketch after this list)
* Compare recalculated inventories and checksums to stored checksums every ~24 months
* Currently engaged in an RFP process intended to procure a commercial digital preservation platform, to automate digital preservation activities. Follow the OAIS model, ISO 14721:2012
* Web archiving of our own institution's digital output
* Informal relationship with the US Government Publishing Office (GPO). GPO uses the nonprofit Internet Archive's "Archive-It" service for archiving web and social media output

What are the requirements for Holocaust archives in general?

* It is desirable for every Holocaust archive to begin to engage with digital preservation activities
* Recognize that digital assets with long-term value require active preservation measures

Suggestions for implementation:

* Training, workshops, other educational processes
* Blog posts
* Other educational efforts
* Start with small steps: "Do something"
* Self-assessment
* The "NDSA Levels of Digital Preservation" may be a good place to start for many organizations and practitioners:
  * Storage and Geographic Location (Levels 1 to 4)
  * File Fixity and Data Integrity (Levels 1 to 4)
  * Information Security (Levels 1 to 4)
  * Metadata (Levels 1 to 4)
  * File Formats (Levels 1 to 4)
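A minimal version of the fixity check described above (recompute md5/sha1 digests and compare them against a stored inventory) could look as follows in Python; the manifest path and layout are assumptions for illustration, not USHMM's actual tooling:

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("manifest.json")  # hypothetical: {"path/to/file": {"md5": "...", "sha1": "..."}}

def digests(path: Path) -> dict:
    """Stream the file once, feeding both hashes chunk by chunk."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return {"md5": md5.hexdigest(), "sha1": sha1.hexdigest()}

for name, expected in json.loads(MANIFEST.read_text()).items():
    if digests(Path(name)) != expected:
        print("fixity failure:", name)  # candidate for repair from the backup copy
```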
**Concerning _access_:**

* Collections Search
* Provide access to all catalog records and to "use copies" of media - on the web if possible, locally only if not allowed; certain materials are highly restricted or embargoed
* Ensure everything on the web is searchable by web crawlers, e.g. Google
* User studies, improve UX
* Permanent URLs or HTTP 301 redirects (so old links to records still work)
* Handles - contracts with users for a permanent place on the web (we hope to implement this at USHMM eventually)
* Constant improvement
* Technology changes. Obsolescence. Security standards. User expectations continually increase

What are the requirements for Holocaust archives in general?

* The broadest allowable access is desirable: increased access leads to broader use and interest, strengthens the field of Holocaust research overall, and thus leads to continued or increased support

Suggestions for implementation:

* Continued support, strengthening and broadening of the EHRI Portal and digital tools

**Concerning _policies_:**

USHMM Digital Preservation and Access Policies:

* Competing priorities include:
  * Quality (e.g. file formats, resolutions, bit rates)
  * Quantity / funding
  * Preservation, redundancy, resources
* Every institution is unique
* Quantity: file sizes affect preservation operations (e.g. time) and redundancy
* USHMM distinguishes between "asset" and "instance" to help balance resource priorities:
  * Asset = the only trustworthy copy (e.g. magnetic media or fragile originals, born digital); irreplaceable, and therefore warrants the utmost attention
  * Instance = surrogate of a durable physical item, or a derivative; replaceable at a cost

What are the requirements for Holocaust archives in general?

* Institutions are each unique and must develop digitization policies according to their institutional responsibilities and requirements

## NIOD (Annelies van Nispen & Frank Uiterwaal)

**Concerning _archiving_:**

* NIOD has digitized archives, digitized photos, research databases and audiovisual material, and manages this material with Dutch partners;
* The material is stored on the KNAW's infrastructure or with dedicated partners such as DANS or the Netherlands Institute for Sound and Vision;
* These partners are specialized in digital preservation and we trust them. The KNAW infrastructure needs to be of high quality (SLA: backup, disaster recovery);
* Specifications on the quality of the objects are project-based (Metamorfoze Digitisation Programme)

What are the requirements for Holocaust archives in general? Hybrid archives with multiple sorts of digital objects; a one-solution-fits-all approach will not work.

**Concerning _access_:**

* Different digital objects are managed by specialized partners, e.g. research data/testimonies (DANS) and audiovisual material (NIBG);
* External access is controlled by:
  * the Archiefwet (the Dutch Public Records Act);
  * privacy protection laws;
  * copyright.
* Internal access to the storage of digital objects is controlled (based on someone's role within the organisation);
* Persistent identifiers need to be implemented within a few years.

## CDEC (Laura Brazzo)

CDEC has started a project to integrate its archival description databases, research databases, digitized (and born-digital) objects (papers, photographs, tape and video recordings), as well as its library catalog. The goals of the project are to overcome fragmented information, to avoid duplicated resources and to implement the missing measures for the long-term preservation of data. The project can be summarized as "integration", "interoperability" and "preservation". The digital asset management system used is the Linked Data platform "openDams/Bygle" (created by the company "regesta.exe"), based on W3C recommendations. It works as a data integration layer allowing the integration of heterogeneous data sources. Data from research databases and the library's catalog have been converted and imported into the system 10. Archival descriptions and digitized material are managed by xDams, an open-source XML web-based platform that is integrated in openDams. It works as the main data provider of openDams. The metadata are stored in xDams in native XML databases. To encode the metadata in XML format, the EAD data model is used for the description of the archival resources. The EAC-CPF data model is used for the description of authority records. Authority files (managed through openDams) are Uniform Resource Identifiers (URIs) linked to the archival descriptions by a lookup function. Linked resources are encoded in the related XML EAD files. Digitized material (papers, photographs, audio and video) is attached to the appropriate archival description records.
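Because every openDams record is natively a URI (see footnote 10 below), its descriptions can be retrieved with a standard SPARQL 1.1 Protocol request. CDEC's triplestore is intranet-only, so the endpoint in the Python sketch below is a placeholder; the queried URI is the example person record cited in the footnotes.

```python
import requests

ENDPOINT = "http://example.org/sparql"  # placeholder: the CDEC endpoint is intranet-only
QUERY = """
SELECT ?p ?o
WHERE { <http://dati.cdec.it/lod/shoah/person/251> ?p ?o }
LIMIT 20
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], "->", row["o"]["value"])
```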
The xDams Platform enables the creation of an XML repository containing the metadata needed for the description of all the digital attachments referenced in the databases. To ensure the consistency and validity of the referenced digital objects, the METS 11 standard is adopted. EAD and EAC-CPF are used in conjunction with METS in descriptive and administrative metadata contexts. The next scheduled step is the transformation of the archival descriptions, currently in XML, into RDF format (using the OAD - Archival Description Ontology as data model) in order to support the interoperability as well as the sustainability of the data.

Compared to the beginning of the project (in 2013), significant progress has been made. Firstly, we have set up a basic architecture for data management. A lot of work, though, still needs to be done in order not to lose resources in the coming years. One task is definitely the upgrade of the Quest online journal 12 with a better set of descriptive metadata and the attribution of DOIs as persistent identifiers for each published article. High-resolution master images of the digitized materials are stored in the CDEC storage system. An agreement between CDEC and UCEI (Union of the Italian Jewish Communities) about keeping the backup copy in Rome is under review. High-resolution copies (TIFF, MOV, WAV) are stored on a Network Attached Storage (NAS) system (RAID5 Hot Spare, redundant, access controlled) located at the CDEC Foundation. Data, metadata and low-resolution copies of digitized materials attached to the archival descriptions (JPEG, MP3, MP4) are stored directly by Regesta on a remote virtual machine located off-site.

Notes:

10. … (expressed as so-called URIs) are imported into openDams/Bygle and published in a triplestore accessible through an intranet SPARQL endpoint. Every new item created by openDams is natively a URI (see for example: <http://dati.cdec.it/lod/shoah/person/251> or <http://dati.cdec.it/lod/shoah/place/Milano> [cited 5 October 2017]).
11. Metadata Encoding and Transmission Standard. See: <http://www.loc.gov/standards/mets/> [cited 4 October 2017].
12. Quest. Issues in Contemporary Jewish History. Published by CDEC. See: <www.questcdecjournal.it> [cited 5 October 2017].

## DANS (René van Horik)

**Concerning _archiving_:**

The collection "World War II" has been created by DANS. The collection consists of datasets that have been deposited by researchers. Keywords provided by the data depositors have been used to create the thematic collection. Most of the datasets contain oral history material. Many of these datasets are the result of the national programme "Heritage of the War" (2005-2009), which improved access to the large variety of WWII collections and thereby contributed to the advancement of knowledge of WWII. The metadata are openly accessible and harvestable (by OAI-PMH). The datasets are harvested by several organisations and enriched (for instance by the B2FIND service of EUDAT, see Section 2.3.3). Authentication of the datasets is realised by their persistent identifiers and metadata. If applicable, an additional streaming service is available so that the interview can be watched. The average size of an interview is 2 GB. For the archival storage of the data a third party is used.
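Since the DANS metadata are harvestable via OAI-PMH, a harvester needs nothing more than HTTP and XML parsing. A minimal Python sketch is shown below; the base URL is an assumption for illustration, and a real harvester would also follow OAI-PMH resumption tokens to page through the full set.

```python
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://easy.dans.knaw.nl/oai"  # assumed endpoint, for illustration only
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(
    BASE_URL,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=60,
)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for record in root.iter(OAI + "record"):
    title = record.find(".//" + DC + "title")
    print(title.text if title is not None else "(no title)")
```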
**Concerning _access_:**

Access to the data objects is managed by an information system (http://easy.dans.knaw.nl) and is based on a user license. Three types of access can be distinguished. The first is "Open Access", which comes in two versions: one version requires registration of the user, whereas the other version does not. The second type of access is "Restricted Access", which implies that the user has to ask the owner of the data for permission to get access. This request for access can be made via the information system. After the permission is granted, the user can access the data. The third type of access is classified as "Other Access" and can take several forms, e.g. an embargo on the access or special conditions for access. Secure storage of the data sets is based on a data management policy in which third-party services (e.g. a data centre) are involved. All metadata of the datasets can be harvested.

**Concerning _policies_:**

The policy concerning data management at DANS can be characterized as "Open if possible, closed if necessary". The repository has a DSA seal as well as a NESTOR seal. Concerning data formats, DANS has formulated a "preferred format policy". Preferred formats are file formats of which DANS is confident that they will offer the best long-term guarantees in terms of usability, accessibility and sustainability. Depositing research data in preferred formats will always be accepted by DANS. Acceptable formats are file formats that are widely used in addition to the preferred formats, and which will be moderately to reasonably usable, accessible and robust in the long term. DANS favours the use of preferred formats, but acceptable formats will in most cases also be allowed.

## Brainstorm and Discussion

The workshop participants decided to create a mindmap to formulate the main outcomes of the workshop. The main topic of the discussion concerned the value of the three data management planning aspects presented on the first day (FAIR data principles, repository certification and the data management services of the EUDAT CDI) for the formulation of a data management policy for the long-term preservation of Holocaust data. The output of the discussion will be used to work on a long-term access infrastructure (Deliverable 13.2) to be delivered in early 2019. The mindmap created can be found in Figure 4.

**Figure 4:** Mindmap of the discussion

# Conclusion and next steps

All in all, a fruitful workshop was held on data management planning for the long-term preservation of Holocaust objects, attended by representatives of organisations in the EHRI consortium that manage and curate digital objects. The first day's goal was to get all participants acquainted with data management building blocks from the research perspective. The second day's goal was to discuss current data management practices at the different institutes. In addition, the roadmap for a long-term access infrastructure (LTA) was discussed. The participants of the workshop agreed on a structure for the LTA. The roadmap consists of three parts: archiving of data, access to data, and policies to implement the LTA. Each part is elaborated on below. The activities carried out to establish EHRI as a permanent legal entity by becoming an ERIC will obviously play an important role in the formulation of the LTA. The activities carried out will be complementary to the work done in relation to the development of the ERIC. The LTA for preserving Holocaust Research Objects will be published as Deliverable 13.2 and is foreseen for month 44 of the EHRI project.

Concerning "archiving", the work on the roadmap for an LTA consists of five aspects:

1. Classification of the data that has to be curated by the LTA.
   This also concerns the selection of the data objects that have to be archived for the long term.
2. Alternatives for the archival storage of data objects.
3. An assessment of available data formats for the data objects with respect to their durability.
4. The role of persistent identifiers in the LTA: which identifier scheme can be used in which situation, as well as practical implementation issues.
5. The monitoring of the LTA: the periodic evaluation of the quality of the components of the LTA and possible procedures to keep the archiving services of the LTA up to date.

Concerning the "access" to data objects, the LTA roadmap will pay attention to three issues:

1. Legal issues, such as licensing models and ways to protect sensitive Holocaust archives and personal information according to the latest European (and American for USHMM / Israeli for Yad Vashem) legislation.
2. Secured access. This consists of AAI (Authentication and Authorisation Infrastructure) services, such as the identity management of users of data objects.
3. Search engine optimization (SEO) that uses web analytics (statistics) to improve the access to data objects.

Concerning "data management policies" for the LTA roadmap, five aspects will be covered:

1. Security policies. Coverage of aspects such as data protection, data access conditions and the management of user data.
2. Cooperation models. This activity will evaluate how the current EHRI Content Provider Agreement can be used to formalise cooperation with stakeholders.
3. Attention is paid to business models that facilitate the long-term operation of the LTA.
4. The role of certification frameworks is part of the quality control of the LTA.
5. Long-term viability of the LTA is the last aspect of data management policies for the LTA roadmap.

The workshop participants discussed the process to define the LTA roadmap and came to a proposal for a timeline. The proposed activities are:

* The certification of data repositories is considered an important component of the LTA roadmap. This workshop has paid attention to the topic of Trusted Digital Repositories by discussing certification frameworks, such as the "Data Seal of Approval" (which has been succeeded by the "CoreTrustSeal"). The topic of Trusted Digital Repositories is the subject of Deliverable D13.4. We are considering organising a workshop at the EHRI general partner meeting in June 2018 so that all partners in EHRI can be informed about aspects of repository certification.
* Based on the outcomes of the DMP and TDR workshops, the LTA roadmap will be defined and created in December 2018 as Deliverable D13.2. All participants of the workshop have confirmed that they will, given available resources, contribute to the content of the roadmap.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0355_SEA-TITAN_764014.md
# INTRODUCTION

The SEA TITAN project participates in the Pilot on Open Research Data (ORD) launched by the European Commission (EC) along with the H2020 programme [1]. This pilot is part of the Open Access to Scientific Publications and Research Data programme in H2020. The goal of the programme is to foster access to research data generated in H2020 projects. The use of a Data Management Plan (DMP) is required for all projects participating in the Open Research Data Pilot.

Open access is defined as the practice of providing on-line access to scientific information that is free of charge to the reader and that is reusable. In the context of research and innovation, scientific information can refer to peer-reviewed scientific research articles or research data. Research data refers to information, in particular facts or numbers, collected to be examined and considered and as a basis for reasoning, discussion, or calculation. In a research context, examples of data include statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. The focus is on research data that is available in digital form.

The Consortium strongly believes in the concepts of open science, and in the benefits that the European innovation ecosystem and economy can draw from allowing the reuse of data at a larger scale. Furthermore, there is a need to gather experience in wave technology, especially power performance and operating data. In fact, there has been very limited experience in wave energy to date, and such experience is essential in order to fully understand the challenges in device performance and reliability. The limited data and experience that currently exist are rarely shared, as testing is partly private-sponsored. This project proposes to remove this roadblock by delivering, for the first time, open-access, high-quality power take-off (PTO) performance, reliability and operational data to the wave energy development community.

Nevertheless, data sharing in the open domain can be restricted where there is a legitimate reason to protect results that can reasonably be expected to be commercially or industrially exploited. Strategies to limit such restrictions will include anonymizing or aggregating data, agreeing on a limited embargo period or publishing selected datasets.

## Purpose of the Data Management Plan

The purpose of the DMP is to provide an analysis of the main elements of the data management policy that will be used by the Consortium with regard to the project research data. The DMP covers the complete research data life cycle. It describes the types of research data that will be generated or collected during the project, the standards that will be used, how the research data will be preserved and what parts of the datasets will be shared for verification or reuse. It also reflects the current state of the Consortium agreements on data management and must be consistent with exploitation and IPR requirements.

The DMP is not a fixed document, but will evolve during the lifespan of the project, particularly whenever significant changes arise such as dataset updates or changes in Consortium policies. This document is the first version of the DMP, delivered in Month 3 of the project. It includes an overview of the datasets to be produced by the project, and the specific conditions that are attached to them. The next versions of the DMP will go into more detail and describe the practical data management procedures implemented by SEA TITAN.
At a minimum, the DMP will be updated in Month 18 (D8.6) and Month 36 (D8.7) respectively. This document has been prepared by taking into account the “Template horizon 2020 data management plan (DMP)” [Version 1.0 of 10 October 2016] and the additional considerations described in ANNEX I: KEY PRINCIPLES FOR OPEN ACCESS TO RESEARCH DATA.

## Research Data Types in SEA TITAN

For this first release, the DMP highlights the data types expected to be produced during the SEA TITAN project life span; these datasets will be revised in the next iterations of the document if found redundant or insufficient. According to such considerations, Table 1 reports a list of indicative types of research data that SEA TITAN will produce. This list may be adapted with the addition or removal of datasets in the next versions of the DMP to take into consideration the project developments. A detailed description of each dataset is given in the following sections of this document.

<table> <tr> <th> # </th> <th> Dataset reference </th> <th> Lead partner </th> <th> Related WP(s) </th> </tr> <tr> <td> 1 </td> <td> DS_AMSRM_Performance </td> <td> CIEMAT </td> <td> WP2, WP3, WP4, WP5 </td> </tr> <tr> <td> 2 </td> <td> DS_AMSRM_Feasibility </td> <td> CIEMAT </td> <td> WP2, WP3, WP4, WP5 </td> </tr> <tr> <td> 3 </td> <td> DS_Cooling_System_performance </td> <td> CIEMAT </td> <td> WP6 </td> </tr> </table>

### Table 1. SEA TITAN types of data

Specific datasets may be associated with scientific publications (i.e. underlying data), public project reports and other raw data or curated data not directly attributable to a publication. The policy for open access is summarized below. Research data linked to exploitable results will not be put into the open domain if doing so would compromise their commercialization prospects or if they have inadequate protection, which is a H2020 obligation. The rest of the research data will be deposited in an open access repository.

When the research data is linked to a scientific publication, the provisions described in ANNEX II: SCIENTIFIC PUBLICATIONS will be followed. Research data needed to validate the results presented in the publication should be deposited at the same time for “Gold” Open Access (_Authors make a one-off payment to the publisher so that the scientific publication is immediately published in open access mode_) or before the end of the embargo period for “Green” Open Access (_Due to the contractual conditions of the publisher, the scientific publication can undergo an embargo period of up to six months from the publication date before the author can deposit the published article or the final peer-reviewed manuscript in open access mode_).

Underlying research data will consist of selected parts of the general datasets generated, and for which the decision of making that part public has been made. Other datasets will be related to any public report or be useful for the research community. They will be selected parts of the general datasets generated, or full datasets, and will be published as soon as possible.

## Responsibilities

Each SEA TITAN partner has to respect the policies set out in this DMP. Datasets have to be created, managed and stored appropriately and in line with applicable legislation. The Project Coordinator has a particular responsibility to ensure that data shared through the SEA TITAN website are easily available, but also that backups are performed and that proprietary data are secured.
WEDGE GLOBAL, as WP1 leader, will ensure dataset integrity and compatibility for its use during the project lifetime by the different partners. Validation and registration of datasets and metadata is the responsibility of the partner that generates the data in the WP. Metadata constitutes an underlying definition or description of the datasets and facilitates finding and working with particular instances of data. Backing up data for sharing through open access repositories is the responsibility of the partner possessing the data. Quality control of these data is the responsibility of the relevant WP leader, supported by the Project Coordinator. If datasets are updated, the partner that possesses the data has the responsibility to manage the different versions and to make sure that the latest version is available in the case of publicly available data. WP1 will provide naming and version conventions. Last but not least, all partners must consult the concerned partner(s) before publishing data in the open domain that can be associated with an exploitable result.

# DATASETS DESCRIPTION

## DS_AMSRM_PERFORMANCE

Along the AMSRM development, the representative variables to be obtained during the different design and testing procedures are separated into two stages: calculation of specifications and experimental test performance.

**Calculation of the specifications of the PTO**

During the simulation of the system, corresponding to WP2, the data obtained to define and place the linear generator in the different WEC technologies will be:

* Available space (length, width, height)
* Maximum stroke
* Maximum velocity
* Maximum force

After evaluating the WECs in the different scenarios proposed for each WEC technology, different values of force, velocity and stroke will be obtained. These data will be private, shared only internally among the project partners, since they are sensitive data corresponding to the technologies involved.

**Experimental test performance**

Finally, during the laboratory test performance, accomplished in WP5, a set of data will be collected for each of the scenarios tested, corresponding to one type of WEC technology and a certain sea location, reproducing a certain sea state:

* Force values as a function of the current applied to the generator phases, for different velocities and current levels.
* Output power supplied to the grid as a function of the force and velocity. Mechanical power will also be calculated, obtaining a complete global efficiency map.

These data will be mostly public, since they are considered part of the results obtained from the project and part of the dissemination plan.

## DS_AMSRM_FEASIBILITY

This data set is obtained as a result of the design stage of the PTO solution. Based on that solution, a PTO module will be defined to develop a prototype. During the design of the linear generator, the power converters and the control platform, corresponding to WP3, different variables will be defined as a result of the calculations:

* Based on Finite Element Method (FEM) analysis, a force map depending on the position, velocity and current level. Force validation will demonstrate the feasibility of the proposed solution.
* Losses provided by the losses model, depending on the position, velocity and current level.
* Expected efficiency map depending on the position, velocity and current level.
The losses model and efficiency map will allow the development of an energy matrix to explore the economic feasibility of the system when it is applied to the different WEC technologies.

* Thermal behaviour will be analysed across the different operating situations defined, validating the feasibility of the system.

These data will be private; only some of them will be shared internally among the project partners, since they are sensitive data corresponding to the know-how of the machine.

## DS_COOLING_SYSTEM_PERFORMANCE

Related to the thermal behaviour of the system, and considering that the PTO will be evaluated for different WEC technologies and sea states, the time evolution of temperature will be analysed in those scenarios at the following points:

- At the linear generator: temperature at the machine coils (at least two measurements), the translator magnetic circuit and the bearings (at least two measurements).
- At the power electronic converters: IGBT case, water cooling fluid, ambient.

Related to the SLSG, since only calculation and preliminary design are accomplished during the project, no thermal data will be provided. However, the superconducting solution requires, as one of the main results of the solution definition, a cryostat, the system in charge of bringing the machine down to the required low temperature. In any case, only an engineering solution will be defined; no results or data sets will be produced.

# STANDARDS AND METADATA

This aspect will be defined as part of Task 7.3, Standardization activities: the identification and analysis of related existing standards and the contribution to ongoing and future standardization developments from the results of the project. The participation of a Standardization Body (UNE) provides the relevance, knowledge and experience in the standardization system and its internal procedures. Other project partners will provide the technical support for the development of this task. An analysis of the applicable standardization landscape is expected to be completed by M6, and the contribution to ongoing and future standardization developments will be defined in detail by M36. As such, this part of the document will be updated as soon as more information is available to the consortium.

# DATA SHARING

During the lifecycle of the SEA-TITAN project, datasets will be stored and systematically organized in a database. An online data query tool will be operational by Month 18 and open for public dissemination by Month 24. The database schema and the queryable fields will also be publicly available to the database users as a way to better understand the database itself.

In addition to the project database, relevant datasets will also be stored in ZENODO [5], which is the open access repository of the Open Access Infrastructure for Research in Europe, OpenAIRE [4]. The data access policy will be unrestricted if no confidentiality or IPR issues are expected by the relevant Work Package leader in consensus with the Project Coordinator. All collected datasets will be disseminated without an embargo period unless linked to a green open access publication. Otherwise, in order to protect the commercial and industrial prospects of exploitable results, aggregated data will be used to limit this restriction. The aggregated dataset will be disseminated as soon as possible. In the case of the underlying data of a publication, this might imply an embargo period for green open access publications.
Data objects will be deposited in ZENODO under:

* Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH.
* Use and reuse of data permitted.
* Privacy of its users protected.

# ARCHIVING AND PRESERVATION

The SEA-TITAN project database will be designed to remain operational for at least 5 years after the project end. By the end of the project, the final dataset will be transferred to the ZENODO repository, which ensures sustainable archiving of the final research data. Items deposited in ZENODO will be retained for the lifetime of the repository, which is currently the lifetime of the host laboratory CERN, which has an experimental programme defined for at least the next 20 years.

Metadata and persistent identifiers in Zenodo are stored in a PostgreSQL instance operated on CERN’s Database on Demand infrastructure with a 12-hourly backup cycle, with one backup sent to tape storage once a week. Metadata is in addition indexed in an Elasticsearch cluster for fast and powerful searching. Metadata is stored in JSON format in PostgreSQL in a structure described by versioned JSONSchemas. All changes to metadata records on Zenodo are versioned and happen inside database transactions. In addition to the metadata and data storage, Zenodo relies on Redis for caching and on RabbitMQ and Python Celery for distributed background jobs.
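As a concrete illustration of the deposit workflow described above, the following minimal sketch uses Zenodo's documented REST API (the `/api/deposit/depositions` endpoint) to create a deposition, upload a data file and attach descriptive metadata. The access token, file name and metadata values are placeholders, not project decisions.

```python
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"  # placeholder, not a project credential

# Create an empty deposition.
r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# Upload a data file to the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("DS_AMSRM_Performance_v1.csv", "rb") as fp:  # hypothetical file name
    r = requests.put(f"{bucket_url}/DS_AMSRM_Performance_v1.csv",
                     data=fp, params={"access_token": TOKEN})
    r.raise_for_status()

# Attach minimal descriptive metadata before publication.
metadata = {"metadata": {
    "title": "SEA TITAN AMSRM performance dataset (illustrative)",
    "upload_type": "dataset",
    "description": "Efficiency map measured during WP5 laboratory tests.",
    "creators": [{"name": "Surname, Name", "affiliation": "CIEMAT"}],
}}
r = requests.put(f"{ZENODO_API}/{deposition['id']}",
                 params={"access_token": TOKEN}, json=metadata)
r.raise_for_status()
```

Actual publication requires a final call to the deposition's `publish` action, which in practice would only be triggered after the quality control and IPR checks described in the Responsibilities section.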
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0358_Bio4Comp_732482.md
1: a) 2D architectural elements: Layout files (GDSII format), designed using KLayout, Raith 150 e-beam lithography software Version 5.0 (proprietary format; will be exported to GDSII), LayoutEditor or L-edit software. b) 3D architectural elements: we use Autodesk Inventor to prepare the 3D model data which represents the error-free junctions and potentially the linking structures to the e-beam structures made by FhG ENAS. The 3D data from Autodesk Inventor is then loaded into Rhinoceros3D (McNeel), where the calculation of the hatching and contour vectors takes place. 3D model data is always saved in either STL or STP file format, which can be opened by several open-source 3D viewers and 3D editors. Additionally, any 2D cross-section of the generated data can be stored in DXF or GDSII formats, which can be opened by open-source software such as KLayout.

2: Layout files (GDSII format), designed using KLayout, Raith 150 e-beam lithography software Version 5.0 (proprietary format; will be exported to GDSII) and LayoutEditor or L-edit software.

Size: < 50GB

Utility: researchers within biocomputation, bio-nanotechnology and FET-sensing

**1.4 Microscopy Data**

Purpose 1: SEM images for quality control during fabrication of architectural elements and biocomputation devices

Purpose 2: Fluorescence microscopy images for monitoring device operation

Types, formats and origin of data collected:

1: TIFF images, obtained with SEM microscopes.

2: TIFF images, Metamorph stacks and Nikon ND2 files, obtained with Metamorph and Nikon AR imaging software; Hamamatsu cxd files obtained using HCImage software.

3: OME-TIFF images obtained with a Photometrics Prime CMOS camera

Size: 1-5 TB

Utility: researchers within Bio4Comp

**1.5 Processing and fabrication data**

Purpose 1/3/4: Process flow plans and parameter sets for fabrication of each individual sample.

Types, formats and origin of data collected 1/3/4: For EBL control, the following input data formats are permissible: GDSII, OASIS, DXF, CIF. Using an appropriate design software such as LayoutEditor or L-edit, conversions between different data formats are possible. For galvo scanner control used in TPA, STL files are used. These data are then sliced and filled with vectors that are saved into a proprietary file format (SLL). For direct fabrication with the 3D positioning system, any motion and laser triggering command will be given in a simple script language (G-Code; industry standard). Viewers for these programs are Rhino3D, STL-Viewer and EasyViewSTL. Additionally, any 2D cross-section of the generated 3D data can be stored in DXF or GDSII format, which can be opened by open-source software such as KLayout (or commercial tools like LayoutEditor) for further modification by the collaboration partners. Further, text strings, floating point numbers and vector sets are used. Reporting data can be done in formats like .txt, .xls, or .doc, viewable with NotePad or standard Office software.

Size: < 100 MB

Utility: researchers within Bio4Comp

# FAIR DATA

**2.1 Making data findable, including provisions for metadata:**

* **Outline the discoverability of data (metadata provision)**
* **Outline the identifiability of data and refer to standard identification mechanism. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?**
* **Outline naming conventions used**
* **Outline the approach towards search keywords**
* **Outline the approach for clear versioning**
* **Specify standards for metadata creation (if any).**
**If there are no standards in your discipline, describe what metadata will be created and how.**

1. Scientific documents (1.1) will be provided with keywords, a DOI and online searchability where applicable, as facilitated by the chosen institutional or subject repository.
2. Source code will be searchable through GitHub, use semantic versioning (_http://semver.org/_) and be commented. When possible, the source code on GitHub will be connected to other relevant resources, e.g. scientific documents, with DOIs.
3. Device layouts will be identified through a unique identifier which will also be used during device fabrication to identify individual devices.
4. Microscopy data will be provided with metadata generated automatically by the camera software (machine parameters like scale, camera brand, acquisition time, exposure time, light source, light intensity, and image mode (fluorescence or TIRF)), completed by manually generated metadata (Creator, Title, Subject, Description), readable by bioformats libraries (_http://www.openmicroscopy.org/site/products/bio-formats_) wherever possible.
5. Fabrication data is usually generated in raw formats specific to the individual production facility (TPA, EBL) and will be saved locally in the lab journals assigned to each facility, relating to the samples processed, identified by their id. Data relevant for the processed samples will be copied to the Bio4Comp project folders (institution-wide as well as the TUD OwnCloud service) and translated into data formats accessible to the project partners and the public stakeholders (MS Office, TextEdit).

In general, metadata creation will be in accordance with DataCite, the following six fields being mandatory: Identifier, Creator, Title, Publisher, PublicationYear, and ResourceType. We will encourage the NBC community to also add the recommended fields: Subject, Contributor, Date, RelatedIdentifier, and Description. For microscopy data, other metadata fields will also be required, including the automatically generated metadata (see point 4 above). Before uploading microscopy data to repositories, the researchers will curate the metadata manually in order to make the files discoverable and reusable.

To facilitate discoverability, adding subject keywords (corresponding to the DataCite field Subject) to data and tools will be encouraged. As Bio4Comp is contributing to the development of a new research approach, there is not one controlled vocabulary that exactly matches the scope of the project. Thesauri that might cover parts of the subject scope of the project include the IEEE and INSPEC thesauri for the engineering aspects, and the ACM Computing Classification System for the computing aspects. As a first step, subject keywords will be chosen freely by the members of the NBC community. We will also consult with subject librarians to develop an easy-to-use practice for using a controlled vocabulary under these interdisciplinary circumstances.

We will follow the file and folder naming conventions described in the Stanford Libraries “Best practices for file naming” guide (_https://library.stanford.edu/research/data-management-services/data-best-practices/bestpractices-file-naming_). This includes creating a naming scheme that informs the Bio4Comp researchers about e.g. the project name, researcher name, date of experiment, and version number. To ensure consistency among the researchers, a readme file will be available in common directories.
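To make the conventions above concrete, the sketch below assembles the six mandatory DataCite fields into a metadata record and builds a file name following the project/researcher/date/version pattern recommended by the Stanford guide. The helper names and example values are illustrative, not prescribed by the consortium.

```python
from datetime import date
import json

MANDATORY = ("identifier", "creator", "title", "publisher",
             "publicationYear", "resourceType")

def datacite_minimal(**fields):
    """Collect the six mandatory DataCite fields (plus any recommended
    extras such as 'subject') into one metadata record."""
    missing = [f for f in MANDATORY if f not in fields]
    if missing:
        raise ValueError(f"missing mandatory DataCite fields: {missing}")
    return dict(fields)

def data_filename(project, researcher, experiment_date, version, ext):
    """Compose a file name encoding project, researcher, date of
    experiment and version number, as in the naming scheme above."""
    return f"{project}_{researcher}_{experiment_date:%Y%m%d}_v{version}.{ext}"

record = datacite_minimal(
    identifier="10.5281/zenodo.0000000",   # placeholder DOI
    creator="Doe, Jane",
    title="SEM quality-control image series (illustrative)",
    publisher="Bio4Comp consortium",
    publicationYear=2018,
    resourceType="Image",
    subject=["biocomputation", "nanofabrication"],  # recommended field
)
print(json.dumps(record, indent=2))
print(data_filename("Bio4Comp", "jdoe", date(2018, 3, 14), 2, "tiff"))
# -> Bio4Comp_jdoe_20180314_v2.tiff
```

A shared helper of this kind, kept next to the readme file in the common directories, is one simple way to enforce consistency across partners.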
**2.2 Making data openly accessible:**

* **Specify which data will be made openly available? If some data is kept closed provide rationale for doing so**
* **Specify how the data will be made available**
* **Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?**
* **Specify where the data and associated metadata, documentation and code are deposited**
* **Specify how access will be provided in case there are any restrictions**

**2.2.1 Data to be made openly accessible:**

1. Scientific documents (1.1): Contributions to Open Innovation Awards, other documents concerning community building, and public reports for the Bio4Comp project will be published on Zenodo.org. Research publications will be stored (publicly searchable) in each partner’s OpenAIRE-approved institutional repository, e.g. in Lund University’s research information system, LUCRIS, available at _http://portal.research.lu.se/portal/_ for publications co-authored by researchers affiliated to Lund University; Qucosa, available at _http://www.qucosa.de/startseite/_ for publications co-authored by researchers affiliated to TU Dresden; and Fraunhofer ePrints, available at http://publica.fraunhofer.de/starweb/ep09/en/index.htm for publications co-authored by researchers affiliated to Fraunhofer Gesellschaft.
2. Source code (1.2) of software developed within the consortium will be made publicly available under an open source license on GitHub, after IP protection and/or first scientific publication.
3. Fabrication data will be sufficiently outlined in the internal reporting as well as in scientific publications according to the scientific standards which allow a reproduction of the experimental execution by fellow researchers.

**Software tools** to access the data: pdf-reader, text editor, web browser, image viewer (e.g. https://fiji.sc/)

**Associated metadata** and documentation will be deposited together with the files on Zenodo.org

**2.2.2 Closed data:**

1. Closed reports for the Bio4Comp project. Reason: Relevant for IP protection.
2. Source code of unpublished programs. Reason: Quality management. Relevant for IP protection.
3. Device layouts (see 1.3). Reason: Highly relevant for IP protection.
4. Microscopy Data (see 1.4). Reason: Relevant for IP protection.
5. Fabrication Data (see 1.5). Reason: Relevant for IP protection.

Closed data relevant to more than one partner will be shared through an OwnCloud instance hosted by TUD.

**2.3 Making data interoperable:**

* **Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.**
* **Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?**

1. Research data (1.1) will be made available in PDF/A format to ensure the documents’ portability across systems in a long-term perspective.
2. Source code (1.2) will be provided as plain text files in well-documented programming languages such as MATLAB, Java and C.
3. Data for internal Bio4Comp use (1.3 and 1.4) will be stored and shared within Bio4Comp in a defined folder hierarchy using an OwnCloud instance hosted by TUD. Access will be granted only to Bio4Comp members.
4.
The interoperability of fabrication data, which involves techniques commercially not intended to interoperate with one another, is part of the research work done in WPs 2 and 4.

See also Section 2.1 regarding the metadata standard (DataCite) and controlled vocabularies for describing the subject scope of the project’s resources. We will consult with subject librarians to ensure interoperability with other controlled vocabularies.

**2.4 Increase data re-use (through clarifying licenses):**

* **Specify how the data will be licenced to permit the widest reuse possible**
* **Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed**
* **Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why**
* **Describe data quality assurance processes**
* **Specify the length of time for which the data will remain re-usable**

1. _Research data (1.1) generated by the Bio4Comp researchers_ will be licensed by the respective authors. Authors will be encouraged to use permissive licenses such as Creative Commons wherever possible. Data connected to research publications (including device layouts (1.3) and microscopy data (1.4), unless subject to IPR protection) will be manually curated by the creating researchers before uploading to public repositories. This will be done after the research publications have been published in institutional repositories, so that the publication can act as an umbrella for finding and understanding the data. Metadata in the institutional repositories will be curated by subject librarians to ensure sufficient quality and findability. The research data that will be made available are useful for other researchers who intend to generate their own biocomputation designs or verify the correctness of our conclusions based on these data. Research data that contributed to a PhD thesis need to be stored for ten years. The usefulness of our data for other researchers will probably have expired by that time, as this research field, albeit new, is moving forward very rapidly. _Research data (1.1) generated by others and submitted to us in the frame of the Open Innovation Awards_ will be made available directly after the decision of the Innovation System Committee. Licenses are the responsibility of the authors. IPR connected to these data that is not protected before submission will become available under a Creative Commons license (CC-NC). Those data will be useful for third parties that plan to start developing network-based biocomputation units. As said above, the usefulness will probably expire five to ten years after the end of the project as better network designs, better agents, and better tracking and verification methods become available.
2. Software (1.2) will be licensed under an open source license after IP protection and/or research publication. Distribution via a public repository on GitHub will ensure long-term re-usability. We expect this research field to grow even after the end of the project; therefore we expect many researchers to use the software resulting from this project: design software to make new biocomputation networks and verification software to ensure the correctness of the achieved results. However, this field is growing rapidly; therefore we expect new versions and better programs by others to be available already 5 years after the end of this project.
3.
Fabrication data that has iteratively been proven functional, and that is stored and retrieved locally at the fabrication site, will be used as the standard method for upcoming fabrication runs. These data are not usable by third parties. The data published under 1) and 2) will be sufficient for others to set up their own fabrication protocols or replicate our experiments.

# ALLOCATION OF RESOURCES

**Explain the allocation of resources, addressing the following issues:**

* **Estimate the costs for making your data FAIR. Describe how you intend to cover these costs**
* **Clearly identify responsibilities for data management in your project**
* **Describe costs and potential value of long term preservation**

1. Gerda Rentschler will oversee data management in close collaboration with the respective copyright owners. (3 person-months covered by the Bio4Comp grant.)
2. Wherever possible, research will be made available via green and/or gold open access. Gold open access fees will be covered by the Bio4Comp grant.
3. Wherever possible, existing infrastructure and/or public repositories will be used at no extra cost to the consortium.
4. Potential value of long-term preservation: data that are included in a PhD thesis have to be stored for ten years after graduation. The Bio4Comp partner that graduates the PhD student is responsible for the storage of the data. As said above, the data generated by Bio4Comp are probably not useful for third parties more than five to ten years after the end of the project due to newer data being more relevant. Therefore, we do not plan to ensure data preservation beyond that time.
5. Responsibilities for reporting, storage and re-use of fabrication data are assigned together with the device responsibility for microfabrication plants such as EBL (Fraunhofer ENAS) and TPA (Fraunhofer ISC).

# DATA SECURITY

**Address data recovery as well as secure storage and transfer of sensitive data**

1. Zenodo manages secure storage and data recovery;
2. GitHub manages secure storage and data recovery for research software and publications;
3. TU Dresden manages OwnCloud secure storage and data recovery;
4. Additional data storage on servers of partner institutes with secure storage and data recovery.

# ETHICAL ASPECTS

**To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former**

The consideration of ethical aspects, e.g. in connection with ethical approval for animal experiments, increases the quality and thereby the impact of the resulting data. Apart from the animal experiments, there are no other ethical issues of the current project that concern research data.

# OTHER

**Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)**

1. Zenodo.org
2. Lund University’s research information system LUCRIS _http://portal.research.lu.se/portal/_
3. GitHub.org
4. TU Dresden OwnCloud (_https://cloudstore.zih.tu-dresden.de/_)
5. University libraries
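The secure storage and recovery arrangements listed under Data Security above are provided by the host institutions. A lightweight complement at the researcher's end, sketched below as a generic illustration rather than part of the agreed infrastructure, is a checksum manifest that lets any partner verify that transferred or restored files are intact.

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so that large microscopy stacks
    do not have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as fp:
        for block in iter(lambda: fp.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def write_manifest(folder: str, manifest: str = "MANIFEST.sha256") -> None:
    """Write one 'digest  relative-path' line per file, in the format
    understood by the common `sha256sum -c` tool."""
    root = Path(folder)
    lines = [f"{sha256sum(p)}  {p.relative_to(root)}"
             for p in sorted(root.rglob("*"))
             if p.is_file() and p.name != manifest]
    (root / manifest).write_text("\n".join(lines) + "\n")

# write_manifest("Bio4Comp_shared")  # run before transfer; re-run after restore
```

Running the helper before uploading to the shared OwnCloud folder and again after a restore gives a quick end-to-end integrity check independent of the storage provider.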
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0359_HYDRALAB-PLUS_654110.md
# EXECUTIVE SUMMARY

The HYDRALAB+ project is aimed at strengthening the coherence of experimental hydraulic and hydrodynamic research undertaken across its partner organisations. Accordingly, each research activity in HYDRALAB+ will require one or more Data Management Plans (DMPs). Indeed, Horizon 2020 guidelines emphasise that “Data Management Plans (DMPs) are a key element of good data management” and a DMP sets out the life cycle of the data to be collected, processed and stored by HYDRALAB+. HYDRALAB+ is also a voluntary member of the H2020 Open Data Pilot. This requires participants to make their publications open access and their research data _**Findable**_, _**Accessible**_, _**Interoperable**_ and _**Reusable**_ – FAIR for short.

The aims of this report are to recommend mechanisms for capturing, storing and managing DMPs for the whole of HYDRALAB+ and to describe a case study exemplar of how a data management plan should be created and maintained during the lifecycle of each experiment. There is a sufficiently large number of research activities in HYDRALAB+ to make the management of these DMPs a complex issue in itself. It is therefore sensible to introduce a mechanism for marshalling them in a manageable form. The Digital Curation Centre has built and maintains a database and user interface to facilitate the storage, editing and retrieval of DMPs across a broad range of funding body requirements, including Horizon 2020: DMP Online. HYDRALAB+ research activities can make use of the Horizon 2020 DMP template, available through the DMP Online website user interface, to manage and maintain their DMPs.

The following actions are recommended for the creation and management of DMPs across the HYDRALAB+ project:

· Use DMP Online to create and manage Data Management Plans;
· Use defined common identifiers to uniquely distinguish experiments and DMPs;
· Share DMPs with leaders of other interested HYDRALAB+ work packages;
· Include in each DMP a recommended MIME type (a mechanism for specifying the format of a data set so that any intelligent application can read and make best use of it) for each discrete dataset where possible.

A database of project activities will be maintained along with a record of the status of each activity’s Data Management Plan. Also included in this report is a case study exemplar (an experiment that is being led by HR Wallingford) illustrating use of DMP Online to create and maintain a Data Management Plan adhering to the Horizon 2020 template and EC Guidelines on FAIR Data Management.

This report is D10.1 of the HYDRALAB+ project, entitled “Data Management Plan”. It is one of the outputs of Work Package 10 – JRA3: Facilitating the Re-use and Exchange of Experimental Data.

# 1 INTRODUCTION

_“A **data management plan** or **DMP** is a formal document that outlines how you will handle your data both during your research, and after the project is completed. The goal of a data management plan is to consider the many aspects of data management, metadata generation, data preservation, and analysis before the project begins; this ensures that data are well-managed in the present, and prepared for preservation in the future.”_ 1

_HYDRALAB_ is a network of research institutes with world-leading hydraulic and hydrodynamic experimental facilities.
The HYDRALAB+ project is funded by the European Commission through the Horizon 2020 programme and is aimed at strengthening the coherence of experimental hydraulic and hydrodynamic research by improving infrastructure with a focus on adaptation to climate change issues. HYDRALAB+ has three key objectives:

1. to widen the use of, and access to, unique hydraulic and hydrodynamic research infrastructures in the EU through the **Transnational Access** (TA) programme, which offers researchers the opportunity to undertake experiments in rare facilities to which they would not normally have access;
2. to improve experimental methods to enhance hydraulic and hydrodynamic research and address the future challenges of climate change adaptation, through our programme of **Joint Research Activities** (JRAs). The JRAs are undertaking R&D to develop and disseminate tools and techniques that will keep European laboratories at the forefront of hydraulic experimentation; and
3. to network with the experimental hydraulic and hydrodynamic research community throughout Europe and share knowledge, best practice and data with the wider scientific community and other stakeholders, including industry and government agencies. Some training will also be provided to the next generation of researchers.

HYDRALAB+ is also a voluntary member of the H2020 Open Data Pilot. This requires participants to make their publications open access and their research data _**Findable**_, _**Accessible**_, _**Interoperable**_ and _**Reusable**_ – FAIR for short (EC, 2016). General guidelines on **FAIR** data management can be found in EC (2016), the H2020 online manual section on _Open Access_ and _Data Management_ and the H2020 Annotated Model Grant Agreement. EC (2016) emphasizes that _“Data Management Plans (DMPs) are a key element of good data management”_ as a DMP sets out the life cycle (see section 1.1) of the data to be collected, processed and stored by HYDRALAB+. The requirement that data be FAIR means that information on the following should be made available in the DMP:

· _“the handling of research data during and after the end of the project_
· _what data will be collected, processed and generated_
· _which methodology and standards will be applied_
· _whether data will be shared/made open access and_
· _how data will be curated and preserved (including after the end of the project)”_ (EC, 2016).

A Data Management Plan is a living document. This first version can only be an outline that will evolve as the project progresses and should be updated when there are significant changes, such as the collection of new data, the planning of new (or different) experiments, changes in consortium policies and in time for the periodic (and final) reviews of the project.

## 1.1 AIMS

This report is D10.1 of the HYDRALAB+ project, entitled “Data Management Plan”. It is one of the outputs of Work Package 10 – JRA3: Facilitating the Re-use and Exchange of Experimental Data. Its aim is to recommend mechanisms for capturing, storing and managing Data Management Plans for the whole of HYDRALAB+. It makes recommendations for data collected as part of the JRAs and TA. At this stage, DMPs for each of these experiments are not fully known (for example, the second round of applications for Transnational Access has not been assessed by the User Selection Panel, and the projects have not been chosen).
This document therefore sets out in general terms what should happen and makes recommendations for HYDRALAB+ experiments for creating and managing Data Management Plans. In addition, it describes a case study exemplar of how a data management plan should be created and maintained during the lifecycle of each experiment. ### 1.2 MANAGEMENT OF DATA MANAGEMENT PLANS There are many Data Management Plan (DMP) schemas available to researchers. In other words, there are many different ways of recording useful information about experimental data. Indeed, such schemas can be created by the researcher for a specific experiment – or sometimes not at all. This variety makes the task of communicating information about data sets across contextual boundaries potentially difficult, time-consuming and costly. However, funding bodies and other stake-holding organizations increasingly require consistent, coherent and complete data management plans for research activities. This makes communication easier and more effective and fits into the requirements for HYDRALAB+ by providing an agreed platform for communication between the three main contexts of activity (field, laboratory and computer). As such, **each research activity in HYDRALAB+ will require one or more DMPs.** There is a sufficiently large number of research activities in HYDRALAB+ to make the management of the DMPs a complex issue in itself. It is therefore sensible to introduce a mechanism for marshalling these DMPs in a manageable form. The **Digital Curation Centre** (DCC) is _“an internationally-recognized centre of expertise in digital curation with a focus on building capability and skills for research data management. The DCC provides expert advice and practical help to research organizations wanting to store, manage, protect and share digital research data”_ 2 _._ The DCC has built and maintains a database and user interface to facilitate the storage, editing and retrieval of Data Management Plans across a broad range of funding body requirements including Horizon 2020\. This is **DMP Online** . 3 ### 1.3 DATA COLLECTION/GENERATION ACTIVITIES Data will be collected as part of the Joint Research Activities ( **JRAs** ) and Transnational Access ( **TA** ) parts of HYDRALAB+. The purposes of these activities are somewhat different, as outlined in the sections below. #### 1.3.1 Joint Research Activities Joint Research Activities (JRAs) are aimed at improving the goods and services offered by European hydraulics laboratories, so that they remain at the technological forefront of hydraulic research. The three JRAs in HYDRALAB+ are: 1. **RE** presenting **C** limate change **I** n **P** hysical **E** xperiments ( **RECIPE** – work package 8). RECIPE is developing innovative experimental techniques, methods and protocols that will overcome barriers to research progress in modelling climate change in physical experiments. A range of laboratory experiments will be undertaken to assist with these tasks. 2. **C** ross-disciplinary **O** bservations of **M** orphodynamics and **P** rotective structures, **L** inked to **E** cology and e **X** treme events ( **COMPLEX** – work package 9). COMPLEX is developing tools and protocols to (i) improve observational equipment for measuring at complex boundaries, (ii) incorporate vegetation and biologically active sediment surfaces and (iii) allow evaluation of complex hard and soft engineering solutions. A range of laboratory experiments will also be undertaken to assist with these tasks. 
Synergies between JRAs 1 and 2 will be exploited, where possible, by combining experiments from both JRAs in the test schedule.

3. **F**acilitating the **R**e-use and **E**xchange of **E**xperimental Data (**FREE Data** – work package 10). FREE Data is developing the techniques and tools for collecting and sharing data in a FAIR way. No physical experiments are being undertaken in FREE Data.

JRA experiments are conducted by HYDRALAB+ participants in the experimental facilities of HYDRALAB+, with an emphasis on joint experiments with multiple participants, often trying different tools and techniques as part of a coordinated experimental plan. The details of the current experimental programme are given in _**Table 2 JRA experiments in HYDRALAB+**_, but are being developed as HYDRALAB+ develops.

#### 1.3.2 Transnational access

Transnational Access (TA) provides opportunities for researchers to form multi-national teams and bid for access to advanced hydraulic experimental facilities to which they would not normally have access. The HYDRALAB+ TA facilities are based at the institutes: Deltares, Aalto University, CNRS Grenoble, DHI, HR Wallingford Ltd., HSVA, Leibniz Universität Hannover, NTNU, University of Hull and Universitat Politècnica de Catalunya. The experimental facilities being made available are designed for research across a range of disciplines, including hydraulics, geophysical hydrodynamics, morphodynamics, ecohydraulics, ice engineering and hydraulic structures. Potential TA users form multi-national teams, choose an experimental topic, identify a suitable facility and write a proposal to undertake a set of experiments. Each proposal is checked for technical suitability then reviewed by an independent User Selection Panel. Therefore, the purpose of the data generation is not known until the successful proposals have been selected. There will be at least two calls for new TA proposals during HYDRALAB+. The selected projects from the first call are listed in _**Table 3 Transnational experiments granted by HYDRALAB+ first call for proposals**_.

## 1.4 DMP ONLINE

HYDRALAB+ research activities can make use of the Horizon 2020 DMP template (available through the DCC DMP Online website user interface) to manage and maintain their data management plans. With such a facility already available it would appear superfluous to invest in the design and development of a functionally similar (if not identical) application specifically for HYDRALAB+. For such a system to be useful to HYDRALAB+, it is essential that common access across research domains (field, lab, and computer), research activities (JRA, TA) and management levels is maintained. The facility to share common information about data management is also important and benefits from a consistent data format allowing searching and aggregating across domain boundaries. In practice the exporting of DMPs to a standard consistent schema and format is a basic requirement. To that end, the existing XML schema used by DMP Online for exporting DMPs will be the de facto standard for exchanging information about DMPs.

### 1.4.1 DMP Access

The DMP Online user interface allows for multiple users to have access to a given DMP. There are three permission levels:

· **Read-only** users can only read the plan.
· **Editor** users can contribute to the plan.
· **Co-owner** users can also contribute to the plan, but additionally can edit the plan details and control access to the plan.
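Since the XML export schema is to serve as the de facto exchange format, partners will need to read plan identifiers out of exported files programmatically. The short sketch below does this with Python's standard library; only the `<plan id='123456'>` attribute form is taken from this report (it is quoted in section 3.1), while everything else about the surrounding schema should be checked against the current DMP Online export.

```python
import xml.etree.ElementTree as ET

def plan_ids(xml_path):
    """Yield the id attribute of every <plan> element in a DMP Online
    XML export. Only the attribute form <plan id='123456'> is assumed
    here; other element names may differ between schema versions."""
    root = ET.parse(xml_path).getroot()
    for plan in root.iter("plan"):  # iter() also matches the root element
        yield plan.get("id")

# Hypothetical export file named after the HYDRALAB+ experiment identifier:
# for pid in plan_ids("H+_HRW_JRA1_002.xml"):
#     print(pid)
```

A loop of this kind over all exported plans is enough to populate the status database described in section 3.1 with DMP Online Plan Ids.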
### 1.4.2 DMP Development Cycle DMP Online leads the data management plan through three main stages of development: · **Initial –** this stage represents the first version of your DMP and should be submitted within the first six months of the project. It is not required to provide detailed answers to all the questions in the _**Initial** _ version of the DMP. The DMP is intended to be a living document in which information can be made available on a finer level of granularity through updates as the implementation of the project progresses and when significant changes occur. DMPs should have a clear version number and include a timetable for updates · **Detailed** – this stage represents the main working stage of the plan throughout its lifetime · **Final review** – this stage represents the DMP as a completed document for review. ### 1.4.3 DMP Online Export Formats The DMP Online user interface allows the user at any point in the DMP life cycle to export the DMP to one of the following formats: · csv · html · json · pdf · text · xml · docx A number of these formats (particularly XML) will be useful for the later searching or collating of information about HYDRALAB+ data management plans and research activities generally. ### 1.4.4 DMP Licenses The DMP Online user interface also allows the user to specify how the data will be licensed or released and the site provides useful guidance for this. 4 # 2 FAIR DATA One of the requirements for HYDRALAB+ experimental data is for it to be “FAIR” (findable, accessible, interoperable and re-usable). From the sources outlined in the introduction to this document we can add guidelines from DMP Online. However, a brief outline of considerations for completing a DMP section on this includes (but are not limited to): ## 2.1 MAKING DATA FINDABLE, INCLUDING PROVISIONS FOR METADATA · Outline the discoverability of data (metadata provision); · Outline the identifiability of data and refer to standard identification mechanism (e.g. do you make use of persistent and unique identifiers such as Digital Object Identifiers (DOIs)?); · Outline naming conventions used throughout; · Outline the approach towards search keyword selection; · Outline the approach for clear versioning; · Specify standards for metadata creation (if any). If there are no standards in your discipline then describe what metadata will be created and how. ## 2.2 MAKING DATA OPENLY ACCESSIBLE · Specify which data will be made openly available. If some data is kept closed then provide the rationale for doing so; · Specify how the data will be made available; · Specify what methods or software tools are needed to access the data. Is documentation about the software needed in order to access the data included? Is it possible to include the relevant software (e.g. in open source code)? · Specify where the data and associated metadata, documentation and code are deposited; · Specify how access will be provided in case there are any restrictions. ## 2.3 MAKING DATA INTEROPERABLE · Assess the degree of interoperability of your data. Can it easily be used and incorporated by other practitioners in your discipline and other disciplines? Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability. · Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability. 
If not, will you provide mapping to more commonly used ontologies which structure data and its inter-relationships?

## 2.4 INCREASE DATA RE-USE (THROUGH CLARIFYING LICENSES)

The data collected in HYDRALAB+ will be licensed, so that it can be re-used. We will not specify a single license, but require that all data becomes open data (_http://opendefinition.org/guide/data/_) and recommend the use of licenses that conform to the open definition 5 and that are suitable for data. A list of suitable licenses can be found at _http://opendefinition.org/licenses/_.

Each HYDRALAB+ Data Management Plan will incorporate the specification of an embargo period. While no specific minimum or maximum time scale is prescribed, two years after the experiment is complete is suggested. We intend that all data collected or generated will be made available under an open license. Therefore it will be usable by third parties after the embargo period, which is intended to allow those who collect and process the data to have first use of it. We do not intend to restrict the re-use of any of the data.

In practice, some of the original data, such as 3-D PIV data, will consist of large files that are in a proprietary format. In most cases, sharing of this data will come after it is processed into 2-D array time series of velocities and other quantities. The original data should be kept in case it needs to be re-analysed. Clear documentation of processes showing the sequence of transformations of the data from its original form to subsequent forms for storage will be produced by the experimenter; such documentation will be incorporated into the relevant Data Management Plan.

HYDRALAB+ does not have its own data quality assurance process, but relies on the quality assurance processes of individual laboratories. It is not possible to determine the length of time for which any specific data will remain re-usable. This will depend on the data repositories used, the quality of the meta-data that is provided and, potentially, the file format used to collect or store the data.

# 3 QUALITY CONTROL

It is important for the successful future sharing of data between the three domains examined in HYDRALAB+ (field, laboratory, and computer) that the Data Management Plans are themselves searchable and that queries can be performed across the broad range of experimental DMPs. Although HYDRALAB+ does not have its own data quality assurance process, steps will be taken to ensure the Data Management Plans are quality controlled.

## 3.1 DMP DATABASE

A database of the JRA and TA activities will be maintained along with a record of the status of each activity’s Data Management Plan. The final form of this database is yet to be determined but may be represented as a serialised XML document searchable by XPath queries or a relational database comprising the CSV or JSON exported values from DMP Online. Each plan will be uniquely identified by the DMP Online Plan Id available in each XML export for a given plan _(“<plan id=’123456’>”)_ and an identifier unique to HYDRALAB+ identifying the experiment. Such an identifier already exists for TA experiments (see _Table 3 Transnational experiments granted by HYDRALAB+ first call for proposals_) and is specified in the column labelled “Acronym”.

### 3.2 JRA IDENTIFIER

For JRA experiments the identifier will be arranged as follows:

<table> <tr> <th> HYDRALAB+ </th> <th> H+ </th> </tr> <tr> <td> PARTNERCODE </td> <td> e.g.: HRW, DELTARES, CNRS, AALTO, etc.
</td> </tr> <tr> <td> WORKPACKAGE </td> <td> e.g.: JRA[n], TA[n] </td> </tr> <tr> <td> 3 digit seq # </td> <td> e.g.: 001, 002, 003; allocation of the sequential number will be the responsibility of the WP leader </td> </tr> </table>

For example: H+_HRW_JRA1_002 (second sequential experiment in Joint Research Activity 1 - RECIPE)

### 3.3 RESPONSIBLE INDIVIDUAL

In addition, for each experiment producing one or more DMPs a responsible individual from the relevant organisation will be appointed to ensure the DMP is created and maintained in accordance with these recommendations. This person should have a broad overview of both the science and the data management involved in the experiment.

### 3.4 MAINTENANCE

Regular querying of the DMP Online database will allow the database to be kept as up to date as possible, using the **Initial**, **Detailed** and **Final review** status to determine the state of a given activity’s DMP. The database itself will be maintained by HRW as part of its brief as Lead Beneficiary of Work Package WP10.

# 4 DATA FORMATS AND STANDARDS

It is important to distinguish between data “format” and data “standard”. Data “format” is a description of the organisation of a given stream of data (including files). Data “standard” is an agreed protocol for sharing data – the “format” may or may not constitute part of the “standard”. It is a fine and sometimes unclear distinction. Data standards tend to have tightly controlled specifications (e.g. GML, WaterML2, both of which are subtypes of the XML standard). Data formats range from third party proprietary formats (.ZIP, .PDF, .XLSX) through open formats like .CSV to bespoke custom formats developed for specific purposes (output files from many bespoke numerical models, for example).

There is a universe of formats and standards which may or may not be used – or required – within the HYDRALAB community, and to be prescriptive about a specific format is likely to impose unnecessary restrictions and cut across well-established processes and procedures within partner organisations. We will therefore adopt a non-prescriptive approach to data standards within HYDRALAB+ unless and until other tasks (e.g. **10.2 Data standards and licenses**, **10.3 Repository**, **10.4 and 10.5 Data flux**) make recommendations accordingly. Regarding data formats, given that the data we may be dealing with could be proprietary, bespoke, open, closed, binary, ASCII text, images, video, sound and so on, we look to existing practices in this area. The Multipurpose Internet Mail Extension (MIME) provides an extensible mechanism for specifying data formats and currently has many common data formats pre-specified (from application-specific data like XLSX to video, images and so on). The following section introduces MIME; use of this standard for specifying data formats in HYDRALAB+ is recommended for future accessibility to data via automatic means.

## 4.1 MIME TYPES

MIME was designed mainly for email systems: _“the content types defined by MIME standards are also of importance …. for the World Wide Web. Servers insert the MIME header at the beginning of any Web transmission. Clients use this content type or media type header to select an appropriate viewer application for the type of data the header indicates.
Some of these viewers are built into the Web client or browser (for example, almost all browsers come with GIF and JPEG image viewers as well as the ability to handle HTML files).”_ 6

Basically, the MIME type is a mechanism for specifying the format of a data set so that any intelligent application can read and make best use of the data, thereby improving the **I** of **FAIR** (interoperability). A current list of MIME types is maintained by IANA. 7 A good description of MIME types, what they are and why they are useful in data management is given by Wikipedia 8. The University of Hull (UHULL) data repository also makes use of MIME types. For example, the url

_https://hydra.hull.ac.uk/resources/hull:13268_

provides a dataset with the MIME type _vnd.openxmlformats-officedocument.spreadsheetml.sheet_

By assigning an appropriate MIME type to a data set (even if it’s a custom type) the degree of interoperability is increased by providing a standard description of the data. Furthermore, it increases the possibility of identifying third party software which may be able to manipulate the data. The relevant MIME type for HYDRALAB+ datasets will be recorded in DMP Online in the Data Summary of the DMP at the Final Review stage, section 1, in answer to the question: “Specify the types and formats of data generated/collected”. This specification will be the MIME type **recommended** by the experimenter for the specific dataset.

# 5 SUMMARY OF RECOMMENDATIONS

In summary, the following actions are recommended for the creation and management of DMPs across the HYDRALAB+ project:

· Remind HYDRALAB+ partners about the requirement to create and manage data management plans as part of any HYDRALAB+ experiment (JRA or TA). (HRW)
· Learn and use DMP Online to create and manage Data Management Plans. (All partners producing DMPs)
· Use the defined common identifiers to uniquely distinguish experiments and data management plans in HYDRALAB+. (All partners producing DMPs)
· Share DMP Online DMPs with leaders of other interested work packages (in all cases with the relevant person responsible for this deliverable). (All Experimenters)
· DMPs to include a recommended MIME type for each discrete dataset where possible. (All partners producing DMPs)
· Create and maintain a database using DMP Online identifiers to monitor the status of each DMP for each experiment. (HRW)

# 6 CASE STUDY

This section introduces a case study exemplar illustrating use of a DMP Online Data Management Plan by a HYDRALAB+ participant to create and maintain a data management plan adhering to the Horizon 2020 template and EC (2016) Guidelines on FAIR Data Management. The case study title and description are given in the sections below. Following this is a summary of the Data Management Plan for this experiment. This takes the form of a document exported from DMP Online for the relevant DMP.

## 6.1 EXPERIMENT TITLE

**JRA 8.2 Use of Joint Probability Analysis and storm sequencing / abbreviation for wave overtopping.**

This is the JRA experiment that is being led by HRW at HR Wallingford. Its unique identifier is H+_HRW_JRA1_002 and this is used to identify the relevant Data Management Plan in DMP Online.

## 6.2 DESCRIPTION

There follows a brief description of the nature of the experiment.

### 6.2.1 Introduction

The design of seawalls / breakwaters is often required to achieve very low target overtopping discharges when these structures protect vulnerable infrastructure or activities.
# 5 SUMMARY OF RECOMMENDATIONS

In summary, the following actions are recommended for the creation and management of DMPs across the HYDRALAB+ project:

· Remind HYDRALAB+ partners about the requirement to create and manage data management plans as part of any HYDRALAB+ experiment (JRA or TA). (HRW)
· Learn and use DMP Online to create and manage Data Management Plans. (All partners producing DMPs)
· Use the defined common identifiers to uniquely distinguish experiments and data management plans in HYDRALAB+. (All partners producing DMPs)
· Share DMP Online DMPs with leaders of other interested work packages (in all cases with the relevant person responsible for this deliverable). (All Experimenters)
· DMPs to include a recommended MIME type for each discrete dataset where possible. (All partners producing DMPs)
· Create and maintain a database using DMP Online identifiers to monitor the status of each DMP for each experiment. (HRW)

# 6 CASE STUDY

This section introduces a case study exemplar illustrating the use of a DMP Online Data Management Plan by a HYDRALAB+ participant to create and maintain a data management plan adhering to the Horizon 2020 template and EC (2016) Guidelines on FAIR Data Management. The case study title and description are given in the sections below. Following this is a summary of the Data Management Plan for this experiment, in the form of a document exported from DMP Online for the relevant DMP.

## 6.1 EXPERIMENT TITLE

**JRA 8.2 Use of Joint Probability Analysis and storm sequencing / abbreviation for wave overtopping.**

This is the JRA experiment led by HRW at HR Wallingford. Its unique identifier is H+_HRW_JRA1_002, and this is used to identify the relevant Data Management Plan in DMP Online.

## 6.2 DESCRIPTION

There follows a brief description of the nature of the experiment.

### 6.2.1 Introduction

The design of seawalls / breakwaters is often required to achieve very low target overtopping discharges when these structures protect vulnerable infrastructure or activities. The balance between economically viable protection and performance requirements is often difficult to achieve without good knowledge of low overtopping. The paucity of data in this area and the higher uncertainty associated with existing methods increase the challenge. The occurrence of a low number of overtopping waves means that any test results are substantially more affected by the inherent variation of random waves, and are therefore more uncertain. Within the multi-institute project RECIPE under the HYDRALAB+ project, experimental studies for RECIPE Task 8.2 have generated new data on the response of seawalls, breakwaters and related coastal structures, with the aim of improving future model testing. Tests by LNEC, UPC and UPORTO, assisted by Deltares, have explored armour damage progression. Tests by HRW have explored wave overtopping, with contributions of data from UPORTO and LNEC. The physical model test results described hereafter were intended to explore these issues and provide example data on the problem. The tests were successful in obtaining low to very low overtopping discharge test data. For low / very low overtopping discharges, these test data present considerable scatter relative to the latest empirical predictions. A number of repetitions were performed for wave conditions resulting in very low overtopping discharges, which illustrated the inherent uncertainty associated with low overtopping. The general form and procedures of hydraulic model tests of this type have been presented in previous HYDRALAB guidelines (HYDRALAB III, 2007), and are consistent with industry guidelines, i.e. the Rock Manual (CIRIA, 2007) and EurOtop (EurOtop, 2016).

### 6.2.2 Physical model tests

The overtopping tests measured overtopping volumes, wave-by-wave, and mean discharge for a simple impermeable smooth 1:2 slope and for a simple vertical wall. The test conditions were carefully designed to cover a wide range of overtopping, but particularly low-discharge conditions. The 2D model tests measured wave overtopping on two different structures: a simple (smooth) 1:2 slope with two different crest levels (1 m, structure A1 – refer to Figure 1; and 1.2 m, structure A2 – refer to Figure 2) and a simple vertical wall, also with two crest levels (0.9 m, structure B1 – refer to Figure 3; and 1.1 m, structure B2 – refer to Figure 4). All levels were relative to the flume floor. No approach slope or bathymetry was used, so the depth at the structure toe was the same as at the wave paddle (refer to Figure 5). Most tests were run at water levels of 0.7 m and 0.75 m, and some at a water level of 0.8 m, all above sea bed level. The tests measured wave-by-wave overtopping volumes, and mean discharges. The collection chutes from the test section to the measurement tanks were varied between 0.04 m and 0.335 m in width to accommodate a wide range of discharges with three tank sizes.

_Figure 1: Structure A1_
_Figure 2: Structure A2_
_Figure 3: Structure B1_
_Figure 4: Structure B2_
_Figure 5: Flume layout with 1:2 slope (after calibration) showing wave paddle, five wave gauges, test section and absorbing beach_

The target wave conditions were Hs ≈ 0.04 - 0.185 m, also allowing for extreme testing up to Hs ≈ 0.24 m. The wave periods (Tm) ranged from 1.2 s to 3 s. The suggested conditions gave wave steepnesses of s_0m ≈ 0.06 (storm sea), 0.035 (ocean waves) and 0.01 (swell). Tests were run for 500 or 1000 waves, although one test used multiple simulations with changed seed to give 10 x 1000 waves.
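As a worked example of the steepness figures quoted above, assuming the usual deep-water definition s_0m = Hs / L_0m with wavelength L_0m = g Tm² / (2π), the (Hs, Tm) pairs below (chosen purely for illustration) reproduce the three target classes:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_steepness(hs: float, tm: float) -> float:
    """s_0m = Hs / L_0m with deep-water wavelength L_0m = g * Tm^2 / (2*pi)."""
    return hs / (G * tm**2 / (2 * math.pi))

# Hypothetical (Hs, Tm) pairs chosen to reproduce the three target classes:
cases = [("storm sea", 0.14, 1.22), ("ocean waves", 0.14, 1.60), ("swell", 0.14, 3.00)]
for label, hs, tm in cases:
    print(f"{label}: s_0m = {deep_water_steepness(hs, tm):.3f}")
# storm sea: s_0m = 0.060
# ocean waves: s_0m = 0.035
# swell: s_0m = 0.010
```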
All tests run are listed in Table 1. Test conditions were calibrated in the flume before construction of the test section, to minimise corruption of incident waves by reflections. Calibration was an iterative process. The amplitude of the signal driving the wave generator was adjusted until the spectral significant wave height measured at the calibration point was within ±5% of the target significant wave height. During testing all wave conditions were recorded by wave gauges. Another wave gauge was installed in the overtopping tank and an event detector on the crest of the structure. Overtopping discharges were quantified by collecting the overtopping water by a chute into a calibrated tank and measuring the volume collected in a known time. Mean overtopping discharges were calculated by measuring the depth of water in the tank. Mean overtopping discharges measured during testing were then compared with predictions given by the empirical formulae. Finally the number of overtopping waves, N ow , and the individual overtopping volumes, V ow , were determined by analysing the wave gauge inside the overtopping tank with the event detector. _Table 1 List of tests run at HR Wallingford_ <table> <tr> <th> Lab </th> <th> _Number 20161+_ </th> <th> **No** </th> <th> **WC** </th> <th> **Test conditions** </th> <th> **Spectral shape** </th> <th> **Structure** </th> <th> **Chute** </th> <th> **Tank** </th> <th> **File Name** </th> </tr> <tr> <th> **T p (s) ** </th> <th> **Hm0 (m)** </th> <th> **A,B** </th> <th> **1,2** </th> </tr> <tr> <td> HR </td> <td> _00101_ </td> <td> 500 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR00101-WC20No500_Sp-J3.3_StA1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _00102_ </td> <td> 500 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR00102-WC20No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _00103_ </td> <td> 500 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR00103-WC20No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _01304_ </td> <td> 500 </td> <td> 24 </td> <td> 3.3 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR01304-WC24No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _01305_ </td> <td> 500 </td> <td> 15 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR01305-WC15No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _01406_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR01406-WC05No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _01407_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> D </td> <td> HR01407-WC05No500_Sp-J3.3_St-A1_Ch1_TD </td> </tr> <tr> <td> HR </td> <td> _01408_ </td> <td> 500 </td> <td> 08 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR01408-WC08No500_Sp-J3.3_St-A1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _01409_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> B </td> <td> HR01409-WC06No1000_Sp-J3.3_St-A1_Ch1_TB </td> </tr> 
<tr> <td> HR </td> <td> _01410_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01410-WC06No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01711_ </td> <td> 1000 </td> <td> 23 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01711-WC23No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01712_ </td> <td> 1000 </td> <td> 18 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01712-WC18No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01413_ </td> <td> 1000 </td> <td> 16 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR01413-WC16No1000_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _01414_ </td> <td> 1000 </td> <td> 16 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR01414-WC16No1000_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _01415_ </td> <td> 500 </td> <td> 10 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> D </td> <td> HR01415-WC10No500_Sp-J3.3_St-A1_Ch3_TD </td> </tr> <tr> <td> HR </td> <td> _01716_ </td> <td> 500 </td> <td> 10 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J12 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> D </td> <td> HR01716-WC10No500_Sp-J12_St-A1_Ch3_TD </td> </tr> <tr> <td> HR </td> <td> _01717_ </td> <td> 500 </td> <td> 10 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J01 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> D </td> <td> HR01717-WC10No500_Sp-J01_St-A1_Ch3_TD </td> </tr> <tr> <td> HR </td> <td> _01818_ </td> <td> 500 </td> <td> 10 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J06 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> D </td> <td> HR01818-WC10No500_Sp-J06_St-A1_Ch3_TD </td> </tr> <tr> <td> HR </td> <td> _01819_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> D </td> <td> HR01819-WC05No500_Sp-J3.3_St-A1_Ch3_TD </td> </tr> <tr> <td> HR </td> <td> _01820_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J12 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01820-WC05No500_Sp-J12_St-A1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _01821_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J01 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01821-WC05No500_Sp-J01_St-A1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _01822_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J06 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01822-WC05No500_Sp-J06_St-A1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _01823_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J06 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01823-WC05No500_Sp-J06_St-A1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _01824_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J02 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01824-WC05No500_Sp-J02_St-A1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _01825_ </td> <td> 500 </td> <td> 14 </td> <td> 3.30 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR01825-WC14No500_Sp-J3.3_St-A1_Ch3_TC 
</td> </tr> <tr> <td> HR </td> <td> _01826_ </td> <td> 1000 </td> <td> 13 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01826-WC13No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01827_ </td> <td> 1000 </td> <td> 13 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01827-WC13No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> </table> <table> <tr> <th> Lab </th> <th> _Number 20161+_ </th> <th> **No** </th> <th> **WC** </th> <th> **Test conditions** </th> <th> **Spectral shape** </th> <th> **Structure** </th> <th> **Chute** </th> <th> **Tank** </th> <th> **File Name** </th> </tr> <tr> <th> **T p (s) ** </th> <th> **Hm0 (m)** </th> <th> **A,B** </th> <th> **1,2** </th> </tr> <tr> <td> HR </td> <td> _01928_ </td> <td> 1000 </td> <td> 04 </td> <td> 2.31 </td> <td> 0.06 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01928-WC04No1000_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01929_ </td> <td> 500 </td> <td> 01 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> C </td> <td> HR01929-WC01No500_Sp-J3.3_St-A1_Ch2_TC </td> </tr> <tr> <td> HR </td> <td> _01930_ </td> <td> 500 </td> <td> 02 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 1 </td> <td> D </td> <td> HR01930-WC02No500_Sp-J3.3_St-A1_Ch1_TD </td> </tr> <tr> <td> HR </td> <td> _01931_ </td> <td> 1000 </td> <td> 03 </td> <td> 1.76 </td> <td> 0.04 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR01931-WC03No1000_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _02532_ </td> <td> 500 </td> <td> 21 </td> <td> 1.76 </td> <td> 0.24 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 3 </td> <td> C </td> <td> HR02532-WC21No500_Sp-J3.3_St-A2_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _02533_ </td> <td> 1000 </td> <td> 17 </td> <td> 1.54 </td> <td> 0.19 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02533-WC17No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02534_ </td> <td> 1000 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02534-WC20No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02535_ </td> <td> 1000 </td> <td> 24 </td> <td> 3.3 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02535-WC24No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02536_ </td> <td> 500 </td> <td> 22 </td> <td> 2.31 </td> <td> 0.20 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02536-WC22No500_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02537_ </td> <td> 1000 </td> <td> 15 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02537-WC15No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02538_ </td> <td> 500 </td> <td> 07 </td> <td> 1.54 </td> <td> 0.19 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02538-WC07No500_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02639_ </td> <td> 500 </td> <td> 12 </td> <td> 2.31 </td> <td> 0.15 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 3 </td> <td> C </td> <td> HR02639-WC12No500_Sp-J3.3_St-A2_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _02640_ </td> <td> 500 
</td> <td> 09 </td> <td> 1.76 </td> <td> 0.20 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02640-WC09No500_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02641_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02641-WC05No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02642_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J06 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02642-WC05No1000_Sp-J06_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02643_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J12 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02643-WC05No1000_Sp-J12_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02644_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J01 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02644-WC05No1000_Sp-J01_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02645_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J02 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02645-WC05No1000_Sp-J02_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02746_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02746-WC10No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02747_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J12 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02747-WC10No1000_Sp-J12_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02748_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J06 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02748-WC10No1000_Sp-J06_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02749_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J01 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02749-WC10No1000_Sp-J01_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02750_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J01 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02750-WC10No1000_Sp-J01_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02751_ </td> <td> 500 </td> <td> 14 </td> <td> 3.30 </td> <td> 0.14 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR02751-WC14No500_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _02752_ </td> <td> 500 </td> <td> 12 </td> <td> 2.31 </td> <td> 0.15 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 3 </td> <td> C </td> <td> HR02752-WC12No500_Sp-J3.3_St-A2_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _02753_ </td> <td> 1000 </td> <td> 08 </td> <td> 0.11 </td> <td> J3.3 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR02753-WC08No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _02754_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR02754-WC18No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _10255_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10255-WC06No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> 
_10256_ </td> <td> 1000 </td> <td> 02 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10256-WC02No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _10257_ </td> <td> 1000 </td> <td> 01 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10257-WC01No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> </table> <table> <tr> <th> Lab </th> <th> _Number 20161+_ </th> <th> **No** </th> <th> **WC** </th> <th> **Test conditions** </th> <th> **Spectral shape** </th> <th> **Structure** </th> <th> **Chute** </th> <th> **Tank** </th> <th> **File Name** </th> </tr> <tr> <th> **T p (s) ** </th> <th> **Hm0 (m)** </th> <th> **A,B** </th> <th> **1,2** </th> </tr> <tr> <td> HR </td> <td> _10358_ </td> <td> 1000 </td> <td> 25 </td> <td> 2.12 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10358-WC25No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _10358_ </td> <td> 1000 </td> <td> 25v </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10358-WC25vNo1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _10359_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR10359-WC18No1000_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11160_ </td> <td> 1000 </td> <td> 14 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR11160-WC14No1000_Sp-J3.3_St-A2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _11161_ </td> <td> 1000 </td> <td> 14 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11161-WC14_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11162_ </td> <td> 1000 </td> <td> 14 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11162-WC14_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11163_ </td> <td> 1000 </td> <td> 14 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11163-WC14_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11164_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11164-WC18_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11465_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11465-WC06_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11466_ </td> <td> 1000 </td> <td> 01 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11466-WC01_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11467_ </td> <td> 1000 </td> <td> 25 </td> <td> 2.12 </td> <td> 0.08 </td> <td> J3.3 </td> <td> A </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR11467-WC25_No1000-Se2_Sp-J3.3_St-A2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11568_ </td> <td> 1000 </td> <td> 23 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11568-WC23_No1000-Se1_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11569_ 
</td> <td> 1000 </td> <td> 23 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11569-WC23_No1000-Se2_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11570_ </td> <td> 1000 </td> <td> 13 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11570-WC13_No1000-Se2_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11571_ </td> <td> 1000 </td> <td> 13 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.4 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11571-WC13_No1000-Se2_Sp-J3.4_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11572_ </td> <td> 1000 </td> <td> 03 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.5 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11572-WC03_No1000-Se1_Sp-J3.5_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11573_ </td> <td> 1000 </td> <td> 03 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.6 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11573-WC03_No1000-Se2_Sp-J3.6_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _11774_ </td> <td> 10000 </td> <td> 23 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> A </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR11774-WC23_No10000-Se2_Sp-J3.3_St-A1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12375_ </td> <td> 500 </td> <td> 15 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12375-WC15No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12376_ </td> <td> 500 </td> <td> 20 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12376-WC20No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12377_ </td> <td> 500 </td> <td> 24 </td> <td> 3.3 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12377-WC24No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12377_ </td> <td> 500 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12377-WC18No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12378_ </td> <td> 500 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12378-WC20No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12379_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12379-WC18No1000_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12380_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12380-WC18No1000_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12381_ </td> <td> 1000 </td> <td> 16 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR12381-WC16No1000_Sp-J3.3_St-B1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12382_ </td> <td> 1000 </td> <td> 19 </td> <td> 1.76 </td> <td> 0.04 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR12382-WC19No1000_Sp-J3.3_St-B1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12383_ </td> <td> 1000 </td> <td> 23 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> 
HR12383-WC23No1000_Sp-J3.3_St-B1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12484_ </td> <td> 1000 </td> <td> 11 </td> <td> 1.76 </td> <td> 0.04 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR12484-WC11No1000_Sp-J3.3_St-B1_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12485_ </td> <td> 1000 </td> <td> 13 </td> <td> 2.31 </td> <td> 0.07 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 2 </td> <td> B </td> <td> HR12485-WC13No1000_Sp-J3.3_St-B1_Ch2_TB </td> </tr> </table> <table> <tr> <th> Lab </th> <th> _Number 20161+_ </th> <th> **No** </th> <th> **WC** </th> <th> **Test conditions** </th> <th> **Spectral shape** </th> <th> **Structure** </th> <th> **Chute** </th> <th> **Tank** </th> <th> **File Name** </th> </tr> <tr> <th> **T p (s) ** </th> <th> **Hm0 (m)** </th> <th> **A,B** </th> <th> **1,2** </th> </tr> <tr> <td> HR </td> <td> _12486_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> B </td> <td> HR12486-WC06No1000_Sp-J3.3_St-B1_Ch1_TB </td> </tr> <tr> <td> HR </td> <td> _12487_ </td> <td> 1000 </td> <td> 06 </td> <td> 1.32 </td> <td> 0.08 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> B </td> <td> HR12487-WC06No1000_Sp-J3.3_St-B1_Ch1_TB </td> </tr> <tr> <td> HR </td> <td> _12488_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12488-WC05No1000_Sp-J3.3_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12488_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J01 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12488-WC05No500_Sp-J01_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12590_ </td> <td> 500 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J12 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12590-WC05No500_Sp-J12_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12591_ </td> <td> 500 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12591-WC10No500_Sp-J3.3_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12592_ </td> <td> 500 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J01 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12592-WC10No500_Sp-J01_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12593_ </td> <td> 500 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J12 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12593-WC10No500_Sp-J12_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12594_ </td> <td> 500 </td> <td> 14 </td> <td> 3.3 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 3 </td> <td> C </td> <td> HR12594-WC14No500_Sp-J3.3_St-B1_Ch3_TC </td> </tr> <tr> <td> HR </td> <td> _12595_ </td> <td> 500 </td> <td> 08 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 1 </td> <td> 1 </td> <td> C </td> <td> HR12595-WC08No500_Sp-J3.3_St-B1_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12996_ </td> <td> 1000 </td> <td> 15 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12996-WC15No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12997_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12997-WC18No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> 
<td> _12998_ </td> <td> 1000 </td> <td> 18 </td> <td> 2.12 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12998-WC18No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12999_ </td> <td> 1000 </td> <td> 20 </td> <td> 1.76 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12998-WC20No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12900_ </td> <td> 1000 </td> <td> 24 </td> <td> 3.30 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12999-WC24No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12901_ </td> <td> 1000 </td> <td> 17 </td> <td> 1.54 </td> <td> 0.19 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR12901-WC17No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _12902_ </td> <td> 1000 </td> <td> 22 </td> <td> 2.31 </td> <td> 0.20 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR12902-WC22No1000_Sp-J3.3_St-B2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _12903_ </td> <td> 500 </td> <td> 21 </td> <td> 1.76 </td> <td> 0.24 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR12903-WC21No500_Sp-J3.3_St-B2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _13004_ </td> <td> 500 </td> <td> 12 </td> <td> 2.31 </td> <td> 0.15 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR13004-WC12No500_Sp-J3.3_St-B2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _13005_ </td> <td> 1000 </td> <td> 09 </td> <td> 1.76 </td> <td> 0.2 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR13005-WC09No1000_Sp-J3.3_St-B2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _13006_ </td> <td> 1000 </td> <td> 12 </td> <td> 2.31 </td> <td> 0.15 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> C </td> <td> HR13006-WC12No1000_Sp-J3.3_St-B2_Ch1_TC </td> </tr> <tr> <td> HR </td> <td> _13007_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13007-WC05No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _13008_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J01 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13008-WC05No1000_Sp-J01_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _13009_ </td> <td> 1000 </td> <td> 05 </td> <td> 1.32 </td> <td> 0.14 </td> <td> J12 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13009-WC05No1000_Sp-J12_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _13110_ </td> <td> 1000 </td> <td> 08 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13110-WC08No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _13211_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13211-WC10No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _13312_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J01 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR13312-WC10No1000_Sp-J01_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _20113_ </td> <td> 1000 </td> <td> 10 </td> <td> 1.54 </td> <td> 0.11 </td> <td> J12 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> 
HR20113-WC10No1000_Sp-J12_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _20114_ </td> <td> 1000 </td> <td> 14 </td> <td> 3.3 </td> <td> 0.14 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 2 </td> <td> B </td> <td> HR20114-WC14No1000_Sp-J3.3_St-B2_Ch2_TB </td> </tr> <tr> <td> HR </td> <td> _20115_ </td> <td> 1000 </td> <td> 07 </td> <td> 1.54 </td> <td> 0.185 </td> <td> J3.3 </td> <td> B </td> <td> 2 </td> <td> 1 </td> <td> B </td> <td> HR20115-WC07No1000_Sp-J3.3_St-B2_Ch1_TB </td> </tr> </table>

### 6.2.3 Test data output

The discharged water is collected by a chute leading to the collection tank. The event detector installed on the crest of the structure identifies when an overtopping volume is collected in the tank. The mean overtopping discharge can be calculated by measuring the change in depth of water in the tank over the duration of the test. Measuring the elevation of the water level in the tank after each event has been detected allows the calculation of individual overtopping volumes. The outputs of the wave gauge inside the overtopping tank and the event detector on the crest of the structure are time series of the levels measured, as shown in Figure 6.
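As a worked example of this calculation, the sketch below assumes a rectangular tank of known plan area and expresses the mean discharge per metre run of structure; the symbol names and the numerical values are illustrative only and are not taken from the tests.

```python
def mean_overtopping_discharge(tank_area_m2, depth_change_m, chute_width_m, duration_s):
    """Mean discharge per metre run, q = (A_tank * dh) / (b_chute * t) [m^3/s per m]."""
    volume_m3 = tank_area_m2 * depth_change_m  # water collected during the test
    return volume_m3 / (chute_width_m * duration_s)

# Hypothetical values: a 0.5 m^2 tank filling by 40 mm over a 500-wave test
# (about 800 s at Tm = 1.6 s), collected through a 0.335 m wide chute.
q = mean_overtopping_discharge(0.5, 0.040, 0.335, 800.0)
print(f"q = {q:.2e} m^3/s per m")  # q = 7.46e-05 m^3/s per m
```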
## 6.3 DATA MANAGEMENT PLAN

The case study created a data management plan online using the identifier H+_HRW_JRA1_002. The location of the data management plan on the DMP Online server is _https://dmponline.dcc.ac.uk/projects/jra-8-2-use-of-joint-probability-analysis-and-stormsequencing-abbreviation-for-wave-overtopping_ (if required, please request read access to this document from HR Wallingford).

The exported XML version of the data management plan is included in Appendix A (Data Management Plan Export XML). This format allows queries (for example, in the form of XPath) to be performed on the data. For example, the query

/plan/details/detail[@title="Project Name"]

will return the value “_JRA 8.2 Use of Joint Probability Analysis and storm sequencing / abbreviation for wave overtopping - DMP title_”. Building a database from the XML exports allows querying and aggregating across all data management plans for all HYDRALAB+ experiments – for example, to identify the degree of reliance on video format, or the occurrence of GML standard data sets and so on (a sketch of such a query is given after Table 2 below).

# 7 EXPERIMENTS PRODUCING DATA MANAGEMENT PLANS

This section enumerates all the experiments which currently form part of HYDRALAB+ and which are expected to produce and maintain a data management plan. Each of these experiments, both JRA (Joint Research Activity) and TA (Transnational Activity), will be expected to create and maintain a Data Management Plan using DMP Online for the duration of the HYDRALAB+ project. The TA experiments are already assigned an identifier. We will retrospectively assign identifiers using the schema above to the JRA experiments.

_Table 2 JRA experiments in HYDRALAB+_

<table>
<tr> <th> **Experiment** </th> <th> **Type** </th> <th> **Location** </th> <th> **Task** </th> <th> **Year** </th> </tr>
<tr> <td> **Overtopping and Joint Probability Analysis for theoretical storms, mean sea level, storm surge conditions.** </td> <td> Small wave flume, with two armour layers of concrete cubes </td> <td> UPC </td> <td> 8.2 </td> <td> 1 & 2 </td> </tr>
<tr> <td> **Overtopping and damage experiments. Use of joint probability analysis and design point probabilistic method.** </td> <td> wave flume 50 m x 1.6 m, 1:30 Froude model, 1:2 rock slope, two rock layers </td> <td> LNEC </td> <td> 8.2 </td> <td> 1 & 2 tbc </td> </tr>
<tr> <td> **Use of Joint Probability Analysis and storm sequencing / abbreviation for wave overtopping.** </td> <td> small wave flume with vertical wall and smooth 1:2 slope, without armour </td> <td> HRW </td> <td> 8.2 </td> <td> 1 & 2 tbc </td> </tr>
<tr> <td> **Damage experiments in a wide (3D) configuration.** </td> <td> wide wave flume 28 m x 12 m, 1:2 rock slope </td> <td> UPORTO </td> <td> 8.2 </td> <td> ? </td> </tr>
<tr> <td> **PIV measurements** </td> <td> Toulouse-flume channel </td> <td> CNRS-T </td> <td> 8.2 & 9.1 </td> <td> 1 & 2 </td> </tr>
<tr> <td> **Lightweight sediments** </td> <td> Flume channel </td> <td> NTNU </td> <td> 8.3 & 8.2 </td> <td> 1 & 2 tbc </td> </tr>
<tr> <td> **Cohesive sediment experiments to study cliff erosion** </td> <td> Small wave flume </td> <td> CNRS-T </td> <td> 8.3 </td> <td> 2 </td> </tr>
<tr> <td> **Behaviour of light weight material when studying the morphological evolution. The use of different density fluids will be considered and tested if feasible** </td> <td> Small wave flume </td> <td> UPC </td> <td> 8.3 </td> <td> 2 </td> </tr>
<tr> <td> **Distorted scale beach model** </td> <td> wave flume 50 m x 1.6 m, 1:30 Froude model, 1:2 structure slope </td> <td> LNEC </td> <td> 8.3 </td> <td> 2 tbc </td> </tr>
<tr> <td> **Mesocosm** </td> <td> Bucket </td> <td> LBORO </td> <td> 8.4 </td> <td> 2 tbc </td> </tr>
<tr> <td> **Stressed Organisms** </td> <td> Flume Channel </td> <td> LBORO/HULL </td> <td> 8.4 </td> <td> 2 & 3 </td> </tr>
</table>
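Referring back to the XPath query quoted above, a minimal sketch of such an aggregation using Python's standard library follows; the folder name is hypothetical, and the element structure is assumed from the quoted query.

```python
import glob
import xml.etree.ElementTree as ET

# Collect the "Project Name" detail from every exported DMP, assuming the
# XML exports are gathered in one folder; element and attribute names follow
# the XPath example quoted in the appendix note.
names = {}
for path in glob.glob("dmp_exports/*.xml"):
    root = ET.parse(path).getroot()
    detail = root.find(".//details/detail[@title='Project Name']")
    if detail is not None:
        names[path] = detail.text

for path, name in sorted(names.items()):
    print(f"{path}: {name}")
```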
0361_ThoR_814523.md
# 2. _Initial DMP_

The following pages have been created with DMP online.

**ThoR: THz end-to-end wireless systems supporting ultra-high data Rate applications**

_A Data Management Plan created using DMPonline_

Creator: Thomas Kürner
Affiliation: Other
Template: European Commission (Horizon 2020)
Grant number: 814523

Project abstract: Data traffic densities of several Tbps/km² are already predicted for 5G networks. To service a fully mobile and connected society, networks beyond 5G must undergo tremendous growth in connectivity, data traffic density and volume, as well as the required multi-level ultra-densification. The ThoR project will provide technical solutions for the backhauling/fronthauling of this traffic. The ThoR consortium brings together the leading Japanese and European players from industry, R&D and academia, whose prior work defines the state-of-the-art in high data rate long range point-to-point THz links. This team has been instrumental in defining and implementing the new IEEE 802.15.3d Standard “100 Gbps Wireless Switched Point-to-Point Physical Layer.” ThoR’s technical concept builds on this standard, in a striking and innovative combination using state-of-the-art chip sets and modems operating in the standardized 60 and 70 GHz bands, which are aggregated on a bit-transparent high performance 300 GHz RF wireless link offering >100 Gbps real-time data rate capacity. ThoR will apply European and Japanese state-of-the-art photonic and electronic technologies to build an ultra-high bandwidth, high dynamic range transceiver operating at 300 GHz, combined with state-of-the-art digital signal processing units, in two world-first demonstrations:
- more than 100 Gbps P2P link over 1 km at 300 GHz using pseudo data in indoor and outdoor controlled environments
- more than 40 Gbps P2P link over 1 km at 300 GHz using emulated real data in a live operational communication network

This will require an innovative combination of specific THz PHY technology advances: photonic millimeter-wave generation in E-band used to drive wideband up/down-conversion into THz bands, combined with solid-state and Travelling Wave Tube amplifiers to enable long range operation. Using this concept, ThoR will enable the required multi-frequency and channel aggregation towards the new IEEE 802.15.3d Standard. The success of ThoR will represent the first operational use of THz frequencies in ICT, and this influential and powerful consortium will directly influence and shape the frequency regulation activities beyond 275 GHz through agenda item 1.15 of WRC 2019.

Last modified: 19-12-2018

**ThoR: THz end-to-end wireless systems supporting ultra-high data Rate applications - Initial DMP**

# 1. Data summary

Provide a summary of the data addressing the following issues:
- State the purpose of the data collection/generation
- Explain the relation to the objectives of the project
- Specify the types and formats of data generated/collected
- Specify if existing data is being re-used (if any)
- Specify the origin of the data
- State the expected size of the data (if known)
- Outline the data utility: to whom will it be useful

The following purposes of data collection have been identified (this list may be extended as the project progresses):
- Device characterisation data required for device specifications and for modelling of RF impairments in link level simulations.
- Characterisation of single transistors and integrated circuits, required for the design of solid-state RF front-end receive and transmit MMICs.
- Characterisation of the packaged solid-state RF front end.
- Free-space propagation data required for link design.

Measurements, modelling and simulation data are necessary inputs to model and verify components of the developed hardware as well as the wireless THz data transmission system. The data will be used for device and circuit engineering, for modelling and simulations, and as a basis for designing wireless links.

Types of data produced:
- Measurement data: device and circuit characterisation, channel and antenna measurements.
- Scenario data: simulation data defined in WP2, typical/generic geometrical environments.
- Simulation data: simulation results from WP5 and WP6, physical and system layer simulations, open-source simulation software; simulation results from WP4, RF front-end design (EM/circuit simulation software: ADS, CST).

No data re-use is foreseen at this point. All data will be generated by the project, except the 3D building data required to set up the simulation scenarios.

Size of the data:
- Measurement data originating from device characterisation will be low-volume; expected total size below 1 GB.
- Modelling data will be medium-volume, of the order of 100 GB.

The data will be utilised by project partners, and will be made available to third parties as open-source data if possible and adequate (see section 3.2).

# 2. FAIR data

2.1 Making data findable, including provisions for metadata:
- Outline the discoverability of data (metadata provision)
- Outline the identifiability of data and refer to standard identification mechanisms. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
- Outline naming conventions used
- Outline the approach towards search keywords
- Outline the approach for clear versioning
- Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what metadata will be created and how

Metadata provision:
- Data will be sorted by category.
- A file will be provided and regularly updated, listing the type of data, its filename, and relevant information as to its nature.
- A file will be provided listing all abbreviations in use.
- For data originating from measurements, simulation, and device characterisation, a table of contents will be provided, showing the data structure.
- Each category of data will have the same folder structure. Each scenario will have its own identifier. Data files will have standard identifiers. The details will be defined before D2.4 is submitted.

2.2 Making data openly accessible:
- Specify which data will be made openly available? If some data is kept closed, provide rationale for doing so
- Specify how the data will be made available
- Specify what methods or software tools are needed to access the data? Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
- Specify where the data and associated metadata, documentation and code are deposited
- Specify how access will be provided in case there are any restrictions

As far as possible, the processed data will be openly available. Types of data not available:
- Raw data.
- Data violating personal rights.
- Data violating company interests. Especially, confidential data (e.g.
locations and parameters of macro/micro base stations) from real-life network deployments cannot be provided openly.

Most of the processed data will be available as clear-text files, importable into software packages such as Matlab, Excel, and Origin. Where necessary, conversion scripts will be provided. Data will be stored in Powerfolder hosted at TUBS. On a case-by-case basis, links to data will be registered with appropriate Open Access Data Repositories hosted by TUBS, with agreed access restrictions and procedures in place.

2.3 Making data interoperable:
- Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
- Specify whether you will be using standard vocabulary for all data types present in your data set, to allow interdisciplinary interoperability? If not, will you provide mapping to more commonly used ontologies?

Processed measurement data from device characterisation and free-space propagation measurements will be stored as tabulated values in tab-separated columns with named column headings, in ASCII plain-text files that can be read or imported into suitable software tools (e.g. MATLAB, EXCEL, Origin). Processed channel measurement data will be provided as MATLAB data files that can be directly loaded into MATLAB. Scenario data will be in the form of a generic data type containing the scenario data in an XML file, where every point in space is characterised by several properties. Modelling data will be recorded as statistical parameters listed in an appropriate table format (e.g. EXCEL). The format of the stochastic channel models will be defined in the course of the project. Simulation data will be produced by MATLAB or C#, with the exact format determined by the simulation software. If open-source simulation software is provided, the data can be directly imported and used. Processed measurement and simulation results of integrated (packaged) circuits will be stored using ASCII text files (see above) as well as Touchstone files (usable in conventional RF circuit simulators).
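As a sketch of the tab-separated, header-labelled format described above (the column names and values are invented for illustration), such files can be read with generic tools, for example:

```python
import csv
import io

# A minimal stand-in for one of the tab-separated ASCII files described
# above; column names and values are illustrative only.
sample = (
    "frequency_GHz\tgain_dB\tnoise_figure_dB\n"
    "290.0\t18.2\t7.1\n"
    "300.0\t17.5\t7.4\n"
)

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

for row in rows:
    print(row["frequency_GHz"], row["gain_dB"])
```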
2.4 Increase data re-use (through clarifying licenses):
- Specify how the data will be licenced to permit the widest reuse possible
- Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
- Specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why
- Describe data quality assurance processes
- Specify the length of time for which the data will remain re-usable

A licensing policy will be introduced within the project. Open-source data will be made available at the conclusion of the project, where possible and adequate. The project will have innovation cycles that will depend on confidentiality. Final analysed data will be published. Re-using the data:
- Measurement data pertaining to devices and propagation will be re-usable.
- Simulation data will be re-usable or reproducible, because the simulation software will be open-source.
- Data will be maintained for use by other projects, researchers or development engineers, as recommended in the Guidelines on the Handling of Research Data by the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG).

# 3. Allocation of resources

Explain the allocation of resources, addressing the following issues:
- Estimate the costs for making your data FAIR
- Describe how you intend to cover these costs
- Clearly identify responsibilities for data management in your project
- Describe costs and potential value of long term preservation

Costs for cloud service: to be decided. Responsibilities for data management: to be agreed by project partners. Powerfolder is hosted by partner TUBS.

# 4. Data security

Address data recovery as well as secure storage and transfer of sensitive data.

Data will be hosted on Powerfolder by TUBS or on a partner's own repository. Data security will be provided by the host. Raw data will be held in local repositories, with data security provided locally. Data will be secured by TUBS backup policies, local as well as remote (off-site).

# 5. Ethical aspects

To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables. Include references and related technical aspects if not covered by the former.

The project addresses aspects of data transmission in the THz domain. There will be no personal data acquired or revealed. All stored data will be "de-identified". Therefore, no specific aspects have been put to the ethics section of the DoA (Description of Action).

# 6. Other

Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any).

Country-dependent issues: Germany: a minimum of 10-year storage must be guaranteed for research data, as recommended in the Guidelines on the Handling of Research Data by the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG).
0362_SafeWaterAfrica_689925.md
(3) what methodology & standards will be applied, (4) whether data will be shared / made open access & how, and (5) how data will be curated & preserved.

# Overall Dataset Framework

This document contains the third version of the DMP, which, according to the document “Guidelines on FAIR Data Management in Horizon 2020”, aims to make our research data findable, accessible, interoperable and reusable (FAIR). In SafeWaterAfrica, data management procedures are included in WP8 and can be summarized according to the framework shown in **Figure 1**, in which the complete workflow of dissemination and publication is shown.

**Figure 1**: SafeWaterAfrica workflow of dissemination and publication.
DMP: Data Management Plan
PEDR: Plan for Exploitation and Dissemination of Results
OA: Open Access
SC: Steering Committee
Dissemination Manager: Jochen Borris, Fraunhofer
Data Manager: Manuel Andrés Rodrigo Rodrigo, UCLM

The procedure for the management of data begins with the production of a data set by one or several of the partners. According to the Figure, they should inform the Data Manager about the data by filling in the template shown in Annex 1, in which the most important metadata is included. The dataset is then archived by the partner that has produced it, while the metadata are managed by the Data Manager. The data archived by the partner may be in the form of tables and, occasionally, of documents such as reports, technical drawings, pictures, videos and material safety data sheets. Software used to store the research results mainly includes:

* applications of the office suites of Microsoft, Open and Libre Office, e.g. Word and Excel, and
* Origin Data Analysis and Graphing by OriginLab.

Following check-up by the Data Manager, the metadata will be included in the Annex II section of the next edition of the DMP and, depending on the decision tree shown, data can be considered for publication. The DMP addresses the required points on a dataset-by-dataset basis and reflects the current status of reflection within the consortium about the data that will be produced. The DMP presents in detail only the procedures for creating ‘primary data’ (data not available from any other sources) and for their management. In the internal procedures to grant open access to any publication, research data or other innovation generated in the EU project, the main workflow starts at the WP level. If a WP team member considers putting research data open access, it will inform the project steering committee about its plans. The project steering committee will then discuss these plans in the consortium and decide whether the data will be made openly accessible or not. The general policy of the EU project is to apply “open access by default” to its research data. Project results to be made openly accessible for the public will be labelled “public” in the project documentation (tables, pictures, diagrams, reports etc.). All project results labelled “public” will be distributed under a specific free/open license, where the authors retain the authors’ rights and the users can redistribute the content freely with acknowledgement of the data source.
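To make the workflow concrete, the sketch below shows the kind of record the Annex 1 template captures. The field names are illustrative only; they follow the numbered items described in the sections that follow, not the literal wording of the template.

```python
# Illustrative sketch of an Annex 1 metadata record; all field names and
# values are hypothetical and only mirror the template items described below.
dataset_record = {
    "reference": "SWA-DS-05001",   # SWA-DS-xxyyy: WP 05, sequence 001, assigned by the Data Manager
    "name": "Water quality measurements from field trials (example)",
    "description": "Content, methodology and organisation, max. 200 words.",
    "contributors": ["UCLM"],      # descriptive metadata (3)
    "creators": ["A. Researcher"],  # (4)
    "subjects": ["water treatment", "disinfection"],  # up to six keywords (5)
    "language": "English",         # administrative metadata (6)
    "file_format": "xlsx",         # (7)
    "resource_type": "Table",      # (8)
    "status": "public",            # data sharing status (12): public or private
}
print(dataset_record["reference"], "-", dataset_record["status"])
```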
With regard to the five points covered in the template proposed in the “Guidelines on Data Management in Horizon 2020” (Data set reference and name, Data set description, Standards and metadata, Data sharing, and Archiving and preservation), they are included in the table template proposed in Annex I, and there are common procedures, described together for all datasets in the next sections of this document.

# Data Set Reference and Name

For easy identification, all datasets produced in SafeWaterAfrica will also be provided with a short name (data set reference) following the format SWA-DS-xxyyy, where xx refers to the work package in which the data are produced and yyy is a sequential reference number assigned by the Data Manager upon reception of a proposed dataset. This name will be included in the template and will not be filled in by the partner that proposes the dataset. In contrast, the partner that produces the dataset will propose a descriptive name (1), consisting of a sentence in which the content of the dataset is clearly reflected. This sentence should be shorter than 200 characters and will be checked and, if necessary, modified by the Data Manager for the sake of uniformity.

# Data Set Description

It consists of a plain text of at most 200 words in which the content, methodology and organization of the dataset are very briefly summarized, in order to give the reader a first clear idea of the main aspects of the dataset. It will be filled in by the partner that produces the dataset (2), checked upon reception and, if necessary, modified by the Data Manager for the sake of uniformity.

# Standards and Metadata

Metadata is structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource. Metadata is often called data about data, or information about information. The metadata to be included in our DMP are classified into three groups:

* Descriptive metadata, which designates a resource for purposes such as discovery and identification. In the DMP of SafeWaterAfrica, these metadata need to be filled in by the partner that proposes the dataset and include elements such as the contributors (3) (institution partners that contribute the dataset), creator/s (4) (author/s of the dataset) and subjects (5) (up to six keywords that clearly identify the content).
* Administrative metadata, which provides information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it. In the DMP of SafeWaterAfrica, these metadata need to be filled in by the partner that proposes the dataset and include elements such as language (6) (most likely English), file format (7) (Excel, CSV, …) and type of resource (8) (table, figure, picture, …). It is proposed to use commonly used metadata standards in this project, based on the digital object identifier system® (DOI). With this purpose, a DOI for the final version of the metadata form for each dataset will be obtained by the Data Manager.
* Structural metadata, which indicates how compound objects are put together.
In the DMP of SafeWaterAfrica, these metadata need to be filled in, in Table 1, by the partner that proposes the dataset and include elements such as the parameters (9) included in the dataset (with information about the methodology used to obtain them according to international standards, equipment, etc.), the structure of the data table (10) (showing clearly how the data are organized) and additional information for the dataset (11) (such as the decimal delimiter, the column delimiter, etc.). Upon reception of the first version of the dataset, this information will be checked by the Data Manager and, if necessary, modified for the sake of uniformity and clarity.

# Data Sharing

The data sharing procedures and rights in relation to the data collected through the SafeWaterAfrica project are the same across the different datasets and are in accordance with the Grant Agreement. The partner that produces the dataset should report the status (12) of the dataset: public, if the data are going to be published, or private, if no diffusion outside the consortium is intended (because the data are considered sensitive). In the case of public data, a link to sample data can also be included to allow potential users a rapid determination of the relevance of the data for their use (13). This link will be checked by the Data Manager, and the partner that produces the dataset is responsible for keeping it alive for the whole duration of SafeWaterAfrica. With respect to the access procedure, in accordance with Grant Agreement Article 17, data must be made available upon request, or in the context of checks, reviews, audits or investigations. If there are ongoing checks etc., the records must be retained until the end of these procedures. Each partner must ensure open access to all peer-reviewed scientific publications relating to its results. As per Article 29.2, the partners must:

* As soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.
* Ensure open access to the deposited publication — via the repository — at the latest:
  * on publication, if an electronic version is available for free via the publisher, or
  * within six months of publication in any other case.
* Ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following:
  * the terms “European Union (EU)” and “Horizon 2020”;
  * the name of the action, acronym and grant number;
  * the publication date, and length of embargo period if applicable; and
  * a persistent identifier.

Data will also be shared when the related deliverable or paper has been made available at an open access repository, via the gold or the green model. The normal expectation is that data related to a publication will be openly shared. However, to allow the exploitation of any opportunities arising from the raw data and tools, data sharing will proceed only if all co-authors of the related publication agree.
Data will also be shared when the related deliverable or paper has been made available in an open access repository, via the gold or the green model. The normal expectation is that data related to a publication will be openly shared. However, to allow the exploitation of any opportunities arising from the raw data and tools, data sharing will proceed only if all co-authors of the related publication agree. The lead author, who is the author with the main contribution and who is listed first, is responsible for getting approvals and then sharing the data and metadata in the repository of their institution or, alternatively, in the repository **Fraunhofer ePrints** (http://eprints.fraunhofer.de/), an open access repository for research data.
# Archiving and Preservation
The archiving and preservation procedures in relation to the data collected through the SafeWaterAfrica project are the same across the different datasets and are in accordance with the Grant Agreement. The research data are generated at the sites of the partners, and stored and archived at each place in accordance with the rules of each organisation and with the relevant national legislation. Additionally, the data are copied to the project intranet that is available to all beneficiaries. The project uses the software Atlassian Confluence. This wiki software installation is provided by the coordinator Fraunhofer IST. The software runs on a separate server on the campus in Braunschweig, Germany. Access is limited to the IT administrators and to the beneficiaries via any internet browser, secured by personal accounts. Differential back-ups are made each night on magnetic tape. Server and tapes are stored in a locked room. The electricity grid is backed up by batteries. The Confluence server will be maintained for at least five years after the end of the project.
# Legal Issues
The SafeWaterAfrica partners are to comply with the ethical principles as set out in Article 34 of the Grant Agreement, which states that all activities must be carried out in compliance with:

* The ethical principles (including the highest standards of research integrity, e.g. as set out in the European Code of Conduct for Research Integrity, and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct), Commission recommendation (EC) No 251/2005 of 11 March 2005 on the European Charter for Researchers and on a Code of Conduct for the Recruitment of Researchers (OJ L 75, 22.03.2005, p. 67), and the European Code of Conduct for Research Integrity of ALLEA (All European Academies) and ESF (European Science Foundation) of March 2011 (http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf)
* Applicable international, EU and national law.

Furthermore, activities raising ethical issues must comply with the ‘ethics requirements’ set out in Annex 1 of the Grant Agreement. At this point, the DMP warrants that 1) research data are placed at the disposal of colleagues who want to replicate the study or elaborate on its findings, 2) all primary and secondary data are stored in a secure and accessible form, and 3) the freedom of expression and communication is respected.

Regarding confidentiality, all SafeWaterAfrica partners must keep any data, documents or other material confidential during the implementation of the project and for at least five years (preferably 10 years) after the period set out in Article 3 (42 months, starting 2016-06-01). Further detail on confidentiality can be found in Article 36 of the Grant Agreement.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0363_HEMERA_730970.md
# Introduction
The overall HEMERA-2020 project [A1] aims to provide the best possible balloon-borne measurements; this requires a highly integrated data and information management system. The project is composed of a coordinated set of networking activities, which delivers improved balloon data across the infrastructure, as well as standard protocols for data generation and analysis. The main objective is to make all the scientific and technological data collected during the flights accessible to the whole European scientific community, upon request to the Data Centre (DC). The data centre will provide free access and services for data archiving, including higher-level data products, links to large databases of past and ongoing scientific balloon data projects, complemented with access to new data products, together with tools for quality assurance (QA), data analysis and research. The architecture of the DC will be described in a forthcoming deliverable (D2.3). Currently, the balloon-borne Data Centre is founded on two topical databases:

* Atmospheric balloon-borne database (https://cds-espri.ipsl.upmc.fr/BALLOON),
* Astrophysical balloon-borne database (https://www.asi.it/eng/agency/bases/data-center).

## Purpose of the document
The Data Management Plan (DMP) considers the data management life cycle for the data sets to be collected and processed by the HEMERA-2020 project. The DMP outlines the handling of research data during the project, and how and what parts of the data sets will be made available after the project has been completed. This includes an assessment of when and how data can be shared. The DMP also describes the choices that will be made for the metadata standards to be used, the database repository, the data access policy and data access methods, long-term archival, and the costs associated with data management. With regard to access to research data, HEMERA-2020 will make the data and metadata available on the new DC website. From this website, project members and external users will have access to both data and metadata. New research data are originally planned to be archived at the AERIS data centre in Paris, while existing data (atmospheric and astrophysical) will be reachable via a link to the NILU data centre.
## Intended readership
This deliverable is intended for use internally in the project and provides guidance on data management to the project partners responsible for data collection. At the current stage of the project, this is the initial DMP; it will evolve throughout the project as new research data sets are added or modified.
## Document outline
The document consists of the following sections:

* **Section 2** describes the guiding principles for the data management of the overall HEMERA-2020 data sets.
* **Section 3** lists the data sets provided by the HEMERA-2020 DC and provides:
  * the data sets description,
  * the standards and metadata related to them,
  * the sharing of the data sets,
  * the procedures for archiving and long-term preservation of the data.
* **Section 4** presents the FAIR data.

## Application area
The prime focus of this document will be on **HEMERA-2020 Virtual Access (WP2)**, as specified in the HEMERA-2020 project document [A1].
## Applicable documents and reference documents
**Applicable documents**
[A1] HEMERA-2020 project document
## Abbreviations
<table> <tr> <th> **ABBREVIATIONS** </th> <th> **SIGNIFICATION** </th> </tr> <tr> <td> **ASI-INAF** </td> <td> Agenzia Spaziale Italiana – Istituto Nazionale di Astrofisica </td> </tr> <tr> <td> **CNRS** </td> <td> Centre National de la Recherche Scientifique </td> </tr> <tr> <td> **DC** </td> <td> Data Centre </td> </tr> <tr> <td> **DMP** </td> <td> Data Management Plan </td> </tr> <tr> <td> **DOI** </td> <td> Digital Object Identifier </td> </tr> <tr> <td> **FAIR** </td> <td> Findable, Accessible, Interoperable and Re-usable </td> </tr> <tr> <td> **IPSL** </td> <td> Institut Pierre Simon Laplace </td> </tr> <tr> <td> **QA** </td> <td> Quality assurance </td> </tr> <tr> <td> **NILU** </td> <td> Norsk Institutt for Luftforskning </td> </tr> <tr> <td> **WP** </td> <td> Work package </td> </tr> </table>
# Functional guidance principles
The general approach to data management support for the HEMERA-2020 project is summarized in a data flow diagram (see Fig. 1 below). It is important that the HEMERA-2020 data management strategy be responsive to the needs of the investigators, ensuring that data are accurate and disseminated in a timely fashion. It is also important that the investigators know what is expected of them in this process.

**Figure 1**: HEMERA-2020 data flow

**Step 1**: Products provided by HEMERA-2020 institutions are validated and qualified before being delivered to the AERIS infrastructure. The format of the products is described further in this document.
**Step 2**: Data and information are transferred to the AERIS infrastructure. The transfer procedures will rely on the specifications given by the relevant WP.
**Step 3**: Data and information management: data populate the data repository, and the accompanying information is processed to create the metadata files that populate the HEMERA-2020 catalogue.
**Step 4**: Data and metadata are integrated in long-term sustainable databases.
**Step 5**: Dissemination to end-users. The HEMERA DC infrastructure ensures open access to all data.
**Step 6**: Preservation and backup of data, information, databases and website. The archive, web interface and supporting software will continue to be maintained and updated to ingest new data, and to accommodate changes in the data streams. The archive catalogue record will be maintained to enable dataset-level discovery. These processes will continue until the end of the project, and an infrastructure to ensure the long-term availability of the data to the broader community will be set in place. After the end of the project, the data will remain available on a best-effort basis.
# HEMERA-2020 data sets description
In this chapter we describe the different data sets that have been provided by HEMERA partners. The HEMERA-2020 Data Centre is organized in 2 different databases:

- The Atmospheric database: this database provides a compilation of experimental data obtained from balloon experiments supplied by partners of the consortium.
- The Astrophysical database: this database provides a catalogue of spectrophotometric standard stars characterized by high precision and accuracy. It is associated with a specific website to flag new sources.

These databases already exist and were developed within the AERIS and ASI-INAF data centres respectively. Table 1 gives an overview of the existing data sets that have already been collected by the new DC within HEMERA-2020.
The description of these data sets is given in the following sections.

**Table 1**: Overview of the data sets collected
<table> <tr> <th> **Experimental data sets** </th> <th> **Brief description** </th> </tr> <tr> <td> Atmospheric observations: from balloon-borne experiments </td> <td> These data are time series of radiometric, chemical, or physical variables measured during balloon-borne experiments. These data are in NASA-Ames format </td> </tr> <tr> <td> Astrophysical observations: from balloon-borne experiments </td> <td> These data are in FITS format and specify type of sources, position, intensity and spectral range </td> </tr> </table>

All these data sets have been mainly provided by these European institutions:
- CNRS-LPC2E, France
- CNRS-LATMOS, France
- CNRS-LMD, France
- CNRS-LOA, France
- GSMA, Reims University, France
- IAUG, Frankfurt, Germany
- IUP, Heidelberg, Germany
- KIT, Karlsruhe, Germany
- INAF, Italy
- …

## Products description
### Atmospheric balloon-borne experiments data sets
These data sets mainly consist of stratospheric chemical variables and related quantities measured during balloon-borne experiments; the measured types are the following:

* Ozone
* NOy chemical family
* Cly chemical family
* Bry chemical family
* HOx chemical family
* Dynamic tracers and/or greenhouse gases (CO 2 , CH 4 , N 2 O)
* Aerosols concentration and/or size

For each set of data, several files have to be provided by partners:
- A pdf file that describes the experiment and provides information on the experimental conditions.
- One or several files containing data in a unique format, called the NASA-Ames format. This is an ASCII-based format and is described in Appendix 1. In case several NASA-Ames files are provided, they can be gathered in a zip file.
- A metadata file indexing the datasets.

These data sets have been collected and managed by AERIS.

**Nature and scale of data:** vertical profiles of the key stratospheric species (concentrations of gaseous species) that control the mid-latitude ozone budget. The total data volume of this database is currently less than 200 GB and more than 1000 files.

**To whom the data set could be useful:** These data are of high interest for a large community of users in atmospheric sciences, as well as the private sector. These observations are very useful:
- To compare measurements of the same species recorded by different instruments on the balloon,
- To use them for calibration/validation of satellite measurements,
- To study long time series of measurements,
- To complement spatial or temporal measurements from ground-based and/or satellite instruments.

**Existence of similar data sets?** Atmospheric balloon-borne data sets can be found elsewhere, but this database focuses on molecules of interest for atmospheric sciences. We have identified a set of data (past ENVISAT validation campaign, NILU data centre) within the scope of HEMERA for which interoperability of data and metadata with the HEMERA catalogue will be implemented.
### Astrophysical balloon-borne experiments data sets
These data sets are related to spectrophotometric standard star parameters measured during balloon-borne experiments; the recorded parameters are the following:
- Type of source
- Position
- Intensity
- Wavelength

All these data qualify as HEMERA-2020 data only if:
* The measurement data files are submitted to the HEMERA data centre in “FITS” format, as described in Appendix 2,
* A metadata file indexing the datasets is provided.
**Nature and scale of data**: it is a catalogue of sources with the following information:
- Type of source
- Position
- Intensity
- Wavelength

**To whom the data set could be useful:** These data are of high interest for a large community of users in astrophysical sciences.

**Existence of similar data sets:** Specific databases exist associated with ESA satellite experiments (e.g. INTEGRAL), but no specific database dedicated to balloon-borne astrophysical observations exists.
## Standards and metadata
The supply of detailed metadata is mandatory for datasets - new or existing - to be referenced by the virtual access in the HEMERA-2020 metadata catalogue. Automatic validation processes ensure the quality and the completeness of the provided information. Each metadata record is associated with a unique universal identifier. The specification of the metadata profile per dataset is the following:

* resource title
* resource abstract
* id
* temporal extents
* publications
* links
  o type
  o url
  o name
  o description
* contacts
  o name
  o email
* organization
  o comment
  o address
  o roles
* formats
* data level
* platforms
* parameters
* instrument
* resolution
* type

## Data sharing
**Access procedures:** Through the web interface, the HEMERA data centre will provide a user-friendly, multi-criteria search mechanism to discover and preview the datasets of the catalogue. Interaction with other catalogues is another important point to make our data findable. To achieve this, we will use standards for structuring information (e.g. ISO 19115), for defining vocabularies and for querying our catalogue. The access procedure will be achieved with a shopping-cart mechanism to select datasets found in our catalogue. In addition to the possibility of a direct download, the data centre will propose scripts to execute the download programmatically. Each downloaded file will be an archive containing additional files recalling metadata, licenses and how to cite and acknowledge data. Open source tools to manipulate and plot data and the corresponding documentation will be included in a dedicated page on the data centre. To simplify data retrieval for the users, the HEMERA data centre will use widely used authentication schemes such as ORCID, which is already used in other European research infrastructures.

**Document format and availability:** The data sets are available in their native format through the HEMERA data centre. From there, the full data sets are accessible to internal and external users (in and out of the project), free of charge.
## Archiving and preservation (including storage and backup)
Archiving of the data sets by AERIS guarantees a long-term and secure preservation of the data without any additional cost for the project. This access will remain freely available over the years. Free and open access means unrestricted access at no cost for all interested individuals, whether they are within or outside of the project, but an acceptance of the HEMERA data policy will be required. Access to all data products and tools will be recorded through web-based user statistics for all virtual access activities.
## Data volume
The new Data Centre infrastructure is correctly sized to be able to accommodate the new datasets, including their different versions and a safety copy. The data volume represents less than 200 GB, which is currently very easy to store and archive in multiple locations.
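Purely for illustration, a single record following the metadata profile listed under "Standards and metadata" above might be serialized as in the sketch below; the field spellings and all values are assumptions made for this example, not an actual HEMERA record.

```python
# Illustrative HEMERA-style metadata record; every value is a placeholder.
metadata_record = {
    "resource_title": "Ozone vertical profiles, mid-latitude balloon flight",
    "resource_abstract": "Time series of ozone concentration measured ...",
    "id": "hemera-datb-0001",                     # unique universal identifier
    "temporal_extents": {"start": "2019-06-01", "end": "2019-06-02"},
    "publications": [],
    "links": [{"type": "download",
               "url": "https://example.org/data/hemera-datb-0001",
               "name": "NASA-Ames data file",
               "description": "Level-1 balloon-borne measurements"}],
    "contacts": [{"name": "J. Doe", "email": "j.doe@example.org"}],
    "organization": {"comment": "", "address": "", "roles": ["data provider"]},
    "formats": ["NASA-Ames"],
    "data_level": "1",
    "platforms": ["stratospheric balloon"],
    "parameters": ["O3"],
    "instrument": "ozone sonde",
    "resolution": "10 s",
    "type": "atmospheric",
}
```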
## Data repository description
The directory structure of the data repository is the following:

    /data (root of the hierarchical tree)
    |-- /DATB (Database of Atmospheric balloon-borne experiments)
    |    |-- /ID (ID of the metadata describing the dataset)
    |    |    |-- xxxx.pdf (information file)
    |    |    |-- xxxx.ames (datafile)
    |    |-- /ID …
    |-- /DASB (Database of Astrophysical balloon-borne experiments)
    |    |-- /ID (ID of the metadata describing the dataset)
    |    |    |-- xxxx.pdf (information file)
    |    |    |-- xxx.fits (datafile)
    |    |-- /ID …

## Preliminary data policy
Data are available throughout the project and there is no embargo period; as soon as they are on the website, they can be used by internal and external users. The full data policy will be described later in a new revision of this document, but the main elements of this policy will comprise:

* Data ownership,
* Data curation,
* Data archiving,
* Open access to data.

# FAIR DATA
## Findable data
Each metadata record is associated with a unique universal identifier. This will allow the establishment of an automatic link with “DataCite”. Hence, every dataset will be citable through a DOI. We will use the concept of fragment to precisely cite the different versions of a dataset. Through its web interface, the data centre provides a user-friendly, multi-criteria search mechanism to discover and preview the datasets of our catalogue. Interaction with other catalogues is another important point to make our data findable. To achieve this, we use standards for structuring information (e.g. ISO 19115), for defining vocabularies (e.g. CF Climate and Forecast conventions) and for querying our catalogue (e.g. CSW).
### Atmospheric balloon-borne experiments data sets: criteria list
The data search mechanism is based on this multi-criteria list:
* Parameters
* Platforms
* Instruments
* Time period
* Locations
* Altitudes
### Astrophysical balloon-borne experiments data sets: criteria list
The data search mechanism is based on this multi-criteria list:
- Type of source
- Position
- Intensity range
- Wavelength range
## Openly accessible data
The web interface of the HEMERA DC provides access to all data resulting from the activities of the new infrastructure. This is achieved with a shopping-cart mechanism to select datasets found in our catalogue. In addition to the possibility of a direct download, the data centre proposes scripts to execute the download programmatically. Each downloaded file is an archive containing additional files recalling metadata, licenses and how to cite and acknowledge data. Open source tools to manipulate data and the corresponding documentation are included in a dedicated page on the data centre. To simplify data retrieval for the users, the HEMERA DC uses widely used authentication schemes such as ORCID, which is already used in other European research infrastructures.
## Interoperable data
Just like the metadata, the data are made interoperable by adhering to identified open standards and shared vocabularies in the research community.
## Reusable data
The HEMERA data policy is implemented by the data centre. Its goal is to regulate the sharing of HEMERA data, and it includes information on dissemination, sharing, and potential access restriction. The data policy creation is ongoing, and the policy will also be made publicly available on the HEMERA DC.
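To illustrate how the repository layout described above can feed the catalogue, the sketch below walks the directory tree and collects one entry per dataset ID; paths, file extensions and field names are taken from this section, but the code itself is only an assumed illustration, not the data centre's actual software.

```python
from pathlib import Path

def index_repository(root: str = "/data") -> list:
    """Collect one catalogue entry per dataset ID under /DATB and /DASB."""
    entries = []
    for database in ("DATB", "DASB"):  # atmospheric / astrophysical
        for dataset_dir in sorted(Path(root, database).glob("*")):
            if not dataset_dir.is_dir():
                continue
            entries.append({
                "database": database,
                "id": dataset_dir.name,  # ID of the metadata record
                "info_files": sorted(p.name for p in dataset_dir.glob("*.pdf")),
                "data_files": sorted(p.name for p in dataset_dir.glob("*.ames"))
                            + sorted(p.name for p in dataset_dir.glob("*.fits")),
            })
    return entries
```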
## Data security and long term conservation
The integrity and security of the collected data are ensured by mechanisms that either already exist or are currently being developed in the scope of this project. These mechanisms include multi-site archiving, regular checksums of data files, and automatic data format updates if necessary.
## Organization and human resources
As mentioned in the proposal, the HEMERA-2020 data centre involves staff with complementary knowledge and competences. Precise points of contact for both topical and technical questions are indicated on a dedicated web page. They are accessible either via email or via online forms (helpdesk).

**APPENDIX**

**APPENDIX 1**. Description of the NASA-Ames format
The NASA-Ames format is a text-based, self-describing, portable format. File contents are limited to the printable ASCII character set (ASCII codes 32 to 126). Each NASA-Ames file is made up of a file header section and a data section. The file header contains the information needed to make the file self-describing, as well as giving information such as the origin of the data. Once the form of a file for a particular instrument has been decided, the file header for that instrument changes little from file to file. The data section lists the data in a column-oriented format. For more details see: http://artefacts1.ceda.ac.uk/formats/NASA-Ames/na-brief-guide.html

**APPENDIX 2**. Description of the FITS format
**Flexible Image Transport System** (**FITS**) is an open standard defining a digital file format useful for storage, transmission and processing of data formatted as N-dimensional arrays (for example a 2D image) or tables. FITS is the most commonly used digital file format in astronomy. The FITS standard has special (optional) features for scientific data; for example, it includes many provisions for describing photometric and spatial calibration information, together with image origin metadata. For more details, see: https://fits.gsfc.nasa.gov/fits_standard.html
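Because FITS files are self-describing, their headers and data arrays can be inspected with standard tooling; the short sketch below uses the widely available astropy library (a suggested, not mandated, choice) on a hypothetical file name.

```python
from astropy.io import fits  # pip install astropy

# Open a (hypothetical) FITS file from the astrophysical database and use
# its self-describing structure to reach the header cards and data array.
with fits.open("example.fits") as hdul:
    hdul.info()                     # summary of the HDUs in the file
    header = hdul[0].header        # keyword/value metadata cards
    data = hdul[0].data            # N-dimensional array, e.g. a 2D image
    print(header.get("OBJECT"), None if data is None else data.shape)
```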
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0364_WeObserve_776740.md
# Executive Summary
As a Coordination and Support Action (CSA), WeObserve is part of the Horizon 2020 Open Research Data (ORD) Pilot. The ORD Pilot aims to improve and maximise access to and reuse of research data generated by Horizon 2020 projects. As a requirement of the ORD Pilot, this deliverable provides the initial WeObserve Data Management Plan (DMP) for the first period of the project (18 months), describing the life cycle of the data collected, processed and generated. The deliverable provides an overview of the type of data collected within WeObserve and the plans to facilitate potential reuse of the data while addressing FAIR (Findable, Accessible, Interoperable, Reusable) principles. This deliverable is intended to be a living document that will be updated throughout the lifetime of the project, whenever significant changes arise, e.g., when data sets are added or there are changes in the project that affect the management of the data.
# Project Summary
WeObserve is a Horizon 2020 Coordination and Support Action (CSA) that tackles three key challenges Citizen Observatories face: awareness, acceptability and sustainability. The project aims to improve the coordination between existing Citizen Observatories (COs) and related regional, European and international activities. The WeObserve mission is to create a sustainable ecosystem of COs that can systematically address these identified challenges and help to move citizen science into the mainstream. To achieve this mission, the WeObserve project has identified detailed objectives, which include:

1. Develop communities of practice around key topics to assess the current CO knowledge base and strengthen it to tackle future environmental challenges using CO-driven science,
2. Extend the geographical coverage of the CO knowledge base to new communities and support the implementation of best practices and standards across multiple sectors,
3. Demonstrate the added value of COs in environmental monitoring mechanisms within regional and global initiatives such as GEOSS, Copernicus and the UN Sustainable Development Goals, and
4. Promote the uptake of information from CO-powered activities across various sectors and foster new opportunities and innovation in the business of in-situ earth observation.

To address these objectives, the project is organized into five work packages (WP), within which various forms of data are collected or generated not only by the WeObserve consortium, but also by contributing stakeholders. More specifically, the five work packages and their respective leads are:

WP1: Project Coordination, Management and Support 🡪 IIASA
WP2: Co-create and Strengthen the Citizen Observatories Knowledge Base 🡪 IHE-DELFT
WP3: Stimulate uptake of the Citizen Observatories Knowledge Base 🡪 UNIVDUN
WP4: Facilitate adoption into Earth Observation initiatives 🡪 CREAF
WP5: Dissemination, Communication and Outreach 🡪 ICCS.

During the implementation of the WPs, there are various types of data that are created or collected, and the management life cycle of this data requires careful attention. As such, this deliverable represents the initial WeObserve Data Management Plan (DMP) for the first period of the project (18 months), describing the life cycle of the data collected, processed and generated. This DMP has been developed following the Horizon 2020 guidelines with additional guidance from DMPonline, as suggested by the European Commission.
We provide an overview of the type of data collected within WeObserve and the plans that the project uses to manage and protect the data. The DMP is intended to be a living document where modifications and adaptations are integrated as the project is implemented. Key topics addressed within the DMP include a) a description of the types of data that will be collected, generated, or processed, b) whether and how the data will be made openly accessible, and c) the handling of data during and after the project. Furthermore, WeObserve brings together relevant actors and stakeholders within the domain of citizen science, and therefore personal data is also acquired during the project. Consequently, the consortium has actively taken measures to ensure compliance with the EU General Data Protection Regulation (GDPR) in the use and handling of personal data.
# Data Summary
This section provides a summary of the data within the WeObserve project. The purpose of the datasets within the project is not only to build a communication network with relevant citizen science actors and practitioners, but also to generate a knowledge base to be exploited using FAIR (Findable, Accessible, Interoperable, Reusable) principles. The key datasets associated with the project are outlined in Table 1.

Table 1: Overview of the key datasets within WeObserve
<table> <tr> <th> **WeObserve Datasets** </th> <th> **Related WP** </th> </tr> <tr> <td> **List of newsletter subscribers** \- Data submitted when someone subscribes to the WeObserve newsletter and agrees to the WeObserve privacy policy </td> <td> 5 </td> </tr> <tr> <td> **List of participants in the Communities of Practice (CoPs)** \- Data submitted when someone joins the mailing list of a WeObserve CoP. All CoP members agree to the WeObserve privacy policy and the Terms of Reference (ToR) and guidelines for CoPs (D2.2), and are contacted for F2F forums and on- line meetings </td> <td> 2 </td> </tr> <tr> <td> **Citizen Observatories landscape mapping** \- Dataset associated with the WeObserve EU Citizen Observatories Landscape Report </td> <td> 2 </td> </tr> <tr> <td> **Knowledge Base from activities within CoPs** \- Outcomes of the WeObserve CoPs </td> <td> 2 </td> </tr> <tr> <td> **WeObserve learning programme** \- Toolkits and course material (photos, videos) for the WeObserve Massive Open Online Course (MOOC) </td> <td> 3 </td> </tr> <tr> <td> **WeObserve project outputs (i.e., deliverables)** \- Project deliverables that are not specifically research data </td> <td> All WPs </td> </tr> </table>

The listed datasets support the creation of the _WeObserve knowledge platform_ (https://www.weobserve.eu/), which is the main entry point for the project and the backbone of communication and dissemination activities. Further, the WeObserve consortium is taking measures to ensure that the scientific research data produced within the project will satisfy the relevant criteria defined in the guidelines on data management in Horizon 2020, namely:

1. **Discoverability:** To aid discoverability of the data sets, the project aims to obtain digital object identifiers (DOIs) where possible. In addition, the underlying scientific publications (reports and peer-reviewed journal articles) will cross-reference these data sets.
2. **Accessibility:** To ensure accessibility beyond the duration of the project, the use of proprietary data formats will be avoided.
In addition, wherever web interfaces are applied to display and visualize the data in a convenient way, the underlying data will also be made accessible in numerical form to encourage reuse.
3. **Assessability and intelligibility:** The intention is that data sets will be made available to reviewers of the resulting publications to aid transparency in the review process.
4. **Usability beyond the original purpose for which it was collected:** All data sets produced in WeObserve are intended to be used outside the project as well. Synergies with other COs and citizen-science-related projects have shown that re-analysis and use of the data beyond their original purpose are actively pursued when the data are accessible via the WeObserve knowledge platform.
5. **Interoperability to specific quality standards:** WeObserve outputs (namely, datasets associated with the landscape mapping (WP2)) will use data formats and metadata standards collected in a spreadsheet (Excel format) to maximize interoperability.
2.1 Types of data
## Personal data
A limited amount of personal data is collected within WeObserve for a variety of purposes, which include the WeObserve newsletter, the toolkits survey and the online learning course survey. The types of personal data collected within WeObserve include:
* First Name
* Last Name
* Affiliation
* Email address
## Other types of data
Due to the nature of the WeObserve project, we will collect other types of data such as:
* Data from surveys (toolkits, online course)
* Data from questionnaires (CoP registration)
* Data from the CO landscape inventory
* Digital media data (photos, videos)
* Publications
* Reports & Deliverables
# Data Preservation
The WeObserve project database will be designed to remain operational for 5 years after project end. By the end of the project, appropriate datasets will be transferred to the ZENODO repository, which ensures sustainable archiving of the final research data.
# Data Sharing and Publication
The WeObserve consortium will conform to the Horizon 2020 Open Access mandates, including Gold Open Access and Green Open Access (or self-archiving), for all scientific publications produced. As a minimum, all publications will be available via Green Open Access, e.g., through OpenAIRE, ResearchGate and repositories supported by individual institutions such as IIASA’s own PURE repository. Although some funds have been set aside for Gold Open Access, WeObserve consortium partners will be encouraged to publish via this route, using in-kind contributions from their institutions to fund this where possible. Additionally, metadata will maximise the discoverability of publications and ensure the acknowledgment of EU funding. Bibliographic data mining is more efficient than mining of full-text versions. As such, the inclusion of metadata is necessary for adequate monitoring, production of statistics, and assessment of the impact of H2020. In addition to basic bibliographic information about deposited publications, the following metadata information is expected:

* EU funding acknowledgement
  o “This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no 776740”
* Peer Reviewed type (e.g., accepted manuscript; published version)
* Embargo Period (if applicable):
  o End date
  o Access mode
* Project Information:
  o Grant number: “776740”
  o Name of the action: “Coordination and Support Action”
  o Project Acronym: “WeObserve”
  o Project Name: “An Ecosystem of Citizen Observatories for Environmental Monitoring”
* Publication Date
* Persistent Identifier
* Authors and Contributors. Wherever possible, identifiers should be unique, non-proprietary, open and interoperable
* License (if applicable):
  o Granting appropriate licence options from Creative Commons
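As a purely illustrative sketch of how the transfer to ZENODO mentioned under Data Preservation could be scripted, the snippet below uses Zenodo's public REST deposit API; the endpoint and metadata field names follow Zenodo's published API documentation as we understand it, while the access token, file name and author details are placeholders, and this is not project-mandated tooling.

```python
import requests  # pip install requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "<personal-access-token>"  # placeholder

# 1. Create an empty deposition.
deposition = requests.post(
    ZENODO_API, params={"access_token": TOKEN}, json={}
).json()

# 2. Attach metadata carrying the H2020 information expected above.
metadata = {"metadata": {
    "title": "WeObserve example dataset",
    "upload_type": "dataset",
    "description": "Example deposit with the expected H2020 metadata.",
    "creators": [{"name": "Doe, Jane", "affiliation": "IIASA"}],
    "grants": [{"id": "776740"}],  # EU grant linking, per Zenodo docs
}}
requests.put(f"{ZENODO_API}/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata)

# 3. Upload the data file into the deposition's file bucket.
with open("dataset.csv", "rb") as fh:
    requests.put(f"{deposition['links']['bucket']}/dataset.csv",
                 params={"access_token": TOKEN}, data=fh)
```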
# Data Security
## Personal data
WeObserve research and stakeholder interaction – both online and face-to-face – involve the potential processing of the personal data of participants. Participant consent procedures regarding personal data collection, storage and protection are also described in D6.1 Ethics Requirements Report. Project partners ensure that WeObserve activities are in compliance with national and EU legislation, in particular Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (and in compliance with the H2020 Annotated Model Grant Agreement (AMGA), article 39 on processing of personal data and article 34 on ethics), as well as the European Union GDPR, which replaces the Data Protection Directive 95/46/EC and is designed to harmonize data privacy laws across Europe for the protection of EU citizens’ data privacy.

The Consortium has undertaken all appropriate organizational and technical measures to ensure that the data collected in the framework of the present declaration are processed, for the purposes described above, according to the legislation for the protection and storage of personal data, for a period of 5 years after the end of the WeObserve project. We take appropriate measures to ensure that all personal data are kept secure, including security measures to prevent personal data from being accidentally lost, or used or accessed in an unauthorized way. Those processing user information do so only in an authorized manner and are subject to a duty of confidentiality.

It is noted that, according to the GDPR (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016), participants may exercise the following rights that derive from the Regulation:
* Right of access and right to rectification of inaccurate personal data
* Right to erasure of personal data if they are not necessary for service provision
* Right to restrict processing of their data
* Right to object to the processing of their data
* Right to data portability, namely the right to receive their data in a structured, commonly used and machine-readable form so they can be transferred to another data processor.

Additionally, participants have the right to submit a written complaint to the responsible supervisory body for personal data protection in each country.
# Ethical Aspects
Based on the ethics screening by the European Commission, the project was flagged for two potential issues, namely the involvement of human participants and the protection of personal data. WeObserve abides by the provisions of the currently applicable EU legislation on data protection for the collection and processing of personal data in meetings, surveys, interviews and dissemination activities. All participants are required to provide consent to the terms and conditions and the WeObserve privacy policy for any project data preservation or sharing.
Additional details regarding the ethical considerations, including templates of the consent forms, are provided in D6.1 Ethics Requirements Report.
## _An Ecosystem of Citizen Observatories for Environmental Monitoring_
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0365_WADI_689239.md
**Executive Summary**
This document presents the Initial Data Management Plan (DMP) for the WADI project. WADI has chosen to participate in the extended Open Research Data pilot. Following the recommendations provided by the European Commission (EU, 2016), in the scope of making data FAIR, this DMP provides an initial approach to the following topics:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access
* how data will be curated and preserved (including after the end of the project)
The WADI DMP was prepared using the Digital Curation Centre (DCC) DMPonline tool (https://dmponline.dcc.ac.uk/), which provides a DMP template that matches the demands and suggestions of the Guidelines on FAIR Data Management in Horizon 2020 (EU, 2016).
# Introduction
This document is developed as part of the WADI (Water-tightness Airborne Detection Implementation) project, which has received funding from the European Union’s Horizon 2020 Research and Innovation programme, under the Grant Agreement number 689239. The Data Management Plan (DMP), integrated in Task 10.1 (Communication, Dissemination and Data Management Plan), represents Deliverable 10.2 of Work Package 10 (WP10) – Communication and dissemination. WADI has chosen to participate in the extended Open Research Data pilot. The WADI project includes eleven work packages, of which seven (WP2 to WP7 and WP9) will result in relevant information to be shared with the research and end-user communities. The purpose of the DMP is to support the data management life cycle for all data that will be collected, processed or generated by the WADI project. Following the recommendations provided by the European Commission (EU, 2016), in the scope of making data FAIR (findable, accessible, interoperable and reusable), this DMP provides an initial approach to the following topics:
* the handling of research data during and after the end of the project
* what data will be collected, processed and/or generated
* which methodology and standards will be applied
* whether data will be shared/made open access
* how data will be curated and preserved (including after the end of the project)
More developed versions of the Plan will be released at later stages. The WADI DMP was prepared using the Digital Curation Centre (DCC) DMPonline tool (https://dmponline.dcc.ac.uk/), which provides a DMP template that matches the demands and suggestions of the Guidelines on FAIR Data Management in Horizon 2020 (EU, 2016).
# Data summary
The main objective of WADI is to develop an innovative airborne water leak detection surveillance service to provide water utilities with adequate information on leaks in water infrastructures outside urban areas (rural areas) and optimise their performance in this field.
Throughout the project, data will be produced and reused to support the development of research and innovation activities focusing on: defining end-user requirements for water leak surveillance services (WP2); defining system requirements, including optical device coupling and optimisation, and their application on aerial platforms (WP3); developing cost-effective data processing techniques, reliable data processing and a new interface for end-users (WP4); evaluating WADI’s service feasibility using a performance matrix and its environmental and economic impacts (WP7); validating and demonstrating on both French and Portuguese sites (WP5, WP6); conducting a comprehensive legal and regulatory analysis (WP8); as well as developing pertinent market studies, marketing strategies and business plans (WP9). Table 1.1 summarises the types of data produced / made available in the different work packages of the WADI project. Table 1.2 presents a more detailed description of data and its use (or re-use), formats, standards and metadata, data availability (open, confidential) and the expected resources to store, curate and preserve it (size, backup frequency, resources to maintain it after the project ends, repository choices for open and private data).

<table>
<tr> <th colspan="2"> **TYPE OF DATA** </th> <th> **WP** </th> </tr>
<tr> <td rowspan="8"> **WADI surveillance system development** </td> <td> Hyper-spectral/IR image database </td> <td> WP3 </td> </tr>
<tr> <td> Optimized detection wavelengths </td> <td> WP3 </td> </tr>
<tr> <td> Preliminary data from the platform for data processing development </td> <td> WP3, WP4 </td> </tr>
<tr> <td> Imagery data for tests and validation </td> <td> WP3, WP5, WP6 </td> </tr>
<tr> <td> Flight data of WADI process tests </td> <td> WP5 </td> </tr>
<tr> <td> Flight data of WADI operational and surveillance tests </td> <td> WP6 </td> </tr>
<tr> <td> Multi-spectral/IR image database </td> <td> WP7 </td> </tr>
<tr> <td> Flights results, frequency, instrumentation used, fuel consumed, renting, staff costs </td> <td> WP7 </td> </tr>
<tr> <td rowspan="6"> **End-users demonstration sites information** </td> <td> Network maps </td> <td> WP3, WP4, WP5, WP6, WP7, WP9 </td> </tr>
<tr> <td> Infrastructure data base </td> <td> WP5, WP6, WP7 </td> </tr>
<tr> <td> Water company operational info </td> <td> WP2, WP7 </td> </tr>
<tr> <td> Flow data </td> <td> WP5, WP6, WP7 </td> </tr>
<tr> <td> Water losses data, leaks localisation </td> <td> WP2, WP5, WP7 </td> </tr>
<tr> <td> Ground leak detection investigation </td> <td> WP5, WP6, WP7 </td> </tr>
<tr> <td rowspan="2"> **General information** </td> <td> Environmental data from satellites and other open sources </td> <td> WP7 </td> </tr>
<tr> <td> Socioeconomic public reports analysis </td> <td> WP7 </td> </tr>
</table>

Table 1.1 Summary of types of data to be used / created in different work packages (WP)
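Table 1.2 below repeatedly names GIS deliverables such as SHAPE files and GeoTIFF imagery; purely as an illustration of handling these formats, the sketch below reads both with common open-source Python libraries (geopandas and rasterio are suggested choices, and the file names are placeholders, not project artefacts).

```python
import geopandas as gpd  # pip install geopandas
import rasterio          # pip install rasterio

# Hypothetical water network map delivered as a shapefile (SHAPE format).
network = gpd.read_file("network_map.shp")
print(network.crs, len(network), network.geometry.geom_type.unique())

# Hypothetical airborne image delivered as a GeoTIFF.
with rasterio.open("flight_image.tif") as src:
    band = src.read(1)  # first band as a numpy array
    print(src.crs, src.bounds, band.shape)
```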
1 Summary of types of data to be used / created in different work packages (WP) <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data/ data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that** **will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open** **Access** </th> <th> **Which data sets will be** **classified as** **confidential** </th> <th> **In case of datasets that** **are not shared, reasons for that** </th> <th> **To whom data** **classified as Open** **Access could be useful** </th> <th> **How will open data** **be shared / by whom and where** </th> <th> **How will confidential data be** **archived / by** **whom and where** </th> <th> **How is backup and** **versioning realised?** </th> <th> **When data will be** **produced** </th> <th> **When data will be** **placed in Open** **Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available** **after the end of the project** </th> </tr> <tr> <td> **OFFICE NATIONAL** **D'ETUDES ET DE** **RECHERCHES** **AEROSPATIALES** **(ONERA)** </td> <td> WP3 (task3.1) </td> <td> Hyperspectral/IR image database </td> <td> From Airborne measurement </td> <td> Not available </td> <td> Standards are not available for this type of data. Data formats are owner format of the suppliers : NEO for hyperspectr al camera and FLIR for IR camera </td> <td> Not applicable </td> <td> All </td> <td> Data too complex for others to use it, data sets have very large size, and they cannot be exploitable without specific software owned by ONERA </td> <td> Not applicable </td> <td> Not applicable </td> <td> By ONERA in its own servers </td> <td> Process done by ONERA using their own resources and procedures </td> <td> Data will be produced during WP 3-1 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on ONERA server and repositories </td> </tr> <tr> <td> WP3 (task3.1) </td> <td> Optimized detection wavelengths </td> <td> Exploitation of database image </td> <td> Available data will be reported in WADI report </td> <td> Not applicable, wavelength standards do not exist </td> <td> All data is open access </td> <td> None </td> <td> Not applicable </td> <td> Researcher s, endusers, service providers </td> <td> Can be shared to all at the end of the WP3-1, through WADI repository at ZENODO </td> <td> Not applicable </td> <td> Process done using ZENODO's own procedures </td> <td> Data will be produced during WP 3-1 </td> <td> At the end of the WP3 </td> <td> In conferences presentations </td> <td> Through publication and available through ZENODO during the period that this repository offers for H2020 projects </td> </tr> <tr> <td> WP7 (task 3.1) </td> <td> Multispectral/IR image database </td> <td> Through Airborne measurements achieved on WP5 and WP6 </td> <td> WADI report </td> <td> Standards will be defined later in WP4 with NTGS and LNEC partner </td> <td> Images database </td> <td> Not applicable </td> <td> Not applicable </td> <td> Researcher s, end-users, service providers </td> <td> Can be shared to all at the end of the WP7, using WADI repository at ZENODO </td> <td> Not applicable </td> <td> Process done using ZENODO's own procedures </td> <td> Database will be produced during WP7 </td> <td> At the end of WP7 </td> <td> During WP10 process </td> <td> Through publication and available through ZENODO during the period that this repository offers for H2020 projects 
</td> </tr> </table> Table 1. 2Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data/ data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that** **will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open** **Access** </th> <th> **Which data sets will be** **classified as** **confidential** </th> <th> **In case of datasets that** **are not shared, reasons for that** </th> <th> **To whom data** **classified as Open** **Access could be useful** </th> <th> **How will open data** **be shared / by whom and where** </th> <th> **How will confidential data be** **archived / by** **whom and where** </th> <th> **How is backup and** **versioning realised?** </th> <th> **When data will be** **produced** </th> <th> **When data will be** **placed in Open** **Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available** **after the end of the project** </th> </tr> <tr> <td> **AIR MARINE SARL** **(AIR MARINE)** </td> <td> WP5 (task 5.2) </td> <td> Flight data of WADI process tests (validation of Water leak Airborne Detection on SCP infrastructure) </td> <td> GIS output: data of the mission Onboard mission recording: raw data sensor output and navigation sensor output (synchronised metadata) </td> <td> Internal technical document, proprietary repository </td> <td> SHAPE format + Images (geotiff) </td> <td> None </td> <td> All: WADI consortium restricted </td> <td> Flight data is protected and will be provided to the consortium partners on demand. </td> <td> End-users, researchers, data providers,… </td> <td> Not applicable </td> <td> Confidential data is managed using Air Marine resources </td> <td> Backup and versioning data is managed using Air Marine resources </td> <td> Data produced according to the project schedule (demonstra tion flights in 2018) </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on Air Marine server and repositories </td> </tr> <tr> <td> WP6 (task 6.2) </td> <td> Flight data of WADI operational and surveillance tests (validation of Water leak Airborne Detection on EDIA infrastructure) </td> <td> GIS output: data of the mission Onboard mission recording: raw data sensor output and navigation sensor output (synchronised metadata) </td> <td> Internal technical document, proprietary repository </td> <td> SHAPE format + Images (geotiff) </td> <td> None </td> <td> All: WADI consortium restricted </td> <td> Flight data is protected and will be provided to the consortium partners on demand. </td> <td> End-users, researchers, data providers,… </td> <td> Not applicable </td> <td> Confidential data is managed using Air Marine resources </td> <td> Backup and versioning data is managed using Air Marine resources </td> <td> Data produced according to the project schedule (demonstra tion flights in 2018) </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on Air Marine server and repositories </td> </tr> <tr> <td> **LABORATORIO** **NACIONAL DE** **ENGENHARIA CIVIL** **(LNEC)** </td> <td> WP7 (task 7.3) </td> <td> Infrastructure data for reliability analysis (water levels, discharge flows,…) </td> <td> Field data acquisition </td> <td> Internal endusers reports and databases. 
Will be provided by EDIA and SCP on request. </td> <td> Not applicable </td> <td> None </td> <td> All: Infrastructur e data for reliability analysis (water levels, discharge flows,…) </td> <td> According to the Consortium agreement, infrastructure data is protected and will be provided to the partners on demand. </td> <td> Not applicable </td> <td> Not applicable </td> <td> Infrastructure data is already managed by the end-users using their own resources </td> <td> Not applicable </td> <td> Data already exists. It will be used in WADI as part of the reliability data processing. </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on LNEC server and repositories </td> </tr> <tr> <td> WP7 (task 7.3) </td> <td> Environmental data from satellites and other open sources </td> <td> Remote sensing and field data acquisition </td> <td> In open repositories on the internet (Copernicus) and provided by other endusers (e.g. Water authority in Portugal) </td> <td> Satellite data: standards used by Copernicus; Water authorities: distinct data formats and standards </td> <td> All </td> <td> Not applicable </td> <td> Not applicable </td> <td> End-users, researchers, general public,… </td> <td> Available in open repositories </td> <td> Not applicable </td> <td> Procedures done by the data owners with their own resources </td> <td> Data already exists. It will be used in WADI as part of the reliability data processing. </td> <td> Already available </td> <td> Through websites </td> <td> Depending on data owner policies </td> </tr> </table> Table 1. 2 Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data(continuation) <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data/ data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that** **will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open** **Access** </th> <th> **Which data sets will be** **classified as** **confidential** </th> <th> **In case of datasets that** **are not shared, reasons for that** </th> <th> **To whom data** **classified as Open** **Access could be useful** </th> <th> **How will open data** **be shared / by whom and where** </th> <th> **How will confidential data be** **archived / by** **whom and where** </th> <th> **How is backup and** **versioning realised?** </th> <th> **When data will be** **produced** </th> <th> **When data will be** **placed in Open** **Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available** **after the end of the project** </th> </tr> <tr> <td> **EDIA-EMPRESA DE** **DESENVOLVIMIENTO** **E INTRA-ESTRUTURAS** **DO ALQUEVA** </td> <td> WP3 WP6 </td> <td> and </td> <td> Digital data about EDIA network localisation </td> <td> Working drawings </td> <td> Internal documentation , available through request to EDIA project team, internet EDIA website </td> <td> PDF maps, SHP files or other GIS format </td> <td> All not included as restricted in the consortium agreement </td> <td> Those included as restricted in the consortium agreement </td> <td> Operational data from EDIA </td> <td> Partners, public </td> <td> By EDIA through its website </td> <td> By EDIA in EDIA server </td> <td> Using EDIA own resources </td> <td> In website already available </td> <td> After the analysis of WP6 flights (2019) </td> <td> EDIA 
website </td> <td> Depending on EDIA owner policies </td> </tr> <tr> <td> WP6 </td> <td> </td> <td> Flow data at specific locations </td> <td> Flow meters </td> <td> Internal documentation , available through request to EDIA project team </td> <td> Distinct data formats and standards </td> <td> Restricted in the consortium agreement </td> <td> Those included as restricted in the consortium agreement </td> <td> Operational data from EDIA </td> <td> Partners, public </td> <td> Not applicable </td> <td> By EDIA in EDIA server </td> <td> Using EDIA own resources </td> <td> Data already exists </td> <td> Not applicable </td> <td> Not applicable </td> <td> Depending on EDIA owner policies </td> </tr> <tr> <td> **SOCIETE DU CANAL** **DE PROVENCE ET** **D'AMENAGEMENT** **DE LA REGION** **PROVENCALE SA** **(SCP)** </td> <td> WP3 WP5 </td> <td> and </td> <td> Digital data about SCP network localisation </td> <td> Working drawings </td> <td> Internal documentation , available through request to SCP project team, or available at internet SCP website </td> <td> PDF maps, SHP files or other GIS format </td> <td> Data sets on localisation on 1/25000 scale without associated data (size, material, age…) </td> <td> All others </td> <td> Operational data from SCP </td> <td> Partners, public </td> <td> SCP internet website </td> <td> SCP </td> <td> Using SCP own resources </td> <td> Data already exists </td> <td> Data already exists </td> <td> SCP internet website </td> <td> SCP internet website </td> </tr> <tr> <td> WP5 </td> <td> </td> <td> Leaks localisation </td> <td> Analysis of remote sensing acquisition </td> <td> cf. partners (SGI, GG) who will produce this kind of data </td> <td> cf. partners (SGI, GG) who will produce this kind of data </td> <td> Data sets on localisation on 1/25000 scale in general map (without precise localisation) </td> <td> Data sets with more precise presentatio n </td> <td> Communicatio n management with local residents </td> <td> Partners, public </td> <td> Not applicable </td> <td> SCP, on our GIS and WADI partner who will produce the data </td> <td> SCP, on our GIS and by WADI partners who will produce the data </td> <td> During WP5 </td> <td> After the analysis of WP5 flights (2018) </td> <td> SCP internet website </td> <td> SCP internet website </td> </tr> </table> Table 1. 
2 Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data(continuation) <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data/ data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that** **will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open** **Access** </th> <th> **Which data sets will be** **classified as** **confidential** </th> <th> **In case of datasets that** **are not shared, reasons for that** </th> <th> **To whom data** **classified as Open** **Access could be useful** </th> <th> **How will open data** **be shared / by whom and where** </th> <th> **How will confidential data be** **archived / by** **whom and where** </th> <th> **How is backup and** **versioning realised?** </th> <th> **When data will be** **produced** </th> <th> **When data will be** **placed in Open** **Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available** **after the end of the project** </th> </tr> <tr> <td> **NEW** **TECHNOLOGIES** **GLOBAL SYSTEMS SL** **(NTGS)** </td> <td> WP3 (task 3.2) </td> <td> Preliminary data from the platform for data processing development </td> <td> Engineering </td> <td> Internal reports. Data will be provided by NTGS on request </td> <td> PDF maps </td> <td> none </td> <td> All </td> <td> According to the Consortium agreement, infrastructure data (imagery) is protected and will be provided to the partners on demand. </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data owners in their own repositories </td> <td> NTGS own backup </td> <td> M18 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> WP4 (task 4.3) </td> <td> Preliminary data from the platform for data processing development </td> <td> Design </td> <td> Internal reports. Data will be provided by NTGS on request </td> <td> PDF maps </td> <td> none </td> <td> All </td> <td> According to the Consortium agreement, infrastructure data (imagery) is protected and will be provided to the partners on demand. </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data owners in their own repositories </td> <td> NTGS own backup </td> <td> M19 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> **FUNDACION CIRCE CENTRO DE** **INVESTIGACION DE** **RECURSOS Y** **CONSUMOS** **ENERGETICOS** **(CIRCE)** </td> <td> WP7 (task 7.1) </td> <td> Infrastructure information for building inventories for both demos: Pipes info, pumping consumption, chemical consumption, Flights results, Frequency, instrumentatio n used, fuel consumed, renting, staff costs </td> <td> Information will be obtained from WP 2 </td> <td> Data bases will be internally build and stored according to WP 2 information. 
</td> <td> As in WP2 </td> <td> None, unless end-users (EDIA and SCP) allow sharing the info </td> <td> All data will be considered as confidential unless end-users (EDIA and SCP) allow sharing the info </td> <td> According to the Consortium agreement, infrastructure data is protected </td> <td> Not applicable </td> <td> Not applicable </td> <td> Internally in databases (included in SimaPro software) </td> <td> Backup will be done weekly according to CIRCE procedures </td> <td> Databases will be developed during the WP7 framework, once WP2 tasks start </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> WP7 (task 7.2) </td> <td> Environmental data from satellites and other open sources and socioeconomic public reports analysis </td> <td> Type of ecosystem will be defined according to MAES methodology internally, according to the information obtained from demo sites definition </td> <td> In open repositories on the internet (Copernicus) and provided by other end-users (e.g. water authorities in Portugal and France) </td> <td> Water authorities: distinct data formats and standards </td> <td> All </td> <td> None </td> <td> Not applicable </td> <td> End-users, researchers, local inhabitants </td> <td> By means of open repositories </td> <td> Not applicable </td> <td> Backup will be done weekly according to CIRCE procedures </td> <td> Databases will be developed during the WP7 framework, once WP2 tasks start </td> <td> At the end of the project </td> <td> By means of project website </td> <td> By means of project website </td> </tr> </table> Table 1.2 Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data (continuation) <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data / data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open Access** </th> <th> **Which data sets will be classified as confidential** </th> <th> **In case of datasets that are not shared, reasons for that** </th> <th> **To whom data classified as Open Access could be useful** </th> <th> **How will open data be shared / by whom and where** </th> <th> **How will confidential data be archived / by whom and where** </th> <th> **How is backup and versioning realised?** </th> <th> **When data will be produced** </th> <th> **When data will be placed in Open Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available after the end of the project** </th> </tr> <tr> <td> **SST-CONSULT ADAM STACHEL, RAFAL STANEK, DAVID ANDREW TOFT (SST-Consult)** </td> <td> WP9 (task 9.1) </td> <td> Data on WADI results obtained from validation (WP5&6), results (WP 7) and legal analysis (WP 8). Desk study for market conditions. </td> <td> Data on WADI results obtained from validation (WP5&6), results (WP 7) and legal analysis (WP 8) </td> <td> Internal project documentation and for communication with funders. Not publicly available directly from project.
</td> <td> Distinct data formats and standards </td> <td> Restricted in the consortium agreement </td> <td> Those included as restricted in the consortium agreement </td> <td> Operational data from end-users </td> <td> Partners, public </td> <td> Not applicable </td> <td> In end-users' servers </td> <td> Using end-users' own resources </td> <td> During WP9 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on end-users' servers and repositories </td> </tr> <tr> <td> WP9 (task 9.2) </td> <td> Data on WADI results obtained from validation (WP5&6), results (WP 7) and legal analysis (WP 8). Desk study for market conditions. </td> <td> Data on WADI results obtained from validation (WP5&6), results (WP 7) and legal analysis (WP 8) </td> <td> Internal project documentation and for communication with funders. Not publicly available directly from project. </td> <td> Distinct data formats and standards </td> <td> Restricted in the consortium agreement </td> <td> Those included as restricted in the consortium agreement </td> <td> Operational data from end-users </td> <td> Partners, public </td> <td> Not applicable </td> <td> In end-users' servers </td> <td> Using end-users' own resources </td> <td> During WP9 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on end-users' servers and repositories </td> </tr> <tr> <td> WP9 (task 9.3) </td> <td> Based on market strategy </td> <td> Based on market strategy </td> <td> Based on market strategy </td> <td> Distinct data formats and standards </td> <td> Restricted in the consortium agreement </td> <td> Those included as restricted in the consortium agreement </td> <td> Operational data from end-users </td> <td> Partners, public </td> <td> Not applicable </td> <td> In end-users' servers </td> <td> Using end-users' own resources </td> <td> During WP9 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Data archive on end-users' servers and repositories </td> </tr> </table> <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data / data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open Access** </th> <th> **Which data sets will be classified as confidential** </th> <th> **In case of datasets that are not shared, reasons for that** </th> <th> **To whom data classified as Open Access could be useful** </th> <th> **How will open data be shared / by whom and where** </th> <th> **How will confidential data be archived / by whom and where** </th> <th> **How is backup and versioning realised?** </th> <th> **When data will be produced** </th> <th> **When data will be placed in Open Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available after the end of the project** </th> </tr> <tr> <td> **Galileo Geosystems S.L. (GG)** </td> <td> WP3 (task 3.3) </td> <td> Imagery data for tests and validation </td> <td> RPAS data acquisition </td> <td> Will be provided by GG on request </td> <td> End-users' standards for geographic information </td> <td> None </td> <td> All: high resolution imagery from infrastructures and preliminary data report </td> <td> According to the Consortium agreement, infrastructure data (imagery) is protected and will be provided to the partners on demand.
</td> <td> End-users, researchers </td> <td> Available in restricted repositories </td> <td> High resolution imagery archived by end-users </td> <td> Procedures done by end-users with their own resources </td> <td> M18 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> WP5 (task 5.2) </td> <td> Imagery data for tests and validation </td> <td> RPAS data acquisition </td> <td> Will be provided by GG on request </td> <td> End-users' standards for geographic information </td> <td> None </td> <td> All: high resolution imagery from infrastructures and preliminary data report </td> <td> According to the Consortium agreement, infrastructure data (imagery) is protected and will be provided to the partners on demand. </td> <td> End-users, researchers </td> <td> Available in restricted repositories </td> <td> High resolution imagery archived by end-users </td> <td> Procedures done by end-users with their own resources </td> <td> M23 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> WP6 (task 6.2) </td> <td> Imagery data for tests and validation </td> <td> RPAS data acquisition </td> <td> Will be provided by GG on request </td> <td> End-users' standards for geographic information </td> <td> None </td> <td> All: high resolution imagery from infrastructures and preliminary data report </td> <td> According to the Consortium agreement, infrastructure data (imagery) is protected and will be provided to the partners on demand. </td> <td> End-users, researchers </td> <td> Available in restricted repositories </td> <td> High resolution imagery archived by end-users </td> <td> Procedures done by end-users with their own resources </td> <td> M31 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> **SGI STUDIO GALLI INGEGNERIA SRL (SGI)** </td> <td> WP1 </td> <td> Commercial (expectations from WADI) </td> <td> Questionnaires </td> <td> Aggregated analysis, rough data; internal repository </td> <td> None </td> <td> None </td> <td> All </td> <td> Either part of Partners' (EDIA and SCP) commercial strategy and competitiveness or protected according to the Consortium Agreement </td> <td> Not applicable </td> <td> Not applicable </td> <td> Questionnaires archived by end-users and SGI </td> <td> Not applicable </td> <td> Within M2 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> <tr> <td> WP2 </td> <td> Water losses data, water company operational info </td> <td> Questionnaires </td> <td> Aggregated analysis; internal repository </td> <td> None </td> <td> None </td> <td> All </td> <td> Either part of Partners' (EDIA and SCP) commercial strategy and competitiveness or protected according to the Consortium Agreement </td> <td> Not applicable </td> <td> Not applicable </td> <td> Questionnaires archived by end-users and SGI </td> <td> Not applicable </td> <td> Within M4 </td> <td> Not applicable </td> <td> Not applicable </td> <td> Not applicable </td> </tr> </table>
Table 1.2 Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data (continuation) <table> <tr> <th> **Partner** </th> <th> **Work Package** </th> <th> **Type of data / data description** </th> <th> **Methodology of data production** </th> <th> **Data documentation and how that will be made available** </th> <th> **Data standards** </th> <th> **Which data sets will be classified as Open Access** </th> <th> **Which data sets will be classified as confidential** </th> <th> **In case of datasets that are not shared, reasons for that** </th> <th> **To whom data classified as Open Access could be useful** </th> <th> **How will open data be shared / by whom and where** </th> <th> **How will confidential data be archived / by whom and where** </th> <th> **How is backup and versioning realised?** </th> <th> **When data will be produced** </th> <th> **When data will be placed in Open Access** </th> <th> **How will the data be disseminated** </th> <th> **How will data be available after the end of the project** </th> </tr> <tr> <td> **SGI STUDIO GALLI INGEGNERIA SRL (SGI)** </td> <td> WP5 </td> <td> Network maps, infrastructure dB, flow measurements, GLD investigation </td> <td> The pilot area/s will be identified with SCP. SCP will provide network maps, infrastructure characteristics and flow data through GIS, dB, xls or other media according to availability. Investigation will be planned by SGI together with SCP, who will provide gauged data accordingly. SGI will process investigation data and submit relevant report. </td> <td> Documented through maps, spreadsheets, dB </td> <td> To be defined </td> <td> To be defined </td> <td> To be defined </td> <td> Operational data from Water Companies </td> <td> Anyone interested in a direct check of WADI performance </td> <td> WADI repository at ZENODO </td> <td> Internal WADI project repository </td> <td> To be defined </td> <td> M20-M23 </td> <td> M23 </td> <td> Training, public reports, conferences </td> <td> Reports, conference / training papers, dissemination material </td> </tr> <tr> <td> WP6 </td> <td> Network maps, infrastructure dB, flow measurements, GLD investigation </td> <td> The pilot area/s will be identified with EDIA. EDIA will provide network maps, infrastructure characteristics and flow data through GIS, dB, xls or other media according to availability. Investigation will be planned by SGI together with EDIA, who will provide gauged data accordingly. SGI will process investigation data and submit relevant report. </td> <td> Documented through maps, spreadsheets, dB </td> <td> To be defined </td> <td> To be defined </td> <td> To be defined </td> <td> Operational data from Water Companies </td> <td> Anyone interested in a direct check of WADI performance </td> <td> WADI repository at ZENODO </td> <td> Internal WADI project repository </td> <td> To be defined </td> <td> M24-M30 </td> <td> M23 </td> <td> Training, public reports, conferences </td> <td> Reports, conference / training papers, dissemination material </td> </tr> </table>
Table 1.2 Description of data to be used / created in different work packages, including standards and metadata, availability, storing resources for open and confidential data (continuation)

# FAIR data

## Making data findable, including provisions for metadata

WADI will produce and reuse a variety of data types, from images to time series and georeferenced information in GIS format, covering a broad range of areas (remote sensing, hydrology, hydraulics, ...). Metadata will be produced for all data, using standards when available. Standards such as ISO 19115 (http://rd-alliance.github.io/metadata-directory/standards/iso-19115.html) or the OGC Sensor Observation Service (SOS) Interface Standard (http://rd-alliance.github.io/metadata-directory/standards/observations-and-measurements.html) are expected to be adopted. Consistency between metadata for similar data sets will be sought when standards are not available. Elements to be included in the metadata are a clear description of the data, the institution and person of contact responsible for the data creation, its format, creation date and possible modifications, data units and georeferencing (when applicable), and a number of keywords (metatags). Adequate keywords will be chosen to promote and ease the discoverability of the data. These keywords will include a number of fixed, common keywords in WADI's scientific area and several new, free keywords that can help attract researchers from other areas to use and adapt WADI's results to their scientific fields.

For all open data in the project, ZENODO will be used as the project's open data repository. ZENODO provides Digital Object Identifiers for all data sets, thus guaranteeing that all open data in WADI will have persistent and unique identifiers. For consistency and promotion of data discovery, consistent naming conventions will also be used and agreed among the partners (to be defined later). Open access publication will also be sought, with direct links to the underlying data sets deposited in ZENODO. LNEC will be responsible for uploading data and other items to ZENODO, through the project's designated data manager. Each partner will provide the datasets and publications to be integrated in the ZENODO repository to the data manager, duly informed of their access policy. The datasets and the expected time of availability and access policies are described in Table 1.2. For publications subject to embargo periods (due to the publishers' policies), the data manager will upload them to ZENODO as soon as the embargoes are finished. The coordinator will inform LNEC as soon as the datasets are delivered to youris.com, for data management monitoring purposes.

In the scope of the surveillance system developed in WADI, a succession of data sets will be produced, creating several databases of images at different stages of development and processing, from the raw data from cameras to the processed and quality-certified images included in the end-user application. This sequence can be labelled as several versions of a single dataset, or it can be identified and managed as different datasets. Regardless of the approach chosen by the partners for this data, a clear versioning policy will be adopted and linked with detailed metadata and supporting documentation.

## Making data openly accessible

WADI will create or reuse a variety of data sets, which have different natures and correspondingly distinct access privileges. Part of these privileges were set up in the project's Consortium Agreement.
These different access privileges are described in detail in Table 1.2 and are reviewed here in a concise manner. Table 1.2 also provides a detailed description of all aspects related to dataset management. A short overview of data access policies and availability is presented here.

Regarding end-users' infrastructure data (such as digital information on the networks' location in very fine detail, customers' data and the location of leaks), these are classified as confidential in the Consortium Agreement to protect personal information (as described in D11.1 POPD - Requirement No. 3) as well as the security of the companies' assets. Infrastructure data at a less detailed scale (e.g., 1/25000, without sensitive attributes) is already openly available at the companies' websites, thus fulfilling its usefulness for scientific or other public purposes. Classified data are kept in the companies' own repositories, fulfilling their own policies on data backup and preservation, and will be maintained by these entities after the end of the project.

Regarding the data from the surveillance system developed in WADI (Tables 1.1 and 1.2), different access policies are defined according to the Consortium Agreement. These data sets mostly comprise several databases of images at different stages of development and processing, ranging from the raw data derived from the hyperspectral, multi-spectral and I/R cameras to the processed and quality-certified images. The raw data from the several types of cameras are very large and require specialized, proprietary software to be accessed. Therefore, they are not made openly available, although they will be fully documented in reports and scientific publications. In contrast, the optimised detection wavelengths (developed in WP3) and the multi-spectral/IR image database to be developed during the demonstrations in Portugal and France will be openly available through ZENODO. In-situ ground data are also part of the WADI service, but given the information's sensitivity and the restrictions posed by the infrastructure's competitive operation, they will not be open. Like all confidential data in WADI, their preservation and maintenance during and after the project will be handled by the data owners and/or the end-users.

Deposits in ZENODO will include the data, their metadata and their documentation. For most data sets, access is granted through general-use software such as ArcGIS or similar.

## Making data interoperable

All data developed in WADI will be fully documented and accompanied by detailed metadata supported by a set of select keywords, to facilitate automatic discovery and integration of WADI data for other purposes. Besides the usual metadata fields, technical aspects such as units (complying with SI standards) and spatial and temporal references will be supplied. All data will be provided in generally used extensions, adopting well-established formats (CSV, shapefiles, image formats, ...) whenever possible, which will also facilitate their use by other parties. The exception will be the raw data from the optical devices, which can only be interpreted using ONERA's own software.

## Increase data re-use

Open data availability will occur as soon as possible in WADI while respecting the team's publication targets. Typically, open data will be available for publication in ZENODO at the end of the respective WP (Table 1.2), and its publication will occur within one month.
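By way of illustration, the following is a minimal sketch of the kind of metadata record that could accompany a WADI dataset deposit in ZENODO. The field names follow Zenodo's deposit metadata schema; all values are hypothetical examples and do not anticipate the naming convention still to be agreed among the partners.

```python
# Illustrative Zenodo deposit metadata for a WADI open dataset.
# All titles, names and keywords below are placeholder examples.
deposit_metadata = {
    "metadata": {
        "title": "WADI WP5 demonstration - processed multi-spectral images (example)",
        "upload_type": "dataset",
        "description": "Processed and quality-certified images from the WP5 "
                       "demonstration flights, with units and georeferencing.",
        "creators": [{"name": "Surname, Name", "affiliation": "Partner institution"}],
        # Fixed project keywords plus free keywords to attract other fields
        "keywords": ["WADI", "water leak detection", "remote sensing",
                     "multi-spectral imaging", "water distribution network"],
        "communities": [{"identifier": "h2020-wadiproject"}],
    }
}
```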
The team expects that this fast publication of data created in WADI will promote its reuse by other researchers and end-users, thereby contributing to the dissemination of the WADI methodology and tools. The usefulness of the data for third parties is closely linked to the perception of quality and robustness of the available data. Therefore, a dedicated task was defined in WP4 to validate data and standards compliance, setting the stage for the data reliability analysis performed in WP7. In this last analysis, other sources of data will be combined with images to provide a quality index for the leak detection data generated in WADI. All these methods are expected to contribute to the long-term usefulness of the data.

# Allocation of resources

In WADI there is a considerable amount of confidential data (for the reasons given above). This data will be managed by the partners responsible for its creation and/or by the end-users (Table 1.2). Therefore, its maintenance, backup, versioning and long-term preservation will be guaranteed by their own resources and at their own expense.

A repository in ZENODO was created (https://zenodo.org/communities/h2020-wadiproject/) for the project's open data, thereby ensuring data availability, backup and versioning. Long-term preservation will be guaranteed for the lifetime of the ZENODO repository (https://zenodo.org/policies). This is currently tied to the lifetime of ZENODO's host laboratory, CERN, which has an experimental programme defined for at least the next 20 years. After the end of that period, the data will only be kept on the data owners' servers and repositories.

Publications featuring the data will be produced in the project (specifically by the research partners) and will be made available through open access (using open access journals or journals selected for a short embargo period). This channel will provide long-term availability of the data and data analysis.

The partners expect the data to have immense value in the short to medium term to support the optimised management of water mains and to reduce water losses. The methodology and data products developed in the project are expected to have a large impact in this field, through generalised application at European and other international sites. Moreover, we expect that the scientific and technological developments may serve as a basis in other fields of application, with the data demonstrating and promoting WADI's exceptional quality of service. The long-term usefulness of the data collected during WADI will depend on the technological advances in this area. The resolution achieved with today's technology may prove lacking in the future, given the advances in camera quality and the supporting data infrastructure.

# Data security

Open data security will be addressed in WADI by taking advantage of ZENODO's services of secure storage, backup and preservation and protected transfer mechanisms. Regarding confidential data, different approaches will be used by each data-owner institution, but common rules apply. Data will be housed on servers under the direct management of each institution's personnel, installed in already provisioned data centres. These data centres are expected to be equipped with various features, ranging from secure physical access and air conditioning to generators and fire-extinguishing measures. Typically, hardware and electricity failures are addressed with redundant hardware and generators.
Access to data under different permission conditions (read-only, read-write, etc.) is granted to users and authorized computers by project managers, or by whomever this task is delegated to, according to a well-defined protocol. Confidentiality is assured by additional methods, encryption and anonymization to name a few, depending on the data's nature and final applications.

Taking into account the size of the data at stake, which requires regular backup (whether as protection against hardware failure or for archival purposes), a sequence of regular full backups, differential backups and incremental backups on an increasingly frequent basis is envisaged, following already installed procedures (a minimal restore-logic sketch is given at the end of this document). The physical media used to store the data will be maintained in secure locations. Access to these backups is limited to the personnel authorised to use the backup system and, as a general rule, not granted to external parties. All data transfers should be encrypted to render stolen or lost data useless. Encryption methods are to be specified at a later date.

# Ethical aspects

The WADI partners are to comply with the ethical principles set out in Article 34 of the Grant Agreement, which states that all activities must be carried out in compliance with:

1. Ethical principles (including the highest standards of research integrity – as set out, for instance, in the European Code of Conduct for Research Integrity – and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct)

2. Applicable international, EU and national law.

Activities raising ethical issues must comply with the "ethics requirements" set out in Annex 1 of the Grant Agreement. WP11 "Ethics requirements" aims to follow up the ethical issues applicable to the WADI project implementation. It includes: informed consent procedures that will be implemented by each partner that foresees activities requiring such procedures (Deliverable 11.1); administrative clearance procedures and approvals for flights to be carried out in WP3 in France and Portugal (Deliverable 11.2); copies of confirmation/notification by the National Data Protection Authority (Deliverable 11.3); and information on the procedures implemented for data collection, storage, protection, retention and destruction, and confirmation of compliance with national and EU legislation (Deliverable 11.4).

Legal and regulatory issues also have a dedicated WP (WP8 "Legal and regulatory aspects analysis, including IPR protection"). This WP will provide legal guidance to the project partners, in particular with regard to the authorisations for executing the flight tests in a real environment (WP5 and WP6 pilots), compliance with relevant rules and procedures such as the applicable rules for privacy protection and the processing of personal data in relation to the use of unmanned airborne vehicles, and all issues regarding the intellectual property rights of the project results. It will also develop the legal and regulatory framework for the commercialisation of the project results. WP8 includes: Project legal framework (Deliverable 8.1); WADI service provision legal framework (Deliverable 8.2); WADI service provision terms and conditions (Deliverable 8.3).
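Returning to the backup sequence described under "Data security" above, the sketch below illustrates, under purely hypothetical names and dates, how the three backup levels combine at restore time: the last full backup, the last differential taken after it, and every incremental taken since.

```python
from datetime import datetime

# Hypothetical backup catalogue: e.g. monthly full, weekly differential,
# daily incremental, matching the "increasingly frequent basis" described above.
backups = [
    {"type": "full",         "taken": datetime(2018, 1, 1)},
    {"type": "differential", "taken": datetime(2018, 1, 14)},
    {"type": "incremental",  "taken": datetime(2018, 1, 18)},
    {"type": "incremental",  "taken": datetime(2018, 1, 19)},
]

def restore_chain(backups):
    """Return the backups needed to restore the latest state."""
    last_full = max(b["taken"] for b in backups if b["type"] == "full")
    diffs = [b["taken"] for b in backups
             if b["type"] == "differential" and b["taken"] > last_full]
    anchor = max(diffs) if diffs else last_full
    chain = [b for b in backups
             if b["taken"] == last_full
             or (b["type"] == "differential" and b["taken"] == anchor)
             or (b["type"] == "incremental" and b["taken"] > anchor)]
    return sorted(chain, key=lambda b: b["taken"])

print([b["type"] for b in restore_chain(backups)])
# -> ['full', 'differential', 'incremental', 'incremental']
```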
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0367_ACTRIS PPP_739530.md
# 1 Abstract

This document defines the data policy and data management procedures in the Horizon 2020 INFRADEV-2 project #739530, Aerosols, Clouds and Trace gases Research Infrastructure – Preparatory Phase Project (ACTRIS PPP). This is a living document, expected to be updated during ACTRIS PPP. This document will also be submitted to the European Commission as ACTRIS PPP deliverable 9.3. The later updated versions will be for the use of ACTRIS PPP partners.

# 2 ACTRIS and ACTRIS PPP

Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS) is a pan-European research infrastructure, identified as a new Research Infrastructure in the ESFRI Roadmap 2016. ACTRIS will provide a four-dimensional picture of short-lived atmospheric components (aerosols, reactive trace gases and cloud microphysics) that affect air quality, climate and meteorology. ACTRIS will consist of numerous observatory stations and atmospheric research chambers in various European countries, and coordinated Central Facilities assuring data quality, data storage and access to ACTRIS data and services.

ACTRIS Preparatory Phase Project (ACTRIS PPP) is a coordination and support project for the implementation of ACTRIS. It runs from January 2017 to December 2019 and is funded by the European Commission under the INFRADEV-2 instrument (Grant #739530). ACTRIS PPP aims to set up the organizational and operational frameworks of ACTRIS, making it ready to be founded as an independent legal entity.

## 2.1 Definitions and key terminology

In this document the key terminology is used with the following definitions:

* Raw data: Raw and cleaned data produced by measurements, surveys, or similar methods.
* Information: Raw data that has been processed, interpreted, and put into context to make it more understandable. This also includes contact lists.
* Knowledge: Conclusions and reports based on information. This includes ACTRIS PPP deliverables and other documents produced or obtained during the project. All these are discussed in this document as data, unless specified otherwise.
* Personal data: Any data or information linked to an identifiable individual person. This includes names, personal contact information, etc.

For clarity, the key terminology for ACTRIS is used in this document as follows:

* ACTRIS PPP refers to the ACTRIS Preparatory Phase Project (H2020-INFRADEV-2016-2017 GA 739530).
* ACTRIS refers to the infrastructure to be implemented.
  * ACTRIS community refers to all research institutions providing data or services within ACTRIS.
  * Interim ACTRIS Council (IAC) is the council of ministry representatives and funding agency representatives nominated by ministries. This is the highest decision-making body in ACTRIS for the implementation of the infrastructure.

# 3 Scope of ACTRIS PPP data management plan

The scope of this data management plan is to describe the data sets produced in ACTRIS PPP, and the methods and policies used for managing them. ACTRIS PPP participates in the Open Research Data Pilot (ORDP), which is also taken into account in the data management plan. This data management plan is a living document, and will be updated during ACTRIS PPP. The research data collected within the H2020 INFRAIA project ACTRIS-2 or in other ACTRIS-related projects is not within the scope of this data management plan, nor is the research data which is to be collected within the research infrastructure ACTRIS once it is operational.
During ACTRIS PPP the main party responsible for ACTRIS PPP data management is the ACTRIS PPP office at the Finnish Meteorological Institute, the party coordinating ACTRIS PPP. At the end of ACTRIS PPP the responsibilities of storing and disseminating ACTRIS PPP data will be transferred to the party coordinating the implementation of ACTRIS at that time. When an ACTRIS legal entity is established, the rights and responsibilities will be transferred to the ACTRIS legal entity.

## 3.1 ACTRIS PPP data

The data collected in ACTRIS PPP is not typical atmospheric research data, but rather data on the ACTRIS users and implementers, and their views on the implementation, provision and use of ACTRIS services. This data originates from surveys made during ACTRIS PPP and from the institutions participating in the project. ACTRIS PPP data also includes a lot of contact information for different organizations and individuals. As ACTRIS PPP collects no research data, the total amount of data is estimated to be small, less than 1 GB. For this reason there are no resources allocated specifically for ACTRIS PPP data management, but the data management tasks are included in the work packages collecting the data, and in the general project management in WP9.

## 3.2 Types of data collected or produced within ACTRIS PPP

There are three types of data collected within ACTRIS PPP. These are:

* Documents, including, but not limited to, project documentation, project deliverables and other documents produced in the project, contracts, and CVs. Documents prepared for / by the Interim ACTRIS Council (IAC) also belong to this category.
* Contact information of ACTRIS PPP beneficiaries, linked third parties and associated partners, national ACTRIS contact persons and organizations related to ACTRIS. These lists include names, affiliations and e-mail addresses of individual persons working at the institutes.
* Raw and processed data collected through surveys and interviews within ACTRIS PPP. This data contains views on how ACTRIS should be organized, and on what socio-economic impact ACTRIS services have now and in the future.

The management policies and procedures of these three types of data differ from each other in some aspects, but follow the same general ACTRIS PPP data policy described in the next chapter. The three data types are then described in detail one by one.

# 4 General data policy in ACTRIS PPP

The main principle of ACTRIS, and also of ACTRIS PPP, is to have as open a data policy as possible, without compromising the protection of personal and confidential data. The data policy described here affects data collected and produced within the framework of ACTRIS PPP. It does not affect other ACTRIS data. For ACTRIS PPP the data policy is as follows:

1. Within ACTRIS PPP data is collected for two reasons:
   * To guide and progress the implementation of ACTRIS.
   * To ensure the smooth implementation of ACTRIS PPP.
2. If and when the collected data relates to individual persons, the persons shall be informed about the purpose for which the data is collected and how it will be used.
3. All data should be anonymized by removing any parameters making the data identifiable to individual persons from the rest of the data at as early a stage as possible.
4. Retention of data must be carefully considered, and if some data are no longer necessary they shall be destroyed.
   * All personal data that is limited to ACTRIS PPP shall be destroyed after the end of the project.
   * If any data other than personal data collected during ACTRIS PPP remains relevant to ACTRIS after the project, it shall be transferred to the party coordinating ACTRIS after ACTRIS PPP, which will store and use it under the same or similar conditions as ACTRIS PPP.
5. Data security and storage must be in accordance with the sensitivity of the data.
6. Access to data shall be secured, well defined and in accordance with long-term preservation.
7. All project deliverables are public, unless marked confidential in the Grant Agreement. The deliverables will be openly available on the ACTRIS web site _www.actris.eu_.
8. Other project documents than deliverables are public, unless they contain sensitive data or are declared confidential by the European Commission or by the Interim ACTRIS Council. By default the project documentation shall be available on the ACTRIS web site _www.actris.eu_.
9. All data sets containing personal data shall be characterized in a form stating the type, collection method and purpose of the data, and where and for how long the data will be stored. This form shall also identify who is in charge of keeping the data set and who has access to it (an illustrative sketch of such a form is given at the end of Chapter 4.1).
10. This data policy comes into force at the time the ACTRIS PPP data management plan is submitted and distributed to ACTRIS PPP beneficiaries. It does not affect actions taken before that time, but does affect later actions for the data collected before that time.

## 4.1 FAIR data

ACTRIS PPP participates in the Open Research Data Pilot, which requires a FAIR data policy (findable, accessible, interoperable and re-usable research data). As most ACTRIS PPP data is not research data, the concept of FAIR data is not applicable as such for most of it. The only ACTRIS PPP data that can be considered research data are the answers to the different questionnaires conducted within ACTRIS PPP. This data contains personal data when collected, but shall be anonymized at as early a stage as possible. The following sub-chapters address only this data.

### 4.1.1 Making the data findable

The data will be accompanied by metadata clarifying the meaning of the data and how the data has been collected. The metadata can be provided without the actual data, if requested. Data set naming should clearly describe the content of the data.

### 4.1.2 Making the data accessible

As the estimated interest in the survey data outside the ACTRIS community is very limited, these data will be available for use outside ACTRIS PPP or ACTRIS only upon request made to the ACTRIS PPP office or, later, to the party coordinating ACTRIS. This data is very different from the observational data produced by ACTRIS, and does not fit into the concept of the ACTRIS Data Centre.

### 4.1.3 Making the data interoperable

The data will be stored in a format readable by commonly used data management tools or office software. The data sets are small and specified, so direct automatic interoperability of the data with other external data sets is not sought.

### 4.1.4 Increase data re-use

As the processed data will be public, no licences are needed for re-use of the data, as long as the data source is acknowledged.

### 4.1.5 Allocation of resources

The amount of data produced by ACTRIS PPP is rather small, and therefore there are no resources specifically allocated for making the data FAIR. The resources for data collection are allocated in the work packages and tasks collecting the data. Making the data available is the responsibility of work package 9 (ACTRIS PPP management).
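To make item 9 of the general data policy and the metadata foreseen in 4.1.1 concrete, the following is a minimal sketch of a data-set description form for a survey containing personal data. All field values are hypothetical examples.

```python
# Illustrative data-set description form (policy item 9); values are examples only.
dataset_description = {
    "name": "ACTRIS PPP WP2 user survey 2017",
    "data_type": "survey answers (anonymized after collection)",
    "collection_method": "online questionnaire",
    "purpose": "input for defining the ACTRIS organizational framework",
    "storage": "work package leader's institute; back-up at ACTRIS PPP office",
    "retention": "personal data destroyed at project end; anonymized data kept",
    "responsible": "leader of the work package conducting the survey",
    "access": "survey team; anonymized data available on request",
    "keywords": ["ACTRIS", "research infrastructure", "user survey"],
}
```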
## 4.2 GDPR compliance

The ACTRIS PPP data management plan aims to fulfil the requirements of EU Regulation 2016/679, the General Data Protection Regulation (GDPR), taking effect on 25 May 2018. This chapter describes the ACTRIS PPP data management procedures from a GDPR point of view. As the GDPR applies only to personal data, the following protocols are not required for non-personal or anonymized data. It is worth noting that most of the personal data collected during ACTRIS PPP include only names, work positions and contact details. CVs of individual persons include more personal data. The position of data protection officer will be established at FMI in autumn 2017, after which his/her approval for this data management plan will be sought.

### 4.2.1 Single set of rules and one-stop shop

As FMI is coordinating ACTRIS PPP, the project data management is under the supervision of the Finnish GDPR Supervisory Authority (SA). For individual tasks collecting and using personal data in other countries, the data management is under the national SAs of those countries.

### 4.2.2 Responsibility and accountability

The responsibility for the protection and use of personal data lies with the party collecting the data. For surveys and questionnaires it is the leader of the work package conducting the survey. The survey and questionnaire answers shall be anonymized at as early a stage of the process as possible, and data making it possible to connect the answers to individual persons shall be destroyed. The party responsible for e-mail lists etc. is the ACTRIS PPP office at FMI.

### 4.2.3 Consent

The consent of the survey participants will be requested in all surveys conducted within ACTRIS PPP. This will include a description of how and why the data is to be used. The survey participants will not include children or other groups needing a supervisor. More information on the survey and consent procedures is provided in ACTRIS PPP deliverables 10.1 and 10.2. Also, when asking for someone's contact information, the requesting party shall explain why this information is needed.

### 4.2.4 Data Protection Officer

ACTRIS PPP will not have its own data protection officer, but will use the services and expertise of the FMI data protection officer, to be appointed in autumn 2017.

### 4.2.5 Pseudonymisation

Due to the limited amount and less harmful nature of the personal data that is collected within ACTRIS PPP, no pseudonymisation will be used. Data will be protected by other means of data security.

### 4.2.6 Data breaches

In case of data breaches, the person responsible for the breached data shall notify both the national SA and the ACTRIS PPP office as soon as possible, and at the latest within 72 hours. The individuals whose personal data were breached shall also be notified without undue delay. It is worth noting that, due to the nature of the personal data collected during ACTRIS PPP, the damage that can be caused by a data breach is expected to be limited.

### 4.2.7 Right to erasure

If a person wishes his/her personal data to be erased, that can and shall be done. It is easy to do from the contact lists controlled by the ACTRIS PPP office or the WP leaders conducting surveys. If a person wants his/her personal data to be removed from a survey, the non-personal data shall remain in the analysis of the survey.

### 4.2.8 Data portability

By default the personal data collected within ACTRIS PPP will be in electronic form, mostly in the Microsoft Excel file formats .xls or .xlsx. These files can be read by Microsoft Excel, which is commonly used worldwide.
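As a minimal sketch of how such a portability request could be served from the Excel files described above (file and column names are hypothetical; reading .xlsx with pandas requires the openpyxl package):

```python
import pandas as pd

# Load a hypothetical contact list and extract one person's records
# into a commonly readable, portable format.
contacts = pd.read_excel("actris_ppp_contacts.xlsx")
subset = contacts[contacts["email"] == "person@example.org"]
subset.to_csv("portability_export.csv", index=False)
```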
If a user requests to have his/her personal data for another party outside ACTRIS PPP, there should be no technical limitations to providing them.

### 4.2.9 Privacy by design and by default

Personal data collected during ACTRIS PPP will be used only by project partners, including beneficiaries, linked third parties and the Interim ACTRIS Council, and only for purposes needed for the implementation of ACTRIS. Even within the project, if someone in the project consortium asks for personal data, the person holding the data should consider whether that data is needed for the implementation of the project. If personal data is provided, the data shall not be distributed further within or outside the project.

### 4.2.10 Records of processing activities

Records of data processing and plans for the use of data will be kept by the ACTRIS PPP office and the work package leaders of those work packages that collect personal data.

# 5 Data type descriptions and management

## 5.1 Project documentation

### 5.1.1 Data collected

The data consists of ACTRIS PPP deliverables and other project documents, including documents provided for / by the IAC.

### 5.1.2 Collection method

These documents are created by ACTRIS PPP beneficiaries within the project, using their knowledge, raw data collected during the project, and other sources of information.

### 5.1.3 Ethical issues

Some deliverables are marked confidential by the European Commission, and the IAC is expected to classify some documents confidential. Some documents (contracts, CVs) include personal information. Some of the work is also performed outside the EU, but not the parts collecting or using personal information. These ethical issues are described in ethics deliverables 10.5 and 10.6.

### 5.1.4 Access management

Public project documentation will be available via the ACTRIS web site _www.actris.eu_. Confidential documentation, if relevant to ACTRIS PPP partners or the IAC, will be made available via _www.actris.eu_ only to those who need access to the documents. Different levels of access rights can be used.

### 5.1.5 Storage and back-up during ACTRIS PPP

During ACTRIS PPP the documents will be stored and backed up on the ACTRIS server hosting the ACTRIS web site. A back-up copy will also be kept at the ACTRIS PPP office at the Finnish Meteorological Institute (FMI). Those documents that are available only in paper form will be stored in the FMI register, with a copy at the ACTRIS PPP office in a locked cabinet.

### 5.1.6 Long-term preservation

After the end of ACTRIS PPP the documents will be transferred to the party then coordinating ACTRIS.

### 5.1.7 Retention

All ACTRIS PPP documentation will be kept for later use after the end of the project, unless specified otherwise.

### 5.1.8 Sharing policy

All project deliverables are by default public, except those marked confidential in the Grant Agreement. Other project documentation is public except for those documents containing personal or sensitive data, and those documents declared confidential by the IAC.

### 5.1.9 Data security

The ACTRIS server is maintained by the University of Clermont-Ferrand in France, which is also responsible for keeping the server's data security up to date. The backups at FMI will be protected by FMI data security measures. Non-electronic data will be stored in a locked cabinet at the ACTRIS PPP office, and later at the party coordinating ACTRIS after the end of ACTRIS PPP.
### 5.1.10 Responsible party

The party responsible for storage, access, and availability of ACTRIS PPP documentation is the ACTRIS PPP office, and later the party coordinating ACTRIS after the end of ACTRIS PPP.

## 5.2 Personal and institutional contact information

### 5.2.1 Data collected

The data consists of lists of ACTRIS PPP beneficiaries, linked third parties and associated partners, ACTRIS national contact persons and IAC member contact information. The lists contain names, affiliations, roles and contact information of individual persons contributing to ACTRIS PPP, ACTRIS and the IAC. Work Package specific contact lists are also expected to be set up during the project.

### 5.2.2 Collection method

The contact information has been / will be requested by the ACTRIS PPP office from ACTRIS PPP beneficiaries, from national ACTRIS contact persons, from ministry representatives or ESFRI delegates of potential ACTRIS member countries, and from known or potential users of ACTRIS data or services identified within ACTRIS PPP or in past projects. Contact information is also collected from new associated partners joining the project.

### 5.2.3 Ethical issues

These data contain personal information, making them subject to ethical issues. More details on the ethical aspects of these data are provided in deliverable 10.6.

### 5.2.4 Access management

These data will not be publicly available as such. The lists of the different institutions participating in ACTRIS PPP can be published on the ACTRIS web site _www.actris.eu_ without the contact details. The lists including contact information might be distributed to ACTRIS PPP beneficiaries or to national ACTRIS contact persons if that is needed for the implementation of ACTRIS PPP. The ACTRIS PPP office will manage the access rights to these data.

### 5.2.5 Storage and back-up during ACTRIS PPP

These data will be stored and backed up by the ACTRIS PPP office. Work Package specific contact lists will be stored and backed up by the respective Work Package leaders.

### 5.2.6 Long-term preservation

After the end of ACTRIS PPP there is no legal ground for the project partners to keep personal contact information, and therefore it shall be destroyed.

### 5.2.7 Retention

At the end of ACTRIS PPP the contact information collected during ACTRIS PPP will be destroyed.

### 5.2.8 Sharing policy

The contact information data will not be made publicly available. They can be shared between ACTRIS PPP participants for purposes relevant to the implementation of ACTRIS.

### 5.2.9 Data security

Contact information stored at the ACTRIS PPP office will be secured by FMI data security measures and firewalls. Work Package specific contact information collected in the work packages is secured by the security protocols of the respective work packages.

### 5.2.10 Responsible party

The party responsible for storage, access and availability of general contact information within ACTRIS PPP is the ACTRIS PPP office. Work Package specific contact information is the responsibility of the respective work package leader.

## 5.3 Surveys and questionnaires

### 5.3.1 Data collected

The data consists of cost estimates and opinions of individual persons on various aspects of ACTRIS, mainly relating to the implementation of the research infrastructure and to the added value created by it.
### 5.3.2 Collection method

These data are collected via surveys, questionnaires and interviews of ACTRIS PPP beneficiaries, national ACTRIS contact persons, and known or potential users of ACTRIS data or services identified within ACTRIS PPP or in past projects.

### 5.3.3 Ethical issues

As raw data these data often include personal data, typically the name and contact information of the person answering the questions. In processing the data, the personal and personally identifiable data shall be separated from the other data at as early a stage as possible, making the rest of the data anonymous and thus publishable (a minimal sketch of this step is given at the end of this document). Only the anonymized data shall be used for analysis. The ethical issues linked to the collection and use of these data are explained in ACTRIS PPP deliverables 10.1, 10.2 and 10.6.

### 5.3.4 Access management

As the survey data is not expected to be of interest to a large public, it will be available for use outside ACTRIS PPP or ACTRIS only upon request made to the ACTRIS PPP office or, later, to the party coordinating ACTRIS after the end of ACTRIS PPP.

### 5.3.5 Storage and back-up during ACTRIS PPP

During ACTRIS PPP the raw and processed data and the informed consent forms will be stored at the institute that is leading the Work Package in charge of collecting and analysing the data. A back-up copy will also be kept at the ACTRIS PPP office or on the ACTRIS server.

### 5.3.6 Long-term preservation

After the end of ACTRIS PPP the data will be transferred to the party then coordinating ACTRIS. No personal data will be transferred.

### 5.3.7 Retention

Personal data may be stored during the project only for keeping track of who has answered the questionnaire. After the end of ACTRIS PPP this personal data shall be destroyed. The data without personal information will be stored at the party coordinating ACTRIS after the end of ACTRIS PPP.

### 5.3.8 Sharing policy

When personal data have been removed, the rest of the data are public and can be shared with anyone, as long as ACTRIS PPP is acknowledged as the data source.

### 5.3.9 Data security

The data are secured by the data security measures and protocols of the institutes where they are stored. The back-up data sets at FMI will be secured by FMI data security measures and protocols.

### 5.3.10 Responsible party

The party responsible for the storage and analysis of the survey data collected in ACTRIS PPP is the leader of the work package collecting and analysing the data. The back-up data and access to the data are under the responsibility of the ACTRIS PPP office, and later the party coordinating ACTRIS after the end of ACTRIS PPP.
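Returning to the anonymisation step described in 5.3.3, the following is a minimal sketch of separating personally identifiable columns from survey answers; file and column names are hypothetical, and reading/writing .xlsx with pandas requires the openpyxl package.

```python
import pandas as pd

raw = pd.read_excel("wp_survey_raw.xlsx")

# Columns identifying individual persons; destroyed after the project (5.3.7).
identifying = ["name", "email", "affiliation"]

# The anonymized part is the only one used for analysis and sharing (5.3.3).
answers = raw.drop(columns=identifying)
answers.to_csv("wp_survey_anonymized.csv", index=False)

# Kept separately during the project only to track who has answered (5.3.7).
raw[identifying].to_excel("wp_survey_respondents.xlsx", index=False)
```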
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0368_ICEDIG_777483.md
identification of the organisms contained in the samples is a major activity that must be performed, often repeatedly as our knowledge of taxonomic classification accumulates. By digitising these objects, digital surrogates of the physical objects can be created. This does not mean replacing the physical object, but only allows wider access to the information about the physical object. If packaged appropriately as actionable digital objects, the management of the diverse data can be streamlined. This will be a major topic in the DiSSCo DMP. In its work package (WP) tasks and pilot projects, ICEDIG will deal with limited amounts of such data, which are summarised in Table 1 below. These data may be generated in the following actions:

* **Technology pilot projects in WP3.** These belong to the subtasks T3.1.1 (Plants), T3.1.2 (Pinned insects), T3.1.3 (Skins), T3.1.4 (Liquid samples), T3.1.5 (Microscopic slides), task T3.2 (3D imaging), and T3.3 (Robotics). Each of these tasks or subtasks may produce test material, which can be made openly available in order to demonstrate the work of ICEDIG. Specifically, the following pilot projects have been foreseen in the Description of the Work for subtask T3.1.2 (Pinned insects):
  * Parts of the multispectral imaging pilot study will be subcontracted for approximately 20,000 € to a research laboratory that owns a "terahertz time-gated spectral imaging scanner". We will see if stacked labels in pinned insects can be read without removing them from the specimen. This could yield major cost savings in imaging. The project will process about 1,000 samples, which will take approximately 1-2 months. Similar tests will be done at Cardiff University using their available equipment.
  * The robotics pilot project will be subcontracted for approximately 20,000 € to a small company with experience in placing small cameras in robot hands, followed by taking a large number of images of a pinned insect specimen. These images will then be processed into a 3D model of the specimen. Labels underneath the specimen should become readable when observed from different angles. This is new 3D imaging technology, and there are probably several robotics and image processing start-up companies that could do the job. One company will be selected using an open call for tender at the beginning of the project (MS16). It is estimated that the work takes 3-4 months.
* **The WP4 task T4.1 on automatic text capture may produce data on specimens.** This task will create a pilot project to test particular approaches. Task T4.2 may also test different strategies for data capture, which will produce data on specimens. This task will in addition coordinate the creation of a test dataset of herbarium specimens.
* **Transcription data in WP5, which belongs to task T5.2: Working with citizens to transcribe and enrich data.** This task will explore how to foster citizen participation and the emergence of new platforms beyond the existing ones. We will review existing platforms for volunteer transcription, such as Herbaria@home, DoeDat, DigiVol, Les Herbonautes, and Notes from Nature, as well as other platforms operating outside Europe, to evaluate which aspects of each system have been successful and which have not. We will evaluate the quality of data produced from each system, including georeferencing, with the aim of identifying recommendations for improvement and further development (best practices).
We will determine the motivations of participants, with the particular aim of increasing the diversity and number of participants. Furthermore, we will determine the need for internationalization of a citizen science platform to increase inclusiveness, but also to match citizens with specimens from their own location, field of interest or background. We will also look for novel usages of citizen participation to enrich data, for example by extracting trait data and temporal distribution patterns, learning about field notebooks and the biographies of collectors, and enriching gazetteers. To alleviate the coding effort for new websites, opening the source code of existing systems is a major help. Task 5.2 will review existing source code or code elements for transcription websites. To promote open access, it will build a comprehensive repository on a standard platform such as GitHub to allow public access to a stable version of the available source code and the associated documentation. Having a clear specification is the key to interoperability, and therefore the task will produce a specification proposing a simple way to publish transcribed data following existing standards for biodiversity data. It will also specify a lightweight activity feed interface that can be used for a future "citizen transcription dashboard" on the DiSSCo website.

  * Specifically, a limited set of test data that may be transcribed during this work will be flagged as being the result of ICEDIG and will be made publicly available.
  * Despite the fact that code and documentation are not data _sensu stricto_, the platform will adhere to this DMP regarding open access, security and reusability.

* Citizens' digitisation pilot projects of WP5, which belong to task T5.3: Digitisation of small collections. The biological collections of private collectors, amateur societies, and smaller museums and herbaria are numerous. Often they are very specialized and represent an important but often unexplored or unknown resource. By bringing them together with other private and institutional collections, they can contribute significantly to current data needs. As these collection owners have neither the biodiversity informatics knowledge nor the resources to digitise and share their collections for science, the ICEDIG project will investigate solutions and procedures to incorporate these collections into the DiSSCo infrastructure. Pilot projects together with subcontracted citizen associations will be launched to test ideas of how best to motivate and equip citizen collectors in digitisation.
  * Specific pilot exercises have been foreseen, which would be managed by local entomologist societies such as the Dutch Society of Lepidopterologists and the Estonian Lepidopterological Society. This is necessary to test procedures of training private collectors to digitise their own collections. Funds of 10,000 € will be used to hire a student to work as the trainer at the society for about 4 months, and to assist several amateur entomologists in digitising their collections. The data will be publicly shared through the GBIF portal.
  * In a related exercise, amateur entomologists who are users of the FinBIF portal and its Field Notebook service (see https://laji.fi/en/vihko) will be offered a service to print labels and unique identifiers for their own collection specimens. Currently, very few collectors are numbering or otherwise uniquely identifying and databasing the specimens in their private collections.
Introducing such a practice would pave the way for digitising private collections before they enter public collections.
* **WP6 will evaluate alternatives for a storage infrastructure at petabyte scale** that may consist of combining different institutional, national, and European solutions. The WP will carry out tests and demonstrations of the data flows, storage systems, and access mechanisms in order to find the most viable solutions and combinations. These tests will be done in subtasks T6.3.1, T6.3.2, and T6.3.3 on national OSCs, EUDAT, and Zenodo, respectively. In each of these environments, digital objects from the databases of project partners will be uploaded and the features of these storage environments will be tested and demonstrated. The description of the work is as follows:
  * _**Subtask 6.3.1 National cloud infrastructures.**_ In many European countries, national solutions for open science clouds exist or are under development. The feasibility and role of these systems for storing DiSSCo data will be assessed. Partners will gather information on the available systems and services from their countries and regions, which will be summarised in a document of services, capacities, and costs. In order to test these services, one or more pilot projects in different countries will be carried out. The questions that will be clarified include data flows from digitisation facilities to these national-level systems and further to European systems.
  * _**Subtask 6.3.2 EUDAT infrastructure.**_ In order to evaluate the infrastructure of EUDAT for DiSSCo, tests and demonstrations of the data flows, storage systems, and access mechanisms will be carried out. CINES will reuse its data service based on the existing Common Data Infrastructure developed by the EUDAT project. A storage capacity will be dedicated to the ICEDIG project as a B2SAFE (i.e., safe replication) service. The capacity provision during the ICEDIG project consists of a maximum of one hundred terabytes of disk and tape storage capacity, accessible via the CINES B2SAFE node in the same way as for the EUDAT data pilot Herbadrop. The architecture consists of an ingestion service at CINES and optional replication centres (subcontracted by ICEDIG coordination to available EUDAT sites). All data ingested in the workflow pass a validation stage to ensure that the data format is suitable for long-term preservation. The workflow will follow the OAIS recommendations, and the repository will be DSA-WDS compliant to ensure that the criteria for long-term preservation are met. To demonstrate user functionalities, the task will build on the results of the Herbadrop data pilot. The Herbadrop pilot will be enhanced so that already processed data can be used to improve the quality of the transcription (in cooperation with T4.2 and T5.3).
  * _**Subtask 6.3.3 Zenodo infrastructure.**_ In order to evaluate using the infrastructure of Zenodo for DiSSCo, tests and demonstrations of the data flows, storage systems, and access mechanisms will be carried out. Zenodo is based on the same technology stack crafted by CERN to serve the big data needs of the high-energy physics community. CERN uses this stack to power its own Open Data service (CODP), and, as part of its mission to openly share the products of its research, it also used this stack to create Zenodo, within the OpenAIRE project, for all other researchers to use and to store long-tail science data, such as the over 175,000 published biodiversity images, including links to external resources.
In this task the storage will be extended for the large data needs of DiSSCo and connectors will be tuned for any domain-specific needs. Zenodo is compliant with the open data requirements of Horizon 2020, the EU Research and Innovation funding programme, and OpenAIRE. The data at Zenodo can be located via its Elasticsearch-based search engine. For each data set, a digital object identifier (DOI) is automatically assigned. Dashboards will be implemented to monitor input and content of the data locally. The pilot will include uploading 100,000 images, totalling 10 TB, in a few large bursts using the Zenodo API; the images will then be accessed from portals (WP4, WP5). This process will be monitored and documented.

**Table 1. Summary of data types used by ICEDIG.**

<table> <tr> <th> **Work package and task** </th> <th> **Objects** </th> <th> **Number of Digital Objects** </th> <th> **Volume (GB)** </th> </tr> <tr> <td> WP3 T3.1.2, T3.2 Pilots </td> <td> Insect specimens in 2D and 3D </td> <td> 10,000 </td> <td> 200 </td> </tr> <tr> <td> WP4 T4.1, T4.2 Data capture </td> <td> Herbarium specimens </td> <td> 1,000 </td> <td> 100 </td> </tr> <tr> <td> WP5 T5.2 Crowd-sourcing </td> <td> Any </td> <td> 100 </td> <td> 10 </td> </tr> <tr> <td> WP5 T5.3 Citizen digitisation </td> <td> Insect specimens </td> <td> 10,000 </td> <td> 200 </td> </tr> <tr> <td> WP6 T6.3.1 National OSC </td> <td> Any </td> <td> 100,000 </td> <td> 10,000 </td> </tr> <tr> <td> WP6 T6.3.2 EUDAT </td> <td> Any </td> <td> 100,000 </td> <td> 10,000 </td> </tr> <tr> <td> WP6 T6.3.3 Zenodo </td> <td> Any </td> <td> 175,000 </td> <td> 10,000 </td> </tr> <tr> <td> WP2, WP4, WP5, WP7 </td> <td> Questionnaires </td> <td> 1,000 </td> <td> 0.01 </td> </tr> </table>

Will you re-use any existing data and how?

Selected existing images and data from the databases of the partner museums (UH, Naturalis, APM, UTARTU, NHM, MNHN, RBGK) will be used in specific tests, such as the storage tests in WP6. A final kind of data is the information in project deliverables, which must be preserved, made accessible, and passed on to those subsequently working on DiSSCo.

What is the origin of the data?

These data have been digitised in diverse earlier projects.

What is the expected size of the data?

The size of the data handled by ICEDIG is quite small, typically less than 10 GB, except in the tests of the data infrastructure in WP6, where the project needs experience of managing large volumes of data, as explained above.

To whom might it be useful ('data utility')?

The data from these limited pilots will be useful for users and institutions who may be considering similar technologies in their digitisation and data management work. This applies in particular to the experiments carried out by WP6, but also to the others. In particular, the digitised data from the experiments in WP3 will demonstrate the quality of the digitisation results achieved with the new technologies. The data from the experiments of WP5 will be useful for the museums.

# FAIR data

## Making data findable, including provisions for metadata

Are the data produced and/or used in the project discoverable with metadata, identifiable and locatable by means of a standard identification mechanism, e.g. persistent and unique identifiers such as Digital Object Identifiers (DOI)?

The data are discoverable through their DOIs if they have been published through the _Global Biodiversity Information Facility_ (GBIF) or if they are deposited on Zenodo.
However, there are data that are only available from institutional and national portals, which may not yet provide a DOI. There is a recommendation by the CETAF ISTC on the form of persistent unique identifiers for specimens (see https://cetaf.org/cetaf-stable-identifiers).

What naming conventions do you follow?

_Darwin Core_ (DwC) terms and _Access to Biological Collections Data_ (ABCD) terms (see http://www.tdwg.org/).

Will search keywords be provided that optimize possibilities for re-use?

At Zenodo, keywords can be, and routinely are, included in the metadata.

Do you provide clear version numbers?

In Zenodo, each deposit is clearly versioned, allowing multiple versions of the same digital object to be added. Otherwise, no version numbers are provided.

What metadata will be created? In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how.

Metadata are expressed in the _Ecological Metadata Language_ (EML), DwC, and ABCD. EML is a _de facto_ standard of the ecological informatics community and is supported by the _International Long-Term Ecological Research Network_ (LTER, ILTER). It has been implemented in the _Knowledge Network for Biocomplexity_ (KNB) and DataONE networks. It has also been implemented by GBIF for all its resource metadata. EML can be used to describe datasets and projects; it does not cover the data itself. DwC is a Biodiversity Information Standards (TDWG) standard and can be characterised as a biological data extension of Dublin Core. DwC can be used not only for describing data resources but also for “full data”, i.e., location, time, observer, and species name. ABCD is also a TDWG standard; it covers both resource metadata, as EML does, and full data, as DwC does. DwC and ABCD can be automatically cross-mapped.

## Making data openly accessible

Which data produced and/or used in the project will be made openly available as the default? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions. Note that in multi-beneficiary projects it is also possible for specific beneficiaries to keep their data closed if relevant provisions are made in the consortium agreement and are in line with the reasons for opting out.

All data produced by the experiments of WP3, WP4, WP5, and WP6, described above, will be made openly available; that is, any imagery, and the results of automatic or computer-assisted human interpretation of what can be seen in the imagery. This does not mean that the details of the equipment and the algorithms used in the interpretation will also be made openly available, as these may contain proprietary information. In Zenodo, the option exists to provide open, embargoed, or closed access.

How will the data be made accessible (e.g. by deposition in a repository)?

The data will be deposited in the storage systems which will be tested by WP6, as appropriate (national OSC, EUDAT, Zenodo). Links from the ICEDIG website will be provided to these storage systems. By their service definition, the data stored at Zenodo remain permanently available. Permanent access to the data of the national OSC and EUDAT tests is not foreseen. Data from the digitisation pilots may remain permanently available if published on GBIF. These arrangements will be revisited after the data from the pilots have been created.

What methods or software tools are needed to access the data?
A web browser and/or the _application programming interfaces_ (APIs) offered by these storage systems, complemented by customized tools developed by users in specific domains. Zenodo provides basic, robust, fast services. Anything on top of it is envisioned to be layered, and not necessarily part of the Zenodo infrastructure. For example, viewing and searching multiple images has to be handled outside Zenodo, e.g., by using https://ocellus.punkish.org/, which is currently being developed by Plazi for the domain-specific Biodiversity Literature Repository.

Is documentation about the software needed to access the data included?

If accessed through the APIs, documentation will be needed.

Is it possible to include the relevant software (e.g. in open source code)?

Any such software has already been released by the providers of these storage systems.

Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible.

The data will be deposited in the storage systems which will be tested by WP6, as appropriate (national OSC, EUDAT, Zenodo). Links from the ICEDIG website will be provided to these storage systems.

Have you explored appropriate arrangements with the identified repository?

We have already explored the appropriate arrangements with the national cloud services in Finland (CSC), with EUDAT through the work of the Herbadrop pilot, and with Zenodo through the work of the Biodiversity Literature Repository community.

If there are restrictions on use, how will access be provided?

There are no restrictions on use, except when the CC BY-NC licence has been chosen. DiSSCo should address the question of sensitive data (e.g., locations of protected plants), but ICEDIG will avoid working with any sensitive data. If personal data are received in the questionnaires that ICEDIG will run, such data shall be anonymised before being made available outside the project.

Is there a need for a data access committee?

Because of the small scale of these experiments, there is no need for a data access committee.

Are there well described conditions for access (i.e. a machine readable license)?

The Creative Commons licences supported by GBIF will be used. These include CC0, CC BY, and CC BY-NC (see https://www.gbif.org/publishing-data). Zenodo supports a large array of widely used as well as domain-specific, machine-readable licences. The owner of the data will determine which of these licences will be used when data are posted in ICEDIG repositories. However, it is the project’s recommendation to choose CC0 for data and CC BY for media, and to avoid CC BY-NC, which has issues in some national jurisdictions.

How will the identity of the person accessing the data be ascertained?

The identity of the person accessing the data will not be directly ascertained. However, we expect users to follow the standard norms of scientific citation, and use of the data will in this context be tracked through scientific citation.

## Making data interoperable

Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e. adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?

The project will follow the standards and formats of the biodiversity informatics community.
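As an illustration of what exchanging data in these community formats looks like in practice, the following minimal sketch shows a single specimen occurrence record expressed with Darwin Core terms. The term names are genuine DwC terms; the identifier, institution code and values are hypothetical and serve only to show the kind of “full data” record the project will exchange.

```python
import json

# A minimal sketch of a specimen occurrence record using Darwin Core (DwC)
# terms. All values below are hypothetical examples, not real ICEDIG data.
occurrence = {
    "occurrenceID": "https://id.example.org/specimen/ABC123",  # hypothetical persistent identifier
    "basisOfRecord": "PreservedSpecimen",
    "scientificName": "Fagopyrum esculentum Moench",
    "eventDate": "1923-07-14",        # ISO 8601, as recommended above
    "country": "Finland",
    "decimalLatitude": 60.171,
    "decimalLongitude": 24.944,
    "institutionCode": "XYZ",         # hypothetical collection holder
    "catalogNumber": "ABC123",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",  # CC0, as recommended
}

# Serialised as JSON for exchange; the same terms map to ABCD elements,
# since DwC and ABCD can be automatically cross-mapped.
print(json.dumps(occurrence, indent=2))
```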
What data and metadata vocabularies, standards or methodologies will you follow to make your data interoperable?

The metadata will follow the EML, DwC, and ABCD standards. For images, TIFF will be used for the originals and JPEG for fast-loading derivatives on web portals.

Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability?

Standard vocabularies have been defined for some of the terms in the above standards, but not for all. The available vocabularies will be followed, e.g., ISO 8601 for dates and periods.

In case it is unavoidable that you use uncommon or generate project specific ontologies or vocabularies, will you provide mappings to more commonly used ontologies?

The project will avoid generating its own ontologies and vocabularies.

## Increase data re-use (through clarifying licences)

How will the data be licensed to permit the widest re-use possible?

The data will be licensed under the Creative Commons licences CC0 and CC BY, with machine-readable licence statements attached.

When will the data be made available for re-use? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible.

There is no need to delay making the data available. Technical delays are possible, though, when making the arrangements with the repositories used.

Are the data produced and/or used in the project useable by third parties, in particular after the end of the project? If the re-use of some data is restricted, explain why.

The data may be useful for scientists and digitisation projects, but as they are only created for testing and demonstration, the quantities, and hence the usability, will be limited. The tests of WP6 will use large datasets, but the data in those repositories (national OSC, EUDAT, Zenodo) are also available from other sources. If the data are in multiple repositories, the respective alternative identifiers will be added to each of the deposits.

How long is it intended that the data remains re-usable?

Data digitised in the experiments of WP3, WP4 and WP5 will be migrated to GBIF at some point and will remain available in the long term. Also in the case of Zenodo, the data will remain available indefinitely.

Are data quality assurance processes described?

There is a specific task, T3.4, which looks into quality issues for images, and another, T4.2, which looks at data quality. The procedures created in these tasks will be tried on the data created in the pilot projects, where applicable.

Further to the FAIR principles, DMPs should also address:

# Allocation of resources

What are the costs for making data FAIR in your project?

In the case of the experiments of WP3, WP4, and WP5, following the standards will only result in savings, since we do not need to re-invent the formats. For the storage tests of WP6 there is a cost involved, which depends on the size of the storage that will be allocated. There is a specific budget reserved for EUDAT (20,000 €) and Zenodo (30,000 €). There may be at least one data paper, and the costs for this will be covered by the project. The experiences of the WP6 tests will be fed to WP8, which is looking at the costs of the DiSSCo research infrastructure. Although storage costs are rapidly coming down, and the physics community already talks of exabyte-scale operations, the scale and cost of storing DiSSCo data at petabyte scale are beyond anything the scientific collections have experienced before.
In ICEDIG, we need to understand accurately what those costs will be so that they can be planned for in the Cost Book, which WP8 is constructing for DiSSCo.

How will these be covered? Note that costs related to open access to research data are eligible as part of the Horizon 2020 grant (if compliant with the Grant Agreement conditions).

These costs are included as service purchases in the grant agreement.

Who will be responsible for data management in your project?

The coordinator, the University of Helsinki, is responsible. There is also a specific work package (WP6) focussed on the data infrastructure. The creators of test data in WP3, WP4 and WP5 will manage their own data.

Are the resources for long term preservation discussed (costs and potential value, who decides and how what data will be kept and for how long)?

In the case of Zenodo, this has been discussed: long-term preservation is part of its business model. In the case of EUDAT, agreements on long-term preservation have not been made, because we are only carrying out temporary tests of data flows in and out of the EUDAT infrastructure. The same applies to the tests on national OSCs, although these tests may evolve into operational systems. At the national level, either institutional (museum) storage will be used or storage will be outsourced to some nationally agreed data centre; control and dependency issues must be sorted out in this case. The data from the pilot projects of WP3, WP4 and WP5 will be stored for the long term in the institutional repositories of the partners.

# Data security

What provisions are in place for data security (including data recovery as well as secure storage and transfer of sensitive data)?

Institutional provisions are in place.

Is the data safely stored in certified repositories for long term preservation and curation?

This varies for each experiment, following the institutional provisions.

# Ethical aspects

Are there any ethical or legal issues that can have an impact on data sharing? These can also be discussed in the context of the ethics review. If relevant, include references to ethics deliverables and ethics chapter in the Description of the Action (DoA).

There is a specific task, T7.1 “Identification, consolidation and harmonisation of national and European policy / legal frameworks”, and a related deliverable, D7.1 “Policy component of ICEDIG project website”. There is also a specific work package (WP10) for ethics issues, as required by the European Commission during the grant preparation phase. A provision is accepted that sensitive data (conservation status, governmental regulations, security issues) are not openly shared.

Is informed consent for data sharing and long term preservation included in questionnaires dealing with personal data?

The project’s questionnaires will not deal with any personal data, but if there are any, they will be removed before the data are stored for any re-use. Where the data from the project questionnaires will be stored has not yet been discussed; it first has to be determined whether it is necessary to keep such data after they have been written into deliverables. WP10 might also address the issue of data on the collectors of specimens. These data include dates and locations which people have visited and thus could be considered personal. However, much of these data are very old (even centuries old) and can be considered historical rather than personal. Where the line will be drawn needs to be considered in the DiSSCo DMP.
# Other issues Do you make use of other national/funder/sectorial/departmental procedures for data management? If yes, which ones? Each of the partners will follow their national and institutional procedures for data management, in addition to this ICEDIG DMP.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0370_PerceptiveSentinel_776115.md
1\. EXECUTIVE SUMMARY

As an Innovation Action, the PerceptiveSentinel project will deliver innovative technological solutions that will encourage the usage of EO data, contribute to the EU gaining a leading position on the EO market, and foster the growth and market position of the consortium partners. The PerceptiveSentinel project is part of the Horizon 2020 – EO Big Data Shift. As a requirement of the project, this deliverable provides the Data Management Plan (DMP) describing the life cycle of the data collected, processed and generated. This deliverable is intended to be a living document that will be updated throughout the lifetime of the project whenever significant changes arise, e.g. when data sets are added or there are changes in the project that affect the management of the data.

In general, two types of data will be used in PerceptiveSentinel: EO data and non-EO data. Several types of EO data will be used (SENTINELs, LANDSAT) that are accessible globally. On the level of non-EO data there is no “global” coverage available. Data from several different sources will be used: the Danish field database, the Danish validated YIELD dataset, public datasets from Denmark, Slovenia, Austria and France, and field trial datasets from Denmark and Slovenia.

We have discussed whether to focus only on small parts of Slovenia, Denmark, and Austria or to develop the use cases for the entire countries. The yield data will be from farms scattered all over Denmark, so it makes no sense to look at a smaller region. In addition, during the presentation of Geoville’s study of the number of valid observations in 2017, and the discussion afterwards, it became clear that it does not make sense to focus on a region that has an above- or below-average number of cloud-free observations. The algorithms should work on a country scale under variable conditions; therefore, the plan is to focus on entire countries.

PROJECT SUMMARY

The PerceptiveSentinel project aims to build innovative technological solutions that will encourage the usage of EO data, contribute to the EU gaining a leading position on the EO market and foster the growth and market position of the consortium partners. The major technological delivery of the project is the PerceptiveSentinel platform, which will combine big data sources into a single system. The data will be transformed into action by applying streaming machine learning to unlock their value. The PerceptiveSentinel platform will provide the necessary toolset to transform petabytes of data into action for a domain of choice. The platform will provide a new way of doing business and science. It will deliver capabilities to engage new kinds of questions and to solve the most challenging forecasting problems facing EO data today. Its capabilities will be exposed to the wider public as well as for business purposes. In fact, the PerceptiveSentinel platform is committed to providing a complete service, including its own Front Office service. The PerceptiveSentinel project will enable fusion of Copernicus data with other EO and non-EO data sources. Further, it will stimulate the emergence of new user communities that previously hesitated to use EO services due to the complexity and high costs.
The PerceptiveSentinel project is founded on three pillars:

* Scientific excellence: embodied in pre-processing algorithms, time-series feature extraction, streaming machine learning principles, deep learning algorithms, and the delivery of EO services;
* Technological innovation: embodied in the PerceptiveSentinel platform and its subsystem EO-QMiner, a streaming data mining engine designed for the interpretation of EO time-series data;
* Business innovation: embodied in free-of-charge modelling (offering a cost-free approach to the modelling of EO processing chains) and in an open value chain strategy – an open approach to building alliances among contributors, providers and consumers of EO VAS.

2\. DATA CAPTURE

L&F have spent a lot of time assembling a validated crop yield dataset for this type of project. The available data are a combination of data from the farm level and public datasets covering the entire Danish land area or the entire Danish farmland. As the majority of the data provided by L&F is owned by individual farmers, the provided data will be subject to a non-disclosure agreement. General background data cover maps of soil types, soil water-holding capacity, an elevation model and climate data. For the entire Danish farmland, field polygons and the crop grown on each field polygon are available.

The background database for SEGES’s farm management software (Danish Field Data Base, DFDB) contains operations data from farms covering 85% of the Danish farmland. However, registrations are of variable quality because individual farmers differ in the amount of data that they register. In addition, some farmers use the SEGES software as a planning tool and do not register deviations from the planned treatment, while others register the treatment actually performed in the field. While SEGES stores and manages the farm data, the actual data owner is the individual farmer. Hence, the farmer has to give permission to use his management data for R&D purposes in PerceptiveSentinel. A validated subset of this dataset will be made available for the project.

Yields in grain crops (mainly winter wheat and winter barley) have been collected during the harvest years 2016 and 2017. These yields are collected as yields at the field level and are reported to SEGES by the farmer. In addition, yield maps from combines have been collected for harvest years before 2017 in grain crops, from 119 harvests. Roughage yields in silage maize and grass for cutting have been collected for the years 2015 – 2017. In maize, data from approximately 1,900 harvests have been collected, and in grass for cutting, approximately 2,500 harvests over three years. Compared to other sources this is a large collection of yield data; however, the yield data only cover a small fraction of the total area with these crops. As an example, a map of fields with collected yields in silage maize for 2016 is shown in figure xx, along with a map of the total number of fields with silage maize. In addition to yields at the field level, yield maps from silage maize harvests have been collected from 116 harvests. These yield data will also be available for use in the project. They are also owned by the farmers, and hence permission to use them will have to be obtained.

Furthermore, L&F conducts approximately 1,000 field trials every year. In the field trials, different treatments are applied in each plot (four replicate plots of the same treatment) and different parameters are scored. For example,
several trials with increasing N amounts are conducted every year, as well as trials with different doses of plant protection products and different plant protection strategies. In such experiments, yield is always recorded, as well as crop quality, along with other parameters. These can include plant disease scores, measured biomass, measured plant nutrient content, or the ability of the crop to remain standing until harvest. All of these data are stored in the Nordic Field Trial Database, from which a query of the relevant experiments will be extracted and provided for PerceptiveSentinel, as these data are owned directly by L&F. For the 2018 trial season, L&F will survey a subset of approximately 60 (planned) of these field trials using drones with multiband cameras. This will yield up to a total of 250 observations from these field trials. These drones can take images of a similar type to those from the SENTINEL satellites, providing development and verification data for the algorithms.

A similar type of trial data is also available in the Slovenian field trial dataset, covering an area of 500 field polygons. The comparison of the two trial datasets will be especially interesting, since they reflect different environmental aspects and different crop-growing cultures. For the Slovenian trial dataset, only occasional multispectral images recorded by drone are available. It is planned to provide drone multispectral images in 2018 on more than 100 ha of an experimental farm. Images will be recorded with a 5-channel camera (RGB, red edge and NIR). Data from the experimental farm are available and include crop type, yield, farm operations, time and amount of fertilization, and soil chemical characteristics.

Public Danish datasets contain data on crop type, soil texture, geology and approximate fertilization rate, all at the scale of individual fields from 2009 to the present day. Data on farm operations are not available in the public datasets.

3\. DATA TYPES, FORMATS AND STORAGE

3.1 DATA TYPES

In general, two types of data will be used in PerceptiveSentinel:
* EO data
* non-EO data

3.1.1 EO DATA

Several types of EO data will be used (SENTINELs, LANDSAT) that are accessible globally.

# OPEN SATELLITE DATA

Landsat missions, directed as a joint initiative between the U.S. Geological Survey (USGS) and NASA, have been observing Earth since 1972, and provide invaluable historical data on top of recent observations. The two currently operational satellites are Landsat 7 and Landsat 8, both providing global coverage. The European counterpart, the Sentinel missions, are directed by the European Commission in partnership with the European Space Agency and are part of the Copernicus Earth Observation programme. Both the Landsat and Sentinel missions have adopted the free, full and open data policy, with access available to all users. The combined high frequency and resolution of the given data provide a unique resource for applications in agriculture, forestry, geology, regional planning, education, mapping, insurance, defence and global land change research, and offer instrumental information for emergency response and disaster relief. The following chapter shortly describes the openly accessible data from the Landsat and Copernicus missions.

# 3.1.1.1 SENTINEL-1

Sentinel-1 is a SAR radar mission from the Copernicus programme. As for all Sentinel missions, Sentinel-1 data is publicly accessible. The Sentinel-1 mission is designed to provide enhanced revisit frequency, coverage, timeliness and reliability for operational services and applications requiring long time series.
It affords an operational interferometry capability through stringent requirements placed on attitude accuracy, attitude and orbit knowledge, and data-take timing accuracy. The constellation covers the entire world’s land masses on a bi-weekly basis, sea-ice zones, Europe's coastal zones and shipping routes on a daily basis, and the open ocean continuously by wave imagettes. The instrument may acquire data in four exclusive modes:

* Strip map (SM) - A standard SAR strip map imaging mode where the ground swath is illuminated with a continuous sequence of pulses, while the antenna beam is pointing to a fixed azimuth and elevation angle.
* Interferometric Wide swath (IW) - Data is acquired in three swaths using the Terrain Observation with Progressive Scanning SAR (TOPSAR) imaging technique. In IW mode, bursts are synchronised from pass to pass to ensure the alignment of interferometric pairs. IW is SENTINEL-1's primary operational mode over land.
* Extra Wide swath (EW) - Data is acquired in five swaths using the TOPSAR imaging technique. EW mode provides very large swath coverage at the expense of spatial resolution.
* Wave (WV) - Data is acquired in small strip map scenes called "vignettes", situated at regular intervals of 100 km along track. The vignettes are acquired alternately, one vignette at a near-range incidence angle and the next at a far-range incidence angle. WV is SENTINEL-1's operational mode over the open ocean.

Spectral Bands and Resolution -- for the four exclusive acquisition modes:
* SM -- 5 m by 5 m resolution over a narrow swath width of 80 km;
* IW -- with a large swath width (250 km) and a moderate geometric resolution (5 m by 20 m);
* EW -- with a lower resolution (20 m by 40 m);
* WV -- strip map images of 20 km by 20 km, acquired alternately on two different incidence angles.

Main Uses -- The SAR instrument and short revisit times provide data routinely and systematically for maritime and land monitoring, emergency response, climate change and security.

# 3.1.1.2 SENTINEL-2

SENTINEL-2 mission objectives are to provide:
* systematic global acquisitions of high-resolution, multispectral images allied to a high revisit frequency;
* continuity of multi-spectral imagery provided by the SPOT series of satellites and the USGS LANDSAT Thematic Mapper instrument;
* observation data for the next generation of operational products, such as land-cover maps, land-change detection maps and geophysical variables.

The Sentinel-2A satellite sees very early changes in plant health due to its high temporal and spatial resolution and three red-edge bands. This is particularly useful for end users and policy makers for agriculture applications and for detecting early signs of food shortages in developing countries.

Launch date: Sentinel-2A - 23 June 2015; Sentinel-2B - 07 March 2017
Revisit time: 5 days
Sensor data:
* Sensor Resolution: 10 m, 4 bands, basic land-cover classification; 20 m, 6 bands, enhanced land-cover classification and retrieval of geophysical parameters; 60 m, 3 bands, atmospheric corrections and cirrus-cloud screening;
* Spectral Bands: MSI covering 13 spectral bands (443–2190 nm), with a swath width of 290 km and a spatial resolution of 10 m (four visible and near-infrared bands), 20 m (six red edge and shortwave infrared bands) and 60 m (three atmospheric correction bands).

Types of use: agriculture, forests, land-use change, land-cover change;
mapping biophysical variables such as leaf chlorophyll content, leaf water content, and leaf area index; monitoring coastal and inland waters; risk and disaster mapping.

# 3.1.1.3 SENTINEL-3

A moderate-size low-Earth-orbit satellite whose main objective is to measure sea-surface topography, sea- and land-surface temperature, and ocean- and land-surface colour with high accuracy and reliability. It is a European global land and ocean monitoring mission, providing 2-day global coverage Earth observation data (with two satellites) for sea and land applications, with real-time product delivery in less than 3 hours.

Launch date: Sentinel-3A - 16 February 2016
Revisit time: Less than two days for OLCI and less than one day for SLSTR at the equator.
Sensor data:
* Sensor Resolution: 300 m full resolution, 1200 m reduced resolution;
* Spectral Bands:
  * Ocean and Land Colour Instrument (OLCI) covering 21 spectral bands (400–1020 nm) with a swath width of 1270 km;
  * Sea and Land Surface Temperature Radiometer (SLSTR) covering 9 spectral bands (550–12000 nm); dual-view scan with swath widths of 1420 km (nadir) and 750 km (oblique view);
  * Synthetic Aperture Radar Altimeter (SRAL), Ku-band (300 m after SAR processing) and C-band; Microwave Radiometer (MWR), dual-frequency at 23.8 & 36.5 GHz.

Types of use: Systematically measures Earth’s oceans, land, ice and atmosphere to monitor and understand large-scale global dynamics. Provides critical near-real-time information for ocean and weather forecasting. The broad scope of data allows European environmental policies to be administered with confidence.

# 3.1.1.4 LANDSAT 8

Landsat 8 is the eighth satellite in the Landsat program and the seventh to reach orbit successfully. Originally called the Landsat Data Continuity Mission (LDCM), it is a collaboration between NASA and the United States Geological Survey (USGS). NASA Goddard Space Flight Center in Greenbelt, Maryland, provided development, mission systems engineering, and acquisition of the launch vehicle, while the USGS provided for the development of the ground systems and will conduct ongoing mission operations.

Launch date: 11 February 2013
Sensor Resolution and Spectral Bands -- Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS):
* Band 1 - Coastal / Aerosol 0.433 – 0.453 µm, resolution 30 m
* Band 2 - Blue 0.450 – 0.515 µm, resolution 30 m
* Band 3 - Green 0.525 – 0.600 µm, resolution 30 m
* Band 4 - Red 0.630 – 0.680 µm, resolution 30 m
* Band 5 - Near Infrared 0.845 – 0.885 µm, resolution 30 m
* Band 6 - Short Wavelength Infrared 1.560 – 1.660 µm, resolution 30 m
* Band 7 - Short Wavelength Infrared 2.100 – 2.300 µm, resolution 30 m
* Band 8 - Panchromatic 0.500 – 0.680 µm, resolution 15 m
* Band 9 - Cirrus 1.360 – 1.390 µm, resolution 30 m
* Band 10 - Long Wavelength Infrared 10.30 – 11.30 µm, resolution 100 m
* Band 11 - Long Wavelength Infrared 11.50 – 12.50 µm, resolution 100 m

Types of use: Landsat 8 has three key mission and science objectives:
* Collect and archive medium-resolution (30-meter spatial resolution) multispectral image data affording seasonal coverage of the global landmasses for a period of no less than 5 years;
* Ensure that Landsat 8 data are sufficiently consistent with data from the earlier Landsat missions in terms of acquisition geometry, calibration, coverage characteristics, spectral characteristics, output product quality, and data availability to permit studies of land-cover and land-use change over time;
* Distribute Landsat 8 data products to the general public at no cost to the user.
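To illustrate how the band sets described above are typically combined for the agricultural use cases of the project, the following minimal sketch computes the Normalised Difference Vegetation Index (NDVI) from the Sentinel-2 10 m red (B04) and near-infrared (B08) bands. The input arrays here are random placeholders standing in for co-registered reflectance rasters; in practice they would be loaded from downloaded band files (file names and loading method are assumptions, e.g. via rasterio or GDAL).

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    red = red.astype("float32")
    nir = nir.astype("float32")
    num = nir - red
    den = nir + red
    out = np.full_like(den, np.nan)       # NaN where the denominator is zero (no-data pixels)
    np.divide(num, den, out=out, where=den != 0)
    return out

# Placeholder inputs standing in for Sentinel-2 B04 (red) and B08 (NIR)
# reflectance arrays; real data would be read from GeoTIFF band files.
red_band = np.random.rand(512, 512).astype("float32")
nir_band = np.random.rand(512, 512).astype("float32")

index = ndvi(red_band, nir_band)
print("mean NDVI:", np.nanmean(index))   # values range from -1 to 1; dense vegetation is close to 1
```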
3.1.2 NON-EO DATA

On the level of non-EO data there is no “global” coverage available. Data from several different sources will be used: the Danish field database, the Danish validated YIELD dataset, public datasets from Denmark, Slovenia, Austria and France, and field trial datasets from Denmark and Slovenia. Our plan is to collect the non-EO data available for the six use cases:

1. Cultivated Area – can be extracted from the Austrian, Danish, and Slovenian LPIS data that will be provided.
2. Crop Type – can be extracted from the Austrian, Danish, and Slovenian LPIS data that will be provided.
3. Crop Cycle – we believe that typical dates for the different phenology stages can be set for the current and previous years, once the main crop type classes are set and there is a solution for adding damage data for Slovenia.
4. Crop Damage – we have already obtained spatial data on drought and frost damage from the Administration of the Republic of Slovenia for Civil Protection and Disaster Relief, but have asked them for better-attributed data. We are still waiting for these data; so far we have spatial data on where damage happened in 2016 or 2017. Geoville has a first version of the crop status analysis, the baseline for crop damage.
5. Moisture Content – for now, nothing is available.
6. Crop Yield (LANDBRUG & FODEVARER F.M.B.A):
   * available from the Jable test area (by the Agricultural Institute of Slovenia);
   * yield maps for a subset of Denmark;
   * yield at field level for a subset of Denmark.

3.1.2.1 DANISH FIELD DATABASE (DFDB)

The DFDB is the database platform that contains all data for L&F’s software solutions, consisting of:
* the desktop field management software “MarkOnline” (field management plans and nutrient management plans), used by farmers and consultants for 85% of the Danish farmland;
* “CropManager”, which allows the farmer to plan and specify operations and, e.g., make variable-rate application maps for his fields;
* the “FarmTracking” phone app, which allows farmers to store information about operations that have been carried out in each field;
* the “MarkAnalyseOnline” software, which provides information about soil samples (the information is stored in the form of GPS positions and results from the laboratory, and can be displayed in “MarkOnline”);
* “PlanteværnOnline”, a decision support tool for plant protection interventions, which, among other things, can advise the farmer on which product to use and which dosage would be appropriate;
* AgroGIS, a special tool for agricultural advisors.

DFDB will provide a range of other attributes important for farm management, representing an important PerceptiveSentinel input: field polygons, data on crop yields, dates of sowing, type of crops, soil composition, harvest date, type of plant protection product plus the amount and date of its usage, type of fertilizer plus the amount and date of its usage, autumn cover, and soil data on pH, phosphorus, potassium, magnesium, boron, copper, total N, mineral N and total C. The data are owned by Danish farmers, not by L&F, and hence permission from the farmers to use the data for the project will have to be acquired. In addition, these data will be subject to a non-disclosure agreement.

3.2 DATA FORMAT

Non-EO data will be stored in a cloud GIS database based on PostgreSQL/PostGIS. The data can be exported in ESRI SHP and various other formats. Note that there are limitations to the accessibility of non-EO data due to national constraints.

4\. DATA OUTPUTS

Data in the form of vector and raster maps (crop types, etc.) will be generated as PerceptiveSentinel products.
Authenticated users of the PerceptiveSentinel Platform will be able to visualize and, in certain cases, download these outputs. Specific terms of use, a privacy policy and contributor terms (i.e. describing data licensing), all compliant with the EU General Data Protection Regulation, are not necessary, because we will not handle any private data.

5\. DATA PRESERVATION

Data is preserved in Sinergise's data centre in Ljubljana. Project partners will access, and in some cases copy, the data to their own data centres, but only temporarily, for the time of execution of the process. No preservation is planned in third-party locations.

6\. DATA DOCUMENTATION AND DESCRIPTION

No specific documentation requirements are set for the non-EO data in the project.

7\. DATA SHARING AND PUBLICATION

Data sharing policies will follow the rules of the Open Research Data Pilot, providing open access to the research data generated through this project. Privacy and data ownership will be taken into account as well. The following principles will apply:

* data generated through project research activities will be openly available; all data which are available free of cost (for instance SENTINEL data) will be available on the same terms also through the PerceptiveSentinel platform;
* all other data will be (or not be) available on the terms set by the data owner. The only exception to this rule is PerceptiveSentinel’s DEMO REGION, where we will aim to provide ALL data free of charge (following special agreements with data owners). The described principles will assure that all data required to verify project deliveries will be openly available;
* non-EO data will be shared over proprietary APIs and, in limited cases, over WMS (see the example request below, after the data security section).

8\. DATA SECURITY

We do not provide the hosting company with any means of accessing the data other than those intended for every user. The environment is set up such that we restrict access to the database strictly to our trusted network and through APIs that let us choose exactly which data are accessible to which users.
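As a sketch of the limited WMS-based sharing of non-EO data mentioned in the data sharing section, the request below fetches a rendered map image of a non-EO layer. The endpoint URL and layer name are hypothetical placeholders; the query parameters themselves are the standard OGC WMS 1.3.0 GetMap parameters, so the same pattern would apply to whichever WMS endpoint is eventually exposed.

```python
import requests

# Hypothetical WMS endpoint and layer name; only the parameter set
# (standard WMS 1.3.0 GetMap) is fixed by the OGC specification.
WMS_URL = "https://services.example.org/wms"

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "field_polygons",                 # hypothetical non-EO layer
    "styles": "",
    "crs": "EPSG:3857",                         # Web Mercator avoids the 1.3.0 axis-order pitfall of EPSG:4326
    "bbox": "1600000,5700000,1700000,5800000",  # minx,miny,maxx,maxy in the requested CRS
    "width": 1024,
    "height": 1024,
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=60)
response.raise_for_status()

# Save the rendered map; access control (which layers a user may request)
# is enforced server-side, per the data security provisions above.
with open("field_polygons.png", "wb") as f:
    f.write(response.content)
```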
9\. RESOURCES

* https://www.sentinel-hub.com/develop/documentation/data_sources
* http://www.satimagingcorp.com/satellite-sensors/worldview-2/
* http://www.satimagingcorp.com/satellite-sensors/worldview-3/
* http://www.satimagingcorp.com/satellite-sensors/pleiades-1/
* https://en.wikipedia.org/wiki/Landsat_8

ANNEX - EXAMPLES OF ANALYSIS OF AVAILABLE NON-EO DATA

AUSTRIAN LPIS – CODE LIST

<table> <tr> <th> SNAR_BEZEI </th> <th> Crop type </th> <th> </th> <th> </th> <th> </th> <th> </th> <th> </th> </tr> <tr> <td> Austrian </td> <td> English </td> <td> Slovenian </td> <td> Latin </td> <td> Class 1 </td> <td> Class 2 </td> <td> Class 3 </td> </tr> <tr> <td> 20 JÄHRIGE STILLLEGUNG </td> <td> set-aside for 20 years </td> <td> </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> Unclassified </td> </tr> <tr> <td> ACKERBOHNEN - GETREIDE GEMENGE </td> <td> mixture of broad bean and cereals </td> <td> bob/žito </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Fodder </td> </tr> <tr> <td> ACKERBOHNEN (PUFFBOHNEN) </td> <td> broad bean </td> <td> krmni bob </td> <td> Vicia fabae </td> <td> Legumes </td> <td> Grain legumes </td> <td> Vicia fabae </td> </tr> <tr> <td> ACKERBOHNEN (PUFFBOHNEN) / FELDGEMÜSE </td> <td> broad bean / in vegetable production </td> <td> bob </td> <td> Vicia fabae </td> <td> Legumes </td> <td> Grain legumes </td> <td> Vicia fabae </td> </tr> <tr> <td> ACKERBOHNEN / ERBSENGEMENGE </td> <td> mixture of broad bean and peas </td> <td> bob/grah </td> <td> Vicia fabae/Pisum arvense </td> <td> Legumes </td> <td> Grain legumes </td> <td> Vicia fabae/Pisum arvense </td> </tr> <tr> <td> ALMFUTTERFLÄCHE </td> <td> alpine pasture </td> <td> gorski pašniki </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Mountain grass </td> </tr> <tr> <td> AMARANTH </td> <td> amaranth </td> <td> amarant </td> <td> Amaranthus sp.
</td> <td> Pseudo cereals </td> <td> Pseudo cereals </td> <td> Pseudo cereals </td> </tr> <tr> <td> ANDERE DAUERKULTUREN </td> <td> other permanent crops </td> <td> druge trajne kulture </td> <td> </td> <td> Mixed plants </td> <td> Perennial crops </td> <td> Mixed plants </td> </tr> <tr> <td> ANDERES OBST </td> <td> other fruit </td> <td> druge sadne vrste </td> <td> </td> <td> Orchard </td> <td> Fruit </td> <td> Fruit </td> </tr> <tr> <td> BERGMÄHDER </td> <td> mountain hay meadow </td> <td> gorski košeni travniki </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> BITTERLUPINEN </td> <td> narrow-leaf or blue lupin </td> <td> modra lupina </td> <td> Lupinus angustifolius </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Fodder </td> </tr> <tr> <td> BLUMEN UND ZIERPFLANZEN </td> <td> flowers and ornamental plants </td> <td> cvetje in okrasne rastline </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Ornamental production </td> </tr> <tr> <td> BLUMEN UND ZIERPFLANZEN IM FOLIENTUNNEL </td> <td> flowers and ornamental plants in tunnels </td> <td> cvetje in okrasne rastline v tunelih </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Ornamental production </td> </tr> <tr> <td> BLUMEN UND ZIERPFLANZEN IM GEWÄCHSHAUS </td> <td> flowers and ornamental plants in greenhouse </td> <td> cvetje in okrasne rastline v rastlinjakih </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Ornamental production </td> </tr> <tr> <td> BUCHWEIZEN </td> <td> buckwheat </td> <td> ajda </td> <td> Fagopyrum esculentum </td> <td> Pseudo cereals </td> <td> Summer Pseudo cereals </td> <td> Fagopyrum esculentum </td> </tr> <tr> <td> DAUERWEIDE </td> <td> permanent pastures </td> <td> trajni pašniki </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> EDELKASTANIEN </td> <td> sweet chestnut </td> <td> kostanj </td> <td> Castanea sativa </td> <td> Orchard </td> <td> Trees </td> <td> Castanea sativa </td> </tr> <tr> <td> EINJÄHRIGE BAUMSCHULEN </td> <td> one-year nursery </td> <td> enoletne drevesnice </td> <td> </td> <td> Mixed plants </td> <td> Trees </td> <td> Nursery </td> </tr> <tr> <td> EINMÄHDIGE WIESE </td> <td> once-per-year mown meadow </td> <td> travniki </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> ELEFANTENGRAS (CHINASCHILF, MISCANTHUS SINENSIS) </td> <td> ornamental grasses </td> <td> okrasne trave </td> <td> Miscanthus x giganteus / Mi </td> <td> Mixed plants </td> <td> Grass </td> <td> Ornamental production </td> </tr> <tr> <td> EMMER ODER EINKORN (SOMMERUNG) </td> <td> Oversummering emmer wheat or single grain wheat </td> <td> jara tetraploidna in enozrna pšenica </td> <td> _T. monococcum; Triticum tur_ </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Oversummering emmer wheat or single grain wheat </td> </tr> <tr> <td> EMMER ODER EINKORN (SOMMERUNG) / FELDGEMÜSE </td> <td> Oversummering emmer wheat or single grain wheat in vegetable pr </td> <td> jara tetraploidna in enozrna pšenica - v zelenjad </td> <td> _T. monococcum; Triticum tur_ </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Oversummering emmer wheat or single grain wheat in vegetable production on the field </td> </tr> <tr> <td> EMMER ODER EINKORN (WINTERUNG) </td> <td> Overwintering emmer wheat or single grain wheat </td> <td> prezimna tetraploidna in enozrna pšenica </td> <td> _T.
monococcum; Triticum tur_ </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Overwintering emmer wheat or single grain wheat </td> </tr> <tr> <td> EMMER ODER EINKORN (WINTERUNG) / FELDGEMÜSE </td> <td> Overwintering emmer wheat or single grain wheat in vegetable pro </td> <td> prezimna tetraploidna in enozrna pšenica - v zel </td> <td> _T. monococcum; Triticum tur_ </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Overwintering emmer wheat or single grain wheat in vegetable production on the field </td> </tr> <tr> <td> ENERGIEGRAS </td> <td> grasses for energy production </td> <td> pridelave trav za energijo </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> ENERGIEHOLZ OHNE ROBINIE </td> <td> Wood energy plantations without Robinia pseudoacacia </td> <td> lesna biomasa za energijo brez akacije </td> <td> </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> ENERGIEHOLZ ROBINIE </td> <td> Wood energy plantations with Robinia pseudoacacia </td> <td> lesna biomasa za energijo z akacijo </td> <td> </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> ERBSEN - GETREIDE GEMENGE </td> <td> Mixture of peas and cereals </td> <td> mešanica graha in žit </td> <td> </td> <td> Mixed plants </td> <td> Fodder </td> <td> Fodder </td> </tr> <tr> <td> ERBSEN - GETREIDE GEMENGE / BUCHWEIZEN </td> <td> Mixture of peas and cereals or buckwheat </td> <td> mešanica graha in žit ali ajde </td> <td> </td> <td> Mixed plants </td> <td> Fodder </td> <td> Fodder </td> </tr> <tr> <td> ERBSEN - GETREIDE GEMENGE / FELDGEMÜSE </td> <td> Mixture of peas and cereals in vegetable production on the field </td> <td> mešanica graha in žit v pridelavi zelenjave na prostem </td> <td> </td> <td> Mixed plants </td> <td> Fodder </td> <td> Fodder </td> </tr> <tr> <td> ERDBEEREN </td> <td> strawberry </td> <td> jagoda </td> <td> </td> <td> Vegetable </td> <td> Row crops, interrow 1 m </td> <td> Soft fruit </td> </tr> <tr> <td> ERDBEEREN / FELDGEMÜSE </td> <td> strawberry - in open field production </td> <td> jagoda - pridelava na prostem </td> <td> </td> <td> Vegetable </td> <td> Row crops, interrow 1 m with foil </td> <td> Soft fruit </td> </tr> <tr> <td> ERSTAUFFORSTUNG </td> <td> First forestation </td> <td> prva pogozditev </td> <td> </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> ERSTAUFFORSTUNG ALT </td> <td> forestation </td> <td> pogozditev </td> <td> </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> ESPARSETTE </td> <td> Common sainfoin </td> <td> turška detelja </td> <td> Onobrychis viciifolia </td> <td> Legumes </td> <td> Fodder </td> <td> Onobrychis viciifolia </td> </tr> <tr> <td> FELDGEMÜSE EINKULTURIG </td> <td> Field vegetable - uniform production </td> <td> enovita pridelava zelenjave na njivi </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FELDGEMÜSE EINLEGEGURKEN </td> <td> Cucumber as open field production </td> <td> kumarice na njivi </td> <td> Cucumis sativus </td> <td> Vegetable </td> <td> Vegetable </td> <td> Row crops, interrow 1 m with foil </td> </tr> <tr> <td> FELDGEMÜSE FRISCHMARKT UND VERARBEITUNG MEHRKULTURIG </td> <td> Field vegetable production - mixture; for fresh consumption and pr </td> <td> pridelava zelenjave na njivi mešanica za svežo prodajo in predelavo </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FELDGEMÜSE MEHRKULTURIG </td> <td> Field vegetable production - mixture </td> <td> pridelava zelenjave na njivi mešanica </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FELDGEMÜSE OHNE ERNTE </td> <td> Field
vegetable production without harvesting </td> <td> pridelava zelenjave na njivi brez pobiranja </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FELDGEMÜSE VERARBEITUNG EINKULTURIG </td> <td> Field vegetable - uniform production for processing </td> <td> enovita pridelava zelenjave na njivi za predelavo </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FELDGEMÜSE VERARBEITUNG MEHRKULTURIG </td> <td> Field vegetable production - mixture; for processing </td> <td> mešana pridelava zelenjave na njivi za predelavo </td> <td> </td> <td> Vegetable </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> FLACHS (FASERLEIN) ZUR FASERERZEUGUNG </td> <td> Common flax for processing </td> <td> lan za predelavo </td> <td> Linum usitatissimum </td> <td> Pseudo cereals </td> <td> Pseudo cereals </td> <td> Winter pseudo cereals </td> </tr> <tr> <td> FORST GENETISCHE RESSOURCEN </td> <td> Forest tree nursery - forest genetic resources </td> <td> gozdna drevesnica za gozdne genske vire </td> <td> </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> FRÜHKARTOFFELN </td> <td> Early potato </td> <td> zgodnji krompir </td> <td> Solanum tuberosum </td> <td> Potato </td> <td> Row crop; interrow 70 cm </td> <td> Potato </td> </tr> <tr> <td> FRÜHKARTOFFELN / BUCHWEIZEN </td> <td> Early potato followed by buckwheat </td> <td> zgodnji krompir in ajda kot strniščni posevek </td> <td> </td> <td> Potato </td> <td> Row crop; interrow 70 cm </td> <td> Mixed plants </td> </tr> <tr> <td> FRÜHKARTOFFELN / FELDGEMÜSE </td> <td> Early potato in vegetable field production </td> <td> zgodnji krompir v zelenjadarski pridelavi </td> <td> </td> <td> Potato </td> <td> Row crop; interrow 70 cm </td> <td> Potato </td> </tr> <tr> <td> FRÜHKARTOFFELN / MAIS </td> <td> Early potato followed by maize </td> <td> zgodnji krompir in koruza kot strniščni posevek </td> <td> </td> <td> Potato </td> <td> Row crop; interrow 70 cm </td> <td> Mixed plants </td> </tr> <tr> <td> FUTTERGRÄSER </td> <td> Fodder grasses </td> <td> krmne trave </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Fodder </td> </tr> <tr> <td> FUTTERGRÄSER / FELDGEMÜSE </td> <td> Fodder grasses in vegetable production in open field </td> <td> krmne trave v pridelavi zelenjave na prostem </td> <td> </td> <td> Grass </td> <td> Grass </td> <td> Fodder </td> </tr> <tr> <td> FUTTERKARTOFFELN </td> <td> Potato as a fodder </td> <td> krmni krompir </td> <td> Solanum tuberosum </td> <td> Potato </td> <td> Row crop; interrow 70 cm </td> <td> Potato </td> </tr> <tr> <td> FUTTERRÜBEN (RUNKELRÜBEN, BURGUND KOHLRÜBEN) </td> <td> Fodder beet, rutabaga </td> <td> krmna pesa, koleraba </td> <td> Beta vulgaris subsp.
vulgaris </td> <td> Mixed plants </td> <td> Row crop; interrow 70 cm </td> <td> Root crop </td> </tr> <tr> <td> GEMÜSE IM FOLIENTUNNEL </td> <td> Vegetable production under tunnel </td> <td> pridelava zelenjave v tunelih </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Vegetable </td> </tr> <tr> <td> GEMÜSE IM GEWÄCHSHAUS </td> <td> Vegetable production in greenhouse </td> <td> pridelava zelenjave v rastlinjakih </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Vegetable </td> </tr> <tr> <td> GEWÜRZFENCHEL </td> <td> Fennel </td> <td> koromač </td> <td> Foeniculum vulgare </td> <td> Vegetable </td> <td> Vegetable </td> <td> Vegetable </td> </tr> <tr> <td> GEWÜRZPFLANZEN </td> <td> Herbs </td> <td> začimbnice </td> <td> </td> <td> Vegetable </td> <td> Vegetable </td> <td> Vegetable </td> </tr> <tr> <td> GEWÜRZPFLANZEN IM FOLIENTUNNEL </td> <td> Herb production under the tunnel </td> <td> začimbnice v tunelih </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Vegetable </td> </tr> <tr> <td> GEWÜRZPFLANZEN IM GEWÄCHSHAUS </td> <td> Herb production in the greenhouse </td> <td> začimbnice v rastlinjaku </td> <td> </td> <td> Production under protection </td> <td> Production under protection </td> <td> Vegetable </td> </tr> <tr> <td> GINKGO </td> <td> Ginkgo </td> <td> ginko </td> <td> Ginkgo biloba </td> <td> Orchard </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> GLÖZ GRABEN / UFERRANDSTREIFEN </td> <td> ditch, bank strip </td> <td> Jarek, brežina </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> infertile land </td> </tr> <tr> <td> GLÖZ NATURDENKMAL FLÄCHE </td> <td> natural monument area </td> <td> naravna dediščina </td> <td> </td> <td> Unclassified </td> <td> undefined </td> <td> undefined </td> </tr> <tr> <td> GLÖZ STEINRIEGEL / STEINHAGE </td> <td> Stone slope </td> <td> kamnito pobočje </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> infertile land </td> </tr> <tr> <td> GLÖZ TEICH / TÜMPEL </td> <td> Small standing water </td> <td> manjša zajetje vode </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> water </td> </tr> <tr> <td> GRÜNBRACHE </td> <td> Crop rotation - natural vegetation without planted vegetation </td> <td> kolobarjenje, naravno zelenje, brez sejanih rastlin </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Natural vegetation </td> </tr> <tr> <td> GRÜNLANDBRACHE </td> <td> Crop rotation - not cultivated for some time </td> <td> kolobarjenje, že dalj časa ni bila obdelana </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> non cultivated for longer period </td> </tr> <tr> <td> GRÜNMAIS </td> <td> Fresh maize as fodder </td> <td> krmna, zelena koruza, pitnik </td> <td> </td> <td> Maize </td> <td> Maize </td> <td> Fodder </td> </tr> <tr> <td> GRÜNSCHNITTROGGEN </td> <td> Fresh rye as a fodder </td> <td> Ozimna rž za krmo </td> <td> </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Winter Cereals </td> </tr> <tr> <td> GRÜNSCHNITTROGGEN / HIRSE </td> <td> Fresh rye as a fodder / followed by millet </td> <td> Ozimna rž za krmo s prosom kot naknadnim posevkom </td> <td> </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Mixed plants </td> </tr> <tr> <td> GRÜNSCHNITTROGGEN / MAIS </td> <td> Fresh rye as a fodder / followed by maize </td> <td> Ozimna rž za krmo s koruzo kot naknadnim posevkom </td> <td> </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Mixed plants </td> </tr> <tr> <td> GRÜNSCHNITTROGGEN / SUDANGRAS </td> <td> Fresh rye as a fodder / followed by sudan grass </td>
<td> Ozimna rž za krmo s sudansko travo kot naknadnim posevkom </td> <td> Cereals </td> <td> Winter Cereals </td> <td> Mixed plants </td> </tr> <tr> <td> HANF </td> <td> Hemp </td> <td> konoplja Cannabis sativa </td> <td> Other plants </td> <td> Other plants </td> <td> Cannabaceae </td> </tr> <tr> <td> HEILPFLANZEN </td> <td> Medicinal plants </td> <td> zdravilne rastline </td> <td> Other plants </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> HIRSE </td> <td> Millet </td> <td> proso </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Panicum miliaceum </td> </tr> <tr> <td> HOLUNDER </td> <td> Elderberry </td> <td> bezeg </td> <td> Trees </td> <td> Trees </td> <td> Sambucus nigra </td> </tr> <tr> <td> HOPFEN </td> <td> Hop </td> <td> hmelj Humulus lupulus </td> <td> Other plants </td> <td> Hop </td> <td> Plant height around 6 m </td> </tr> <tr> <td> HUTWEIDE </td> <td> Pasture </td> <td> pašnik </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> ÖLKÜRBIS </td> <td> Pumpkin for oil </td> <td> oljna buča Cucurbita pepo </td> <td> Other plants </td> <td> Row crops, interrow > 1 m </td> <td> Pumpkin </td> </tr> <tr> <td> ÖLLEIN (NICHT ZUR FASERGEWINNUNG) </td> <td> Flax </td> <td> navadni lan (ne za pridobivanje vlaken) Linum usitatissimum </td> <td> Pseudo cereals </td> <td> Winter pseudo cereals </td> <td> Flax </td> </tr> <tr> <td> ÖLLEIN (NICHT ZUR FASERGEWINNUNG) / FELDGEMÜSE </td> <td> Flax / in vegetable production </td> <td> navadni lan (ne za pridobivanje vlaken) / njivska Linum usitatissimum </td> <td> Pseudo cereals </td> <td> Winter pseudo cereals </td> <td> Flax </td> </tr> <tr> <td> ÖLRETTICH </td> <td> Oil radish </td> <td> redkvica </td> <td> Raphanus sativus L. var. oleiformis </td> <td> Vegetable </td> <td> Vegetable </td> <td> Brassicaceae </td> </tr> <tr> <td> JOHANNISKRAUT </td> <td> Saint John's wort </td> <td> šentjanževka </td> <td> Hypericum perforatum </td> <td> Other plants </td> <td> Medicinal plants </td> <td> Saint John's wort </td> </tr> <tr> <td> KICHERERBSEN </td> <td> Chickpea </td> <td> čičerka </td> <td> Cicer arietinum </td> <td> Legumes </td> <td> Grain legumes </td> <td> Chickpea </td> </tr> <tr> <td> KÖRNERERBSEN </td> <td> Peas </td> <td> grah </td> <td> Pisum sativum </td> <td> Legumes </td> <td> Grain legumes </td> <td> Peas </td> </tr> <tr> <td> KÖRNERERBSEN / FELDGEMÜSE </td> <td> Peas / in vegetable production </td> <td> grah / njivska zelenjava </td> <td> Pisum sativum </td> <td> Legumes </td> <td> Grain legumes </td> <td> Peas </td> </tr> <tr> <td> KÖRNERMAIS </td> <td> Maize </td> <td> prehranska koruza </td> <td> Zea mais </td> <td> Maize </td> <td> Row crop;interrow 60 cm </td> <td> Maize </td> </tr> <tr> <td> KIRSCHEN </td> <td> Cherry </td> <td> češnje </td> <td> Prunus avium </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> KLEE </td> <td> Clover </td> <td> detelja </td> <td> Trifolium sp </td> <td> Legumes </td> <td> Fodder </td> <td> Clover </td> </tr> <tr> <td> KLEE / FELDGEMÜSE </td> <td> Clover / in vegetable production </td> <td> detelja / njivska zelenjava </td> <td> Trifolium sp </td> <td> Legumes </td> <td> Fodder </td> <td> Clover </td> </tr> <tr> <td> KLEEGRAS </td> <td> Grass clover mixture </td> <td> mešanica trav in detelj </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> KLEEGRAS / FELDGEMÜSE </td> <td> Grass clover mixture / in vegetable production </td> <td> mešanica trav in detelj / njivska zelenjava </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> LEINDOTTER </td> <td> Camelina </td> <td> navadni riček </td> <td>
Camelina sativa </td> <td> Other plants </td> <td> Medicinal plants </td> <td> Camelina </td> </tr> <tr> <td> LINSEN </td> <td> Lentil </td> <td> leča </td> <td> Lens esculenta </td> <td> Legumes </td> <td> Grain legumes </td> <td> Lentil </td> </tr> <tr> <td> LSE FELDGEHÖLZ / BAUM- / GEBÜSCHGRUPPE </td> <td> Woody plants on the field, trees, bushes </td> <td> lesne rastline na polju, drevesa, grmovje </td> <td> Trees </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> LSE HECKE / UFERGEHÖLZ </td> <td> Hedges, woody plants along banks </td> <td> živa meja, lesne rastline ob obali </td> <td> Trees </td> <td> Trees </td> </tr> <tr> <td> LSE RAIN / BÖSCHUNG / TROCKENSTEINMAUER </td> <td> Field margin, embankment, dry stone wall </td> <td> obmejek, brežina, kamnita stena </td> <td> Unclassified </td> <td> Undefined </td> <td> Infertile land </td> </tr> <tr> <td> LUZERNE </td> <td> Alfalfa </td> <td> lucerna </td> <td> Medicago sativa </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Alfalfa </td> </tr> <tr> <td> MÄHWIESE/-WEIDE DREI UND MEHR NUTZUNGEN </td> <td> Meadow mowed 3 or more times per year </td> <td> 3 ali večkrat košen travnik </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> MÄHWIESE/-WEIDE ZWEI NUTZUNGEN </td> <td> Meadow mowed twice per year </td> <td> 2x košen travnik </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> MAIS / KÄFERBOHNEN IN GETRENNTEN REIHEN </td> <td> Intercropping of beans and maize </td> <td> laški fižol / koruza v ločenih vrstah </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> </tr> <tr> <td> MAIS CORN-COB-MIX (CCM) </td> <td> Maize (fodder) </td> <td> krmna koruza </td> <td> Zea mais </td> <td> Maize </td> <td> Row crop;interrow 60 cm </td> <td> Maize </td> </tr> <tr> <td> MAIS CORN-COB-MIX (CCM) / FELDGEMÜSE </td> <td> Maize (fodder) / in vegetable production </td> <td> krmna koruza / njivska zelenjava </td> <td> Zea mais </td> <td> Maize </td> <td> Row crop;interrow 60 cm </td> <td> Maize </td> </tr> <tr> <td> MARIENDISTELN </td> <td> Milk thistle </td> <td> pegasti badelj </td> <td> Silybum marianum </td> <td> Other plants </td> <td> Medicinal plants </td> <td> Milk thistle </td> </tr> <tr> <td> MARILLEN </td> <td> Apricot </td> <td> marelice </td> <td> Prunus armeniaca </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> MEHRJÄHRIGE BAUMSCHULEN </td> <td> Nurseries </td> <td> drevesnice </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Trees </td> </tr> <tr> <td> NEKTARINEN </td> <td> Nectarine </td> <td> nektarine </td> <td> Prunus persica </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> OBST IM FOLIENTUNNEL </td> <td> Fruit production in tunnels </td> <td> sadje v tunelu </td> <td> Production under protection </td> <td> Fruit </td> <td> Production under protection </td> </tr> <tr> <td> OBST IM GEWÄCHSHAUS </td> <td> Fruit production in the greenhouse </td> <td> sadje v rastlinjaku </td> <td> Production under protection </td> <td> Fruit </td> <td> Production under protection </td> </tr> <tr> <td> OBST/HOPFEN BODENGESUNDUNG </td> <td> Crop rotation in orchards or hop production </td> <td> kolobarjenje pri sadju / hmelju </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> </tr> <tr> <td> PELUSCHKE </td> <td> Peas </td> <td> njivski grah P. sativum var. arvensi </td> <td> P. sativum var.
arvensi </td> <td> Legumes </td> <td> Grain legumes </td> <td> Vegetable </td> </tr> <tr> <td> PFIRSICHE </td> <td> Peaches </td> <td> breskev </td> <td> Prunus persica </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> PFLAUMEN </td> <td> Plums </td> <td> slive </td> <td> Prunus domestica </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> PHACELIA </td> <td> Lacy phacelia </td> <td> facelija </td> <td> Phacelia tanacetifolia </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Lacy phacelia </td> </tr> <tr> <td> PLATTERBSEN </td> <td> Grass pea </td> <td> grahor </td> <td> Lathyrus sativus </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Grass pea </td> </tr> <tr> <td> QUINOA </td> <td> Quinoa </td> <td> kvinoja </td> <td> Chenopodium quinoa </td> <td> Pseudo cereals </td> <td> Summer Pseudo cereals </td> <td> Quinoa </td> </tr> <tr> <td> QUITTEN </td> <td> Quince </td> <td> kutina </td> <td> Cydonia oblonga </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> RÜBENVERMEHRUNG </td> <td> Beet for seed production </td> <td> Semenski posevek navadne pese </td> <td> Beta vulgaris subs. vulgaris </td> <td> Other plants </td> <td> Row crop;interrow 50 cm </td> <td> Root crop </td> </tr> <tr> <td> REBSCHULEN </td> <td> Production of vine planting material </td> <td> pridelava sadilnega materiala vinske trte </td> <td> Vitis vinifera </td> <td> Vineyard </td> <td> Vineyard </td> <td> Vineyard </td> </tr> <tr> <td> ROLLRASEN </td> <td> Rolled turf </td> <td> trava za polagat (navita) </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> </tr> <tr> <td> SÜSSLUPINEN </td> <td> Sweet lupin </td> <td> volčji bob </td> <td> Lupinus angustifolius </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Sweet lupin </td> </tr> <tr> <td> SAATKARTOFFELN </td> <td> Seed potato </td> <td> semenski krompir </td> <td> Solanum tuberosum </td> <td> Potato </td> <td> Row crop;interrow 70 cm </td> <td> Potato </td> </tr> <tr> <td> SAATMAISVERMEHRUNG </td> <td> Seed maize </td> <td> semenska koruza </td> <td> Zea mais </td> <td> Maize </td> <td> Maize </td> <td> Maize </td> </tr> <tr> <td> SCHALENFRÜCHTE (WALNÜSSE, HASELNÜSSE, ...) </td> <td> Nuts </td> <td> oreški (orehi, lešniki...)
</td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> SCHNITTWEINGARTEN </td> <td> Vineyard (established) </td> <td> vinograd (ne mlad, ravno rastišče, lahko terase) </td> <td> Vitis vinifera </td> <td> Vineyard </td> <td> Vineyard </td> <td> Vineyard </td> </tr> <tr> <td> SENF </td> <td> Mustard </td> <td> gorčica </td> <td> Brassica … </td> <td> Other plants </td> <td> Brassicaceae </td> <td> Brassicaceae </td> </tr> <tr> <td> SILOMAIS </td> <td> Maize for silage </td> <td> koruza za silažo </td> <td> Zea mais </td> <td> Maize </td> <td> Row crop;interrow 60 cm </td> <td> Maize </td> </tr> <tr> <td> SOJABOHNEN </td> <td> Soybean </td> <td> soja </td> <td> Glycine max </td> <td> Legumes </td> <td> Grain legumes </td> <td> Soybean </td> </tr> <tr> <td> SOMMERDINKEL (SPELZ) </td> <td> Summer spelt </td> <td> jara pira </td> <td> Triticum spelta </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer spelt </td> </tr> <tr> <td> SOMMERGERSTE </td> <td> Summer barley </td> <td> jari ječmen </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer barley </td> </tr> <tr> <td> SOMMERGERSTE / BUCHWEIZEN </td> <td> Summer barley followed by buckwheat </td> <td> jari ječmen / ajda </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer barley followed by buckwheat </td> </tr> <tr> <td> SOMMERGERSTE / FELDGEMÜSE </td> <td> Summer barley / in vegetable production </td> <td> jari ječmen / njivska zelenjava </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer barley / in vegetable production </td> </tr> <tr> <td> SOMMERHAFER </td> <td> Summer oat </td> <td> jari oves </td> <td> Avena sativa </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer oat </td> </tr> <tr> <td> SOMMERHAFER / FELDGEMÜSE </td> <td> Summer oat / in vegetable production </td> <td> jari oves / njivska zelenjava </td> <td> Avena sativa </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer oat / in vegetable production </td> </tr> <tr> <td> SOMMERHARTWEIZEN (DURUM) </td> <td> Summer durum wheat </td> <td> jara trda pšenica </td> <td> Triticum turgidum var. durum </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer durum wheat </td> </tr> <tr> <td> SOMMERHARTWEIZEN (DURUM) / BUCHWEIZEN </td> <td> Summer durum wheat followed by buckwheat </td> <td> jara trda pšenica / ajda </td> <td> Triticum turgidum var. durum </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Mixed plants </td> </tr> <tr> <td> SOMMERHARTWEIZEN (DURUM) / FELDGEMÜSE </td> <td> Summer durum wheat / in vegetable production </td> <td> jara trda pšenica / njivska zelenjava </td> <td> Triticum turgidum var. durum </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Mixed plants </td> </tr> <tr> <td> SOMMERKÜMMEL </td> <td> Summer caraway </td> <td> jara kumina </td> <td> Carum carvi </td> <td> Other plants </td> <td> Medicinal plants </td> <td> Summer caraway </td> </tr> <tr> <td> SOMMERMENGGETREIDE </td> <td> Summer mixed cereals </td> <td> jara žita </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer Cereals </td> </tr> <tr> <td> SOMMERMENGGETREIDE / FELDGEMÜSE </td> <td> Summer mixed cereals / in vegetable production </td> <td> jara žita / njivska zelenjava </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer Cereals </td> </tr> <tr> <td> SOMMERMOHN </td> <td> Summer poppy </td> <td> jari mak </td> <td> Papaver somniferum </td> <td> Other plants </td> <td> Other plants </td> <td> Summer poppy </td> </tr> <tr> <td> SOMMERRAPS </td> <td> Summer rapeseed </td> <td> jara oljna ogrščica </td> <td> Brassica napus var.
napus </td> <td> Other plants </td> <td> Brassicaceae </td> <td> Brassicaceae </td> </tr> <tr> <td> SOMMERROGGEN </td> <td> Summer rye </td> <td> jara rž </td> <td> Secale cereale </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer rye </td> </tr> <tr> <td> SOMMERTRITICALE </td> <td> Summer triticale </td> <td> jara tritikala </td> <td> Triticosecale Wittmack </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer triticale </td> </tr> <tr> <td> SOMMERWEICHWEIZEN </td> <td> Summer wheat </td> <td> jara mehka pšenica </td> <td> Triticum aestivum </td> <td> Cereals </td> <td> Summer Cereals </td> <td> Summer wheat </td> </tr> <tr> <td> SOMMERWICKEN </td> <td> Common vetch </td> <td> jara grašica (Vicia sativa) </td> <td> Vicia sativa </td> <td> Legumes </td> <td> Fodder legumes </td> <td> Common vetch </td> </tr> <tr> <td> SONNENBLUMEN </td> <td> Sunflower </td> <td> sončnice </td> <td> Helianthus annuus </td> <td> Other plants </td> <td> Other plants </td> <td> Sunflower </td> </tr> <tr> <td> SONSTIGE ACKERFLÄCHEN </td> <td> Other arable land </td> <td> razne njivske površine </td> <td> Unclassified </td> <td> Undefined </td> <td> undefined </td> </tr> <tr> <td> SONSTIGE ACKERKULTUREN </td> <td> Other arable plants </td> <td> razne njivske kulture </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> undefined </td> </tr> <tr> <td> SONSTIGE FLÄCHEN: GESCHÜTZTER ANBAU </td> <td> Area of production under different protection </td> <td> razne površine – varovano pridelovanje (folije, steklo...) </td> <td> Production under protection </td> <td> Production under protection </td> <td> Production under protection </td> </tr> <tr> <td> SONSTIGE GRÜNLANDFLÄCHEN </td> <td> Different green areas </td> <td> razne zelene površine </td> <td> Mixed plants </td> <td> Undefined </td> <td> undefined </td> </tr> <tr> <td> SONSTIGE ÖLFRÜCHTE (SAFLOR, ...) </td> <td> Other oil crops (safflower, ...) </td> <td> razne nekaj (žafranika...)
</td> <td> Other plants </td> <td> Mixed plants </td> <td> Mixed plants </td> </tr> <tr> <td> SONSTIGE KULTUREN IM FOLIENTUNNEL </td> <td> Different production in plastic tunnels </td> <td> razne kulture v plastičnem tunelu </td> <td> Production under protection </td> <td> Production under protection </td> <td> Production under protection </td> </tr> <tr> <td> SONSTIGE KULTUREN IM GEWÄCHSHAUS </td> <td> Different production in the greenhouse </td> <td> razne kulture v rastlinjaku </td> <td> Production under protection </td> <td> Production under protection </td> <td> Production under protection </td> </tr> <tr> <td> SONSTIGE SPEZIALKULTURFLÄCHEN </td> <td> Special areas </td> <td> razne posebne površine </td> <td> Mixed plants </td> <td> undefined </td> <td> undefined </td> </tr> <tr> <td> SONSTIGE WEINFLÄCHEN </td> <td> Different vineyard areas </td> <td> razne vinogradniške površine </td> <td> Vineyard </td> <td> Vineyard </td> <td> Vineyard </td> </tr> <tr> <td> SONSTIGES FELDFUTTER </td> <td> Different fodder </td> <td> razna krma </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> </tr> <tr> <td> SORGHUM </td> <td> Sorghum </td> <td> sirek Sorghum bicolor </td> <td> Other plants </td> <td> Row crop;interrow 60 cm </td> <td> Maize </td> <td> Sorghum </td> <td> </td> </tr> <tr> <td> SPEISEINDUSTRIEKARTOFFELN </td> <td> Potato - industrial and human consumption </td> <td> prehranski in industrijski krompir Solanum tuberosum </td> <td> Potato </td> <td> Row crop;interrow 70 cm </td> <td> Potato </td> <td> </td> <td> </td> </tr> <tr> <td> SPEISEKÜRBIS </td> <td> Pumpkin </td> <td> prehranska buča / buča velikanka Cucurbita pepo </td> <td> Other plants </td> <td> Row crop;interrow 2 m </td> <td> Pumpkin </td> <td> </td> <td> </td> </tr> <tr> <td> SPEISEKARTOFFELN </td> <td> Potato / human consumption </td> <td> prehranski krompir Solanum tuberosum </td> <td> Potato </td> <td> Row crop;interrow 70 cm </td> <td> Potato </td> <td> </td> <td> </td> </tr> <tr> <td> SPEISEKARTOFFELN / FELDGEMÜSE </td> <td> Potato / human consumption in vegetable production </td> <td> prehranski krompir / njivska zelenjava Solanum tuberosum </td> <td> Potato </td> <td> Row crop;interrow 70 cm </td> <td> Potato </td> <td> </td> <td> </td> </tr> <tr> <td> STÄRKEINDUSTRIEKARTOFFELN </td> <td> Potato for industrial production </td> <td> krompir za industrijsko pridelavo (več škroba, gl Solanum tuberosum </td> <td> Potato </td> <td> Row crop;interrow 70 cm </td> <td> Potato </td> <td> </td> <td> </td> </tr> <tr> <td> STRAUCHBEEREN </td> <td> Raspberries, blackberries, blueberries… </td> <td> maline, ribez, robide, kosmulje, borovnice </td> <td> Orchard </td> <td> Soft fruit </td> <td> Bushes in row </td> <td> </td> <td> </td> </tr> <tr> <td> STREUWIESE </td> <td> Natural meadow, not for animal consumption </td> <td> vrstno bogat travnik na vlažni podlagi, kosijo enkrat letno, ni namenjeno preh </td> <td> Grass </td> <td> Grass </td> <td> Grass </td> <td> </td> <td> </td> </tr> <tr> <td> SUDANGRAS </td> <td> Sudan grass </td> <td> sudanska trava </td> <td> Sorghum sudannense </td> <td> Other plants </td> <td> Grass </td> <td> Sudan grass </td> <td> </td> <td> </td> </tr> <tr> <td> TAFELÄPFEL </td> <td> Apples </td> <td> jabolka </td> <td> Malus domestica </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> <td> </td> <td> </td> </tr> <tr> <td> TAFELBIRNEN </td> <td> Pears </td> <td> namizne
hruške </td> <td> Pyrus communis </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> <td> </td> <td> </td> </tr> <tr> <td> TOPINAMBUR </td> <td> Topinambur </td> <td> topinambur </td> <td> Helianthus tuberosus </td> <td> Other plants </td> <td> Other plants </td> <td> Sunflower </td> <td> </td> <td> </td> </tr> <tr> <td> WALDUMWELTMASSNAHMEN </td> <td> Measures for forest and environment preservation </td> <td> ukrepi za ohranjanje gozdov in okolja </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> Unclassified </td> <td> </td> <td> </td> </tr> <tr> <td> WECHSELWIESE (EGART, ACKERWEIDE) </td> <td> Rotation of arable land and meadow </td> <td> njiva, kjer se izmenjujeta oranje in travišče </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> </td> <td> </td> </tr> <tr> <td> WEICHSELN </td> <td> Sour cherry </td> <td> višnja </td> <td> Prunus cerasus </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> <td> </td> <td> </td> </tr> <tr> <td> WEIN </td> <td> Wine </td> <td> vino </td> <td> Vitis vinifera </td> <td> Vineyard </td> <td> Vineyard </td> <td> Vineyard </td> <td> </td> <td> </td> </tr> <tr> <td> WEIN BODENGESUNDUNG </td> <td> Green manure to raise nitrogen content in the soil </td> <td> kolobarjenje z vnosom dušika </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> </td> <td> </td> </tr> <tr> <td> WICKEN - GETREIDE GEMENGE </td> <td> Mixed sowing of common vetch and cereals </td> <td> mešanica grašica – žita </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> </td> <td> </td> </tr> <tr> <td> WINTERDINKEL (SPELZ) </td> <td> Winter spelt </td> <td> prezimna pira </td> <td> Triticum spelta </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter spelt </td> <td> </td> <td> </td> </tr> <tr> <td> WINTERDINKEL (SPELZ) / FELDGEMÜSE </td> <td> Winter spelt / vegetable production </td> <td> prezimna pira / njivska zelenjava </td> <td> Triticum spelta </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter spelt </td> <td> </td> <td> </td> </tr> <tr> <td> WINTERGERSTE </td> <td> Winter barley </td> <td> prezimni ječmen </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter barley </td> <td> </td> <td> </td> </tr> <tr> <td> WINTERGERSTE / BUCHWEIZEN </td> <td> Winter barley followed by buckwheat </td> <td> prezimni ječmen / ajda </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter barley followed by buckwheat </td> <td> </td> </tr> <tr> <td> WINTERGERSTE / FELDGEMÜSE </td> <td> Winter barley / vegetable production </td> <td> prezimni ječmen / njivska zelenjava </td> <td> Hordeum vulgare </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter barley </td> <td> </td> </tr> <tr> <td> WINTERHAFER </td> <td> Winter oat </td> <td> prezimni oves </td> <td> Avena sativa </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter oat </td> <td> </td> </tr> <tr> <td> WINTERHARTWEIZEN (DURUM) </td> <td> Winter durum wheat </td> <td> prezimna trda pšenica </td> <td> Triticum turgidum var. durum </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter durum wheat </td> <td> </td> </tr> <tr> <td> WINTERHARTWEIZEN (DURUM) / BUCHWEIZEN </td> <td> Winter durum wheat followed by buckwheat </td> <td> prezimna trda pšenica / ajda </td> <td> Triticum turgidum var.
durum </td> <td> Cereals </td> <td> Mixed plants </td> <td> Winter durum wheat followed by buckwheat </td> <td> </td> </tr> <tr> <td> WINTERHARTWEIZEN (DURUM) / FELDGEMÜSE </td> <td> Winter durum wheat / in vegetable production </td> <td> prezimna trda pšenica / njivska zelenjava </td> <td> Triticum turgidum var. durum </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter durum wheat </td> <td> </td> </tr> <tr> <td> WINTERKÜMMEL </td> <td> Winter caraway </td> <td> prezimna kumina </td> <td> Carum carvi </td> <td> Other plants </td> <td> Medicinal plants </td> <td> Winter caraway </td> <td> </td> </tr> <tr> <td> WINTERMENGGETREIDE </td> <td> Winter mixed cereals </td> <td> prezimna žita </td> <td> </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter cereals </td> <td> </td> </tr> <tr> <td> WINTERMOHN </td> <td> Winter poppy </td> <td> prezimni mak </td> <td> Papaver somniferum </td> <td> Other plants </td> <td> Other plants </td> <td> Papaver somniferum </td> <td> </td> </tr> <tr> <td> WINTERRÜBSEN </td> <td> Winter turnip rape </td> <td> prezimna repica </td> <td> Brassica rapa L. ssp. sylvestris </td> <td> Other plants </td> <td> Brassicaceae </td> <td> Brassicaceae </td> <td> </td> </tr> <tr> <td> WINTERRAPS </td> <td> Winter rapeseed </td> <td> prezimna oljna ogrščica </td> <td> Brassica napus var. napus </td> <td> Other plants </td> <td> Brassicaceae </td> <td> Brassicaceae </td> <td> </td> </tr> <tr> <td> WINTERROGGEN </td> <td> Winter rye </td> <td> prezimna rž </td> <td> Secale cereale </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter rye </td> <td> </td> </tr> <tr> <td> WINTERROGGEN / FELDGEMÜSE </td> <td> Winter rye / in vegetable production </td> <td> prezimna rž / njivska zelenjava </td> <td> Secale cereale </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter rye </td> <td> </td> </tr> <tr> <td> WINTERTRITICALE </td> <td> Winter triticale </td> <td> prezimna tritikala </td> <td> Triticosecale Wittmack </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter triticale </td> <td> </td> </tr> <tr> <td> WINTERTRITICALE / FELDGEMÜSE </td> <td> Winter triticale / in vegetable production </td> <td> prezimna tritikala / njivska zelenjava </td> <td> Triticosecale Wittmack </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter triticale </td> <td> </td> </tr> <tr> <td> WINTERTRITICALE / FUTTERRÜBE </td> <td> Winter triticale followed by peas </td> <td> prezimna tritikala / njivski grah </td> <td> </td> <td> Mixed plants </td> <td> Mixed plants </td> <td> Winter triticale followed by peas </td> <td> </td> </tr> <tr> <td> WINTERTRITICALE / HIRSE </td> <td> Winter triticale followed by millet </td> <td> proso </td> <td> Panicum miliaceum </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter millet </td> <td> </td> </tr> <tr> <td> WINTERWEICHWEIZEN </td> <td> Winter wheat </td> <td> prezimna pšenica – mehka </td> <td> Triticum aestivum </td> <td> Cereals </td> <td> Winter cereals </td> <td> Winter wheat </td> <td> </td> </tr> </table>
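In practice, the crop lookup above is used as a join table: every field polygon carries a national LPIS crop name, and the harmonized class labels (levels 1-3) are attached by a simple merge. Below is a minimal sketch in Python, assuming hypothetical file and column names (the actual attribute names depend on the delivered shapefiles and are not project deliverables):

```python
import pandas as pd

# Hypothetical inputs: the lookup table above exported as CSV, and a per-field
# attribute table carrying the national LPIS crop name for each polygon.
lookup = pd.read_csv("lpis_crop_lookup.csv")   # columns: lpis_name, level1, level2, level3
fields = pd.read_csv("fields_2017.csv")        # columns: field_id, lpis_name, area_ha

# Attach the harmonized class hierarchy to every field record.
harmonized = fields.merge(lookup, on="lpis_name", how="left")

# Any field whose crop name is missing from the lookup needs manual mapping.
unmatched = harmonized[harmonized["level1"].isna()]["lpis_name"].unique()
print(f"{len(unmatched)} LPIS crop names still need a harmonized class")
```

The left join deliberately keeps unmatched fields in the output, so gaps in the lookup table surface immediately instead of silently shrinking the dataset.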
ANALYSIS OF AVAILABLE NON-EO DATA (AGRICULTURE INSTITUTE OF SLOVENIA) **Region: Subset of Slovenia (Jablje) Region: Slovenia** <table> <tr> <th> **Data** </th> <th> **Information provided** </th> <th> **Available** </th> <th> **Description/Remarks** </th> <th> **Format** </th> <th> **Units and coordinates** </th> <th> **Spatial Resolution** </th> <th> **Polygons layout: Format & Spatial resolution** </th> <th> **Temporal Resolution/Acquisition Frequency** </th> <th> **Available dates** </th> <th> **Validity** </th> <th> **Use with EO data/Expectations from EO data** </th> <th> **Available** </th> <th> **Remarks** </th> <th> **Format** </th> <th> **Units and coordinates** </th> <th> **Spatial resolution** </th> <th> **Polygons layout: Format & Spatial resolution** </th> <th> **Temporal Resolution/Acquisition Frequency** </th> <th> **Validity** </th> <th> **Available dates** </th> <th> **Use with EO data/Expectations from EO data** </th> </tr> <tr> <td> **Soil phosphorus** </td> <td> soil samples </td> <td> yes </td> <td> phosphorus as P2O5 (internal laboratory validated method) n=59 </td> <td> Shape and excel/csv </td> <td> mg/100g and classes, D48 coordinate system </td> <td> </td> <td> Polygon or centroid. LPIS field (if possible, even more detailed than LPIS field level) </td> <td> Resolution is at the actual field level </td> <td> </td> <td> 2014 </td> <td> As covariate for crop yield, soil moisture </td> <td> </td> <td> </td> <td> Shape </td> <td> phosphorus as P2O5 (mg/100g) --> internal validated method </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Soil potassium** </td> <td> soil samples </td> <td> yes </td> <td> potassium as K2O (mg/100g) --> internal validated method n=59 </td> <td> Shape and excel/csv </td> <td> mg/100g and classes, D48 coordinate system </td> <td> </td> <td> Polygon or centroid. LPIS field (if possible, even more detailed than LPIS field level) </td> <td> Resolution is at the actual field level </td> <td> </td> <td> 2014 </td> <td> As covariate for crop yield, soil moisture </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> **Soil pH** </td> <td> soil samples </td> <td> yes </td> <td> pH in KCl (-) --> ISO 10390:2005 n=59 </td> <td> Shape and excel/csv </td> <td> pH value and classes, D48 coordinate system </td> <td> </td> <td> Polygon or centroid. LPIS field (if possible, even more detailed than LPIS field level) </td> <td> Resolution is at the actual field level </td> <td> </td> <td> 2014 </td> <td> As covariate for crop yield, soil moisture </td> <td> pH </td> <td> pH from soil samples from 2006 and 2016 analysed at our LAB with method ISO 10390:2005. The samples are taken mostly from the central and northwestern part of Slovenia. n=11.387 with pH data </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Soil organic matter** </td> <td> soil samples </td> <td> yes </td> <td> organic matter (% f=1,724) --> ISO 14235:1998 n=59 </td> <td> Shape and excel/csv </td> <td> % and classes, D48 coordinate system </td> <td> </td> <td> Polygon or centroid. LPIS field (if possible, even more detailed than LPIS field level) </td> <td> Resolution is at the actual field level </td> <td> </td> <td> 2014 </td> <td> As covariate for crop yield, soil moisture </td> <td> </td> <td> Organic matter from soil samples from 2006 and 2016 analysed at our LAB with method ISO 14235:1998. The samples are taken mostly from the central and northwestern part of Slovenia. n=4.340 with OM data </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr>
<tr> <td> **Soil type** </td> <td> Soil types from soil map of Slovenia 1:25.000 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Yes </td> <td> Each polygon has an attribute: (1) soil mapping unit (SMU) and the (2) area. Each SMU has up to three soil type units (STUs) with percentage of coverage in SMU. Soil type units = soil types are classified based on the Slovenian soil classification system. The main soil type in SMU is also transformed into FAO soil type (example: Eutric ...) </td> <td> Shape </td> <td> meters, Slovenian D48 coordinate system or new Slovenian D96 coordinate system </td> <td> </td> <td> Polygon. Polygons were delineated based on field mapping in the 1990s at scale 1:25.000 </td> <td> No spatial update. Only some unofficial attribute and content updates and extensions (in 2007 and 2014 by AIS). </td> <td> 1999 but valid for more years </td> <td> </td> <td> As covariate for crop yield, soil moisture </td> </tr> <tr> <td> **Soil depth** </td> <td> Average soil depth from soil map of Slovenia 1:25.000 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> yes </td> <td> Soil depth (cm) is calculated from reference soil profile parameters (defined for each STU) using a weighted average based on percentage of coverage of STU under SMU. </td> <td> Shape </td> <td> cm or classes, Slovenian D48 coordinate system or new Slovenian D96 coordinate system </td> <td> </td> <td> Polygon. Polygons were delineated based on field mapping in the 1990s at scale 1:25.000 </td> <td> Soil depth was derived by AIS in 2015 </td> <td> 1999 but valid for more years </td> <td> </td> <td> As covariate for crop yield, soil moisture </td> </tr> <tr> <td> **Soil pH** </td> <td> Average soil pH from soil map of Slovenia 1:25.000 </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> yes </td> <td> Soil pH is calculated from reference soil profile parameters (defined for each STU) using a weighted average based on percentage of coverage of STU under SMU. </td> <td> Shape </td> <td> pH value or classes, Slovenian D48 coordinate system or new Slovenian D96 coordinate system </td> <td> </td> <td> Polygon. Polygons were delineated based on field mapping in the 1990s at scale 1:25.000 </td> <td> Soil pH was derived by AIS in 2015 </td> <td> 1999 but valid for more years </td> <td> </td> <td> As covariate for crop yield, soil moisture </td> </tr> <tr> <td> **LPIS field polygons** </td> <td> Field polygons from AGENCY FOR AGRICULTURAL MARKETS AND RURAL DEVELOPMENT, which is under the Ministry of Agriculture, Forestry and Food. Field polygons are uploaded every year. Field polygons are at the level of the actual field. </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> yes </td> <td> Farm ID, field number, area, crop type code. There is no information about autumn catch crops. Main crop type only. </td> <td> Shape </td> <td> </td> <td> Field polygon </td> <td> Polygon.
Resolution is at the actual field level </td> <td> Yearly updates </td> <td> Acquisition date/year </td> <td> Permission for 2016 and 2017 </td> <td> As covariate for crop type, crop yield </td> </tr> <tr> <td> **Crop type by field** </td> <td> Crop type by LPIS field polygons. </td> <td> yes </td> <td> crop types by LPIS field polygons </td> <td> Shape or excel </td> <td> crop types as defined in lookup table. D48 coordinate system </td> <td> Field polygon </td> <td> Polygon or excel. Resolution is at the actual LPIS field level or sometimes at a more detailed field level (example: inside a corn LPIS field we have data about two corn varieties) </td> <td> </td> <td> 2016 and 2017 </td> <td> 2016 and 2017 </td> <td> As covariate for crop type, crop yield </td> <td> yes </td> <td> crop types by LPIS field polygons </td> <td> Shape </td> <td> crop types as defined in lookup table. Also harmonized with the Austrian crop type classification (levels 1, 2 and 3) by AIS. </td> <td> Field polygon </td> <td> Polygon. Resolution is at the actual LPIS field level </td> <td> Yearly updates </td> <td> Acquisition date/year </td> <td> Permission for 2016 and 2017 </td> <td> As covariate for crop type, crop yield </td> </tr> <tr> <td> **Working task by field** </td> <td> Working task by field polygons </td> <td> yes </td> <td> crop types by LPIS field polygons. Date and type of task. </td> <td> Shape or excel </td> <td> Type of task: 1. plowing; 2. pre-sowing soil treatment; 3. sowing / planting / transplanting; 4. fertilization; 5. mechanical weed control; 6. covering with anti-insect nets or other coverages; 7. plant protection measures; 8. harvesting, mowing; 9. mechanical destruction of the crop; 10. soil sample for Nmin analysis; 11. other work tasks. Can be grouped, or some working tasks (such as plant protection measures) can be excluded. D48 coordinate system </td> <td> Field polygon </td> <td> Polygon. Resolution is at the actual LPIS field level or sometimes at a more detailed field level (example: inside one LPIS field we have data about two spatially separated fertilization operations) </td> <td> </td> <td> 2016 and 2017 </td> <td> 2016 and 2017 </td> <td> As covariate for crop yield </td> <td> no </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Fertilization by field** </td> <td> Fertilization by field polygons </td> <td> yes </td> <td> crop types by LPIS field polygons. Date, type of fertilizer and amount of fertilizer. Will be calculated as N, P and K input. </td> <td> Shape or excel </td> <td> type of fertilizer: (1) mineral or (2) organic fertilizer; amount (kg per area). D48 coordinate system </td> <td> Field polygon </td> <td> Polygon. Same as Working tasks </td> <td> </td> <td> 2016 and 2017 </td> <td> 2016 and 2017 </td> <td> As covariate for crop yield </td> <td> no </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Yield by field** </td> <td> Yield by field polygon </td> <td> yes </td> <td> Yield by LPIS field polygons. Will be calculated per field in t/ha. </td> <td> Shape or excel </td> <td> yield in kg per area. D48 coordinate system </td> <td> Field polygon </td> <td> Polygon.
Same as Working tasks </td> <td> </td> <td> 2016 and 2017 </td> <td> 2016 and 2017 </td> <td> As covariate for crop yield </td> <td> no </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Crop damage** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Crop cycle** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Climate data** </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table>
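Note that the Slovenian layers above mix the legacy D48 Gauss-Krüger datum with the newer D96/TM datum, so geometries generally have to be reprojected to a common reference before they can be overlaid on EO products. A minimal sketch with GeoPandas, assuming a hypothetical shapefile name and the commonly used EPSG registrations (EPSG:3912 for D48/GK and EPSG:3794 for D96/TM; both should be verified against the actual delivery):

```python
import geopandas as gpd

# Hypothetical file name, standing in for any of the D48 layers listed above.
layer = gpd.read_file("soil_samples_jablje.shp")

# If the shapefile ships without a .prj file, declare the source CRS explicitly
# (EPSG:3912 = MGI 1901 / Slovene National Grid, i.e. the D48/GK datum).
if layer.crs is None:
    layer = layer.set_crs(epsg=3912)

# Reproject to D96/TM (EPSG:3794) to match the newer national layers,
# or to WGS84 (EPSG:4326) for overlay with Sentinel-2 products.
layer_d96 = layer.to_crs(epsg=3794)
layer_wgs84 = layer.to_crs(epsg=4326)
```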
## Region: Subset of Denmark Region: Denmark

<table> <tr> <th> **Data** </th> <th> **Information provided** </th> <th> **Available** </th> <th> **Description/Remarks** </th> <th> **Format** </th> <th> **Units and coordinates** </th> <th> **Spatial Resolution** </th> <th> **Polygons layout: Format & Spatial resolution** </th> <th> **Temporal Resolution/Acquisition Frequency** </th> <th> **Available dates** </th> <th> **Validity** </th> <th> **Use with EO data/Expectations from EO data** </th> <th> **Available** </th> <th> **Remarks** </th> <th> **Format** </th> <th> **Units and coordinates** </th> <th> **Spatial resolution** </th> <th> **Polygons layout: Format & Spatial resolution** </th> <th> **Temporal Resolution/Acquisition Frequency** </th> <th> **Validity** </th> <th> **Available dates** </th> <th> **Use with EO data/Expectations from EO data** </th> </tr> <tr> <td> **Soil map** </td> <td> Map of Danish soil types </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Yes </td> <td> The Danish soil classification system JB 1 - JB 11. SEGES will deliver a description of the definitions of JB numbers (% clay, % silt, % sand and % OM) </td> <td> shape </td> <td> </td> <td> </td> <td> </td> <td> Soil map updated in 2014. </td> <td> 2014, but valid for more years </td> <td> </td> <td> </td> </tr> <tr> <td> **Soil capacity of plant available water** </td> <td> Map of soil capacity of plant available water in 50, 75, 100, 125 and 150 cm </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Yes </td> <td> Soil capacity of plant available water in 50, 75, 100, 125 and 150 cm </td> <td> Shape </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Danish height model** </td> <td> DHM / Terrain, which is a model of terrain topography or elevation above sea level. The model is constructed from LIDAR scans. The product consists of several themes. Most relevant is a terrain model where objects like vegetation, houses, cars, etc. are removed. Additional relevant products are a surface map where structures and buildings are included. The actual LIDAR point scans are also available. </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Yes </td> <td> Height above sea level. LIDAR scan points. Surface height, i.e. height of buildings and structures included </td> <td> Shape, WMS </td> <td> </td> <td> Terrain has a grid point of 40 cm. Vertical RMSE ~5 cm, Horizontal RMSE ~15 cm. </td> <td> 40x40 cm </td> <td> Updated 2015 and updates are published continuously </td> <td> Several years (terrain) </td> <td> 2015 - onwards </td> <td> </td> </tr> <tr> <td> **Field polygons (IMK)** </td> <td> Field polygons from the Ministry of Food, Agriculture and Environment. Field polygons are uploaded every year in connection with the EU application. Field polygons are at the level of the actual field. </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> Yes (NDA?) </td> <td> Farm ID, field number, area, crop type, crop code. There is no information about autumn catch crops </td> <td> Shape </td> <td> </td> <td> Field polygon </td> <td> Resolution is at the actual field level </td> <td> Yearly updates </td> <td> Acquisition date/year </td> <td> 2009 - onwards </td> <td> </td> </tr> <tr> <td> **Climate data** </td> <td> Data is collected and validated by the Danish Meteorological Institute. Data is obtained on an hourly or daily basis. Data is available for the current year and two years back. </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> NDA (brought by SEGES and can only be made available for third parties for SEGES R&D activities, subject to NDA) </td> <td> Dataset consists of: Air temperature hourly/avg./min./max. (°C), </td> <td> </td> <td> </td> <td> 10 x 10 km grid </td> <td> 10 x 10 km grid </td> <td> Daily </td> <td> Day </td> <td> 01/01/2016 - to current date. Only two years back at any time </td> <td> Crop type, yield, soil moisture </td> </tr> <tr> <td> **Yield maps** </td> <td> Yield maps from individual fields collected from combines. Data comes from actual production farms which have granted SEGES access to their data. Yield meters on combines must be calibrated. Since the data comes from actual farms, various levels of calibration may have been performed, which is a source of error. This error can possibly be quantified if yields for a subsample of data have also been measured by weighbridge. SEGES will deliver yield data with a document describing data cleaning procedures (field polygons, outlier removal for different crops and headland) </td> <td> NDA </td> <td> The number of attributes and spatial resolution will vary depending on the type of combine harvester (manufacturer). SEGES will deliver a master table with selected attributes. Some yield maps will include data in all attributes, others will be blank in some of the attributes. </td> <td> likely shape </td> <td> </td> <td> Depending on manufacturer </td> <td> Position data within field (GPS) with an X and Y coordinate. </td> <td> Yearly </td> <td> 2015-2017 </td> <td> Acquisition date </td> <td> Yield prediction, crop type </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **Yield field level** </td> <td> Yield data are retrieved from the Danish field database (DFDB).
The data is either estimated by the farmer (low quality), registered by combine harvester with various levels of calibration (medium quality), or measured by weighbridge (high quality). </td> <td> NDA </td> <td> Yield from individual fields </td> <td> </td> <td> </td> <td> Field level </td> <td> Actual field polygon </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **DFDB (Danish field database)** </td> <td> Data consist of information from Danish production farms. Data consists both of registered and planned operations or status at the field level. Registered and planned data can be separated through a user validation system. However, not all farmers use this system, so the separation between planned and registered operations can be difficult. The quality of this separation has to be assessed individually for each attribute. In general, the quality of data related to farms and fields (ID, soil texture and soil analysis) is very accurate. Quality of the yearly data varies from low to medium for planned and default data to high for registered data. </td> <td> </td> <td> Data consist of: Farm identity (address, CVR number), Field ID per farm identity (position data for the majority of fields), Soil texture at field level, Soil analysis at field level if registered (Rt, Pt, Kt), Yearly data: crop, seed rate, fertilizer (organic, inorganic), pesticides used, machinery, yield, operations data (drilling and harvesting dates etc.) </td> <td> </td> <td> </td> <td> Field level </td> <td> Actual field polygon </td> <td> Yearly </td> <td> 2005-onwards </td> <td> Growing season. Quality varies (see description). </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **NFTS (Nordic field trial system) – optimal N rate** </td> <td> Data is obtained from field trials (typically 10 x 3 m plots) with different rates of N fertilizer placed within cultivated fields. Observations of treatments (drilling, fertilizer, growth control), yield and soil parameters are recorded from the field trial. The results from the field trials provide the optimal N fertilizer rate for the surrounding field. Data for the actual N fertilizer rate for the surrounding field and field polygon coordinates can be retrieved from DFDB. Quality of data is high. Crop: Number of fields 2015/2016/2017, Winter wheat: 42/30/26, Winter rye: 4/4/5, Winter barley: 7/4/5, Spring barley: 16/9/11, Corn: 3/4/3, Triticale: 0/0/1, Winter rapeseed: 0/8/8, Sugar beet: 0/1/0, Total: 72/60/59 </td> <td> NDA </td> <td> Crop, crop protection, fertilizer, growth control, harvest, yield, soil texture (analyzed), laid crops (estimated). Exact GPS position of trial </td> <td> </td> <td> </td> <td> Field level </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> **NFTS (Nordic field trial system)** </td> <td> Data is obtained from a variety of field trials (plot trials within cultivated fields). Observations of treatments (sowing, fertilizer, growth control), yield and soil parameters are recorded. Pictures (multispectral) of field trials are provided by overflight using a drone 3-5 times throughout the growth season. Data for the surrounding field and field polygon coordinates can be retrieved from DFDB. Quality of data is high. Number of fields 2015/2016/2017 ? </td> <td> </td> <td> Multispectral pictures of field trials, crop, crop protection, fertilizer, growth control, harvest, yield, soil texture (analyzed), laid crops (estimated) </td> <td> </td> <td> </td> <td> Plot level </td> <td> Treatment plot for all operations and yield data, <10 x 10 cm for multispectral imagery </td> <td> 3-6 times per growing season </td> <td> harvest year 2018 - onwards </td> <td> Growing season </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table>
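Across both regions the intended "use with EO data" follows the same pattern: per-field statistics from EO rasters are attached to the polygon layers as covariates for crop type and yield models. A minimal sketch, assuming the rasterstats package and hypothetical input names (a single-date NDVI GeoTIFF already in the same CRS as the field polygons):

```python
import geopandas as gpd
from rasterstats import zonal_stats

# Hypothetical inputs: IMK/LPIS field polygons and a co-registered NDVI raster.
fields = gpd.read_file("field_polygons.shp")
stats = zonal_stats(fields.geometry, "ndvi_20170615.tif",
                    stats=["mean", "std", "count"])

# Attach the per-field NDVI statistics as model covariates.
fields["ndvi_mean"] = [s["mean"] for s in stats]
fields["ndvi_std"] = [s["std"] for s in stats]
fields.to_file("fields_with_ndvi.shp")
```

The same pattern extends to multi-date stacks: repeating the zonal statistics per acquisition date yields the per-field time series that crop-type classifiers typically consume.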
</td> <td> </td> <td> Multispectral pictures of field trials, crop, crop protection, fertilizer, growth control, harvest, yield, soil texture (analyzed), laid crops (estimated) </td> <td> </td> <td> </td> <td> Plot level </td> <td> Treatment plot for all oprations and yeld data, <10 x 10 cm for multispec imagary </td> <td> 3- 6 times pr. growing season </td> <td> harves year 2018 - onwards </td> <td> Growing season </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> <tr> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </table> LPIS CROP GROUPS OF GEOVILLE **Original LPIS Grouping Initial grouping Austria LPIS member classes Stage 1 Stage 2 Stage 3** <table> <tr> <th> _TAFELÄPFEL_ </th> <th> _Apples_ </th> <th> _1_ </th> <th> </th> </tr> <tr> <td> SCHALENFRÜCHTE, MARILLEN, TAFELBIRNEN, KIRSCHEN, ZWETSCHKEN, PFIRSICHE, EDELKASTANIEN, WEICHSELN, NEKTARINEN, PFLAUMEN </td> <td> All other tree-fruits </td> <td> Tree fruits 11 </td> <td> Fruits </td> </tr> <tr> <td> </td> <td> Wine </td> <td> 1 </td> </tr> <tr> <td> KÖRNERMAIS, SILOMAIS, MAIS CORN-COB-MIX (CCM), _SAATMAISVERMEHRUNG, ZUCKERMAIS_ SOJABOHNEN </td> <td> _Maize_ Soya bean </td> <td> _5_ Maize & Soy Other 1 </td> <td> </td> <td> Summer non-grain </td> </tr> <tr> <td> _SOMMERMOHN_ SONNENBLUMEN </td> <td> _Poppy_ Sun flower </td> <td> _1_ 1 Vegetables </td> <td> </td> </tr> <tr> <td> _ACKERBOHNEN (PUFFBOHNEN)_ _KÖRNERERBSEN, PLATTERBSEN_ FELDGEMÜSE EINKULTURIG, FELDGEMÜSE VERARBEITUNG EINKULTURIG, FELDGEMÜSE MEHRKULTURIG </td> <td> _Beans Peas_ Other/mixed vegetables </td> <td> _1_ Vegetables & other _2_ Root vegetables 3 </td> </tr> <tr> <td> SPEISEKARTOFFELN, STÄRKEINDUSTRIEKARTOFFELN, SPEISEINDUSTRIEKARTOFFELN, SAATKARTOFFELN, _FRÜHKARTOFFELN_ ZUCKERRÜBEN </td> <td> _Potatoes_ Sugar beet </td> <td> _1_ 1 </td> </tr> <tr> <td> SPEISEKÜRBIS </td> <td> Pumpkins </td> <td> 2 </td> </tr> </table> QUITTEN, WEIN ÖLKÜRBIS, <table> <tr> <th> _SOMMERGERSTE_ </th> <th> _Summer barley_ </th> <th> _1_ </th> <th> </th> </tr> <tr> <td> _SOMMERHAFER_ </td> <td> _Summer oats_ </td> <td> _1_ </td> <td> </td> </tr> <tr> <td> _SOMMERHARTWEIZEN (DURUM)_ _SOMMERMENGGETREIDE_ _SOMMERWEICHWEIZEN_ HI _RSE, SORGHUM_ </td> <td> _Summer hard wheat_ _Summer mixed grain_ _Summer wheat_ _Millet & Sorghum _ </td> <td> _1_ _1_ _1_ _2_ </td> <td> Summer grains </td> </tr> <tr> <td> WINTERWEICHWEIZEN Winter wheat Winter triticale Winter rye Winter spelt Winter hard wheat Winter mixed grain WINTERTRITICALE WINTERROGGEN WINTERDINKEL (SPELZ) WINTERHARTWEIZEN (DURUM) WINTERMENGGETREIDE </td> <td> _1_ _1_ _1_ Winter grains _1_ _1_ 1 </td> <td> Winter crops </td> </tr> <tr> </tr> <tr> </tr> <tr> </tr> <tr> </tr> <tr> </tr> <tr> <td> WINTERRAPS Winter rape/canola </td> <td> 1 </td> </tr> </table> 
| DAUERWEIDE, HUTWEIDE | Pasture | Grassland |
| MÄHWIESE/-WEIDE DREI UND MEHR NUTZUNGEN, MÄHWIESE/-WEIDE ZWEI NUTZUNGEN, WECHSELWIESE (EGART, ACKERWEIDE), EINMÄHDIGE WIESE | Meadows | Grassland |
| ALMFUTTERFLÄCHE, BERGMÄHDER | Alpine meadows | Grassland |
| GRÜNBRACHE, GRÜNLANDBRACHE | Fallow green land | Grassland |
| KLEE, LUZERNE, FUTTERGRÄSER, SONSTIGES FELDFUTTER, KLEEGRAS | Grass crops | Grassland |

SLOVENIAN LPIS CODE LIST

| Crop (English) | Slovenian name | Latin name | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- | --- | --- |
| American blueberries, American cranberries | | | | Soft fruit | |
| Purple coneflower | ameriški slamnik | Echinacea purpurea | Other plants | Ornamental production | |
| Chokeberries | aronija | | Orchard | Soft fruit | Bushes in row |
| Artichoke | artičoka | Cynara scolymus | Vegetable | Vegetable | Vegetable |
| Paw Paw | asimina | Asimina triloba | Orchard | Fruit | Trees |
| ?? | bar | | Other plants | | |
| White mustard | bela gorjušica | Sinapis alba | Other plants | Brassicaceae | |
| Elderberry | bezeg | Sambucus nigra | Trees | Trees | |
| Broad bean / in vegetable production | bob | Vicia fabae | Legumes | Grain legumes | |
| Peaches | breskev | Prunus persica | Orchard | Fruit | Trees |
| undefined | | | Unclassified | Undefined | |
| Raspberries, blackberries, blueberries… | maline, ribez, robide, kosmulje, borovnice | | Orchard | Soft fruit | Bushes in row |
| Cherry | češnje | Prunus avium | Orchard | Fruit | Trees |
| Spanish salsify | črni koren | Scorzonera hispanica | Vegetable | Vegetable | |
| Raspberries, blackberries, blueberries… | maline, ribez, robide, kosmulje, borovnice | | Orchard | Soft fruit | Bushes in row |
| Raspberries, blackberries, blueberries… | maline, ribez, robide, kosmulje, borovnice | | Orchard | Soft fruit | Bushes in row |
| Clover | detelja | Trifolium sp | Legumes | Fodder legumes | Clover |
| Grass clover mixture | mešanica trav in detelj | | Grass | Grass | Grass |
| Woody plants on the field | lesne rastline na polju, drevesa, grmovje | | Trees | Trees | |
| Nurseries | drevesnice | | Other plants | Trees | Nursery |
| Different fodder | razna krma | | Other plants | Mixed plants | |
| Trees (other fast-growing coppice) | drugi hitro rastoči panjevci | | Trees | Trees | |
| Lacy phacelia | facelija | Phacelia tanacetifolia | Legumes | Fodder legumes | Lacy phacelia |
| Pineapple guava | Pineapple guava | Feijoa sellowiana | Orchard | Trees | Pineapple guava |
| Goji berry | goji jagoda | Lycium barbarum | Orchard | Trees | Goji berry |
| Peas | grah | Pisum sativum | Legumes | Grain legumes | |
| Grass pea | grahor | Lathyrus sativus | Legumes | Fodder legumes | Lathyrus sativus |
| Pomegranate | granatno jabolko | Punica granatum | Orchard | Fruit | Trees |
| Common vetch | jara grašica (Vicia sativa) | Vicia sativa | Legumes | Fodder legumes | Common vetch |
| Winter vetch | ozimna grašica | Vicia villosa | Legumes | Fodder legumes | Winter vetch |
| Grapefruit | grenivlka | Citrus x paradisi | Orchard | Fruit | Trees |
| Woody plants on the field | lesne rastline na polju, drevesa, grmovje | | Trees | Trees | |
| Hop | hmelj | Humulus lupulus | Other plants | Hop | Plant height around 6 m |
| Pears | namizne hruške | Pyrus communis | Orchard | Fruit | Trees |
| Crimson clover | inkarnatka | Trifolium incarnatum | Legumes | Fodder legumes | Crimson clover |
| Apples | jabolka | Malus domestica | Orchard | Fruit | Trees |
| Strawberry | jagoda | | Vegetable | Row crops, interrow 1 m | Vegetable |
| Loquat | japonska nešplja | Eriobotrya japonica | Orchard | Fruit | Trees |
| Summer barley | jari ječmen | Hordeum vulgare | Cereals | Summer Cereals | Summer barley |
| Winter barley | prezimni ječmen | Hordeum vulgare | Cereals | Winter cereals | Winter barley |
| Persimmon | kaki | Diospyros kaki | Orchard | Fruit | Trees |
| Summer khorasan wheat | kamut (jari) | Triticum turgidum spp turanicum | Cereals | Summer Cereals | |
| Winter khorasan wheat | kamut (ozimni) | Triticum turgidum spp turanicum | Cereals | Winter cereals | |
| Kiwi | kivi | Actinidia deliciosa | Orchard | Fruit | Trees |
| Hemp | konoplja | Cannabis sativa | Other plants | Other plants | Cannabaceae |
| Maize for silage | koruza za silažo | Zea mais | Maize | Maize | Maize |
| Maize | prehranska koruza | Zea mais | Maize | Maize | |
| Raspberries, blackberries, blueberries… | maline, ribez, robide, kosmulje, borovnice | | Orchard | Soft fruit | Bushes in row |
| Marone - chestnut | kostanj | Castanea sativa | Orchard | Trees | |
| Summer Swede rape | krmna ogrščica (jara) | Brassica napus L. var. napus f. biennis | Other plants | Brassicaceae | Root crop |
| Winter Swede rape | krmna ogrščica (ozimna) | Brassica napus L. var. napus f. biennis | Other plants | Brassicaceae | Brassicaceae |
| Fodder beet, rutabaga | krmna pesa | Beta vulgaris subs. vulgaris; Brassica na | Other plants | Row crop;interrow 70 cm | Root crop |
| Turnips | krmna repa | Brassica rapa | Other plants | Row crop;interrow 50 cm | Root crop |
| Summer wild turnip | krmna repica (jara) | Brassica rapa L. ssp. sylvestris | Other plants | Brassicaceae | Brassicaceae |
| Winter wild turnip | krmna repica (ozimna) | Brassica rapa L. ssp. sylvestris | Other plants | Brassicaceae | Brassicaceae |
| Broad bean | krmni bob | Vicia fabae | Legumes | Fodder legumes | |
| Peas | grah | Pisum sativum | Legumes | Grain legumes | |
| Peas | grah | Pisum sativum | Legumes | Grain legumes | |
| Field vegetable - uniform production | enovita pridelava zelenjave na njivi | | Vegetable | Mixed plants | Vegetable |
| Field vegetable - uniform production | enovita pridelava zelenjave na njivi | | Vegetable | Mixed plants | Vegetable |
| Sorghum | sirek | Sorghum bicolor | Other plants | Row crop;interrow 60 cm | Maize |
| Field vegetable - uniform production | enovita pridelava zelenjave na njivi | | Vegetable | Mixed plants | Vegetable |
| Potato / human consumption | prehranski krompir | Solanum tuberosum | Potato | Row crop;interrow 70 cm | Potato |
| Quince | kutina | Cydonia oblonga | Orchard | Fruit | Trees |
| Flax | navadni lan (ne za pridobivanje vlaken) | Linum usitatissimum | Pseudo cereals | Winter pseudo cereals | |
| Woody plants on the field | lesne rastline na polju, drevesa, grmovje | | Trees | Trees | |
| Lemon | limonovec | Citrus limon | Orchard | Fruit | Trees |
| Watermelon | lubenice | Citrullus lanatus | Other plants | Row crop;interrow 2 m | Watermelon |
| Alfalfa | lucerna | Medicago sativa | Legumes | Fodder legumes | |
| Raspberries, blackberries, blueberries… | maline, ribez, robide, kosmulje, borovnice | | Orchard | Soft fruit | Bushes in row |
| Tangerine | mandarinovec | Citrus reticulata | Orchard | Fruit | Trees |
| Almond | mandelj | Prunus dulcis | Orchard | Fruit | Trees |
| Apricot | marelice | Prunus armeniaca | Orchard | Fruit | Trees |
| Production of vine planting material | pridelava sadilnega materiala | Vitis vinifera | Vineyard | Vineyard | |
| Melon | melone oziroma dinje | Cucumis melo | Other plants | Row crop;interrow 2 m | Melon |
| Field vegetable production - mixture; for processing | mešana pridelava zelenjave na njivi za predelavo | | Vegetable | Mixed plants | Mixed plants |
| Mixed plants for snail farming | mešane rastline za rejo polžev | | Other plants | Mixed plants | |
| Other fruit | druge sadne vrste | | Orchard | Fruit | Mixed plants |
| Mixed permanent plants under 0.1 ha | mešane trajne rastline pod 0,1 ha | | Other plants | Mixed plants | Mixed plants |
| Mixed vegetables under 0.1 ha | mešane zelenjadnice pod 0,1 ha | | Other plants | Mixed plants | Mixed plants |
plants Mixed plants mešanica rastlin - naknadni posevek Other plants Mixed plants Summer cereals jara žita Cereals Summer Cereals Winter cereals prezimna žita Cereals Winter cereals Ornametal grass miskant Miscanthus spp Other plants Grass Ornamental production Perennial ryegrass mnogocvetna ljulka Lolium perenne Grass Grass Grass Corn Salad motovilec Valerianella locusta Other plants Vegetable Corn Salad Mulberry murva Morus spp. Orchard Fruit Trees Vineyard vinograd (ne mlad, ravno Vitis vinifera Vineyard Vineyard Vineyard Asian pear nashi Pyrus pyrifolia Orchard Fruit Trees Pumpkin prehranska buča / buča v Cucurbita pepo Other plants Pumpkin Pumpkin Birdsfoot Trefoil navadna nokota Lotus corniculatus Legumes Fodder legumes Birdsfoot Trefoil Undefined nedefinirana kmetijska rastlina Unclassified Undefined Nectarine nektarine Prunus persica Orchard Fruit Trees No green cover neozelenjen del Unclassified Undefined Untouched grass belt nepokošen pas Unclassified Grass 21 <table> <tr> <th> 000 </th> <th> ni v uporabi </th> </tr> <tr> <td> 888 </td> <td> Ni v uporabi KMG </td> </tr> <tr> <td> 404 </td> <td> njivska zelišča </td> </tr> <tr> <td> 735 </td> <td> okrasne rastline </td> </tr> <tr> <td> 800 </td> <td> oljka </td> </tr> <tr> <td> 013 </td> <td> oljna buča </td> </tr> <tr> <td> 014 </td> <td> oljna ogrščica (jara) </td> </tr> <tr> <td> 814 </td> <td> oljna ogrščica (ozimna) </td> </tr> <tr> <td> 113 </td> <td> oljna redkev </td> </tr> <tr> <td> 103 </td> <td> oljna repica </td> </tr> <tr> <td> 631 </td> <td> oreh </td> </tr> <tr> <td> 698 </td> <td> oreh in kostanj </td> </tr> <tr> <td> 008 </td> <td> oves (jari) </td> </tr> <tr> <td> 808 </td> <td> oves (ozimni) </td> </tr> <tr> <td> 221 </td> <td> perzijska detelja </td> </tr> <tr> <td> 003 </td> <td> pira (jara) </td> </tr> <tr> <td> 803 </td> <td> pira (ozimna) </td> </tr> <tr> <td> 108 </td> <td> podzemna koleraba </td> </tr> <tr> <td> 673 </td> <td> pomarančevec </td> </tr> <tr> <td> 777 </td> <td> površina v odstopu </td> </tr> <tr> <td> 026 </td> <td> praha </td> </tr> </table> <table> <tr> <th> not in use </th> <th> ni v uporabi </th> <th> </th> <th> Unclassified </th> <th> Undefined </th> <th> </th> </tr> <tr> <td> not in use </td> <td> Ni v uporabi KMG </td> <td> </td> <td> Unclassified </td> <td> Undefined </td> <td> </td> </tr> <tr> <td> Field herbs </td> <td> njivska zelišča </td> <td> </td> <td> Other plants </td> <td> Mixed plants </td> <td> Vegetable </td> </tr> <tr> <td> cvetje in okrasne rastline </td> <td> okrasne rastline </td> <td> Mixed plants </td> <td> Other plants </td> <td> Ornamental production </td> <td> </td> </tr> <tr> <td> Olive </td> <td> oljka </td> <td> Olea europea </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> Pumpkin for oil </td> <td> oljna buča </td> <td> Cucurbita pepo </td> <td> Other plants </td> <td> Row crops, interrow 2 m </td> <td> Pumpkin </td> </tr> </table> <table> <tr> <th> Winter spelt /vegetable pro prezimna pira / njivska ze Triticum spelta </th> <th> Cereals </th> <th> Winter cereals </th> <th> Winter spelt </th> </tr> <tr> <td> Rutabaga swede podzemna koleraba Brassica napus rapifera </td> <td> Other plants </td> <td> Row crop;interrow 40 cm </td> <td> Root crop </td> </tr> <tr> <td> Orange pomarančevec Citrus × sinensis </td> <td> Orchard </td> <td> Fruit </td> <td> Trees </td> </tr> <tr> <td> Area in owner transition površina v odstopu </td> <td> Unclassified </td> <td> Undefined </td> <td> </td> </tr> <tr> <td> not in use praha </td> <td> 
Unclassified </td> <td> Undefined </td> <td> </td> </tr> </table> Summer rapeseed jara oljna ogrščica Brassica napus var. napus Other plants Brassicaceae Summer rapeseed Winter rapeseed prezimna oljna ogrščica Brassica napus var. napus Other plants Brassicaceae Winter rapeseed Oil radish oljna redkev Raphanus sativus var. oleiformis Other plants Brassicaceae Brassicaceae Turnip rape oljna repica Brassica rapa subsp. oleifera Other plants Brassicaceae Brassicaceae Nuts oreh Juglans regia Orchard Fruit Trees Nuts oreh in kostanj Orchard Fruit Trees Summer oat jari oves Avena sativa Cereals Summer Cereals Summer oat Winter oat prezimni oves Avena sativa Cereals Winter cereals Winter oat Clover detelja Trifolium sp Legumes Fodder legumes Clover Summer Spelt jara pira Triticum spelta Cereals Summer Cereals Summer spelt 22 <table> <tr> <th> 010 </th> <th> proso </th> <th> Winter millet </th> <th> proso Panicum miliaceum </th> </tr> <tr> <td> 001 </td> <td> pšenica (jara) </td> <td> Summer wheat </td> <td> jara mehka pšenica Triticum aestivum </td> </tr> <tr> <td> 801 </td> <td> pšenica (ozimna) </td> <td> Winter wheat </td> <td> prezimna pšenica – mehk Triticum aestivum </td> </tr> <tr> <td> 734 </td> <td> rabarbara </td> <td> Rhubarb Radicchio </td> <td> rabarbara radič </td> <td> Rheum Cichorium intybus </td> </tr> <tr> <td> 047 </td> <td> radič </td> </tr> <tr> <td> 649 </td> <td> rakitovec </td> <td> Sea buckthorns </td> <td> rakitovec </td> <td> Hippophae </td> </tr> <tr> <td> 403 </td> <td> različna trajna zelišča </td> <td> Permanent herbs </td> <td> različna trajna zelišča </td> <td> </td> </tr> <tr> <td> 655 </td> <td> rdeči ribez </td> <td> Redcurrant </td> <td> rdeči ribez </td> <td> Ribes rubrum </td> </tr> <tr> <td> 036 </td> <td> riček </td> <td> Camelina </td> <td> navadni riček </td> <td> Camelina sativa </td> </tr> <tr> <td> 034 </td> <td> rjava indijska gorčica </td> <td> Brown mustard </td> <td> rjava indijska gorčica </td> <td> Brasica juncea </td> </tr> <tr> <td> 654 </td> <td> robida </td> <td> Blackberry </td> <td> robida </td> <td> Rubus fruticosus </td> </tr> <tr> <td> 662 </td> <td> robida x malina </td> <td> Tayberry </td> <td> robida x malina </td> <td> Rubus fruticosus x idaeus </td> </tr> <tr> <td> 048 </td> <td> rukola </td> <td> Arugola </td> <td> rukola </td> <td> Eruca sativa </td> </tr> <tr> <td> </td> <td> </td> <td> Mountain pine </td> <td> Ruševje </td> <td> Pinus mugo </td> </tr> <tr> <td> 998 </td> <td> Ruševje </td> </tr> <tr> <td> 002 </td> <td> rž (jara) </td> <td> Summer rye </td> <td> jara rž </td> <td> Secale cereale </td> </tr> <tr> <td> 802 </td> <td> rž (ozimna) </td> <td> Winter rye Fly honeysuckle Sorghum </td> <td> rž (ozimna) sibirska borovnica sirek </td> <td> Secale cereale Lonicera caerulea Sorghum bicolor </td> </tr> <tr> <td> 678 </td> <td> sibirska borovnica </td> </tr> <tr> <td> 024 </td> <td> sirek </td> </tr> <tr> <td> 737 </td> <td> sivka </td> <td> Lavender </td> <td> sivka </td> <td> Lavandula </td> </tr> <tr> <td> 677 </td> <td> skorš </td> <td> Sorb tree </td> <td> skorš </td> <td> Sorbus domestica </td> </tr> <tr> <td> 049 </td> <td> sladka koruza </td> <td> Sweet maize </td> <td> sladka koruza </td> <td> Zea mais </td> </tr> <tr> <td> 019 </td> <td> sladkorna pesa </td> <td> Root beet, Rutabage </td> <td> krmna pesa, koleraba </td> <td> Beta vulgaris subs. 
vulgaris; Brassica na </td> </tr> <tr> <td> 623 </td> <td> sliva/češplja </td> <td> Common plum, Flea Common fig Soybean Sunflower Mixture of summer wheat a Mixture of winter wheat an </td> <td> sliva/češplja smokva (figa) soja sončnice soržica (jara) soržica (ozimna) </td> <td> Prunus domestica Ficus carica Glycine max Helianthus annuus Triticum aestivum, Secale cereale Triticum aestivum, Secale cereale </td> </tr> <tr> <td> 647 </td> <td> smokva (figa) </td> </tr> <tr> <td> 030 </td> <td> soja </td> </tr> <tr> <td> 012 </td> <td> sončnice </td> </tr> <tr> <td> 021 </td> <td> soržica (jara) </td> </tr> <tr> <td> 821 </td> <td> soržica (ozimna) </td> </tr> <tr> <td> 116 </td> <td> sudanska trava </td> <td> Sudan grass </td> <td> sudanska trava </td> <td> Sorghum sudannense </td> </tr> <tr> <td> 680 </td> <td> šipek </td> <td> Dog-rose </td> <td> šipek </td> <td> Rosa canina </td> </tr> <tr> <td> 703 333 </td> <td> šparglji tehnično ali drugo sredstvo </td> <td> Asparagus </td> <td> šparglji </td> <td> Asparagus officinalis </td> </tr> <tr> <td> undefined </td> <td> tehnično ali drugo sredstvo </td> </tr> <tr> <td> 723 </td> <td> tobakovec </td> <td> Tobacco 2 times mowed meadows </td> <td> tobakovec 2x košen travnik </td> <td> Nicotiana tabacum </td> </tr> <tr> <td> 204 </td> <td> trajno travinje </td> </tr> <tr> <td> 505 </td> <td> trava - podsevek </td> <td> Grass asfter main crop </td> <td> </td> <td> </td> </tr> <tr> <td> 201 </td> <td> trave </td> <td> Grasses </td> <td> trave </td> <td> </td> </tr> <tr> <td> 200 </td> <td> trave za pridelavo semena </td> <td> Grasses for seed production </td> <td> </td> </tr> <tr> <td> 202 </td> <td> travna ruša (travni tepih) </td> <td> Grass roll Grass clover mixture Summer durum wheat </td> <td> mešanica trav in detelj jara trda pšenica </td> <td> Triticum turgidum var. durum </td> </tr> <tr> <td> 203 </td> <td> travnodeteljne mešanice </td> </tr> <tr> <td> 025 </td> <td> trda pšenica (jara) </td> </tr> <tr> <td> 825 </td> <td> trda pšenica (ozimna) </td> <td> Winter dorum wheat Summer triticale Winter triticala Vitis nursery Vineyard (establish) Hop rootstock Vineyard (establish) </td> <td> prezimna trda pšenica jara tritikala prezimna tritikala trsnice vinograd (ne mlad, ravno ukorenišče hmeljnih sadik vinograd (ne mlad, ravno </td> <td> Triticum turgidum var. durum Triticosecale Wittmack Triticosecale Wittmack Vitis vinifera Vitis vinifera Humulus lupulus Vitis vinifera </td> </tr> <tr> <td> 007 </td> <td> tritikala (jara) </td> </tr> <tr> <td> 807 </td> <td> tritikala (ozimna) </td> </tr> <tr> <td> 704 </td> <td> trsnice </td> </tr> <tr> <td> 706 </td> <td> trta za drugo rabo, ki ni vino ali namizno g </td> </tr> <tr> <td> 029 </td> <td> ukorenišče hmeljnih sadik </td> </tr> <tr> <td> 100 </td> <td> vinska trta </td> </tr> <tr> <td> 626 </td> <td> višnja </td> <td> Sour cherry </td> <td> višnja Prunus cerasus </td> </tr> <tr> <td> 210 </td> <td> volčji bob </td> <td> Sweet lupin Summer Poppy flower </td> <td> volčji bob Lupinus angustifolius jari mak Papaver somniferum </td> </tr> <tr> <td> 031 </td> <td> vrtni mak (jari) </td> </tr> <tr> <td> 831 </td> <td> vrtni mak (ozimni) </td> <td> Winter Poppy flower </td> <td> ozimni mak Papaver somniferum </td> </tr> <tr> <td> 736 </td> <td> vrtnice </td> <td> Rose Annual ryegrass </td> <td> vrtnice Rosa spp. westerwoldska ljuljka Lolium multiflorum Lam. var. 
westerwo </td> </tr> <tr> <td> 117 </td> <td> westerwoldska ljuljka </td> </tr> <tr> <td> 402 </td> <td> zelenjadnice </td> <td> Field vegetable production pridelava zelenjave na njivi mešanica </td> </tr> <tr> <td> 618 </td> <td> žižula </td> <td> Jujube red date Žižula Ziziphus zizyphus </td> </tr> </table> Cereals Winter cereals Winter cereals Cereals Summer Cereals Cereals Winter cereals Winter cereals Vegetable Vegetable Rhubarb Vegetable Vegetable Radicchio Other plants Trees Sea buckthorns Vegetable Mixed plants Orchard Soft fruit Bushes in row Other plants Other plants Camelina Other plants Brassicaceae Orchard Soft fruit Bushes in row Orchard Soft fruit Bushes in row Vegetable Vegetable Trees Trees Cereals Summer Cereals Summer rye Cereals Winter cereals Winter rye Orchard Soft fruit Bushes in row Other plants Row crop;interrow 60 cm Maize Other plants Other plants Lavender Other plants Trees Maize Maize Maize Other plants Row crop;interrow 70 cm Row crop;interrow 70 cm Orchard Fruit Trees Orchard Fruit Trees Legumes Grain legumes Soybean Other plants Other plants Sunflower Cereals Summer Cereals Cereals Winter cereals Other plants Other plants Maize Other plants Other plants Bushes in row Vegetable Vegetable Asparagus Unclassified Undefined Other plants Other plants Grass Grass Other plants Mixed plants Grass Grass Grass Fooder Grass Grass Grass Grass Grass Grass Grass Cereals Summer Cereals Cereals Winter cereals Cereals Summer Cereals Cereals Winter cereals Winter cereals Other plants Vineyard Vineyard Vineyard Other plants Other plants Hop Vineyard Vineyard Vineyard Orchard Fruit Trees Legumes Fodder legumes Other plants Other plants Summer Poppy flower Other plants Other plants Winter Poppy flower Other plants Other plants Rose Grass Grass Annual ryegrass Vegetable Mixed plants Orchard Fruit Trees 23
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0371_GAP_700670.md
# Executive Summary

The GAP data policy document sets out our principles and agreed practices for managing and sharing the data produced by the GAP project. Our goal is to ensure our research data is "findable, accessible, interoperable and reusable" (FAIR) and soundly managed. We are guided by the principle that "good data management is not a goal in itself, but rather the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and reuse." 1

Our principles are described in relation to the different types of data collected for the GAP project, and our approach to data sharing, balanced with maintaining anonymity for data subjects, is presented. There are two rounds of interview data produced in GAP: in WP3 and WP5. Access to data collected in WP3 [interviews with personnel on their past experiences of deployment on CPPB missions] will be restricted, as "open access to data does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply." 2 Access to the interview data produced in WP5, during the evaluation of the game, will be open access, as will the data produced by the players within the game.

The DMP will support the management life-cycle of all data collected, processed or generated by GAP. It will cover:

1. what data GAP will generate
2. whether and how it will be made accessible
3. how it will be maintained and preserved.

The DMP will be updated and completed (i.e. become more precise) as GAP evolves. New versions of the DMP will be created whenever important changes to GAP occur (due to inclusion of new data sets, changes in consortium policies or external factors).

This document, following the Horizon 2020 FAIR Data Management Plan (DMP) template suggested by the European Commission, outlines our approach to data sharing within the limits of ensuring privacy for GAP's research participants. Further information on our policies on data collection, storage, protection, retention and destruction, on compliance with national and EU legislation, and on incidental findings may be found in the GAP Data Policy document and the GAP Incidental Findings document respectively. At every stage, the GAP management and consortium will ensure the Data Management Plan is in line with the norms of the EU and Commission [as expressed in the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679)] and will promote best practice in data management.

# 1. Introduction

Gaming for Peace (GAP) is a project funded through the European Commission Horizon 2020 Secure Societies programme. GAP will collect deployment experiences from relevant civilian and military informants in Europe (WP3). These will then be used, in conjunction with a review of current state-of-the-art training, to develop a curriculum in soft skills (communication, cooperation, gender awareness and cultural competency), and this curriculum will be embedded in an immersive online role-playing game. Under WP5, there will be several rounds of evaluation of the curriculum and game, producing interview data from police, military and civilian personnel and player metrics for those same individuals. The final version of the curriculum and game will be used by those being deployed in Conflict Prevention & Peace Building (CPPB) roles.
GAP supports the four key principles of FAIR (Findability, Accessibility, Interoperability and Reusability) and in its data management plan aims to go beyond proper collection, annotation and archival to include "the notion of 'long-term care' of valuable digital assets, with the goal that they should be discovered and re-used for downstream investigations, either alone, or in combination with newly generated data." 3

We adopt the definition of 'digital research data' as information in digital form (in particular facts or numbers), collected to be examined and used as a basis for reasoning, discussion or calculation; this includes statistics, results of experiments, measurements, observations resulting from fieldwork, survey results, interview recordings and images. 4

## "As open as possible, as closed as necessary"

The GAP project was reviewed by the European Commission's ethics reviewers in December 2015. In this review the issue of anonymity of interview and game participants was raised. As the pool of research participants has very defined parameters, the ethics reviewers expressed the opinion that ensuring anonymity of the participants would be a challenge. Following this assessment, it was agreed with the Commission Project Officer that GAP should not participate in the Open Research Data pilot. The ORD pilot aims to improve and maximise access to and re-use of research data generated by Horizon 2020 projects, taking into account the need to balance openness with protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns, security, and data management and preservation questions. While open access to research data becomes applicable by default in Horizon 2020, the Commission also recognises that there are good reasons to keep some or even all research data generated in a project closed, as is the case for the first-round (WP3) interviews in GAP. However, the data generated in the second round of interviews (WP5), which comprises interviews with players evaluating the game, will be open access, as the likelihood of identifying information will be minimal: the focus of these interviews is on the experience of playing the game.

# 2. Data Summary

## 2.1 Purpose of the data collection/generation and its relation to the objectives of the project & types and formats of data

GAP will gather interview and game play data primarily from the following project partners in two rounds of data gathering:

* Police Service of Northern Ireland (PSNI)
* Finnish Defence Forces (FINCENT)
* Armed Forces of the Republic of Poland (via NDUW)
* Polish National Police (WSpol)
* Bulgarian Defence Force (BDI)
* Portuguese State Police (PSP)

Data will also be gathered from members of the project's 'End-user Advisory Board', which primarily comprises NGO and voluntary groups working in the CPPB field. In the first round of data gathering (WP3), this is to be combined with state-of-the-art knowledge in training personnel for CPPB missions to build realistic scenarios which capture the defined learning objectives of GAP. Personnel from the same organizations will be interviewed in the second round of data gathering (WP5), the purpose of which is to gather data on the experience of playing the game. Metrics from data generated by the players actually playing the game will also be collected and analysed to ascertain the degree and kind of learning achieved in the game.
**Table 1: Types of data to be collected throughout the project**

1. **Interview data:**
   * _Work Package 3 (WP3) interviews_: Interviews with participants (drawn from end-user partner organizations and end-user advisory board member organizations) on their experience of deployment in CPPB missions. The data from these interviews will contribute to the development of game-play storylines, ensuring that the game is based on realistic scenarios. Interview data will also be analysed using sociological methods and theories to produce journal publications and will be the subject of analysis for PhD theses funded by GAP.
   * _Work Package 5 (WP5) interviews_: In the evaluation phase of GAP, there will be pre- and post-gameplay interviews. Participants will evaluate the game, both from a computer-user interaction perspective and from the perspective of impact on their organizational role and other organizational roles in CPPB missions. These interviews will focus on their experience of the game and their interpretation of and response to the various scenarios. This data will be used to modify the game-play experience and the curriculum, and will be interrogated to contribute to sociological and psychological understandings of CPPB training and experiences.
2. **Game play data:** The performance of test players will be recorded in the game and a data set will be produced from this. Test players will again be sought from the participating armies and police services, with additional testers sought from organisations in the End-user Advisory Board. The GAP game will require players to take on different roles within the game than they hold in real life (e.g. a male member of the army could play the role of a female NGO worker). The data produced will be used to study the degree and kind of learning, including the achievement of various soft skills (communication, cooperation, gender awareness, cultural competency) by the players. The data will be used to modify the game-play experience and the curriculum as necessary, and will also provide the basis for a 'skills passport' that will standardize results and thus enable game players to document their achievement of targeted soft skills.

## 2.2 Re-use of existing data

GAP will not reuse existing data.

## 2.3 Origin of data

GAP will produce original research data from two rounds of interviews carried out in six countries, and from play metrics within the subsequently developed game [see 2.1 above].

## 2.4 Expected size of the data

WP3 interviews: 15-30 interviews will be carried out in each country, lasting 45 minutes to one hour and producing approximately 30 typed pages of transcript each. Across the six countries this amounts to roughly 90-180 interviews x 30 pages = 2,700-5,400 pages of transcript.

WP5 interviews: 15-30 interviews will be carried out in each country, lasting 15-30 minutes pre-game play and 30-45 minutes post-game play and producing approximately 30 typed pages of transcript each. This again amounts to roughly 90-180 interviews x 30 pages = 2,700-5,400 pages of transcript.

Game data: The exact size of the gameplay database will depend on the specific assessment metrics produced by WP4.2, but it is not expected to exceed 1 GB.
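These figures follow directly from the stated sampling parameters; the short calculation below reproduces them (a minimal sketch, with the per-interview page count treated as the rough estimate stated above):

```python
# Back-of-the-envelope check of the expected transcript volume for WP3/WP5.
countries = 6
interviews_per_country = (15, 30)   # stated lower and upper bounds
pages_per_interview = 30            # rough estimate from the DMP

for n in interviews_per_country:
    total = countries * n
    print(f"{total} interviews x {pages_per_interview} pages = {total * pages_per_interview} pages")
# 90 interviews x 30 pages = 2700 pages
# 180 interviews x 30 pages = 5400 pages
```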
## 2.5 To whom is this data useful?

This data will be useful to academics, practitioners (end users) and policymakers. It will be useful to researchers who are interested in the development of serious games, specifically games as a tool for learning, and to researchers who aim to understand the acquisition of specific soft skills, particularly but not only those skills as used by CPPB personnel. It will be useful to end users (organizations such as the end-user partners in the GAP Consortium) who can use the data to gain a greater understanding of how to provide and deliver a curriculum of appropriate soft skills to personnel being deployed in CPPB.

# 3. FAIR Data

## 3.1 Making data findable

Our policy is guided by the three steps for open access to digital research data outlined in Annex 1:

Step 1 — Deposit the digital research data, preferably in a research data repository.

Step 2 — Provide open access by taking measures to enable users to access, mine, exploit, reproduce and disseminate the data free of charge (e.g. by attaching a 'creative commons licence' (CC-BY or CC0 tool) to the data). Open access does not have to be given immediately: for data needed to validate the results presented in scientific publications, it should be given as soon as possible; for other data, beneficiaries are free to specify embargo periods for their data in the data management plan (as appropriate in their scientific area).

Step 3 — Provide information, via the repository, about tools and instruments for validating the results.

### 3.1.1 Naming conventions

GAP interview data will follow the naming convention (interview number_initials of interviewer_date of interview_GAP name and project number). Thus an example of a file name would be "Int1_RB_10.1.2017_GAP 700670". The game play dataset will be labelled as such, with metadata providing more background information on the data.

### 3.1.2 Search keywords

Metadata will be included with each item to be deposited in the open access repository (see section 3.2.2 for details on the repository). This will allow for keyword searching.

### 3.1.3 Metadata

Each item to be included in the open access repository will include metadata giving a background explanation of the GAP project, and specifics on how the data was created and the data format. We will follow the guidelines set out in the TARA repository (see section 3.2.2).

## 3.2 Making data openly accessible

### 3.2.1 Which data produced will be made openly available

Anonymised WP5 interview transcripts and game play data will be available for open access sharing. WP5 interviews concern the participants' experience of the game and are likely to contain less sensitive information than WP3 interviews (although identifying information or vignettes may be present). WP5 interview transcripts will be checked to ensure anonymity by the responsible team member at each organisation (see table one of the GAP Data Policy document for further details) before wider release to the Consortium. Any identifying details or vignettes remaining after anonymization will be redacted before release to the wider Consortium. Game players will be given an identifying number, with their corresponding identifying details encrypted and kept in a secure location by project partner Haunted Planet Studios (HPS). Prior to depositing, email addresses used for creating game accounts, hashed passwords, all IP addresses logged during play and any chat logs that may be created will be removed.
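The pre-deposit scrubbing just described can be pictured as follows. This is a minimal sketch only: the field names and the salted-hash scheme are illustrative assumptions, not the project's actual game-data schema.

```python
import hashlib

# Direct identifiers that must never reach the open repository (section 3.2.1):
# account e-mail, hashed password, logged IP addresses and chat logs.
DIRECT_IDENTIFIERS = {"email", "password_hash", "ip_addresses", "chat_log"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Return a copy of a game-play record keyed only by an opaque player number."""
    scrubbed = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Stable, opaque player number; the salt is kept securely by HPS, so the
    # mapping back to a person cannot be recovered from the deposited data alone.
    scrubbed["player_id"] = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    return scrubbed
```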
As game play data can be fully anonymised, and players will be taking on a role different from their role in real life, the ability to identify game players will be severely circumscribed; this data will therefore be suitable for depositing in the TARA repository.

WP5 interview data to be deposited will include:

* Anonymised interview transcripts from pre- and post-game play interviews.

Game data to be deposited will be:

* Description of the scenarios and the soft-skills learning goals associated with these
* Anonymised quantitative data on actions taken within the game, broken down by scenario

WP3 interviews concern the real experience of CPPB by members of the participating police services and armies (listed in section 1.1) plus participating NGO and civil society members. Due to the sensitive nature of the data collected via interview, and the risk of interviewee identification, interview data on the interviewees' experience of CPPB activities will not be included in the open access repository. Rather, WP3 interviews will be made available for sharing on a case-by-case basis following a request to the GAP PI. Anonymised WP3 interview transcripts will be available to other researchers on request. Requests will be reviewed by the GAP PI and, if the request is deemed appropriate, anonymised transcripts, with identifiable vignettes redacted, will be supplied.

As per Article 38 of the Grant Agreement (Visibility of EU funding), any dissemination of results (in any form), even when combined with other data, must include the reference to EU funding set out in the GA.

### 3.2.2 Method for data accessibility

Open access means taking measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate data via a research data repository. A 'research data repository' means an online archive for research data; this can be subject-based/thematic, institutional or centralised. The GAP project will use Trinity's Access to Research Archive (TARA) to make the above-outlined data openly accessible. TARA is designed to store, index, distribute and preserve the digital materials of Trinity College Dublin. Content, deposited directly by Trinity faculty and staff, may include research papers, photographs, videos, theses, conference papers, or other intellectual property in digital form. The content is then distributed through a searchable web interface. TARA uses DSpace open source software, developed by MIT and Hewlett Packard. Data uploaded to TARA requires metadata attached to each item. This metadata entails descriptive information about an item that allows it to be found via keyword searching. Thus each item of GAP data will include metadata.

### 3.2.3 Methods or software tools needed to access the data

TARA is accessible from a standard web browser.

## 3.3 Making data interoperable

### 3.3.1 Interoperability of data produced

TARA has a predefined list of software packages that it supports. These software types are 'recognised' by TARA and will be maintained as such by the system on an ongoing basis and if/when the content is exported or moved, or the server is changed. Qualitative interview data will be provided in Microsoft Word, which is a package 'recognised' in TARA. All transcripts will be provided in English and will follow a standard transcript template developed by the research team. Game play data will be provided in machine-readable eXtensible Markup Language (XML) format, which is also supported in TARA.
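By way of illustration, a single game-play record might be serialised along the following lines. The element names here are assumptions for illustration only; the actual XML application will be defined around the assessment metrics produced in WP4.2.

```python
import xml.etree.ElementTree as ET

# Illustrative game-play record; the real element names will follow the XML
# application defined for the WP4.2 assessment metrics.
session = ET.Element("session", scenario="checkpoint_negotiation", player="a3f9c21b04e7")
action = ET.SubElement(session, "action", timestamp="00:04:12")
ET.SubElement(action, "skill").text = "communication"
ET.SubElement(action, "choice").text = "de-escalate"
ET.SubElement(action, "score").text = "0.8"

print(ET.tostring(session, encoding="unicode"))
```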
Supporting data on the scenarios that the data relates to will also be supplied in XML format.

### 3.3.2 Standard vocabularies

In the conduct of qualitative interviews it is likely that technical terms, specific to conflict prevention and peace building, may be used. In this case, we will provide a mapping of these terms to accompany the transcripts. The gameplay data will be expressed in an XML application designed specifically to reflect the metrics produced in WP4.2. A Document Type Definition (DTD) will be provided, which will help make the XML data set fully machine-readable, searchable and interoperable.

## 3.4 Increase data re-use

By depositing WP5 interview and game play data in the TARA repository, this data will be made freely available to anyone via the internet. TARA provides a persistent web link that remains constant and allows anyone worldwide access over the internet. No licence is required to access the deposited data. WP3 data, shared if appropriate, will be provided in a standardised PDF format. The PI will liaise with those accessing the data about format usability, while ensuring the security of the data.

### 3.4.1 When will the data be made available for re-use?

Data collected during the project will be used for two purposes: 1) creation of the training game and 2) academic publication. We will embargo release of the data until both of these objectives have been achieved. The anticipated time period for depositing the data into the TARA repository is late 2019.

### 3.4.2 How long is it intended that the data remains re-usable?

The TARA repository provides long-term stewarded preservation of deposited materials; no specific time limit will be put on the use of data deposited in TARA.

# 4. Allocation of resources

By using the TARA repository, we are able to avail of a service provided, at no additional charge, to TCD academics. Thus, additional funding for making GAP data open access is not required.

## 4.1 Responsible persons

The GAP Principal Investigator (Anne Holohan) will be responsible for ensuring that interview transcripts and game data are deposited into TARA. Once deposited, the TCD library team, who manage the TARA repository, will be responsible for the long-term storage of the GAP data.

# 5. Data security

All data deposited into TARA is backed up on the TCD servers. The GAP Principal Investigator will also ensure that GAP data is stored on departmental servers.

# 6. Ethical Aspects

As discussed at the start of this document, ethics concerns mean that we cannot have full open-access sharing of all data to be produced by the GAP project. GAP ethics documents outline: 1) our policy on data collection, storage, protection, retention and destruction and compliance with national and EU legislation, and 2) our Incidental Findings policy. In brief, our policies on the above are as follows:

* Data collection: Data will be collected in WP3 and WP5 via interview and game testing. All participating individuals will be provided with information sheets on the project and asked to sign consent forms. Via these we will ensure that our policies on handling, storing and retaining their data, their right to withdraw and our policy on incidental findings are understood by the study participants. Interviews and game play may involve the same individuals or be conducted with separate people. In either scenario, separate consent will be sought before interviews and game evaluation.
* Data storage: Interview data will be immediately transferred off recording devices and encrypted. Access to un-anonymised data will be strictly limited and only anonymised data will be transferred amongst consortium members. Consent forms will be kept in a secure location, separate from transcripts and game play data.
* Data protection: The content of data produced by the GAP project will be specified, and we will provide copies of appropriate authorizations according to the legal requirements of the area where the research is planned to take place. This includes all partner institutional/college ethical approval committees, operating under the auspices of EU regulations on Data Protection and Privacy, notably the Data Protection Directive (Directive 95/46/EC) and the General Data Protection Regulation (Regulation (EU) 2016/679).
* Data destruction: Original recordings will be stored for the duration of the project (2.5 years); after this time they will be irreversibly destroyed by overwriting the file with other sound (see the sketch at the end of this document). The consent forms will be stored at a different location to the transcripts and recordings for the duration of the project. Then they will all be destroyed by shredding of hard copies (informed consent forms) and irreversible overwriting of soft copies. Those who collected original recordings in each jurisdiction will be responsible for destruction of the aforementioned material in this manner at the end of the project.
* Incidental findings: Should interview data result in an incidental finding (such as reports of illegal/prohibited behaviour or reports of PTSD), the research team will follow the escalation/reporting protocol (set out in the GAP Incidental Findings policy document). In the case of illegal/prohibited behaviour, this will be reported in the first instance to the appropriate person in the participant's organisation, who will then escalate this to an outside agency if appropriate. In the case of mental health issues, the research team will direct the participant to support services both inside and outside their organisation.

Full details of these policies are available in the GAP Data Policy document and the GAP Incidental Findings document.

## 6.1 Informed consent

GAP will comply with Article 39.2 of the grant agreement: GAP's open data policy will be outlined in information sheets and consent forms for participants in pre- and post-game interviews. Game players will also receive information and consent forms that inform them how the data will be used.

# 7. Timetable for Updates

The DMP will be updated at a minimum at Month 12 and Month 28, after the scheduled Ethics Committee meetings, but also whenever important changes to GAP occur (due, for example, to inclusion of new data sets, changes in consortium policies or external factors).
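The overwrite pass referred to in the data destruction bullet could look like the following minimal sketch. Note, as a caveat, that on journaling file systems and SSDs an in-place overwrite is not guaranteed to erase every physical copy, so the storage medium must be taken into account.

```python
import os

def overwrite_and_delete(path: str) -> None:
    """Overwrite a recording in place with random bytes, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        fh.write(os.urandom(size))   # replace the audio content with noise
        fh.flush()
        os.fsync(fh.fileno())        # force the overwritten bytes to disk
    os.remove(path)
```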
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0376_EDC-MixRisk_634880.md
**All data access by third parties will be subject to an application** within _EDC-MixRisk_, which will be examined by the management of the consortium (Project Steering Committee). In evaluating these applications, the Project Steering Committee will also consider whether to involve the Ethics Advisory Board of the project in its decision-making process. In addition, beneficiaries have agreed to (i) grant the power of veto for the release of any data to those who produce them, (ii) release all data produced by the end of the project, and (iii) leave open the possibility that data produced by one beneficiary may be released upon its request.

The **Controlled Access** model guarantees that data potentially leading to re-identification of donors can only be accessed and used by researchers that are granted permission and the necessary credentials by the management of _EDC-MixRisk_. This ensures that the data are used only for purposes that are examined in advance by _EDC-MixRisk_, and ensures the project's compliance with the highest EU ethical standards, those currently followed by other large projects and funding institutions (e.g. the Cancer Genome Atlas, the International Human Epigenome Consortium), as well as with cutting-edge bioethical standards for collections of biological samples and/or data (see Mascalzoni et al. 2014). The same kind of controlled access will be extended to the sharing of **specimens**, by ensuring that all members or partners collecting and accessing samples comply with the same standards.

Access to _EDC-MixRisk_'s database will be conditional upon acceptance of a number of requirements granting **full compliance** with national and EU legislation (Directive 95/46/EC, Directive 2004/23/EC) protecting the privacy and autonomy of patients/donors, as well as committing the recipient of the data not to attempt to re-identify sample donors. In future revisions of the DMP, beneficiaries will also consider including a specific requirement against 'dual use' for access to _EDC-MixRisk_'s data.

The project management and Beneficiary 11 have already started to collect all relevant documentation regarding the ethical approval of the experimental procedures within _EDC-MixRisk_. As of today, the following documentation has been received and is currently under review:

1. Ethics approvals from Beneficiary 1 (KI) on the use of foetal tissues and ovarian cells, including informed consent forms if relevant (in Swedish only);
2. Ethics approvals from Beneficiary 2 (KU) regarding the Selma cohort, including informed consent forms and authorisations for processing personal data;
3. Ethics approval from Beneficiary 6 (UU) for the experimental part of their subprojects;
4. Ethics approval from Beneficiary 8 (ULEI) regarding the LIFE Child cohort;
5. Ethics approval from Beneficiary 9 (UoA) regarding animal experiments;
6. Ethics approvals from Beneficiary 11 (IEO) on the use of hESC (delivered upon finalization of the grant agreement).

The completion and delivery of the following documents is still being finalized:

1. Additional material regarding the ethical approvals from Beneficiary 8 (ULEI) for the LIFE Child study cohort, including informed consent forms and authorisations for processing personal data, as well as biobanking;
2. Copies of authorisations for animal experiments from Beneficiaries 3 (UGOT) and 7 (CNRS).
Copies of authorisations for the supply of animals and the animal experiments, as well as copies of training certificates/personnel licences of the staff involved in animal experiments, should shortly be delivered to project management.

# Future tasks

As recognised by the H2020 Guidelines, the DMP should also provide a detailed description of data flows within the project on a dataset-by-dataset basis, and should reflect the current state of affairs within the consortium regarding the data that are produced and shared. The **Controlled Access** model adopted by _EDC-MixRisk_ requires the full development of an infrastructure able to grant the foreseen level of control over the datasets (and their access), by establishing credentials for all beneficiaries and potential external collaborators (both present and future), as well as devising suitable strategies for periodic decision-making by project management over the granting of authorizations.

The amount and sensitivity of data and samples that will be used by _EDC-MixRisk_ require careful consideration and specific attention to the procedures for their collection, sharing and management. In order to facilitate the evolution of data infrastructures in _EDC-MixRisk_, so as to streamline data flow within the project while complying with the H2020 Guidelines, we list a number of tasks and questions for project management and beneficiaries:

1. Establish dataset reference and name: project beneficiaries should provide the unique identifiers for the datasets (to be) produced.
2. Provide dataset description: describe the data that will be generated or collected, their origin, nature and scale, to whom they could be useful, whether they are codified (if yes, describe the codification process), and whether beneficiaries intend to use them in a scientific publication. We envisage that the types of data produced by the project will include:
   1. general patient information such as gender and age, and clinical data (e.g. potential diagnosis, standard biomarkers across the two cohorts),
   2. sample identifiers,
   3. gene expression data (at various levels of coverage depth, and hence with different implications for re-identification etc.),
   4. genome or exome sequence files,
   5. epigenetic data (DNA methylation, histone modifications, non-coding RNAs).
3. Assess potential dataset integration: are project beneficiaries aware of the existence (or not) of similar data and of the possibilities for their integration with those produced in _EDC-MixRisk_?
4. Describe standards and metadata: provide references to existing suitable standards for datasets in the relevant discipline.
5. Design and implement the data sharing infrastructure: as of today, _EDC-MixRisk_ does not have a common infrastructure for data sharing. Upon establishment of this common platform, the management of the project should update the DMP by including information about:
   1. access procedures,
   2. outlines of technical mechanisms for transfer, sharing and dissemination,
   3. necessary software(s) and other tools for enabling use,
   4. the definition of the controlled access application form,
   5. identification of the repository where data will be stored, and the type of repository, e.g. the Science in Risk Assessment and Policy (SciRap) and Information Platform for Chemical Monitoring (IPChem) databases.
6. Define strategies for archiving and preservation (including storage and backup): describe the procedures that will be put in place for long-term preservation of the data; indicate the type of data that beneficiaries foresee for inclusion in a publication and that should thus be preserved, the approximate final volume of the datasets, the associated costs for the project and how these are planned to be covered.
7. Ensure interoperability of data and quality standards: provide a strategy for the potential standardisation of data in the project, such as the adoption of common codifications or software(s), for the sake of allowing data exchange between researchers, institutions and organisations within and outside the project.

The accomplishment of these technical tasks will enable the establishment of the data flow within _EDC-MixRisk_, fulfilling the DMP's function of implementing data quality, sharing and security within a Controlled Access policy; a minimal illustration of how such a policy could be encoded is sketched below.
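The controlled-access rules agreed by the beneficiaries (Project Steering Committee review, producer veto, and credentials for approved researchers) could be encoded along the following lines. This is a sketch under stated assumptions: the field names are hypothetical and do not describe the project's actual infrastructure.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    dataset_id: str              # unique identifier (task 1)
    producer: str                # beneficiary who generated the data
    description: str             # origin, nature and scale (task 2)
    reidentification_risk: bool  # e.g. deep-coverage sequence data

@dataclass
class AccessRequest:
    applicant: str
    purpose: str                 # examined in advance by the project
    credentials_valid: bool      # credentials issued by project management
    producer_approval: bool      # the data producer holds a power of veto

def review(request: AccessRequest, dataset: Dataset, psc_approval: bool) -> bool:
    """Controlled Access decision: PSC approval, producer veto, credentials."""
    if dataset.reidentification_risk and not request.credentials_valid:
        return False
    return psc_approval and request.producer_approval
```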
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0378_Planheat_723757.md
# Introduction

The present deliverable "D7.1: Data Management Plan" has been developed in the framework of WP7 activities ("Project Coordination and Management") under the responsibility of D'Appolonia. The purpose of the present document is to outline a preliminary strategy for the management of the data generated in the framework of PLANHEAT project activities. Procedures for the management of research data, energy resource databases, urban maps and scientific publication data will be addressed. The management policy will be defined fully in compliance with the open access principles adopted by the European Commission and enforced through the Grant Agreement.

The project recognizes the value of regulating research data management issues. Accordingly, in line with the rules laid down in the Model Grant Agreement, the beneficiaries will deposit the underlying research data needed to validate the results presented in the deposited scientific publications in a clear and transparent manner. Furthermore, the beneficiaries have agreed, after a careful assessment of the types of data that will be collected or become available within the project, to take part in the Pilot Action on Open Research Data, to which the Work Programme Societal Challenge: Energy Efficiency has adhered.

An overview of Open Access, and in particular of the Open Research Data Pilot, will be given, and different repositories will be investigated in order to find the most appropriate modality for ensuring open access to discoverable data and scientific publications generated throughout the project lifecycle. Although the document is due at M6 and project activities are only beginning, a tentative description of the expected datasets has been carried out, anticipating which data will be kept confidential and which data will instead be made available during project development. It is important to highlight that this Data Management Plan will be updated at each reporting period, as agreed by the whole Consortium.

# Open Access

Open access can be defined as the practice of providing online, free-of-charge access to scientific information related to project outcomes. In the context of R&D, "scientific information" mainly refers to:

* peer-reviewed scientific research articles, where project results are disseminated in academic journals (as with the PLANHEAT project);
* scientific research data, meaning not only data underlying the aforementioned scientific publications, but also any other data related to project activities, whether processed or raw.

Although there are no legally binding definitions of open access, authoritative definitions appear in key political declarations such as the _2002 Budapest Declaration_ and the _2003 Berlin Declaration_. Under these definitions, "access" includes the right to read, download and print, but also to copy, distribute, search, link, crawl and mine the former data, provided that obligations to confidentiality, security and protection of personal data are ensured and the achievement of PLANHEAT objectives, including the future exploitability of results, is not jeopardized.

Open access is not a requirement to publish, but it is seen by the European Commission as an approach to facilitate and improve the circulation of information in the European research area and beyond.
Open access to some data generated in projects funded by the European Commission is key to lowering barriers to accessing publicly funded research, as well as to demonstrating and sharing the potential of research activities supported with the help of public funding. In the framework of the PLANHEAT project, giving open access to data such as public (or disclosable) urban heating and cooling demand/supply data, weather data, urban heat island maps, vector maps of local urban infrastructures, emission factors, databases of costs and efficiencies of supply technologies, etc., could broaden the possibility for researchers to develop new knowledge, and could help public authorities involved in urban planning activities (not only in the energy field) to improve their analyses and planning capacity, a fundamental step towards a structured development of the city of the future.

## Open Access in the Model Grant Agreement

The importance given by the European Commission to the open access issue is clearly outlined in the PLANHEAT Grant Agreement. In particular, Articles 29.2 and 29.3 state the responsibilities of beneficiaries and the actions to be undertaken in order to ensure open access to scientific publications and to research data respectively. The text of the aforementioned articles is reported below.

**Article 29.2:** _Open access to scientific publications_

Each beneficiary must ensure open access (free of charge online access for any user) to all peer-reviewed scientific publications relating to its results. In particular, it must:

(a) as soon as possible and at the latest on publication, deposit a machine-readable electronic copy of the published version or final peer-reviewed manuscript accepted for publication in a repository for scientific publications; moreover, the beneficiary must aim to deposit at the same time the research data needed to validate the results presented in the deposited scientific publications.

(b) ensure open access to the deposited publication — via the repository — at the latest:

1. on publication, if an electronic version is available for free via the publisher, or
2. within six months of publication (twelve months for publications in the social sciences and humanities) in any other case.

(c) ensure open access — via the repository — to the bibliographic metadata that identify the deposited publication. The bibliographic metadata must be in a standard format and must include all of the following:

* the terms "European Union (EU)" and "Horizon 2020";
* the name of the action, acronym and grant number;
* the publication date, and length of embargo period if applicable; and
* a persistent identifier.

**Article 29.3:** _Open access to research data_

Regarding the digital research data generated in the action ('data'), the beneficiaries must:

(a) deposit in a research data repository and take measures to make it possible for third parties to access, mine, exploit, reproduce and disseminate — free of charge for any user — the following:

(i) the data, including associated metadata, needed to validate the results presented in scientific publications as soon as possible;

(ii) other data, including associated metadata, as specified and within the deadlines laid down in the 'data management plan' (see Annex 1);

(b) provide information — via the repository — about tools and instruments at the disposal of the beneficiaries and necessary for validating the results (and — where possible — provide the tools and instruments themselves).
This does not change the obligation to protect results in Article 27, the confidentiality obligations in Article 36, the security obligations in Article 37 or the obligations to protect personal data in Article 39, all of which still apply. As an exception, the beneficiaries do not have to ensure open access to specific parts of their research data if the achievement of the action's main objective, as described in Annex 1, would be jeopardised by making those specific parts of the research data openly accessible. In this case, the data management plan must contain the reasons for not giving access.

The confidentiality aspects have been duly taken into account in the preparation of this document in order not to compromise the protection of project results and the legitimate interests of project partners.

## Open Access Research Data Pilot

Horizon 2020 has launched an **Open Research Data Pilot (ORDP)** aiming at improving and maximising access to and re-use of research data generated by projects (e.g. from experiments, simulations and surveys). These data are typically small sets, scattered across repositories and hard drives throughout Europe. The success of the EC's Open Data Pilot is therefore dependent on support and infrastructures that acknowledge disciplinary approaches at institutional, national and European levels. The pilot is an excellent opportunity to stimulate and nurture the data-sharing ecosystem and has the potential to connect researchers interested in sharing and re-using data with the relevant services within their institutions (library, IT services), data centres and data scientists. The pilot should serve to promote the value of data sharing to both researchers and funders, as well as to forge connections between the various players in the ecosystem.

Projects starting from January 2017 are by default part of the Open Data Pilot. Projects started before, but belonging to one of the following Horizon 2020 areas, are automatically part of the pilot as well:

* Future and Emerging Technologies
* Research infrastructures (including e-Infrastructures)
* Leadership in enabling and industrial technologies – Information and Communication Technologies
* Nanotechnologies, Advanced Materials, Advanced Manufacturing and Processing, and Biotechnology: 'nanosafety' and 'modelling' topics
* Societal Challenge: Food security, sustainable agriculture and forestry, marine and maritime and inland water research and the bio-economy – selected topics in the calls H2020-SFS-2016/2017, H2020-BG-2016/2017, H2020-RUR-2016/2017 and H2020-BB-2016/2017, as specified in the work programme
* Societal Challenge: Climate Action, Environment, Resource Efficiency and Raw Materials – except raw materials
* Societal Challenge: Energy Efficiency
* Science with and for Society
* Cross-cutting activities – focus areas – part Smart and Sustainable Cities

The PLANHEAT project recognizes the value of regulating research data management issues. Accordingly, in line with the rules laid down in the Model Grant Agreement, the beneficiaries will deposit the underlying research data needed to validate the results presented in the deposited scientific publications in a clear and transparent manner. Furthermore, the beneficiaries have agreed, after a careful assessment of the types of data that will be collected or become available within the project, to take part in the Pilot Action on Open Research Data, to which the Work Programme Societal Challenge: Energy Efficiency has in any case adhered.
The PLANHEAT project falls under the categories listed above, since it envisages the development of a tool to empower local authorities' capabilities to plan their future low-carbon heating and cooling scenarios; consequently, several modelling activities and data research have to be performed in order to demonstrate the effectiveness of the results. Therefore, the consortium has committed itself to respecting the provisions that taking part in the ORDP implies, targeting the possibility of giving open access to different kinds of data coming from the development and proper use of the PLANHEAT modules (mapping, planning, simulation), as described in Fig. 1.

### Fig. 1 – PLANHEAT Open Access strategy

**2.2.1 Enabling projects to register, discover, access and re-use research data**

The Open Research Data Pilot aims at supporting researchers in the management of research data throughout their whole lifecycle, providing answers to key issues such as "what", "where", "when", "how" and "who". 1

**WHAT**: The Open Data Pilot covers all research data and associated metadata resulting from EC-funded projects, if they serve as evidence for publicly available project reports and deliverables and/or peer-reviewed publications. To support discovery and monitoring of research outputs, metadata have to be made available for all datasets, regardless of whether the dataset itself will be available in Open Access. Data repositories might consider supporting the storage of related project deliverables and reports, in addition to research data.

**WHERE**: All research data have to be registered and deposited into at least one open data repository. This repository should: provide public access to the research data, where necessary after user registration; enable data citation through persistent identifiers; link research data to related publications (e.g. journals, data journals, reports, working papers); support acknowledgement of research funding within metadata elements; offer the possibility to link to software archives; and provide its metadata in a technically and legally open format for European and global re-use by data catalogues and third-party service providers, based on widespread metadata standards and interoperability guidelines. Data should be deposited in trusted data repositories, if available. These repositories should provide reliable long-term access to managed digital resources and be endorsed by the respective disciplinary community and/or the journal(s) in which related results will be published (e.g. Data Seal of Approval, ISO Trusted Digital Repository Checklist).

**WHEN**: Research data related to research publications should be made available to the reviewers in the peer review process. In parallel to the release of the publication, the underlying research data should be made accessible through an Open Data repository. If the project has produced further research datasets (i.e. not necessarily related to publications), these should be registered and deposited as soon as possible, and made openly accessible as soon as possible, at least at the point in time when used as evidence in the context of publications.

**HOW**: The use of appropriate licenses for Open Data is highly recommended (e.g. Creative Commons CC0, Open Data Commons Open Database License).

**WHO**: Responsibility for the deposit of research data resulting from the project lies with the project coordinator (delegated to project partners where appropriate).
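To make the WHERE and WHO steps concrete, a deposit with the bibliographic metadata required by Article 29.2 can be scripted against an open repository such as Zenodo. The following is a minimal sketch assuming Zenodo's REST deposit API; the endpoints and field names follow its public documentation at the time of writing and should be verified before use, and the dataset title and file name are purely illustrative.

```python
import requests

# Minimal sketch of a scripted dataset deposit to Zenodo's REST deposit API.
API = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": "..."}  # personal access token with deposit scope

# 1) Create a new (empty) deposition.
dep = requests.post(API, params=params, json={})
dep.raise_for_status()
dep_id = dep.json()["id"]

# 2) Attach the data file (file name is illustrative).
with open("heating_cooling_demand_map.xml", "rb") as fh:
    requests.post(f"{API}/{dep_id}/files", params=params,
                  data={"name": "heating_cooling_demand_map.xml"},
                  files={"file": fh}).raise_for_status()

# 3) Set the bibliographic metadata required by Article 29.2 of the GA.
metadata = {"metadata": {
    "title": "PLANHEAT city-level heating and cooling demand dataset",  # illustrative
    "upload_type": "dataset",
    "description": "Dataset generated by the PLANHEAT project, funded by the "
                   "European Union (EU) under Horizon 2020, grant agreement 723757.",
    "creators": [{"name": "PLANHEAT Consortium"}],
    "keywords": ["PLANHEAT", "Horizon 2020", "heating and cooling planning"],
    "access_right": "open",
    "license": "cc-by",
    # EC grant linkage in OpenAIRE identifier format (<funder DOI>::<grant no.>);
    # verify the exact format against the current Zenodo documentation.
    "grants": [{"id": "10.13039/501100000780::723757"}],
}}
requests.put(f"{API}/{dep_id}", params=params, json=metadata).raise_for_status()

# 4) Publish: Zenodo mints a DOI, giving the dataset a persistent identifier.
requests.post(f"{API}/{dep_id}/actions/publish", params=params).raise_for_status()
```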
## Research Data Repositories

All data collected during the project will in the first instance be stored and preserved in an online data repository linked to the project website, with access limited to the PLANHEAT Consortium, managed by ARTELYS and intended for internal use. Particular attention will be paid to confidential and/or sensitive data, and the consortium will not disclose or share this information with third parties.

In the internal PLANHEAT Consortium Repository, a specific folder has been dedicated to the collection of data to be included in the future PLANHEAT Open Research Data Platform. At M18 a preliminary analysis will be performed in order to identify the data suitable for open access disclosure; this preliminary list will be integrated and confirmed at the end of the project (M36). Furthermore, it is important to remark that this Data Management Plan will be updated at each reporting period.

Concerning the open access of discoverable data, different online public repository possibilities will be investigated in subsequent stages of the project. Some examples of suitable repositories under evaluation are given below:

* ZENODO (http://www.zenodo.org/) is the open access repository of OpenAIRE (the Open Access Infrastructure for Research in Europe, https://www.openaire.eu/). The goal of the OpenAIRE portal is to make as much European funded research output as possible available to all. Institutional repositories are typically linked to it. Moreover, dedicated pages per project are visible on the OpenAIRE portal, making research output (whether publications, datasets or project information) accessible through the portal. This is possible thanks to the bibliographic metadata that must accompany each publication.

### Fig. 2 – Zenodo homepage

* LIBER (www.libereurope.eu) supports libraries in the development of institutional research data management policies and services. It also enables the exchange of experiences and good practices across Europe. Institutional infrastructures and support services are an emerging area and will be linked to national and international infrastructure and funder policies. Building capacities and skills, as well as creating a culture of incentives for collaboration on research data management, are the core targets of LIBER.

### Fig. 3 – LIBER homepage

The PLANHEAT consortium is also already in contact with "sister projects" (namely Thermos and Hotmaps) currently working on energy and heating and cooling planning, in order to stimulate a mutually beneficial sharing of data, useful to strengthen the future exploitation of the results of the project. Specific energy data gathered at city level through mapping and simulation activities could also be transferred to an open access database of energy-efficient solutions, such as **The Smart City Information System database (SCIS)**, follow-up of the previous CONCERTO initiative.

### Fig. 4 – CONCERTO homepage

Where institutional repositories are available, and depending on the particular institutional policies for their use (e.g. it might be the practice of the company that all open access publications must be deposited there), the research data and scientific publications by PLANHEAT might also be deposited and made openly accessible on institutional repositories.

# Scientific Publications

As reported in the DoA, a dissemination and communication plan has been set up in order to raise awareness of the project outcomes among a specialized audience.
In this framework, the consortium commits itself to publishing in peer reviewed international journals, in order to make the outcomes available to the scientific community. The partners in charge of dissemination activities are responsible for the scientific publications as well as for the selection of the publishers considered most relevant for the subject matter. Further details on dissemination activities are given in D6.6 _"Draft Plan for Dissemination and Exploitation of results"_ and D6.7 _"Final Plan for Dissemination and Exploitation of results"_, delivered at M18 and M36 respectively.

Fully in line with the rules laid down in the PLANHEAT Grant Agreement and reported in section 2.2.1, each beneficiary will ensure open access to all peer reviewed scientific publications relating to its results. The project will make use of a mix of the three different possibilities for open access, namely:

1. **Open access publishing** (without author processing charges): partners may opt for publishing directly in open access journals, i.e. journals which provide open access immediately, by default, without any charges.

2. **Gold open access publishing**: partners may also decide to publish in journals that sell subscriptions but offer the possibility of making individual articles openly accessible (hybrid journals). In such cases, the authors pay a fee to make the article open access; most top-level journals offer this option.

3. **Self-archiving / "green" open access publishing**: alternatively, beneficiaries may deposit the final peer reviewed article or manuscript in an online disciplinary, institutional or public repository of their choice, ensuring open access to the publication within a maximum of six months.

Moreover, the relevant beneficiary will at the same time deposit the research data presented in the deposited scientific publication into a data repository. The consortium will evaluate which of these data will be part of the data to be published on the PLANHEAT Open Research Data Platform, mainly on ethics and confidentiality grounds.

## Selection of suitable publishers

Each publisher has its own policy on self-archiving (i.e. the act of the author depositing a free copy of an electronic document online in order to provide open access to it). Since the publishing conditions of some publishers might not fit the open access requirements applying to PLANHEAT on the basis of the Grant Agreement, each partner in charge of dissemination activities will identify the most suitable repository. In particular, beneficiaries will not choose a repository which claims rights over deposited publications and precludes access. At this stage no specific journal has been identified; each beneficiary, in collaboration with the project coordinator, will evaluate whether the identified journal and its article-sharing policy respect the consortium agreement in terms of Open Access.

According to consortium partners' previous Open Access experience, ELSEVIER journals could be considered a good option. As an example, the ELSEVIER article-sharing policy is summarized in the following table (source: https://www.publishingcampus.elsevier.com/websites/elsevier_publishingcampus/files/Guides/Brochure_OpenAccess_1_web.pdf).

<table> <tr> <th> </th> <th> **Share** </th> </tr> <tr> <td> **Pre submission** </td> <td> Preprints 1 can be shared anywhere at any time. PLEASE NOTE: Cell Press, The Lancet, and some society-owned titles have different preprint policies.
Information on these is available on the journal homepage. </td> </tr> <tr> <td> **After acceptance** </td> <td> Accepted manuscripts 2 can be shared: * Privately with students or colleagues for their personal use. * Privately on institutional repositories. * On personal websites or blogs. * To refresh preprints on arXiv and RePEc. * Privately on commercial partner sites. </td> </tr> <tr> <td> **After publication** </td> <td> Gold open access articles can be shared: * Anytime, anywhere on non-commercial platforms. * Via commercial platforms if the author has chosen a CC-BY license, or the platform has an agreement with us. Subscription articles can be shared: * As a link anywhere at any time. * Privately with students or colleagues for their personal use. * Privately on commercial partner sites. </td> </tr> <tr> <td> **After embargo** </td> <td> Author manuscripts can be shared: * Publicly on non-commercial platforms. * Publicly on commercial partner sites 3. </td> </tr> <tr> <td> 1 A preprint is the initial write-up of the author's results and analysis that has not yet been peer reviewed or submitted to a journal. 2 An accepted manuscript is the version of the author's manuscript which typically includes any changes incorporated through the process of submission, peer review and communication with the editor. 3 For an overview of how and where an author can share an article, see Elsevier.com/sharing-articles. </td> </tr> </table>

## Bibliographic Metadata

As mentioned in the Grant Agreement, metadata for scientific peer reviewed publications must be provided. The purpose is to maximize the discoverability of publications and to ensure EU funding acknowledgment. The inclusion of information relating to EU funding as part of the bibliographic metadata is also necessary for adequate monitoring, production of statistics and assessment of the impact of Horizon 2020. All the following information must be included in the metadata associated with each PLANHEAT publication.

Information about the grant number, name and acronym of the action:

* European Union (EU)
* Horizon 2020 (H2020)
* Research and Innovation Action (RIA)
* PLANHEAT [Acronym]
* Grant Agreement: GA N° 723757

Information about the publication date and embargo period, if applicable:

* Publication date
* (eventual) Length of embargo period

Information about the persistent identifier:

* Persistent identifier, if any, provided by the publisher (for example an ISSN number)

# Research Data

Research data refers to data that are collected, observed or created within a project for purposes of analysis and to produce original research results. Data are plain facts. When they are processed, organized, structured and interpreted to determine their true meaning, they become useful and are called information. In a research context, research data can be divided into different categories, depending on their purpose and on the process through which they are generated. It is possible to have:

* **Observational** data, which are captured in real time, for example sensor data, survey data, sample data.
* **Experimental** data, which derive from lab equipment, for example data resulting from fieldwork.
* **Simulation** data, generated from test or numerical models.

Research data may include all of the following formats:

* Text or word documents, spreadsheets
* Laboratory notebooks, field notebooks, diaries
* Questionnaires, transcripts, codebooks
* Audiotapes, videotapes
* Photographs, films
* Test responses
* Slides, artifacts, specimens, samples
* Collections of digital objects acquired and generated during the research process
* Data files
* Database contents
* Models, algorithms, scripts
* Contents of software applications such as input, output, log files, simulations
* Methodologies and workflows
* Standard operating procedures and protocols

## Key principles for open access to research data

According to the "_Guidelines on FAIR Data Management in Horizon 2020_", research data must be _findable_, _accessible_, _interoperable_ and _re-usable_ [5]. The FAIR guiding principles are reported in the following table.

<table> <tr> <th> FINDABLE </th> <th> **F1** (meta)data are assigned a globally unique and eternally persistent identifier **F2** data are described with rich metadata **F3** (meta)data are registered or indexed in a searchable resource **F4** metadata specify the data identifier </th> </tr> <tr> <td> ACCESSIBLE </td> <td> **A1** (meta)data are retrievable by their identifier using a standardized communications protocol **A1.1** the protocol is open, free, and universally implementable **A1.2** the protocol allows for an authentication and authorization procedure, where necessary **A2** metadata are accessible, even when the data are no longer available </td> </tr> <tr> <td> INTEROPERABLE </td> <td> **I1** (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation **I2** (meta)data use vocabularies that follow FAIR principles **I3** (meta)data include qualified references to other (meta)data </td> </tr> <tr> <td> RE-USABLE </td> <td> **R1** meta(data) have a plurality of accurate and relevant attributes **R1.1** (meta)data are released with a clear and accessible data usage license **R1.2** (meta)data are associated with their provenance **R1.3** (meta)data meet domain-relevant community standards </td> </tr> </table>

## Roadmap and procedures for Data Sharing

PLANHEAT will generate a significant amount of data that will be made available not only for the purposes of the project but also for other tools and studies. To facilitate the publication of project data and the linking with open research data, a repository will be developed in order to share the project data with external communities. The repository is an Open Source tool based on public REST web services and will be published on the project public website. Through the web services, users will be able to download and filter the open data in order to create synergies with other platforms. The website provides a source catalogue, metadata and a description of all the resources shared with the communities.

According to the aforementioned principles (Section 4.1), information on data management is disclosed by detailing the following elements:

* **Data set reference and name**: identifier for the data set to be produced.
* **Data set description**: its origin (in case it is collected), nature and scale, to whom it could be useful, and whether it underpins a scientific publication.
Information on the existence (or not) of similar data and the possibilities for integration and reuse will also be included.

* **Standards and metadata**: reference to existing suitable standards of the discipline. If these do not exist, an outline of how and what metadata will be created has to be given.
* **Data sharing**: description of how data will be shared, including access procedures, embargo periods (if any), outlines of technical mechanisms for dissemination, necessary software and other tools for enabling re-use, and definition of whether access will be widely open or restricted to specific groups. The repository where data will be stored will be identified, if already existing, indicating in particular the type of repository (institutional, standard repository for the discipline, etc.). In case the dataset cannot be shared, the reasons for this should be mentioned (e.g. ethical, IP-, privacy- or security-related, etc.).
* **Archiving and preservation** (including storage and backup): procedures that will be put in place for long-term preservation of the data, indicating how long the data should be preserved, what its approximate end volume is, what the associated costs are and how these are planned to be covered.

Since no dataset has been generated yet at M6, the previous list is to be intended as a guideline for data generated in the future. Obviously, the sharing of data will be strictly linked to the level of confidentiality of the data itself. In particular, the level of confidentiality of gathered data will be checked by the partner responsible for the activity in which the data have been collected (the task leader), together with the data owners (such as public authorities, energy providers, industries, associations, etc.), in order to verify whether the data can be disclosed or not. For this purpose, a written confirmation to publish data in the PLANHEAT Open Access Repository will be requested via e-mail by the task leader from the data owner. Such data will be made available only once the data owner's confirmation has been received. No confidential data generated within the project will be made available in digital form.

## Expected Datasets

Several types of data will be collected during the project in order to assess the new strategies and practices, also in comparison with current ones, with the objective of evaluating their efficacy. Research data of this type will primarily consist of information such as facts and numbers (especially statistics as well as factual data, such as those related to land use/soil occupancy), which will be collected to be examined and considered as a basis for reasoning, discussion and calculation, as well as results of interviews and surveys (especially those aimed at characterising city needs for energy planning). Among the data generated within the project that could be shared within the open repository are:

* _Specific data from cities, if the Municipality is the direct owner of the data (or the utility accepted to share them) and no personal information/industrial confidential information is present (particularly Velika Gorica, Lecce and Antwerp)_: cadastral maps (only if they are anonymous and no fee is requested for access), building footprint maps, energy consumption in aggregated forms compliant with D8.1, etc.
* _Georeferenced database populated with hourly distribution of HDH and CDH to include UHI phenomena in Velika Gorica, Lecce and Antwerp_.
A monthly subset of the air temperatures retrieved from the IAASARS/NOA system (from which the CDH and HDH will be estimated) will be made available to undergo validation against in-situ meteorological station data (to be provided by the cities). This will result in a specific calculation of the confidence put in this dataset.

* _Database including energy efficiency and retrofitting measures for buildings and the related % of H&C demand reduction_, if explicitly anonymized following the D8.1 guidelines.
* _Other databases already publicly shared_: e.g. demographic statistics, spatial analysis and statistics (e.g. green surface, number of schools, etc.).
* _Database on industrial waste energy_ coming from industries, waste incinerators, power plants and, for cooling purposes, LNG terminals and NG decompression stations. This kind of data will be published in a completely anonymous way, providing the waste heat value either as a specific value or in a range of magnitude (e.g. 10-20 MWh), according to the industry's confidentiality requests and in complete accordance with the D8.1 guidelines and confidentiality issues.
* _Specific data for supply sources_: solar energy, biomass, geothermal energy, water bodies, sewage networks, data centres, shopping malls, underground ventilation shafts.
* _Database of emissions, technology and costs for the EU-28 countries_, coming from already public databases.

Data about urban energy infrastructures (i.e. maps of gas networks, DH networks, electricity networks, sewage networks) are normally confidential information and will not be shared without the permission of the utility/infrastructure owner, which will be contacted for this purpose. Accordingly, the beneficiaries will:

* Verify with the data owners the level of confidentiality of the gathered data;
* Deposit the publicly disclosable data produced within or collected for the purposes of the project (including associated metadata) in an open research data repository;
* Check whether the shared data can be stored and for how long;
* Take measures to allow any user to access, mine, exploit, reproduce and disseminate the data free of charge;
* Provide information about tools and instruments necessary for validating the results (providing the tools and instruments themselves whenever possible, or alternatively providing at least information, via the chosen repository, about the tools and instruments necessary for validating the results, such as specialized software or software code, algorithms, analysis protocols, etc.).

Beyond the input data listed above, the following outputs will be made publicly available during the project lifetime, mainly by distributing public deliverables on the website and other data/information such as:

* Results from the cities survey towards the PLANHEAT tool specifications definition.
* Results related to the analysis of available databases concerning potential energy sources.
* Common IT framework specifications of the PLANHEAT integrated tool.
* Models for mapping and quantifying current and future H&C demand in cities.
* Models for mapping and quantifying local energy sources in cities (RES, waste heat, unconventional heat sources, etc.).
* KPIs calculation models.
* Algorithms for energy planning and simulation.
* Geospatial dataset of hourly variation of HDH and CDH.
* Strategic Plans for the cities of Antwerp, Lecce and Velika Gorica.
* All the outcomes of the activities related to the validation of the PLANHEAT tool, thanks to the support of the PLANHEAT validation cities: Antwerp, Lecce and Velika Gorica.

It is also important to consider that the whole PLANHEAT tool will be available at the end of the project (M36) in an Open Source version downloadable from the project website.

# Potential Exceptions to Open Access

The PLANHEAT modules constituting the integrated platform will be developed iteratively and by means of prototypes of growing complexity and functionality. These prototypes will be kept confidential until the final release is ready (according to what is reported in the DoA). As already reported in Chapter 4, the level of confidentiality of data will be verified with the data owners, in order to disclose only the information for which the consortium has received written permission to publish from the data owners themselves. It is foreseen that some data may be kept confidential and/or subject to restrictions on diffusion.

One potential exception to open access is represented by the individual surveys carried out among the PLANHEAT cities networks engaged during the project. Some of the cities have already asked to keep the questionnaire anonymous; therefore, such data may be only partially available. A further exception concerns energy consumption and production data available at city level, which may be owned by local DSOs or energy providers. These data will be used for validating the different PLANHEAT modules in the validation cities, and it is reasonable to assume that part of them will be kept confidential.

Moreover, in order to define models for evaluating the industrial excess heating and cooling that could be recovered and reused in the city, partners will use the results of energy audits carried out in different industry typologies. The specific data used for elaborating the energy audits will be kept confidential, since they are the property of the industry itself, while the models elaborated for evaluating industrial waste heat will be publicly available (Deliverable 2.7, PU, M21).

Data subject to confidentiality restrictions will be provided by the participants themselves (industries, local DSOs or heating providers, cities, etc.); they will be stored and protected with state-of-the-art security measures, accessed only by selected and restricted personnel of the partners, and used to validate the performance of the PLANHEAT tool. This list of potential exceptions to open access must be considered provisional: as reported above, the data management plan will be updated at each reporting period to reflect the project's evolution. Furthermore, data collection will be performed fully in compliance with European standards and regulations on the protection of personal data, as already outlined in D8.1 "_Ethics Requirements_", in order to avoid incidental findings during the analysis of heating and cooling demand data that could be traced back to personal habits, preferences, heating and cooling consumption, etc.

# Conclusions and next steps

The present document has outlined a preliminary strategy for the management of the data generated throughout the PLANHEAT project.
Considering that this deliverable is due at month six, few datasets have been generated yet, so it is possible that some aspects outlined in the present document will need to be refined or adjusted in the future. This initial data management plan has nevertheless demonstrated that the consortium fully commits itself to complying with open access requirements, not least because the PLANHEAT tool will be a completely open source tool. Moreover, a tentative list of datasets has been compiled, showing the soundness of the concepts that the project aims to develop and demonstrate.

A dedicated Excel tracking sheet has been set up for the evaluation of the disclosure of project data. The results of this monitoring process (DAPP) will be progressively presented and discussed during the Consortium General Assembly meetings throughout the project's life. Updates of the data management plan will be reported in the periodic reports at the end of each reporting period.

Moreover, a specific task, namely Task 1.4 on "Overcoming barriers in data collection", has been included in the DoA. Within this task, protocols and procedures for data collection will be defined with reference to the different layers of data needed by the PLANHEAT modules. A comprehensive repository of publicly available existing databases will be set up within this task, clustering the already existing databases which can be used for the project purposes (such as Corine, GEO2O, the BPIE database, etc.).
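Purely by way of illustration, one record of the disclosure tracking sheet described above could look as sketched below. The column names and sample values are assumptions made for this example and do not reproduce the actual DAPP layout.

```python
import csv

# Illustrative structure for a per-dataset disclosure tracking record.
# Column names and sample values are assumptions, not the real DAPP layout.
FIELDS = ["dataset", "task_leader", "data_owner",
          "confidentiality", "owner_confirmation", "open_access"]

records = [{
    "dataset": "Building footprint maps (validation city)",
    "task_leader": "responsible partner",        # partner leading the task
    "data_owner": "municipality / utility",      # contacted for permission
    "confidentiality": "public",
    "owner_confirmation": "received by e-mail",  # written confirmation required
    "open_access": "yes",
}]

with open("dapp_tracking.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```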
0379_AMECRYS_712965.md
# Introduction

This report describes the **Initial Data Management Plan (DMP)** for the AMECRYS project, funded by the EU's Horizon 2020 Programme under Grant Agreement number 712965. The purpose of the DMP is to set out the main elements of the AMECRYS consortium data management policy for the datasets generated by the project. The DMP presents in detail only the procedure for the management of datasets created during the lifetime of the project and describes the key data management principles, notably in terms of data standards and metadata, sharing, archiving and preservation.

This is the third version of the AMECRYS DMP and fulfils project deliverable D7.2 (month 6), with the University of Strathclyde (UST) as responsible partner. It draws on Horizon 2020 guidance (European Commission, 2016) and guidance from the Research and Knowledge Exchange Services department, University of Strathclyde (University of Strathclyde, 2014).

Section 4 below lists the project partners and key contacts. Section 5 provides a summary of the datasets generated during the lifetime of the project, including their types and formats, the expected size of the datasets and the data utility. The specific description of how AMECRYS will make this research data findable, accessible, interoperable and reusable (FAIR) is outlined in section 6. Sections 7 to 9 outline the policy in relation to data resources, security and ethics.

This is a live "active" document, to be updated at regular intervals during the project ( _https://strathcloud.sharefile.eu/f/focfc1a4-6516-4951-93f0-4e86dd241d17_ ).

# 4 Project Participants

<table> <tr> <th> **Full Name** </th> <th> **Short Name** </th> <th> **Contacts** </th> </tr> <tr> <td> Consiglio Nazionale delle Ricerche, Italy </td> <td> CNR </td> <td> Dr Gianluca DI PROFIO [email protected] </td> </tr> <tr> <td> Imperial College of Science, Technology and Medicine, UK </td> <td> IMP </td> <td> Dr Jerry HENG [email protected] </td> </tr> <tr> <td> Università della Calabria, Italy </td> <td> UCAL </td> <td> Prof Efrem CURCIO [email protected] </td> </tr> <tr> <td> Centre national de la recherche scientifique, France </td> <td> CNRS </td> <td> Dr Jean-Baptiste SALMON [email protected] </td> </tr> <tr> <td> Université libre de Bruxelles, Belgium </td> <td> ULB </td> <td> Dr Jim LUTSKO [email protected] </td> </tr> <tr> <td> University of Strathclyde, UK </td> <td> UST </td> <td> Prof Joop TER HORST [email protected] </td> </tr> <tr> <td> Centre for Process Innovation Limited, UK </td> <td> CPI </td> <td> Dr John LIDDELL [email protected] </td> </tr> <tr> <td> GVS S.p.a., Italy </td> <td> GVS </td> <td> Dr Soccorso GAETA [email protected] </td> </tr> <tr> <td> Fujifilm Diosynth Biotechnologies, UK </td> <td> FDB </td> <td> Dr James PULLEN [email protected] </td> </tr> </table>

<table> <tr> <th> **Coordinator Contact** </th> <th> </th> </tr> <tr> <td> Dr Gianluca DI PROFIO </td> <td> E: [email protected] T: +39 0984 492010/492014 </td> <td> Consiglio Nazionale delle Ricerche (CNR), Istituto per la Tecnologia delle Membrane (ITM), Via P. Bucci Cubo 17/C, I-87036 Rende (CS), Italy </td> </tr> </table>

# 5 Data Summary

The main purpose of the data collection/generation in this project is to industrially enable the template-assisted membrane crystallization process through a thorough scientific understanding of the process. AMECRYS will produce several datasets during the lifetime of the project.
The data will be both quantitative and qualitative in nature and will be analysed from a range of methodological perspectives for project development and scientific purposes. They will be available in a variety of easily accessible formats, including PostScript (PDF, XPS), Excel (XLSX, CSV), Word (DOC, RTF), PowerPoint (PPT), image (JPEG, PNG, GIF, TIFF), Origin (OPJ), compressed formats (TAR.GZ, MTZ) and Program database (PDB). Within AMECRYS approximately 49 separate datasets will be created (see the list in the table below). They are listed under each of the work package deliverables taken from the GA Annex 1 – Description of Action. The datasets will have the same structure, in accordance with the Horizon 2020 guidelines for the Data Management Plan. The expected size of the datasets produced will be between 5 MB and 1 GB.

**Table 5.1 –** Potential datasets

<table> <tr> <th> **Data Type** </th> <th> **Format** </th> <th> **Volume** </th> <th> **IPR Owner** </th> </tr> <tr> <td> **Work Package 2 - D2.1: Report on preparation of nanotemplates for mAb crystallization (lead: IMP)** </td> </tr> <tr> <td> Experimental data – Brunauer–Emmett–Teller (BET) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – scanning electron microscope (SEM) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – transmission electron microscopy (TEM) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – crystal shape/size measurement </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – X-ray diffraction (XRD) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – High performance liquid chromatography (HPLC) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – dynamic light scattering (DLS) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Synthesis protocol </td> <td> DOC, PDF, JPEG </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Lab notes </td> <td> DOC, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> **Work Package 2 - D2.2: HEL4 domain fragment & Anti CD20 mAb process specification report (lead: FDB)** </td> </tr> <tr> <td> Final report </td> <td> DOC, PDF </td> <td> < 100 MB </td> <td> FDB </td> </tr> <tr> <td> USP Protocols and technology transfer package for production of Anti-CD20 </td> <td> DOC, XLSX, PDF, PPT </td> <td> < 200 MB </td> <td> FDB </td> </tr> <tr> <td> DSP Protocols and technology transfer package for production of Anti-CD20 </td> <td> DOC, XLSX, PDF, PPT </td> <td> < 200 MB </td> <td> FDB </td> </tr> <tr> <td> USP Protocols and technology transfer package for production of HEL4 </td> <td> DOC, XLSX, PDF, PPT </td> <td> < 200 MB </td> <td> FDB </td> </tr> <tr> <td> DSP Protocols and technology transfer package for production of HEL4 </td> <td> DOC, XLSX, PDF, PPT </td> <td> < 200 MB </td> <td> FDB </td> </tr> <tr> <td> **Work Package 2 - D2.3: Report on the optimised nanotemplates for selective mAb recognition & crystallization (lead: IMP)** </td> </tr> <tr> <td> Experimental data – Brunauer–Emmett–Teller (BET) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – scanning electron microscope (SEM) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr>
<tr> <td> Experimental data – transmission electron microscopy (TEM) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – crystal shape/size measurement </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – X-ray diffraction (XRD) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – High performance liquid chromatography (HPLC) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Experimental data – dynamic light scattering (DLS) </td> <td> XLSX, JPEG, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Synthesis protocol </td> <td> DOC, PDF, JPEG </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> Lab notes </td> <td> DOC, PDF </td> <td> < 100 MB </td> <td> IMP </td> </tr> <tr> <td> **Work Package 3 - D3.1: Report on pilot lines to prepare membranes available and debugged (lead: GVS)** </td> </tr> <tr> <td> Experimental results from pilot tests </td> <td> PDF </td> <td> < 100 MB </td> <td> GVS </td> </tr> <tr> <td> Pictures of pilot plant developed </td> <td> PDF </td> <td> < 100 MB </td> <td> GVS </td> </tr> <tr> <td> **Work Package 3 - D3.2: Report on the development of membranes for heterogeneous mAbs nucleation (lead: CNR)** </td> </tr> <tr> <td> Experimental results - Membranes development data v1 </td> <td> DOC, XLSX, OPJ, PPT, JPEG, TIFF </td> <td> < 500 MB </td> <td> CNR </td> </tr> <tr> <td> Experimental results - Membranes development data v2 </td> <td> DOC, XLSX, OPJ, PPT, JPEG, TIFF </td> <td> < 1 GB </td> <td> CNR </td> </tr> <tr> <td> Experimental results - Heterogeneous nucleation data v1 </td> <td> DOC, XLSX, OPJ, PPT, JPEG, TIFF </td> <td> < 500 MB </td> <td> CNR </td> </tr> <tr> <td> Experimental results - Heterogeneous nucleation data v2 </td> <td> DOC, XLSX, OPJ, PPT, JPEG, TIFF </td> <td> < 1 GB </td> <td> CNR </td> </tr> <tr> <td> **Work Package 3 - D3.3: Report on membrane preparation scale-up (lead: GVS)** </td> </tr> <tr> <td> Report about prototype of membrane </td> <td> PDF </td> <td> < 100 MB </td> <td> GVS </td> </tr> <tr> <td> **Work Package 4 - D4.1: Robust microfabrication protocols to embed hydrophobic fluoropolymer membranes within microfluidic chips (lead: CNRS)** </td> </tr> <tr> <td> Fabrication Protocols of Microfluidic devices </td> <td> PDF, JPEG </td> <td> < 10 MB </td> <td> CNRS </td> </tr> <tr> <td> **Work Package 4 - D4.2: Multilevel microfluidic device for high throughput crystallization screening ("pharma-on-a-chip" concept) (lead: CNRS)** </td> </tr> <tr> <td> Microfluidic screening of membrane crystallization </td> <td> PDF, XLSX </td> <td> < 10 MB </td> <td> CNRS </td> </tr> <tr> <td> **Work Package 4 - D4.3: Report on selective nucleation and growth kinetics of mAbs from screening tests (lead: UST)** </td> </tr> <tr> <td> Experimental results - Crystallization Kinetics data v1 </td> <td> XLSX, JPEG </td> <td> 55 MB </td> <td> UST </td> </tr> <tr> <td> Experimental results - Crystallization Kinetics data v2 </td> <td> XLSX, JPEG </td> <td> < 100 MB </td> <td> UST </td> </tr> <tr> <td> **Work Package 5 - D5.1: Simulation code for thermodynamics of coarse-grained model of mAbs in confined geometry (lead: ULB)** </td> </tr> <tr> <td> Code written by partners in the project - ftDFT Code </td> <td> TAR.GZ </td> <td> < 10 MB </td> <td> ULB </td> </tr> <tr> <td> **Work Package 5 - D5.2: Open-source MCFFS computational simulation packages (lead: ULB)** </td> </tr> <tr> <td> Code written by
partners in the project - KMC Code </td> <td> TAR.GZ </td> <td> < 10 MB </td> <td> ULB </td> </tr> <tr> <td> **Work Package 5 - D5.3: Report on multiscale simulation of mAbs crystallization on membranes/nanotemplates (lead: UCAL)** </td> </tr> <tr> <td> Report on simulation activities - Models/software </td> <td> PDF, JPEG </td> <td> < 100 MB </td> <td> UCAL </td> </tr> <tr> <td> **Work Package 5 - D5.4: Report on structural/morphological/bioactivity properties of mAbs crystals produced by prototype operation crystallizers (lead: CNR)** </td> </tr> <tr> <td> Results of structural analysis </td> <td> MTZ, PDB </td> <td> 50 MB </td> <td> CNR </td> </tr> <tr> <td> Results of crystal morphology </td> <td> XLSX, TIFF </td> <td> 50 MB </td> <td> CNR </td> </tr> <tr> <td> Bioactivity tests </td> <td> XLSX, TIFF </td> <td> 10 MB </td> <td> CNR </td> </tr> <tr> <td> Regression models & multivariate analysis </td> <td> XLSX, TIFF, DOC </td> <td> 5 MB </td> <td> CNR </td> </tr> <tr> <td> **Work Package 6 - D6.1: Design of continuous flow template assisted membrane crystallizer prototype (lead: UCAL)** </td> </tr> <tr> <td> Design of prototype – Design specifications & Design drawings </td> <td> PDF, JPEG </td> <td> < 100 MB </td> <td> UCAL </td> </tr> <tr> <td> **Work Package 6 - D6.2: Installation and validation of continuous flow template assisted membrane crystallizer prototype (lead: UCAL)** </td> </tr> <tr> <td> Installation of prototype - Construction schedules </td> <td> PDF, XLSX, CSV </td> <td> < 100 MB </td> <td> UCAL </td> </tr> <tr> <td> Validation of prototype – Lab notes </td> <td> PDF </td> <td> < 500 MB </td> <td> UCAL </td> </tr> <tr> <td> **Work Package 6 - D6.3: Report on prototype's operation monitoring & compliance with QS/GMP/CQA regulations (lead: CPI)** </td> </tr> <tr> <td> Experimental results (USP/DSP/Analytics) – Anti-CD20 prototype operation </td> <td> DOC, XLSX, PPT, PDF, XPS, JPEG, TIFF </td> <td> 500 MB </td> <td> CPI </td> </tr> <tr> <td> Experimental results (USP/DSP/Analytics) – HEL4 prototype operation </td> <td> DOC, XLSX, PPT, PDF, XPS, JPEG, TIFF </td> <td> 500 MB </td> <td> CPI </td> </tr> <tr> <td> Report summarising operation performance including CQA, comparison to conventional approaches and GMP compliance </td> <td> DOC </td> <td> 10 MB </td> <td> CPI </td> </tr> <tr> <td> **Work Package 6 - D6.4: Techno-economic comparison between conventional batch and innovative DSP (lead: FDB)** </td> </tr> <tr> <td> Final report </td> <td> DOC, PDF </td> <td> < 100 MB </td> <td> FDB </td> </tr> <tr> <td> Output from BioSolve Simulation </td> <td> BioSolve, XLSX </td> <td> < 200 MB </td> <td> FDB </td> </tr> </table>

**Table 5.2 –** Lead Partners for Work Packages

<table> <tr> <th> **Lead partner** </th> <th> **Related WP(s)** </th> </tr> <tr> <td> IMP </td> <td> WP2: Production of mAb/domain & Nanotemplates synthesis </td> </tr> <tr> <td> GVS </td> <td> WP3: Membranes development </td> </tr> <tr> <td> CNRS </td> <td> WP4: Microfluidics for continuous mAbs crystallization </td> </tr> <tr> <td> ULB </td> <td> WP5: Multi-scale modelling & characterization of mAb crystals </td> </tr> <tr> <td> CPI </td> <td> WP6: Prototype design, construction & operation </td> </tr> </table>

# 6 FAIR (Findability, Accessibility, Interoperability, and Reusability) data

## 6.1 Making data findable, including provisions for metadata

A DOI will be assigned to each dataset for effective and persistent citation when it is uploaded to the repository [ _Zenodo_ ].
This DOI can be used in any relevant publications to direct readers to the underlying dataset. Each dataset generated during the project will be recorded in an Excel spreadsheet with a standard format and allocated a dataset identifier; see tables 6.1.1 and 6.1.2 below. The spreadsheet will be hosted at UST [ _ShareFile_ ]. This dataset information will be included in the metadata file (see _Section 6.3_ and _Annex B_ ) at the beginning of the documentation, and updated with each version.

The AMECRYS naming convention for project datasets comprises the following:

1. A unique chronological number of the dataset within the project.
2. The title of the dataset.
3. A version number allocated to each new version of a dataset, starting for example at v1.0.
4. A prefix "AM" indicating an AMECRYS dataset.
5. A unique identification number linking the dataset with its work package and deliverable/task, e.g. "W4_D4.3".

**01_ Crystallization Kinetics_v1.0._AM_W4_D4.3.xlsx**

Search keywords will be provided when the dataset is uploaded to Zenodo, which will optimise the possibilities for re-use. Zenodo follows the minimum DataCite metadata standards. The specific metadata contents, formats and volume are given in table 6.1.1 below and will be further defined in future versions of the DMP.

**Table 6.1.1**: AMECRYS Dataset fields

<table> <tr> <th> **Dataset Identifier** </th> <th> The ID allocated using the naming convention outlined in section 6.1 </th> </tr> <tr> <td> **Title of Dataset** </td> <td> The title of the dataset, which should be easily searchable and findable </td> </tr> <tr> <td> **Responsible Partner** </td> <td> Lead partner responsible for the creation of the dataset </td> </tr> <tr> <td> **Work Package** </td> <td> The work package from which this dataset originates </td> </tr> <tr> <td> **Dataset Description** </td> <td> A brief description of the dataset </td> </tr> <tr> <td> **Dataset Benefit** </td> <td> What the benefits of the dataset are </td> </tr> <tr> <td> **Dataset Dissemination** </td> <td> Where the dataset will be disseminated </td> </tr> <tr> <td> **Type Format** </td> <td> This could be DOC, XLSX, PDF, JPEG, TIFF, PPT etc.
(see table 5.1) </td> </tr> <tr> <td> **Expected Size** </td> <td> The approximate size of the dataset </td> </tr> <tr> <td> **Source** </td> <td> How/why the dataset was generated </td> </tr> <tr> <td> **Repository** </td> <td> Expected repository for submission </td> </tr> <tr> <td> **DOI (if known)** </td> <td> The DOI can be entered once the dataset has been deposited in the repository </td> </tr> <tr> <td> **Date of Repository Submission** </td> <td> The date of submission to the repository can be added once it has been submitted </td> </tr> <tr> <td> **Keywords** </td> <td> The keywords associated with the dataset </td> </tr> <tr> <td> **Version Number** </td> <td> To keep track of changes to the datasets </td> </tr> <tr> <td> **Link to metadata file** </td> <td> </td> </tr> </table>

**Table 6.1.2:** AMECRYS Completed Dataset example

<table> <tr> <th> **Dataset Identifier** </th> <th> 01_ Crystallization Kinetics_v1.0._AM_W4_D4.3.xlsx </th> </tr> <tr> <td> **Title of the dataset** </td> <td> Crystallization kinetics </td> </tr> <tr> <td> **Responsible Partner** </td> <td> UST </td> </tr> <tr> <td> **Work Package** </td> <td> WP4, Deliverable 4.3 </td> </tr> <tr> <td> **Dataset Description** </td> <td> A dataset of protein nucleation rate data measured from probability distributions of induction times </td> </tr> <tr> <td> **Dataset Benefit** </td> <td> The developed method for collection, processing and analysis of lysozyme data will apply to the measurement of nucleation kinetics of the Anti-CD20 protein. Furthermore, this data will be used to construct new heterogeneous protein nucleation theories (Task 5.2) and to enable process scale-up (Task 6.1) </td> </tr> <tr> <td> **Dataset Dissemination** </td> <td> This data will be the basis for a peer reviewed journal paper on the heterogeneous crystal nucleation of the studied proteins </td> </tr> <tr> <td> **Type Format** </td> <td> Word, Excel, JPEG </td> </tr> <tr> <td> **Expected Size** </td> <td> < 100 MB </td> </tr> <tr> <td> **Source** </td> <td> Experimental results </td> </tr> <tr> <td> **Repository** </td> <td> Zenodo </td> </tr> <tr> <td> **DOI (if known)** </td> <td> To be inserted once the dataset is uploaded to the repository </td> </tr> <tr> <td> **Date of Repository Submission** </td> <td> In this example the expected date is 30-9-2020, or earlier if used for publication </td> </tr> <tr> <td> **Keywords** </td> <td> Protein Crystallization, Crystal Nucleation Rate, Induction Time Probability Distributions, Template Crystallization, Heterogeneous Nucleation </td> </tr> <tr> <td> **Version Number** </td> <td> V1.0 </td> </tr> <tr> <td> **Link to metadata file** </td> <td> </td> </tr> </table>

## 6.2 Making data openly accessible

Research data created in the project is owned by the partner who generates it ( _GA Art. 26_ ). Each partner must disseminate its results as soon as possible unless there is a legitimate interest to protect the results. A partner that intends to disseminate its results must give advance notice to the other partners (at least 45 days), together with sufficient information on the results it will disseminate ( _GA Art. 29.1_ ). Research data should be deposited in the Zenodo repository as soon as possible, unless a decision has been taken to protect results. Specifically, research data needed to validate the results in the scientific publications should be deposited in the data repository at the same time as publication ( _GA Art. 29.3_ ).
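As an indicative sketch of how such a deposit could be automated, the snippet below uses Zenodo's public REST API (documented at https://developers.zenodo.org). The access token and metadata values are placeholders and error handling is omitted; the grant number (712965) and community identifier (amecrys-project-eu) are those given elsewhere in this document.

```python
import requests

BASE = "https://zenodo.org/api/deposit/depositions"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # personal access token (placeholder)
FILENAME = "01_ Crystallization Kinetics_v1.0._AM_W4_D4.3.xlsx"

# 1. Create an empty deposition.
dep = requests.post(BASE, params=TOKEN, json={}).json()

# 2. Upload the dataset file to the new deposition.
with open(FILENAME, "rb") as fp:
    requests.post(f"{BASE}/{dep['id']}/files", params=TOKEN,
                  data={"name": FILENAME}, files={"file": fp})

# 3. Attach metadata, including the EU grant so OpenAIRE can link the funding.
metadata = {"metadata": {
    "title": "Crystallization kinetics",
    "upload_type": "dataset",
    "description": "Protein nucleation rate data from induction time distributions.",
    "creators": [{"name": "AMECRYS consortium"}],
    "communities": [{"identifier": "amecrys-project-eu"}],
    "grants": [{"id": "712965"}],  # AMECRYS H2020 grant agreement number
}}
requests.put(f"{BASE}/{dep['id']}", params=TOKEN, json=metadata)

# 4. Publish the deposition; Zenodo then mints the DOI that is recorded
#    in the project tracking spreadsheet.
requests.post(f"{BASE}/{dep['id']}/actions/publish", params=TOKEN)
```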
During embargo periods, information about the restricted data will be published in the data repository, and details of when the data will become available will be included in the metadata. Where a restriction on open access to data is necessary, attempts will be made to make the data available under controlled conditions to other individual researchers. There will be three restricted datasets within deliverable D6.3 (lead: CPI):

* Anti-CD20 CHO cell line
* HEL4 E. coli strain
* Growth protocols (although basic data such as final product titre would be available).

These datasets are proprietary to Fujifilm (FDB) and may only be used in the restricted application of making material to support the work of this project. As these activities are enabling aspects of the project allowing the production of industry-relevant biomolecules which can be used to develop the continuous crystallisation technology, it is not felt that these restrictions will impact the eventual dissemination of the project outputs for the continuous crystallisation technology. CPI will archive the data as set out in Section 8: Data security.

In accordance with GA Art. 25, data must be made available to partners upon request, including in the context of checks, reviews, audits or investigations. Data will be made accessible and available for re-use and secondary analysis. The AMECRYS project has chosen to use Zenodo.org as the repository for storing the project data:

* _Research. Shared._ — all research outputs from across all fields of research are welcome! Sciences and Humanities, really!
* _Citeable. Discoverable._ — uploads get a Digital Object Identifier (DOI) to make them easily and uniquely citeable.
* _Communities_ — create and curate your own community for a workshop, project, department, journal, into which you can accept or reject uploads. Your own complete digital repository!
* _Funding_ — identify grants, integrated in reporting lines for research funded by the European Commission via OpenAIRE.
* _Flexible licensing_ — because not everything is under Creative Commons.
* _Safe_ — your research output is stored safely for the future in the same cloud infrastructure as CERN's own LHC research data.

Not all project partners have access to an institutional repository, and the use of Zenodo ensures data management procedures are unified across the project. A project page (community) has been set up for easy upload of project datasets: _https://zenodo.org/communities/amecrys-project-eu_. Details of how to access the data will be available on the project website _www.amecrys-project.eu_.

Zenodo.org is open, free, searchable and structured, with flexible licensing allowing for the storage of all types of data: datasets, images, presentations, publications and software. In addition, Zenodo allows researchers to deposit both publications and data, while providing tools to link them. All the public data of the project will be openly accessible in the repository. Non-public data will be archived in the repository using the "closed access" option. Data objects will be deposited in ZENODO under:

* Open access to data files and metadata, with data files provided over standard protocols such as HTTP and OAI-PMH.
* Use and re-use of data permitted.
* Privacy of its users protected.

Since the data is being deposited in an external repository [Zenodo], a dataset registry record should also be created in the local host institutions' repositories, e.g. PURE for UST.
The registry record should include relevant metadata explaining what data exists, and a DOI linking to where the data is available in the external repository. Any data which is deposited externally in a closed state, i.e. not accessible, should also be deposited in Pure, so that the University is still able to access the data. Here is an example of a dataset registry record in Pure, which includes a description of the dataset and a DOI linking to where the data is available in the UK Data Service: _https://pure.strath.ac.uk/portal/en/datasets/humour-styles-and-bullying-in-schools(fff279ab-3b66-4e25-99f1-39122b58839c).html_

## 6.3 Making data interoperable

The AMECRYS project aims to collect and document the data in a standardised way to ensure that the datasets can be understood, interpreted and shared, alongside the accompanying metadata and documentation. Generated data will be preserved on institutional intranet platforms until the end of the project ( _see section 8_ ). A metadata file will be created and linked within each dataset. It will include the following information:

### General Information

* Title of the dataset
* Dataset Identifier
* Responsible Partner
* Author Information
* Date of data collection
* Geographic location of data collection
* The title of the project and the funding sources that supported the collection of the data

### Sharing/Access Information

* Licenses/access restrictions placed on the data
* Link to the data repository
* Links to other publicly accessible locations of the data
* Links to publications that cite or use the data
* Was the data derived from another source?

### Dataset/File Overview

* This dataset contains X sub-datasets, as listed below
* What is the status of the documented data? – "complete", "in progress", or "planned"
* Are there plans to update the data?

### Methodological Information

* Used materials
* Description of methods used for experimental design and data collection: <Include links or references to publications or other documentation containing experimental design or protocols used in data collection>
* Methods for processing the data: <describe how the submitted data were generated from the raw or collected data>
* Instruments and software used in data collection and processing-specific information needed to interpret the data
* Standards and calibration information, if appropriate
* Environmental/experimental conditions
* Description of any quality-assurance procedures performed on the data
* Dataset benefits

An example of a metadata file can be found in _Annex B_.

## 6.4 Increase data re-use (through clarifying licences)

The datasets will be made available for re-use through uploads to the Zenodo community page for the project. In principle, the data will be stored in Zenodo after the conclusion of the project without additional cost. All the research data will be of the highest quality, have long-term validity and be well documented, so that other researchers are able to access and understand them after 5 years. If datasets are updated, the partner that possesses the data has the responsibility to manage the different versions and to make sure that the latest version is available in the case of publicly available data. Quality control of the data is the responsibility of the relevant responsible partner generating the data.

# 7 Allocation of resources

There are no immediate costs anticipated to make the datasets produced FAIR.
The datasets will be deposited in the Zenodo repository for at least 5 years after the conclusion of the project. Any unforeseen costs related to open access to research data in Horizon 2020 are eligible for reimbursement during the duration of the project under the conditions defined in the Grant Agreement, in particular Article 6 and Article 6.2.D.3.

Prof Joop ter Horst and Claire Lynch, based at the University of Strathclyde (UST), are responsible for data management within the AMECRYS project, specifically for D7.2 (creation of the data management plan) and D7.6 (updating the data management plan and ensuring the datasets are recorded). The PI of each partner will have overall responsibility for implementing the data management plan. Each AMECRYS partner should respect the policies set out in this DMP. Datasets have to be created, managed and stored appropriately and in line with European Commission and local legislation. Dataset validation, registration of metadata and backing up data for sharing through repositories are the responsibility of the partner that generates the data in the WP.

The datasets in Zenodo will be preserved in line with the European Commission Data Deposit Policy. The data will be preserved indefinitely (minimum of 5 years) and there are currently no costs for archiving data in this repository.

# 8 Data security

For the duration of the project, datasets will be stored on the responsible partner's centrally provided storage, detailed in the table below.

**Table 8: Data Storage**

<table> <tr> <th> **Short Name** </th> <th> **Data Storage** </th> </tr> <tr> <td> CNR </td> <td> All data is stored at the same time on internal servers of CNR-ITM, CNR-IC and CNR-IAC, located in Rende and Bari, 300 km apart. Data will also be fully copied to cloud-based repositories once these are provided by the central ICT system of CNR. Selected data is also stored in cloud-based repositories (Dropbox, Google Drive) for easy sharing. </td> </tr> <tr> <td> IMP </td> <td> Non-sensitive data is stored in the "Home directory" of Imperial College London. Data stored on H: drives is secure and backed up daily, so a deleted file can be restored within 24 hours. More information can be found at _https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/file-storage/home-directory-h-drive/_ . Sensitive data can be encrypted and then stored in the "Home directory" of Imperial College London. More information can be found at _https://www.imperial.ac.uk/admin-services/ict/self-service/be-secure/protect-college-personal-information/encryption/_ </td> </tr> <tr> <td> UCAL </td> <td> At the University of Calabria, data storage is managed by the ICT Center ("Centro ICT di Ateneo", [email protected]). The ICT Center is in charge of Security Analysis of Applications (AsIA), the Virtual Desktop Infrastructure Service (Cloud-VDIS), Microsoft Windows and Linux Debian OS servers, High Performance Computing (HPC) and the Institutional Research Information System (Cineca IRIS, an innovative best-of-breed solution designed to fulfil the needs of academic and research institutions: IRIS is an IT platform that makes it easy to collect and manage data on research activities and outputs within an organization, _https://www.cineca.it/en/content/iris-institutional-research-informationsystem_ ).
</td> </tr> <tr> <td> CNRS </td> <td> All data are either stored on Solvay's data repositories, which are managed by the IT service of Solvay (with no access from outside), or on cloud-based repositories such as MyCore or Google Drive for sharing the data easily. </td> </tr> <tr> <td> ULB </td> <td> All code and resultant data are stored in git repositories that are fully copied across numerous computers, both on site at the ULB and off site. Selected data is also stored in cloud-based repositories (Dropbox, Google Drive, ...). </td> </tr> <tr> <td> UST </td> <td> Data stored on the University of Strathclyde's storage is dual sited and replicated between two data centres which are physically separated by several hundred metres. Data links between datacentres are provided by dual disparate fabrics, providing added resilience. Additionally, the central I.T. service provides tape-based backup to a third and fourth site. Data security is provided by access controls defined at a user level. The data will be stored on network drives: _http://www.strath.ac.uk/it/filestore/_ </td> </tr> <tr> <td> CPI </td> <td> The company has an IT group who have responsibility for IT infrastructure and data security. Electronic data is stored locally on network drives and/or database systems (IDBS). Data is backed up daily to tape and stored in fireproof safes. </td> </tr> <tr> <td> GVS </td> <td> GVS has an internal archive where all proprietary data are stored. Moreover, all data are stored in secondary secure archives that are backed up every night. </td> </tr> <tr> <td> FDB </td> <td> Project data is stored on the internal intranet servers. FDB uses Commvault software to back up files on a nightly basis. This is a cloud-based solution and data is backed up to a data centre in London. Data on this server is also covered by our data recovery procedure, which is replicated in real time to the same data centre to enable remote access should we lose the server room. </td> </tr> </table>

Following completion of the project, all responsibilities concerning data recovery and secure storage will pass to the repository storing the dataset. Data will be archived and preserved in the Zenodo data sharing repository. This provides options for making some data openly available and keeping other data under restricted access, as required.

# 9 Ethical aspects

AMECRYS partners are to comply with the ethical principles set out in _Article 34 of the Grant Agreement_, which states that all activities must be carried out in compliance with:

1. ethical principles (including the highest standards of research integrity — as set out, for instance, in the European Code of Conduct for Research Integrity (European Science Foundation, 2011) — and including, in particular, avoiding fabrication, falsification, plagiarism or other research misconduct) and
2. applicable international, EU and national law.

The AMECRYS project does not involve the use of human participants or personal data in the research, and therefore there is no requirement for ethical review.

**Confidentiality**

AMECRYS partners must keep any data, documents or other material confidential during the implementation of the project. Further details on confidentiality can be found in _Article 36 of the Grant Agreement_, along with the obligation to protect results in _Article 27_.
# Other issues

As well as European Commission policies on open data management, project partners must also adhere to their own institutional policies and procedures for data management:

**IMP**

_https://www.imperial.ac.uk/admin-services/ict/self-service/be-secure/protect-college-personal-information/sensitive-info/recommended-options/_

_https://www.imperial.ac.uk/admin-services/ict/self-service/be-secure/protect-college-personal-information/encryption/_

**UCAL**

Source: _http://www.unical.it/portale/ateneo/stat_reg/_

* Regolamento per la gestione dell'innovazione e della proprietà intellettuale e industriale (Regulations for the management of innovation and intellectual and industrial property). Rectoral Decree n. 1597, 19/10/2015
* Codice di comportamento dell'Università della Calabria (Code of conduct of the University of Calabria). Rectoral Decree n. 2653, 23/12/2014

**UST**

_http://www.strath.ac.uk/staff/policies/informationsecurity/_

_http://www.strath.ac.uk/media/ps/cs/gmap/academicaffairs/policies/research_code-of_practice_-_May_2010.pdf_

_http://www.strath.ac.uk/media/ps/cs/gmap/academicaffairs/policies/Research_Data_Policy_v1.pdf_

_http://www.strath.ac.uk/media/ps/cs/gmap/academicaffairs/policies/Research_Data_Policy_for_website.pdf_

**CPI**

IT policies for the company are set out in written policies which are subject to periodic review.

**FDB**

FDB has its own set of internal policies and procedures on data management.
0380_ERA-PLANET_689443.md
**Introduction**

The present survey aims at collecting information about data sharing and data management conditions and procedures for ERA-PLANET products. It will be an input for the definition of the Data Management Plan, following the ERA-PLANET Data Management Principles (EDMP):

EDMP-1 All data generated in the action must be deposited in a research data repository and made accessible free of charge and at the FAIR conditions described in the DMP;

EDMP-2 All the scientific results generated in the action (e.g. presented in a publication) must be reproducible, providing the required data and information about tools and instruments necessary for validation;

EDMP-3 All data generated in the action, which are relevant, directly or indirectly, for information to policy and decision-makers in key societal benefit areas, must be accessible through GEOSS and Copernicus at the conditions described in the DMP and in compliance with the GEOSS DSP and GEOSS-DMP.

The ERA-PLANET Data Management Principles are based on the requirements for participation in the Horizon 2020 Research and Innovation Framework Programme as stated in the ERA-PLANET Grant Agreement, and on the requested contribution to the GEOSS initiative and the Copernicus Programme (see “Data Management Plan - Deliverable D4.5,” 2018). In general terms, research data should be 'FAIR', that is findable, accessible, interoperable and re-usable. These principles precede implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution. The following section has been structured in a way conformant to the H2020 Programme Guidelines on FAIR Data Management in Horizon 2020, Version 3.0, 26 July 2016 1 .

**Identification and description of relevant datasets**

Identify all the categories of datasets that you will collect/generate during the ERA-PLANET Transnational Project. For each data category please fill in the following template. If you need clarifications about a question, please contact the DMP responsible party for your transnational project. If you do not know the possible answer yet, please answer accordingly (“not yet considered”, “to be defined”), providing any information that may help the DMP responsible party to identify existing barriers and issues to interoperability.

<table> <tr> <th> **Question** </th> <th> **Answer** </th> </tr> <tr> <td> _Name of person/organization responding to the survey_ </td> <td> </td> </tr> <tr> <td> _Name of data category_ </td> <td> _Please assign a short name to the data category for reference_ </td> </tr> <tr> <td> _**Data Summary**_ </td> <td> </td> </tr> <tr> <td> _What is the purpose of the data collection/generation and its relation to the objectives of the project?_ </td> <td> _The DMP only covers data relevant to the objectives of the project._ </td> </tr> <tr> <td> _To whom might it be useful ('data utility')?_ </td> <td> _Please indicate the stakeholder categories which could find your data of interest (e.g. scientists, citizens, policy-makers, industry). This information helps to understand the data value and to evaluate potential requirements (e.g. on data quality documentation)._ </td> </tr> <tr> <td> _**Making data findable, including provisions for metadata**_ </td> <td> </td> </tr> <tr> <td> _Are the data produced and/or used in the project discoverable with metadata? What metadata will be created?
In case metadata standards do not exist in your discipline, please outline what type of metadata will be created and how._ </td> <td> _The INSPIRE Profile of ISO 19115 can be considered as an example of a minimal set of information needed for data discovery (https://inspire.ec.europa.eu/documents/inspire-metadata-implementing-rules-technical-guidelines-based-en-iso19115-and-en-iso-1). The Research Data Alliance provides a Metadata Standards Directory (http://rd-alliance.github.io/metadata-directory/) that can be searched for discipline-specific standards and associated tools. An illustrative minimal discovery record is sketched after this template._ </td> </tr> <tr> <td> _Are data identifiable and locatable by means of a standard identification mechanism (e.g. persistent and unique identifiers such as Digital Object Identifiers)?_ </td> <td> </td> </tr> <tr> <td> _**Making data openly accessible**_ </td> <td> </td> </tr> <tr> <td> _Are datasets openly accessible? If certain datasets cannot be shared (or need to be shared under restrictions), explain why, clearly separating legal and contractual reasons from voluntary restrictions. Also explain where and how the conditions for access are described (i.e. a machine-readable license)._ </td> <td> _Please note that according to EDMP-2, if data are necessary to validate the scientific results of a publication generated in the project, the conditions described in the DMP must not pose any barrier to reproducibility (data must be openly accessible). Note that embargo periods in making data accessible are reported in the re-use section._ </td> </tr> <tr> <td> _Does dataset access require some specific software tools? If yes, are they easily available (e.g. executable or source code)?_ </td> <td> _Please note that according to EDMP-2, if data are necessary to validate the scientific results of a publication generated in the project, information about tools and instruments necessary for validation is required._ </td> </tr> <tr> <td> _Where will the data and associated metadata, documentation and code be deposited? Preference should be given to certified repositories which support open access where possible. Will data and all associated metadata be discoverable through catalogues and search engines?_ </td> <td> _Please note that according to EDMP-1, all data generated in the project must be deposited in a research data repository._ _Useful listings of repositories include:_ * _Registry of Research Data Repositories (http://www.re3data.org/)_ * _Some repositories like Zenodo (https://zenodo.org/, an OpenAIRE and CERN collaboration) allow researchers to deposit both publications and data, while providing tools to link them._ _Other useful tools include DMPonline (https://dmponline.dcc.ac.uk/) and platforms for making individual scientific observations available, such as ScienceMatters (https://www.sciencematters.io/)._ </td> </tr> <tr> <td> _**Making data interoperable**_ </td> <td> </td> </tr> <tr> <td> _Are the data produced in the project interoperable, that is allowing data exchange and re-use between researchers, institutions, organisations, countries, etc. (i.e.
adhering to standards for formats, as much as possible compliant with available (open) software applications, and in particular facilitating re-combinations with different datasets from different origins)?_ </td> <td> _Information about the adopted data format is particularly important for (interdisciplinary) interoperability._ </td> </tr> <tr> <td> _Will you be using standard vocabularies for all data types present in your data set, to allow inter-disciplinary interoperability?_ </td> <td> _Information about the vocabulary used for specific metadata fields (e.g. keywords) is particularly important for (interdisciplinary) interoperability._ </td> </tr> <tr> <td> _**Increase data re-use**_ </td> <td> </td> </tr> <tr> <td> _Is the data safely stored in certified repositories for long-term preservation and curation?_ </td> <td> </td> </tr> <tr> <td> _How will the data be licensed to permit the widest re-use possible? Does the data require attribution? If significant in a scientific paper, does it require citation or authorship?_ </td> <td> _Please provide information about the policy for access, re-use, attribution, etc._ 2 </td> </tr> <tr> <td> _When will the data be made available for reuse? If an embargo is sought to give time to publish or seek patents, specify why and how long this will apply, bearing in mind that research data should be made available as soon as possible._ </td> <td> _Please note that according to the ERA-PLANET Grant Agreement: “the data, including associated metadata, needed to validate the results presented in scientific publications [must be made accessible] as soon as possible;”_ </td> </tr> <tr> <td> _Will data include provenance metadata indicating the origin and processing history of raw observations and derived products, to ensure full traceability of the product chain?_ </td> <td> </td> </tr> <tr> <td> _Are data quality assurance processes described?_ </td> <td> </td> </tr> <tr> <td> _Will data be fully documented, including all elements necessary to access, use, understand, and process it, preferably via formal structured metadata? Will data also be described in peer-reviewed publications referenced in the metadata record?_ </td> <td> _For geospatial data, are the projection and dimensions clearly defined? Is the exact meaning of each field and other information needed to understand the meaning of the data components completely described?_ </td> </tr> <tr> <td> _Will data be accessible via online services, including user-customizable services for visualization and computation?_ </td> <td> </td> </tr> <tr> <td> _Is the complete dataset available for download? For datasets that are composed of several elements (e.g.
a long time series), is there a protocol or an API for automatic downloading?_ </td> <td> _In some cases only a representation of the dataset is available, or only a fragment of the data can be downloaded each time. In some cases data is only accessible through a web portal that requires several steps to get access to the data._ </td> </tr> <tr> <td> _**Allocation of resources**_ </td> <td> </td> </tr> <tr> <td> _What are the costs for making data FAIR in your project?_ </td> <td> _Please provide information about possible financial barriers to FAIR access to research data generated in the project._ </td> </tr> <tr> <td> _Who will be responsible for data management and preservation?_ </td> <td> </td> </tr> <tr> <td> _Are the resources for long-term preservation discussed (costs and potential value, who decides, and how and what data will be kept and for how long)?_ </td> <td> _Data should be protected from loss and preserved for future use; preservation planning will be for the long term and include guidelines for loss prevention, retention schedules, and disposal or transfer procedures. Data and associated metadata should be held in data management systems that are periodically verified to ensure integrity, authenticity and readability. Data should be managed to perform corrections and updates in accordance with reviews, and to enable reprocessing as appropriate; where applicable this follows established and agreed procedures._ </td> </tr> </table>
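To make the discovery-metadata questions above more concrete, the sketch below shows a minimal discovery record of the kind the INSPIRE/ISO 19115 guidance asks for. It is written as a plain Python dictionary; all field names and values are illustrative placeholders, not an ERA-PLANET specification.

```python
# Minimal sketch of a discovery-metadata record (illustrative only).
# Field names loosely follow the INSPIRE/ISO 19115 discovery elements
# mentioned above; none of the values refer to a real dataset.
import json

record = {
    "title": "Example ERA-PLANET dataset",
    "abstract": "Short description of what the dataset contains and why.",
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "keywords": ["Earth observation", "GEOSS", "example"],
    "temporal_extent": {"begin": "2018-01-01", "end": "2018-12-31"},
    "bounding_box": {"west": -10.0, "east": 30.0, "south": 35.0, "north": 60.0},
    "lineage": "Derived product; raw observations and processing steps documented.",
    "license": "CC-BY-4.0",
    "responsible_party": "Name and contact of the data provider",
}

print(json.dumps(record, indent=2))  # serialise for deposit alongside the data
```

Even such a small record already answers the findability questions of the template: what the data is, who produced it, where it is located, and under which licence it may be re-used.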
0386_ARCADIA_645372.md
**Executive Summary**

This deliverable presents the data management plan for the ARCADIA project. The data management plan describes what kind of data is generated or collected in the ARCADIA project and how this data is published openly. A simple decision process is defined that classifies each result as either public or non-public. The publishing platforms used are the project website, the OwnCloud platform and GitHub for open-sourced code. All these platforms can be accessed openly. The list of publications so far includes a set of deliverables, including the identification of the requirements for the development of the ARCADIA framework (D2.1) and the corresponding ARCADIA context model (D2.2). At the current phase of the project, there are no scientific publications; however, contributions have been provided on behalf of the project to a set of EU-project clustering activities, including the preparation of a white paper describing software engineering evolution challenges. The list of research data expected during the project consists of open-sourced, trust-related software components and a set of statistics and use-case implementation results. Both datasets are expected to be collected during the implementation and evaluation phase of the project and are therefore subject to change.

# 1 Introduction

The results of the ARCADIA project need to be published to communicate and spread the knowledge to all interested communities and stakeholders. Published results generate wider interest in the improvements achieved by the project and thereby facilitate and strengthen exploitation opportunities. The goal of this report is to list publishable results and research data and to investigate the appropriate methodologies and open repositories for data management and dissemination. The ARCADIA project partners aim to offer as much of the information generated by the ARCADIA project as possible through open access. Such information includes scientific publications issued by the project consortium, published white papers, contributions to standardization bodies, open source code, datasets used or collected through the realization of the use cases, and anonymous interview results. In general, there are two types of project results, which differ in the way they are published; namely, publications and other research data. Our publication strategy follows the ideas of open access and open research data. Scientific publications and related research data are published openly as far as possible. On the other hand, not all data collected or generated can be published openly, as it may contain private information or interfere with legal aspects. This kind of data must be identified and protected accordingly.

## 1.1 Scope of the Deliverable

Each project in the EC's Horizon 2020 program has to define what kind of results are generated or collected during the project's runtime and when and how they are published openly. This document initially describes which results have been published or are expected to be published in the ARCADIA project after the first nine months. For all results generated or collected during the ARCADIA project, a description is provided, including the purpose of the document, the standards and metadata used for storage, the facility used for open sharing, and the approach for long-term archiving. This document is updated on a regular basis. However, it does not describe how the results are exploited, which is part of D6.15 and D6.16.
## 1.2 Structure of the Deliverable

The document is separated into three sections. The first section defines the purpose of the document, its structure, and terms that are necessary to understand it. In section two, we define a process that needs to be applied to all results collected or generated during the project. The process defines whether a result has to be published or not. In addition, we provide a summary of all publishing platforms used in the project. In the third section, we list all publications and related data that have already been, or may be, generated or collected during the project. For each result we provide - in accordance with the data management guideline [1] - a short description, the chosen way of open access, and a long-term storage solution.

## 1.3 Terminology

**Open Access** : Open access means unrestricted access to research results. Often the term open access is used for naming free online access to peer-reviewed publications. Open access is expected to enable others to:

* build on top of existing research results,
* avoid redundancy,
* participate in open innovation, and
* read about the results of a project or inform citizens.

All major publishers in computer science - like ACM, IEEE, or Springer - have participated in the idea of open access. Both green and gold open access levels are promoted. Green open access means that authors eventually publish their accepted, peer-reviewed articles themselves, e.g. by depositing them in their own institutional repositories. Gold open access means that a publisher is paid (e.g. by the authors) to provide immediate access on the publisher's website, without charging any further fees to the readers.

**Open Research Data** : Open research data is related to the long-term deposit of underlying or linked research data needed to validate the results presented in publications. Following the idea of open access, all open research data needs to be openly available, usually meaning online availability. In addition, standardized data formats and metadata have to be used to store and structure the data. Open research data is expected to enable others to:

* understand and reconstruct scientific conclusions, and
* build on top of existing research data.

**Metadata** : Metadata defines information about the features of other data. Usually metadata is used to structure larger sets of data in a descriptive way. Typical metadata are names, locations, dates, storage data types, and relations to other data sets. Metadata is very important when it comes to indexing and searching larger data sets for a specific kind of information. Sometimes metadata can be retrieved automatically from a dataset, but often some manual classification is also needed.

# 2 Publishing Infrastructure for Open Access

The ARCADIA publication infrastructure consists of a process and several web-based publication platforms that together provide long-term open access to all publishable results generated or collected in the project. Both the process and the web-based platforms used are described in the following subsections.

## 2.1 Publishing Process

The project partners defined a simple, deterministic process that determines whether a result in the project must be published or not. The term result is used for all kinds of artefacts collected or generated during the project, like white papers, scientific publications, and anonymous usage data. By following the process, each result is either classified as public or non-public.
Public means that the result must be published under the open access policy. Non-public means that it must not be published. A non-public classification always prevails over a public classification. For each result generated or collected during the project runtime, the following questions must be answered to classify it:

1. Does a result provide significant value to others or is it necessary to understand a scientific conclusion? If this question is answered with yes, then the result is classified as public. If this question is answered with no, the result is classified as non-public. For example, code that is very specific to the ARCADIA platform is usually of no scientific interest to anyone, nor does it add any significant contribution.

2. Does a result include personal information that is not the author's name? If this question is answered with yes, the result is classified as non-public. Personal information beyond the name must be removed if it should be published.

3. Does a result allow the identification of individuals even without the name? If this question is answered with yes, the result is classified as non-public. The information must be reduced to a level where single individuals cannot be identified, usually by using abstraction techniques. Sometimes data inference can be used to superimpose different user data and indirectly reveal a single user's identity. In ARCADIA, such datasets are non-public. ARCADIA will use established anonymization techniques to conceal a single user's identity, e.g. abstraction, dummy users, or non-intersecting features.

4. Does a result include business or trade secrets of one or more partners of the project? If this question is answered with yes, the result is classified as non-public. Business or trade secrets need to be removed in accordance with all partners' requirements before it can be published.

5. Does a result name technologies that are part of an ongoing, project-related patent application? If this question is answered with yes, then the result is classified as non-public. Of course, results can be published after the patent has been filed.

6. Can a result be abused for a purpose that is undesired by society in general, or does it contradict societal norms and the project’s ethics? If this question is answered with yes, the result is classified as non-public.

7. Does a result break national security interests for any project partner? If this question is answered with yes, the result is classified as non-public.

## 2.2 Publishing Platforms

In ARCADIA, we use several platforms to publish our results openly. The following list presents the platforms used during the project and describes their concepts for publishing, storage, and backup.

### 2.2.1 OwnCloud

OwnCloud is a suite of client-server software for creating and using file hosting services. OwnCloud is functionally very similar to the widely used Dropbox, with the primary functional difference being that OwnCloud is free and open-source, thereby allowing anyone to install and operate it without charge on a private server, with no limits on storage space (except for disk capacity or account quota) or the number of connected clients. In order for desktop machines to synchronize files with their OwnCloud server, desktop clients are available for PCs running Windows, OS X, FreeBSD or Linux. Mobile clients exist for iOS and Android devices. Files and other data (such as calendars, contacts or bookmarks) can also be accessed using a web browser without any additional software.
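Besides the desktop, mobile and browser clients, OwnCloud also exposes a standard WebDAV interface, so project artefacts can be uploaded by scripts. The following minimal sketch illustrates this; the server URL, target folder and credentials are placeholders, not the actual ARCADIA configuration:

```python
# Minimal sketch (illustrative): uploading a file to an OwnCloud instance
# over its standard WebDAV interface (exposed under remote.php/webdav by
# default). The base URL and credentials below are placeholders.
import requests

WEBDAV_BASE = "https://owncloud.example.org/remote.php/webdav"  # placeholder
AUTH = ("username", "password")  # placeholder credentials

def upload(local_path: str, remote_path: str) -> None:
    """PUT a local file to the given path on the OwnCloud server."""
    with open(local_path, "rb") as handle:
        response = requests.put(f"{WEBDAV_BASE}/{remote_path}",
                                data=handle, auth=AUTH)
    response.raise_for_status()  # fail loudly if the upload was rejected

upload("D2.1.pdf", "deliverables/D2.1.pdf")
```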
Any updates to files are pushed between all computers or mobile devices connected to a user's account. The OwnCloud platform for ARCADIA is hosted by UBITECH and runs on a server at UBITECH's premises, thereby keeping all data on a self-managed server. The OwnCloud platform is securely backed up in the UBITECH system infrastructure and holds all project-related data. This includes data about the ARCADIA members, the ARCADIA projects, deliverables, and publications. Information and data from services or platforms of ARCADIA project partners will also be stored on the OwnCloud platform. The ARCADIA OwnCloud platform will not duplicate any project-related information to external servers, such as issues, requirements, product code, or deployment information. The ARCADIA OwnCloud platform will be available during the project runtime, and will still be available for at least one year after the official project end.

Web link: _https://euprojects.net/owncloud/_

### 2.2.2 ARCADIA Website

The partners in the ARCADIA consortium decided early to set up their own project-related webpage. This webpage describes the mission and the general approach of the project and its development status. A blog informs about news on a regular basis. Later in the project, the developed ARCADIA software components and software development paradigm will be announced. A dedicated area for downloads is used to publish reports and white papers. All documents are published using the portable document format (PDF). All downloads are enriched with simple metadata information like the title and the type of the document. The webpage is hosted by partner ADITESS on its own infrastructure. All webpage-related data is backed up on a regular basis. All information on the ARCADIA website can be accessed without creating an account. The information is indexed by web search engines like Microsoft Bing or Google Search. The webpage is backed up manually once per month.

Web link: _http://www.arcadia-framework.eu/_

### 2.2.3 GitHub

GitHub is a well-established online repository platform which supports distributed source code development, management, and revision control. It is primarily used for source code data. It enables world-wide collaboration between developers and also provides some facilities to work on documentation and to track issues. GitHub provides paid and free service plans. Free service plans can have any number of public, open-access repositories with unlimited collaborators. Private, non-public repositories require a paid service plan. Many open-source projects use GitHub to share their results for free. The platform uses metadata like contributors' nicknames, keywords, time, and data file types to structure the projects and their results. The terms of service state that no intellectual property rights are claimed by GitHub Inc. over provided material. For textual metadata items, English is preferred. The service is hosted by GitHub Inc. in the United States. GitHub uses a rented Rackspace hardware infrastructure where data is continuously backed up to different locations. All source-code components that are implemented during the project and decided to be public will be uploaded to an open-access GitHub repository.

Web link: _https://github.com/_

# 3 Project Results/Datasets

In this section, a list of all existing or foreseeable results is presented, separated into public deliverables, publications and open research data.
For each result, and in accordance with the data management guideline [1], we provide a description, name the standards used for storage and metadata, and define which open access platform is chosen.

## 3.1 Deliverables

#### 3.1.1 Description of Highly Distributed Applications and Programmable Infrastructure Requirements

Data set reference and name

Description of Highly Distributed Applications and Programmable Infrastructure Requirements (report D2.1).

Data set description

This document is the first technical deliverable of the project and aims to depict the general context, to identify relevant actors and technologies, and to set the main group of requirements that will drive the design of the ARCADIA framework. The document is a result of the ARCADIA Task 2.1.

Standards and metadata

The document is stored in the cross-platform portable document format (PDF). Metadata is added manually and includes the title as well as the partner organizations and members associated with this report.

Data sharing

The document was published openly on the ARCADIA webpage. The access is free for everyone and without restrictions.

Web link: _http://www.arcadia-framework.eu/wp/documentation/deliverables/_

Archiving and preservation (including storage and backup)

The document was published on the ARCADIA webpage. All earlier versions of the document are archived on the project-internal OwnCloud repository. The repository is backed up on a regular basis by UBITECH.

#### 3.1.2 Definition of the ARCADIA context model

Data set reference and name

Definition of the ARCADIA context model (report D2.2).

Data set description

This document introduces the first version of the facets included in the ARCADIA context model and elaborates on their usage. A standalone version of the ARCADIA Context Model is presented, providing all appropriate information needed by the end user prior to the explanation of the modeling artifacts.

Standards and metadata

The document is stored in the cross-platform portable document format (PDF). Metadata is added manually and includes the title as well as the partner organizations and members associated with this report.

Data sharing

The document was published openly on the ARCADIA webpage. The access is free for everyone and without restrictions.

Web link: _http://www.arcadia-framework.eu/wp/documentation/deliverables/_

Archiving and preservation (including storage and backup)

The document was published on the ARCADIA webpage. All earlier versions of the document are archived on the project-internal OwnCloud repository. The repository is backed up on a regular basis by UBITECH.

It should be noted that all the public deliverables of ARCADIA, upon their finalization, are going to be published and made available on the project's website.

## 3.2 Publications

### 3.2.1 Cluster “Software Engineering for Services and Applications” White Paper

Data set reference and name

ARCADIA white paper 1: Cluster “Software Engineering for Services and Applications” White Paper.

Data set description

This is a white paper that is still under preparation within the cluster “Software Engineering for Services and Applications”. The white paper aims to collect the challenges that have to be faced in the design and development of novel software engineering approaches. It is a collaborative work among all the projects that participate in the cluster. The work is coordinated by UBITECH and is going to include the challenges identified on behalf of the ARCADIA project.

Standards and metadata

The document is under preparation.
The final version is going to be stored in the cross-platform portable document format (PDF). Metadata is going to be added manually and will include the title, the partner organizations, and keywords that classify this research paper.

Data sharing

This research paper will be published on the webpage of the Cluster “Software Engineering for Services and Applications”. It will be freely available worldwide.

Web link: _https://eucloudclusters.wordpress.com/software-engineering-for-services-and-applications/_

Archiving and preservation (including storage and backup)

The research paper will be made available by the Cluster “Software Engineering for Services and Applications” and will therefore be persistently available to the public.

## 3.3 Research Data

With regards to research data that is going to be used in the project, the ARCADIA Consortium does not envisage the need to collect raw data, nor to produce huge amounts of data during the project lifespan. At the time of writing, the Consortium plans to put at the disposal of the “Pilot on Open Research Data in Horizon 2020” the input data set that will be used in the final demonstrations of the ARCADIA use cases, with the aim to facilitate the reproducibility of the experiments and confirm the correctness of the obtained results. The final demonstrations have been planned to be organized within the works of WP5, in the dedicated Tasks 5.1, 5.2, 5.3 and 5.4, and will constitute the core of the deliverables D5.1, D5.2 and D5.3. All ARCADIA partners participate in WP5 and will contribute to defining and gathering the demonstration data set that will be available in the use cases.
0387_RAWFIE_645220.md
# 1 Introduction

The present document is the last version of a series of three documents related to the RAWFIE Data Management policy. The document defines the rules applied to all datasets generated during the project. The purpose of “D7.6 (c) - Data Management plan” is to provide an overview of the main elements of the data management plan at the end of the project. It also describes the policy adopted to grant access to the parties interested in the data generated by the RAWFIE platform during its development, tests and operation. Finally, it discusses the compliance of the RAWFIE data structure, management and policy with the EU regulations and directives. This deliverable was updated throughout the lifespan of the project. Figure 1 presents the main steps and actions involved in a typical data management cycle, as described in the previous versions of the deliverable.

**Figure 1: Data Management Cycle** — Collect (data sets, data streams) → Process (data processing, models, analytics) → Share (dissemination) → Archive (storage, preservation)

The RAWFIE Data Management Plan (DMP) is realised in accordance with the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 1 . It also implements the needed actions to be compliant with the Guidelines on FAIR Data Management in Horizon 2020 2 (see Table 5 in the ANNEX at the end of the document). The document’s structure is as follows: Section 2 gives an overview of the data description, data types and data processing in the RAWFIE ecosystem. Section 3 contains the data access procedure and the dissemination mechanisms that will take place to provide reusability and access in the future. Section 4 presents the software tools, together with the standards and data formats, for handling the processing of research data during the execution of an experiment and over the project lifetime. Finally, Section 5 describes the procedures for archiving and long-term storage.

# 2 Dataset reference and processing

RAWFIE data related to the execution of the experiments are distinguished into the following categories:

* _**Dynamic data**_ : this data refers to the information that describes a UxV during an experiment in terms of system (e.g. operating system) information, central processing unit usage, storage usage, location, etc.
* _**Static data**_ : this data refers to the characteristics of testbeds and resources. The RAWFIE federation adds static information in advance, like resource descriptions and properties, the type of sensors for each UxV, other UxV characteristics, testbed location, etc.
* _**Raw data**_ : data produced during the execution of the experiments. Any kind of sensor that participates in an experiment generates raw data. Raw data is pushed to a message bus, which publishes this information upon request either from an experimenter or from a device that participates in the experiment, storing it for a predefined time interval.
* _**Geospatial data**_ : refers to the geospatial information of data (georeferenced data) in the RAWFIE system. The RAWFIE system will generate and collect geodata during the experiments. Geospatial data belongs to both the static and dynamic data categories.

## 2.1 Dynamic data from experiments

The RAWFIE UxV Protocol was devised to abstract the differences between UxVs and expose a simple, compact, extensible and expressive interface to monitor and control UxVs in a platform-agnostic way.
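As an illustration of what such a message looks like in practice, the sketch below writes out a single Location message as a Python dictionary, using the field names and Avro-style types listed in Table 1 below. The values are invented, and the actual serialisation on the message bus is defined by the protocol itself, not by this sketch.

```python
# Minimal sketch (illustrative, not normative): one UxV Protocol "Location"
# message, following the field names and types of Table 1. All values are
# invented for the example.
import json
import time

location_message = {
    "header": {                            # common header of all messages
        "sourceSystem": "uxv-042",         # dispatching entity
        "sourceModule": "navigation",      # dispatching module
        "time": int(time.time() * 1000),   # timestamp as a long
    },
    "latitude": 38.2466,                   # absolute position (double)
    "longitude": 21.7346,
    "height": 12.5,                        # float
    "n": 0.0, "e": 0.0, "d": 0.0,          # relative North/East/Down (double)
    "depth": None,                         # nullable: not an underwater vehicle
    "altitude": 12.5,                      # nullable float
}

print(json.dumps(location_message))        # ready to publish on a message bus
```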
The RAWFIE infrastructure can support the addition of new UxVs by creating adapters or translators to convert UxV-specific information to the RAWFIE UxV Protocol. The reference frame used to format and publish UxV-generated information is defined in Table 1 below.

**Table 1 - Dynamic Data Overview**

<table> <tr> <th> **Message** </th> <th> **Data** </th> <th> **Data types** </th> <th> **Description** </th> </tr>
<tr> <td> Header </td> <td> sourceSystem sourceModule time </td> <td> string string long </td> <td> All messages of the UxV Message API contain the same header, used to encode basic information about the dispatching entity. </td> </tr>
<tr> <td> CPU Usage </td> <td> header value </td> <td> Header int </td> <td> The amount of CPU resources that is currently in use. </td> </tr>
<tr> <td> Storage Usage </td> <td> header available value </td> <td> Header int int </td> <td> Measurement of storage usage. </td> </tr>
<tr> <td> Fuel Usage </td> <td> header value </td> <td> Header int </td> <td> Amount of available fuel. </td> </tr>
<tr> <td> Location </td> <td> header latitude longitude height n e d depth altitude </td> <td> Header double double float double double double float, null float, null </td> <td> The Location message encodes the position of the UxV in the world. It was designed to support all kinds of UxVs, even when they are not capable of localizing themselves in the world. This message allows the UxV to encode its position in absolute (Latitude, Longitude, and Height) or relative (North/East/Down) coordinates. This message shall be published to the message bus and shall be consumed by any entity that needs to know the location of the UxV. </td> </tr>
<tr> <td> Attitude </td> <td> header phi theta psi </td> <td> Header float float float </td> <td> Angles describing the attitude of a rigid body (i.e., Euler angles). </td> </tr>
<tr> <td> Linear Velocity </td> <td> header x y z </td> <td> Header float float float </td> <td> Vector quantifying the direction and magnitude of the measured linear velocity that a system is exposed to. </td> </tr>
<tr> <td> Angular Velocity </td> <td> header x y z </td> <td> Header float float float </td> <td> Vector quantifying the direction and magnitude of the measured angular velocity that a system is exposed to. </td> </tr>
<tr> <td> Linear Acceleration </td> <td> header x y z </td> <td> Header float float float </td> <td> Vector quantifying the direction and magnitude of the measured linear acceleration that a system is exposed to. </td> </tr>
<tr> <td> Current </td> <td> header value </td> <td> Header float </td> <td> Measurement of electrical current. </td> </tr>
<tr> <td> Voltage </td> <td> header value </td> <td> Header float </td> <td> Measurement of electrical voltage. </td> </tr>
<tr> <td> Sensor Reading Scalar </td> <td> header value Unit </td> <td> Header float Unit </td> <td> This message encodes scalar measurements of sensors. </td> </tr>
<tr> <td> Abort </td> <td> header </td> <td> Header </td> <td> This command instructs the UxV to stop any executing actions and enter standby mode. </td> </tr>
<tr> <td> Goto </td> <td> header location speed timeout </td> <td> Header Location float, null float </td> <td> This command instructs a system to move to a given location at a given speed. </td> </tr>
<tr> <td> KeepStation </td> <td> header location radius speed duration </td> <td> Header Location float float, null float, null </td> <td> This command instructs a system to keep station at a given location. </td> </tr>
<tr> <td> Arm </td> <td> header </td> <td> Header </td> <td> This command instructs a UAV to arm its motors. </td> </tr>
<tr> <td> LaserScan </td> <td> header angle_min angle_max angle_increment time_increment scan_time range_min range_max ranges intensities </td> <td> Header float float float float float float float {"type":"array", "items":"float"} {"type":"array", "items":"float"} </td> <td> This command enables a single scan from a planar laser range-finder. </td> </tr>
<tr> <td> NetwPerfUxV </td> <td> header interface_name bitrate latency lqi rssi active </td> <td> Header string float int int boolean </td> <td> This command provides a UxV network performance report. </td> </tr>
<tr> <td> occupancy_grid </td> <td> header map_load_time map_resolution map_width map_height origin data </td> <td> Header long float int int Location {"type":"array", "items":"int"} </td> <td> Representation of a 2-D grid map, where cells represent the probability of occupancy. </td> </tr>
<tr> <td> proxy_connect_data </td> <td> header srcaddress myaddress sequence rssi lqi </td> <td> Header int int int int int </td> <td> UxV proximity component connectivity data: UxV name, received signal strength indicator associated with the received beacon, and CC1101 radio link quality indicator of the received beacon. </td> </tr>
<tr> <td> RTL </td> <td> header </td> <td> Header </td> <td> This command instructs a UAV to return to its initial position. </td> </tr>
<tr> <td> sensor_info </td> <td> header vendor_name product_name serial types </td> <td> Header string string string {"type":"array", "items":"SensorType"} </td> <td> This command is used to report all the information regarding a sensor, including the quantities that the sensor is able to measure. </td> </tr>
<tr> <td> sensor_info_get </td> <td> header types </td> <td> Header array </td> <td> This command is used to request sensor information, with a list of sensor types. One SensorInfo message shall be dispatched for each sensor whose type is contained in this list. If the list is empty, one SensorInfo message shall be dispatched for each sensor that the UxV has. </td> </tr>
<tr> <td> sensor_publish_control </td> <td> header destinationModule types enabled </td> <td> Header string {"type":"array", "items":"SensorType"} boolean </td> <td> This command is used to either enable or disable publishing of specific sensor data to the message bus. destinationModule is the canonical name of the controlled module; enabled is true to enable publishing to the message bus, false otherwise. </td> </tr>
<tr> <td> sensor_type </td> <td> enum </td> <td> [ "CURRENT", "VOLTAGE", "TEMPERATURE", "CONDUCTIVITY", "SALINITY", "WATER_DENSITY", "SOUND_SPEED", "PRESSURE", "RGB_CAM", "RGBD_CAM", "LIDAR" ] </td> <td> Holds the information about the different kinds of available sensors. </td> </tr>
<tr> <td> system_info </td> <td> header vendor Model type name owner </td> <td> Header string string SystemType string string </td> <td> General system information. </td> </tr>
<tr> <td> system_type </td> <td> enum </td> <td> [ "UUV", "USV", "UGV", "UAV", "FIXED" ] </td> <td> UxV type. </td> </tr>
<tr> <td> Takeoff </td> <td> header height </td> <td> Header double </td> <td> This command instructs a UAV to take off to a given height. </td> </tr>
<tr> <td> UAVStatus </td> <td> header status </td> <td> Header enum </td> <td> Publishes the state of the device and includes notifications to the commands. </td> </tr>
<tr> <td> Unit </td> <td> Unit </td> <td> enum </td> <td> This enumeration contains all the available units of measurement. </td> </tr>
<tr> <td> NetwReportingPeriod </td> <td> header period </td> <td> Header int </td> <td> Command for setting the network performance reporting period on the UxVs. </td> </tr>
<tr> <td> NetwSelectIf </td> <td> header iface </td> <td> Header int </td> <td> Command for selecting the network interface that the UxV shall use to connect to the message bus. </td> </tr> </table>

## 2.2 Static Data from experiments

Static data consists mainly of information related to the initial definition of an experiment. This information is usually defined prior to an experiment's execution and may be updated after its completion. The term ‘static’ does not mean that the information is never updated, but mainly that it does not directly interfere with the actual data generated during the execution of an experiment. Static data mostly relates to the involved resources (e.g. UxV types), sensor types, testbeds and scripts associated with the execution of an experiment, as well as identifiers needed to identify or track an experiment within the RAWFIE platform. These static data are directly maintained/stored in (or can be extracted from) appropriate relational database tables defined at the platform level. Table 2 below provides a complete list of them, appropriately categorized:

**Table 2 - Static Data Overview**

<table> <tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td> </td> <td> **Experiment Related Data** </td> <td> </td> </tr>
<tr> <td> Experiment Id </td> <td> String </td> <td> Identifier for a defined experiment </td> </tr>
<tr> <td> Experiment Name </td> <td> String </td> <td> A (user friendly) name of the experiment </td> </tr>
<tr> <td> Experiment Description </td> <td> String </td> <td> A short description for the experiment </td> </tr>
<tr> <td> User Id </td> <td> Integer </td> <td> Internal identifier that can be used for obtaining additional information about the user that defined the experiment (i.e. name, surname etc.) </td> </tr>
<tr> <td> EDL script </td> <td> String </td> <td> Contains the EDL script initially defined for an experiment (information considered static since it is defined prior to the actual execution) </td> </tr>
<tr> <td> Testbed Id </td> <td> String </td> <td> Identifier of the testbed where the experiment is expected to take place (experiments cannot span multiple testbeds) </td> </tr>
<tr> <td> Resource Ids </td> <td> String[] </td> <td> Identifiers for the resources assigned to an experiment </td> </tr>
<tr> <td> </td> <td> **Execution Related Data** </td> <td> </td> </tr>
<tr> <td> Execution Id </td> <td> String </td> <td> Identifier uniquely identifying an executing/executed experiment within the RAWFIE system </td> </tr>
<tr> <td> Start Execution </td> <td> Timestamp </td> <td> Timestamp denoting the start of execution </td> </tr>
<tr> <td> End Execution </td> <td> Timestamp </td> <td> Timestamp denoting the completion of execution </td> </tr>
<tr> <td> Experiment Status </td> <td> Integer </td> <td> Value indicating the execution status of an experiment (i.e. 0=BOOKED, 1=ONGOING, 2=COMPLETED). This field may be updated during the course of experiment execution </td> </tr>
<tr> <td> </td> <td> **Reservation Related Data** </td> <td> </td> </tr>
<tr> <td> Reservation Id </td> <td> String </td> <td> Identifier of the user-level reservation associated with an experiment </td> </tr>
<tr> <td> User Id </td> <td> Integer </td> <td> Internal identifier that can be used for obtaining additional information about the user that defined the reservation (i.e. name, surname etc.). This value should be the same as the <User Id> mentioned in the **Experiment Related Data** category </td> </tr>
<tr> <td> </td> <td> **Resource Related Data** 3 </td> <td> </td> </tr>
<tr> <td> Resource Name </td> <td> String </td> <td> User friendly name of the resource </td> </tr>
<tr> <td> Resource Description </td> <td> String </td> <td> A short description for the resource </td> </tr>
<tr> <td> Resource Status </td> <td> Integer </td> <td> The latest status of the resource </td> </tr>
<tr> <td> Resource Type </td> <td> Integer </td> <td> Identifier denoting the type of the resource (i.e. UAV, UGV, USV etc.) </td> </tr> </table>

## 2.3 Raw data

The types of raw data generated by UxVs relate to the different sensor types that take part in the context of the experiment. We can classify sensors and related data into the following categories:

* Environmental sensors (temperature, thermal, heat, moisture, humidity, air pressure)
* Position, angle, displacement, distance, speed, acceleration
* Proximity (able to detect the presence of nearby objects)
* Navigation instruments
* Images and/or video feeds

## 2.4 Geospatial data

Geospatial data appears in various formats and relations in the RAWFIE system. Sometimes the data itself has a spatial aspect, sometimes it is just metadata (i.e. descriptive data belonging to the original data). The following list (Table 3) gives an overview of the types of data with a spatial reference that are generated and/or collected inside RAWFIE.

**Table 3 - Geospatial Data Overview**

<table> <tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td> UxV location </td> <td> Point </td> <td> The location of a UxV during an experiment. Used in the Visualisation Engine </td> </tr>
<tr> <td> UxV course </td> <td> Line </td> <td> The current course a UxV is taking, i.e. an extrapolation of the current position together with its direction to know where the UxV will probably be in the next seconds or minutes. </td> </tr>
<tr> <td> Waypoints </td> <td> Point[] </td> <td> A time-ordered list of waypoints for UxV navigation / predefined routes. They can have absolute coordinates or relative ones with respect to the current position (e.g. ‘move 30 meters in the direction of 45°’). Used for experiment authoring and in the resource controller during execution. </td> </tr>
<tr> <td> Geo-fence </td> <td> Polygon </td> <td> Regions where an event or alarm should be triggered when a UxV enters or leaves. Used in experiment authoring (EDL) </td> </tr>
<tr> <td> Sensor measurement location </td> <td> Point </td> <td> Location where a sensor measurement has been recorded. It is metadata for sensor data types. </td> </tr>
<tr> <td> Detected object </td> <td> any </td> <td> An object detected by sensors or evaluation of sensor values. The type of object highly depends on the task which should be performed by the UxV, e.g.: border surveillance: intruders / potential threats; firefighting: trees, fire or empty space which would form a natural block to the spreading fire; monitoring of water canals: cracks in the canal’s wall structure. The position or geo-referenced outline of the object is geospatial metadata of the experiment results </td> </tr>
<tr> <td> Testbed position or area </td> <td> Point Polygon </td> <td> The fixed location of the testbed (metadata). In the simple case it is just a coordinate; in the more precise case it is the area of the testbed. Used in experiment authoring (EDL) and resource exploring </td> </tr>
<tr> <td> Testbed surroundings </td> <td> Any </td> <td> The surroundings of a testbed. These could influence the experiments. Potential objects could be: barriers (buildings, trees etc.), streets, water ways, water surface, digital elevation model (above and under water). Used in experiment authoring (EDL) and resource exploring, as well as for validation of experiments in their aftermath. </td> </tr>
<tr> <td> Testbed obstacles </td> <td> Any </td> <td> The obstacles that might appear at some of the testbed areas and should be avoided during an experiment. Used in experiment authoring (EDL) as forbidden areas where the experimenter cannot navigate a device. Also used during geofencing and dynamic navigation of the devices. </td> </tr> </table>

## 2.5 Processed data, models and analytics

Processed data refers to the outcome of models and statistical methods that will be generated by the execution of a data analysis experiment through the data analysis tool. Typical models include classification, regression and outlier detection. Depending on the type of data the end user wishes to perform analytics tasks on, the models used to carry out the analysis tasks and the results of said tasks can display a lot of variety. Indeed, while a streaming analytics task consists in analysing a potentially never-ending stream of data without seeing a given data point twice (once it has been seen, it is discarded by the algorithm), batch analytics tasks can perform several passes over the data, which is available in its entirety from the start of the experiment. The data analysis component enables the end user to carry out tasks of both natures: streaming tasks and batch tasks.
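To make this contrast concrete, the following minimal sketch (illustrative only, not part of the RAWFIE codebase) computes a mean in both settings: the streaming version sees each point exactly once and then discards it, while the batch version has the whole dataset available from the start.

```python
# Minimal sketch (illustrative only) of the streaming-versus-batch contrast
# described above.
def streaming_mean(stream):
    """Single pass; each point is processed once and then discarded."""
    count, mean = 0, 0.0
    for x in stream:
        count += 1
        mean += (x - mean) / count  # incremental (Welford-style) update
    return mean

def batch_mean(dataset):
    """Whole dataset available up front; several passes would be possible."""
    return sum(dataset) / len(dataset)

readings = [20.1, 20.4, 19.8, 20.0]  # e.g. scalar sensor readings
assert abs(streaming_mean(iter(readings)) - batch_mean(readings)) < 1e-9
```

Both variants return the same value here; the practical difference is that only the batch variant could make a second pass over the data, e.g. to compute a quantity that depends on the already-known mean.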
Since the settings are fundamentally different in terms of data availability and processing routine, the data analysis component is provided with a suite of ready-to-use algorithms of both families. Those tasks can be easily augmented, extended and even combined through the Data Analysis Tool, thanks to the user interface it provides. Once the data is processed, it can, depending on the nature of the task, be streamed to a time-series-based storage (the Whisper 4 time series database, provided through Grafana 5 , the result repository), or stored in HDFS 6 . Data stored in HDFS can then be accessed through the tool, for further experiments or simply retrieved for visualisation. The Data Analysis Tool’s notebook interface indeed facilitates data visualization, as plots can easily be embedded in notebooks.

# 3 Open Access of RAWFIE outcomes

The scientific and technical results of the RAWFIE project are expected to be of high interest for the scientific community. Throughout the duration of the project, RAWFIE partners have disseminated (subject to their legitimate interests) the obtained results and knowledge to the relevant scientific communities through contributions in journals and international conferences, mainly in the fields of IoT, wireless communications, robotics, etc. The RAWFIE project also produced, transformed and used data that is of interest and has value for the next phases of the RAWFIE deployment on the one hand, and for other initiatives and contexts on the other hand. This chapter addresses the access to these outcomes. All publications come after the more general decision on whether to go for a publication directly or to first seek protection by registering the IPR. If the Steering Committee decides that the scientific research is not to be protected through IPR, but rather published directly, then the project is aware that Open Access must be granted to all scientific publications resulting from Horizon 2020 actions. This is done in accordance with the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. The process shown in Figure 2 was taken from the aforementioned document.

**Figure 2: Process for handling access to research results** — research results lead either to a decision to disseminate/share (publications via gold or green OA; deposited research data with access and use free of charge, or restricted access and/or use) or to a decision to exploit/protect (patenting; business plan, models & data value chain).

In the ‘gold’ Open Access (OA) approach to a peer-reviewed scientific research article, the scientific publisher immediately provides this article in Open Access mode. The associated costs shift away from readers. The most common business model is the one-off payment by authors. These costs, often referred to as Article Processing Charges (APCs), are usually paid by the researcher's university or research institute, or by the agency funding the research. In other cases, subsidies or other funding models cover the costs of Open Access. The ‘green’ Open Access approach to peer-reviewed scientific research articles means that the author, or a representative, self-archives (deposits) the published article or the final peer-reviewed manuscript in an online repository before, at the same time as, or after publication. Some publishers request to apply the Open Access mode only after an embargo period has elapsed.
This embargo period is to allow the scientific publisher to recoup its investment by selling subscriptions and charging pay-per-download/view fees during an exclusivity period.

## 3.1 Categories of RAWFIE data outputs for the Open Access mode

The following categories of RAWFIE outputs are subject to free-of-charge Open Access:

* Public Deliverables
* Conference/Workshop presentations (which may, or may not, be accompanied by papers, see below)
* Conference/Workshop papers and articles for specialist magazines
* Research (Experiment) Data and metadata

Furthermore, the provision of specific data sets to selected organisations will be possible in order to fulfil the H2020 requirements of “Grand Challenges” 7 for third parties to access, mine, exploit, reproduce and disseminate the results of the RAWFIE project. The beneficiaries will have access to the information about the tools and instruments, for the sake of validating the results they will produce.

### 3.1.1 Open Access to RAWFIE Public Deliverables

##### 3.1.1.1 Data Sharing

Open Access to the public deliverables is achieved in RAWFIE by depositing the data into online repositories. The public deliverables are stored on the RAWFIE Web site 8 , after approval by the Project Officer (if the document is subsequently updated, the original version will be replaced by the latest version).

##### 3.1.1.2 Archiving and Preservation

Open Access to the project public deliverables will be maintained for at least 3 years following the project completion, through the Website.

##### 3.1.1.3 Archived deliverables

The following table (Table 4) summarizes the archived deliverables as of 01/04/2019, which are available at the RAWFIE web page 7 .

**Table 4 - Archived deliverables**

<table>
<tr> <td> _D3.1 - Specification & Analysis of RAWFIE Components Requirements (a)_ </td> <td> WP3 </td> </tr>
<tr> <td> _D3.2 - Specification & Analysis of RAWFIE Components Requirements (b)_ </td> <td> WP3 </td> </tr>
<tr> <td> _D3.3 - Specification & Analysis of RAWFIE Components Requirements (c)_ </td> <td> WP3 </td> </tr>
<tr> <td> _D4.1 - High Level Design and Specification of RAWFIE Architecture (a)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.2 - Design and Specification of RAWFIE Components (a)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.3 - Pilot Experimentation Scenarios for Validating and Testing (a)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.4 - High Level Design and Specification of RAWFIE Architecture (b)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.5 - Design and Specification of RAWFIE Components (b)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.6 - Pilot Experimentation Scenarios for Validating and Testing (b)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.7 - High Level Design and Specification of RAWFIE Architecture (c)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.8 - Design and Specification of RAWFIE Components (c)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.9 - Pilot Experimentation Scenarios for Validating and Testing (c)_ </td> <td> WP4 </td> </tr>
| D6.1 - RAWFIE Operational Platform Testing and Integration Report (a) | WP6 |
| D6.2 - RAWFIE Platform Validation (a) | WP6 |
| D6.3 - RAWFIE Operational Platform Testing and Integration Report (b) | WP6 |
| D6.4 - RAWFIE Platform Validation (b) | WP6 |
| D6.5 - RAWFIE Operational Platform Testing and Integration Report (c) | WP6 |
| D6.6 - RAWFIE Platform Validation (c) | WP6 |
| D7.1 - Building the RAWFIE Community | WP7 |
| D7.2 - Training | WP7 |
| D7.3 - Dissemination Activities | WP7 |
| D7.4 - Data Management Plan (a) | WP7 |
| D7.5 - Data Management Plan (b) | WP7 |
| D7.6 - Data Management Plan (c) | WP7 |
| D8.1 - Open Calls, Report on Selection | WP8 |
| D8.2 - Open Calls, Final Report (a) | WP8 |
| D8.3 - Open Calls, Report on Selection (b) | WP8 |
| D8.4 - Open Calls, Final Report (b) | WP8 |
| D8.5 - Open Calls, Report on Selection (c) | WP8 |

### 3.1.2 Open Access to RAWFIE Conferences, Workshops and Presentations

##### 3.1.2.1 Data Sharing

Open Access to conference/workshop presentations is achieved in RAWFIE by depositing the data into an online research data repository. The presentations are stored in the promotion-material section of the RAWFIE Web site.

##### 3.1.2.2 Archiving and Preservation

Open Access to project public presentations will be maintained for at least 3 years following the project completion, through the Website.

##### 3.1.2.3 Archived Presentations

http://www.rawfie.eu/sites/default/files/rawfie_v06_0.ppsx

### 3.1.3 Open Access to RAWFIE Publications

##### 3.1.3.1 Data Sharing

As previously mentioned and described in section 1.1, there are two main routes to providing Open Access to publications, namely 'gold' or 'green'. In either case, Open Access to publications is achieved in RAWFIE by depositing the data into online research data repositories. The publications will be stored in one or more of the following locations:

* An institutional research data repository
* The ZENODO repository, operated by the EC through the funded OpenAIRE project
* The RAWFIE repository at http://expdata.rawfie.eu/

The ZENODO repository is recommended by the EC's OpenAIRE initiative in order to unite all the research results arising from EC-funded projects. ZENODO is an easy-to-use and innovative service that enables researchers, EU projects and research institutions to share and showcase multidisciplinary research results (data and publications) that are not part of existing institutional or subject-based repositories. Namely, ZENODO enables users to:

* Easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images across all fields of science.
* Display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the EC.
* Easily access and reuse shared research results.
* Define the different licenses and access levels that will be provided.

Furthermore, ZENODO assigns a Digital Object Identifier (DOI) to all publicly available uploads, in order to make the content easily and uniquely citable. This repository also makes use of the OAI-PMH protocol (Open Archives Initiative Protocol for Metadata Harvesting) to facilitate content search through the use of defined metadata. This metadata follows the schema defined in INVENIO3 (a free software suite enabling users to run their own digital library or document repository on the web) and is exported in several standard formats such as MARCXML, Dublin Core and the DataCite Metadata Schema, according to the OpenAIRE Guidelines. In addition, with ZENODO as the repository, the short- and long-term storage of the research data is secured, since the data are stored safely in the same cloud infrastructure as research data from CERN's Large Hadron Collider. Furthermore, ZENODO uses digital preservation strategies to store multiple online replicas and to back up the files (data files and metadata are backed up on a nightly basis). Therefore, this repository fulfils the main requirements imposed by the EC for data sharing, archiving and preservation of the data generated in H2020 projects.

##### 3.1.3.2 Publication Reference Identity (Digital Object Identifier - DOI)

The DOI uniquely identifies a document. This identifier is allocated by the publisher, in the case that the document is published under 'gold' Open Access, or by OpenAIRE, in the case that the document is archived in ZENODO.

##### 3.1.3.3 Archiving and Preservation

Open Access to project publications will be maintained for at least 3 years following the project completion, through the above repositories.

##### 3.1.3.4 Archived Publications

The following publications are uploaded on the RAWFIE web site publications page (http://www.rawfie.eu/publications).

1. P. Dallemagne, D. Piguet, J.-D. Decotignie, "Publish-Subscribe Communication for Swarms of Unmanned Vehicles", CSEM Scientific and Technical Report, p. 116, 2016.
2. K. Kolomvatsos, C. Anagnostopoulos, S. Hadjiefthymiades, "Distributed Localized Contextual Event Reasoning under Uncertainty", accepted for publication in IEEE Internet of Things Journal, 2017.
3. Md Fasiul Alam, Stathes Hadjiefthymiades, "Advanced, Hardware Supported In-Network Processing for the Internet of Things", to be presented at ICC 2017 (2nd International Conference on Internet of Things, Data and Cloud Computing), March 2017, Cambridge, UK.
4. Papadopoulou, P., Kolomvatsos, K., Panagidi, K. and Hadjiefthymiades, E. (2017), "Internet of Things Applications and Services: The Case of the RAWFIE Project", 11th Mediterranean Conference on Information Systems (MCIS), 4-5 September 2017, Genova, Italy.
5. Magda Gregorova, Alexandros Kalousis, Stephane Marchand-Maillet, "Forecasting and Granger Modelling with Non-linear Dynamical Dependencies", Machine Learning and Knowledge Discovery in Databases - European Conference, ECML/PKDD, 2017, Skopje, FYROM.
6. Amina Mollaysa, Pablo Strasser, Alexandros Kalousis, "Regularising Non-linear Models Using Feature Side-information", Proceedings of the 34th International Conference on Machine Learning, ICML, 2017, Sydney, 2508-2517.
7. Magda Gregorova, Alexandros Kalousis, Stephane Marchand-Maillet, "Learning Predictive Leading Indicators for Forecasting Time Series Systems with Unknown Clusters of Forecast Tasks", Proceedings of the 9th Asian Conference on Machine Learning, ACML 2017, Seoul, South Korea.
8. A. Ch. Kapoutsis, Ch. M. Malliou, S. A. Chatzichristofis, E. B. Kosmatopoulos, "Continuously Informed Heuristic A*–Optimal Path Retrieval Inside an Unknown Environment", 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), October 10-13, 2017, Shanghai, China.
9. T. Kontos and S. Hadjiefthymiades, "An Optimized Cross-Layer Scheme for Wireless Ad Hoc Networking", Ad Hoc Networks, Elsevier, under review.
10. T. Kontos, C. Anagnostopoulos, E. Zervas and S. Hadjiefthymiades, "Adaptive Epidemic Dissemination as a Finite-Horizon Optimal Stopping Problem", to be published in Wireless Networks (WINET), Springer, 2018.
11. Amina Mollaysa, Pablo Strasser, Alexandros Kalousis, "Learning with Feature Side-information", Workshop on Learning in High Dimensions with Structure, in the context of Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 7-12, 2016, Barcelona, Spain.
12. Magda Gregorova, Stephane Marchand-Maillet, Alexandros Kalousis, "Forecasting and Granger Modelling with Non-linear Dynamical Dependencies", under review at the 20th International Conference on Artificial Intelligence and Statistics, AISTATS, 2017, Fort Lauderdale, Florida, USA.
13. Md Fasiul Alam, Serafeim Katsikas, Olga Beltramello, and Stathes Hadjiefthymiades, "Augmented and Virtual Reality Based Monitoring and Safety System: A Prototype IoT Platform", Journal of Network and Computer Applications (JNCA), Elsevier, 2017.
14. K. Kolomvatsos, M. Tsiroukis, and S. Hadjiefthymiades, "An Experiment Description Language for Supporting Mobile IoT Applications", 2016 FIRE Book, European Commission, River Publishers, November 2017.
15. Athanasios Kapoutsis, Savvas Chatzichristofis, and Elias Kosmatopoulos, "DARP: Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning", Journal of Intelligent & Robotic Systems, Springer, June 2017.
16. Papadopoulou, P., Kolomvatsos, K., Panagidi, K. and Hadjiefthymiades, E. (2017), "Internet of Things: A Business Perspective", in Internet of Things: Concepts, Technologies, Applications and Implementations, Q. Hassan, A. R. Khan and S. Madain (eds), CRC Press.
17. "Analysis of Hybrid Geographic/Delay-Tolerant Routing Protocols for Wireless Mobile Networks", INFOCOM 2018, Honolulu, Hawaii, USA.
18. "Asymptotics of the Packet Speed and Cost in a Mobile Wireless Network Model", IEEE ISIT 2018, Vail, Colorado, USA.
19. "Analysis of a One-Dimensional Continuous Delay-Tolerant Network Model", IEEE SPAWC 2018, Kalamata, Greece.
20. Harth, N. and Anagnostopoulos, C. (2018), "Edge-centric Efficient Regression Analytics", in 2018 IEEE International Conference on Edge Computing (EDGE), San Francisco, CA, USA, 02-07 Jul 2018.
21. Ali, A., Anagnostopoulos, C. and Pezaros, D. P. (2018), "On the Optimality of Virtualized Security Function Placement in Multi-Tenant Data Centers", in IEEE International Conference on Communications (ICC 2018), Kansas City, MO, USA, 20-24 May 2018.
22. R. Cziva, C. Anagnostopoulos, D. P. Pezaros (2018), "Dynamic, Latency-Optimal vNF Placement at the Network Edge", IEEE Conference on Computer Communications (INFOCOM 2018), Honolulu, HI, USA.
23. Harth, N., Anagnostopoulos, C. (2017), "Quality-aware Aggregation & Predictive Analytics at the Edge", IEEE International Conference on Big Data (IEEE Big Data 2017), December 11-14, 2017, Boston, MA, USA.
24. Gregorová, Magda, Ramapuram, Jason and Kalousis, Alexandros, "Large-scale Nonlinear Variable Selection via Kernel Random Features", ECML PKDD 2018.

### 3.1.4 Open Access to RAWFIE Research Data

Apart from the Open Access to public deliverables, presentations and scientific publications, the Open Research Data Pilot also applies to two types of data:

* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Statistical data and metadata generated
  * in the course of the project, or
  * during the execution of experiments.

A lot of the information generated during the experiments will form statistical data that will be used for the purpose of the dynamic tuning of the resources and plans. This data could also be used after the execution of the experiment (post-mortem), for diagnostics or further analysis of the experiment execution. This is the case, for example, in network behaviour reporting, done through the analysis of link quality, latency and throughput. As experimental data contains information about the positions of UxVs, their operational measurements (CPU usage, battery consumption, ...) and the sensor-collected measurements, the RAWFIE consortium takes into consideration testbeds that are characterized as "sensitive areas", like the Skaramagas naval base. Staging processing is required, as discussed further in Section 5.2. After cleaning and filtering, data may also be used as reference data, as described in Section 2. The main categories in which statistical information is generated are: _networking_, _processor_ and _machine load_, _database transaction rates_, etc.

In other words, beneficiaries are able to choose which data, in addition to the data underlying publications, they make available in Open Access mode. According to this requirement, the underlying data related to the scientific publications will be made publicly available (see Section 3.1). This allows other researchers to make use of that information to validate the results, thus being a starting point for their investigations, as expected by the EC through its Open Access policy.

By design, RAWFIE avoids any unnecessary collection of personal data. In cases where some limited personal data collection is required, each entity that accesses data commits itself to respect data confidentiality throughout its entire processing cycle.
More explicitly, data should:

* Be fairly and lawfully processed;
* Be used for limited purposes;
* Be handled in an adequate, relevant and not excessive way;
* Be limited to what is needed and relevant for the research;
* Be collected on a voluntary basis under explicit consent from the end-users;
* Be accessed in aggregate form or anonymously;
* Be kept no longer than necessary;
* Be used in accordance with the data subject's rights;
* Be processed without transferring it to countries with absent or insufficient data protection policies.

More generally, the RAWFIE platform is governed by the following principles, which should be respected by all users and partners:

* Respect of privacy, personal data protection and individual freedom of choice.
* Proportionality
  * By default, most RAWFIE experiments avoid or limit the collection of personal data beyond what is necessary and relevant to the experiment being carried out.
* Dissociation
  * If personal data is collected, identifying information such as the email address is dissociated from the collected data.
* Principle of prior informed consent from the data originator.
* Protection of minors by restricting personal data collection from non-adults.
* Collected data are stored on servers located in European countries.
* Collective responsibility
  * All users and stakeholders are required to respect the data handling rules and to inform the project privacy officer of any detected attempt at a privacy breach.
* Universality
  * The privacy and personal data protection standards followed by the RAWFIE platform apply to the users and other interacting parties, regardless of their country of residence.
* No personal data is shared or transmitted to third parties, including governments and public agencies (except in the case of an, unlikely, judiciary decision).

# 4 Research Data – Tools and Standards

## 4.1 Tools

In this section, two major tools of the RAWFIE system are presented: the Apache Avro tool, a data serialization system that provides a common framework which every robot can adhere to, independently of its underlying system, through the adaptor of the UxV Node; and the SAMANT ontology, an extension of the Open-Multinet (OMN) ontology suite, which semantically describes the dynamic and static data of the RAWFIE ecosystem.

### 4.1.1 Apache AVRO formatted messages and Kafka Schema Registry

##### 4.1.1.1 AVRO

According to its own documentation, the Apache Avro tool is a data serialization system with some useful capabilities. Avro provides:

* Rich data structures.
* A compact, fast, binary data format.
* A container file, to store persistent data.
* Remote procedure calls (RPC).
* Simple integration with dynamic languages.

Code generation is not required to read or write data files, nor to use or implement RPC protocols; it is an optional optimization, only worth implementing for statically typed languages. In order to use the Avro schemas, RAWFIE adopts the Apache Kafka based Confluent Platform, which provides an easy way to build real-time data pipelines and streaming applications. Having a single, central streaming platform for the RAWFIE infrastructure simplifies connecting data sources to Kafka and building applications with Kafka, as well as securing, monitoring, and managing the Kafka infrastructure.

Avro, being a schema-based serialization utility, accepts schemas as input. In spite of various schema languages being available, Avro follows its own standard for defining schemas. These schemas describe the following details:

* type of file (record by default)
* location of the record
* name of the record
* fields in the record with their corresponding data types

Using these schemas, serialized values can be stored in a binary format using less space; the values are stored without the use of any metadata. Avro schemas are defined in the JavaScript Object Notation (JSON) document format, a lightweight text-based data interchange format. This facilitates implementation in languages that already have JSON libraries.
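
As an illustration, a minimal Avro schema for a sensor observation record could look as follows; the record name, namespace and fields are hypothetical and do not reproduce the actual RAWFIE message definitions:

```json
{
  "type": "record",
  "name": "SensorObservation",
  "namespace": "eu.rawfie.example",
  "fields": [
    {"name": "sensorId",   "type": "string"},
    {"name": "timestamp",  "type": "long"},
    {"name": "phenomenon", "type": "string"},
    {"name": "value",      "type": "double"},
    {"name": "unit",       "type": ["null", "string"], "default": null}
  ]
}
```

Every producer and consumer that agrees on such a schema can exchange compact binary records without embedding field names in each message.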
##### 4.1.1.2 Kafka Schema Registry

One of the most important tasks is to manage the Avro schemas and how those schemas should evolve. A Kafka Schema Registry is adopted for that purpose. Briefly, the Schema Registry:

* provides a serving layer for the schema metadata;
* provides a RESTful interface for storing and retrieving Avro schemas;
* stores a versioned history of all schemas, provides multiple compatibility settings and allows the evolution of schemas according to the configured compatibility setting;
* provides serializers that plug into Kafka clients and handle schema storage and retrieval for Kafka messages sent in the Avro format.

This Schema Registry is heavily based on the Java API of the Confluent Schema Registry.
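
For illustration, registering the example schema from the previous section through the registry's REST interface could be done roughly as follows; the registry URL and subject name are placeholders, not the actual RAWFIE configuration:

```python
import json
import requests

REGISTRY_URL = "http://schema-registry.example.org:8081"  # placeholder host
SUBJECT = "sensor-observations-value"                     # placeholder subject

schema = {
    "type": "record",
    "name": "SensorObservation",
    "fields": [
        {"name": "sensorId", "type": "string"},
        {"name": "timestamp", "type": "long"},
        {"name": "value", "type": "double"},
    ],
}

# The Confluent Schema Registry expects the Avro schema as an escaped
# JSON string under the "schema" key.
resp = requests.post(
    f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    json={"schema": json.dumps(schema)},
)
resp.raise_for_status()
print("Registered schema id:", resp.json()["id"])
```

Re-registering a changed schema under the same subject is accepted or rejected by the registry according to the configured compatibility setting.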
### 4.1.2 Ontologies for RAWFIE

Over the past decade, semantic information models have regularly been used to address interoperability issues in managing federated experimental infrastructures (e.g., NDL-OWL, NOVI IM, NML, INDL, etc.). One of the most recent efforts, the OWL-encoded OMN ontology suite, builds upon existing ontologies. OMN is still evolving, supported by a community of experts within the FIRE and GENI communities. The ontology describes federated infrastructures and resources as generally as possible, while still supporting the management of their lifecycle in federated environments.

OMN consists of a hierarchy of ontologies, as depicted in Figure 3. The detailed description of the OMN ontology suite is provided in [17]. The OMN ontology at the highest level defines basic concepts and properties, which are then re-used and specialized in the subjacent ontologies. Included at every level are (i) axioms, such as the disjointness of each class; (ii) links to concepts in existing ontologies, such as NML, INDL and NOVI; and (iii) properties that have been shown to be needed in related ontologies. In a nutshell:

* The Federation ontology describes federations, along with their members and related infrastructures.
* The Lifecycle ontology describes the whole lifecycle of resource/service management in the federation. This includes requests, reservation (schedule for allocation), provisioning and release.
* A resource in the OMN ontology is defined as any provisionable, controllable, and/or measurable entity. The Resource ontology augments the definitions of the Resource class defined in the main OMN upper ontology with concepts such as Node, Interface, Link, etc.
* The Component ontology covers concepts that are considered descendants of the Component class defined in the OMN upper ontology (e.g. CPU, Sensor, Core, Port, Image, etc.).
* A service is defined in the OMN ontology as any entity that has an API to use it. A service may further depend on a Resource. The Service ontology covers different services in the relevant application areas (e.g., Portal, etc.).
* The Monitoring ontology is directly linked to other OMN ontologies and facilitates interoperability in terms of enabling common monitoring data to be exchanged federation-wide. It is built on existing ontologies, such as the NOVI monitoring ontology.

The OMN ontology suite is designed in a flexible, extensible way to cover specific domains. Examples of such domains include wireless (e.g., Wi-Fi or sensors), SDN, Cloud computing, etc.

**Figure 3: Open-Multinet ontology suite**

### 4.1.3 SAMANT OMN Extended Ontology

The extension of the OMN ontology for the description of the resources of RAWFIE is twofold. It adopts many concepts from the ontologies of the OMN suite and includes two new ontologies to cover specifically the domains of UxVs and sensors. Furthermore, these ontologies include concepts from other existing relevant ontologies on sensors and measurements.

##### 4.1.3.1 OMN UxV Ontology

Figure 4 illustrates the structure of the OMN UxV (omn-domain-uxv) ontology. This ontology is available in Turtle format.

**Figure 4: OMN UxV ontology**

This ontology describes the resources of RAWFIE testbeds, their reservation lifecycle and the attributes of RAWFIE members. Each RAWFIE testbed is described by the Testbed class, which includes all the attributes of RAWFIE testbeds (name, description, location and UxV support) and is linked with the User class and the UxV class. The User class describes RAWFIE members and includes personal information and their role on RAWFIE testbeds. The UxV class describes the resources of each RAWFIE testbed. More specifically, it contains basic information about UxVs (name, description, location, UxV type) and is linked with many classes that describe the features of UxVs. The Connection class represents the communication capabilities of each UxV. The Resource Status class describes the current availability status of UxVs. The Health Status class describes the health status of UxVs, and the Config Parameters class includes specific configuration parameters of each UxV. The reservation status of each UxV is described by the Lease class. The UxV class is linked with the System class of the OMN Sensor Ontology, which describes the specification of the sensors attached to UxVs.

Figure 5 depicts the description of a ground unmanned vehicle (UgV) named UgV1. UgV1 is part of the UgV Testbed (testbed) and is a type of UgV. Its connection features and configuration parameters are described by the UgV Connection and UgV Config Parameters individuals, respectively. The health status of UgV1 is defined by the term "OK" and the UgV1 Health Information individual. Its resource status is described by the "Sleep Mode" status. The UgV1 Lease individual includes its reservation status. UgV1 Point 3D describes the exact location of UgV1 in terms of latitude, longitude and altitude. Finally, the UgV1 Sensor System contains all the information on the attached sensors of UgV1.

**Figure 5: UxV Example**
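
Rendered in Turtle, a fragment of such a description could look roughly like the sketch below; the namespace URIs and property names are purely illustrative and do not reproduce the exact SAMANT vocabulary:

```turtle
@prefix :        <http://example.org/rawfie-testbed#> .
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix omn-uxv: <http://example.org/omn-domain-uxv#> .

:UgV1 a omn-uxv:UxV ;
    rdfs:label "UgV1" ;
    omn-uxv:isResourceOf      :UgVTestbed ;       # the testbed it belongs to
    omn-uxv:hasResourceStatus :SleepMode ;        # current availability status
    omn-uxv:hasHealthStatus   "OK" ;
    omn-uxv:hasSensorSystem   :UgV1SensorSystem . # attached sensors
```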
The OMN UxV ontology uses predefined concepts (classes) and links (properties) from the OMN ontology suite. The OMN Federation ontology is used for the description of the testbed. The OMN Resource ontology is used for the description of UxV resources. The OMN Lifecycle ontology is used for the reservation process of UxVs. The OMN Wireless ontology is used for the description of UxV communication capabilities. Finally, for the location of RAWFIE testbeds and UxVs, the GeoRSS Feature Model and ontology [18] is used.

##### 4.1.3.2 OMN Sensor Ontology

The OMN Sensor ontology describes the sensors attached to RAWFIE resources; the sensors record measurements for a variety of phenomena. It focuses on the sensor characteristics that are involved in the selection of the appropriate UxV. Thus, the following are considered interesting features:

* Feature of Interest (Air, Ground, Water)
* Measured Property (Temperature, Velocity, Pressure, Electric Current Rate, etc.)
* Unit of the measured property
* Sensor description (vendor name, product name, serial number, description)

For the description of the sensors, the Semantic Sensor Networks (SSN) ontology, developed by the W3C Semantic Sensor Networks Incubator Group (SSN-XG) [19], and the ontology for quantity kinds and units [20] are used. Figure 6 depicts the structure of the OMN Sensor ontology.

**Figure 6: OMN Sensor Ontology**

The set of sensors of each UxV is described by the ssn:System class. The System class is linked with the UxV class of the OMN UxV ontology. All basic sensors of the System are described by the corresponding subclass of the ssn:SensingDevice class. The measured property of each sensor is represented by the qu:QuantityKind class (property) and its subclasses. These classes are linked with the ssn:FeatureOfInterest class, which defines whether the property corresponds to the "Air", "Ground" or "Water" environment.

Figure 7 depicts the description of a sensor system attached to the ground unmanned vehicle (UgV) named UgV1. This sensor system (UgV1MultiSensor) is equipped with an odometry sensor (UgV1OdometryMultiSensor) and a laser sensor (UgV1LaserSensor). The UgV1 Odometry Multi Sensor individual contains the basic sensors for measuring velocity (UgV1OdometryVelocityOrSpeedSensor) and rotational speed (UgV1OdometryRotationalSpeedSensor), respectively. The velocity sensor is linked with the 'metre per second' individual and the velocity individual (observing property). The rotational speed sensor is linked with the 'radian per second' individual and the 'normal rotational speed' individual (observing property). The UgV1 Laser Sensor individual is connected with the 'metre' unit and distance (observing property) individuals. The 'normal rotational speed', velocity and distance properties are linked with the ground individual of the Feature of Interest class.

**Figure 7: UgV Sensor System Example**

## 4.2 Standards

### 4.2.1 Data Analytics

With the recent advances in Machine Learning, a plethora of new data analysis frameworks have emerged and gained considerable weight over the last years in the Data Analysis communities. The most popular of those frameworks are Google's TensorFlow and Facebook's PyTorch for Machine Learning tasks, and Apache Spark for orchestrating massively parallel and distributed general-purpose data analysis tasks. In order for the data analysis component (Data Analysis Tool and Data Analysis Engine) to be as flexible and performant as possible, the Data Analysis Engine is implemented via Apache Spark acting as a compute engine, while the Data Analysis Tool (implemented via the Apache Zeppelin interface) enables the end user to craft and assemble data analysis tasks of various natures, e.g. Machine Learning tasks with TensorFlow or PyTorch.

Since the frameworks the data analysis component is able to operate with are of various natures and come from various sub-communities of Data Analytics (Big Data, Machine Learning, Distributed Systems, to name a few), there is no unified and massively-adopted solution for saving data analytics models. Each framework has its own standards, and how to save models in a specific framework is always well documented for the popular frameworks. The latter also document in detail how to load models, whether those models originate from a previous save or have been downloaded from an external source. Indeed, various open-source projects share pre-trained and pre-tuned Machine Learning models to reduce the overall analysis time.
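
As a simple illustration of such framework-specific conventions, saving and restoring a model in PyTorch follows the pattern below; the model architecture and file name are placeholders:

```python
import torch
import torch.nn as nn

# Placeholder model: a small feed-forward regression network.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Save: persist only the learned parameters (the recommended PyTorch idiom).
torch.save(model.state_dict(), "model.pt")

# Load: rebuild the same architecture, then restore the parameters.
restored = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
restored.load_state_dict(torch.load("model.pt"))
restored.eval()  # switch to inference mode before serving predictions
```

Other frameworks use different but equally well-documented idioms (e.g. Spark ML pipelines expose save/load methods on fitted models).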
### 4.2.2 Geospatial Data

Geospatial data is stored and processed in various, quite diverse, formats. Internally, a common representation of the geospatial data will be used, which greatly simplifies the data handling. This representation has not been decided yet. However, imported data may come in any of the formats mentioned in section 2.4, or even in a different one. A list of common formats and standards is given in the table below. Many of the standards are from the OGC [6].

| Format | Description |
| --- | --- |
| Shapefile [3] | De facto standard (designed by ESRI) to store vector data; supported by almost all GIS systems; only one geometry type per Shapefile; consists of multiple files; attribute data stored in a dBASE (version IV) database (.dbf file) [4] |
| GeoPackage [5] | Recently developed OGC standard to store all kinds of geospatial-related data (vector features, tile matrix sets of imagery and raster maps at various scales, schema, metadata); a database file that can be accessed and updated directly without intermediate format translations; can be seen as a modern replacement for Shapefiles with the following advantages: only one file instead of multiple files, smaller file sizes, a wider spectrum of attribute types, fewer constraints (e.g. on the length of attribute names) |
| GML [6] | *Geography Markup Language*; OGC standard to exchange vector data via XML files; very flexible and adaptable to individual needs; used in many open source systems |
| KML [7] | *Keyhole Markup Language*; OGC standard to exchange vector data via XML files; mainly used by Google Earth |
| WMS [8] | *Web Map Service*; OGC standard protocol for serving geo-referenced map images (raster data); the images are generally generated by a map server (most using data from a GIS database) |
| WMTS [12] | *Web Map Tile Service*; OGC standard protocol for serving geo-referenced raster data; very similar to WMS, but with much simpler request interfaces; the raster data provided is normally pre-calculated, hence the server-side computing time is very low, making WMTS a very fast and responsive service |
| WFS [9] | *Web Feature Service*; OGC standard protocol which provides an interface for geographical feature requests (vector data) |
| World-File [10] | De facto standard (designed by ESRI) to store raster data; supported by almost all GIS systems; a text file (in conjunction with a picture file) that describes the projection of a picture into a specific coordinate system |
| GeoTiff [11] | Public domain metadata standard; allows geo-location information to be embedded within a TIFF image file |

Many other formats exist (structured text files, e.g. formatted as CSV, JSON (GeoJSON) or XML, as well as many proprietary binary formats) that are used to store geospatial data.
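
As an example of such a structured text format, a single UxV position reading serialized as a GeoJSON feature (a purely illustrative record) looks like this:

```json
{
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [23.7275, 37.9838]
  },
  "properties": {
    "vehicleId": "UgV1",
    "timestamp": "2018-06-01T12:00:00Z",
    "speedMps": 1.4
  }
}
```

Note that GeoJSON orders coordinates as longitude, latitude.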
## 4.3 RAWFIE Activities towards Standardization

RAWFIE experimentation is based on the definition of experiments through the use of the implemented EDL and two editors. The EDL provides the necessary terminology for defining the various parts of an experiment, with a focus on the behavior of autonomous unmanned vehicles. The RAWFIE consortium will pursue the standardization of the EDL as far as the following parts are concerned:

* Metadata adopted for the description of an experiment
* Operational requirements for each experiment
* Statements that will manage the behavior of autonomous vehicles:
  * Waypoints management
  * Timeline management
  * Sensors management
  * Data management
  * Communications management
* Statements that will manage the coordination of multiple autonomous vehicles:
  * Management of groups of vehicles

RAWFIE has already identified some bodies to which standardization recommendations could be reported:

* **ISO**. A standard recommendation for aerial vehicles has already been presented by ISO. The discussed activity focuses on operational requirements for drones, on safety and security, flying "etiquette" around no-fly zones, geo-fencing technology that can impede flights in restricted areas, flight logging requirements, as well as training and maintenance standards.
* **ANSI**. The ANSI Unmanned Aircraft Systems Standardization Collaborative (UASSC)'s mission is to coordinate and accelerate the development of the standards and conformity assessment programs needed to facilitate the safe integration of unmanned aircraft systems (UAS) - commonly known as drones - into the national airspace system (NAS) of the United States. The group has developed a standardization roadmap which identifies existing standards and standards in development, as well as related conformance programs, defines where gaps exist, and recommends additional work that is needed. The roadmap includes proposed timelines for completion of the work and lists organizations that can potentially perform the work.
* **Airborne Public Safety Accreditation Commission (APSAC)**. The Airborne Public Safety Association (APSA, formerly the Airborne Law Enforcement Association) sponsored the development of aerial vehicle standards to be added to existing manned aviation standards. A committee of experienced law enforcement and fire safety personnel held their first meeting in December 2016. Unlike manned aviation standards, UAS standards also address the legal and ethical use of the technology. The final version of the standards was released in October 2017. The standards contain five sections: 1) Administrative Matters; 2) Operational Procedures; 3) Safety; 4) Training; 5) Maintenance and Minimum System Requirements.
* **American Society of Mechanical Engineers (ASME)**. ASME has formed a special working group (SWG) under the ASME Boiler and Pressure Vessel Code (BPVC) Section V Nondestructive Testing Committee, tasked to develop guidelines for aerial vehicles for inspections. The SWG will develop a standard that will provide guidelines and requirements for the safe and reliable use of UAS in the performance of examinations and inspections of fixed equipment, including pressure vessels, tanks, piping systems, and other components considered part of the critical infrastructure. The table-of-contents sections include: scope, general definitions, object of inspection, preparation for inspection and preliminary mission planning, equipment use for inspection, personnel qualification for operators, conduction of inspection, analysis of data, reporting data, and documentation.
* **ASTM International (ASTM)**. ASTM International's portfolio of aerial vehicle standardization activities extends from platform and software needs, operation and use, personnel and maintenance, all the way to user community applications. With ASTM's broad sector reach, industry has the ability to leverage UAS expertise and integrate it into long-standing and accepted procedures. ASTM's manned aircraft committees offer a wide selection of standards that can serve as demonstrated means of compliance with the increasing risk-based regulatory approach of global civil aviation authorities. Depending on the aircraft category or risk class, ASTM standards offer a selection of resources to meet user needs.
* **Open Geospatial Consortium (OGC)**. The OGC has an Unmanned Systems (UxS) DWG. The UxS DWG was established in 2017 and holds sessions at each of OGC's quarterly TC Meetings.
While the scope of the UxS DWG broadly encompasses all unmanned vehicles and the sensors or equipment on those vehicles, and the broader systems that support them, most of the conversation in the DWG at this time is focused on the tasking, observations, processing, and usage of aircraft and mounted sensors. However, it is important to note that the UxS DWG does include in its membership experts on autonomous submersibles and automobiles, with the former providing some very relevant expertise to the aircraft community due to its maturity with respect to the use of standards. Participants in the UxS DWG include government organizations with long histories in developing and operating large UASs (e.g., Global Hawk, Predator, etc.), such as NASA, the U.S. Army Geospatial Center, the U.S. National Geospatial-Intelligence Agency, Harris Corporation, Lockheed Martin Corporation, Unifly, and others.

# 5 Data Sharing, Archiving and Preservation

RAWFIE will consider all the necessary procedures for archiving and the provision of long-term preservation of both the experimental data and any of the data made available through the Open Access RAWFIE outcomes.

## 5.1 Data sharing

A specific dissemination strategy was developed in RAWFIE in parallel with the implementation activities, with the aim of keeping potentially interested stakeholders informed about the availability of, and the possibility, in case they conduct an experiment, to have access to, their experimental data or any kind of data described in Section 3. The strategy used for the dissemination of research data includes:

* identification of the different types of stakeholders (users or groups of users) that are the intended "recipients" of the dissemination
* identification of the most suitable tools or mechanisms to be used for the dissemination, according to the type of audience
* implementation/use of the above-mentioned dissemination tools or mechanisms

Stakeholders interested in the data generated by the project are:

* Experimenters
* Universities and research institutes
* UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers)
* Owners of institutional repositories (if any)

### 5.1.1 Sensor data from experiments

Sensor data are distributed via the Kafka message bus inside the RAWFIE system and are persistently stored in a central database (with access restricted to trusted RAWFIE components). They are made available to the experimenter via the Visualisation and the Data Analysis Tools. Furthermore, each experimenter will have access to the raw data of their own experiments.
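
For illustration, a trusted component subscribing to such sensor messages could use a Kafka consumer along the following lines; the broker address, topic and group names are placeholders rather than the actual RAWFIE configuration, and the kafka-python client is used here purely as an example:

```python
from kafka import KafkaConsumer  # kafka-python client library

consumer = KafkaConsumer(
    "sensor-observations",                        # placeholder topic name
    bootstrap_servers="broker.example.org:9092",  # placeholder broker
    group_id="storage-writer",
    auto_offset_reset="earliest",
)

for message in consumer:
    # message.value carries the raw payload (Avro-encoded in RAWFIE);
    # a real component would deserialize it before persisting.
    print(message.topic, message.offset, len(message.value))
```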
### 5.1.2 Data analysis results

The Data Analysis Tool uses the Graphite framework to visualise sensor values and the analysis results.

### 5.1.3 Exploitation

A number of business cases are briefly described below:

* **Patenting:** UxV manufacturers have the possibility to patent components whose implementation is prompted by the needs of the RAWFIE experimentation environment.
* **Model valorisation**: Since there is not much direct business to be made out of the models themselves, the valorisation of the models will probably be done through the use of the RAWFIE platform. For that matter, please refer to deliverable D2.1 (Federation Policy).
* **Data valorisation**: The collected data has an explicit and an implicit value. The explicit value lies in the information that can be extracted from it, after analysis and interpretation, e.g. for tuning or debugging the RAWFIE platform and its components, or for adjusting the parameters of the resources, such as UxVs or testbeds; this value can be commercially traded, as is the case for the data obtained from any experiment. The implicit value comes from the nature of this data, which can then be used as reference data by other UxV or testbed owners that would like to introduce their assets or technologies into the RAWFIE infrastructure, for later use as a resource or a service. This value is probably not directly exploitable.

## 5.2 Staging processing for experiment validation streams

The RAWFIE architecture follows the principle of informed consent by end-users. Participants in the experiments are required to have previously given their consent to take part in them, with a clear understanding of what the collected data are and what their potential use and distribution is. In order for a user to access the gathered information, he/she has to register with a dedicated service. During this process, the end-user is notified about the purpose and the scope of the project via a "Terms and Conditions" notice.

Special cases demand a specific disclosure process (clearance) and terms of use. This is, for example, the case for testbeds close to sensitive areas, as for the following testbeds:

* HMOD testbed. Since the testbed boundaries lie within the Naval Fortress of Skaramanga, and due to the proximity of the Salamis Naval Base, in order to prevent any leakage of sensitive information with respect to operational capabilities, all data collected by any means through UxVs will be submitted for a thorough check by a competent Hellenic Navy security agency prior to public release. Sensitive data might be censored in order to fulfil the above-mentioned restrictions.
* HAI testbed. Since the testbed boundaries are adjacent to the Tanagra military air force base, and due to the nature of the industry's job objectives, the collection and presentation of sensitive information related to the testbed premises and installations must be protected. The use of vehicles that transmit live streaming video and images outside the boundaries of the testbed is prohibited. Video and images from UAVs, as well as video recordings from the ground of flight tests and experiments, can be made publicly available after examination and approval by HAI's security department. Data collection by other kinds of sensors that UAVs might be equipped with does not fall under the above-mentioned restrictions. Specific sensor restrictions will be disseminated to testbed users/experimenters upon commissioning of UxVs and after the sensor capabilities have been notified.

Furthermore, any data which does not fall under the above restrictions should be handled by testbed operators with respect to EU and national laws concerning personal information privacy.

## 5.3 Data Archiving and Preservation

RAWFIE considers all the necessary procedures for archiving and the provision of long-term preservation. Suitable file formats and appropriate processes for organizing files are followed.
In organizing the different data files, the following steps are considered:

* Source code and file version control using git repositories in GitLab (https://about.gitlab.com/)
* File structure, directory structure and file naming conventions using databases

The different RAWFIE repositories are:

* The _**Master Data Repository**_ contains all the management data sets (experiments, EDL scripts, bookings, testbeds and resources, status information of testbeds and their resources, and so on) of RAWFIE. PostgreSQL with the PostGIS extension is used for this database implementation, as it is well supported, open source and stable, and makes it easy to store, organize and handle geo-referenced data.
* The _**Measurements Repository**_ uses a big data storage system for storing the large number of measurements coming from the sensors on board the UxVs during the experiments. The popular big data solution "Hadoop Distributed File System" [13] has been used for this purpose. The specific technological choice is detailed in the WP4 and WP5 deliverables. In addition, a NoSQL database solution was adopted in the 2nd and 3rd implementation iterations to manage the data sets. For further interpretation of the raw data, the Analysis Results Repository is used.
* The _**Analysis Results Repository**_ uses a separate database for performing the Data Analytics task over the results of the experiments. The Graphite data analysis framework was used with the database called Whisper [14].
* The _**Users & Rights Repository**_ uses an LDAP [15] repository, as LDAP is the de facto standard for user management. It stores all user-related data (name, organisation, address, password) and group memberships (role-based access control). The selected implementation is OpenDJ [16].

Except for the Analysis Results Repository, all the repository systems used (PostgreSQL, HDFS, OpenDJ) support replication and thus provide fault tolerance. In case of data loss in the Analysis Results Repository, the results can be recomputed using the data stored in the Measurements Repository.

In addition, appropriate data documentation is provided for long-term access. A full understanding and analysis of the metadata that may be needed is considered. For instance, to improve the documentation process, the metadata is classified into two levels: project level and data level. Project-level metadata describes the "who, what, where, when, how and why" of the dataset, which provides context for understanding why the data are collected and how they are used. Examples of project-level metadata:

* Name of the project
* Dataset title
* Project description
* Dataset abstract
* Principal investigator and collaborators
* Contact information

Data-level metadata are more granular. They explain, in much greater detail, the data and the dataset. Examples of data-level metadata:

* Data origin: experimental, observational, raw or processed, models, images, etc.
* Data type: integer, boolean, character, floating point
* Data acquisition details: sensor deployment methods, experimental design, sensor calibration methods
* File types: CSV, mat, tiff, xlsx, HDF
* Data processing methods
* Dataset parameter list: variable names, description of each variable, units
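
A minimal machine-readable rendering of such metadata, sketched here as JSON with purely illustrative field names and values, could look as follows:

```json
{
  "project": {
    "name": "RAWFIE",
    "datasetTitle": "UxV sensor observations, experiment 42",
    "description": "Environmental measurements collected during a pilot experiment",
    "contact": "experimenter@example.org"
  },
  "dataset": {
    "origin": "experimental",
    "fileType": "CSV",
    "parameters": [
      {
        "variable": "temperature",
        "description": "ambient air temperature",
        "unit": "degree Celsius"
      }
    ]
  }
}
```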
The external repositories that can be used for the purposes of archiving and long-term storage were described in Section 3. These repositories are open; therefore, they will not add expenses for the RAWFIE consortium.

# 6 References

1. PMML 4.2: http://www.dmg.org/pmml-v4-2.html
2. "PMML: An Open Standard for Sharing Models", Alex Guazzelli, Michael Zeller, Wen-Ching Lin and Graham Williams, The R Journal, Volume 1/1, May 2009.
3. ESRI Shapefile Technical Description, ESRI, July 1998, http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
4. http://www.dbase.com/
5. OGC GeoPackage Encoding Standard, Paul Daisey, version 1.0.1, April 2015, http://www.geopackage.org/spec/
6. Geography Markup Language, OGC, various versions, http://www.opengeospatial.org/standards/gml
7. KML, OGC, various versions, http://www.opengeospatial.org/standards/kml/
8. Web Map Service, OGC, various versions, http://www.opengeospatial.org/standards/wms/
9. Web Feature Service, OGC, various versions, http://www.opengeospatial.org/standards/wfs/
10. About world files, ESRI, http://webhelp.esri.com/arcims/9.2/general/topics/author_world_files.htm
11. GeoTIFF Format Specification, Niles Ritter, version 1.8.2, December 2000, http://www.remotesensing.org/geotiff/spec/geotiffhome.html
12. Web Map Tile Service, OGC, various versions, http://www.opengeospatial.org/standards/wmts
13. http://hadoop.apache.org/index.html
14. http://graphite.readthedocs.io/en/latest/whisper.html
15. https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol
16. https://forgerock.org/opendj/
17. A. Willner, C. Papagianni, M. Giatili, P. Grosso, M. Morsey, Y. Al-Hazmi, I. Baldin, "The Open-Multinet Upper Ontology - Towards the Semantic-based Management of Federated Infrastructures", 10th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities (TRIDENTCOM 2015), Vancouver, Canada, June 2015.
18. Lieberman, J., Singh, R., Goad, C., "W3C Geospatial Vocabulary", available at: https://www.w3.org/2005/Incubator/geo/XGR-geo-20071023/
19. Compton, M., Barnaghi, P., Bermudez, L., García-Castro, R., Corcho, O., Cox, S., & Huang, V. (2012), "The SSN ontology of the W3C semantic sensor network incubator group", Web Semantics: Science, Services and Agents on the World Wide Web, 17, 25-32.
20. Lefort, L., "Ontology for quantity kinds and units: units and quantities definitions", W3C Semantic Sensor Network Incubator Activity, 2005.

# ANNEX I: Summary Table - FAIR Data Management

The following table provides a summary of the Data Management Plan (DMP) issues addressed during the RAWFIE lifetime.

**Table 5 - Addressing FAIR Data Management principles**

| DMP component | Issues to be addressed | Related Sections |
| --- | --- | --- |
| 1. Data summary | State the purpose of the data collection/generation; explain the relation to the objectives of the project; specify the types and formats of data generated/collected; specify if existing data is being re-used (if any); specify the origin of the data; state the expected size of the data (if known); outline the data utility: to whom will it be useful | Section 2 - Dataset description and processing |
| 2. FAIR data; 2.1 Making data findable, including provisions for metadata | Outline the discoverability of data (metadata provision); outline the identifiability of data and refer to standard identification mechanisms - do you make use of persistent and unique identifiers such as Digital Object Identifiers?; outline naming conventions used; outline the approach towards search keywords; outline the approach for clear versioning; specify standards for metadata creation (if any) - if there are no standards in your discipline, describe what type of metadata will be created and how | Section 3 and Section 4 |
| 2.2 Making data openly accessible | Specify which data will be made openly available; if some data is kept closed, provide the rationale for doing so; specify how the data will be made available; specify what methods or software tools are needed to access the data; is documentation about the software needed to access the data included?; is it possible to include the relevant software (e.g. in open source code)?; specify where the data and associated metadata, documentation and code are deposited; specify how access will be provided in case there are any restrictions | Section 3 and Section 4 |
| 2.3 Making data interoperable | Assess the interoperability of your data; specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability; specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability - if not, will you provide a mapping to more commonly used ontologies? | Section 4.1.1, Section 4.1.3, Section 4.2, Section 4.3 |
| 2.4 Increase data re-use (through clarifying licences) | Specify how the data will be licenced to permit the widest re-use possible; specify when the data will be made available for re-use - if applicable, specify why and for what period a data embargo is needed; specify whether the data produced and/or used in the project is useable by third parties, in particular after the end of the project - if the re-use of some data is restricted, explain why; describe data quality assurance processes; specify the length of time for which the data will remain re-usable | Section 3; D2.1 - Federation Policy |
| 3. Allocation of resources | Estimate the costs for making your data FAIR and describe how you intend to cover these costs; clearly identify responsibilities for data management in your project; describe the costs and potential value of long-term preservation | D2.1 - Federation Policy |
| 4. Data security | Address data recovery as well as secure storage and transfer of sensitive data | Section 3.1.4 and Section 5 |
| 5. Ethical aspects | To be covered in the context of the ethics review, ethics section of DoA and ethics deliverables; include references and related technical aspects if not covered by the former | D1.13 - Ethics Issues Report 2 |
| 6. Other | Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any) | Not applicable |
# Introduction

## Scope of D7.4

The purpose of "D7.4 - Data Management Plan" is to provide an overview of the main elements of the data management policy that will be used by the consortium with regard to all datasets that will be generated by the project. It also describes the access granted to all parties interested in the data generated by the RAWFIE system during its development, tests and operations. Finally, it discusses the compliance of the RAWFIE data structure, management and policy with respect to EU regulations and directives. This deliverable will be evolved and updated during the lifespan of the project.

Figure 1 presents the steps and actions involved in a typical data management cycle.

*Figure 1: The data management cycle - Collect (data sets, data streams), Process (data processing, models, analytics), Share (dissemination), Archive (storage, preservation).*

This document is structured as follows: Section 2 describes the data types and metadata that will be collected and processed during the experimentation and the project lifetime, as well as the respective standards and formats. Section 3 contains the data access procedure and the dissemination mechanisms that will take place to provide reusability and access in the future. Finally, Section 4 describes the procedures for archiving and long-term storage.

## Abbreviations

| Abbreviation | Meaning |
| --- | --- |
| DMP | Data Management Plan |
| SOS | Sensor Observation Service |
| OGC | Open Geospatial Consortium |
| GML | Geography Markup Language |
| KML | KML - formerly Keyhole Markup Language |
| WMS | Web Map Service |
| WMTS | Web Map Tile Service |
| WFS | Web Feature Service |
| CSV | Comma-separated values |
| JSON | JavaScript Object Notation |
| GIS | Geographic Information System |
| PMML | Predictive Model Markup Language |
| EDL | Experiment Description Language |
| XML | Extensible Markup Language |
| SensorML | Sensor Model Language |
| O&M | Observations and Measurements |
| SPS | Sensor Planning Service |
| SWE | Sensor Web Enablement |
| ML | Machine Learning |
| DM | Data Management |
| IMC | Inter Module Communication |
| ROS | Robot Operating System |
| LLF | LSTS Log Format |
| ISO | International Organization for Standardization |

**Table 1: Abbreviations**

# Data set reference, standards and metadata

This section will describe identifiers for the data sets to be produced. It will include:

* a description of the data that will be generated or collected
* its origin (in case it is collected)
* its nature and scale, and to whom it could be useful
* whether it underpins a scientific publication
* information on the existence (or not) of similar data and the possibilities for integration and reuse

The following sections include the potential data types to be generated by the project and reference existing suitable standards of the discipline.
## Raw data and sensor observations

Raw sensor data will be collected by on-board mobile sensors located in the UxV devices. These data have not been subjected to processing or any other manipulation. The types of raw data relate to the different sensor types that will be used in the context of the RAWFIE project. We can classify the sensors and the relevant data into the following categories:

* Environmental sensors (temperature, thermal, heat, moisture, humidity)
* Position, angle, displacement, distance, speed, acceleration
* Air pressure
* Proximity (able to detect the presence of nearby objects)
* Navigation instruments

All these sensors form the basis for calculating the experiment context. The context and the metadata that can be retrieved are based on the relevant format and standard. The basic information that will be generated by the RAWFIE sensors is specified in Table 2:

<table>
<tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td> Identifier (ID) </td> <td> String </td> <td> Unique identification of the sensor </td> </tr>
<tr> <td> Owner </td> <td> String </td> <td> Any text that describes the owner/manufacturer of the sensor </td> </tr>
<tr> <td> Sensor type </td> <td> String </td> <td> Description of the sensor type </td> </tr>
<tr> <td> Observed area </td> <td> Coordinates (latitude, longitude, elevation) </td> <td> The geographical area within which the associated observations were made </td> </tr>
<tr> <td> Phenomenon </td> <td> String </td> <td> The phenomenon description (e.g., temperature) </td> </tr>
<tr> <td> Observed result </td> <td> Integer or String </td> <td> The observed value that is related to the phenomenon </td> </tr>
<tr> <td> Unit of measurement </td> <td> String or Character </td> <td> The unit of the measured phenomenon (e.g., degree Celsius °C) </td> </tr>
<tr> <td> Date and time </td> <td> Timestamp </td> <td> Sequence of characters specifying when the observation took place </td> </tr>
<tr> <td> Offering ID </td> <td> String </td> <td> Text description that can be used for grouping purposes </td> </tr>
</table>

**Table 2: Sensor observation data and metadata**

There are already some similar initiatives and EU projects that have generated and collected sensor observations, but for different purposes and applications. For instance, the Fed4FIRE project has collected such data (e.g., environmental data) from different experiment executions. These data are strictly tied to the experimentation scenarios, which may combine different data types (e.g., sensor data and geospatial data); therefore, there is no possibility to integrate the existing sensor observation data for RAWFIE purposes. The project will collect and generate such data from its own experiment executions.

#### Standards

There are already some standards and formats to encapsulate and manage the information from sensor observations and raw data. The sensor observation standard that will be adopted by RAWFIE is yet to be decided. However, some existing standards are:

* Sensor Observation Service (SOS) [1]

The SOS is an interface specified by the OGC consortium to allow access to, and distribution of, sensor descriptions and observations. The specification leverages the Observations and Measurements (O&M) specification to encode observations and the Sensor Model Language (SensorML) specification to encode sensor descriptions. Both of these formats are based on Extensible Markup Language (XML).
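To make the encoding style concrete, the sketch below serializes one observation record, carrying the fields of Table 2, as XML. It is illustrative only: the element names are simplified stand-ins chosen for this example, not the exact O&M or SensorML schema tags.

```python
# Illustrative only: encodes the Table 2 fields as a simple XML record.
# Element names are stand-ins, not the exact O&M/SensorML schema tags.
import xml.etree.ElementTree as ET

def encode_observation(sensor_id, owner, sensor_type, lat, lon, elev,
                       phenomenon, result, unit, timestamp, offering_id):
    obs = ET.Element("Observation")
    ET.SubElement(obs, "Identifier").text = sensor_id
    ET.SubElement(obs, "Owner").text = owner
    ET.SubElement(obs, "SensorType").text = sensor_type
    ET.SubElement(obs, "ObservedArea",
                  latitude=str(lat), longitude=str(lon), elevation=str(elev))
    ET.SubElement(obs, "Phenomenon").text = phenomenon
    ET.SubElement(obs, "ObservedResult").text = str(result)
    ET.SubElement(obs, "UnitOfMeasurement").text = unit
    ET.SubElement(obs, "DateTime").text = timestamp
    ET.SubElement(obs, "OfferingId").text = offering_id
    return ET.tostring(obs, encoding="unicode")

# Hypothetical values for a single temperature observation
print(encode_observation("uxv-07-temp-0", "RAWFIE", "temperature sensor",
                         38.0, 23.7, 10.0, "air temperature", 21.4,
                         "degree Celsius", "2016-06-01T12:00:00Z", "offering-1"))
```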
The SOS standard defines a Web-based interface (Web Service) that allows querying observations, sensor metadata or representations of observed features. Further, it provides means to register new sensors or remove existing ones. It also defines operations to insert new sensor observations. The SOS operations follow the general pattern of other OGC Web Services and inherit or re-use, when needed, elements defined previously.

* Sensor Model Language (SensorML) [2]

SensorML provides models and XML encodings for describing any process related to a sensor system. Processes described in SensorML define their inputs, outputs, parameters and method, and also provide relevant metadata. This standard covers sensors and actuators as well as computational processes applied pre- and post-measurement. The main objective is to enable interoperability, first at the syntactic level and later at the semantic level (by using ontologies and semantic mediation), so that sensors and processes can be better understood by machines, utilised automatically in complex workflows, and easily shared between intelligent sensor web nodes. This standard is one of several implementation standards produced under OGC’s Sensor Web Enablement (SWE) activity. In RAWFIE, SensorML can be used to describe different types of sensors (e.g. environmental, air pressure).

* Observations and Measurements (O&M) [3]

O&M defines a conceptual schema and encoding for observations, and for features involved in sampling when making observations. O&M provides document models for the exchange of information describing observation acts and their results, both within and between different scientific and technical communities. This encoding is an essential dependency for the OGC Sensor Observation Service (SOS) Interface Standard. This standard can be leveraged by RAWFIE to describe the measurements derived from different sensor types.

* Sensor Planning Service (SPS) [4]

The SPS is an interface standard defining interfaces for queries that provide information about the capabilities of a sensor and how to task the sensor. The standard is designed to support queries that have the following purposes: a) to determine the feasibility of a sensor planning request, b) to submit and reserve/commit such a request, c) to inquire about the status of such a request, d) to update or cancel such a request, and e) to request information about other OGC Web services that provide access to the data collected by the requested task.

## Processed data, models and analytics

Processed data refer to the models and the statistics that will be generated by the stream analytics platform. Typical models include classification and outlier detection. The data mining and machine learning community traditionally relies on the exchange and publication of datasets. This is achieved by a number of relevant data repositories, which will be described in a later section, and much less through models. Nevertheless, there is a standard, PMML [5], which allows the description and sharing of learned models between different analytical environments. The publicly available datasets are used to compare and test different learning algorithms; this is one of the means that the community has used to ensure the replicability of scientific results and the fair comparison of different learning methods. As mentioned above, the learning and mining community has focused on the exchange of data rather than of models.
Once the raw data for a given learning task are available, different teams can test their own approaches and algorithms on them. Probably the most well-known repository for datasets used by data mining and machine learning teams is the UCI machine learning repository [6]. We will discuss in more detail the availability and use of existing repositories in a following section on the dissemination of the data that will be generated by the project. Within the UCI repository one may find a number of datasets similar in nature to the data that will be generated within RAWFIE. These are mainly time-series datasets from different application domains such as finance, social media, physical activity sensors, chemical sensors and more. Nevertheless, these datasets are not directly relevant for the RAWFIE project. Some of them might be used to provide additional testing datasets for the learning and mining algorithms that will be developed in RAWFIE.

#### Standards

Here we mention standards that can be used for describing the results of the data analytical process. We will consider the use of PMML [7], the Predictive Model Markup Language, to describe the generated models, provided that the ones we generate are covered by the current PMML version (v 4.2) [5]. Briefly, PMML is an industrial standard that is used for the exchange of machine learning and data mining models between different applications and data analytical environments. It is based on XML and offers support for models generated as a result of different data mining tasks such as association rule discovery, classification, regression and clustering. A description of a data mining model in PMML contains the following elements:

* a header which provides general information about the model, such as the analytical environment that generated it and generation timestamps
* a data dictionary describing the dataset from which the model was generated
* a data transformation component describing transformations that are applied to the data prior to modeling, such as normalization or discretization
* the model component describing the learned model.

## Geospatial data

Geospatial data appears in various formats and relations in the RAWFIE system. Sometimes the data itself has a spatial aspect, sometimes it is just metadata (i.e. descriptive data belonging to the original data). Basically, geometry data can be distinguished into vector and raster data.

Vector data means that entities consist of one or more coordinates that form a geometric primitive (geometry type). The commonly supported geometric primitives are points, multi-points, lines, multi-lines, polygons and multi-polygons. The difference between the simple and the multi geometries is that one multi geometry consists of one or many simple geometries of the same type. There are also geometry collections consisting of geometries of different types, but they are not very commonly used. Apart from that, specific vector formats may also support ellipses, splines, etc.

Raster data means that an entity is a picture (of one or more sub-entities) in conjunction with a projection definition that specifies the exact extent of this picture in terms of a specific coordinate reference system. Using this definition, the picture can be rendered at the right place and with the right shape in a map that uses this reference system.
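As a concrete illustration of such a projection definition, the short sketch below applies the six-parameter affine transform of an ESRI world file (one of the raster formats listed in Table 4 below) to map pixel indices to map coordinates; the file name and parameter values used here are hypothetical.

```python
# Sketch of raster geo-referencing with an ESRI world file: its six lines
# (A, D, B, E, C, F) define the affine transform from pixel indices
# (col, row) to map coordinates. Values below are hypothetical.
def load_world_file(path):
    with open(path) as fh:
        a, d, b, e, c, f = [float(line) for line in fh if line.strip()]
    return a, d, b, e, c, f

def pixel_to_map(col, row, params):
    a, d, b, e, c, f = params
    x = a * col + b * row + c  # easting (map x)
    y = d * col + e * row + f  # northing (map y)
    return x, y

# 0.5 m square pixels, no rotation, top-left pixel centre at (500000, 4200000)
params = (0.5, 0.0, 0.0, -0.5, 500000.0, 4200000.0)
print(pixel_to_map(10, 20, params))  # -> (500005.0, 4199990.0)
```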
The following list of geospatial information gives an overview of the types of data with a spatial reference that will possibly be generated and / or collected inside RAWFIE.

<table>
<tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td> UxV location </td> <td> Point </td> <td> The location of an UxV during an experiment. Used in the Visualisation Engine </td> </tr>
<tr> <td> UxV course </td> <td> Line </td> <td> The current course an UxV is taking, i.e. an extrapolation of the current position together with its direction, to know where the UxV will probably be in the next seconds or minutes. </td> </tr>
<tr> <td> Waypoints </td> <td> Point </td> <td> An ordered list of waypoints for UxV navigation / predefined routes. They can have absolute coordinates or relative ones in respect to the current position (e.g. ‘move 30 meters in the direction of 45°’). Used for experiment authoring and in the resource controller during execution. </td> </tr>
<tr> <td> Geo-fence </td> <td> Polygon </td> <td> Regions where an event or alarm should be triggered when an UxV enters or leaves. Used in experiment authoring (EDL) </td> </tr>
<tr> <td> Sensor measurement location </td> <td> Point </td> <td> Location where a sensor measurement has been recorded. It is metadata for sensor data types; see also the section on raw data and sensor observations. </td> </tr>
<tr> <td> Detected object </td> <td> any </td> <td> An object detected by sensors or evaluation of sensor values. The type of object highly depends on the task to be performed by the UxV, e.g.: * border surveillance: intruders / potential threats * firefighting: trees, fire or empty space which would form a natural block to the spreading fire * monitoring of water canals: cracks in the canal’s wall structure. The position or geo-referenced outline of the object is geospatial metadata of the experiment results </td> </tr>
<tr> <td> Testbed position or area </td> <td> Point / Polygon </td> <td> The fixed location of the testbed (metadata). In the simple case it is just a coordinate; in the more precise case it is the area of the testbed. Used in experiment authoring (EDL) and resource exploring </td> </tr>
<tr> <td> Testbed surroundings </td> <td> any </td> <td> The surroundings of a testbed. These could influence the experiments. Potential objects could be: * barriers (buildings, trees etc.) * streets * water ways * water surface * digital elevation model (above and under water). Used in experiment authoring (EDL) and resource exploring as well as for validation of experiments in their aftermath. </td> </tr>
</table>

**Table 3: Geospatial data overview**

During the implementation of the project, new kinds of geospatial data will possibly be examined, as new services will be integrated that were not foreseen at the outset of the project. When this happens, this list will be updated to reflect the new data types.

#### Standards

Geospatial data is stored and processed in various, quite diverse formats. Internally, a common representation of the geospatial data will be used, which simplifies the data handling. This representation is yet to be decided. However, imported data may come in any of the mentioned formats, or even a different one. A list of common formats and standards is given in the table below. Many of the standards are from the OGC [8].
<table>
<tr> <th> **Format** </th> <th> **Description** </th> </tr>
<tr> <td> Shapefile [9] </td> <td> * de facto standard (designed by ESRI) to store vector data * supported by almost all GIS systems * only one geometry type per Shapefile * consists of multiple files * attribute data stored in a dBASE (version IV) database (.dbf file) [10] </td> </tr>
<tr> <td> GeoPackage [11] </td> <td> * recently developed OGC standard to store all kinds of geospatial related data (vector features, tile matrix sets of imagery and raster maps at various scales, schema, metadata) * database file that can be accessed and updated directly without intermediate format translations * can be seen as a modern replacement for shapefiles with the following advantages: only one file instead of multiple files, smaller file sizes, a wider spectrum of attribute types, and fewer constraints (e.g. on the length of attribute names) </td> </tr>
<tr> <td> GML [12] </td> <td> * _Geography Markup Language_ * OGC standard to exchange vector data via XML files * very flexible and adaptable to individual needs * used in many open source systems </td> </tr>
<tr> <td> KML [13] </td> <td> * _Keyhole Markup Language_ * OGC standard to exchange vector data via XML files * mainly used by Google Earth </td> </tr>
<tr> <td> WMS [14] </td> <td> * _Web Map Service_ * OGC standard protocol for serving geo-referenced map images (raster data) * images are generally generated by a map server (most using data from a GIS database) </td> </tr>
<tr> <td> WMTS [18] </td> <td> * _Web Map Tile Service_ * OGC standard protocol for serving geo-referenced raster data * very similar to WMS, but with much simpler request interfaces * the raster data provided is normally pre-calculated, hence the server-side computing time is very low, making WMTS a very fast and responsive service </td> </tr>
<tr> <td> WFS [15] </td> <td> * _Web Feature Service_ * OGC standard protocol which provides an interface for geographical feature requests (vector data) </td> </tr>
<tr> <td> World-File [16] </td> <td> * de facto standard (designed by ESRI) to store raster data * supported by almost all GIS systems * a text file (in conjunction with a picture file) that describes the projection of a picture into a specific coordinate system </td> </tr>
<tr> <td> GeoTiff [17] </td> <td> * public domain metadata standard * allows geo-location information to be embedded within a Tiff image file </td> </tr>
</table>

**Table 4: Geospatial data formats and standards**

Many other formats also exist (structured text files, e.g. formatted as CSV, JSON (GeoJSON) or XML, as well as many proprietary binary formats) that are used to store geospatial data.

## Image data from UxV platforms

### Robotnik platforms

Robotnik’s platforms make use of ROS (Robot Operating System) standard formatting; image data is carried by the messages of the _sensor_msgs_ package (e.g., _sensor_msgs/Image_). There are three standard transports to get video images in ROS:

* "raw": The default transport
* "compressed": JPEG or PNG image compression
* "theora": Streaming video using the Theora codec
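As a minimal sketch of how such a stream is consumed, the snippet below subscribes to a camera topic published with the "compressed" transport. It assumes a running ROS environment, and the topic name is an assumption that depends on the actual camera driver.

```python
# Minimal rospy sketch: listen to a "compressed" image transport topic.
# The topic name is hypothetical and depends on the camera driver used.
import rospy
from sensor_msgs.msg import CompressedImage

def on_image(msg):
    # msg.format is e.g. "jpeg"; msg.data holds the compressed byte stream
    rospy.loginfo("received %s frame of %d bytes", msg.format, len(msg.data))

rospy.init_node("rawfie_image_listener")
rospy.Subscriber("/camera/image_raw/compressed", CompressedImage, on_image)
rospy.spin()
```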
Normally, every camera driver is able to publish all of these formats; the _image_transport_ ROS package is the recommended tool for doing so. Robotnik platforms usually integrate camera sensors such as Microsoft Kinect or Asus XTION devices and AXIS PTZ cameras. The latter also provide their own RTSP web server streaming MPEG-4.

The ROS package sensor_msgs defines messages for commonly used sensors, including cameras and scanning laser rangefinders. A lot of data can be found in ROS form, as many other applications and programs make use of such data. For an example of how to work with specialized ROS messages, please refer to MathWorks [19]. As with other critical components/software of these platforms, the communication with non-ROS systems shall be done by using tools such as the Rosbridge server (see D4.1), which involves JSON libraries. In addition, regarding image data exchange, a better way to get the video stream is by using the package _mjpeg_server_, which connects to the ROS topic and publishes the video stream via an MJPEG server. Some other tools like _cv_bridge_ (http://wiki.ros.org/cv_bridge) can be used to interface ROS and OpenCV by converting ROS images into OpenCV images, and vice versa.

### MST platforms

Images and related data collected by MST vehicles are provided to the user as is, which means that the data format is a characteristic of the sensor/camera being used. As of the date of this writing, the MST vehicles use two different types of cameras. The first type is an industrial camera that records video as individual JPEG images; the user may configure the desired frame rate, which can be between 4 and 15 frames per second. Each JPEG image contains embedded navigation data (position and pose) encoded using the Exif format, and the maximum available image resolution is 1376x1032 pixels. This type of camera is usually used to take georeferenced pictures of the seabed under low-light conditions, occasionally with the aid of an external synchronized illumination module. The second type of camera is capable of recording and streaming video encoded with MPEG-4 (H.264) at a frame rate of 30 frames per second and a resolution of 1280x720 pixels. This camera is normally used without the aid of artificial illumination, and the video collected is commonly streamed in real time to a control center for surveillance purposes.

#### Standards

Both types of cameras used by MST comply with the following industry standards:

* ISO/IEC 10918-1:1994, Information technology - Digital compression and coding of continuous-tone still images: Requirements and guidelines.
* CIPA DC-008-2012, Exchangeable image file format for digital still cameras: Exif Version 2.3
* ISO/IEC 14496 MPEG-4 Standard Parts 1 to 31

## Simulated data

This section describes the simulated data and formats that Robotnik’s and MST’s platforms use during experiment simulations, which will possibly be leveraged by the RAWFIE project.

### Robotnik platforms

Robotnik platform simulations are normally carried out in the Gazebo simulator [20]. Although it is not the only simulation software compatible with ROS, it is the tool most commonly used by ROS users because of its good integration with the platform. The main reason is that Gazebo is capable of simulating a wide range of testbeds, from a mapped indoor office to outdoor environments, with only a map and the model of the robot. To summarize, this model structure is as follows:

* _Database_
* _database.config_ : Metadata about the database (this file is now populated automatically from CMakeLists.txt)
* _model_1_ : A directory for model_1
* _model.config_ : Meta-data about model_1
* _model.sdf_ : SDF description of the model
* _meshes_ : A directory for all COLLADA and STL files
* _materials_ : A directory which should only contain the textures and scripts subdirectories
* _textures_ : A directory for image files (jpg, png, etc.)
* _scripts_ : A directory for OGRE material scripts
* _plugins_ : A directory for plugin source and header files

Robotnik has developed these models for its platforms and there is no need to rebuild them. The key to these simulations lies in the information that Gazebo processes. To carry out these simulations, nodes (processes) similar to the ones that control the real robot are launched. The real difference lies in who subscribes to and publishes the data. For instance, Gazebo generates data such as raw images or joint movements for the robot to process, while it translates movement commands generated by the robot into movements of the model in the simulator.

#### Standards

All the simulated data uses the standard ROS sensor_msgs format [21], which defines messages for commonly used sensors, including cameras and scanning laser rangefinders; the package is BSD-licensed and actively maintained (maintainer: Tully Foote). In addition, Robotnik simulations follow the geometry_msgs format [22], which provides messages for common geometric primitives such as points, vectors, and poses. These primitives are designed to provide a common data type and to facilitate interoperability throughout the system. This package is likewise BSD-licensed and actively maintained.

### MST platforms

MST's simulated and real vehicles use the same data format for inter-module communication and for data storage to persistent media. This allows simulated vehicles to use and replay data from real missions (e.g., environmental data, bathymetry). In doing so, while the vehicle's kinematics are simulated using the vehicle's model and physical characteristics, environmental sensor data can be simulated or taken from values collected in the field.

#### Standards

The data format in question is called IMC (Inter Module Communication) and comprises different logical message groups for networked vehicle and sensor operations. It defines an infrastructure that is modular and provides different layers for control and sensing. IMC defines the message entity as having an associated uniquely identifying number and consisting of a (possibly empty) sequence of data fields capable of representing fixed-width integers, floating point numbers, variable length byte sequences and inline messages (messages within messages). Integers can be signed or unsigned, with sizes ranging from 8 to 64 bits. Floating point numbers have two sizes: 32 and 64 bits. Messages are prefixed with a header and suffixed with a footer to form a packet. Header and footer entities are defined as non-empty sequences of data fields and have the same structure for all packets. In order to transmit a message or save it to persistent storage, the message has to be encapsulated in a packet and serialized. Serialization is performed by translating the data fields of the packet entities (header, message and footer) to a binary stream in the same order as they were defined.
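The sketch below illustrates this header + message + footer packing in Python. The field layout and synchronization number are simplified placeholders, not the actual IMC wire format; the authoritative definition is the IMC XML document discussed next.

```python
# Simplified illustration of packet serialization (NOT the real IMC layout).
import struct

SYNC = 0xFE54  # hypothetical synchronization number

def serialize(msg_id, payload):
    # header: sync number, message id, payload size (big-endian here)
    header = struct.pack(">HHH", SYNC, msg_id, len(payload))
    body = header + payload
    footer = struct.pack(">H", sum(body) & 0xFFFF)  # toy checksum
    return body + footer

def sender_byte_order(packet):
    # A recipient inspects the sync number to deduce the sender's byte order;
    # a non-matching value would be checked against the byte-swapped SYNC.
    (sync,) = struct.unpack_from(">H", packet)
    return "big-endian" if sync == SYNC else "little-endian"

pkt = serialize(250, struct.pack(">dd", 41.18, -8.70))  # a Location-like payload
print(sender_byte_order(pkt), len(pkt), "bytes")
```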
The first field of the packet header is the synchronization number, used to mark the beginning of a packet and to denote its protocol version. By inspecting the synchronization number, the recipient is able to deduce the byte order of the remaining data fields and perform the necessary conversions for correct interpretation. Using this approach, communication between nodes with the same byte order incurs no byte-order conversion overhead, and communication between nodes with different byte orders only introduces the conversion overhead when deserializing packets.

The complete IMC protocol is defined in a single eXtensible Markup Language (XML) document that, when changed, can be verified against an XML Schema (XSD). In the XML definition, each message field must have at least a name, one abbreviation (used for code generation) and a type. Optionally, units and a range of permissible values can also be defined. Having an XML document describing the protocol has proven to be very practical for continuous development and testing: just after agreeing upon a specific version of IMC, two nodes can use the XML document to understand each other. Python and Java programs are used to automatically generate the IMC protocol reference documentation, and optimized implementations exist in C++ and Java. Generating native code from the XML document has provided not only flexibility but also the performance needed for real-time execution on resource-constrained computers.

In addition to the main serialization format described above, there are two complementary serialization formats with specific intents. One is the LLF (LSTS Log Format), a text format used for logging IMC, amenable to direct human understanding and easier to parse directly by many standard applications, e.g. Matlab, Microsoft Excel, and custom mission review and analysis software. In order to make it possible to review data from past missions, the LLF format had to be independent of the originating IMC protocol description (since the message format can change over time). The approach taken was to define a tab-separated log format where, for each column (message field), there is a header describing its data type, name and the units to be used when representing the data. The other format is IMC-XML, which can be used as a simplified serialization format itself for inter-module interoperability. The main reason for this additional format is to enable the integration of web-based components and web-enabled third-party sensors into large-scale data dissemination applications.

# Data sharing

This section describes the access procedures, the technical mechanisms for dissemination and the necessary software for enabling re-use.

## Data access procedures

In RAWFIE, data access will be specified with regard to the project phases, i.e. the project implementation period and the post-EC-funding phase. During the project implementation, data access will be set to public by default, with the possibility for experimenters to set it to private according to their preferences. In that case, some experiments' data will be restricted and the results will not be shared with the public. The facilities will be open to experimenters, beyond participation in the Open Calls for Experiments, until the end of the project. Interested parties wishing to conduct experiments on the RAWFIE facilities can contact the project consortium at any time to explore opportunities.
After the project life-cycle, when the project’s outcomes will be exploited in both commercial and publicly funded cases, a different policy will be followed. Data access will be set to private by default and only in specific cases to public, according to the experimenters' preferences. The experimenters and other stakeholders will have the opportunity to access the data through the project portal or the respective repository. The identification and the type of the repository (i.e., institutional, standard repository for the discipline) where the research data will be stored will be defined in an upcoming iteration of this deliverable.

Finally, RAWFIE project results will fully comply with the EU Regulations and Directives. More specifically, the data generated by the project will follow these Directives:

* Data Protection Directive, Directive 95/46/EC
* Data Retention Directive, Directive 2006/24/EC
* INSPIRE Directive (Infrastructure for Spatial Information in the European Community), Directive 2007/2/EC
* Marine Strategy Framework Directive, Directive 2008/56/EC
* Water Framework Directive, Directive 2000/60/EC

#### Possible integration into EC frameworks

The EC is supporting numerous initiatives, in particular for the consistent representation of common concepts, such as building and city modelling, representation and parameters (CIRCABC). RAWFIE may contribute to such initiatives by using or extending existing models.

## Mechanisms for dissemination and sharing research data

This section outlines the technical mechanisms for dissemination and the necessary software for enabling re-use.

### Mechanisms for dissemination

A specific dissemination strategy will be developed in RAWFIE in parallel with the implementation activities, with the aim of keeping potentially interested stakeholders informed about the availability of, and the possibility to access, new research data. Research data include experiments’ results, as well as any other kind of data described in Section 2, generated within the platform during the execution of the experiments. The strategy used for the dissemination of research data will include:

* identification of the different types of stakeholders (users or groups of users) that will be the intended “recipients” of the dissemination
* identification of the most suitable tools or mechanisms to be used for the dissemination, according to the type of audience
* implementation / use of the abovementioned dissemination tools or mechanisms

Possible stakeholders interested in the data generated by the project are:

* Experimenters
* Universities and research institutes
* UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers)
* Owners of institutional repositories (if any)

Potential dissemination mechanisms, together with a description of how they will be used and a mapping to the different stakeholder groups they could reach, are provided in Table 5.
<table>
<tr> <th> **Dissemination mechanism** </th> <th> **How it will be used** </th> <th> **Stakeholder type** </th> </tr>
<tr> <td> Project website </td> <td> News will be published with information about executed experiments and data availability </td> <td> * Experimenters * Universities and research institutes * UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers) * Owners of institutional repositories (if any) </td> </tr>
<tr> <td> Newsletter / email </td> <td> Periodic news sent to selected stakeholder groups will also be used to share information about the availability of relevant research data </td> <td> * Experimenters * Universities and research institutes </td> </tr>
<tr> <td> Publications </td> <td> At certain points in time, results and statistics coming from the experiments will be published in scientific papers, together with the information on how to access them </td> <td> * Experimenters * Universities and research institutes * UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers) * Owners of institutional repositories (if any) </td> </tr>
<tr> <td> Public Web feeds </td> <td> Feeds will be created where interested stakeholders can subscribe in order to receive notifications about newly available research data </td> <td> * Experimenters * Universities and research institutes * UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers) * Owners of institutional repositories (if any) </td> </tr>
<tr> <td> Social Media (e.g. Twitter, Facebook, LinkedIn) </td> <td> Notifications about newly available research data will be regularly published through the social media channels set up by the project </td> <td> * Experimenters * Universities and research institutes * UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solutions providers) * Owners of institutional repositories (if any) </td> </tr>
</table>

**Table 5: Mapping of mechanisms to stakeholder groups**

### Software tools for sharing research data

Further to the dissemination strategy that will be developed in order to inform interested stakeholders about the availability of new research data, and to the repository concepts explained in the following section, a number of technical solutions will be taken into consideration to allow some of the generated data to be shared in an almost real-time manner. Software mechanisms or interfaces for sharing some of the data types mentioned in this document so far include:

* Geospatial data
  * WMS and WFS servers (e.g. using GIS tools like GeoServer [23] or MapServer [24]) or WMTS (using tools like e.g. MapProxy [29])
  * Custom functionalities to export the data (e.g. download from the Web Portal) as Shapefile or GeoPackage
  * Custom functionalities to export the data to Google Maps / Google Earth [30] (widely used GIS applications)
* SensorML services and related standard software interfaces (see Section 2.1) to disseminate sensor measurements

### Repository concept for enabling re-use

The machine learning and data mining community has a long history of sharing and re-using datasets to test and benchmark algorithms and develop new learning concepts. We now give a list of the most well-known such repositories:

* The oldest machine learning repository is the one maintained by the University of California, Irvine, currently hosting more than 300 datasets [6]. Datasets there usually come in the form of .names and .data files, with the first providing the metadata describing the dataset and the second being a simple .csv file containing the actual data.
* A rather recent repository is mldata.org, hosted by the machine learning group at the Technical University of Berlin [27], which has been supported by the PASCAL network [28]. It contains more than 800 datasets described in a number of different formats such as HDF5, XML, CSV, ARFF, LibSVM, Matlab and Octave.
* KDnuggets maintains a repository of data repositories [29].

The above repositories can be used for dissemination and re-use of the data generated by RAWFIE within the Machine Learning (ML) and Data Management (DM) communities. Another option for sharing and re-use could be the exploitation of the Linked Open Data initiative and the reuse, if appropriate, of the technologies that have been developed in a number of European initiatives and projects in the area of open data, such as DaPaaS [31] and PlanetData [32]. However, this might be less appropriate for the kind of data that will be generated by the RAWFIE infrastructure, since Linked Open Data deals basically with the description of entities, their properties and their relations to other entities, while in RAWFIE we will mainly be generating measurement data.

# Archiving and preservation

RAWFIE will consider all the necessary procedures for archiving and the provision of long-term preservation. Suitable file formats and appropriate processes for organizing files will be followed. In organizing the different data files, the following steps could be considered:

* File version control
* File structure
* Directory structure and file naming conventions.

In addition, appropriate data documentation will be provided for long-term access, and the metadata that may be needed will be fully analysed. For instance, to improve the documentation process we could classify the metadata in two levels: project-level and data-level. Project-level metadata describes the “who, what, where, when, how and why” of the dataset, which provides context for understanding why the data were collected and how they were used. Examples of project-level metadata:

* Name of the project
* Dataset title
* Project description
* Dataset abstract
* Principal investigator and collaborators
* Contact information

Data-level metadata are more granular. They explain, in much greater detail, the data and the dataset. Examples of data-level metadata:

* Data origin: experimental, observational, raw or processed, models, images, etc.
* Data type: integer, boolean, character, floating point, etc.
* Data acquisition details: sensor deployment methods, experimental design, sensor calibration methods, etc.
* File types: CSV, mat, tiff, xlsx, HDF
* Data processing methods
* Dataset parameter list: variable names, description of each variable, units

The external repositories that can be used for the purposes of archiving and long-term storage were described above (see Section 3.2.2). These repositories are free, so there will be no expenses for the RAWFIE consortium. In case additional procedures are needed for the long-term maintenance, the project consortium will cover the respective costs.
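As an illustration of this two-level documentation scheme, the sketch below serializes project-level and data-level metadata as JSON; all field values are hypothetical placeholders.

```python
# Sketch of two-level dataset documentation (all values are placeholders).
import json

metadata = {
    "project_level": {
        "project_name": "RAWFIE",
        "dataset_title": "USV temperature survey, testbed A",
        "project_description": "Experimentation with unmanned vehicles",
        "dataset_abstract": "Water temperature readings collected by USVs",
        "principal_investigator": "N.N.",
        "contact_information": "contact@example.org",
    },
    "data_level": {
        "data_origin": "observational, raw",
        "data_types": {"timestamp": "string", "temperature": "float"},
        "acquisition_details": "on-board sensor, factory calibration",
        "file_types": ["CSV"],
        "processing_methods": "none (raw observations)",
        "parameters": [
            {"name": "temperature", "description": "water temperature",
             "unit": "degree Celsius"},
        ],
    },
}

print(json.dumps(metadata, indent=2))
```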
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0389_RAWFIE_645220.md
1. **Introduction**

**1.1 Scope of D7.5**

The present document is the second in a series of three documents related to the RAWFIE Data Management policy. These documents define the rules applied to all datasets generated during the project. The purpose of “D7.5 (b) - Data Management plan” is to provide an overview of the main elements of the data management plan after the second year of the project. It also describes the policy adopted to grant access to the parties interested in the data generated by the RAWFIE platform during its development, tests and operation. Finally, it discusses the compliance of the RAWFIE data structure, management and policy with respect to the EU regulations and directives. This deliverable will be continuously updated throughout the lifespan of the project.

Figure 1 presents the main steps and actions involved in a typical data management cycle, as described in the previous version of the deliverable: Collect (data sets, data streams), Process (data processing, models, analytics), Share (dissemination) and Archive (storage, preservation).

**Figure 1: Data Management Cycle**

The RAWFIE Data Management Plan (DMP) is realised in accordance with the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 1 . It also makes the first attempt to be compliant with the Guidelines on FAIR Data Management in Horizon 2020 2 . The compatibility with FAIR will be finalised in D7.6, i.e. the third version of the deliverable.

This document’s structure is as follows: Section 2 gives an overview of the data description, data types and data processing in the RAWFIE ecosystem. Section 3 contains the data access procedures and the dissemination mechanisms that will take place to provide reusability and access in the future. In Section 4, software tools for handling the processing of research data during the execution of an experiment and the project lifetime are presented, as well as standards and formats. Finally, Section 5 describes the procedures for archiving and long-term storage.

2. **Dataset reference and processing**

RAWFIE data related to the execution of the experiments are distinguished into the following categories:

* _**Dynamic data**_ : this data refers to the information that describes a UxV during an experiment in terms of system information, sensor types, central processing unit usage, storage usage, location, etc.
* _**Static data**_ : this data refers to the characteristics of testbeds and resources. The RAWFIE federation adds static information in advance, like resource descriptions and properties, types of sensors on UxVs, UxV characteristics, testbed location, etc.
* _**Raw data**_ : data produced during the execution of the experiments. Any kind of sensor that participates in an experiment generates raw data. This data is pushed to a message bus, which publishes it upon request, either from an experimenter or from a device that participates in the experiment, and stores it for a short time interval.
* _**Geospatial data**_ : this data refers to the geospatial information of data (data with a spatial reference or metadata) in the RAWFIE system. The RAWFIE system will possibly generate and collect this data. Although this data is part of the static and dynamic data, it is presented separately (referenced in D7.4).
### 2.1.1 Dynamic Data from experiments

The RAWFIE UxV Protocol was devised to abstract the differences between UxVs and expose a simple, compact, extensible, and expressive interface to monitor and control UxVs in a platform-agnostic way. The RAWFIE infrastructure can support the addition of new UxVs by creating adapters or translators that convert UxV-specific information to the RAWFIE UxV Protocol. The messages exchanged with a UxV are defined in Table 1.

**Table 1 - Dynamic Data Overview**

<table>
<tr> <th> **Message** </th> <th> **Data** </th> <th> **Data types** </th> <th> **Description** </th> </tr>
<tr> <td> Header </td> <td> sourceSystem, sourceModule, time </td> <td> string, string, long </td> <td> All messages of the UxV Message API contain the same header, used to encode basic information about the dispatching entity. </td> </tr>
<tr> <td> CPU Usage </td> <td> header, value </td> <td> Header, int </td> <td> The amount of CPU resources that is currently in use. </td> </tr>
<tr> <td> Storage Usage </td> <td> header, available, value </td> <td> Header, int, int </td> <td> Measurement of storage usage. </td> </tr>
<tr> <td> Fuel Usage </td> <td> header, value </td> <td> Header, int </td> <td> Amount of available fuel. </td> </tr>
<tr> <td> Location </td> <td> header, latitude, longitude, height, n, e, d, depth, altitude </td> <td> Header, double, double, float, double, double, double, float (nullable), float (nullable) </td> <td> The Location message encodes the position of the UxV in the world. It was designed to support all kinds of UxVs, even when they are not capable of localizing themselves in the world. This message allows the UxV to encode its position in absolute (latitude, longitude, height) or relative (north/east/down) coordinates. This message shall be published to the message bus and shall be consumed by any entity that needs to know the location of the UxV. </td> </tr>
<tr> <td> Attitude </td> <td> header, phi, theta, psi </td> <td> Header, float, float, float </td> <td> Angles describing the attitude of a rigid body (i.e., Euler angles). </td> </tr>
<tr> <td> Linear Velocity </td> <td> header, x, y, z </td> <td> Header, float, float, float </td> <td> Vector quantifying the direction and magnitude of the measured linear velocity that a system is exposed to. </td> </tr>
<tr> <td> Angular Velocity </td> <td> header, x, y, z </td> <td> Header, float, float, float </td> <td> Vector quantifying the direction and magnitude of the measured angular velocity that a system is exposed to. </td> </tr>
<tr> <td> Linear Acceleration </td> <td> header, x, y, z </td> <td> Header, float, float, float </td> <td> Vector quantifying the direction and magnitude of the measured linear acceleration that a system is exposed to. </td> </tr>
<tr> <td> Current </td> <td> header, value </td> <td> Header, float </td> <td> Measurement of electrical current. </td> </tr>
<tr> <td> Voltage </td> <td> header, value </td> <td> Header, float </td> <td> Measurement of electrical voltage. </td> </tr>
<tr> <td> Sensor Reading Scalar </td> <td> header, value, unit </td> <td> Header, float, Unit </td> <td> This message encodes scalar measurements of sensors. </td> </tr>
<tr> <td> Abort </td> <td> header </td> <td> eu.rawfie.uxv.Header </td> <td> This command instructs the UxV to stop any executing actions and enter standby mode. </td> </tr>
<tr> <td> Goto </td> <td> header, location, speed, timeout </td> <td> eu.rawfie.uxv.Header, eu.rawfie.uxv.Location, float (nullable), float </td> <td> This command instructs a system to move to a given location at a given speed. </td> </tr>
<tr> <td> KeepStation </td> <td> header, location, radius, speed, duration </td> <td> eu.rawfie.uxv.Header, eu.rawfie.uxv.Location, float, float (nullable), float (nullable) </td> <td> This command instructs a system to keep station at a given location. </td> </tr>
</table>

### 2.1.2 Static Data from experiments

Static data consist mainly of information related to the initial definition of an experiment. This information is usually defined prior to an experiment execution and may be updated after its completion. The term ‘static’ does not mean that the information is never updated, but mainly that it does not directly interfere with the actual data generated during the execution of an experiment. Static data mostly relate to the involved resources, sensor types, testbeds and scripts associated with the execution of an experiment, as well as identifiers needed to identify or track an experiment within the RAWFIE platform. These static data are directly maintained/stored in (or can be extracted from) appropriate relational database tables defined at the platform level. Table 2 gives a complete list of them, appropriately categorized:

**Table 2 - Static Data Overview**

<table>
<tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td colspan="3"> **Experiment Related Data** </td> </tr>
<tr> <td> Experiment Id </td> <td> String </td> <td> Identifier for a defined experiment </td> </tr>
<tr> <td> Experiment Name </td> <td> String </td> <td> A (user-friendly) name of the experiment </td> </tr>
<tr> <td> Experiment Description </td> <td> String </td> <td> A short description of the experiment </td> </tr>
<tr> <td> User Id </td> <td> Integer </td> <td> Internal identifier that can be used for obtaining additional information about the user that defined the experiment (i.e. name, surname etc.) </td> </tr>
<tr> <td> EDL script </td> <td> String </td> <td> Contains the EDL script initially defined for an experiment (information considered static since it is defined prior to the actual execution) </td> </tr>
<tr> <td> Testbed Id </td> <td> String </td> <td> Identifier of the testbed where the experiment is expected to take place (experiments cannot span multiple testbeds) </td> </tr>
<tr> <td> Resource Ids </td> <td> String[] </td> <td> Identifiers for the resources assigned to an experiment </td> </tr>
<tr> <td colspan="3"> **Execution Related Data** </td> </tr>
<tr> <td> Execution Id </td> <td> String </td> <td> Identifier uniquely identifying an executing/executed experiment within the RAWFIE system </td> </tr>
<tr> <td> Start Execution </td> <td> Timestamp </td> <td> Timestamp denoting the start of execution </td> </tr>
<tr> <td> End Execution </td> <td> Timestamp </td> <td> Timestamp denoting the completion of execution </td> </tr>
<tr> <td> Experiment Status </td> <td> Integer </td> <td> Value indicating the execution status of an experiment (i.e. 0=BOOKED, 1=ONGOING, 2=COMPLETED). This field may be updated during the course of experiment execution </td> </tr>
<tr> <td colspan="3"> **Reservation Related Data** </td> </tr>
<tr> <td> Reservation Id </td> <td> String </td> <td> Identifier of the user-level reservation associated with an experiment </td> </tr>
<tr> <td> User Id </td> <td> Integer </td> <td> Internal identifier that can be used for obtaining additional information about the user that defined the reservation (i.e. name, surname etc.). This value should be the same as the <User Id> mentioned in the **Experiment Related Data** category </td> </tr>
<tr> <td colspan="3"> **Resource Related Data** 3 </td> </tr>
<tr> <td> Resource Name </td> <td> String </td> <td> User-friendly name of the resource </td> </tr>
<tr> <td> Resource Description </td> <td> String </td> <td> A short description of the resource </td> </tr>
<tr> <td> Resource Status </td> <td> Integer </td> <td> The latest status of the resource </td> </tr>
<tr> <td> Resource Type </td> <td> Integer </td> <td> Identifier denoting the type of the resource (i.e. UAV, UGV, USV etc.) </td> </tr>
</table>

### 2.1.3 Raw data

The types of raw data generated by UxVs relate to the different sensor types that take part in the context of the RAWFIE project. We can classify the sensors and the relevant data into the following categories:

* Environmental sensors (temperature, thermal, heat, moisture, humidity, air pressure)
* Position, angle, displacement, distance, speed, acceleration
* Proximity (able to detect the presence of nearby objects)
* Navigation instruments

**2.1.4 Geospatial data**

Geospatial data appears in various formats and relations in the RAWFIE system. Sometimes the data itself has a spatial aspect, sometimes it is just metadata (i.e. descriptive data belonging to the original data). The following list (Table 3) gives an overview of the types of data with a spatial reference that will possibly be generated and / or collected inside RAWFIE.

**Table 3 - Geospatial Data Overview**

<table>
<tr> <th> **Data** </th> <th> **Data type** </th> <th> **Description** </th> </tr>
<tr> <td> UxV location </td> <td> Point </td> <td> The location of an UxV during an experiment. Used in the Visualisation Engine </td> </tr>
<tr> <td> UxV course </td> <td> Line </td> <td> The current course an UxV is taking, i.e. an extrapolation of the current position together with its direction, to know where the UxV will probably be in the next seconds or minutes. </td> </tr>
<tr> <td> Waypoints </td> <td> Point[] </td> <td> A time-ordered list of waypoints for UxV navigation / predefined routes. They can have absolute coordinates or relative ones in respect to the current position (e.g. ‘move 30 meters in the direction of 45°’). Used for experiment authoring and in the resource controller during execution. </td> </tr>
<tr> <td> Geo-fence </td> <td> Polygon </td> <td> Regions where an event or alarm should be triggered when an UxV enters or leaves. Used in experiment authoring (EDL) </td> </tr>
<tr> <td> Sensor measurement location </td> <td> Point </td> <td> Location where a sensor measurement has been recorded. It is metadata for sensor data types. </td> </tr>
<tr> <td> Detected object </td> <td> any </td> <td> An object detected by sensors or evaluation of sensor values.
The type of object highly depends on the task to be performed by the UxV, e.g.: * border surveillance: intruders / potential threats * firefighting: trees, fire or empty space which would form a natural block to the spreading fire * monitoring of water canals: cracks in the canal’s wall structure. The position or geo-referenced outline of the object is geospatial metadata of the experiment results </td> </tr>
<tr> <td> Testbed position or area </td> <td> Point / Polygon </td> <td> The fixed location of the testbed (metadata). In the simple case it is just a coordinate; in the more precise case it is the area of the testbed. Used in experiment authoring (EDL) and resource exploring </td> </tr>
<tr> <td> Testbed surroundings </td> <td> any </td> <td> The surroundings of a testbed. These could influence the experiments. Potential objects could be: * barriers (buildings, trees etc.) * streets * water ways * water surface * digital elevation model (above and under water). Used in experiment authoring (EDL) and resource exploring as well as for validation of experiments in their aftermath. </td> </tr>
</table>

### 2.1.5 Processed data, models and analytics

Processed data refer to the outcome of the models and statistical methods that will be generated by the stream analytics platform. Typical models include classification and outlier detection. Since most of our algorithms will be working on streaming data, there is no specific model that can be used to generalize to all possible time instances. Instead, we will open-source the architecture, as is commonly done in the Deep Learning community; examples of this are the GoogleNet 4 and Visual Geometry Group 5 (VGG) style deep architectures. Providing the model architecture via a version control system such as Git 6 will allow for iterative development and reproduction by anyone who chooses to utilize these algorithms.

The data mining and machine learning communities traditionally rely on the exchange and publication of datasets. This is achieved by a number of relevant data repositories, which will be described in the next section, and much less through models. The publicly available datasets are used to compare and test different learning algorithms, and this is one of the means that the community has used to ensure the replicability of scientific results and the fair comparison of different learning methods. Once the raw data for specific learning tasks are available, different teams can test their own algorithms on them. Probably the most well-known repository for datasets used by data mining and machine learning teams is the UCI machine learning repository [6]. We will discuss in more detail the availability and use of existing repositories in a following section on the dissemination of the data that will be generated by the project. Within the UCI repository one may find a number of datasets similar in nature to the data that will be generated within RAWFIE. These are mainly time-series datasets from different application domains such as finance, social media, physical activity sensors, chemical sensors and more. Nevertheless, these datasets are not directly relevant for the RAWFIE project. Some of them might be used to provide additional testing datasets for the learning and mining algorithms that will be developed in RAWFIE.

# 3 Open Access of RAWFIE outcomes

The scientific and technical results of the RAWFIE project are expected to be of high interest for the scientific community.
Throughout the duration of the project, RAWFIE partners may disseminate (subject to their legitimate interests) the obtained results and knowledge to the relevant scientific communities through contributions to journals and international conferences, mainly in the fields of IoT, wireless communications, robotics, etc. This dissemination of the research outcomes should first be secured with any relevant protection (e.g., Intellectual Property Rights (IPR)). The RAWFIE project will also produce, transform and use data that is of interest and has a value for the next phases of the RAWFIE deployment on the one hand, and for other initiatives and contexts on the other hand. This chapter addresses the access to these outcomes.

Any publication will come after the more general decision on whether to go for a publication directly or to first seek protection by registering IPR. If the Steering Committee decides that the scientific research will not be protected through IPR, but will rather be published directly, then the project is aware that Open Access must be granted to all scientific publications resulting from Horizon 2020 actions. This will be done in accordance with the Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. The process shown in Figure 2 was taken from the aforementioned document.

(**Figure 2: Process for handling access to research results**. Research results lead either to a decision to disseminate/share, through publications (gold OA or green OA) and the depositing of research data, with access and use free of charge or with restricted access and/or use, or to a decision to exploit/protect, through patenting and business plans, models and the data value chain.)

In the ‘gold’ Open Access (OA) approach to a peer-reviewed scientific research article, the scientific publisher immediately provides the article in Open Access mode. The associated costs shift away from readers. The most common business model is a one-off payment by authors. These costs, often referred to as Article Processing Charges (APCs), are usually paid by the researcher's university or research institute, or by the agency funding the research. In other cases, subsidies or other funding models cover the costs of Open Access.

In the ‘green’ Open Access approach to peer-reviewed scientific research articles, the author, or a representative, self-archives (deposits) the published article or the final peer-reviewed manuscript in an online repository before, at the same time as, or after publication. Some publishers request that the Open Access mode be applied only after an embargo period has elapsed. This embargo period allows the scientific publisher to recoup its investment by selling subscriptions and charging pay-per-download/view fees during an exclusivity period.

## 3.1 Categories of RAWFIE data outputs for the Open Access mode

The following categories of RAWFIE outputs apply to free-of-charge Open Access:

* Public Deliverables
* Conference/Workshop presentations (which may, or may not, be accompanied by papers, see below)
* Conference/Workshop papers and articles for specialist magazines; and
* Research (Experiment) Data and metadata

Furthermore, the provision of specific data sets to selected organisations will be possible in order to fulfil the H2020 requirements of “Grand Challenges” 7 for third parties to access, mine, exploit, reproduce and disseminate the results of the RAWFIE project. The beneficiaries will have access to the information about the tools and instruments, for the sake of validating the results they will produce.
### 3.1.1 Open Access to RAWFIE Public Deliverables

##### 3.1.1.1 Data Sharing

Open Access to the public deliverables will be achieved in RAWFIE by depositing the data into online repositories. The public deliverables will be stored in one or more of the following locations:
* The RAWFIE Web site 11, after approval by the Project Officer (if a document is subsequently updated, the original version will be replaced by the latest version)
* The RAWFIE page on the Cordis 12 web site, which will host all public deliverables as submitted to the European Commission (EC)

##### 3.1.1.2 Archiving and Preservation

Open Access to the project public deliverables will be maintained for at least 3 years following the project completion, through the Website.

##### 3.1.1.3 Archived deliverables

The following table (Table 4) summarizes the deliverables archived as of 31/12/2016, which are available on the RAWFIE web page 7.

**Table 4 - Archived deliverables**

<table>
<tr> <th> **Deliverable** </th> <th> **WP** </th> </tr>
<tr> <td> _D3.1 - Specification & Analysis of RAWFIE Components Requirements (a)_ </td> <td> WP3 </td> </tr>
<tr> <td> _D3.2 - Specification & Analysis of RAWFIE Components Requirements (b)_ </td> <td> WP3 </td> </tr>
<tr> <td> _D4.1 - High Level Design and Specification of RAWFIE Architecture_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.2 (a) - Design and Specification of RAWFIE Components_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.4 - High Level Design and Specification of RAWFIE Architecture (2nd version)_ </td> <td> WP4 </td> </tr>
<tr> <td> _D4.5 - Design and Specification of RAWFIE Components_ </td> <td> WP4 </td> </tr>
<tr> <td> _D6.1 - RAWFIE Operational Platform Testing and Integration Report (a)_ </td> <td> WP6 </td> </tr>
<tr> <td> _D6.2 - RAWFIE Platform Validation (a)_ </td> <td> WP6 </td> </tr>
<tr> <td> _D7.1 - Building the RAWFIE Community_ </td> <td> WP7 </td> </tr>
<tr> <td> _D7.4 - Data Management Plan (a)_ </td> <td> WP7 </td> </tr>
<tr> <td> _D8.1 - Open Calls, Report on Selection_ </td> <td> WP8 </td> </tr>
</table>

11 _http://www.rawfie.eu/deliverables_
12 _http://cordis.europa.eu/project/rcn/194297_en.html_

### 3.1.2 Open Access to RAWFIE Conferences, Workshops and Presentations

##### 3.1.2.1 Data Sharing

Open Access to conference/workshop presentations will be achieved in RAWFIE by depositing the data into an online research data repository. The presentations will be stored in the promotion material section of the RAWFIE Web site 8.

##### 3.1.2.2 Archiving and Preservation

Open Access to project public presentations will be maintained for at least 3 years following the project completion, through the Website.

##### 3.1.2.3 Archived Presentations

As of 31/12/2016, no presentations have been released online.
### 3.1.3 Open Access to RAWFIE Publications

##### 3.1.3.1 Data Sharing

As previously mentioned and described in section 1.1, there are two main routes to providing Open Access to these publications, namely 'gold' or 'green'. In either case, Open Access to its publications will be achieved in RAWFIE by depositing the data into online research data repositories. The publications will be stored in one or more of the following locations:
* An institutional research data repository
* The ZENODO 14 repository, set up through the EC-funded OpenAIRE 9 project
* The RAWFIE Website 10

The ZENODO repository is recommended by the EC's OpenAIRE initiative in order to unite all the research results arising from EC-funded projects. ZENODO is an easy-to-use and innovative service that enables researchers, EU projects and research institutions to share and showcase multidisciplinary research results (data and publications) that are not part of existing institutional or subject-based repositories. Namely, ZENODO enables users to:
* Easily share the long tail of small data sets in a wide variety of formats, including text, spreadsheets, audio, video, and images, across all fields of science.
* Display and curate research results, get credited by making the research results citable, and integrate them into existing reporting lines to funding agencies like the EC.
* Easily access and reuse shared research results.
* Define the different licenses and access levels that will be provided.

Furthermore, ZENODO assigns a Digital Object Identifier 11 (DOI) to all publicly available uploads, in order to make the content easily and uniquely citable. The repository also makes use of the OAI-PMH protocol (Open Archives Initiative Protocol for Metadata Harvesting) to facilitate content search through the use of defined metadata. This metadata follows the schema defined in INVENIO 12 (a free software suite enabling one to run one's own digital library or document repository on the web) and is exported in several standard formats such as MARCXML 13, Dublin Core 20 and the DataCite 14 Metadata Schema, according to the OpenAIRE Guidelines.

In addition, with ZENODO as the repository, the short- and long-term storage of the research data will be secured, since the data are stored safely in the same cloud infrastructure as research data from CERN's Large Hadron Collider 15. Furthermore, ZENODO uses digital preservation strategies to store multiple online replicas and to back up the files (data files and metadata are backed up on a nightly basis). Therefore, this repository fulfils the main requirements imposed by the EC for data sharing, archiving and preservation of the data generated in H2020 projects.

##### 3.1.3.2 Publication Reference Identity (Digital Object Identifier - DOI)

The DOI uniquely identifies a document. This identifier will be allocated by the publisher, in the case that the document is included in the 'gold' Open Access, or by OpenAIRE, in the case that the document is archived in ZENODO.

##### 3.1.3.3 Archiving and Preservation

Open Access to project publications will be maintained for at least 3 years following the project completion, through the above repositories.

##### 3.1.3.4 Archived Publications

1. K. Kolomvatsos, C. Anagnostopoulos, S. Hadjiefthymiades, 'Distributed Localized Contextual Event Reasoning under Uncertainty', accepted for publication in IEEE Internet of Things Journal, 2017, DOI 10.1109/JIOT.2016.2638119
2. Md Fasiul Alam, Stathes Hadjiefthymiades, 'Advanced Hardware-Supported In-Network Processing for the Internet of Things', to be presented at ICC 2017 (2nd International Conference on Internet of Things, Data and Cloud Computing), March 2017, Cambridge, UK.

### 3.1.4 Open Access to RAWFIE Research Data

Apart from the Open Access to public deliverables, presentations and scientific publications, the Open Research Data Pilot also applies to two types of data:
* The data, including associated metadata, needed to validate the results presented in scientific publications (underlying data);
* Statistical data and metadata generated:
  * in the course of the project, or
  * during the execution of experiments.

A lot of the information generated during the experiments will form statistical data that will be used for the purpose of dynamically tuning the resources and plans. This data could also be used after the execution of the experiment (post-mortem), for diagnostics or further analysis of the experiment execution. This is the case, for example, in network behaviour reporting, done through the analysis of link quality, latency, throughput, etc.

As experimental data contain information about the positions of UxVs, their operational measurements (CPU usage, battery consumption, etc.) and the sensor-collected measurements, the RAWFIE consortium should take into consideration testbeds that are characterized as "sensitive areas", like the Skaramagas naval base. Staging processing is required, as discussed further in section 5.2. After cleaning and filtering, the data may also be used as reference data, as described in Section 2.1.5. A complete description of the statistical information cannot be given here, but the main categories in which such data are generated are: _networking_, _processor_ and _machine load_, _database transaction rates_, etc.

In other words, beneficiaries will be able to choose which data, in addition to the data underlying publications, they make available in Open Access mode. According to this requirement, the underlying data related to the scientific publications will be made publicly available (see section 3.1.1). This will allow other researchers to make use of that information to validate the results, thus providing a starting point for their investigations, as expected by the EC through its Open Access policy.

By design, RAWFIE will avoid any unnecessary collection of personal data. In cases where some limited personal data collection is required, each entity that accesses data commits itself to respect data confidentiality throughout its entire processing cycle. More explicitly, data should:
* Be fairly and lawfully processed;
* Be used for limited purposes;
* Be handled in an adequate, relevant and not excessive way;
* Be limited to what is needed and relevant for the research;
* Be collected on a voluntary basis under explicit consent from the end-users;
* Be accessed in aggregate form or anonymously;
* Not be kept longer than necessary;
* Be used in accordance with the data subject's rights;
* Be processed without transferring it to countries with absent or insufficient data protection policies.

More generally, the RAWFIE platform is governed by the following principles, which should be respected by all users and partners:
* Respect of privacy, personal data protection and individual freedom of choice.
* Proportionality
  * By default, most RAWFIE experiments will avoid or limit the collection of personal data beyond what is necessary and relevant to the experiment being carried out.
* Dissociation
  * If personal data is collected, the system will dissociate any identifying information, such as the email address, from the collected data.
* Principle of prior informed consent from the data originator.
* Protection of minors, by restricting personal data collection from non-adults.
* Collected data are stored on servers located in European countries.
* Collective responsibility
  * All users and stakeholders are required to respect the data handling rules and to inform the project privacy officer of any detected attempt at a privacy breach.
* Universality
  * The privacy and personal data protection standards followed by the RAWFIE platform are binding for the users and other interacting parties, regardless of their country of residence.
* No personal data is shared or transmitted to third parties, including governments and public agencies (except in the, unlikely, case of a judiciary decision).

# 4 Research Data – Tools and Standards

## 4.1 Tools

In this section, two major tools of the RAWFIE system are presented: the Apache Avro 16 data serialization tool, which provides a common framework that every robot can adhere to, agnostically of its own system, through the adaptor of the UxV Node; and the SAMANT ontology, an extension of the Open-Multinet (OMN) ontology suite, which semantically describes the dynamic and static data of the RAWFIE ecosystem.

### 4.1.1 Apache AVRO formatted messages and Kafka Schema Registry

##### 4.1.1.1 AVRO

According to its own documentation, the Apache Avro tool is a data serialization system with some useful capabilities. Avro provides:
* Rich data structures.
* A compact, fast, binary data format.
* A container file, to store persistent data.
* Remote procedure calls (RPC).
* Simple integration with dynamic languages.

Code generation is not required to read or write data files, nor to use or implement RPC protocols; it is an optional optimization, only worth implementing for statically typed languages.

In order to use the Avro schemas, RAWFIE adopts the Apache Kafka based Confluent Platform 17, which provides an easy way to build real-time data pipelines and streaming applications. Having a single, central streaming platform for the RAWFIE infrastructure simplifies connecting data sources to Kafka and building applications with Kafka, as well as securing, monitoring, and managing the Kafka infrastructure.

Avro, being a schema-based serialization utility, accepts schemas as input. Although various schema languages are available, Avro follows its own standard for defining schemas. These schemas describe the following details:
* type of file (record by default)
* location of the record
* name of the record
* fields in the record, with their corresponding data types

Using these schemas, serialized values can be stored in a binary format using less space; the values are stored without the use of any metadata. Avro schemas are defined in the JavaScript Object Notation (JSON) 18 document format, which is a lightweight, text-based data interchange format. This facilitates implementation in languages that already have JSON libraries.

##### 4.1.1.2 Kafka Schema Registry

One of the most important concerns is managing the Avro schemas and how those schemas should evolve. A Kafka Schema Registry is adopted for that purpose, which provides a serving layer for metadata. It provides a RESTful interface for storing and retrieving Avro schemas.
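As an illustration, the sketch below shows what a minimal Avro schema for a UxV sensor measurement might look like, and how it could be registered over the registry's REST interface. The field names, subject name and registry URL are assumptions made for illustration, not the actual RAWFIE message definitions.

```python
import json
import requests  # third-party HTTP client

# Hypothetical Avro schema for a UxV sensor measurement (illustrative fields only).
measurement_schema = {
    "type": "record",
    "name": "SensorMeasurement",
    "namespace": "eu.rawfie.messages",   # assumed namespace
    "fields": [
        {"name": "uxvId",     "type": "string"},
        {"name": "timestamp", "type": "long"},
        {"name": "latitude",  "type": "double"},
        {"name": "longitude", "type": "double"},
        {"name": "value",     "type": "double"},
        {"name": "unit",      "type": "string"},
    ],
}

# Register the schema under a subject; the Confluent Schema Registry expects
# the schema itself as a JSON-escaped string inside the request body.
REGISTRY_URL = "http://localhost:8081"   # assumed registry location
SUBJECT = "sensor-measurements-value"    # assumed subject name

response = requests.post(
    f"{REGISTRY_URL}/subjects/{SUBJECT}/versions",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": json.dumps(measurement_schema)}),
)
response.raise_for_status()
print("Registered schema id:", response.json()["id"])
```

Producers would then serialize messages against this registered schema, while the registry's compatibility checks, described below, guard against breaking changes as the schema evolves.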
The registry stores a versioned history of all schemas, provides multiple compatibility settings, and allows schemas to evolve according to the configured compatibility setting. It also provides serializers that plug into Kafka clients and handle schema storage and retrieval for Kafka messages sent in the Avro format. This Schema Registry is heavily based on the Java API of the Confluent Schema Registry 26.

## 4.2 Ontologies for RAWFIE – 1st Open Call Winner

Over the past decade, semantic information models have regularly been used to address interoperability issues in managing federated experimental infrastructures (e.g., NDL-OWL 27, NOVI-IM 28, NML 19, INDL 20, etc.). One of the most recent efforts, the OWL-encoded OMN ontology suite, builds upon existing ontologies. OMN is still evolving, supported by a community of experts within the FIRE and GENI communities. The ontology describes federated infrastructures and resources as generally as possible, while still supporting the management of their lifecycle in federated environments.

OMN consists of a hierarchy of ontologies, as depicted in Figure 3. A detailed description of the OMN ontology suite is provided in [17]. The OMN ontology at the highest level defines basic concepts and properties, which are then re-used and specialized in the subjacent ontologies. Included at every level are (i) axioms, such as the disjointness of each class; (ii) links to concepts in existing ontologies, such as NML, INDL and NOVI; and (iii) properties that have been shown to be needed in related ontologies. In a nutshell:
* The Federation ontology describes federations, along with their members and related infrastructures.
* The Lifecycle ontology describes the whole lifecycle of resource/service management in the federation. This includes requests, reservation (schedule for allocation), provisioning and release.
* A resource in the OMN ontology is defined as any provisionable, controllable, and/or measurable entity. The Resource ontology augments the definition of the Resource class in the main OMN upper ontology with concepts such as Node, Interface, Link, etc.
* The Component ontology covers concepts that are considered descendants of the Component class defined in the OMN upper ontology (e.g. CPU, Sensor, Core, Port, Image, etc.).
* A service is defined in the OMN ontology as any entity that has an API through which it can be used. A service may further depend on a Resource. The Service ontology covers different services in the relevant application areas (e.g., Portal, etc.).
* The Monitoring ontology is directly linked to the other OMN ontologies and facilitates interoperability by enabling common monitoring data to be exchanged federation-wide. It is built on existing ontologies, such as the NOVI monitoring ontology.

The OMN ontology suite is designed in a flexible, extensible way to cover specific domains. Examples of such domains include wireless (e.g., Wi-Fi or sensors), SDN, Cloud computing, etc.
**Figure 3: Open-Multinet ontology suite**

#### 4.2.1 SAMANT OMN Extended Ontology

The extension of the OMN ontology for the description of the RAWFIE resources is twofold: it adopts many concepts from the ontologies of the OMN suite, and it includes two new ontologies to cover specifically the domains of UxVs and sensors. Furthermore, these ontologies include concepts from other existing relevant ontologies on sensors and measurements.

##### 4.2.1.1 OMN UxV Ontology

Figure 4 illustrates the structure of the OMN UxV (omn-domain-uxv) ontology. This ontology is available in Turtle 21 format.

**Figure 4: OMN UxV ontology**

This ontology describes the resources of RAWFIE testbeds, their reservation lifecycle and the attributes of RAWFIE members. Each RAWFIE testbed is described by the Testbed class, which includes all the attributes of RAWFIE testbeds (name, description, location and UxV support) and is linked with the User and UxV classes. The User class describes RAWFIE members and includes their personal information and role in RAWFIE testbeds. The UxV class describes the resources of each RAWFIE testbed. More specifically, it contains basic information about UxVs (name, description, location, UxV type) and is linked with many classes that describe the features of UxVs. The Connection class represents the communication capabilities of each UxV. The Resource Status class describes the current availability status of UxVs, the Health Status class describes their health status, and the Config Parameters class includes specific configuration parameters of each UxV. The reservation status of each UxV is described by the Lease class. The UxV class is linked with the System class of the OMN Sensor ontology, which describes the specifications of the sensors attached to UxVs.

Figure 5 depicts the description of an unmanned ground vehicle (UgV) named UgV1. UgV1 is part of the UgV Testbed (testbed) and is of type UgV. Its connection features and configuration parameters are described by the UgV Connection and UgV Config Parameters individuals, respectively. The health status of UgV1 is defined by the term "OK" and the UgV1 Health Information individual. Its resource status is described by the "Sleep Mode" status. The UgV1 Lease individual includes its reservation status. UgV1 Point 3D describes the exact location of UgV1 in terms of latitude, longitude and altitude. Finally, UgV1 Sensor System contains all the information on the sensors attached to UgV1.

**Figure 5: UxV Example**

The OMN UxV ontology uses predefined concepts (classes) and links (properties) from the OMN ontology suite: the OMN Federation ontology is used for the description of testbeds, the OMN Resource ontology for the description of UxV resources, the OMN Lifecycle ontology for the reservation process of UxVs, and the OMN Wireless ontology for the description of UxV communication capabilities. Finally, for the location of RAWFIE testbeds and UxVs, the GeoRSS Feature Model and ontology [18] is used.

##### 4.2.1.2 OMN Sensor Ontology

The OMN Sensor ontology describes the sensors attached to RAWFIE resources; these sensors record measurements for a variety of phenomena. It focuses on the sensor characteristics that are involved in the selection of the appropriate UxV. Thus, the following are considered features of interest:
* Feature of Interest (Air, Ground, Water)
* Measured Property (Temperature, Velocity, Pressure, Electric Current Rate, etc.)
* Unit of the measured property
* Sensor description (vendor name, product name, serial number, description)
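Before turning to the sensor ontology details, a minimal Turtle sketch of the UgV1 example of Figure 5 might look as follows. The prefixes, class and property names are assumptions modelled on Figures 4 and 5, not the normative omn-domain-uxv definitions:

```turtle
@prefix omn-uxv: <http://open-multinet.info/ontology/omn-domain-uxv#> .  # assumed namespace
@prefix ex:      <http://example.org/rawfie#> .                          # example instance data

ex:UgV1 a omn-uxv:UgV ;                          # UgV1 is an unmanned ground vehicle
    omn-uxv:isResourceOf    ex:UgVTestbed ;      # part of the UgV testbed
    omn-uxv:hasHealthStatus "OK" ;               # current health status
    omn-uxv:hasResourceStatus omn-uxv:SleepMode ;# current availability status
    omn-uxv:hasLease        ex:UgV1Lease ;       # reservation status
    omn-uxv:hasLocation     ex:UgV1Point3D ;     # latitude / longitude / altitude
    omn-uxv:hasSensorSystem ex:UgV1SensorSystem .
```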
For the description of the sensors, the Semantic Sensor Networks (SSN) ontology, developed by the W3C Semantic Sensor Networks Incubator Group (SSN-XG) [19], and the ontology for quantity kinds and units [20] were used. Figure 6 depicts the structure of the OMN Sensor ontology.

**Figure 6: OMN Sensor Ontology**

The set of sensors of each UxV is described by the ssn:System class. The System class is linked with the UxV class of the OMN UxV ontology. All basic sensors of the System are described by the corresponding subclass of the ssn:SensingDevice class. The measured property of each sensor is represented by the qu:QuantityKind class (property) and its subclasses. These classes are linked with the ssn:FeatureOfInterest class, which defines whether the property corresponds to an "Air", "Ground" or "Water" environment.

Figure 7 depicts the description of a sensor system attached to the unmanned ground vehicle (UgV) named UgV1. This sensor system (UgV1MultiSensor) is equipped with an odometry sensor (UgV1OdometryMultiSensor) and a laser sensor (UgV1LaserSensor). The UgV1 Odometry Multi Sensor individual contains the basic sensors for measuring velocity (UgV1OdometryVelocityOrSpeedSensor) and rotational speed (UgV1OdometryRotationalSpeedSensor), respectively. The velocity sensor is linked with the 'metre per second' individual and the velocity individual (observed property). The rotational speed sensor is linked with the 'radian per second' individual and the 'normal rotational speed' individual (observed property). The UgV1 Laser Sensor individual is connected with the 'metre' unit and distance (observed property) individuals. The 'normal rotational speed', velocity and distance properties are linked with the ground individual of the Feature of Interest class.

**Figure 7: UgV Sensor System Example**

## 4.3 Standards

### 4.3.1 Data Analytics

Here, we mention standards that can be used for describing the results of the data analytics process. Spark ML supports the export of models in PMML [2] (Predictive Model Markup Language), provided that the models we generate are covered by the current PMML version (v4.2) [1]. While we will not be explicitly providing PMML files for our models (as noted in Section 2.1.5), users can freely export their models (or variations of our models) in PMML via Spark and the Apache Zeppelin interface.

Briefly, PMML is an industrial standard that is used for the exchange of machine learning and data mining models between different applications and data analytical environments. It is based on XML and offers support for models generated as a result of different data mining tasks, such as association rule discovery, classification, regression and clustering. A description of a data mining model in PMML contains the following elements:
* a header, which provides general information about the model, such as the analytical environment that generated it, generation timestamps, etc.
* a data dictionary describing the dataset from which the model was generated
* a data transformation component describing transformations that are applied to the data prior to modelling, such as normalization or discretization
* the model component describing the learned model

### 4.3.2 Geospatial Data

Geospatial data is stored and processed in various, quite diverse formats. Internally, a common representation of the geospatial data will be used, which greatly simplifies data handling. This representation has not been decided yet.
However, imported data may come in any of the formats mentioned in section 2.1.4, or even in a different one. A list of common formats and standards is given in the table below. Many of the standards are from the OGC [6].

<table>
<tr> <th> **Format** </th> <th> **Description** </th> </tr>
<tr> <td> Shapefile [3] </td> <td>
* de facto standard (designed by ESRI) to store vector data
* supported by almost all GIS systems
* only one geometry type per Shapefile
* consists of multiple files
* attribute data stored in a dBASE (version IV) database (.dbf file) [4]
</td> </tr>
<tr> <td> GeoPackage [5] </td> <td>
* recently developed OGC standard to store all kinds of geospatial-related data (vector features, tile matrix sets of imagery and raster maps at various scales, schema, metadata)
* database file that can be accessed and updated directly, without intermediate format translations
* can be seen as a modern replacement for Shapefiles, with the following advantages: only one file instead of multiple files, smaller file sizes, a wider spectrum of attribute types, and fewer constraints (e.g. on the length of attribute names)
</td> </tr>
<tr> <td> GML [6] </td> <td>
* _Geography Markup Language_
* OGC standard to exchange vector data via XML files
* very flexible and adaptable to individual needs
* used in many open source systems
</td> </tr>
<tr> <td> KML [7] </td> <td>
* _Keyhole Markup Language_
* OGC standard to exchange vector data via XML files
* mainly used by Google Earth
</td> </tr>
<tr> <td> WMS [8] </td> <td>
* _Web Map Service_
* OGC standard protocol for serving geo-referenced map images (raster data)
* images are generally generated by a map server (most using data from a GIS database)
</td> </tr>
<tr> <td> WMTS [12] </td> <td>
* _Web Map Tile Service_
* OGC standard protocol for serving geo-referenced raster data
* very similar to WMS, but with much simpler request interfaces
* the raster data provided is normally pre-calculated, hence the server-side computing time is very low, making WMTS a very fast and responsive service
</td> </tr>
<tr> <td> WFS [9] </td> <td>
* _Web Feature Service_
* OGC standard protocol which provides an interface for geographical feature requests (vector data)
</td> </tr>
<tr> <td> World-File [10] </td> <td>
* de facto standard (designed by ESRI) to store raster data
* supported by almost all GIS systems
* a text file (in conjunction with an image file) that describes the projection of the image into a specific coordinate system
</td> </tr>
<tr> <td> GeoTiff [11] </td> <td>
* public domain metadata standard
* allows geo-location information to be embedded within a TIFF image file
</td> </tr>
</table>

Many other formats also exist (structured text files, e.g. formatted as CSV, JSON (GeoJSON) or XML, as well as many proprietary binary formats) that are used to store geospatial data.
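As a sketch of how such heterogeneous inputs could be normalised into a single internal representation, the snippet below reads a Shapefile and re-exports it as a GeoPackage using the GeoPandas library. The file names, layer name and target coordinate system are assumptions for illustration:

```python
import geopandas as gpd  # builds on Fiona/GDAL for format handling

# Read a Shapefile (hypothetical testbed outline) into a GeoDataFrame.
testbed = gpd.read_file("testbed_area.shp")

# Reproject to WGS84 (EPSG:4326) so that all imported layers share one
# common coordinate reference system.
testbed = testbed.to_crs(epsg=4326)

# Re-export as a GeoPackage: a single file, with fewer attribute constraints.
testbed.to_file("testbed_area.gpkg", layer="testbed", driver="GPKG")
```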
# 5 Data Sharing, Archiving and Preservation

RAWFIE will apply all the necessary procedures for archiving and long-term preservation of both the experimental data and any data made available through the Open Access RAWFIE outcomes.

## 5.1 Data sharing

A specific dissemination strategy will be developed in RAWFIE in parallel with the implementation activities, with the aim of keeping potentially interested stakeholders informed about the availability of, and the possibility to access (in case they conduct an experiment), their experimental data or any kind of data described in Section 3. The strategy used for the dissemination of research data will include:
* identification of the different types of stakeholders (users or groups of users) that will be the intended "recipients" of the dissemination
* identification of the most suitable tools or mechanisms to be used for the dissemination, according to the type of audience
* implementation/use of the above-mentioned dissemination tools or mechanisms

Possible stakeholders interested in the data generated by the project are:
* Experimenters
* Universities and research institutes
* UxV and, in general, technology manufacturers (e.g. sensor or wireless communication solution providers)
* Owners of institutional repositories (if any)

### 5.1.1 Sensor data from experiments

Sensor data are distributed via the Kafka message bus inside the RAWFIE system and are persistently stored in a central database (accessible only to trusted RAWFIE components). They are made available to the experimenter via the Visualisation and Data Analysis Tools. Furthermore, each experimenter will have access to the raw data of their own experiments.

### 5.1.2 Data analysis results

The Data Analysis Tool uses the Graphite 22 framework to visualise sensor values and the analysis results.

### 5.1.3 Exploitation

A number of business cases are briefly described below:
* **Patenting:** UxV manufacturers can patent several components arising from the needs of the RAWFIE experimentation environment, such as a redundant propulsion system for UAVs.
* **Model valorisation**: Since there is not much direct business to be made out of the models themselves, the valorisation of the models will probably be done through the use of the RAWFIE platform. For that matter, please refer to the WP2 deliverables.
* **Data valorisation**: The collected data has both an explicit and an implicit value. The explicit value lies in the information that can be extracted from it, after analysis and interpretation, e.g. for tuning or debugging the RAWFIE platform and its components, or for adjusting the parameters of the resources, such as UxVs or testbeds; this value can be commercially traded, as will be the case for the data obtained from any other experiment. The implicit value comes from the nature of this data, which can then be used as reference data by other UxV or testbed owners that would like to introduce their assets or technologies into the RAWFIE infrastructure, for later use as a resource or a service; this value is probably not directly exploitable.

## 5.2 Staging processing for experiment validation streams

The RAWFIE architecture follows the principle of informed consent by end-users. Participants in the experiments will be required to have previously given their consent to take part in them, with a clear understanding of what the collected data are and what their potential use and distribution is.
In order for a user to access the gathered information, he or she will have to register with a dedicated service. During this process, the end-user will be notified about the purpose and the scope of the project via a "Terms and Conditions" notice.

Special cases demand a specific disclosure process (clearance) and specific terms of use. This is, for example, the case for testbeds close to sensitive areas, such as the following testbed:
* Since the testbed boundaries lie within the Naval Fortress of Skaramagas, and due to the proximity of the Salamis Naval Base, in order to prevent any leakage of sensitive information with respect to operational capabilities, all data collected by any means through UxVs will be submitted to a thorough check by a Hellenic Navy Intelligence "Safety committee" prior to public release. Sensitive data might be censored in order to fulfil the above-mentioned restrictions. Specific sensor restrictions will be disseminated to testbed users/experimenters upon commissioning of UxVs and after the sensor capabilities have been notified. Furthermore, any data which does not fall under the prior restriction should be handled by the testbed operators in accordance with EU and national laws concerning personal information privacy.

## 5.3 Data Archiving and Preservation

RAWFIE will apply all the necessary procedures for archiving and the provision of long-term preservation. Suitable file formats and appropriate processes for organizing files will be used. In organizing the different data files, the following steps could be considered:
* File version control
* File structure
* Directory structure and file naming conventions

The different RAWFIE repositories are:
* _**Master Data Repository**_, which contains all the management data sets (experiments, EDL scripts, bookings, testbeds and resources, status information of testbeds and their resources, and so on) of RAWFIE. PostgreSQL [6] with the PostGIS extension was chosen for the implementation, as it is well supported, open source and stable, and able to easily handle geo-referenced data.
* _**Measurements Repository**_, which will use a big data storage system for storing the large number of measurements coming from the sensors on board the UxVs during the experiments. The popular big data solution Hadoop Distributed File System (HDFS) [13] is one of the potential solutions for this purpose; however, the specific technological choice will be detailed in further WP4 deliverables and in WP5. In addition, a NoSQL solution is expected to be adopted in the 2nd implementation iteration to better manage the data sets. Currently, HBase (running on top of HDFS) has been identified for this purpose. HBase supports random, real-time read/write access, with the goal of hosting very large tables atop clusters of commodity hardware. HBase features include (i) consistent reads and writes, (ii) automatic and configurable sharding of tables and (iii) automatic failover support. HBase can be connected with Apache Confluent/Kafka and can use ZooKeeper for the coordination of "truth" across the cluster. As region servers come online, they register themselves with ZooKeeper as members of the cluster. Region servers hold shards of data (partitions of a database table) called "regions". This supports the online streaming of raw data generated by experimenters with no delays in reads and writes. For further interpretation of the raw data, the Analysis Results Repository is used.
* _**Analysis Results Repository**_, which uses a separate database for performing the data analytics tasks over the results of the experiments. The Graphite data analysis framework will be used, together with its database, called Whisper [14].
* _**Users & Rights Repository**_, which uses an LDAP [15] repository, as LDAP is the de facto standard for user management. It stores all user-related data (name, organisation, address, password) and group memberships (role-based access control). The selected implementation is OpenDJ [16].

Except for the Analysis Results Repository, all of the repository systems used (PostgreSQL, HDFS, OpenDJ) support replication and thus provide fault tolerance. In case of data loss in the Analysis Results Repository, its contents can be recomputed from the data stored in the Measurements Repository.

In addition, appropriate data documentation will be provided for long-term access, and the metadata that may be needed will be analysed in full. For instance, to improve the documentation process, the metadata could be classified into two levels: project-level and data-level. Project-level metadata describes the "who, what, where, when, how and why" of the dataset, which provides context for understanding why the data were collected and how they were used. Examples of project-level metadata:
* Name of the project
* Dataset title
* Project description
* Dataset abstract
* Principal investigator and collaborators
* Contact information

Data-level metadata are more granular; they explain the data and the dataset in much greater detail. Examples of data-level metadata:
* Data origin: experimental, observational, raw or processed, models, images, etc.
* Data type: integer, boolean, character, floating point, etc.
* Data acquisition details: sensor deployment methods, experimental design, sensor calibration methods, etc.
* File types: CSV, mat, tiff, xlsx, HDF
* Data processing methods
* Dataset parameter list: variable names, description of each variable, units

The external repositories that can be used for the purposes of archiving and long-term storage were described above (see Section 3.1.3). These repositories are open; therefore, they will not add expenses for the RAWFIE consortium. In case additional procedures are needed for long-term maintenance, the project consortium will cover the respective costs.

# 6 References

1. PMML 4.2: _http://www.dmg.org/pmml-v4-2.html_
2. Alex Guazzelli, Michael Zeller, Wen-Ching Lin and Graham Williams, "PMML: An Open Standard for Sharing Models", The R Journal, Volume 1/1, May 2009.
3. ESRI Shapefile Technical Description, ESRI, July 1998, _http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf_
4. _http://www.dbase.com/_
5. OGC GeoPackage Encoding Standard, Paul Daisey, version 1.0.1, April 2015, _http://www.geopackage.org/spec/_
6. Geography Markup Language, OGC, various versions, _http://www.opengeospatial.org/standards/gml_
7. KML, OGC, various versions, _http://www.opengeospatial.org/standards/kml/_
8. Web Map Service, OGC, various versions, _http://www.opengeospatial.org/standards/wms/_
9. Web Feature Service, OGC, various versions, _http://www.opengeospatial.org/standards/wfs/_
10. About world files, ESRI, _http://webhelp.esri.com/arcims/9.2/general/topics/author_world_files.htm_
11. GeoTIFF Format Specification, Niles Ritter, version 1.8.2, December 2000, _http://www.remotesensing.org/geotiff/spec/geotiffhome.html_
12. Web Map Tile Service, OGC, various versions, _http://www.opengeospatial.org/standards/wmts_
13. _http://hadoop.apache.org/index.html_
14. _http://graphite.readthedocs.io/en/latest/whisper.html_
15. _https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol_
16. _https://forgerock.org/opendj/_
17. A. Willner, C. Papagianni, M. Giatili, P. Grosso, M. Morsey, Y. Al-Hazmi, I. Baldin, "The Open-Multinet Upper Ontology – Towards the Semantic-based Management of Federated Infrastructures", The 10th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities (TRIDENTCOM 2015), Vancouver, Canada, June 2015.
18. Lieberman J., Singh R., Goad C., W3C Geospatial Vocabulary, available at: _https://www.w3.org/2005/Incubator/geo/XGR-geo-20071023/_
19. Compton, M., Barnaghi, P., Bermudez, L., García-Castro, R., Corcho, O., Cox, S., & Huang, V. (2012). "The SSN ontology of the W3C semantic sensor network incubator group", Web Semantics: Science, Services and Agents on the World Wide Web, 17, 25-32.
20. Lefort L., "Ontology for quantity kinds and units: units and quantities definitions", W3C Semantic Sensor Network Incubator Activity, 2005.

# ANNEX I – SUMMARY TABLE 1: FAIR Data Management

This table provides a summary of the Data Management Plan (DMP) issues to be addressed during the RAWFIE lifetime.

<table>
<tr> <th> **DMP component** </th> <th> **Issues to be addressed** </th> <th> **Related Sections** </th> </tr>
<tr> <td> 1. Data summary </td> <td>
* State the purpose of the data collection/generation
* Explain the relation to the objectives of the project
* Specify the types and formats of data generated/collected
* Specify if existing data is being re-used (if any)
* Specify the origin of the data
* State the expected size of the data (if known)
* Outline the data utility: to whom will it be useful
</td> <td> **D7.5 – Section 2 Dataset description and processing** </td> </tr>
<tr> <td> 2. FAIR Data – 2.1. Making data findable, including provisions for metadata </td> <td>
* Outline the discoverability of data (metadata provision)
* Outline the identifiability of data and refer to standard identification mechanisms. Do you make use of persistent and unique identifiers such as Digital Object Identifiers?
* Outline the naming conventions used
* Outline the approach towards search keywords
* Outline the approach for clear versioning
* Specify standards for metadata creation (if any). If there are no standards in your discipline, describe what type of metadata will be created and how
</td> <td> **D7.5 – Section 3 and Section 4** </td> </tr>
<tr> <td> 2.2 Making data openly accessible </td> <td>
* Specify which data will be made openly available. If some data is kept closed, provide the rationale for doing so
* Specify how the data will be made available
* Specify what methods or software tools are needed to access the data. Is documentation about the software needed to access the data included? Is it possible to include the relevant software (e.g. in open source code)?
* Specify where the data and associated metadata, documentation and code are deposited
* Specify how access will be provided in case there are any restrictions
</td> <td> **D7.5 – Section 3 and Section 4** </td> </tr>
<tr> <td> 2.3. Making data interoperable </td> <td>
* Assess the interoperability of your data. Specify what data and metadata vocabularies, standards or methodologies you will follow to facilitate interoperability.
* Specify whether you will be using standard vocabulary for all data types present in your data set, to allow inter-disciplinary interoperability. If not, will you provide a mapping to more commonly used ontologies?
</td> <td> **D7.5 – Section 4.2. Research in progress for further interoperability of the data components; description in D7.5(c)** </td> </tr>
<tr> <td> 2.4. Increase data reuse (through clarifying licences) </td> <td>
* Specify how the data will be licenced to permit the widest reuse possible
* Specify when the data will be made available for re-use. If applicable, specify why and for what period a data embargo is needed
* Specify whether the data produced and/or used in the project is usable by third parties, in particular after the end of the project. If the re-use of some data is restricted, explain why
* Describe the data quality assurance processes
* Specify the length of time for which the data will remain re-usable
</td> <td> **TBA in version 3 – D7.6** </td> </tr>
<tr> <td> 3. Allocation of resources </td> <td>
* Estimate the costs for making your data FAIR. Describe how you intend to cover these costs
* Clearly identify responsibilities for data management in your project
* Describe the costs and potential value of long-term preservation
</td> <td> **TBA in version 3 – D7.6** </td> </tr>
<tr> <td> 4. Data security </td> <td>
* Address data recovery as well as secure storage and transfer of sensitive data
</td> <td> **Sections 5 and 6** </td> </tr>
<tr> <td> 5. Ethical aspects </td> <td>
* To be covered in the context of the ethics review, ethics section of the DoA and ethics deliverables. Include references and related technical aspects if not covered by the former
</td> <td> **Deliverable D1.13** </td> </tr>
<tr> <td> 6. Other </td> <td>
* Refer to other national/funder/sectorial/departmental procedures for data management that you are using (if any)
</td> <td> **TBA in version 3 – D7.6** </td> </tr>
</table>
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0390_INTERMODEL EU_690658.md
# 1. Introduction

## 1.1 Scope

The scope of this document is to produce a Data Management Plan (DMP) that describes the types of data that will be generated or gathered during the project, the standards that will be used, the ways in which data will be exploited and shared for verification or reuse, and how data will be preserved. This document aims to provide a consolidated plan for the INTERMODEL EU partners on the data management policy that the project will follow.

The document is the first version of the DMP, delivered in M6 of the project. The DMP will be updated during the lifecycle of the project.

## 1.2 Audience

The intended audience of this document is the INTERMODEL Consortium.

## 1.3 Definitions / Glossary

The main terms used in this deliverable are described as follows:

**Data Management Plan (DMP)** – document that describes the data management life cycle for all datasets to be collected, processed or generated by a research project. It covers: the handling of research data during and after the project; what data will be collected, processed or generated; what methodology and standards will be applied; whether data will be shared/made open access and how; and how data will be curated and preserved.

**FAIR data** – set of guiding principles to make data Findable, Accessible, Interoperable and Re‐usable.

## 1.4 Abbreviations

The abbreviations used in the present document are:

**BEP**: BIM Execution Plan
**BIM**: Building Information Modelling
**DMP**: Data Management Plan
**EU**: European Union
**FAIR**: Findable, Accessible, Interoperable and Re‐usable
**PDF**: Portable Document Format
**WP**: Work Package

## 1.5 Structure

* **Introduction:** contains an overview of this document, providing its Scope, Audience, and Structure.
* **Responsibilities:** defines who is responsible for data management.
* **Data summary:** contains the purpose of the data collection/generation and its relation to the objectives of the project; the types and formats of data generated and collected; the origin of the data; and to whom it might be useful.
* **General principles:** describes the principles that must be taken into consideration for data management.
* **FAIR data:** includes the requirements to make data findable, openly accessible and interoperable, and to increase data re‐use, if necessary.
* **Allocation of resources:** defines the costs for making data FAIR, if any, and the person responsible for data management in the project.
* **Data security:** explains provisions for data security, if needed, and how data is safely stored.
* **Ethical aspects:** contains ethical and legal issues that can have an impact on data sharing.
* **Other issues:** contains other national/sectorial/departmental procedures for data management, if necessary.
* **Data Management Plan:** provides an analysis of the main elements of the data management policy used with regard to all datasets identified and generated by the project.
* **Conclusions:** gathers the main issues of the DMP.

# 2. Responsibilities

Mikel Borràs (IDP) will be the person in charge of the data during the project. He has the responsibility to ensure that data shared through the INTERMODEL EU website are easily available, and also that backups are performed and that proprietary data are secured.
<table>
<tr> <th> </th> <th> **Data responsible** </th> </tr>
<tr> <td> Person in charge of the data during the project </td> <td> Mikel Borràs, [email protected], IDP </td> </tr>
</table>

IDP will ensure data integrity and compatibility for its use during the project lifetime by the different partners composing the Consortium. Validation and registration of the data is the responsibility of the partner who generates the data in the WP.

# 3. Data summary

Data will be collected from the intermodal terminals in order to properly develop a BIM virtual model of them and to be able to simulate the processes within them through simulation software. The data needed for the simulations will include information regarding aspects such as 3D volumes' dimensions and geo‐location, modal layout, waiting times, terminal arrival behaviours, number of trucks, terminal machinery data, etc. In addition, regarding the modelling, the data needed will consist basically of the design/layout of the terminal, the number of cranes, CAPEX, OPEX, etc.

All this data will be provided by the Terminal Operators/Owners (CSI, ASPS) and managed mainly by IDP, MAC, VTT and VIASYS. All these partners will have open access to the models and their underlying data. Different datasets will be created in order to define the models, and they will be useful for the Consortium to be able to validate the results obtained through the simulations.

## 3.1 Data set description

All consortium partners have identified the datasets that will be required for the development of the project. The list is provided below, while the nature and details of each dataset are given in the subsequent section 10. This list has been defined according to the needs of the project and could be adapted in the next version of the DMP, taking into consideration the project progress.

<table>
<tr> <th> **#** </th> <th> **Dataset (DS) name** </th> <th> **Responsible partner** </th> <th> **Related WP(s) & task** </th> </tr>
<tr> <td> 1 </td> <td> DS1_Data_collection_terminals_operation </td> <td> MAC </td> <td> WP5 Task 5.1 </td> </tr>
<tr> <td> 2 </td> <td> DS2_Data_collection_external_mobility </td> <td> CENIT </td> <td> WP6 Task 6.2 </td> </tr>
<tr> <td> 3 </td> <td> DS3_Data_collection_terminals_layout </td> <td> IDP MAC </td> <td> WP4 Task 4.2 WP7 Task 7.1 </td> </tr>
<tr> <td> 4 </td> <td> DS4_Data_collection_market_data </td> <td> DHL </td> <td> WP8 Task 8.2 </td> </tr>
<tr> <td> 5 </td> <td> DS5_Project_deliverables </td> <td> IDP </td> <td> WP1 Task 1.1 </td> </tr>
</table>

# 4. General principles

There are no requirements expected by the funding body or partners regarding data linked to the project, and there are no additional requirements associated with the data being submitted. The INTERMODEL EU project _only needs the collection of **non‐sensitive data**_, which means that **_no personal identifiers will be recorded by the researchers in any form_**.

# 5. FAIR data

## 5.1 Making data findable, including provision for metadata

The BIM models and simulations will contain all the necessary data to achieve the goals mentioned previously, and they already have defined metadata. These models and simulations will be shared within the Consortium, and the software programs to be used, along with the information necessary to manage them, will be specified in the BIM Execution Plan (BEP).

## 5.2 Making data openly accessible

The models and simulations, and therefore the data in them, will be shared openly among the consortium members through the INTERMODEL EU website intranet.
Their use, or the consultation of their data, will be possible through the appropriate BIM software programs (defined in the BEP) and other software programs for simulation and traffic studies (e.g. Aimsun). Also, during the project, the interoperability and data exchange and the integrating ICT environment prototype will be defined, but they will be confidential and only accessible to the members of the Consortium and the Commission Services.

## 5.3 Making data interoperable

The BIM methodology is based on the interoperability of several software programs. The outcome models follow BIM open standards and vocabularies specified in the BEP, in accordance with the participating members. The BEP document, just like this DMP, is not a one‐time document but a living one, so it will go through changes as the project develops.

## 5.4 Increase data re‐use

The generated models will remain accessible to Consortium members throughout the project duration. The re‐use of the models, and of the data within them, after the project shall be defined in the exploitation agreement (Deliverables 9.08, 9.09, 9.10 and 9.11 in months 18, 24, 30 and 36, respectively).

# 6. Allocation of resources

Making the models and their underlying data FAIR will not take any more time (or at least, any more calculable time) than that used to generate the models and carry out the different simulations, using the information in the BIM format/standards models. The staff personnel hours dedicated to this will be counted within the Person-Month dedications to the respective tasks.

The ultimate person responsible for data management in the INTERMODEL EU project will be Mikel Borràs, IDP Financial & Data Manager, for as long as the project lasts. Once the project ends, this issue shall be discussed within the exploitation agreement.

# 7. Data security

The models and their underlying data will be stored in the INTERMODEL EU website's intranet, with access restricted to Consortium partners, in order to work on them through modelling, data introduction, data collection, simulation, etc. Each partner is responsible for the recovery files that may be stored in that partner's facilities, databases, servers, etc.

# 8. Ethical aspects

There are no ethical aspects that can have an impact on data sharing, and no human data is included in any model/simulation, according to ethics deliverables D10.1 and D10.2.

# 9. Other issues

Not applicable.

# 10. Data Management Plan

## 10.1 Dataset 1

<table>
<tr> <th> **DS1_Data_collection_terminals_operation** </th> </tr>
<tr> <td> **Data identification** </td> </tr>
<tr> <td> Dataset description </td> <td> This dataset contains data from terminals (volumes handled, seasonal impacts, modal splits, staff, processing times, arrival patterns, equipment, etc.). </td> </tr>
<tr> <td> Source </td> <td> CSI's and other terminals' own records.
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> MAC </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP5 Task 5.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of EXCEL/WORD/PDF documents and file extensions such as .xlsx (Excel), .docx (Word) and .pdf (PDF). It will be updated if necessary. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset is the result of a collaborative work between MAC and CSI, and once cleaned and validated, will provide the basis for the simulation component library. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> Confidential, so only the members of the Consortium and the Commission Services will have access to this dataset. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in MAC and IDP infrastructure. </td> </tr> </table> ## 10.2 2 <table> <tr> <th> **DS2_Data_collection_external_mobility** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to the traffic flows incoming to the terminals (Melzo and La Spezia) and at the surrounding road network. </td> </tr> <tr> <td> Source </td> <td> CSI and APSP </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> CENIT </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP6 Task 6.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of EXCEL/WORD documents and file extensions such as .xlsx, .docx and .ang. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset is the result of a collaborative work between CENIT and CSI/ASPS, and it will be used for the validation of the KPIs resulting from the simulations. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> Confidential, so only the members of the Consortium and the Commission Services will have access to this dataset. 
</td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in CENIT and IDP infrastructure. </td> </tr> </table>

## 10.3 Dataset 3

<table> <tr> <th> **DS3_Data_collection_terminals_layout** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to the layout of the real terminals that will be modelled and analysed throughout the project (Melzo and La Spezia) and the railway interconnection. </td> </tr> <tr> <td> Source </td> <td> CSI and APSP </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> IDP </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP4 Task 4.2 WP7 Task 7.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of CAD files and file extensions such as .las and .rcp. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset, composed of .dwg files and the results from the point cloud, will be used to generate the BIM models of the real terminals. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does not contain confidential information, but the models are shown in demonstration activities to the members of the Consortium and the Commission Services. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in IDP infrastructure. </td> </tr> </table>

## 10.4 Dataset 4

<table> <tr> <th> **DS4_Data_collection_market_data** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to transportation and logistics studies, statistical data compiled for the assessment of intermodal terminals, and statistical market data and forecasts. </td> </tr> <tr> <td> Source </td> <td> Existing publications (international statistics institutions, public and private logistics companies, white papers, etc.).
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> DHL/CENIT </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP8 Task 8.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of WORD/PDF documents. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset will be used for the validation of results concerning functional, economic and environmental issues at selected terminals. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does not contain confidential information, and the data will be made public through deliverables. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in DHL and IDP infrastructure. </td> </tr> </table>

## 10.5 Dataset 5

<table> <tr> <th> **DS5_Project_deliverables** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> Deliverables resulting from the development of the project. </td> </tr> <tr> <td> Source </td> <td> Generated by WP leaders. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> IDP </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP1 Task 1.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of WORD/PDF documents. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset presents the outcomes of the project. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does not generally contain confidential information. Thus, access to the dataset is mainly public, except for the progress technical and financial reports and the deliverables associated with the definition and development of the decision-making tool to be integrated within the BIM models and simulations. The reports related to the exploitation agreement and the ethics requirements will be confidential as well, as they concern only members of the Consortium.
</td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in IDP infrastructure. </td> </tr> </table>

# 11\. Conclusions

This Data Management Plan (DMP) provides an overview of the data that the INTERMODEL EU project will produce, together with the related challenges and constraints that need to be taken into consideration. The analysis contained in this report makes it possible to anticipate the procedures and infrastructures to be implemented within the project to manage the data it produces efficiently. Some of the partners will be owners and/or producers of data, which implies specific responsibilities, described in this report.
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0391_INTERMODEL EU_690658.md
# 1\. Introduction

## 1.1 Scope

The scope of this document is to update the Data Management Plan (DMP) that was delivered in M6 of the project (February 2017). It describes the types of data that will be generated or gathered during the project, the standards that will be used, the ways in which data will be exploited and shared for verification or reuse, and how data will be preserved. This document aims to provide INTERMODEL EU partners with a consolidated plan for the data management policy that the project will follow.

The present document is the second version of the DMP. If necessary, the DMP will be updated during the lifecycle of the project.

## 1.2 Audience

The intended audience of this document is the INTERMODEL Consortium.

## 1.3 Definitions / Glossary

The main terms used in this deliverable are described as follows:

**Data Management Plan (DMP)** – document that describes the data management life cycle for all datasets to be collected, processed or generated by a research project. It covers: the handling of research data during and after the project; what data will be collected, processed or generated; what methodology and standards will be applied; whether data will be shared/made open access and how; and how data will be curated and preserved.

**FAIR data** – set of guiding principles to make data Findable, Accessible, Interoperable and Re‐usable.

## 1.4 Abbreviations

The abbreviations used in the present document are:

**BEP**: BIM Execution Plan
**BIM**: Building Information Modelling
**DMP**: Data Management Plan
**EU**: European Union
**FAIR**: Findable, Accessible, Interoperable and Re‐usable
**PDF**: Portable Document Format
**WP**: Work Package

## 1.5 Structure

* **Introduction:** contains an overview of this document, providing its Scope, Audience, and Structure.
* **Responsibilities:** defines who is responsible for data management.
* **Data summary:** contains the purpose of the data collection/generation and its relation to the objectives of the project; types and formats of data generated and collected; origin of the data; and to whom it might be useful.
* **General principles:** describes the principles that must be taken into consideration for data management.
* **FAIR data:** this section includes the requirements to make data findable, openly accessible and interoperable, and to increase data re‐use, if necessary.
* **Allocation of resources:** defines the costs for making data FAIR, if any, and the person responsible for data management in the project.
* **Data security:** explains provisions for data security, if needed, and how data is safely stored.
* **Ethical aspects:** contains ethical and legal issues that can have an impact on data sharing.
* **Other issues:** this section contains other national/sectorial/departmental procedures for data management, if necessary.
* **Data Management Plan:** provides an analysis of the main elements of the data management policy used with regard to all datasets identified and generated by the project.
* **Conclusions:** gathers the main issues of the DMP.

# 2\. Responsibilities

Mikel Borràs (IDP) will be the person in charge of the data during the project. He is responsible for ensuring that data shared through the INTERMODEL EU website are easily available, that backups are performed and that proprietary data are secured.
<table> <tr> <th> </th> <th> **Data responsible** </th> </tr> <tr> <td> Person in charge of the data during the project </td> <td> Mikel Borràs [email protected]_ IDP </td> </tr> </table>

IDP will ensure data integrity and compatibility for its use by the different partners composing the Consortium during the project lifetime. Validation and registration of data is the responsibility of the partner that generates the data in the corresponding WP.

# 3\. Data summary

Data will be collected from the intermodal terminals in order to develop a BIM virtual model of them and to simulate the processes within them using simulation software. The data needed for the simulations will include information on aspects such as 3D volumes’ dimensions and geo‐location, modal layout, waiting times, terminal arrival behaviours, number of trucks, terminal machinery data, etc. In addition, regarding the modelling, the data needed will consist basically of the design/layout of the terminal, number of cranes, CAPEX, OPEX, etc.

All this data will be provided by the Terminal Operators/Owners (CSI, ASPS) and managed mainly by IDP, MAC, VTT and VIASYS. All these partners will have open access to the models and their underlying data. Different datasets will be created in order to define the models, and they will be useful for the Consortium to validate the results obtained through the simulations.

## 3.1 Data set description

All consortium partners have identified the datasets that will be required for the development of the project. The list is provided below, while the nature and details of each dataset are given in the subsequent section 10. This list was previously defined according to the needs of the project and has been adapted in the present updated version taking into consideration the project’s progress. If required, the list below could be adapted in future versions of the DMP.

<table> <tr> <th> **#** </th> <th> **Dataset (DS) name** </th> <th> **Responsible partner** </th> <th> **Related WP(s) & task** </th> </tr> <tr> <td> 1 </td> <td> DS1_Data_collection_terminals_operation </td> <td> MAC </td> <td> WP5 Task 5.1 </td> </tr> <tr> <td> 2 </td> <td> DS2_Data_collection_external_mobility </td> <td> CENIT </td> <td> WP6 Task 6.2 </td> </tr> <tr> <td> 3 </td> <td> DS3_Data_collection_terminals_layout </td> <td> IDP MAC </td> <td> WP4 Task 4.2 WP7 Task 7.1 </td> </tr> <tr> <td> 4 </td> <td> DS4_Data_collection_market_data </td> <td> DHL </td> <td> WP8 Task 8.2 </td> </tr> <tr> <td> 5 </td> <td> DS5_Project_deliverables </td> <td> IDP </td> <td> WP1 Task 1.1 </td> </tr> <tr> <td> 6 </td> <td> DS6_Data_collection_terminals_KPI </td> <td> IDP MAC VIAS </td> <td> WP4 Task 4.2 WP7 Task 7.1 WP7 Task 7.2 </td> </tr> </table>

# 4\. General principles

There are no requirements from the funding body or the partners regarding data linked to the project, and there are no additional requirements associated with the data being submitted. The INTERMODEL EU project _only needs the collection of **non‐sensitive data**_, which means that **_no personal identifiers will be recorded by the researchers in any form_**.

# 5\. FAIR data

## 5.1 Making data findable, including provision for metadata

The BIM models and simulations will contain all the necessary data to achieve the goals mentioned previously and already have defined metadata.
These models and simulations will be shared within the Consortium, and the software programs to be used and the information necessary to manage them will be specified in the BIM Execution Plan (BEP).

## 5.2 Making data openly accessible

Models and simulations, and therefore the data in them, will be shared openly among the consortium members through the INTERMODEL EU website intranet. The models and their underlying data can be used and consulted through the appropriate BIM software programs (defined in the BEP) and through other software programs for simulation and traffic studies (e.g. Aimsun). In addition, the interoperability and data-exchange specifications and the integrating ICT environment prototype will be defined during the project, but they will be confidential and accessible only to the members of the Consortium and the Commission Services.

## 5.3 Making data interoperable

BIM methodology is based on interoperability between several software programs. The outcome models follow the BIM open standards and vocabularies specified in the BEP, as agreed among the participating members. The BEP, just like this DMP, is not a one‐time document but a living one, so it will change as the project develops.

## 5.4 Increase data re‐use

The generated models will remain accessible to Consortium members throughout the project duration. The re‐use of the models and the data within them after the project shall be defined in the exploitation agreement (Deliverables 9.8, 9.9, 9.10 and 9.11, due in months 18, 24, 30 and 36 respectively).

# 6\. Allocation of resources

Making the models and their underlying data FAIR will not require additional time beyond that already spent generating the models and carrying out the different simulations, since the information is kept in BIM-standard formats; any remaining effort is not separately calculable. The staff hours dedicated to this will be counted within the Person-Month dedications to the respective tasks.

The person ultimately responsible for data management in the INTERMODEL EU project will be Mikel Borràs, IDP Financial & Data Manager, for as long as the project lasts. Once the project ends, this issue shall be addressed within the exploitation agreement.

# 7\. Data security

Models and their underlying data will be stored in the INTERMODEL EU website’s intranet, with access restricted to Consortium partners, who will work on them through modelling, data introduction, data collection, simulation, etc. Each partner is responsible for any recovery files that may be stored in its own facilities, databases, servers, etc.

# 8\. Ethical aspects

There are no ethical aspects that could have an impact on data sharing, and no human data is included in any model/simulation, according to ethics deliverables D10.1 and D10.2.

# 9\. Other issues

Not applicable.

# 10\. Data Management Plan

## 10.1 Dataset 1

<table> <tr> <th> **DS1_Data_collection_terminals_operation** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data from terminals (volumes handled, seasonal impacts, modal splits, staff, processing times, arrival patterns, equipment, etc.). </td> </tr> <tr> <td> Source </td> <td> CSI and other terminals’ own records.
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> MAC </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> MAC </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP5 Task 5.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of EXCEL/WORD/PDF documents and file extensions such as .xlsx (Excel), .docx (Word) and .pdf (PDF). It will be updated if necessary. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset results from collaborative work between MAC and CSI and, once cleaned and validated, will provide the basis for the simulation component library. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> Confidential, so only the members of the Consortium and the Commission Services will have access to this dataset. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in MAC and IDP infrastructure. </td> </tr> </table>

## 10.2 Dataset 2

<table> <tr> <th> **DS2_Data_collection_external_mobility** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to the traffic flows incoming to the terminals (Melzo and La Spezia) and at the surrounding road network. </td> </tr> <tr> <td> Source </td> <td> CSI and APSP </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> CENIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> CENIT </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP6 Task 6.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of EXCEL/WORD documents and file extensions such as .xlsx, .docx and .ang. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset results from collaborative work between CENIT and CSI/ASPS, and it will be used for the validation of the KPIs resulting from the simulations. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> Confidential, so only the members of the Consortium and the Commission Services will have access to this dataset.
</td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in CENIT and IDP infrastructure. </td> </tr> </table>

## 10.3 Dataset 3

<table> <tr> <th> **DS3_Data_collection_terminals_layout** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to the layout of the real terminals that will be modelled and analysed throughout the project (Melzo and La Spezia) and the railway interconnection. </td> </tr> <tr> <td> Source </td> <td> CSI and APSP </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> IDP </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP4 Task 4.2 WP7 Task 7.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of CAD files and file extensions such as .las and .rcp. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset, composed of .dwg files and the results from the point cloud, will be used to generate the BIM models of the real terminals. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does not contain confidential information, but the models are shown in demonstration activities to the members of the Consortium and the Commission Services. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in IDP infrastructure. </td> </tr> </table>

## 10.4 Dataset 4

<table> <tr> <th> **DS4_Data_collection_market_data** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to transportation and logistics studies, statistical data compiled for the assessment of intermodal terminals, and statistical market data and forecasts. </td> </tr> <tr> <td> Source </td> <td> Existing publications (international statistics institutions, public and private logistics companies, white papers, etc.).
</td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> DHL/CENIT </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> DHL/CENIT </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP8 Task 8.1, Task 8.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of WORD/PDF documents. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset will be used for the validation of results concerning functional, economic and environmental issues at selected terminals. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does contain confidential information available to subscribers; only excerpts of aggregated data will be made public through deliverables. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in DHL and CENIT infrastructure. </td> </tr> </table>

## 10.5 Dataset 5

<table> <tr> <th> **DS5_Project_deliverables** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> Deliverables resulting from the development of the project. </td> </tr> <tr> <td> Source </td> <td> Generated by WP leaders. </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> IDP </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> IDP </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP1 Task 1.1 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of WORD/PDF documents. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset presents the outcomes of the project. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> This dataset does not generally contain confidential information. Thus, access to the dataset is mainly public, except for the progress technical and financial reports and the deliverables associated with the definition and development of the decision-making tool to be integrated within the BIM models and simulations. The reports related to the exploitation agreement and the ethics requirements will be confidential as well, as they concern only members of the Consortium.
</td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in IDP infrastructure. </td> </tr> </table>

## 10.6 Dataset 6

<table> <tr> <th> **DS6_Data_collection_terminals_KPI** </th> </tr> <tr> <td> **Data identification** </td> </tr> <tr> <td> Dataset description </td> <td> This dataset contains data related to the operation and exploitation of the real terminals that will be modelled and analysed throughout the project (Melzo and La Spezia) and the railway interconnection. This data relates mainly to the calculation of Key Performance Indicators. </td> </tr> <tr> <td> Source </td> <td> CSI and APSP </td> </tr> <tr> <td> **Partners activities and responsibilities** </td> </tr> <tr> <td> Partner owner of the data </td> <td> IDP/MAC/VIAS </td> </tr> <tr> <td> Partner in charge of the data collection </td> <td> IDP/MAC/VIAS </td> </tr> <tr> <td> Partner in charge of the data analysis </td> <td> IDP/MAC/VIAS </td> </tr> <tr> <td> Partner in charge of the data storage </td> <td> IDP/MAC/VIAS </td> </tr> <tr> <td> Related WP(s) and task </td> <td> WP4 Task 4.2 WP7 Task 7.1 WP7 Task 7.2 </td> </tr> <tr> <td> **Standards** </td> </tr> <tr> <td> Information about metadata and documentation </td> <td> N/A </td> </tr> <tr> <td> Standards, format, estimated volume of data </td> <td> This dataset can be a combination of WORD/PDF documents and EXCEL files. </td> </tr> <tr> <td> **Data exploitation and sharing** </td> </tr> <tr> <td> Data exploitation (purpose and use of the data analysis) </td> <td> This dataset will be used for the calculation of the Key Performance Indicators in the real case studies and for the validation of results. </td> </tr> <tr> <td> Data access policy, dissemination level (confidential – only for members of the Consortium and the European Commission or public) </td> <td> Confidential, so only the members of the Consortium and the Commission Services will have access to this dataset. </td> </tr> <tr> <td> Data sharing, re‐use, distribution, publication </td> <td> None </td> </tr> <tr> <td> Personal data protection (are they personal data?) </td> <td> No personal data </td> </tr> <tr> <td> **Archiving and preservation (including storage and backup)** </td> </tr> <tr> <td> Data storage (where?, for how long?) </td> <td> The dataset will be preserved in MAC and IDP infrastructure. </td> </tr> </table>

# 11\. Conclusions

This Data Management Plan (DMP) provides an overview of the data that the INTERMODEL EU project will produce, together with the related challenges and constraints that need to be taken into consideration. The analysis contained in this report makes it possible to anticipate the procedures and infrastructures to be implemented within the project to manage the data it produces efficiently. Some of the partners will be owners and/or producers of data, which implies specific responsibilities, described in this report.
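As an aside on the BIM open-standard interoperability invoked in section 5.3, the sketch below illustrates how an open exchange format lets any conforming tool read a model. It is only an illustration: IFC is assumed here as the exchange format (the BEP governs the actual choice), ifcopenshell is one open-source Python library that reads IFC, and the file name is a hypothetical placeholder.

```python
import ifcopenshell

# Hypothetical file name: an IFC export of one of the terminal models.
model = ifcopenshell.open("terminal_model.ifc")

# Because IFC is an open schema, any conforming tool can enumerate
# the same elements regardless of the authoring BIM software.
walls = model.by_type("IfcWall")
print(f"Schema: {model.schema}, walls found: {len(walls)}")
for wall in walls[:5]:
    print(wall.GlobalId, wall.Name)
```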
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0394_A-Patch_824270.md
# 1\. Introduction

This document presents the first version of the Data Management Plan (DMP) for the A-PATCH project. Projects participating in the Horizon 2020 Open Research Data Pilot are required to develop several versions of a Data Management Plan (DMP), in which they will specify, among other things, what data will be kept for the longer term. The Consortium will follow the guidelines described in the OpenAIRE 1 platform and the document “Guidelines on Data Management in Horizon 2020”.

The DMP describes the data management life cycle for all datasets to be collected, processed or generated by a research project. It must cover:

* the handling of research data during & after the project;
* what data will be collected, processed or generated;
* what methodology & standards will be applied;
* whether data will be shared/made open access & how;
* how data will be curated & preserved.

The Data Management Plan will be updated - if appropriate - during the project lifetime (in the form of an updated deliverable D8.2b, if needed). D8.3, at the end of the project, will conclude and describe the status of the project's reflections on data management. New versions of the DMP could also be created whenever significant changes arise in the project, such as:

* new data sets;
* changes in consortium policies;
* external factors’ needs.

# 2\. Data Summary

Research data is any information that has been collected, observed, generated or created to validate research findings. Datasets are the most common form of research data. Research data can also consist of field and laboratory notebooks, diaries, questionnaires, transcripts, codebooks, videos, photographs, test responses, slides, artefacts, specimens, samples, collections of digital outputs, models, algorithms, workflow descriptions, standard operating procedures and protocols.

During the initial clinical studies with off-line means, research data will be generated with a hybrid sensor array with multiplexed detection capabilities, as well as with GC/MS, to detect disease-specific volatile organic compounds (VOCs) from the surface of the skin. Furthermore, clinical data collection forms, to obtain relevant information on disease/health status from study participants, will be collected using secure encrypted data collection software. The data will cover the main subject areas of the research: demographic information, personal habits, collected clinical variables identifying disease, and measured VOC data, both signals from the sensors and GC/MS chromatograms.

The format used for the clinical dataset will be .accdb. For the sensors’ signals and GC/MS chromatograms, formats will include .csv and .qgd files, respectively. Sensors’ responses can be structured in a dataset after analysis by the Technion. Clinical datasets will be processed and analysed using Apache Flink or Spark software. Files from the sensors’ signals can be processed and analysed using Matlab or Python software. GC/MS chromatograms will be processed and analysed with designated GC/MS analysis software as well as Matlab or Python software.

Information collected at a later stage with the A-PATCH prototypes (not by off-line means) will generate secured files, possibly in a format such as .dat, which will be uploaded to the cloud and stored there in a dataset structure in combination with the clinical dataset. A classification algorithm will be applied to these datasets in order to provide the test result.
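To make the off-line analysis pipeline sketched above more concrete, the following is a minimal Python example (Python being one of the analysis tools named in this section). The file names, column names and the specific classifier are hypothetical placeholders, not the project's actual pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical exports: sensor-array responses come as .csv; the
# clinical .accdb dataset is assumed converted to CSV for simplicity.
sensors = pd.read_csv("sensor_array_responses.csv")   # numeric responses per measurement
clinical = pd.read_csv("clinical_form_export.csv")    # disease/health status labels

# Join sensor responses with clinical labels via a shared subject identifier.
data = sensors.merge(clinical, on="subject_id")

features = data.drop(columns=["subject_id", "disease_status"])
labels = data["disease_status"]

# Placeholder classifier standing in for the (unspecified) classification
# algorithm that will provide the test result.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```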
In the Grant Agreement, under Task 7.3, various other kinds of data collection are also described: _“Focus groups, interviews, and specific co-design sessions with stakeholders will take place starting in the first year. Interactions with users can be web-based user surveys, through interactive websites such as Twitter or Facebook, but it is also vital to encourage real-life, face-to-face exchanges in workshops, public discussions, and general interest conferences to gather feedback from future potential users on important issues such as acceptance, enduser value and business opportunities. Stakeholder engagement meetings will take place at relevant conferences and in tandem with consortium meetings as well as activities of VTT within WP8, exploitation activities of BRL and of end-user partner FIND_.”

As a general rule, all persons who take part in surveys, interviews and testing experiments conducted in the project and who register on the various platforms will be fully informed and will be asked in advance to state, by signing an informed consent form, that they are fully aware of the study procedure and that their participation is completely voluntary. In the informed consent form, it is explained that research subjects can withdraw from the experiment at any time.

Participants will be explicitly asked for permission for recordings (audio or video) and for permission to use the material for the research and, where feasible, for other purposes, _e.g._ further research, further development or demonstration purposes ( _e.g._ conference presentations). To formalise the agreements with the research subjects, a template for the Informed Consent Form has been prepared and will be translated into the national languages when needed. Similarly, information sheets will be prepared in the relevant national languages to explain the project and its objectives to the research subjects.

Potential re-utilization will be enabled, and the quality of the data ensured, by careful documentation of the data collection methods as well as the contents of the datasets. The possibility to re-use any existing open research data will be examined carefully during the project.

# 3\. FAIR data

## 3.1. Making data findable, including provisions for metadata

Quality control measures will be taken to maintain the accuracy of data during the project. Discipline-compliant metadata elements will be used to describe the data, in order to aid data discovery and potential re-use. Metadata of opened data will be made available for research and re-use after project closure.

## 3.2. Making data openly accessible

Decisions concerning the sharing of (selected) datasets will be taken by the project steering group. The project manager, in collaboration with project partners, will take all the appropriate measures to make relevant data openly available and usable by third parties for study, teaching and research purposes.

If, after project closure, permission to re-use the data is required, all requests for further use of data will be considered carefully and, whenever possible, approved by the principal investigator or the person mandated with the task. Permission for data use will be granted provided there are no IPR or confidentiality issues involved, or any direct overlap of research questions with the primary research. Permission will be provided by contacting the Principal Investigator (project manager). Contact information and the appropriate procedure will be provided in connection with other metadata.
The main focus in data sharing will be on the data underlying prospective scientific publications, ensuring the validation of the results presented in those publications. Published and FAIR-compatible data will be archived in a common and open data repository. Recommended generic and certified repository services, either CSC's IDA or CERN's Zenodo, will be used to enhance long-term accessibility and reusability of the data.

## 3.3. Making data interoperable

Variables and value names will be constructed following general data processing conventions common to the research subject. A list of value names and the vocabulary used will be provided separately. Examples of vocabulary information to be managed within the project include the number of variables/units of observation, a list of variables with the name and label of each variable as well as its values and value labels, the frequency distribution of each variable, information on the classifications used, and the meanings of abbreviations used.

## 3.4. Increase data re-use (through clarifying licences)

Ownership of datasets will belong to the project consortium after project completion. The Creative Commons licence CC-BY-SA or CC-BY will be used for any opened datasets, unless there are compelling reasons to select a more restricted type of CC licence. Creative Commons licences will by default also include a disclaimer of liability for the re-use of opened data.

No definite period or time limit is planned for access to or re-use of the data. Justification for a possible case-specific embargo for published data will be decided by the project consortium. An embargo will be sought primarily in connection with any potential patent application based on project results.

# 4\. Allocation of resources

Costs related to research data management and opening are eligible as part of the project grant. Cost allocation is based on the assumption that a maximum of 5 % of total project costs will be needed to make research data quality-controlled, FAIR-compatible and as open as possible. During the project, consortium partners will be responsible for managing and curating the datasets in their possession. At the end of the project, the consortium steering group will mandate the Principal Investigator or the project data manager to take care of the long-term preservation and sharing of datasets.

# 5\. Data security

At the beginning of the research project, the research consortium will decide and agree on the tasks, roles, responsibilities and rights relating to data collection, dataset management and data use. During the project, research datasets will be available only to those project partners or project consortium members who have been accredited and whose data usage has been approved by the Principal Investigator or an authorized project consortium member. Project partners will be responsible for curating, preserving, disseminating and, in an appropriate manner, deleting the datasets in their possession. The retention time for curated datasets will be the same as for other project results at the project consortium partners.

Data collected or acquired within the project will be stored in a secure IT environment behind a firewall on premises, or in a secure cloud environment provided by project consortium partners. Access to it will require registration and authentication. The Principal Investigator will check applications for the use of data. Where access is granted to research data, this will be provided through a physically and virtually secure telecommunications network.
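As an illustration of the variable-level documentation described in section 3.3 above (variable lists, labels and frequency distributions), the following is a minimal Python sketch of how such a codebook could be generated automatically from an anonymised dataset; the file name, column names and labels are hypothetical:

```python
import pandas as pd

df = pd.read_csv("study_dataset.csv")  # hypothetical anonymised dataset

# Hypothetical variable labels; in practice these would come from the
# project's separately maintained vocabulary list.
labels = {
    "age_group": "Age group of participant",
    "smoking": "Smoking habits (coded)",
}

print(f"Units of observation: {len(df)}, variables: {len(df.columns)}")
for var in df.columns:
    print(f"\n{var}: {labels.get(var, 'no label defined')}")
    # Frequency distribution of each variable, as required by the codebook.
    print(df[var].value_counts(dropna=False).to_string())
```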
Long-term and secure preservation of published research data will be ensured by using only certified repositories compatible with the OpenAIRE guidelines.

# 6\. Ethical aspects

The privacy of the project participants and persons involved will be secured by closely following all relevant provisions of the EU General Data Protection Regulation. No person or organisation involved will be unintentionally identifiable, directly or indirectly, in the datasets. Besides being stored separately from the data, all direct identifiers of any respondents or subjects ( _e.g._ names and contact information of persons and organisations) - as well as indirect references to _e.g._ lines of business, branches or industries - will be removed and destroyed after the anonymised dataset has been checked and validated.

Research integrity and ethical principles related to data collection and use are covered in detail in the ethics self-assessment section of the grant application. According to the guidelines set by the Research Ethics Committee, an ethics review is required for the project, as sensitive personal data will be collected or handled within the project. Approvals by the competent local Ethics Board will be provided at the clinical evaluation site. When submitting the application for scrutiny to the competent local/national ethical boards/bodies for authorization, detailed information will be provided on the informed consent procedures that will be implemented. Copies of examples of Informed Consent Forms and Information Sheets, in language and terms understandable to the participants, will be included.

Ethical issues in relation to data collection and management are presented in the following deliverables:

## D9.1 H - Requirement No. 1

2.9 Copies of opinions/approvals by ethics committees and/or competent authorities for the research with humans must be kept on file.

2.3 Templates of the informed consent/assent forms and information sheets (in language and terms intelligible to the participants) must be kept on file.

## D9.2 POPD - Requirement No. 2

4.2 The host institution must confirm that it has appointed a Data Protection Officer (DPO) and that the contact details of the DPO are made available to all data subjects involved in the research. For host institutions not required to appoint a DPO under the GDPR, a detailed data protection policy for the project must be kept on file.

4.3 Justification for the processing of sensitive personal data must be included in the grant agreement before signature.

4.6 A description of the technical and organisational measures that will be implemented to safeguard the rights and freedoms of the data subjects/research participants must be submitted as a deliverable.

4.7 A description of the security measures that will be implemented to prevent unauthorised access to personal data or the equipment used for processing must be submitted as a deliverable.

4.8 A description of the anonymization/pseudonymisation techniques that will be implemented must be submitted as a deliverable.

4.9 Confirmation that transfers of personal data from the EU to a non-EU country ( _i.e_. Israel) are in accordance with Chapter V of the General Data Protection Regulation 2016/679 must be submitted as a deliverable.

4.11 Detailed information on the informed consent procedures in regard to data processing must be kept on file.

4.12 Templates of the informed consent forms and information sheets (in language and terms intelligible to the participants) must be kept on file.
4.15 In case of further processing of previously collected personal data, an explicit confirmation that the beneficiary has a lawful basis for the data processing and that the appropriate technical and organisational measures are in place to safeguard the rights of the data subjects must be submitted as a deliverable.

## D 9.3 A - Requirement No. 3

5.1. Copies of relevant authorisations for animal experiments must be kept on file.

## D 9.4 NEC - Requirement No. 4

6.1. The applicants must ensure that the research conducted outside the EU is legal in at least one EU Member State. This must be specified in the grant agreement.

6.4. Copies of export authorisations, as required by national/EU legislation, must be kept on file.

## D 9.5 GEN - Requirement No. 5

12.4. A report by the Ethics Advisory Committee must be submitted as a deliverable at the end of each reporting period.

# 7\. Other issues

At this stage the project will not make use of other national/funder/sectorial/departmental procedures for data management.

# 8\. Conclusions

All participants in this project are committed to responsible professional principles and codes of conduct and will conform to the current legislation and regulations in the countries where the research will be carried out. The consortium is committed to rigorously applying the ethical standards and guidelines of Horizon 2020 in all work, regardless of the country in which the research/demonstration is carried out. The project complies with the Charter of Fundamental Rights of the EU. We subscribe to the requirement within H2020 to deal with ethical issues, which is anchored in the regulation setting up H2020 (Regulation No 1291/2013). Article 19 (1) of this regulation reads as follows:

“ _All the research and innovation activities carried out under Horizon 2020 shall comply with ethical principles and relevant national, Union and international legislation, including the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights and its Supplementary Protocols. Particular attention shall be paid to the principle of proportionality, the right to privacy, the right to the protection of personal data, the right to the physical and mental integrity of a person, the right to non-discrimination and the need to ensure high levels of human health protection_.”

In the A-PATCH project, privacy and data security in data collection, analysis and storage - both in the use of the device and in the research activities of the project - are seen as major risk areas. This is why data safety has to be an integral part of the project. Users’ recordings (of clinical, physical and physiological data) will be collected in the validation of the A-Patch technology and applications. Information about user needs is gathered through questionnaires, observations and interviews.

Technological development and globalization have raised new challenges concerning privacy and data security. To answer this fast digital change, the new General Data Protection Regulation No 2016/679 was adopted and, after its transition period, has applied since 25 May 2018. According to the new Data Protection Regulation, the new rules improve the protection of EU citizens' fundamental right to personal data protection, especially in digital services. The studies ( _i.e_.
data collection, analysis and management) will be conducted according to the relevant Laws and Directives, including:

* _Directive 95/46/EC: Protection of individuals with regard to the processing of personal data and on the free movement of such data._
* _Directive 2002/58/EC: Processing of personal data and the protection of privacy in the electronic communications sector._
* _Charter of Fundamental Rights of the EU, 2000._
* _The new General Data Protection Regulation No 2016/679, which applies from 25 May 2018._
* _Medical device Directive (MDD 93/42/EEC)_
* _EU Regulation No 536/2014 on clinical trials on medicinal products for human use_
* _EU Directive 2006/24/EC of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks_
* _Directive 98/44/EC: Legal protection of biotechnological inventions_
* _Directive 2001/20/EC or Clinical Trials Directive on the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use._
* _Art. 29 - Data Protection Working Party: Working Document on Privacy on the Internet_
* _CIOMS: International Ethical Guidelines for Biomedical Research Involving Human Subjects (2016)_
* _WMA: Declaration of Helsinki of June 1964 and subsequent amendments_
* _UNESCO: Universal Declaration on Bioethics and Human Rights (2005)_
https://phaidra.univie.ac.at/o:1140797
Horizon 2020
0396_Nunataryuk_773421.md
# EXECUTIVE SUMMARY

The main goal of the Nunataryuk project is to determine the impacts of thawing land, coast and subsea permafrost on the global climate and on humans in the Arctic, and to develop targeted and co-designed adaptation and mitigation strategies. For this purpose, a high diversity of data will be collected and produced within the different work packages. The purpose of the Data Management Plan is to describe the data that will be created and to present a concept of how the data will be shared and preserved. The goal of Nunataryuk data management is to create a web-based Nunataryuk data portal which provides a unified search interface to all data sets generated within the project, in order to maximize the visibility and impact of the project's data.

This Data Management Plan (DMP) is based on the H2020 FAIR Data Management Plan template, designed to be applicable to any H2020 project that produces, collects or processes research data. The purpose of the DMP is to describe the data that will be produced, collected or processed during the project, as well as the plans for data sharing and data preservation.

Nunataryuk follows a metadata-driven approach in which a number of physically distributed data repositories are integrated using standardized discovery metadata and interoperability interfaces for metadata and data storage and publication. The Nunataryuk data portal will provide a unified search interface to all gathered data sets. Further, Nunataryuk will host a data management system directly coupled to the Global Terrestrial Network for Permafrost (GTN-P), where many of the data generated in the project will be stored. Nunataryuk promotes free and open access to data in line with the European Open Research Data Pilot (OpenAIRE).

Within this plan, an overview of the data collection procedures is provided, as well as an initial outline of dissemination. This plan is a living document that will be updated during the project.

# INTRODUCTION

## Background and motivation

Within the Nunataryuk project, a vast amount and diversity of data will be produced. The purpose of the DMP is to document how the data generated within the project is handled during and after the project. It describes the basic principles for data management within the project. This includes standards and generation of discovery and use metadata, data sharing and preservation, and life cycle management. This DMP is a living document that will be updated during the project in time with the periodic reports. Nunataryuk is following the principles outlined by the Open Research Data Pilot (OpenAIRE) and The FAIR Guiding Principles for scientific data management and stewardship (Wilkinson et al. 2016 1 ).

## Organization of the plan

This DMP is based on the H2020 FAIR Data Management Plan template 2 designed to be applicable to any H2020 project that produces, collects or processes research data. This is the same template that OpenAIRE refers to in its guidance material.
# Administration details

Project Name: Nunataryuk

Funding: EU HORIZON 2020 Research and Innovation Programme

Partners:

* Alfred Wegener Institute Helmholtz Center for Polar and Marine Research (Germany)
* Stockholms Universitet (Sweden)
* VU University Amsterdam (Netherlands)
* Le Centre National de la Recherche Scientifique (France)
* Université Laval (Canada)
* Max Planck Institute for Meteorology, Hamburg (Germany)
* University of Oulu (Finland)
* Technical University of Denmark (Denmark)
* NORDREGIO (Sweden)
* Stefansson Arctic Institute (Iceland)
* University of Vienna (Austria)
* B•GEOS (Austria)
* Consiglio Nazionale delle Ricerche (Italy)
* University of Oslo (Norway)
* University of Lisbon (Portugal)
* The International Institute for Applied Systems Analysis (Austria)
* University of Hamburg (Germany)
* Université libre de Bruxelles (Belgium)
* Norwegian University of Science and Technology (Norway)
* University of Versailles Saint-Quentin-en-Yvelines (France)
* Grid Arendal (Norway)
* Natural Resources Canada - Geological Survey of Canada (Canada)
* INFORMUS GmbH (Germany)
* ACRI-He (France)
* Université Pierre et Marie Curie (France)
* Helmholtz Zentrum Potsdam Deutsches Geoforschungszentrum (Germany)
* Kommune Kujalleq (Greenland)
* Arctic Portal (Iceland)

# Data summary

The primary goal of Nunataryuk is _to investigate the impacts of thawing coastal and subsea permafrost on the global climate, and develop targeted and co-designed adaptation and mitigation strategies for the Arctic coastal population. Nunataryuk brings together world-leading specialists in natural science and socio-economics to:_

* _develop a quantitative understanding of the fluxes and fates of organic matter released from thawing coastal and subsea permafrost_
* _assess which risks are posed by thawing coastal permafrost to infrastructure, indigenous and local communities and peoples’ health, and from pollution_
* _use this understanding to estimate the long-term impacts of permafrost thaw on global climate and the economy_.

Therefore, a number of datasets will be generated. These will include:

* Datasets to quantify thawing permafrost and its impact on storage and vulnerability of organic matter and contaminants on land
* Datasets of lateral fluxes of organic matter from coastal erosion and watersheds draining into the Arctic Ocean
* Datasets on the quantitative constraints for the vulnerable subsea permafrost system
* Datasets on quantified trends in the signature of organic matter fluxes in coastal waters
* Datasets on health and pollution risks associated with permafrost thaw for wildlife and humans living in the coastal Arctic
* Datasets on the quantified effect of permafrost thaw on Arctic infrastructure by means of site investigations and local-scale modelling

## Data overview

In order to get an overview of the variety and amount of data expected to be generated during the Nunataryuk project, a short data survey was carried out among the Principal Investigators for each work package in March 2017. Chapters 5.1.1 and 5.1.2 present some of the survey results.

### Types and formats of data generated/collected

Nunataryuk will generate a variety of data, which will include:

1. Geospatial data:
   * Lateral carbon fluxes
   * Ocean color
   * Temperatures of soil and snow
   * Model output
   * Organic carbon in the Arctic Ocean shelf sediments
   * Elevations, distances, coordinates (GPS measurements)
   * Airborne LiDAR, hyperspectral measurements
   * UAV surveys
   * Point clouds, thematic maps, coastline, vegetation, geomorphology

2. Multimedia data:
   * Videos
   * Photos
   * Audio recordings
   * Science blogs

3. Empirical data:
   * Interview recordings, transcripts, field notes
   * Socioeconomic data on Arctic coastal settlements

4. Field measurements:
   * Hydrological observations
   * Aquatic carbon samples
   * pH measurements
   * Optical and biogeochemical data
   * Data on physical properties and temperatures of soil, snow, and hydrodynamic conditions
   * Soil temperature measurements
   * Near-surface geophysical data
   * CH4 concentration measurements

5. Laboratory experiments:
   * Post-processed aquatic carbon samples (e.g. isotope information)
   * Optical and biogeochemical data
   * Physical properties of soils
   * Sediment and subsea permafrost organic carbon properties
   * Triple-isotope analyses of CH4
   * Soil carbon, nitrogen and their isotopes
   * Soil organic matter quality data (MS data)
   * Microbial activity
   * Inorganic contaminants
   * Thermal and salinity experiments for thermal model validation
   * Lab-scale coastal erosion experiment (soil and water temperature, wave heights and frequency, imagery for DEM creation, measurements of mechanical erosion)
   * Experimental data (soil laboratory incubations)

6. Other:
   * Data and information from literature (incl. grey literature like white papers, government documents, newspaper articles, etc.)

The main data formats are expected to be:

* Microsoft Excel (XLS)
* Shapefile (SHP)
* Comma-separated values (CSV)
* Text (TXT)
* NetCDF (NC)
* MPEG-4 video format (MP4)
* GeoTIFF (TIF)
* Extensible Markup Language (XML)
* Microsoft Word (DOC)
* Joint Photographic Experts Group (JPG)
* Portable Network Graphics (PNG)
* MPEG 2.5 audio format (MP3)
* Portable Document Format (PDF)
* Hierarchical Data Format (HDF5)
* Raw data (RXP, RAW)
* Various proprietary binary formats (BIN)

A high variety of data formats will be generated and worked with in the Nunataryuk project (Fig. 1). The most popular formats are expected to be XLS (16% of the total amount of datasets), SHP (13%), CSV (12%), TXT (12%) and NetCDF (5%).

Figure 1. Data formats generated in the project

### Origin of the data

The majority of the data used in the project will be generated within the project (81.8 %, Fig. 2). However, some data will be reused in accordance with the FAIR data re-use policy (18.2 %).

Figure 2. Distribution of data origin. Orange field marks newly generated data, blue field marks reused data.

It is estimated that approximately 10 Terabytes of data will be generated within the Nunataryuk project. A major goal of the Nunataryuk data management is to make data generated within the project visible and useful for regional and global monitoring programs, Arctic researchers, Arctic communities and individuals. Therefore, a Nunataryuk data portal will be established, which will provide a unified view on the data produced by the Nunataryuk project. This approach is essential in order to increase the visibility of the Nunataryuk project and benefit from the generated data.

# FAIR data

## Making data findable, including provisions for metadata [FAIR data]

Nunataryuk is following a metadata-driven approach, utilizing internationally accepted standards and protocols for documentation and exchange of discovery and use metadata.
## Making data openly accessible [FAIR data]

Nunataryuk will participate in the Pilot on Open Research Data in Horizon 2020 (OpenAIRE). All discovery metadata will be available through a web-based search interface reachable from the central project website (www.Nunataryuk.org).

Some data may have temporal access restrictions (an embargo period); these will be handled accordingly. Valid reasons for an embargo on data are primarily educational, allowing Ph.D. students to prepare and publish their work (2-3 years), and the publication of research papers (around 1 year). Even while under embargo, data will be shared internally within the project. Any disagreements on access to data, or on internal misuse of data, are to be settled by the Nunataryuk Executive Board.

Data will be made openly accessible through the data management system directly coupled to the Global Terrestrial Network for Permafrost (GTN-P). Most of the datasets produced by the project will also be stored in the data repository PANGAEA (https://pangaea.de/). PANGAEA is a data publisher for Earth and environmental science, hosted jointly by the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) and the Center for Marine Environmental Science (MARUM) at the University of Bremen. PANGAEA provides long-term archiving of data, data publication and dissemination, as well as scientific data management. One major advantage of PANGAEA is that it provides each dataset with a bibliographic citation and a Digital Object Identifier (DOI), allowing the dataset to be identified, shared, published and cited. It is expected that more than 45% of the project's datasets will be provided with a DOI, 23% will not, and the remaining 32% are as yet undefined.

## Making data interoperable [FAIR data]

Standardization is essential for data to be reusable; this concerns both the encoding/documentation of the data and the interfaces to it. The documentation standards referred to earlier in this document are widely used by the scientific communities. In particular, gridded data output is encoded as NetCDF files following the Climate and Forecast (CF) convention, or in the WMO GRIB format. NetCDF files following the CF convention are self-describing and interoperable: applying the CF convention imposes requirements on the structure and semantic annotation of the data (e.g. the identification of variables/parameters through CF standard names). Irregular data will mostly be encoded in the XLS, CSV and TXT formats, which are open and convenient in terms of data interoperability.
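As a concrete illustration of what "self-describing" means here, the following minimal Python sketch (using the netCDF4 library) writes a short soil-temperature time series with CF metadata attached. The file name, site and measured values are invented for illustration; only the CF attributes (Conventions, standard_name, units) reflect the actual convention.

```python
# Minimal sketch of a self-describing, CF-compliant NetCDF file of the kind
# described above. The file name and values are invented for illustration.
import numpy as np
from netCDF4 import Dataset, date2num
from datetime import datetime, timedelta

with Dataset("soil_temperature_example.nc", "w", format="NETCDF4") as nc:
    nc.Conventions = "CF-1.8"
    nc.title = "Example soil temperature series (illustrative only)"

    nc.createDimension("time", None)
    time = nc.createVariable("time", "f8", ("time",))
    time.units = "hours since 2018-06-01 00:00:00"
    time.calendar = "standard"

    temp = nc.createVariable("soil_temperature", "f4", ("time",))
    # The CF standard name and units make the variable machine-interpretable.
    temp.standard_name = "soil_temperature"
    temp.units = "K"

    dates = [datetime(2018, 6, 1) + timedelta(hours=h) for h in range(24)]
    time[:] = date2num(dates, units=time.units, calendar=time.calendar)
    temp[:] = 271.15 + 2.0 * np.random.rand(24)  # invented values
```

Because the variable carries a CF standard name and physical units, CF-aware tools can interpret the file without any project-specific documentation.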
## Increase data re-use (through clarifying licenses) [FAIR data]

Nunataryuk promotes free and open data sharing in line with the Open Research Data Pilot (OpenAIRE). Each dataset needs a license attached; the recommendation in Nunataryuk is to use the Creative Commons Attribution license for data (see https://creativecommons.org/licenses/by/3.0/ for details).

Nunataryuk data should be delivered in a timely manner, i.e. without undue delay, and in any case no later than one year after the dataset is finished. Discovery metadata shall be delivered immediately.

Nunataryuk promotes free and open access to data. Some data may nevertheless be subject to constraints (e.g. on access or dissemination) and may be available exclusively to project participants; the details will be evaluated during the project. The quality of each dataset, and the information about that quality, is the responsibility of the Principal Investigator.

# Allocation of resources

At present it is not possible to estimate the cost of making Nunataryuk data FAIR. Part of the reason is that this work relies on existing functionality at the contributing data centers, functionality that has been developed over many years. The cost of preparing the data in accordance with the specifications, and of the initial sharing, is covered by the project; maintenance over time is covered by the business models of the data centers. There is currently no overview of the costs of long-term data preservation either, as this is the responsibility of the contributing data centers, whose business models differ. This information will be updated in future versions of the DMP.

# Data security

Data security relies on the existing mechanisms of the contributing data centers. Nunataryuk recommends securing the communication between the data management system and its users with secure HTTP (HTTPS). Concerning internal security, Nunataryuk recommends following the best practices of the Open Archival Information System (OAIS). The technical solutions vary between data centers, but most centers use automated checksums and replication.
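To illustrate the checksum-based practice mentioned above, the sketch below computes and later re-verifies a SHA-256 checksum for an archived file. The file name is hypothetical, and actual data centers will rely on their own, typically automated, tooling; this only shows the principle behind fixity verification.

```python
# Illustrative fixity check, assuming a hypothetical archived file.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

archived = Path("soil_temperature_example.nc")  # hypothetical file
recorded = sha256_of(archived)                  # checksum stored at ingest time

# Later (e.g. on a replica), recompute and compare to detect corruption.
if sha256_of(archived) != recorded:
    raise RuntimeError(f"Fixity check failed for {archived}")
```

Combined with replication, a failed check allows a corrupted copy to be replaced by an intact replica held at another location.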
# Ethical aspects

Nunataryuk is transdisciplinary in nature, involves a large socio-economic component, and is intrinsically user- and stakeholder-driven. Ethical principles are therefore central to the design of the project, and their proper handling will be essential for its successful implementation and dissemination. While the project will not collect highly sensitive data, the involvement of human beings and the collection of personal data have the potential to raise general ethical concerns.

To minimize this potential, the project will meet the highest established ethical standards in science and research and will follow, inter alia, the European Charter for Researchers, the Code of Conduct for the Recruitment of Researchers, and EC regulations on personal data processing. The research groups involved will follow, where applicable and available, international and national ethical research principles, national legal regulations on personal data, and any applicable local legislation.

Voluntary informed consent of the research participants will be obtained in all cases. The project will not involve persons unable to give informed consent, apart from occasional interviews/questionnaires with children, in which case informed consent will be obtained from their parents or legal guardians. Data will be collected, stored, protected and disposed of according to the applicable national and local regulations. In addition, all research, independent of the field-site location, will comply with Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. All publication and dissemination of the project's data will be carried out in a manner that respects the research participants' right to privacy, and no link to actual persons will be included in such materials.