A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> INTRODUCTION <s> Academic failure among first-year university students has long fuelled a large number of debates. Many educational psychologists have tried to understand and then explain it. Many statisticians have tried to foresee it. Our research aims to classify, as early in the academic year as possible, students into three groups: the 'low-risk' students, who have a high probability of succeeding; the 'medium-risk' students, who may succeed thanks to the measures taken by the university; and the 'high-risk' students, who have a high probability of failing (or dropping out). This article describes our methodology and provides the most significant variables correlated to academic success among all the questions asked to 533 first-year university students during November of academic year 2003/04. Finally, it presents the results of the application of discriminant analysis, neural networks, random forests and decision trees aimed at predicting those students' academic success. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> INTRODUCTION <s> Applying data mining DM in education is an emerging interdisciplinary research field also known as educational data mining EDM. It is concerned with developing methods for exploring the unique types of data that come from educational environments. Its goal is to better understand how students learn and identify the settings in which they learn to improve educational outcomes and to gain insights into and explain educational phenomena. Educational information systems can store a huge amount of potential data from multiple sources coming in different formats and at different granularity levels. Each particular educational problem has a specific objective with special characteristics that require a different treatment of the mining problem. The issues mean that traditional DM techniques cannot be applied directly to these types of data and problems. As a consequence, the knowledge discovery process has to be adapted and some specific DM techniques are needed. This paper introduces and reviews key milestones and the current state of affairs in the field of EDM, together with specific applications, tools, and future insights. © 2012 Wiley Periodicals, Inc. <s> BIB002
|
Educational Data Mining (EDM) is an emerging field that explores data in educational contexts by applying different Data Mining (DM) techniques and tools. EDM inherits properties from areas such as Learning Analytics, Psychometrics, Artificial Intelligence, Information Technology, Machine Learning, Statistics, Database Management Systems, Computing and Data Mining. It can be considered an interdisciplinary research field that provides intrinsic knowledge of the teaching and learning process for effective education BIB002 . The exponential growth of educational data BIB001 from heterogeneous sources results in an urgent need for research in EDM, which can help to meet the objectives and to determine the specific goals of education. EDM objectives can be classified in the following way:
(1) Academic Objectives
- Person oriented (related to direct participation in the teaching and learning process), e.g. student learning, cognitive learning, modelling, behavior, risk and performance analysis, and predicting the right enrollment decision, in both traditional and digital environments, as well as faculty modelling (job performance and satisfaction analysis).
- Department/Institution oriented (related to a particular department or institution with respect to time, sequence and demand), e.g. redesigning courses according to industry requirements, and identifying realistic problems for an effective research and learning process.
- Domain oriented (related to a particular branch or institution), e.g. designing methods, tools, techniques and Knowledge Discovery based Decision Support Systems (KDDS) for a specific application, branch or institution.
(2) Administrative Objectives
- Administrator oriented (related to the direct involvement of higher authorities/administrators), e.g. resource (infrastructure as well as human) utilization, industry-academia relationships, marketing for student enrollment in the case of private institutions, and the establishment of networks for innovative research and practices.
The objectives of this survey are: to explore heterogeneous educational data by analyzing the authors' views from traditional to intelligent educational systems in the decision-making process; to explore the intelligent tools and techniques used in EDM; and to find out the various EDM challenges. To meet the academic and administrative objectives, a survey of EDM is necessary which focuses on cutting-edge technologies for quality education delivery. This paper discusses the EDM components and the research trends of DM in educational systems for the years 1998 to 2012, covering various issues and challenges in EDM. The rest of this paper is organized into 5 sections. Section 2 focuses on EDM components such as stakeholders, environments, data, methods and tools. Section 3 is about mining educational objectives. Section 4 highlights the research trends in EDM, including various authors' views on educational outcomes and useful EDM tools and techniques. Section 5 is a discussion based on Sections 3 and 4, and Section 6 concludes the paper with observations based on the survey work and the future scope of EDM.
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> EDM environments <s> The subject materials of most enterprise e-training programs were mainly developed by employees of the enterprises; therefore, it becomes a challenging issue to efficiently and effectively translate the knowledge and experiences of the employees to computerized subject materials, especially for those who are not an experienced teacher. In addition, to develop an e- training course, personal ignorance or incorrect concepts might significantly affect the quality of the course if only a single employee is asked to develop the subject materials. To cope with this problem, a multi- expert e-training course design model is proposed in this paper. Accordingly, an e-training course development system has been implemented. Moreover, a practical application has showed that of the novel approach not only can improve the quality of the e- training course, but also help the experts to organize their domain knowledge. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> EDM environments <s> Most e-Learning systems store data about the learner's actions in log files, which give us detailed information about learner behaviour. Data mining and machine learning techniques can give meaning to these data and provide valuable information for learning improvement. One area that is of particular importance in the design of e-Learning systems is learner motivation as it is a key factor in the quality of learning and in the prevention of attrition. One aspect of motivation is engagement, a necessary condition for effective learning. Using data mining techniques for log file analysis, our research investigates the possibility of predicting users' level of engagement, with a focus on disengaged learners. As demonstrated previously across two different e-Learning systems, HTML-Tutor and iHelp, disengagement can be predicted by monitoring the learners' actions (e.g. reading pages and taking test/quizzes). In this paper we present the findings of three studies that refine this prediction approach. Results from the first study show that two additional reading speed attributes can increase the accuracy of prediction. The second study suggests that distinguishing between two different patterns of disengagement (spending a long time on a page/test and browsing quickly through pages/tests) may improve prediction in some cases. The third study demonstrates the influence of exploratory behaviour on prediction, as most users at the first login familiarize themselves with the system before starting to learn. <s> BIB002
|
Formal Environment. Direct interaction with the primary group of education stakeholders, e.g. face-to-face classroom interaction. Informal Environment. Indirect interaction with the primary group of education stakeholders, e.g. web-based education (e-learning BIB002 , e-training as used in Chu et al. BIB001 , online supported tasks).
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Computer Supported Environment (individual and interaction). <s> Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Computer Supported Environment (individual and interaction). <s> This paper consists of an in-depth summary and analysis of the research and development state of the art for intelligent tutoring system (ITS) authoring systems. A seven-part categorization of two dozen authoring systems is given, followed by a characterization of the authoring tools and the types of ITSs that are built for each category. An overview of the knowledge acquisition and authoring techniques used in these systems is given. A characterization of the design tradeoffs involved in building an ITS authoring system is given. Next the pragmatic questions of real use, productivity findings, and evaluation are discussed. Finally, I summarize the major unknowns and bottlenecks to having widespread use of ITS authoring tools. (http://aied.inf.ed.ac.uk/members99/archive/vol_10/murray/full.html) <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Computer Supported Environment (individual and interaction). <s> In this chapter we discuss how recent advances in the field of Computer Supported Collaborative Learning (CSCL) have created the opportunity for new synergies between CSCL and ITS research. Three “hot” CSCL research topics are used as examples: analyzing individual’s and group’s interactions, providing students with adaptive intelligent support, and providing students with adaptive technological means. <s> BIB003 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Computer Supported Environment (individual and interaction). <s> Applying data mining DM in education is an emerging interdisciplinary research field also known as educational data mining EDM. It is concerned with developing methods for exploring the unique types of data that come from educational environments. Its goal is to better understand how students learn and identify the settings in which they learn to improve educational outcomes and to gain insights into and explain educational phenomena. Educational information systems can store a huge amount of potential data from multiple sources coming in different formats and at different granularity levels. Each particular educational problem has a specific objective with special characteristics that require a different treatment of the mining problem. The issues mean that traditional DM techniques cannot be applied directly to these types of data and problems. As a consequence, the knowledge discovery process has to be adapted and some specific DM techniques are needed. 
This paper introduces and reviews key milestones and the current state of affairs in the field of EDM, together with specific applications, tools, and future insights. © 2012 Wiley Periodicals, Inc. <s> BIB004
|
Direct and/or indirect interaction with all three stakeholder groups of education (depending upon the objectives). E.g.: Intelligent Tutoring Systems, with tools such as DOCENT, IDE, ISD Expert and Expert CML related to curriculum development BIB002 , and tools such as Algebra Tutor, Mathematics Tutor, eTeacher, ZOSMAT, REALP, CIRCSIM-Tutor, Why2-Atlas, SmartTutor, AutoTutor, ActiveMath, Eon, GTE and REDEEM related to tutoring systems; collaborative learning as used in BIB003 ; Adaptive Educational Systems [1] ; Learning Management Systems, cognitive learning and Recommender Systems as used in BIB001 ; and User Modeling BIB004 .
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Educational Data <s> education domain offers a fertile ground for many interesting and challenging data mining applications. These applications can help both educators and students, and improve the quality of education. In this paper, we present a real-life application for the Gifted Education Programme (GEP) of the Ministry of Education (MOE) in Singapore. The application involves many data mining tasks. This paper focuses only on one task, namely, selecting students for remedial classes. Traditionally, a cut-off mark for each subject is used to select the weak students. That is, those students whose scores in a subject fall below the cut-off mark for the subject are advised to take further classes in the subject. In this paper, we show that this traditional method requires too many students to take part in the remedial classes. This not only increases the teaching load of the teachers, but also gives unnecessary burdens to students, which is particularly undesirable in our case because the GEP students are generally taking more subjects than non-GEP students, and the GEP students are encouraged to have more time to explore advanced topics. With the help of data mining, we are able to select the targeted students much more precisely. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Educational Data <s> Application of data mining techniques to the WWW (World Wide Web), referred to as Web mining, has been the focus of several research projects and papers. One of several possibilities can be its application to distance education. Taken as a whole, the emerging trends in distance education are facilitating its usability on the Internet. With the explosive growth of information sources available on the WWW, Web mining has become suitable for keeping pace with the trends in education, such as mass customization. In this paper, we define Web mining and present an overview of distance education. We describe the possibilities of application of Web mining to distance education, and, consequently, show that the use of Web mining for educational purposes is of great interest. <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Educational Data <s> Several approaches have been proposed for representing uncertain data in a database. These approaches have typically extended the relational model by incorporating probability measures to capture the uncertainty associated with data items. However, previous research has not directly addressed the issue of normalization for reducing data redundancy and data anomalies in probabilistic databases. We examine this issue. To that end, we generalize the concept of functional dependency to stochastic dependency and use that to extend the scope of normal forms to probabilistic databases. Our approach is a consistent extension of the conventional normalization theory and reduces to the latter. <s> BIB003 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Educational Data <s> In this chapter we discuss how recent advances in the field of Computer Supported Collaborative Learning (CSCL) have created the opportunity for new synergies between CSCL and ITS research. Three “hot” CSCL research topics are used as examples: analyzing individual’s and group’s interactions, providing students with adaptive intelligent support, and providing students with adaptive technological means. 
<s> BIB004 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Educational Data <s> Applying data mining DM in education is an emerging interdisciplinary research field also known as educational data mining EDM. It is concerned with developing methods for exploring the unique types of data that come from educational environments. Its goal is to better understand how students learn and identify the settings in which they learn to improve educational outcomes and to gain insights into and explain educational phenomena. Educational information systems can store a huge amount of potential data from multiple sources coming in different formats and at different granularity levels. Each particular educational problem has a specific objective with special characteristics that require a different treatment of the mining problem. The issues mean that traditional DM techniques cannot be applied directly to these types of data and problems. As a consequence, the knowledge discovery process has to be adapted and some specific DM techniques are needed. This paper introduces and reviews key milestones and the current state of affairs in the field of EDM, together with specific applications, tools, and future insights. © 2012 Wiley Periodicals, Inc. <s> BIB005
|
Decision-making in the field of academic planning involves extensive analysis of huge volumes of educational data . Data are generated from heterogeneous sources, such as the diverse and varied uses in BIB001 , and are diverse and distributed, structured and unstructured. These data are mostly generated from offline or online sources. Offline Data. Offline data are generated from traditional and modern classroom interaction, interactive teaching/learning environments, learner/educator information, student attendance, emotional data, course information, data collected from the academic section of an institution BIB005 , etc. Online Data. Online data are generated from geographically separated stakeholders of education: distance education used in , web-based education used in BIB002 , computer-supported collaborative learning used in BIB004 BIB003 , sensor-generated data, privacy preservation process data, and summarization of data .
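As a rough illustration of how such heterogeneous sources are typically combined before mining, the sketch below merges offline records (attendance, internal marks) with aggregated online activity logs into one table per student; the file names and column names are hypothetical, not taken from any of the cited works.

```python
# Minimal sketch, assuming hypothetical CSV files and column names:
# offline records (attendance, marks) and an online activity log are merged
# into a single per-student table, a common first step before applying DM techniques.
import pandas as pd

offline = pd.read_csv("attendance_marks.csv")   # student_id, attendance_pct, internal_marks
online = pd.read_csv("lms_activity_log.csv")    # student_id, timestamp, action

# Aggregate the online click-stream into simple per-student features.
activity = (online.groupby("student_id")
                  .agg(n_actions=("action", "size"),
                       n_sessions=("timestamp", "nunique"))
                  .reset_index())

# One row per student, combining offline and online evidence.
students = offline.merge(activity, on="student_id", how="left").fillna(0)
print(students.head())
```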
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Classification <s> Recent research has indicated that misuse of intelligent tutoring software is correlated with substantially lower learning. Students who frequently engage in behavior termed “gaming the system” (behavior aimed at obtaining correct answers and advancing within the tutoring curriculum by systematically taking advantage of regularities in the software’s feedback and help) learn only 2/3 as much as similar students who do not engage in such behaviors. We present a machine-learned Latent Response Model that can identify if a student is gaming the system in a way that leads to poor learning. We believe this model will be useful both for re-designing tutors to respond appropriately to gaming, and for understanding the phenomenon of gaming better. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Classification <s> Academic failure among first-year university students has long fuelled a large number of debates. Many educational psychologists have tried to understand and then explain it. Many statisticians have tried to foresee it. Our research aims to classify, as early in the academic year as possible, students into three groups: the 'low-risk' students, who have a high probability of succeeding; the 'medium-risk' students, who may succeed thanks to the measures taken by the university; and the 'high-risk' students, who have a high probability of failing (or dropping out). This article describes our methodology and provides the most significant variables correlated to academic success among all the questions asked to 533 first-year university students during November of academic year 2003/04. Finally, it presents the results of the application of discriminant analysis, neural networks, random forests and decision trees aimed at predicting those students' academic success. <s> BIB002
|
It is a two-phase technique (training and testing) which maps data into predefined classes. This technique is useful for success analysis with low-, medium- and high-risk students as in BIB002 , student monitoring systems , predicting student performance, misuse detection as in BIB001 , etc.
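A minimal sketch of this two-phase workflow is given below, assuming a hypothetical student data set with low/medium/high risk labels; it illustrates the general technique, not the exact setup of BIB002.

```python
# Hedged sketch: training and testing a decision-tree classifier that maps
# student records onto predefined risk classes (low/medium/high).
# The CSV file, feature columns and label column are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

students = pd.read_csv("student_features.csv")
X = students[["attendance_pct", "internal_marks", "entry_score"]]
y = students["risk_label"]                       # values: "low", "medium", "high"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier(max_depth=4, random_state=42)   # shallow tree keeps rules readable
clf.fit(X_train, y_train)                                    # training phase
print(classification_report(y_test, clf.predict(X_test)))    # testing phase
```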
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Clustering <s> With advances in information and communication technology, interactive multimedia learning systems are widely used to support teaching and learning. However, as human factors vary across users, they may prefer the design of interactive multimedia learning systems differently. To have a deep understanding of the influences of human factors, we apply a data mining approach to the investigation of users’ preferences in using interactive multimedia learning systems. More specifically, a clustering technique named K-modes is used to group users’ preferences. The results indicate that users’ preferences could be divided into four groups where computer experience is a key human factor that influences their preferences. Implications for the development of interactive multimedia learning systems are also discussed. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Clustering <s> Group work is widespread in education. The growing use of online tools supporting group work generates huge amounts of data. We aim to exploit this data to support mirroring: presenting useful high-level views of information about the group, together with desired patterns characterizing the behavior of strong groups. The goal is to enable the groups and their facilitators to see relevant aspects of the group's operation and provide feedback if these are more likely to be associated with positive or negative outcomes and indicate where the problems are. We explore how useful mirror information can be extracted via a theory-driven approach and a range of clustering and sequential pattern mining. The context is a senior software development project where students use the collaboration tool TRAC. We extract patterns distinguishing the better from the weaker groups and get insights in the success factors. The results point to the importance of leadership and group interaction, and give promising indications if they are occurring. Patterns indicating good individual practices were also identified. We found that some key measures can be mined from early data. The results are promising for advising groups at the start and early identification of effective and poor practices, in time for remediation. <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Clustering <s> Data mining combines machine learning, statistics and visualization techniques to discover and extract knowledge. One of the biggest challenges that higher education faces is to improve student retention (National Audition Office, 2007). Student retention has become an indication of academic performance and enrolment management. Our project uses data mining and natural language processing technologies to monitor student, analyze student academic behaviour and provide a basis for efficient intervention strategies. Our aim is to identify potential problems as early as possible and to follow up with intervention options to enhance student retention. In this paper we discuss how data mining can help spot students ‘at risk’, evaluate the course or module suitability, and tailor the interventions to increase student retention. <s> BIB003
|
It is a technique for grouping similar data into clusters, where the groups are not predefined. This technique is useful for distinguishing learners by their preferences in using interactive multimedia systems as in BIB001 , for comprehensive analysis of students' characteristics as in BIB003 , and for collaborative learning as in BIB002 .
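The sketch below illustrates the general clustering workflow on hypothetical usage features; note that the study cited as BIB001 used K-modes on categorical preference data, whereas scikit-learn's K-means on numeric features stands in here.

```python
# Illustrative sketch (hypothetical data): grouping learners by their usage of an
# interactive multimedia system, without any predefined groups.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

usage = pd.read_csv("multimedia_usage.csv")     # time_on_video, quiz_attempts, forum_posts
features = ["time_on_video", "quiz_attempts", "forum_posts"]
X = StandardScaler().fit_transform(usage[features])

km = KMeans(n_clusters=4, n_init=10, random_state=0)   # four preference groups, as reported in BIB001
usage["cluster"] = km.fit_predict(X)
print(usage.groupby("cluster")[features].mean())       # typical behaviour of each group
```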
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Prediction <s> The monitoring and support of university freshmen is considered very important at many educational institutions. In this paper we describe the results of the educational data mining case study aimed at predicting the Electrical Engineering (EE) students drop out after the first semester of their studies or even before they enter the study program as well as identifying success-factors specific to the EE program. Our experimental results show that rather simple and intuitive classifiers (decision trees) give a useful result with accuracies between 75 and 80%. Besides, we demonstrate the usefulness of cost-sensitive learning and thorough analysis of misclassifications, and show a few ways of further prediction improvement without having to collect additional data about the students. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Prediction <s> Data mining combines machine learning, statistics and visualization techniques to discover and extract knowledge. One of the biggest challenges that higher education faces is to improve student retention (National Audition Office, 2007). Student retention has become an indication of academic performance and enrolment management. Our project uses data mining and natural language processing technologies to monitor student, analyze student academic behaviour and provide a basis for efficient intervention strategies. Our aim is to identify potential problems as early as possible and to follow up with intervention options to enhance student retention. In this paper we discuss how data mining can help spot students ‘at risk’, evaluate the course or module suitability, and tailor the interventions to increase student retention. <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Prediction <s> We conduct a data mining project to generate predictive models for student retention management on campus. Given new records of incoming students, these predictive models can produce short accurate prediction lists identifying students who tend to need the support from the student retention program most. The project is a component in our artificial intelligence class. Students in the class get involved in the entire process of modeling and problem solving using machine learning algorithms. We examine the quality of the predictive models generated by the machine learning algorithms. The results show that some of the machine learning algorithms are able to establish effective predictive models from the existing student retention data. <s> BIB003
|
It is a technique which predicts a future state rather than a current state. This technique is useful to predict students' success rate, drop-out as in Dekker et al. BIB001 BIB002 , and retention management as in BIB003 .
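A small sketch of drop-out prediction is shown below; the cost-sensitive class weights loosely echo the cost-sensitive decision-tree setup of Dekker et al. BIB001, but the data set, columns and weights are assumptions.

```python
# Hedged sketch: predicting first-semester drop-out with a cost-sensitive decision tree.
# File name, feature columns and class weights are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("first_semester.csv")
X = data[["gpa_sem1", "credits_passed", "entry_score"]]
y = data["dropped_out"]                                   # 1 = dropped out, 0 = retained

# Penalise missing a drop-out more heavily than raising a false alarm.
clf = DecisionTreeClassifier(class_weight={0: 1, 1: 3}, max_depth=5, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```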
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Neural Network <s> Academic failure among first-year university students has long fuelled a large number of debates. Many educational psychologists have tried to understand and then explain it. Many statisticians have tried to foresee it. Our research aims to classify, as early in the academic year as possible, students into three groups: the 'low-risk' students, who have a high probability of succeeding; the 'medium-risk' students, who may succeed thanks to the measures taken by the university; and the 'high-risk' students, who have a high probability of failing (or dropping out). This article describes our methodology and provides the most significant variables correlated to academic success among all the questions asked to 533 first-year university students during November of academic year 2003/04. Finally, it presents the results of the application of discriminant analysis, neural networks, random forests and decision trees aimed at predicting those students' academic success. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Neural Network <s> Student retention is an important issue for all university policy makers due to the potential negative impact on the image of the univer- sity and the career path of the dropouts. Although this issue has been thoroughly studied by many institutional researchers using parametric tech- niques, such as regression analysis and logit modeling, this article attempts to bring in a new perspective by exploring the issue with the use of three data mining techniques, namely, classification trees, multivariate adaptive regression splines (MARS), and neural networks. Data mining procedures identify transferred hours, residency, and ethnicity as crucial factors to re- tention. Carrying transferred hours into the university implies that the stu- dents have taken college level classes somewhere else, suggesting that they are more academically prepared for university study than those who have no transferred hours. Although residency was found to be a crucial predic- tor to retention, one should not go too far as to interpret this finding that retention is affected by proximity to the university location. Instead, this is a typical example of Simpson's Paradox. The geographical information system analysis indicates that non-residents from the east coast tend to be more persistent in enrollment than their west coast schoolmates. <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Neural Network <s> In this paper we are proposing a new Attribute Selection Measure Function (heuristic) on existing C4.5 algorithm. The advantage of heuristic is that the split information never approaches zero, hence produces stable Rule set and Decision Tree. In ideal situation, in admission process in engineering colleges in India, a student takes admission based on AIEEE rank and family pressure. If the student does not get admission in the desired branch of engineering, then they find it difficult to take decision which will be the suitable branch. The proposed knowledge based decision technique will guide the student for admission in proper branch of engineering. Another approach is also made to analyze the accuracy rate for decision tree algorithm (C5.0) and back propagation algorithm (ANN) to find out which one is more accurate for decision making. In this research work we have used the AIEEE2007 Database. 
<s> BIB003 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Neural Network <s> Research highlights? Adaptive learning in TESL. ? Data mining in e-learning system. ? e-learning system for TESL. This study proposes an Adaptive Learning in Teaching English as a Second Language (TESL) for e-learning system (AL-TESL-e-learning system) that considers various student characteristics. This study explores the learning performance of various students using a data mining technique, an artificial neural network (ANN), as the core of AL-TESL-e-learning system. Three different levels of teaching content for vocabulary, grammar, and reading were set for adaptive learning in the AL-TESL-e-learning system. Finally, this study explores the feasibility of the proposed AL-TESL-e-learning system by comparing the results of the regular online course control group with the AL-TESL-e-learning system adaptive learning experiment group. Statistical results show that the experiment group had better learning performance than the control group; that is, the AL-TESL-e-learning system was better than a regular online course in improving student learning performance. <s> BIB004
|
Neural networks learn complex patterns from data, and rules extracted from the learned network can improve its interpretability. This technique is useful for identifying retention factors such as residency and ethnicity as in BIB002 , for predicting academic performance as in BIB001 , for accurate prediction in branch selection as in BIB003 , and for exploring learning performance in a TESL-based e-learning system BIB004 .
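The following sketch shows a small feed-forward network for pass/fail prediction of the kind compared against decision trees in BIB001 and BIB003; the data set and columns are hypothetical, and feature scaling is included because it generally helps MLP training converge.

```python
# Minimal sketch, assuming a hypothetical student data set:
# a small multilayer perceptron for predicting pass/fail outcomes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

data = pd.read_csv("student_records.csv")
X = data[["entry_rank", "attendance_pct", "internal_marks"]]
y = data["passed"]                                       # 1 = passed, 0 = failed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```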
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Association Rule Mining <s> A rechargeable aqueous metal-halogen cell is described which includes a casing, a pair of spaced apart porous electrode substrates in the casing, a micro-porous separator between the electrode substrates defining a positive and a negative electrode compartment, an aqueous electrolytic solution containing a zinc salt selected from the class consisting of zinc bromide, zinc iodide, and mixtures thereof in both compartments, and an organic halogen complexing additive of nitrobenzene in the electrolytic solution of at least the positive compartment. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Association Rule Mining <s> Over the years, several statistical tools have been used to analyze students’ performance from different points of view. This paper presents data mining in education environment that identifies students’ failure patterns using association rule mining technique. The identified patterns are analysed to offer a helpful and ::: constructive recommendations to the academic planners in higher institutions of learning to enhance their decision making process. This will also aid in the curriculum structure and modification in order to improve students’ academic performance and trim down failure rate. The software for mining student failed courses was developed and the analytical process was described. <s> BIB002
|
It is a technique for identifying specific relationships among data items. This technique is useful for identifying students' failure patterns BIB002 , parameters related to the admission process, migration, alumni contributions, student assessment, correlations between different groups of students, and guiding the search for a better-fitting transfer model of student learning, as in BIB001 .
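A tiny hand-rolled sketch of the idea is given below: rules of the form "failed course A -> failed course B" are scored by support and confidence over a toy set of students, in the spirit of the failure-pattern mining in BIB002; the course names and thresholds are made up for illustration.

```python
# Hand-rolled sketch (toy data): simple association rules over failed-course sets,
# scored by support and confidence.
from itertools import permutations

failed_courses = [                       # one set of failed courses per student
    {"Maths I", "Physics I"},
    {"Maths I", "Physics I", "Prog I"},
    {"Physics I", "Prog I"},
    {"Maths I", "Prog I"},
]
n = len(failed_courses)

def support(itemset):
    """Fraction of students who failed every course in the itemset."""
    return sum(itemset <= s for s in failed_courses) / n

courses = sorted(set().union(*failed_courses))
for a, b in permutations(courses, 2):
    sup = support({a, b})
    if sup >= 0.5:                                  # minimum support threshold
        conf = sup / support({a})                   # confidence of the rule a -> b
        if conf >= 0.6:                             # minimum confidence threshold
            print(f"failed {a} -> failed {b}  (support={sup:.2f}, confidence={conf:.2f})")
```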
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Web mining <s> Most e-Learning systems store data about the learner's actions in log files, which give us detailed information about learner behaviour. Data mining and machine learning techniques can give meaning to these data and provide valuable information for learning improvement. One area that is of particular importance in the design of e-Learning systems is learner motivation as it is a key factor in the quality of learning and in the prevention of attrition. One aspect of motivation is engagement, a necessary condition for effective learning. Using data mining techniques for log file analysis, our research investigates the possibility of predicting users' level of engagement, with a focus on disengaged learners. As demonstrated previously across two different e-Learning systems, HTML-Tutor and iHelp, disengagement can be predicted by monitoring the learners' actions (e.g. reading pages and taking test/quizzes). In this paper we present the findings of three studies that refine this prediction approach. Results from the first study show that two additional reading speed attributes can increase the accuracy of prediction. The second study suggests that distinguishing between two different patterns of disengagement (spending a long time on a page/test and browsing quickly through pages/tests) may improve prediction in some cases. The third study demonstrates the influence of exploratory behaviour on prediction, as most users at the first login familiarize themselves with the system before starting to learn. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> Web mining <s> Mining association rules from huge amounts of data is an important issue in data mining, with the discovered information often being commercially valuable. Moreover, companies that conduct similar business are often willing to collaborate with each other by mining significant knowledge patterns from the collaborative datasets to gain the mutual benefit. However, in a cooperative project, some of these companies may want certain strategic or private data called sensitive patterns not to be published in the database. Therefore, before the database is released for sharing, some sensitive patterns have to be hidden in the database because of privacy or security concerns. To solve this problem, sensitive-knowledge-hiding (association rules hiding) problem has been discussed in the research community working on security and knowledge discovery. The aim of these algorithms is to extract as much as nonsensitive knowledge from the collaborative databases as possible while protecting sensitive information. Sensitive-knowledge-hiding problem was proven to be a nondeterministic polynomial-time hard problem. After that, a lot of research has been completed to solve the problem. In this article, we will introduce and discuss the major categories of sensitive-knowledge-protecting methodologies. © 2011 Wiley Periodicals, Inc. <s> BIB002
|
It is a technique for mining web data. This technique is useful for building virtual communities in computational intelligence as used in , for determining learners' misconceptions as in BIB002 , and for exploring cognitive sense. Apart from the above methods, two further methods have been mentioned, i.e. distillation of data for human judgment and discovery with models, to analyze the behavioral impact of students in learning environments BIB001 .
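As a hedged illustration of the log-file analysis underlying such work (e.g. the disengagement prediction in BIB001), the sketch below derives simple engagement features from an access log; the log format and column names are assumptions.

```python
# Sketch under assumptions: computing rough engagement features (page views and
# average dwell time between page requests) from an e-learning access log.
import pandas as pd

log = pd.read_csv("access_log.csv", parse_dates=["timestamp"])   # user_id, timestamp, url
log = log.sort_values(["user_id", "timestamp"])

# Time elapsed since the previous request by the same user: a crude reading-time proxy.
log["dwell_sec"] = log.groupby("user_id")["timestamp"].diff().dt.total_seconds()

features = (log.groupby("user_id")
               .agg(page_views=("url", "size"),
                    mean_dwell_sec=("dwell_sec", "mean"))
               .reset_index())
print(features.head())
```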
|
A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> education domain offers a fertile ground for many interesting and challenging data mining applications. These applications can help both educators and students, and improve the quality of education. In this paper, we present a real-life application for the Gifted Education Programme (GEP) of the Ministry of Education (MOE) in Singapore. The application involves many data mining tasks. This paper focuses only on one task, namely, selecting students for remedial classes. Traditionally, a cut-off mark for each subject is used to select the weak students. That is, those students whose scores in a subject fall below the cut-off mark for the subject are advised to take further classes in the subject. In this paper, we show that this traditional method requires too many students to take part in the remedial classes. This not only increases the teaching load of the teachers, but also gives unnecessary burdens to students, which is particularly undesirable in our case because the GEP students are generally taking more subjects than non-GEP students, and the GEP students are encouraged to have more time to explore advanced topics. With the help of data mining, we are able to select the targeted students much more precisely. <s> BIB001 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given. <s> BIB002 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> Web-based learning environments are now used extensively as integral components of course delivery in tertiary education. To provide an effective learning environment, it is important that educators understand how these environments are used by their students. In conventional teaching environments educators are able to obtain feedback on student learning experiences in face-to-face interactions with their students, enabling continual evaluation of their teaching programs. However, when students work in electronic environments, this informal monitoring is not possible; educators must look for other ways to attain this information. Capturing and recording student interactions with a website provides a rich source of information from data that is gathered unobtrusively. The aim of this study was firstly to explore what information can be gained from analysing student interactions with Web-based learning environments and secondly to determine the value of this process in providing information about student learning behaviours and learning outcomes. 
This study has provided critical information to educators about the learning behaviour of their students, informing future enhancements and developments to a courseware website and the teaching program it supports. <s> BIB003 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> Abstract Social cognitive career theory ( Lent, Brown, & Hackett, 1994 ) was originally designed to help explain interest development, choice, and performance in career and educational domains. These three aspects of career/academic development were presented in distinct but overlapping segmental models. This article presents a fourth social cognitive model aimed at understanding satisfaction experienced in vocational and educational pursuits. The model posits paths whereby core social cognitive variables (e.g., self-efficacy, goals) function jointly with personality/affective trait and contextual variables that have been linked to job satisfaction. We consider the model’s implications for forging an understanding of satisfaction that bridges the often disparate perspectives of organizational and vocational psychology. <s> BIB004 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> One of the biggest challenges that higher learning institutions face today is to improve the quality of managerial decisions. The managerial decision making process becomes more complex as the complexity of educational entities increase. Educational institute seeks more efficient technology to better manage and support decision making procedures or assist them to set new strategies and plan for a better management of the current processes. One way to effectively address the challenges for improving the quality is to provide new knowledge related to the educational processes and entities to the managerial system. This knowledge can be extracted from historical and operational data that reside in the educational organization's databases using the techniques of data mining technology. Data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper presents the capabilities of data mining in the context of higher educational system by i) proposing an analytical guideline for higher education institutions to enhance their current decision processes, and ii) applying data mining techniques to discover new explicit knowledge which could be useful for the decision making processes. <s> BIB005 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> In this study, the authors developed and factor analyzed the Norwegian Teacher Self-Efficacy Scale. They also examined relations among teacher self-efficacy, perceived collective teacher efficacy, external control (teachers' general beliefs about limitations to what can be achieved through education), strain factors, and teacher burnout. Participants were 244 elementary and middle school teachers. The analysis supported the conceptualization of teacher self-efficacy as a multidimensional construct. They found strong support for 6 separate but correlated dimensions of teacher self-efficacy, which were included in the following subscales: Instruction, Adapting Education to Individual Students' Needs, Motivating Students, Keeping Discipline, Cooperating With Colleagues and Parents, and Coping With Changes and Challenges. 
They also found support for a strong 2nd-order self-efficacy factor underlying the 6 dimensions. Teacher self-efficacy was conceptually distinguished from perceived collective teacher efficacy and external control. Teacher self-efficacy was strongly related to collective teacher efficacy and teacher burnout. <s> BIB006 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> A novel personalized instructing recommendation system (PIRS) is designed for Web-based learning. This system recognizes different patterns of learning style and Web using habits through testing the learning styles of students and mining their Web browsing logs. Firstly, it processes the sparse data by item-based top-N recommendation algorithm in the course of testing the learning styles. Then it analyzes the habits and the interests of the Web users through mining the frequent sequences in the Web browsing logs by AprioriAll algorithm. Finally, this system completes personalized recommendation of the learning content based on the learning style and the habit of Web usage. Experiment shows that the recommendation model, proposed in this paper, is not only satisfied with the urgent need of the users, but also feasible and effective. <s> BIB007 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> The concept map proposed by Novak is a good tool to portray knowledge structure and to diagnose students' misconception in education. However, most of the learning concept maps have to be constructed through the suggestions of experts or scholars in related realm. It is really a complicated and time-consuming knowledge acquisition process. The study proposed to apply the algorithm of Apriori for Concept Map to develop an intelligent concept diagnostic system (ICDS). It provides teachers with constructed concept maps of learners rapidly, and enables teachers to diagnose the learning barriers and misconception of learners instantly. The best Remedial-Instruction Path (RIP) can be reached through the algorithm of RIP suggested in this study. Furthermore, RIP can be designed to provide remedial learning to learners. Moreover, by using statistical method, the study analyzed 245 students' data to investigate whether the learning performance of learners can be significantly enhanced after they have been guided by the RIP. <s> BIB008 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> The purpose of this paper is to propose an adaptive system analysis for optimizing learning sequences. The analysis employs a decision tree algorithm, based on students' profiles, to discover the most adaptive learning sequences for a particular teaching content. The profiles were created on the basis of pretesting and posttesting, and from a set of five student characteristics: gender, personality type, cognitive style, learning style, and the students' grades from the previous semester. This paper address the problem of adhering to a fixed learning sequence in the traditional method of teaching English, and recommend a rule for setting up an optimal learning sequence for facilitating students' learning processes and for maximizing their learning outcome. By using the technique proposed in this paper, teachers will be able both to lower the cost of teaching and to achieve an optimally adaptive learning sequence for students. 
The results show that the power of the adaptive learning sequence lies in the way it takes into account students' personal characteristics and performance; for this reason, it constitutes an important innovation in the field of Teaching English as a Second Language (TESL). <s> BIB009 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> Student retention is an important issue for all university policy makers due to the potential negative impact on the image of the univer- sity and the career path of the dropouts. Although this issue has been thoroughly studied by many institutional researchers using parametric tech- niques, such as regression analysis and logit modeling, this article attempts to bring in a new perspective by exploring the issue with the use of three data mining techniques, namely, classification trees, multivariate adaptive regression splines (MARS), and neural networks. Data mining procedures identify transferred hours, residency, and ethnicity as crucial factors to re- tention. Carrying transferred hours into the university implies that the stu- dents have taken college level classes somewhere else, suggesting that they are more academically prepared for university study than those who have no transferred hours. Although residency was found to be a crucial predic- tor to retention, one should not go too far as to interpret this finding that retention is affected by proximity to the university location. Instead, this is a typical example of Simpson's Paradox. The geographical information system analysis indicates that non-residents from the east coast tend to be more persistent in enrollment than their west coast schoolmates. <s> BIB010 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> This paper presents an applied study in data mining and knowledge discovery. It aims at discovering patterns within historical students' academic and financial data at UST (University of Science and Technology) from the year 1993 to 2005 in order to contribute improving academic performance at UST. Results show that these rules concentrate on three main issues, students' academic achievements (successes and failures), students' drop out, and students' financial behavior. Clustering (by K-means algorithm), association rules (by Apriori algorithm) and decision trees by (J48 and Id3 algorithms) techniques have been used to build the data model. Results have been discussed and analyzed comprehensively and then well evaluated by experts in terms of some criteria such as validity, reality, utility, and originality. In addition, practical evaluation using SQL queries have been applied to test the accuracy of produced model (rules). <s> BIB011 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> In this paper we are proposing a new Attribute Selection Measure Function (heuristic) on existing C4.5 algorithm. The advantage of heuristic is that the split information never approaches zero, hence produces stable Rule set and Decision Tree. In ideal situation, in admission process in engineering colleges in India, a student takes admission based on AIEEE rank and family pressure. If the student does not get admission in the desired branch of engineering, then they find it difficult to take decision which will be the suitable branch. The proposed knowledge based decision technique will guide the student for admission in proper branch of engineering. 
Another approach is also made to analyze the accuracy rate for decision tree algorithm (C5.0) and back propagation algorithm (ANN) to find out which one is more accurate for decision making. In this research work we have used the AIEEE2007 Database. <s> BIB012 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> This special issue of JEDM was dedicated to bridging work done in the disciplines of educational and psychological assessment and educational data mining (EDM) via the assessment design and implementation framework of evidence-centered design (ECD). It consisted of a series of five papers: one conceptual paper on ECD, three applied case studies that use ECD and EDM tools, and one simulation study that relies on ECD for its design and EDM for its implementation. In this reflection piece we discuss some of the key lessons that we have learned from the articles in this special issue with respect to the instructional utility of the digital learning environments, the nature of the statistical methodologies used, and the added value of the ECD framework for the work conducted in these projects. <s> BIB013 </s> A SURVEY ON EDUCATIONAL DATA MINING AND RESEARCH TRENDS <s> MINING EDUCATIONAL OBJECTIVIES <s> Abstract The study empirically tests an integrative model of work satisfaction ( Lent and Brown, 2006 , Duffy and Lent, 2009 , Lent et al., 2008 , Lent et al., 2011 ) in a sample of 5,022 teachers in Abu Dhabi in the United Arab Emirates. The study provided more support for the Lent and Brown (2006) model. Results revealed that this model was a strong fit for the data and accounted for 82% of the variance in work satisfaction. Of the five predictor classes, work conditions, goal progress, and positive affect were each found to explain unique predictive variance. This suggests that teachers who are most satisfied with their jobs see their work environment as supportive, experience positive goal progress, and report high levels of trait positive affect. Self-efficacy was related indirectly to work satisfaction (via work conditions and via goal progress). Goal support was also related indirectly to work satisfaction (via work conditions, and via self efficacy, but through goal progress. Implications of the findings for future research and efforts to promote teachers’ job satisfaction in Abu Dhabi are discussed. <s> BIB014
|
This survey focused on mining academic objectives of EDM in context of traditional to dynamic environments. In traditional teaching and learning environment Performance and Behavior analysis are performed on the basis of observation and paper records used in . BIB003 . This process is static used in BIB001 . This system has the drawbacks such as it cannot meet the need of the individual learner as well as lacking dynamic learning which can be improved by using five steps of an academic analytics process such as capture, report, predict, act and refine [36] . Learning and assessment process in a virtual environment using sophisticated DM methods in a digital learning environment is presented by BIB013 . This research focused on individual learners by "information-processing narratives" and group learners by "socio-cognitive narratives". To enhance the quality in higher learning institutions, the concept of predictive and descriptive models discussed by BIB005 . Predictive model predicts the success rate for individual students; individual lecturer and Descriptive model describe the pattern modeling of student course enrollment, course assignment policy making, behavior analysis etc. In Web-based education system learner behaviors, access patterns are recorded in a log file described in BIB002 , hence able to analyze the need of the individual learner. To better design and modification of web sites by analyzing the access patterns in weblogs are described in . The limitation of the log file is the authenticity of the user. In BIB010 provides the different way to log record process by keeping record of learning path. This approach is suitable only for small log files. To accumulate large log file in a real or virtual environment, an approach given by where recording of all activities of learning such as reading, writing, appearing test, communicate with peer groups are possible. To enhance this concept, BIB007 added collaborative learning approach between learner groups and educators which provides an easy way to analysis learner learning behavior. E-learning is one way of mining online data. Importance of DM in e-learning, concept map in elearning described in BIB008 learning management and Moodle system was described in . Researchers BIB007 BIB009 consider the "perception behavior" of learners and analysis with the help of sequential pattern mining technique which is able to analysis the data in a time sequence of actions. The researchers mixed up the different DM techniques to validate the Predictive and Descriptive model so it is not clearly visible which technique/algorithm is to discover the appropriate quality in higher education. To overcome this issue, an approach given by BIB011 , trying to discover the vital patterns of students by analyzing academic and financial data in terms of validity, reality, utility and originality. The researcher used clustering algorithm (k-means), Association Rules (Apriori algorithm) and DT algorithm (J48, ID3) and WEKA data mining tool to validate the data model. In this research, researcher focuses on vital pattern analysis in higher education system. But researcher did not mention which algorithm/technique is best to analyze vital patterns for quality education. Knowledge based decision technique by comparative study of the DM algorithms (C5.0 CART, ANN) and DM Tool (SPSS Clementine) was given by . 
The attributes mainly considered in this research work were enrollment decision-making parameters such as parental pressure, industry demand and historical placement records. To enhance the accuracy of the analysis, the real data set of AIEEE 2007 was used in this work. The work concluded that C5.0 has the highest accuracy rate for predicting the enrollment decision. Another approach, given by BIB012 , proposes a new Attribute Selection Measure Function (heuristic) on top of the existing C4.5 algorithm. The advantage of the heuristic is that the split information never approaches zero, hence it produces a stable rule set and decision tree. Most of the research discussed above addresses student perspectives, whereas the satisfaction levels of teachers, which are also important in the educational system, were not analyzed. To analyze this matter, BIB004 proposed a model comprising five attributes, i.e., positive affect, goal support, self-efficacy, work conditions and goal progress. This model was tested on sample data of teachers employed in Abu Dhabi, and it was found that most of the teachers were satisfied with their supportive work conditions/environments. Other parameters such as student behavior, parent-teacher relationship, administrative satisfaction BIB006 , social culture, stress and demographic variables are also important for evaluating teacher satisfaction. In recent research, BIB014 enhanced the concept of BIB004 by testing 22 hypotheses on 5,022 samples of teachers employed in Abu Dhabi. This study found a strong bond between the parameters "positive affect" and "work conditions". "Goal progress" and "self-efficacy" are essential components, whereas goal support improves goal progress if a teacher has high confidence in the workplace. Apart from teachers' job satisfaction, it is necessary to mine teachers' research interests, including interdisciplinary areas, to create a knowledge hub and hence transform into world-class institutions. In , a methodology is presented for managing educational capacity utilization, simulating various academic proposals and ultimately building a Decision Support System (DSS) that gives a comprehensive framework for systematic and efficient management of university resources.
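To make the algorithm comparison reported above concrete, the following minimal sketch contrasts a decision-tree classifier (used here as a rough stand-in for C5.0) with a back-propagation neural network (MLP) on synthetic enrollment data. The attribute names, the synthetic labeling rule and the use of scikit-learn are illustrative assumptions only; this is not a reproduction of the AIEEE 2007 study or its exact algorithms.

```python
# Illustrative comparison of a decision tree and a back-propagation ANN on
# synthetic "enrollment decision" data; all attributes and labels are made up.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
# Hypothetical attributes: entrance-exam rank percentile, parental pressure,
# industry demand index, historical placement rate of the branch.
X = np.column_stack([
    rng.uniform(0, 100, n),    # rank percentile
    rng.uniform(0, 10, n),     # parental pressure score
    rng.uniform(0, 1, n),      # industry demand index
    rng.uniform(0.3, 1.0, n),  # historical placement rate
])
# Toy labeling rule: enroll when placement and demand are high (plus noise).
y = ((0.6 * X[:, 3] + 0.3 * X[:, 2] + 0.001 * X[:, 0]
      + rng.normal(0, 0.05, n)) > 0.55).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("decision tree accuracy:", accuracy_score(y_te, tree.predict(X_te)))
print("back-propagation ANN accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```

Which of the two families scores higher on data like this depends on feature scaling and model settings, which is why such comparative studies report accuracy on a held-out set rather than assuming one algorithm always wins.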
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> I. INTRODUCTION TO NANONETWORKS <s> This chapter contains sections titled: Introduction Silicon-Based CMOS Scaling Nanoelectronic Materials and Devices Three-Dimensional (3D) Integration Summary References ]]> <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> I. INTRODUCTION TO NANONETWORKS <s> Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> I. INTRODUCTION TO NANONETWORKS <s> Abstract This paper provides an in-depth view on nanosensor technology and electromagnetic communication among nanosensors. First, the state of the art in nanosensor technology is surveyed from the device perspective, by explaining the details of the architecture and components of individual nanosensors, as well as the existing manufacturing and integration techniques for nanosensor devices. Some interesting applications of wireless nanosensor networks are highlighted to emphasize the need for communication among nanosensor devices. A new network architecture for the interconnection of nanosensor devices with existing communication networks is provided. The communication challenges in terms of terahertz channel modeling, information encoding and protocols for nanosensor networks are highlighted, defining a roadmap for the development of this new networking paradigm. 
<s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> I. INTRODUCTION TO NANONETWORKS <s> Development of targeted drug delivery systems using magnetic microrobots increases the therapeutic indices of drugs. These systems have to be incorporated with precise motion controllers. We demonstrate closed-loop motion control of microrobots under the influence of controlled magnetic fields. Point-to-point motion control of a cluster of iron oxide nanoparticles (diameter of 250 nm) is achieved by pulling the cluster towards a reference position using magnetic field gradients. Magnetotactic bacterium (MTB) is controlled by orienting the magnetic fields towards a reference position. MTB with membrane length of 5 μm moves towards the reference position using the propulsion force generated by its flagella. Similarly, self-propelled microjet with length of 50 μm is controlled by directing the microjet towards a reference position by external magnetic torque. The microjet moves along the field lines using the thrust force generated by the ejecting oxygen bubbles from one of its ends. Our control system positions the cluster of nanoparticles, an MTB and a microjet at an average velocity of 190 μm/s, 28 μm/s, 90 μm/s and within an average region-of-convergence of 132 μm, 40 μm, 235 μm, respectively. <s> BIB004
|
In the upcoming years, the advancement of nanotechnology is expected to accelerate the development of integrated devices with sizes ranging from one to a few hundred nanometers BIB002 , BIB003 . With the aim of shrinking traditional machines and creating nano-devices with new functionality, nanotechnologies have produced novel nano-materials and nano-particles with behaviours and properties not observed at the microscopic level. The links and connectivity between distributed nano-devices operating collaboratively lead to the vision of nanonetworks, following the proposal of the nano-machine concept. The limited capabilities of nano-machines in terms of processing power, complexity and range of operations can be expanded by this collaborative communication. It is changing the paradigm from the Internet of Things (IoT) to the Internet of Nano-Things (IoNTs), which shares the same development path as nanonetworks. Communication between nano-machines in IoNTs can be set up through nano-mechanical, acoustic, chemical, electromagnetic (EM) and molecular communication approaches BIB002 . Unfortunately, traditional communication technologies are not suitable, mainly due to limitations such as the size, complexity and energy consumption of transmitters, receivers and other components at the nano-scale BIB001 ; thus, novel and suitable communication techniques from the physical layer to higher layers need to be developed for each paradigm. The molecular and EM communication schemes are envisioned as the two most promising paradigms, and numerous studies have been carried out on both. This review focuses on molecular and EM approaches and presents their backgrounds, applications, recent developments and challenges. We mainly present a comprehensive survey of the research that has already been done to enable communication in nanonetworks. Moreover, several aspects of the integration of nanonetworks have been identified. We propose to implement a hybrid communication scheme taking advantage of both paradigms to enhance the communication performance, aiming to broaden and realize more applications. The feasibility of this novel hybrid communication is discussed based on the requirements and enabling technologies from both micro and macro perspectives, and the open challenges are explored as a source of inspiration towards future developments of this inter-connectivity. This paper provides a structured and comprehensive review of the recent literature on body-centric nanonetworks, an effectual foundation of IoNTs. The main contributions of this survey are summarized as follows. • The various applications are classified and summarized. • The latest advancements in the physical, link, MAC, network and application layers are comprehensively reviewed, in addition to security challenges. • The hybrid communication scheme collaboratively employing EM-based nano-communication and molecular communication together is introduced. • Open issues and challenges for such hybrid networks are introduced. The rest of the paper is organized as follows. Section II presents an overview of various communication paradigms, numerous applications and standardization. Section III discusses the general requirements and performance metrics of the envisioned body-centric nanonetworks, while Section IV illustrates the enabling and concomitant technologies which would help the development of nanonetworks from the EM and bio perspectives, respectively.
The architecture of the network and the performance of EM and molecular communication are discussed in Sections V and VI, respectively. The connectivity of both communication methods is discussed in Section VII. In Section VIII, research related to security issues is discussed. Finally, the challenges and open problems are discussed, together with a brief conclusion.
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> II. AN OVERVIEW OF NANONETWORKS <s> Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> II. AN OVERVIEW OF NANONETWORKS <s> Untethered robots miniaturized to the length scale of millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biome dical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications. <s> BIB002
|
According to Feynman, there is plenty of room at the bottom . Based on this statement and the considerable development of nanotechnology, Prof. Metin Sitti has proposed that in the near future networks would go down to the nanoscale if nano-robots and molecular machines are adopted as their elements BIB002 . Thus, the concept of nanonetworks was proposed. However, the connection between nano-devices in such networks is a challenge, leading to the study of nano-communication , BIB001 . Therefore, nano-communication can be defined as the communication between nano-devices, where the communication principles need to be novel and modified to meet the demands of the nano-world. To make this clearer, four requirements are summarized in IEEE P1906.1, covering aspects such as components, system structure and communication principles.
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Nano-communication paradigms <s> Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Nano-communication paradigms <s> Medical nanorobotics exploits nanometer-scale components and phenomena with robotics to provide new medical diagnostic and interventional tools. Here, the architecture and main specifications of a novel medical interventional platform based on nanorobotics and nanomedicine, and suited to target regions inaccessible to catheterization are described. The robotic platform uses magnetic resonance imaging (MRI) for feeding back information to a controller responsible for the real-time control and navigation along pre-planned paths in the blood vessels of untethered magnetic carriers, nanorobots, and/or magnetotactic bacteria (MTB) loaded with sensory or therapeutic agents acting like a wireless robotic arm, manipulator, or other extensions necessary to perform specific remote tasks. Unlike known magnetic targeting methods, the present platform allows us to reach locations deep in the human body while enhancing targeting efficacy using real-time navigational or trajectory control. The paper describes several versions of the platform upgraded through additional software and hardware modules allowing enhanced targeting efficacy and operations in very difficult locations such as tumoral lesions only accessible through complex microvasculature networks. 
<s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Nano-communication paradigms <s> A remarkable feature of modern silicon electronics is its ability to remain physically invariant, almost indefinitely for practical purposes. Although this characteristic is a hallmark of applications of integrated circuits that exist today, there might be opportunities for systems that offer the opposite behavior, such as implantable devices that function for medically useful time frames but then completely disappear via resorption by the body. We report a set of materials, manufacturing schemes, device components, and theoretical design tools for a silicon-based complementary metal oxide semiconductor (CMOS) technology that has this type of transient behavior, together with integrated sensors, actuators, power supply systems, and wireless control strategies. An implantable transient device that acts as a programmable nonantibiotic bacteriocide provides a system-level example. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Nano-communication paradigms <s> Development of targeted drug delivery systems using magnetic microrobots increases the therapeutic indices of drugs. These systems have to be incorporated with precise motion controllers. We demonstrate closed-loop motion control of microrobots under the influence of controlled magnetic fields. Point-to-point motion control of a cluster of iron oxide nanoparticles (diameter of 250 nm) is achieved by pulling the cluster towards a reference position using magnetic field gradients. Magnetotactic bacterium (MTB) is controlled by orienting the magnetic fields towards a reference position. MTB with membrane length of 5 μm moves towards the reference position using the propulsion force generated by its flagella. Similarly, self-propelled microjet with length of 50 μm is controlled by directing the microjet towards a reference position by external magnetic torque. The microjet moves along the field lines using the thrust force generated by the ejecting oxygen bubbles from one of its ends. Our control system positions the cluster of nanoparticles, an MTB and a microjet at an average velocity of 190 μm/s, 28 μm/s, 90 μm/s and within an average region-of-convergence of 132 μm, 40 μm, 235 μm, respectively. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Nano-communication paradigms <s> Recent progress in bioresorbable radio frequency electronics has promised the prospect of realizing a transient microbot system (TMS) for therapeutic applications. In this paper, we investigate simulation tools for the analysis of a TMS-oriented nano-communication (NC) model for targeted drug delivery, where the physically transient microbots are remotely controllable and trackable through an applied electromagnetic field (EMF), and are responsible for transportation of drug particles (channel in the NC context). Our approach is illustrated with a study case of drug delivery for breast cancer treatment by using the proposed analytical framework integrating robotics and communications at small length scales. <s> BIB005
|
For the network to work well, communication links between the nano-devices need to be established. In BIB001 , nano-communication is studied in two scenarios: (1) communication between nano-devices and the micro/macro-system, and (2) communication between nano-devices. Furthermore, molecular, electromagnetic, acoustic and nano-mechanical communication can be adapted to nanonetworks [10] , as summarized in our previous work in . Based on the burgeoning of nanotechnology, a fresh model of mechanical communication, i.e., touch communication (TouchCom), was also proposed in , where swarms of nano-robots are adopted as the message carriers. In TouchCom, transient microbots (TMs) BIB003 - BIB002 are used to carry the drug particles, and they are controlled and guided by the external macro-unit (MAU) BIB004 , BIB005 . These TMs stay in the body for some time; their pathway constitutes the channel, and the operations of loading and unloading drugs can be treated as the transmitting and receiving processes. The channel of TouchCom can be characterized by the propagation delay and the loss of signal strength in terms of the angular/delay spectra . A simulation tool was also introduced to characterize the motion of the nanorobots in the blood vessels BIB005 .
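As a rough illustration of describing such a carrier-based channel by its propagation delay, the toy Monte Carlo below draws per-carrier transit times through one vessel segment under an assumed laminar flow profile with a small diffusive jitter. All parameter values and the simplified 1-D model are invented for illustration and are not taken from the TouchCom papers or their simulator.

```python
# Toy estimate of carrier (transient microbot) transit times through a vessel
# segment, characterizing a TouchCom-like channel by its propagation-delay spread.
# Every number below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)

L_seg = 0.05   # vessel segment length [m] (assumed)
v_mean = 0.01  # mean blood flow velocity [m/s] (assumed)
D = 1e-9       # effective diffusion coefficient [m^2/s] (assumed)
n = 10000      # number of injected carriers

# Area-weighted radial positions and a laminar (Poiseuille) velocity profile.
r = np.sqrt(rng.uniform(0.0, 1.0, n))   # normalized radial position
v = 2.0 * v_mean * (1.0 - r ** 2)       # per-carrier axial velocity
v = np.clip(v, 1e-4, None)              # avoid near-zero wall velocities

t_advect = L_seg / v                                          # advection-only transit time
t_jitter = rng.normal(0.0, np.sqrt(2.0 * D * t_advect) / v)   # small diffusive jitter
delays = t_advect + t_jitter

print(f"mean propagation delay: {delays.mean():.2f} s")
print(f"delay spread (std)    : {delays.std():.2f} s")
```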
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> PHYSICAL PRINCIPLES. Classical Magnitudes and Scaling Laws. Potential Energy Surfaces. Molecular Dynamics. Positional Uncertainty. Transitions, Errors, and Damage. Energy Dissipation. Mechanosynthesis. COMPONENTS AND SYSTEMS. Nanoscale Structural Components. Mobile Interfaces and Moving Parts. Intermediate Subsystems. Nanomechanical Computational Systems. Molecular Sorting, Processing, and Assembly. Molecular Manufacturing Systems. IMPLEMENTATION STRATEGIES. Macromolecular Engineering. Paths to Molecular Manufacturing. Appendices. Afterword. Symbols, Units, and Constants. Glossary. References. Index. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> A cholesterol biosensors fabricated by immobilization of cholesterol oxidase (ChOx) in a layer of silicic sol-gel matrix on the top of a Prussian Blue-modified glassy carbon electrode was prepared. It is based on the detection of hydrogen peroxide produced by ChOx at −0.05 V. The half-lifetime of the biosensor is about 35 days. Cholesterol can be determined in the concentration range of 1×10−6−8×10−5 mol/L with a detection limit of 1.2×10−7 mol/L. Normal interfering compounds, such as ascorbic acid and uric acid do not affect the determination. The high sensitivity and outstanding selectivity are attributed to the Prussian Blue film modified on the sensor. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> This paper discusses the technological advancement in nanotechnology and some aspects of applications in engineering devices and biotechnologies. The lecture outlines the logical development of technology and engineering following (1) initiation of idea and visualization, (2) ability to control and measurements and (3) intense effort in engineering developments. The paper includes a brief review of the historical development in biology, nano and potential applications of electro-magnetic phenomena. The paper also describes the activities in the Center for Nanomagnetics and Biotechnology for clinical applications using nanomagnetic particles. Several magnetic phenomena in life sciences are illustrated. Brief discussions of nano materials are introduced. The paper then concludes with possible near term applications and long term developments of nanotechnology in biomedical and bioengineering. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Recently the technology of capsule endoscopy has developed dramatically and many researchers are making efforts to combine surgical function into the capsule type endoscope. In this paper, the micro biopsy module which is a part of the capsule endoscope is proposed. The proposed module is less than 2 mm in thickness and has a diameter of 10 mm. It consists of a trigger with paraffin block, rotational tissue-cutting razor with a torsion spring and controller. This module makes it possible for the capsule endoscope to obtain a sample tissue inside the small intestine which can not be reached by a conventional biopsy device. 
Through dedicated experiments, tissue samples were successfully extracted using the proposed biopsy module and the cells in samples were extracted and tested by a microscope. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> A capsule endoscope is a swallowable wireless miniature camera for getting images of the gastrointestinal (GI) mucosa. The initial capsule endoscope model was developed by Given Imaging and approved in Western countries in 2001. Before the introduction of capsule endoscopy (CE) and double-balloon endoscopy (DBE), there was no effective modality for the evaluation and management of patients with obscure GI bleeding. Obscure GI bleeding is defined as bleeding of unknown origin that persists or recurs after a negative initial or primary endoscopy (colonoscopy or upper endoscopy) result. The first capsule endoscope model, which is now regarded as a first-line tool for the detection of abnormalities of the small bowel, was the PillCam SB. It was approved in Japan in April 2007. The main indication for use of the PillCam SB is obscure GI bleeding. Almost the only complication of CE is capsule retention, which is the capsule remaining in the digestive tract for a minimum of 2 weeks. A retained capsule can be retrieved by DBE. There are some limitations of CE in that it cannot be used to obtain a biopsy specimen or for endoscopic treatment. However, the combination of a PillCam SB and DBE seems to be the best strategy for management of obscure GI bleeding. Recently, several new types of capsule endoscope have been developed, such as Olympus CE for the small bowel, PillCam ESO for investigation of esophageal diseases, and PillCam COLON for detection of colonic neoplasias. In the near future, CE is expected to have a positive impact on many aspects of GI disease evaluation and management. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> A novel algorithm to accurately determine the location of an ultrasound source within heterogeneous media is presented. The method obtains a small spacial error of 748 microm+/-310 microm for 100 different measurements inside a circular area with 140 mm diameter. The new algorithm can be used in targeted drug delivery for cancer therapies as well as to accurately locate any kind of ultrasound sources in heterogeneous media, such as ultrasonically marked medical devices or tumors. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Nanotechnologies promise new solutions for several applications in biomedical, industrial and military fields. At nano-scale, a nano-machine can be considered as the most basic functional unit. Nano-machines are tiny components consisting of an arranged set of molecules, which are able to perform very simple tasks. Nanonetworks. i.e., the interconnection of nano-machines are expected to expand the capabilities of single nano-machines by allowing them to cooperate and share information. Traditional communication technologies are not suitable for nanonetworks mainly due to the size and power consumption of transceivers, receivers and other components. 
The use of molecules, instead of electromagnetic or acoustic waves, to encode and transmit the information represents a new communication paradigm that demands novel solutions such as molecular transceivers, channel models or protocols for nanonetworks. In this paper, first the state-of-the-art in nano-machines, including architectural aspects, expected features of future nano-machines, and current developments are presented for a better understanding of nanonetwork scenarios. Moreover, nanonetworks features and components are explained and compared with traditional communication networks. Also some interesting and important applications for nanonetworks are highlighted to motivate the communication needs between the nano-machines. Furthermore, nanonetworks for short-range communication based on calcium signaling and molecular motors as well as for long-range communication based on pheromones are explained in detail. Finally, open research challenges, such as the development of network components, molecular communication theory, and the development of new architectures and protocols, are presented which need to be solved in order to pave the way for the development and deployment of nanonetworks within the next couple of decades. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Inflammation occurs in episodic flares in Crohn's disease, which are part of the waxing and waning course of the disease. Healing between flares allows the intestine to reconstitute its epithelium, but this healing results in the deposition of fibrotic scar tissue as part of the healing process. Repeated cycles of flares and healing often lead to clinically significant fibrosis and stenosis of the intestine. Patients are treated empirically with steroids, with their many side effects, in the hope that they will respond. Many patients would be better treated with surgery if we could identify which patients truly have intestinal fibrosis. Ultrasound elasticity imaging (UEI) offers the potential to radically improve the diagnosis and management of local tissue elastic property, particularly intestinal fibrosis. This method allows complete characterization of local intestine tissue with high spatial resolution. The feasibility of UEI on Crohn's disease is demonstrated by directly applying this technique to an animal model of inflammatory bowel disease (IBD). Five female Lewis rats (150-180g) were prepared with phosphate buffered solution (PBS) as a control group and six were prepared with repeated intrarectal administration of trinitrobenzenesulfonic acid (TNBS) as a disease group. Preliminary strain measurements differentiate the diseased colons from the normal colons (p < 0.0002) and compared well with direct mechanical measurements and histology (p < 0.0005). UEI provides a simple and accurate assessment of local severity of fibrosis. The preliminary results on an animal model also suggest the feasibility of translating this imaging technique directly to human subjects for both diagnosis and monitoring. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> The influence of oxygen on various ophthalmological complications is not completely understood and intraocular oxygen measurements are essential for better diagnosis and treatment. 
A magnetically controlled wireless sensor device is proposed for minimally invasive intraocular oxygen concentration measurements. This device will make it possible to make measurements at locations that are currently too invasive for human intervention by integrating a luminescence optical sensor and a magnetic steering system. The sensor works based on quenching of luminescence in the presence of oxygen. A novel iridium phosphorescent complex is designed and synthesized for this system. A frequency-domain lifetime measurement approach is employed because of the intrinsic nature of the lifetime of luminescence. Experimental results of the oxygen sensor together with magnetic and hydrodynamic characterization of the sensor platform are presented to demonstrate the concept. In order to use this sensor for in vivo intraocular applications, the size of the sensor must be reduced, which will require an improved signal-to-noise ratio. <s> BIB009 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Plants use inducible defence mechanisms to fend off harmful organisms. Resistance that is induced in response to local attack is often expressed systemically, that is, in organs that are not yet damaged. In the search for translocated defence signals, biochemical studies follow the physical movement of putative signals, and grafting experiments use mutants that are impaired in the production or perception of these signals. Long-distance signals can directly activate defence or can prime for the stronger and faster induction of defence. Historically, research has focused on the vascular transport of signalling metabolites, but volatiles can play a crucial role as well. We compare the advantages and constraints of vascular and airborne signals for the plant, and discuss how they can act in synergy to achieve optimised resistance in distal plant parts. <s> BIB010 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Filtration of molecules by nanometer-sized structures is ubiquitous in our everyday life, but our understanding of such molecular filtration processes is far less than desired. Until recently, one of the main reasons was the lack of experimental methods that can help provide detailed, microscopic pictures of molecule–nanostructure interactions. Several innovations in experimental methods, such as nuclear track-etched membranes developed in the 70s, and more recent development of nanofluidic molecular filters, played pivotal roles in advancing our understanding. With the ability to make truly molecular-scale filters and pores with well-defined sizes, shapes, and surface properties, now we are well positioned to engineer better functionality in molecular sieving, separation and other membrane applications. Reviewing past theoretical developments (often scattered across different fields) and connecting them to the most recent advances in the field would be essential to get a full, unified view on this important engineering question. <s> BIB011 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> New methods to identify trace amount of infectious pathogens rapidly, accurately and with high sensitivity are in constant demand to prevent epidemics and loss of lives. 
Early detection of these pathogens to prevent, treat and contain the spread of infections is crucial. Therefore, there is a need and urgency for sensitive, specific, accurate, easy-to-use diagnostic tests. Versatile biofunctionalized engineered nanomaterials are proving to be promising in meeting these needs in diagnosing the pathogens in food, blood and clinical samples. The unique optical and magnetic properties of the nanoscale materials have been put to use for the diagnostics. In this review, we focus on the developments of the fluorescent nanoparticles, metallic nanostructures and superparamagnetic nanoparticles for bioimaging and detection of infectious microorganisms. The various nanodiagnostic assays developed to image, detect and capture infectious virus and bacteria in solutions, food or biological samples in vitro and in vivo are presented and their relevance to developing countries is discussed. <s> BIB012 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> This chapter is a brief description of the state of the art of the field of targeted drug delivery using magnetic implants. It describes the advantages and drawbacks of the use of internal magnets to concentrate magnetic nanoparticles near tumor locations, and the different approaches to this task performed in vitro and in vivo reviewed in literature are presented. <s> BIB013 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Mechanical forces play important roles in the regulation of various biological processes at the molecular and cellular level, such as gene expression, adhesion, migration, and cell fate, which are essential to the maintenance of tissue homeostasis. In this review, we discuss emerging bioengineered tools enabled by microscale technologies for studying the roles of mechanical forces in cell biology. In addition to traditional mechanobiology experimental techniques, we review recent advances of microelectromechanical systems (MEMS)-based approaches for cell mechanobiology and discuss how microengineered platforms can be used to generate in vivo-like micromechanical environment in in vitro settings for investigating cellular processes in normal and pathophysiological contexts. These capabilities also have significant implications for mechanical control of cell and tissue development and cell-based regenerative therapies. <s> BIB014 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> BACKGROUND ::: Capsule endoscopy (CE) has been widely used in clinical practice. ::: ::: ::: OBJECTIVE ::: To provide systematically pooled results on the indications and detection, completion, and retention rates of small-bowel CE. ::: ::: ::: DESIGN ::: A systematic review. ::: ::: ::: MAIN OUTCOME MEASUREMENTS ::: We searched the PubMed database (2000-2008) for original articles relevant to small-bowel CE for the evaluation of patients with small-bowel signs and symptoms. Data on the total number of capsule procedures, the distribution of different indications for the procedures, the percentages of procedures with positive detection (detection rate), complete examination (completion rate), or capsule retention (retention rate) were extracted and/or calculated, respectively. 
In addition, the detection, completion, and retention rates were also extracted and/or calculated in relation to indications such as obscure GI bleeding (OGIB), definite or suspected Crohn's disease (CD), and neoplastic lesions. ::: ::: ::: RESULTS ::: A total of 227 English-language original articles involving 22,840 procedures were included. OGIB was the most common indication (66.0%), followed by the indication of only clinical symptoms reported (10.6%), and definite or suspected CD (10.4%). The pooled detection rates were 59.4%; 60.5%, 55.3%, and 55.9%, respectively, for overall, OGIB, CD, and neoplastic lesions. Angiodysplasia was the most common reason (50.0%) for OGIB. The pooled completion rate was 83.5%, with the rates being 83.6%, 85.4%, and 84.2%, respectively, for the 3 indications. The pooled retention rates were 1.4%, 1.2%, 2.6%, and 2.1%, respectively, for overall and the 3 indications. ::: ::: ::: LIMITATIONS ::: Inclusion and exclusion criteria were loosely defined. ::: ::: ::: CONCLUSIONS ::: The pooled detection, completion, and retention rates are acceptable for total procedures. OGIB is the most common indication for small-bowel CE, with a high detection rate and low retention rate. In addition, angiodysplasia is the most common finding in patients with OGIB. A relatively high retention rate is associated with definite or suspected CD and neoplasms. <s> BIB015 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Triggerable drug delivery systems enable on-demand controlled release profiles that may enhance therapeutic effectiveness and reduce systemic toxicity. Recently, a number of new materials have been developed that exhibit sensitivity to visible light, near-infrared (NIR) light, ultrasound, or magnetic fields. This responsiveness can be triggered remotely to provide flexible control of dose magnitude and timing. Here we review triggerable materials that range in scale from nano to macro, and are activated by a range of stimuli. <s> BIB016 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Wireless capsule endoscopy (WCE) offers a feasible noninvasive way to detect the whole gastrointestinal (GI) tract and revolutionizes the diagnosis technology. However, compared with wired endoscopies, the limited working time, the low frame rate, and the low image resolution limit the wider application. The progress of this new technology is reviewed in this paper, and the evolution tendencies are analyzed to be high image resolution, high frame rate, and long working time. Unfortunately, the power supply of capsule endoscope (CE) is the bottleneck. Wireless power transmission (WPT) is the promising solution to this problem, but is also the technical challenge. Active CE is another tendency and will be the next geneion of the WCE. Nevertheless, it will not come true shortly, unless the practical locomotion mechanism of the active CE in GI tract is achieved. The locomotion mechanism is the other technical challenge, besides the challenge of WPT. The progress about the WPT and the active capsule technology is reviewed. <s> BIB017 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> magnetic fi elds. 
[ 20 ] These helical microrobots have been used to transport a single microsphere in three dimensions. The microrobots were coated with a thin titanium (Ti) layer for better biocompatibility and affinity with the cells; this was confirmed by culturing cells on the helical microrobots. Similarly, microspheres can be transported in the flowing streams of microfluidic channels, which enable the microrobots to swim in the <s> BIB018 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Transport of individual cells or chemical payloads on a subcellular scale is an enabling tool for the study of cellular communication, cell migration, and other localized phenomena. We present a magnetically actuated robotic system capable of fully automated manipulation of cells and microbeads. Our strategy uses autofluorescent robotic transporters and fluorescently labeled microbeads to aid tracking and control in optically obstructed environments. We demonstrate automated delivery of microbeads infused with chemicals to specified positions on neurons. This system is compatible with standard upright and inverted light microscopes and is capable of applying forces less than 1 pN for precision positioning tasks. <s> BIB019 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Wirelessly interconnected nanorobots, i.e., engineered devices of sizes ranging from one to a few hundred nanometers, are promising revolutionary diagnostic and therapeutic medical applications that could enhance the treatment of major diseases. Each nanorobot is usually designed to perform a set of basic tasks such as sensing and actuation. A dense wireless network of nano-devices, i.e., a nanonetwork, could potentially accomplish new and more complex functionalities, e.g., in-vivo monitoring or adaptive drug-delivery, thus enabling revolutionary nanomedicine applications. Several innovative communication paradigms to enable nanonetworks have been proposed in the last few years, including electromagnetic communications in the terahertz band, or molecular and neural communications. In this paper, we propose and discuss an alternative approach based on establishing intra-body opto-ultrasonic communications among nanorobots. Opto-ultrasonic communications are based on the optoacoustic effect, which enables the generation of high-frequency acoustic waves by irradiating the medium with electromagnetic energy in the optical frequency range. We first discuss the fundamentals of nanoscale opto-ultrasonic communications in biological tissues by modeling the generation, propagation and detection of opto-ultrasonic waves, and we explore important tradeoffs. Then, we discuss potential research challenges for the design of opto-ultrasonic nanonetworks of implantable devices at the physical, medium access control, and network layers of the protocol stack. <s> BIB020 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> The assembly of three-dimensional, complex functional materials at micro- or nanoscales for various applications is challenging. Tasoglu et al. develop a magnetic micro-robot system that is capable of programmable coding of soft and rigid building blocks to build heterogeneous materials.
<s> BIB021 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> As we move towards the miniaturization of devices to perform tasks at the nano and microscale, it has become increasingly important to develop new methods for actuation, sensing, and control. Over the past decade, bio-hybrid methods have been investigated as a promising new approach to overcome the challenges of scaling down robotic and other functional devices. These methods integrate biological cells with artificial components and therefore, can take advantage of the intrinsic actuation and sensing functionalities of biological cells. Here, the recent advancements in bio-hybrid actuation are reviewed, and the challenges associated with the design, fabrication, and control of bio-hybrid microsystems are discussed. As a case study, focus is put on the development of bacteria-driven microswimmers, which has been investigated as a targeted drug delivery carrier. Finally, a future outlook for the development of these systems is provided. The continued integration of biological and artificial components is envisioned to enable the performance of tasks at a smaller and smaller scale in the future, leading to the parallel and distributed operation of functional systems at the microscale. <s> BIB022 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> This paper proposes a new wireless biopsy method where a magnetically actuated untethered soft capsule endoscope carries and releases a large number of thermo-sensitive, untethered microgrippers (μ-grippers) at a desired location inside the stomach and retrieves them after they self-fold and grab tissue samples. We describe the working principles and analytical models for the μ-gripper release and retrieval mechanisms, and evaluate the proposed biopsy method in ex vivo experiments. This hierarchical approach combining the advanced navigation skills of centimeter-scaled untethered magnetic capsule endoscopes with highly parallel, autonomous, submillimeter scale tissue sampling μ-grippers offers a multifunctional strategy for gastrointestinal capsule biopsy. <s> BIB023 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. Applications of nanonetworks <s> Untethered robots miniaturized to the length scale of millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biome dical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications. <s> BIB024 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> B. 
Applications of nanonetworks <s> Carbon nanotubes: synthesis, structure, properties and applications. <s> BIB025
|
Nano-communication spans a wide range of areas such as military, ubiquitous health care, sport, entertainment and many others; a detailed description has been summarized and classified in BIB007 and is shown in Table I . The main characteristic of all applications is to improve people's quality of life, and nanonetworks are generally believed to be the perfect candidate for biomedical fields due to their bio-compatibility, bio-stability and dimensions. Generally, the applications are classified into two categories, medical and non-medical, as below. TABLE I: Applications of nanonetworks BIB024 , BIB007 . Biomedical: health monitoring (active visual imaging for disease diagnosis BIB015 BIB005 BIB017 BIB006 BIB008 ; mobile sensing for disease diagnosis BIB009 [25] BIB002 BIB012 ) and therapy (tissue engineering BIB021 [32] BIB018 ; bio-hybrid implants BIB001 ; targeted therapy/drug delivery BIB013 [37] BIB016 [39] BIB022 ; cell manipulation BIB003 [19] BIB019 [43] BIB014 ; minimally invasive surgery BIB004 [46] BIB023 ). Environmental: bio-degradation; bio-control BIB010 [49] BIB011 . Industrial: product quality control; intelligent office; nano-functionalized equipment [10] . Military: nuclear, biological and chemical defences BIB025 . 1) Medical Applications: There are many biomedical applications in the literature, e.g., intraocular pressure (IOP) monitoring for vision BIB024 and nano-robots for cancer cells BIB020 . Moreover, nanonetworks will monitor the body status in real time, and some nano-devices can be used as tissue substitutes, i.e., bio-hybrid implants. In the following, we present two interesting examples that come from movies, which show the limitless possibilities of nanonetwork medical applications. a) Health-Monitoring: In the movie The Circle, an example of a health-monitoring system installed in the body of the lead actress May is displayed. The whole system consists of two parts: digestible nano-sensors and a wristband. First, the doctor asked May to drink a bag of green solution containing the nano-sensors and then gave her a wristband to be worn all the time, shown in Fig. 1a . The wristband syncs up with the sensors May has swallowed, and both devices collect data on heart rate, blood pressure, cholesterol, sleep duration, sleep quality, digestive efficiency, and so on. The capture of the movie in Fig. 1b shows the related information on the wall. Through the wristband, all the data can be stored anywhere May wants. Also, all the data would be shared with the related people, such as the doctor or the nutritionist. b) Real-Time Detection: In the movie 007: Spectre, a technology called Smart Blood was illustrated: a collection of nano-machines/micro-chips capable of tracking Mr. Bond's movements in the field. They were injected into Bond's bloodstream, and the institute could monitor Bond's vital signs from anywhere on the planet, shown in Fig. 2 . This is not just a scientific idea in the movie, because several researchers are working on various kinds of injectable substances that can identify cancer cells; the team at Seattle-based Blaze Bioscience is among the pioneers. c) Drug Delivery: It is believed that nanonetworks can not only sense information but also take action when needed. The most promising application would be real-time glucose control.
Nano-sensors spread in the blood vessels can monitor the glucose level; at the same time, nano-machines could release insulin to regulate it (shown in Fig. 3 ). With such technologies, people with diabetes would no longer need to prick themselves and inject medicine in public, which can cause embarrassment and, if done incorrectly, infection. The signal can also be sent to related people through wearable devices or smartphones so that they can help the patients build healthy habits.
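A minimal sketch of the closed-loop idea behind this scenario (nano-sensors sample glucose, nano-machines release insulin above a threshold) is given below; the threshold rule, step sizes and numbers are purely illustrative assumptions and not a physiological or clinical model.

```python
# Toy closed-loop glucose regulation: sense the level, release insulin when it
# exceeds a threshold. All dynamics and numbers are illustrative assumptions.
import random

glucose = 180.0          # mg/dL, starting above the healthy range (assumed)
threshold = 140.0        # release insulin above this level (assumed)
insulin_effect = 8.0     # glucose drop per released dose (assumed)
meal_disturbance = 25.0  # occasional glucose rise from meals (assumed)

random.seed(1)
for minute in range(0, 240, 10):          # simulate 4 hours in 10-minute steps
    if random.random() < 0.1:             # random meal intake
        glucose += meal_disturbance
    released = glucose > threshold        # nano-machine decision rule
    if released:
        glucose -= insulin_effect
    glucose -= 1.0                        # slow natural clearance
    print(f"t={minute:3d} min  glucose={glucose:6.1f} mg/dL  "
          f"insulin={'yes' if released else 'no'}")
```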
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> Fig. 3: Nanonetwork for drug delivery <s> Abstract We investigate techniques to enable communication in the THz band between graphene-based nanoscale devices and microscale network components for agricultural crop-monitoring applications. The properties of THz communications, in particular sensitivity to moisture levels on the communications path and attenuation by obstacles (e.g., leaves) mean that achieving a desired level of throughput of monitoring data can be difficult. Using a simplified model of plant structure and typical plant moisture patterns, we analyze the performance of four frequency selection strategies in terms of throughput and energy utilization for varying numbers of nano and microscale devices, moisture concentration patterns and plant leaf densities. We find that a Two-Phase optimization strategy for frequency selection performs best in a wide range of operational conditions and that leaf density has a significant impact on achievable throughput. Our plant model could serve as a useful basis for planning the necessary concentration of nano and microscale devices to deploy on particular crop types in order to meet given network performance targets. <s> BIB001
|
2) Non-Medical Applications: a) Entertainment (VR/AR): Currently, the realization of virtual/augmented reality requires the help of external devices such as smartphones, shown in Fig. 4a , which is bulky and inconvenient. If nano-devices were spread over the eyes, near the retina, they could help people see things as required. At the same time, nano-machines spread over the body would stimulate different parts of the human body to make the experience feel real. Take the Pokemon shown in Fig. 4b as an example: the nano-machines in the eyes would let people see the monster in the real world. To catch the monster, all one needs to do is throw an arm, and the sensors in the arm would capture this action and judge whether the path and strength are right to capture the monster. If the monster fights back, for example with the shock generated by Pikachu, the nano-machines on the skin would cause some itches or aches. It is widely thought that such new technologies would radically change current gaming experiences and would also help people gain experiences they have never had. b) E-Environment: An illustration of the e-office is shown in Fig. 5 . Every element spread over the office, including the internal components, is a nano-device permanently connected to the Internet. Thus, the locations and statuses of all belongings can be tracked effortlessly. Furthermore, by analysing all the information collected by the nanonetworks of the office, actuators can make the working environment pleasant and intelligent. c) Agriculture/Industry Monitoring: Fig. 6 shows an example of using nanonetworks for crop monitoring BIB001 . Plants release characteristic chemical compounds, which can be used to analyse the environmental conditions and the growth condition of the plants. The structure of such a monitoring network is described in BIB001 and shown in Fig. 6a . Such systems can not only monitor the growth status of the plants but also analyse underground soil and air conditions, and can be used as a chemical defence system.
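Because link quality in such THz crop-monitoring deployments depends strongly on moisture along the path, a toy frequency-selection sketch is given below: it simply picks the candidate sub-band with the lowest total loss, modeled as a free-space spreading term plus a moisture-scaled absorption term. The sub-band list and absorption coefficients are made-up placeholders and do not correspond to the strategies or values evaluated in the cited study.

```python
# Toy sub-band selection for a THz crop-monitoring link: total loss is a
# free-space spreading term plus a moisture-dependent absorption term.
# Absorption coefficients below are invented placeholders.
import math

d = 0.5          # link distance [m] (assumed)
moisture = 0.7   # relative canopy moisture level, 0..1 (assumed)
c = 3e8          # speed of light [m/s]

# Candidate sub-band centre frequencies [Hz] and illustrative absorption [1/m].
subbands = {0.3e12: 2.0, 0.6e12: 6.0, 0.9e12: 15.0, 1.2e12: 30.0}

def total_loss_db(f, k_abs):
    spreading = 20 * math.log10(4 * math.pi * f * d / c)          # free-space term
    absorption = 10 * k_abs * moisture * d * math.log10(math.e)   # molecular term
    return spreading + absorption

losses = {f: total_loss_db(f, k) for f, k in subbands.items()}
best = min(losses, key=losses.get)
for f, loss in sorted(losses.items()):
    print(f"{f / 1e12:.1f} THz : {loss:6.1f} dB")
print(f"selected sub-band: {best / 1e12:.1f} THz")
```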
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, the authors review the fundamental issues arising when nanoscale devices are meant to be interconnected to transmit information. The possibility of manipulating and assembling objects at the atomic scale has paved the way for a future generation of computing machines, where nanoscale devices substitute silicon-based transistors. Interconnections, needed to perform complex operations, are expected to be the driving factor in terms of performance and costs of the resulting systems. In view of the current research on nanomachines, the authors are interested in understanding which may be the limits of communications at the nanoscale level. Our research stems from a few, simple and yet unanswered questions, like "what is the capacity of a nanowire/nanotube?", "what is the capacity of molecular-based communication systems?" etc. While we do not answer to such questions directly, we shed some light on possible approaches based on information-theoretical concepts <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, the problem of communicating using chemical messages propagating using Brownian motion, rather than electromagnetic messages propagating as waves in free space or along a wire, is considered. This problem is motivated by nanotechnological and biotechnological applications, where the energy cost of electromagnetic communication might be prohibitive. Models are given for communication using particles that propagate with Brownian motion, and achievable capacity results are given. Under conservative assumptions, it is shown that rates exceeding one bit per particle are achievable. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In molecular communication, messages are conveyed from a transmitter to a receiver by releasing a pattern of molecules at the transmitter, and allowing those molecules to propagate through a fluid medium towards a receiver. In this paper, achievable information rates are estimated for a molecular communication system when information is encoded using a set of distinct molecules, and when the molecules propagate across the medium via Brownian motion. Results are provided which indicate large gains in information rate over the case where the released molecules are indistinguishable from each other. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communication is a novel communication paradigm which allows nanomachines to communicate using molecules as a carrier. Controlled molecule delivery between two nanomachines is one of the most important challenges which must be addressed to enable the molecular communication. Therefore, it is essential to develop an information theoretical approach to find out molecule delivery capacity of the molecular channel. 
In this paper, we develop an information theoretical approach for capacity of a molecular channel between two nanomachines. We first introduce a molecular communication model. Then, using the principles of mass action kinetics we give a molecule delivery model for the molecular communication between two nanomachines called as Transmitter Nanomachine (TN) and Receiver Nanomachine (RN). Then, we derive a closed form expression for capacity of the channel between TN and RN. Numerical results show that selecting appropriate molecular communication parameters such as temperature of environment, concentration of emitted molecules, distance between nanomachines and duration of molecule emission, it can be possible to achieve maximum capacity for the molecular communication channel between two nanomachines. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communication is a biologically-inspired method of communication with attractive properties for microscale and nanoscale devices. In molecular communication, messages are transmitted by releasing a pattern of molecules at a transmitter, which propagate through a fluid medium towards a receiver. In this paper, molecular communication is formulated as a mathematical communication problem in an information-theoretic context. Physically realistic models are obtained, with sufficient abstraction to allow manipulation by communication and information theorists. Although mutual information in these channels is intractable, we give sequences of upper and lower bounds on the mutual information which trade off complexity and performance, and present results to illustrate the feasibility of these bounds in estimating the true mutual information. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Using a single layer of electrically controlled metamaterial, researchers have achieved active control of the phase of terahertz waves and demonstrated high-speed broadband modulation. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communication is a new paradigm for communication between biological nanomachines over a nano- and microscale range. As biological nanomachines (or nanomachines in short) are too small and simple to communicate through traditional communication mechanisms (e.g., through sending and receiving of radio or infrared signals), molecular communication provides a mechanism for a nanomachine (i.e., a sender) to communicate information by propagating molecules (i.e., information molecules) that represent the information to a nanomachine (i.e., a receiver). This paper describes the design of an in vitro molecular communication system and evaluates various approaches to maximize the probability of information molecules reaching a receiver(s) and the rate of information reaching the receiver(s). 
The approaches considered in this paper include propagating information molecules (diffusion or directional transport along protein filaments), removing excessive information molecules (natural decay or receiver removal of excessive information molecules), and encoding and decoding approaches (redundant information molecules to represent information and to decode information). Two types of molecular communication systems are considered: a unicast system in which a sender communicates with a single receiver and a broadcast system in which a sender communicates with multiple receivers. Through exploring tradeoffs among the various approaches on the two types of molecular communication systems, this paper identifies promising approaches and shows the feasibility of an in vitro molecular communication system. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Inspired by biological communication systems, molecular communication has been proposed as a viable scheme to communicate between nano-sized devices separated by a very short distance. Here, molecules are released by the transmitter into the medium, which are then sensed by the receiver. This paper develops a preliminary version of such a communication system focusing on the release of either one or two molecules into a fluid medium with drift. We analyze the mutual information between transmitter and the receiver when information is encoded in the time of release of the molecule. Simplifying assumptions are required in order to calculate the mutual information, and theoretical results are provided to show that these calculations are upper bounds on the true mutual information. Furthermore, optimized degree distributions are provided, which suggest transmission strategies for a variety of drift velocities. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Abstract Abstract Molecular communication is a new communication paradigm that uses molecules for information transmission between nanomachines. Similar to traditional communication systems, several factors constitute limits over the performance of this communication system. One of these factors is the energy budget of the transmitter. It limits the rate at which the transmitter can emit symbols, i.e., produce the messenger molecules. In this paper, an energy model for the communication via diffusion system is proposed. To evaluate the performance of this communication system, first a channel model is developed, and also the probability of correct decoding of the information is evaluated. Two optimization problems are set up for system analysis that focus on channel capacity and data rate. Evaluations are carried out using the human insulin hormone as the messenger molecule and a transmitter device whose capabilities are similar to a pancreatic β -cell. Results show that distance between the transmitter and receiver has a minor effect on the achievable data rate whereas the energy budget’s effect is significant. It is also shown that selecting appropriate threshold and symbol duration parameters are crucial to the performance of the system. 
<s> BIB009 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Nanotechnologies promise new solutions for several applications in the biomedical, industrial and military fields. At the nanoscale, a nanomachine is considered as the most basic functional unit which is able to perform very simple tasks. Communication among nanomachines will allow them to accomplish more complex functions in a distributed manner. In this paper, the state of the art in molecular electronics is reviewed to motivate the study of the Terahertz Band (0.1-10.0 THz) for electromagnetic (EM) communication among nano-devices. A new propagation model for EM communications in the Terahertz Band is developed based on radiative transfer theory and in light of molecular absorption. This model accounts for the total path loss and the molecular absorption noise that a wave in the Terahertz Band suffers when propagating over very short distances. Finally, the channel capacity of the Terahertz Band is investigated by using this model for different power allocation schemes, including a scheme based on the transmission of femtosecond-long pulses. The results show that for very short transmission distances, in the order of several tens of millimeters, the Terahertz channel supports very large bit-rates, up to few terabits per second, which enables a radically different communication paradigm for nanonetworks. <s> BIB010 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> This paper characterizes intersymbol interference (ISI) in a unicast molecular communication between a pair of nanomachines in a nanonetwork. Correspondingly, a transmission-controlled approach based on reduced pulse-width transmission has been proposed in order to mitigate ISI. Binary amplitude modulation has been assumed for the concentration-encoded signaling. Characteristics of interference signal strength (as a fraction of total available signal strength at the location of receiving nanomachine) have been explained in terms of communication range, pulse-width, and data rate of the system. Performance evaluation has been explained in the form of improvement by reducing interference with a reduced pulse-width approach. Results based on numerical analyses with three suitable propagation media (air, water, and human blood plasma) have been shown for the sake of potential applications in the field of nano-bio-communication and healthcare nanomedicine. Finally, it is concluded that ISI is a significant issue in molecular communication, and the proposed reduced pulse-width based approach saves signal energy and improves ISI performance in concentration-encoded molecular communication. <s> BIB011 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Abstract Currently, Communication via Diffusion (CvD) is one of the most prominent systems in nanonetworks. In this paper, we evaluate the effects of two major interference sources, Intersymbol Interference (ISI) and Co-channel Interference (CCI) in the CvD system using different modulation techniques. 
In the analysis of this paper, we use two modulation techniques, namely Concentration Shift Keying (CSK) and Molecule Shift Keying (MoSK) that we proposed in our previous paper. These techniques are suitable for the unique properties of messenger molecule concentration waves in nanonetworks. Using a two transmitting couple simulation environment, the channel capacity performances of the CvD system utilizing these modulation techniques are evaluated in terms of communication range, distance between interfering sources, physical size of devices, and average transmission power. <s> BIB012 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Abstract Designing an optimum receiver for diffusion-based molecular communication in nano-networks needs a well justified channel model. In this paper, we present a linear and time invariant signal propagation model and an additive noise model for the diffusion-based molecular communication channel. These models are based on Brownian motion molecular statistics. Using these models, we develop the first optimal receiver design for diffusion-based molecular communication scenarios with and without inter-symbol interference. We evaluate the performance of our proposed receiver by investigating the bit error rate for small and large transmission rates. <s> BIB013 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Diffusion-based molecular communications emerges due to the need for communication and networking among nanomachines, and molecular biological signaling networks. Inspired by the special molecular channel characteristics, we reveal the communication theoretical analogs with and differences from well-known wireless communications, particularly channel coding, intersymbol interference, multiple-input multipleoutput, and new design concepts in this article. <s> BIB014 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> We review different techniques for modulation of the electromagnetic properties of terahertz (THz) waves. We discuss various approaches for electronic, optical, thermal and nonlinear modulation in distinct material systems such as semiconductors, graphene, pho- tonic crystals and metamaterials. The modulators are classified and compared with respect to modulation speed, modulation depth and categorized by the physical quantity they control as e.g. amplitude, phase, spectrum, spatial and temporal properties of the THz wave. Based on the review paper, the reader should obtain guidelines for the proper choice of a specific modulation technique in view of the targeted application. <s> BIB015 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> We model the ligand-receptor molecular communication channel with a discrete-time Markov model, and show how to obtain the capacity of this channel. 
We show that the capacity-achieving input distribution is iid; further, unusually for a channel with memory, we show that feedback does not increase the capacity of this channel. <s> BIB016 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communications emerges as a promising scheme for communications between nanoscale devices. In diffusion-based molecular communications, molecules as information symbols diffusing in the fluid environments suffer from molecule crossovers, i.e., the arriving order of molecules is different from their transmission order, leading to intersymbol interference (ISI). In this paper, we introduce a new family of channel codes, called ISI-free codes, which improve the communication reliability while keeping the decoding complexity fairly low in the diffusion environment modeled by the Brownian motion. We propose general encoding/decoding schemes for the ISI-free codes, working upon the modulation schemes of transmitting a fixed number of identical molecules at a time. In addition, the bit error rate (BER) approximation function of the ISI-free codes is derived mathematically as an analytical tool to decide key factors in the BER performance. Compared with the uncoded systems, the proposed ISI-free codes offer good performance with reasonably low complexity for diffusion-based molecular communication systems. <s> BIB017 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In the Molecular Communication (MC), molecules are utilized to encode, transmit, and receive information. Transmission of the information is achieved by means of diffusion of molecules and the information is recovered based on the molecule concentration variations at the receiver location. The MC is very prone to intersymbol interference (ISI) due to residual molecules emitted previously. Furthermore, the stochastic nature of the molecule movements adds noise to the MC. For the first time, we propose four methods for a receiver in the MC to recover the transmitted information distorted by both ISI and noise. We introduce sequence detection methods based on maximum a posteriori (MAP) and maximum likelihood (ML) criterions, a linear equalizer based on minimum mean-square error (MMSE) criterion, and a decision-feedback equalizer (DFE) which is a nonlinear equalizer. We present a channel estimator to estimate time varying MC channel at the receiver. The performances of the proposed methods based on bit error rates are evaluated. The sequence detection methods reveal the best performance at the expense of computational complexity. However, the MMSE equalizer has the lowest performance with the lowest computational complexity. The results show that using these methods significantly increases the information transmission rate in the MC. <s> BIB018 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. 
The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present. <s> BIB019 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Abstract In this paper, a novel error control strategy for electromagnetic nanonetworks, based on the utilization of low-weight channel codes and aimed at the prevention of channel errors, is proposed. In particular, it is first analytically shown that both the molecular absorption noise and the multi-user interference in nanonetworks can be mitigated by reducing the channel code weight, which results into a lower channel error probability. Then, the relation between the channel code weight and the code word length is analyzed for the case of utilizing constant weight codes. Finally, the performance of the proposed strategy is analytically and numerically investigated in terms of the achievable information rate after coding and the Codeword Error Rate (CER). Two different receiver architectures are considered, namely, an ideal soft-receiver and a hard receiver. An accurate Terahertz Band channel model and novel stochastic models for the molecular absorption noise and the multi-user interference, validated with COMSOL, are utilized. The results show that low-weight channel codes can be used to reduce the CER without compromising the achievable information rate or even increasing it, especially for the hard-receiver architecture. Moreover, it is shown that there is an optimal code weight, for which the information rate is maximized. <s> BIB020 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> The memoryless additive inverse Gaussian noise channel model describing communication based on the exchange of chemical molecules in a drifting liquid medium is investigated for the situation of simultaneously an average-delay and a peak-delay constraint. Analytical upper and lower bounds on its capacity in bits per molecule use are presented. These bounds are shown to be asymptotically tight, i.e., for the delay constraints tending to infinity with their ratio held constant (or for the drift velocity of the fluid tending to infinity), the asymptotic capacity is derived precisely. Moreover, characteristics of the capacity-achieving input distribution are derived that allow accurate numerical computation of capacity. The optimal input appears to be a mixed continuous and discrete distribution. <s> BIB021 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. 
REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> The design of biologically-inspired wireless communication systems using bacteria as the basic element of the system is initially motivated by a phenomenon called Quorum Sensing. Due to high randomness in the individual behavior of a bacterium, reliable communication between two bacteria is almost impossible. Therefore, we have recently proposed that a population of bacteria in a cluster is considered as a bio node in the network capable of molecular transmission and reception. This proposition enables us to form a reliable bio node out of many unreliable bacteria. In this paper, we study the communication between two nodes in such a network where information is encoded in the concentration of molecules by the transmitter. The molecules produced by the bacteria in the transmitter node propagate through the diffusion channel. Then, the concentration of molecules is sensed by the bacteria population in the receiver node which would decode the information and output light or fluorescent as a result. The uncertainty in the communication is caused by all three components of communication, i.e., transmission, propagation and reception. We study the theoretical limits of the information transfer rate in the presence of such uncertainties. Finally, we consider M-ary signaling schemes and study their achievable rates and corresponding error probabilities. <s> BIB022 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Particulate Drug Delivery Systems (PDDS) are ther- apeutic methods that use nanoparticles to achieve their healing effects at the exact time, concentration level of drug nanoparti- cles, and location in the body, while minimizing the effects on other healthy locations. The Molecular Communication (MC) paradigm, where the transmitted message is the drug injection process, the channel is the cardiovascular system, and the received message is the drug reception process, has been investigated as a tool to study nanoscale biological and medical systems in recent years. In this paper, the various noise effects that cause uncertainty in the cardiovascular system are analyzed, modeled, and evaluated from the information theory perspective. Analytical MC noises are presented to include all end-to-end noise effects, from the drug injection, to the absorption of drug nanoparticles by the diseased cells, in the presence of a time-varying and turbulent blood flow. The PDDS capacity is derived analytically including all these noise effects and the constraints on the drug injection. The proposed MC noise is validated by using the kinetic Monte-Carlo simulation technique. Analytical expressions of the noise and the capacity are derived, and MC is presented as a framework for the optimization of particulate drug delivery systems (PDDS). Index Terms—Drug delivery systems, nanonetworks, molecu- lar communication, time-varying channels, communication chan- nels, intra-body communication, noise modeling, capacity, kinetic Monte-Carlo. <s> BIB023 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. 
REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Within the domain of molecular communications, researchers mimic the techniques in nature to come up with alternative communication methods for collaborating nanomachines. This letter investigates the channel transfer function for molecular communications via diffusion. In nature, information-carrying molecules are generally absorbed by the target node via receptors. Using the concentration function, without considering the absorption process, as the channel transfer function implicitly assumes that the receiver node does not affect the system. In this letter, we propose a solid analytical formulation and analyze the signal metrics (attenuation and propagation delay) for molecular communication via diffusion channel with an absorbing receiver in a 3-D environment. The proposed model and the formulation match well with the simulations without any normalization. <s> BIB024 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, we consider a multi-hop molecular communication network consisting of one nanotransmitter, one nanoreceiver, and multiple nanotransceivers acting as relays. We consider three different relaying schemes to improve the range of diffusion-based molecular communication. In the first scheme, different types of messenger molecules are utilized in each hop of the multi-hop network. In the second and third schemes, we assume that two types of molecules and one type of molecule are utilized in the network, respectively. We identify self-interference, backward intersymbol interference (backward-ISI), and forward-ISI as the performance-limiting effects for the second and third relaying schemes. Furthermore, we consider two relaying modes analogous to those used in wireless communication systems, namely full-duplex and half-duplex relaying. We propose the adaptation of the decision threshold as an effective mechanism to mitigate self-interference and backward-ISI at the relay for full-duplex and half-duplex transmission. We derive closed-form expressions for the expected end-to-end error probability of the network for the three considered relaying schemes. Furthermore, we derive closed-form expressions for the optimal number of molecules released by the nanotransmitter and the optimal detection threshold of the nanoreceiver for minimization of the expected error probability of each hop. <s> BIB025 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> This paper studies a three-node network in which an intermediate nano-transceiver, acting as a relay, is placed between a nano-transmitter and a nano-receiver to improve the range of diffusion-based molecular communication. Motivated by the relaying protocols used in traditional wireless communication systems, we study amplify-and-forward (AF) relaying with fixed and variable amplification factor for use in molecular communication systems. To this end, we derive a closed-form expression for the expected end-to-end error probability. Furthermore, we derive a closed-form expression for the optimal amplification factor at the relay node for minimization of an approximation of the expected error probability of the network. 
Our analytical and simulation results show the potential of AF relaying to improve the overall performance of nano-networks. <s> BIB026 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, the capacity of a diffusion based molecular communication network under the model of a Linear Time Invarient-Poisson (LTI-Poisson) channel is studied. Introduced in the context of molecular communication, the LTI-Poisson model is a natural extension of the conventional memoryless Poisson channel to include memory. Exploiting prior art on linear ISI channels, a computable finite-letter characterization of the capacity of single-hop LTI-Poisson networks is provided. Then, the problem of finding more explicit bounds on the capacity is examined, where lower and upper bounds for the point to point case are provided. Furthermore, an approach for bounding mutual information in the low SNR regime using the symmetrized KL divergence is introduced and its applicability to Poisson channels is shown. To best of our knowledge, the first non-trivial upper bound on the capacity of Poisson channel with a maximum transmission constraint in the low SNR regime is found. Numerical results show that the proposed upper bound is of the same order as the capacity in the low SNR regime. <s> BIB027 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, we present an analytical model for the diffusive molecular communication (MC) system with a reversible adsorption receiver in a fluid environment. The widely used concentration shift keying (CSK) is considered for modulation. The time-varying spatial distribution of the information molecules under the reversible adsorption and desorption reaction at the surface of a receiver is analytically characterized. Based on the spatial distribution, we derive the net number of newly-adsorbed information molecules expected in any time duration. We further derive the number of newly-adsorbed molecules expected at the steady state to demonstrate the equilibrium concentration. Given the number of newly-adsorbed information molecules, the bit error probability of the proposed MC system is analytically approximated. Importantly, we present a simulation framework for the proposed model that accounts for the diffusion and reversible reaction. Simulation results show the accuracy of our derived expressions, and demonstrate the positive effect of the adsorption rate and the negative effect of the desorption rate on the error probability of reversible adsorption receiver with last transmit bit-1. Moreover, our analytical results simplify to the special cases of a full adsorption receiver and a partial adsorption receiver, both of which do not include desorption. <s> BIB028 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Nanocommunications via Forster Resonance Energy Transfer (FRET) is a promising means of realising collaboration between photoactive nanomachines to implement advanced nanotechnology applications. 
The method is based on exchange of energy levels between fluorescent molecules by the FRET phenomenon which intrinsically provides a virtual nanocommunication link. In this work, further to the extensive theoretical studies, we demonstrate the first information transfer through a FRET-based nanocommunication channel. We implement a digital communication system combining macroscale transceiver instruments and a bulk solution of fluorophore nanoantennas. The performance of the FRET-based Multiple-Input and Multiple-Output (MIMO) nanocommunication channel between closely located mobile nanoantennas in the sample solution is evaluated in terms of Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER) obtained for the transmission rates of 50 kbps, 150 kbps and 250 kbps. The results of the performance evaluation are very promising for the development of high-rate and reliable molecular communication networks at nanoscale. <s> BIB029 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communication is a promising approach to realize the communication between nanoscale devices. In a diffusion-based molecular communication network, transmitters and receivers communicate by using signalling molecules. The transmitter uses different time-varying functions of concentration of signalling molecules (called emission patterns) to represent different transmission symbols. The signalling molecules diffuse freely in the medium. The receiver is assumed to consist of a number of receptors, which can be in ON or OFF state. When the signalling molecules arrive at the receiver, they react with the receptors and switch them from OFF to ON state probabilistically. The receptors remain ON for a random amount of time before reverting to the OFF state. This paper assumes that the receiver uses the continuous history of receptor state to infer the transmitted symbol. Furthermore, it assumes that the transmitter uses two transmission symbols and approaches the decoding problem from the maximum a posteriori (MAP) framework. Specifically, the decoding is realized by calculating the logarithm of the ratio of the posteriori probabilities of the two transmission symbols, or log-MAP ratio. A contribution of this paper is to show that the computation of log-MAP ratio can be performed by an analog filter. The receiver can therefore use the output of this filter to decide which symbol has been sent. This analog filter provides insight on what information is important for decoding. In particular, the timing at which the receptors switch from OFF to ON state, the number of OFF receptors and the mean number of signalling molecules at the receiver are important. Numerical examples are used to illustrate the property of this decoding method. <s> BIB030 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In vivo wireless nanosensor networks (iWNSNs) consist of nanosized communicating devices, which can operate inside the human body in real time. iWNSNs are at the basis of transformative healthcare techniques, ranging from intra-body health-monitoring systems to drug-delivery applications. Plasmonic nanoantennas are expected to enable the communication among nanosensors in the near infrared and optical transmission window. 
This result motivates the analysis of the phenomena affecting the propagation of such electromagnetic (EM) signals inside the human body. In this paper, a channel model for intra-body optical communication among nanosensors is developed. The total path loss is computed by taking into account the absorption from different types of molecules and the scattering by different types of cells. In particular, first, the impact of a single cell on the propagation of an optical wave is analytically obtained, by modeling a cell as a multi-layer sphere with complex permittivity. Then, the impact of having a large number of cells with different properties arranged in layered tissues is analyzed. The analytical channel model is validated by means of electromagnetic simulations and extensive numerical results are provided to understand the behavior of the intra-body optical wireless channel. The result shows that, at optical frequencies, the scattering loss introduced by cells is much larger than the absorption loss from the medium. This result motivates the utilization of the lower frequencies of the near-infrared window for communication in iWNSNs. <s> BIB031 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Nanonetworks consist of nano-sized communicating devices which are able to perform simple tasks at the nanoscale. The limited capabilities of individual nanomachines and the Terahertz (THz) band channel behavior lead to error-prone wireless links. In this paper, a cross-layer analysis of error-control strategies for nanonetworks in the THz band is presented. A mathematical framework is developed and used to analyze the tradeoffs between Bit Error Rate, Packet Error Rate, energy consumption and latency, for five different error-control strategies, namely, Automatic Repeat reQuest (ARQ), Forward Error Correction (FEC), two types of Error Prevention Codes (EPC) and a hybrid EPC. The cross-layer effects between the physical and the link layers as well as the impact of the nanomachine capabilities in both layers are taken into account. At the physical layer, nanomachines are considered to communicate by following a time-spread on-off keying modulation based on the transmission of femtosecond-long pulses. At the link layer, nanomachines are considered to access the channel in an uncoordinated fashion, by leveraging the possibility to interleave pulse-based transmissions from different nodes. Throughout the analysis, accurate path loss, noise and multi-user interference models, validated by means of electromagnetic simulation, are utilized. In addition, the energy consumption and latency introduced by a hardware implementation of each error control technique, as well as, the additional constraints imposed by the use of energy-harvesting mechanisms to power the nanomachines, are taken into account. The results show that, despite their simplicity, EPCs outperform traditional ARQ and FEC schemes, in terms of error correcting capabilities, which results in further energy savings and reduced latency. <s> BIB032 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In this paper, a graphene-based plasmonic phase modulator for Terahertz band (0.1–10 THz) communication is proposed, modeled and analyzed. 
The modulator is based on a fixed-length graphene-based plasmonic waveguide, and leverages the possibility to tune the propagation speed of Surface Plasmon Polariton (SPP) waves on graphene by modifying the Fermi energy of the graphene layer. An analytical model for the modulator is developed starting from the dynamic complex conductivity of graphene and a revised dispersion equation for SPP waves in gated graphene structures. By utilizing the model, the performance of the modulator is analyzed in terms of symbol error rate when utilized to implement a M-ary digital phase shift keying modulation. The model is validated by means of electromagnetic simulations, and numerical results are provided to illustrate the performance of the modulator. <s> BIB033 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> This paper studies the problem of receiver modeling in molecular communication systems. We consider the diffusive molecular communication channel between a transmitter nano-machine and a receiver nano-machine in a fluid environment. The information molecules released by the transmitter nano-machine into the environment can degrade in the channel via a first-order degradation reaction and those that reach the receiver nano-machine can participate in a reversible bimolecular reaction with receiver receptor proteins. Thereby, we distinguish between two scenarios. In the first scenario, we assume that the entire surface of the receiver is covered by receptor molecules. We derive a closed-form analytical expression for the expected received signal at the receiver, i.e., the expected number of activated receptors on the surface of the receiver. Then, in the second scenario, we consider the case where the number of receptor molecules is finite and the uniformly distributed receptor molecules cover the receiver surface only partially. We show that the expected received signal for this scenario can be accurately approximated by the expected received signal for the first scenario after appropriately modifying the forward reaction rate constant. The accuracy of the derived analytical results is verified by Brownian motion particle-based simulations of the considered environment, where we also show the impact of the effect of receptor occupancy on the derived analytical results. <s> BIB034 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In active transport molecular communication (ATMC), information particles are actively transported from a transmitter to a receiver using special proteins. Prior work has demonstrated that ATMC can be an attractive and viable solution for on-chip applications. The energy consumption of an ATMC system plays a central role in its design and engineering. In this work, an energy model is presented for ATMC and the model is used to provide guidelines for designing energy efficient systems. The channel capacity per unit energy is analyzed and maximized. It is shown that based on the size of the symbol set and the symbol duration, there is a vesicle size that maximizes rate per unit energy. It is also demonstrated that maximizing rate per unit energy yields very different system parameters compared to maximizing the rate only. 
<s> BIB035 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> The performance of communication systems is fundamentally limited by the loss of energy through propagation and circuit inefficiencies. In this article, we show that it is possible to achieve ultra low energy communications at the nano-scale, if diffusive molecules are used for carrying data. Whilst the energy of electromagnetic waves will inevitably decay as a function of transmission distance and time, the energy in individual molecules does not. Over time, the receiver has an opportunity to recover some, if not all of the molecular energy transmitted. The article demonstrates the potential of ultra-low energy simultaneous molecular information and energy transfer (SMIET) through the design of two different nano-relay systems, and the discusses how molecular communications can benefit more from crowd energy harvesting than traditional wave-based systems. <s> BIB036 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> The opportunity to control and fine-tune the behavior of biological cells is a fascinating possibility for many diverse disciplines, ranging from medicine and ecology, to chemical industry and space exploration. While synthetic biology is providing novel tools to reprogram cell behavior from their genetic code, many challenges need to be solved before it can become a true engineering discipline, such as reliability, safety assurance, reproducibility and stability. This paper aims to understand the limits in the controllability of the behavior of a natural (non-engineered) biological cell. In particular, the focus is on cell metabolism, and its natural regulation mechanisms, and their ability to react and change according to the chemical characteristics of the external environment. To understand the aforementioned limits of this ability, molecular communication is used to abstract biological cells into a series of channels that propagate information on the chemical composition of the extracellular environment to the cell's behavior in terms of uptake and consumption of chemical compounds, and growth rate. This provides an information-theoretic framework to analyze the upper bound limit to the capacity of these channels to propagate information, which is based on a well-known and computationally efficient metabolic simulation technique. A numerical study is performed on two human gut microbes, where the upper bound is estimated for different environmental compounds, showing there is a potential for future practical applications. <s> BIB037 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Information delivery using chemical molecules is an integral part of biology at multiple distance scales and has attracted recent interest in bioengineering and communication theory. Potential applications include cooperative networks with a large number of simple devices that could be randomly located (e.g., due to mobility). 
This paper presents the first tractable analytical model for the collective signal strength due to randomly placed transmitters in a 3-D large-scale molecular communication system, either with or without degradation in the propagation environment. Transmitter locations in an unbounded and homogeneous fluid are modeled as a homogeneous Poisson point process. By applying stochastic geometry, analytical expressions are derived for the expected number of molecules absorbed by a fully absorbing receiver or observed by a passive receiver. The bit error probability is derived under ON/OFF keying and either a constant or adaptive decision threshold. Results reveal that the combined signal strength increases proportionately with the transmitter density, and the minimum bit error probability can be improved by introducing molecule degradation. Furthermore, the analysis of the system can be generalized to other receiver designs and other performance characteristics in large-scale molecular communication systems. <s> BIB038 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> Molecular communication via diffusion (MCvD) is inherently an energy efficient transportation paradigm, which requires no external energy during molecule propagation. Inspired by the fact that the emitted molecules have a finite probability to reach the receiver, this letter introduces an energy efficient scheme for the information molecule synthesis process of MCvD via a simultaneous molecular information and energy transfer (SMIET) relay. With this SMIET capability, the relay can decode the received information as well as generate its emission molecules using its absorbed molecules via chemical reactions. To reveal the advantages of SMIET, approximate closed-form expressions for the bit error probability and the synthesis cost of this two-hop molecular communication system are derived and then validated by particle-based simulation. Interestingly, by comparing with a conventional relay system, the SMIET relay system can be shown to achieve a lower minimum bit error probability via molecule division, and a lower synthesis cost via molecule type conversion or molecule division. <s> BIB039 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> III. REQUIREMENTS AND PERFORMANCE METRICS OF BODY-CENTRIC NANONETWORKS <s> In multi-cellular organisms, molecular signaling spans multiple distance scales and is essential to tissue structure and functionality. Molecular communications is increasingly researched and developed as a key subsystem in the Internet-of-Nano-Things paradigm. While short range microscopic diffusion communications is well understood, longer range channels can be inefficient and unreliable. Static and mobile relays have been proposed in both conventional wireless systems and molecular communication contexts. In this paper, our main contribution is to analyze the information delivery energy efficiency of bacteria mobile relays. We discover that these mobile relays offer superior energy efficiency compared with pure diffusion information transfer over long diffusion distances. This paper has widespread implications ranging from understanding biological processes to designing new efficient synthetic biology communication systems. <s> BIB040
A. EM-Based Body-Centric Nanonetworks
1) Achievable Information Rates: The maximum achievable information rate, IR_max(sym), in bits per symbol for a specific modulation scheme has been defined as:
IR_max(sym) = max_{p_X} I(X;Y) = max_{p_X} [H(X) - H(X|Y)],
where X and Y denote the message sent by the transmitter and its noisy version at the receiver, respectively. Here, H(X) represents the entropy of the message X, while H(X|Y) is the conditional entropy of X given Y. Representing the uncoded information transmitted over the asymmetric THz band channel as a discrete binary random variable taking values x_0 and x_1, H(X) is given as:
H(X) = - Σ_{m=0}^{1} p_X(x_m) log_2 p_X(x_m),
where p_X(x_m) is the probability of the transmitted symbol, with x_0 named silence and x_1 named pulse. Assuming Additive Coloured Gaussian Noise (ACGN) BIB010 at the receiver and a Binary Asymmetric Channel (BAC) with Y being a discrete random variable, the information rate (in bits per second) is given as BIB020:
IR = (B/β) · max_{p_X} [H(X) - H(X|Y)] (for a pulse length T_p ≈ 1/B),
where B is the channel bandwidth, β is the ratio of the symbol interval T_s to the pulse length T_p, and the symbol rate is R = 1/T_s = 1/(βT_p). Note that the requirements on the transceiver can be greatly relaxed by reducing the single-user rate, i.e., by increasing β. Fig. 9 shows the trade-off between the information rate and the transmission distance for three different human body tissues, assuming an EM channel with a bandwidth of 1 THz.
2) Bit Error Rate: Since EM waves propagate through frequency-dependent materials inside the human body, the operating frequency has an important effect on the communication channel. BIB031 shows that scattering from cells is the major phenomenon affecting the propagation of EM waves at optical frequencies inside the human body, and BIB032 performs an error analysis, at both the physical and the link layer, of an EM system operating in the THz band.
3) Symbol Error Rate: BIB015 reviews different types of modulators capable of setting the amplitude or phase of a THz wave, and a metamaterial-based modulator was employed to control the phase of the THz wave in BIB006 . BIB033 proposes and validates an analytical model for a graphene-based plasmonic phase modulator that starts from the dynamic complex conductivity of graphene. Using this model, the symbol error rate of the plasmonic modulator is studied when it is used to implement M-ary phase shift keying modulation.
B. MC-Based Body-Centric Nanonetworks
1) Achievable Information Rates: The discussion of the performance limits of MC-based nanonetworks in terms of achievable information rates was initiated by BIB001 . Later, Eckford computed the mutual information (i.e., the maximum achievable information rate) for an MC channel in which the information was encoded into the release time of molecules BIB002 , and into a set of distinct molecules BIB003 . In a follow-up work, Eckford also provided tractable lower and upper bounds on the information rate of a one-dimensional MC system BIB005 . In another work BIB008 , Kadloor et al. considered an MC system inside a blood vessel, introduced a drift component into the MC channel to account for the blood flow, and computed the information rate for the case where pulse-position modulation is used by the emitter. A minimal simulation sketch of the arrival-time noise in such a timing channel is given below.
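The release-time (timing) channel discussed above can be illustrated with a small Monte-Carlo sketch that draws first-arrival times of molecules undergoing one-dimensional Brownian motion with drift. The distance, drift velocity, diffusion coefficient and time step below are arbitrary assumptions chosen only for illustration.

# Monte-Carlo sketch of the arrival-time noise in a 1-D drift-diffusion
# timing channel: a molecule released at x = 0 is absorbed at x = l.
# Parameter values are illustrative assumptions.
import random
import math

L_DIST = 1e-5      # transmitter-receiver distance l (m), assumed
DRIFT_V = 1e-4     # drift velocity v (m/s), assumed
DIFF_D = 1e-9      # diffusion coefficient D (m^2/s), assumed
DT = 1e-3          # simulation time step (s)

def first_arrival_time(max_steps=200000):
    """Simulate Brownian motion with drift until the absorbing boundary at
    x = L_DIST is hit; return the hitting time (or None if never reached)."""
    x, t = 0.0, 0.0
    sigma = math.sqrt(2.0 * DIFF_D * DT)     # per-step standard deviation
    for _ in range(max_steps):
        x += DRIFT_V * DT + random.gauss(0.0, sigma)
        t += DT
        if x >= L_DIST:
            return t
    return None

samples = [t for t in (first_arrival_time() for _ in range(2000)) if t is not None]
mean_t = sum(samples) / len(samples)
print(f"mean first-arrival time: {mean_t:.3f} s "
      f"(theoretical mean l/v = {L_DIST / DRIFT_V:.3f} s)")

The empirical mean approaches l/v, consistent with the inverse Gaussian first-passage statistics noted next.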
Last but not least, a subsequent work proved that the noise in the one-dimensional MC channel with positive drift velocity is additive with an inverse Gaussian (IG) distribution. Below, we summarize the information rates achieved by some prominent MC channels.
• Timing Channel: In a timing channel, a point transmitter encodes a message in the release time of a molecule; once the molecule reaches the receiver it is fully absorbed, so the first arrival time determines the actual arrival time of the molecule. For a single molecule released at time X, its arrival time Y can be expressed as Y = X + N_T BIB021 , where N_T is the first arrival time at the receiver boundary. For a positive drift v > 0, N_T follows the additive IG noise (AIGN) distribution IG(l/v, l^2/(2D)), where l is the communication distance and D the diffusion coefficient. Based on this model, [70] bounded the capacity of the additive IG noise channel from above and below under a constraint on the mean of the transmitted message X. Extending this line of work, the authors in BIB021 studied the capacity of the same additive IG noise channel under either an average- and a peak-delay constraint or a peak-delay constraint only, and later work revisited the capacity bounds of the diffusion-based timing channel (without drift) with a finite particle lifetime.
• Concentration-encoded Channel: In this channel, the concentration of molecules is varied to convey information BIB004 - . The authors in BIB004 studied the mutual information of a molecular communication system with ligand-binding receptors, where the molecules can bind to or unbind from the receiver, but without taking into account the diffusion propagation and the channel memory. The authors in BIB007 modeled and measured the information rate of various molecular communication systems with diffusion, connected, or hybrid-aster propagation approaches, and with noise-free, all-noise, exponential-decay, and receiver-removal noise models. The achievable rates of the diffusion-based MC channel under two different coding schemes were studied in BIB012 . A related work considered concentration encoding at the emitter and a diffusion-based MC channel with memory and noise at the receiver, and derived a closed-form expression for the channel capacity. To account for memory, the capacity bounds of the conventional memoryless Poisson channel were extended to the Linear Time Invariant-Poisson channel of diffusion-based single-hop networks BIB027 . However, the reception process was not treated in these works BIB027 .
• Biological System: In BIB016 and BIB022 , the capacities of an inter-cellular signal transduction channel and of bacterial communication were studied by modelling the ligand-reception process as a discrete-time Markov model and the bacterial colony as a binomial channel, respectively. The capacity of the molecular communication channel in a drug delivery system BIB023 and in cell metabolism BIB037 was studied using COMSOL Multiphysics and the KBase (Department of Energy Systems Biology Knowledgebase) software suite, respectively. A more detailed review of the information-theoretic study of molecular communication can be found in the literature.
2) Bit Error Rate: During each time slot, the receiver receives molecules released in the current slot as well as residual molecules from previous slots (due to the Brownian motion of the molecules). This phenomenon is known as inter-symbol interference (ISI), and is illustrated by the sketch below.
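To make the ISI effect concrete, the short sketch below evaluates, slot by slot, the fraction of emitted molecules absorbed by a fully absorbing spherical receiver, using the standard first-hitting probability for a point source and such a receiver (the channel model discussed for BIB024 ). The diffusion coefficient, receiver radius, distance and slot duration are assumed values chosen only for illustration.

# Illustrative ISI computation for diffusion-based MC with a fully
# absorbing spherical receiver; parameter values are assumptions.
import math

D = 1e-9       # diffusion coefficient (m^2/s), assumed
R_RX = 5e-6    # receiver radius (m), assumed
DIST = 2e-5    # transmitter-to-receiver-centre distance (m), assumed
T_SLOT = 0.2   # symbol (slot) duration (s), assumed

def hit_prob(t):
    """Probability that a molecule released at t = 0 has been absorbed by time t."""
    if t <= 0:
        return 0.0
    return (R_RX / DIST) * math.erfc((DIST - R_RX) / (2.0 * math.sqrt(D * t)))

# Fraction of molecules absorbed in slot k (k = 0 is the slot of emission);
# the non-zero fractions for k >= 1 are the source of ISI.
for k in range(4):
    frac = hit_prob((k + 1) * T_SLOT) - hit_prob(k * T_SLOT)
    print(f"slot {k}: fraction absorbed = {frac:.4f}")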
As the main bottleneck for the bit error performance of molecular communication systems, ISI was first characterized in BIB011, and the bit error rate performance of MC systems has received increasing attention since then.

• Single-Hop System with the Passive Receiver: Initial MC works focused on a passive (spherical) receiver that simply counts the number of received molecules in its close vicinity without interacting with them. The bit error rate of an MC system with a passive receiver, with and without ISI, was studied in BIB013, where the receiver implements the optimal maximum a-posteriori probability (MAP) rule. To improve the BER performance of MC systems, BIB017 introduced a new family of ISI-free codes with fairly low decoding complexity, while BIB018 investigated MAP-based, maximum-likelihood (ML) based, linear minimum mean-square error (MMSE) equalizer based, and decision-feedback equalizer (DFE) based sequence detection. BIB019 introduced enzyme reactions into the diffusion channel, derived the average BER, and verified it via realistic particle-based simulation. All these works point to the undesirable effect of ISI on the performance of an MC system with a passive receiver.

• Single-Hop System with the Active Receiver: In a real biological system, the receiver actually consists of receptors that react to specific molecules (e.g., peptides or calcium ions). Thus, research efforts have shifted to the simulation and modelling of active receivers, such as the fully absorbing receiver BIB024, the reversible absorbing receiver BIB028, and the ligand-binding receiver BIB034. BIB024 derived a simple expression for the channel impulse response of an MC system with a fully absorbing receiver and validated it with the particle-based simulator (MUCIN). BIB028 and BIB034 derived analytical expressions for the expected received signal and the average BER of an MC system with a reversible absorbing receiver and with a ligand-binding receiver, respectively; the expressions were then verified by particle-based simulation algorithms (a small end-to-end BER sketch for the absorbing-receiver setting is given below).

• Multi-Hop System and Large-scale System: The average BERs of multi-hop decode-and-forward and amplify-and-forward relay MC systems were derived and simulated in BIB025 and BIB026 to extend the transmission range and improve the reliability of MC systems. Using three-dimensional stochastic geometry, the average BER with a large number of transmitters performing joint transmission to a fully absorbing receiver was analyzed and validated via particle-based and pseudo-random simulations in BIB038, which provided an analytical BER evaluation model for large-scale MC systems with all kinds of active receivers.

• Experimental System: The BER performance of the Förster Resonance Energy Transfer (FRET) nanoscale MIMO communication channel was tested and examined in BIB029, and was shown to provide acceptable reliability, with a BER of about 5.7×10^-5 per bit, for nanonetworks with transmission rates of up to 150 kbps.

3) Symbol Error Rate: The symbol error rate (SER) of molecular communication systems was first addressed in BIB014; subsequently, the SERs of an MC system with an absorbing receiver under binary concentration shift keying (BCSK), quadrature CSK (QCSK), binary molecular frequency shift keying (BMFSK), and quadrature MFSK (QMFSK) were simulated using the MUCIN simulator.
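The sketch below ties these ingredients together for the simplest case: a single-hop OOK link to a fully absorbing spherical receiver, using the closed-form hitting probability reported in BIB024 together with a fixed-threshold detector and a few ISI taps. The diffusion coefficient, geometry, slot length, molecule budget and threshold are illustrative assumptions rather than values from the cited works.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(7)

# Assumed parameters: 3D diffusion towards a fully absorbing spherical receiver
D   = 8e-11    # diffusion coefficient (m^2/s)
r_r = 5e-6     # receiver radius (m)
d   = 10e-6    # transmitter distance from the receiver centre (m)
T   = 0.2      # slot duration (s)
N   = 2000     # molecules released for a '1' bit
L   = 5        # number of channel taps retained (channel memory)

def F(t):
    """Probability that one molecule has been absorbed by time t (closed form of BIB024)."""
    return (r_r / d) * erfc((d - r_r) / np.sqrt(4 * D * t))

# per-slot arrival probabilities: h[0] is the desired tap, h[1:] are ISI taps
h = np.array([F((k + 1) * T) - F(k * T) for k in range(L)])

bits = rng.integers(0, 2, 100_000)
rx = np.zeros(len(bits))
for k in range(L):                       # binomial arrivals contributed by bit i-k
    shifted = np.roll(bits, k)
    shifted[:k] = 0
    rx += rng.binomial(N * shifted, h[k])

tau = 0.5 * N * h[0]                     # simple fixed detection threshold
ber = np.mean((rx > tau).astype(int) != bits)
print(f"channel taps = {np.round(h, 4)}, estimated BER = {ber:.4f}")
```

Even this toy setup shows the point made above: the ISI taps h[1:] are of the same order as the desired tap h[0], so a fixed threshold alone leaves a substantial error floor, which is what motivates the MAP/ML detectors, equalizers, ISI-free codes and enzyme-assisted channels surveyed in this subsection.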
In BIB030, the SER of a diffusion-based MC system was studied in which the receiver has periodically switching ON/OFF receptors and an analog filter that computes the logarithm of the MAP ratio.

4) Energy Cost: BIB009 develops an energy model for the MC system in which the energy costs of the messenger-molecule synthesis process, the secretory-vesicle production process, the secretory-vesicle carrying process, and the molecule release process are defined based on molecular cell biology. The energy model of a vesicle-based active-transport MC system was described in BIB035, where the energy costs of vesicle synthesis, intranode transportation, DNA hybridization, vesicle anchoring, loading and unloading, and microtubule motion were defined. In BIB039, BIB036, a detailed mathematical model of the molecule synthesis cost in an MC system with an absorbing receiver was provided to examine the energy efficiency of different relay schemes. In BIB040, the energy costs of encoding and synthesizing the plasmid, plasmid transportation, carrier-bacteria transportation, and decapsulation and decoding were defined and examined within bacterial relay MC networks.
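To illustrate how such cost models are typically used, the sketch below computes an average per-bit energy budget for OOK transmission from per-molecule synthesis, vesicle and release costs, loosely following the cost categories of BIB009; every numeric constant is a placeholder assumption, not a measured or published value.

```python
# A minimal energy-budget sketch for OOK-modulated molecular communication.
# All constants below are assumed placeholders for illustration only.
E_SYNTH_PER_MOLECULE = 2.0e-17   # J, assumed synthesis cost per messenger molecule
E_VESICLE            = 1.0e-14   # J, assumed cost per secretory vesicle produced/carried
E_RELEASE            = 5.0e-15   # J, assumed exocytosis (release) cost per vesicle
MOLECULES_PER_VESICLE = 1000

def energy_per_bit(n_molecules_per_one: int, p_one: float = 0.5) -> float:
    """Average energy (J) per transmitted bit when only '1' bits release molecules."""
    n_vesicles = -(-n_molecules_per_one // MOLECULES_PER_VESICLE)   # ceiling division
    e_one_bit = (n_molecules_per_one * E_SYNTH_PER_MOLECULE
                 + n_vesicles * (E_VESICLE + E_RELEASE))
    return p_one * e_one_bit

if __name__ == "__main__":
    for n in (1000, 5000, 20000):
        print(f"{n:6d} molecules per '1' bit -> {energy_per_bit(n):.2e} J/bit")
```

Scaling the molecule budget per bit directly trades reliability (more molecules, lower BER) against energy, which is the trade-off the relay-efficiency studies in BIB039, BIB036 and BIB040 quantify for their specific schemes.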
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The dielectric properties of tissues have been extracted from the literature of the past five decades and presented in a graphical format. The purpose is to assess the current state of knowledge, expose the gaps there are and provide a basis for the evaluation and analysis of corresponding data from an on-going measurement programme. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Multicellular organisms create complex patterned structures from identical, unreliable components. Learning how to engineer such robust behavior is important to both an improved understanding of computer science and to a better understanding of the natural developmental process. Earlier work by our colleagues and ourselves on amorphous computing demonstrates in simulation how one might build complex patterned behavior in this way. This work reports on our first efforts to engineer microbial cells to exhibit this kind of multicellular pattern directed behavior. We describe a specific natural system, the Lux operon of Vibrio fischeri, which exhibits density dependent behavior using a well characterized set of genetic components. We have isolated, sequenced, and used these components to engineer intercellular communication mechanisms between living bacterial cells. In combination with digitally controlled intracellular genetic circuits, we believe this work allows us to begin the more difficult process of using these communication mechanisms to perform directed engineering of multicellular structures, using techniques such as chemical diffusion dependent behavior. These same techniques form an essential part of our toolkit for engineering with life, and are widely applicable in the field of microbial robotics, with potential applications in medicine, environmental monitoring and control, engineered crop cultivation, and molecular scale fabrication. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Terahertz technology is continually evolving and much progress has been made in recent years. Many new applications are being discovered and new ways to implement terahertz imaging investigated. In this review, we limit our discussion to biomedical applications of terahertz imaging such as cancer detection, genetic sensing and molecular spectroscopy. 
Our discussion of the development of new terahertz techniques is also focused on those that may accelerate the progress of terahertz imaging and spectroscopy in biomedicine. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The complex refractive indices of freshly excised healthy breast tissue and breast cancers collected from 20 patients were measured in the range of 0.15 - 2.0 THz using a portable terahertz pulsed transmission spectrometer. Histology was performed to classify the tissue samples as healthy adipose tissue, healthy fibrous breast tissue, or breast cancers. The average complex refractive index was determined for each group and it was found that samples containing cancer had a higher refractive index and absorption coefficient. The terahertz properties of the tissues were also used to simulate the impulse response functions expected when imaging breast tissue in a reflection geometry as in terahertz pulsed imaging (TPI). Our results indicate that both TPS and TPI can be used to distinguish between healthy adipose breast tissue, healthy fibrous breast tissue and breast cancer due to the differences in the fundamental optical properties. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomedical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz). <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> We have previously demonstrated that terahertz pulsed imaging is able to distinguish between rat tissues from different healthy organs. In this paper we report our measurements of healthy and cirrhotic liver tissues using terahertz reflection spectroscopy. The water content of the fresh tissue samples was also measured in order to investigate the correlations between the terahertz properties, water content, structural changes and cirrhosis. 
Finally, the samples were fixed in formalin to determine whether water was the sole source of image contrast in this study. We found that the cirrhotic tissue had a higher water content and absorption coefficient than the normal tissue and that even after formalin fixing there were significant differences between the normal and cirrhotic tissues' terahertz properties. Our results show that terahertz pulsed imaging can distinguish between healthy and diseased tissue due to differences in absorption originating from both water content and tissue structure. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> We present the results from a feasibility study which measures properties in the terahertz frequency range of excised cancerous, dysplastic and healthy colonic tissues from 30 patients. We compare their absorption and refractive index spectra to identify trends which may enable different tissue types to be distinguished. In addition, we present statistical models based on variations between up to 17 parameters calculated from the reflected time and frequency domain signals of all the measured tissues. These models produce a sensitivity of 82% and a specificity of 77% in distinguishing between healthy and all diseased tissues and a sensitivity of 89% and a specificity of 71% in distinguishing between dysplastic and healthy tissues. The contrast between the tissue types was supported by histological staining studies which showed an increased vascularity in regions of increased terahertz absorption. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Ubiquitous healthcare (u-healthcare) applications typically require the frequent transmission of small data sets, e.g., from patient monitors, over wireless networks. We consider the transmissions of such u-healthcare data over an LTE-Advanced network, where each small data set must complete the standardized random access (RA) procedure. We mathematically analyze the delay of the RA procedure and verify our analysis with simulations. We find that our delay analysis, which is the first of its kind, gives reasonably accurate delay characterization. Thus, the presented delay characterization may form the basis for network management mechanisms that ensure reliable delivery of small frequent u-health\-care data sets within small delays. <s> BIB009 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The Internet is continuously changing and evolving. The main communication form of present Internet is human-human. The Internet of Things (IoT) can be considered as the future evaluation of the Internet that realizes machine-to-machine (M2M) learning. Thus, IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet-connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services. This paper addresses the existing development trends, the generic architecture of IoT, its distinguishing features and possible future applications. This paper also forecast the key challenges associated with the development of IoT. 
The IoT is getting increasing popularity for academia, industry as well as government that has the potential to bring significant personal, professional and economic benefits. <s> BIB010 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Recently there has been quite a number of independent research activities that investigated the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: (i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; (ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; (iii) we analyze the characteristics of the SIoT network structure by means of simulations. <s> BIB011 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. This result has the potential to enable EM communication in nanonetworks. <s> BIB012 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Embedding nanosensors in the environment would add a new dimension to the Internet of Things, but realizing the IoNT vision will require developing new communication paradigms and overcoming various technical obstacles. <s> BIB013 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. 
ENABLING AND CONCOMITANT TECHNOLOGIES <s> This paper is concerned with parameter extraction for the double Debye model, which is used for analytically determining human skin permittivity. These parameters are thought to be the origin of contrast in terahertz (THz) images of skin cancer. The existing extraction methods could generate Debye models, which track their measurements accurately at frequencies higher than 1 THz but poorly at lower frequencies, where the majority of permittivity contrast between healthy and diseased skin tissues is actually observed. We propose a global optimization-based parameter extraction, which results in globally accurate tracking and thus supports the full validity of the Debye model for simulating human skin permittivity in the whole usable THz frequencies. Numerical results confirm viability of our novel methodology. <s> BIB014 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> In this work, we describe the first modular, and programmable platform capable of transmitting a text message using chemical signalling - a method also known as molecular communication. This form of communication is attractive for applications where conventional wireless systems perform poorly, from nanotechnology to urban health monitoring. Using examples, we demonstrate the use of our platform as a testbed for molecular communication, and illustrate the features of these communication systems using experiments. By providing a simple and inexpensive means of performing experiments, our system fills an important gap in the molecular communication literature, where much current work is done in simulation with simplified system models. A key finding in this paper is that these systems are often nonlinear in practice, whereas current simulations and analysis often assume that the system is linear. However, as we show in this work, despite the nonlinearity, reliable communication is still possible. Furthermore, this work motivates future studies on more realistic modelling, analysis, and design of theoretical models and algorithms for these systems. <s> BIB015 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Bacterial populations housed in microfluidic environments can serve as transceivers for molecular communication, but the data-rates are extremely low (e.g., 10-5 bits per second.). In this work, genetically engineered Escherichia coli bacteria were maintained in a microfluidic device where their response to a chemical stimulus was examined over time. The bacteria serve as a communication receiver where a simple modulation such as on-off keying (OOK) is achievable, although it suffers from very poor data-rates. We explore an alternative communication strategy called time-elapse communication (TEC) that uses the time period between signals to encode information. We identify the limitations of TEC under practical non-zero error conditions and propose an advanced communication strategy called smart time-elapse communication (TEC-SMART) that achieves over a 10x improvement in data-rate over OOK. We derive the capacity of TEC and provide a theoretical maximum data-rate that can be achieved. <s> BIB016 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. 
ENABLING AND CONCOMITANT TECHNOLOGIES <s> Microfluidics deals with manipulation and control of fluids which flow in constrained sub-millimetric media. In this paper communication concepts and networking approaches, typical of telecommunications, are extended to the microfluidic domain. The work illustrated in this paper investigates on possible approaches to information encoding and evaluation of the corresponding channel capacity as well as design of switching solutions. Based on the results of this study, the Hydrodynamic Controlled Microfluidic Network (HCN) paradigm is proposed, which is based on a pure hydrodynamic microfluidic switching function. The HCN paradigm can be applied to interconnect Labs-on-a-Chip (LoCs) in a microfluidic network in which chemical/biological samples are routed between the LoCs by exploiting hydrodynamic effects only. The resulting LoCs are expected to be highly flexible and inexpensive, and thus to become extremely useful in chemical/biological analysis and synthesis. <s> BIB017 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Based on the properties of graphene nano-patch antennas, we propose a reconfigurable multiple-input multiple-output (MIMO) antenna system for Terahertz (THz) communications. First, the characteristics of the graphene are analyzed and a beam reconfigurable antenna is designed. The beamwidth and direction can be controlled by the states of each graphene patch in the antenna. Then the path loss and reflection models of the THz channel are discussed. We combine the graphene-based antenna and the THz channel model, and propose a new MIMO antenna design. The radiation directions of the transmit antennas can be programmed dynamically, leading to different channel state matrices. Finally, the path loss and the channel capacity are numerically calculated and compared with those of the Gigahertz (GHz) channel. The results show that for short range communications, the proposed MIMO antenna design can enlarge the channel capacity by both increasing the number of antennas and choosing the best channel state matrices. <s> BIB018 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Nearly all existing nanoelectronic sensors are based on charge detection, where molecular binding changes the charge density of the sensor and leads to sensing signal. However, intrinsically slow dynamics of interface-trapped charges and defect-mediated charge-transfer processes significantly limit those sensors' response to tens to hundreds of seconds, which has long been known as a bottleneck for studying the dynamics of molecule-nanomaterial interaction and for many applications requiring rapid and sensitive response. Here we report a fundamentally different sensing mechanism based on molecular dipole detection enabled by a pioneering graphene nanoelectronic heterodyne sensor. The dipole detection mechanism is confirmed by a plethora of experiments with vapour molecules of various dipole moments, particularly, with cis- and trans-isomers that have different polarities. Rapid (down to ~0.1 s) and sensitive (down to ~1 ppb) detection of a wide range of vapour analytes is achieved, representing orders of magnitude improvement over state-of-the-art nanoelectronics sensors. 
<s> BIB019 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Cloud computing is ever stronger converging with the Internet of Things (IoT) offering novel techniques for IoT infrastructure virtualization and its management on the cloud. However, system designers and operations managers face numerous challenges to realize IoT cloud systems in practice, mainly due to the complexity involved with provisioning large-scale IoT cloud systems and diversity of their requirements in terms of IoT resources consumption, customization of IoT capabilities and runtime governance. In this paper, we introduce the concept of software-defined IoT units--a novel approach to IoT cloud computing that encapsulates fine-grained IoT resources and IoT capabilities in well-defined APIs in order to provide a unified view on accessing, configuring and operating IoT cloud systems. Our software-defined IoT units are the fundamental building blocks of software-defined IoT cloud systems. We present our framework for dynamic, on-demand provisioning and deploying such software-defined IoT cloud systems. By automating provisioning processes and supporting managed configuration models, our framework simplifies provisioning and enables flexible runtime customizations of software-defined IoT cloud systems. We demonstrate its advantages on a real-world IoT cloud system for managing electric fleet vehicles. <s> BIB020 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> A log-periodic toothed nanoantenna based on graphene is proposed, and its multi-resonance properties with respect to the variations of the chemical potential are investigated. The field enhancement and radar cross-section of the antenna for different chemical potentials are calculated, and the effect of the chemical potential on the resonance frequency is analyzed. In addition, the dependence of the resonance frequency on the substrate is also discussed. It is shown that large modulation of resonance intensity in log-periodic toothed nanoantenna can be achieved via turning the chemical potential of graphene. The tunability of the resonant frequencies of the antenna can be used to broad tuning of spectral features. The property of tunable multi-resonant field enhancement has great prospect in the field of graphene-based broadband nanoantenna, which can be applied in non-linear spectroscopy, optical sensor, and near-field optical microscopy. <s> BIB021 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. 
This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services. <s> BIB022 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The journey of IoT from Arpanet to state of art wireless communication in vehicles is presented. The history of the wireless standards used in IoT is described which gives the path followed by the community of IoT using different communication modes. It is observed that Wi-Fi is the speediest of all the wireless standards used for IoT. A special observation here, which is design constraint, is that internet connectivity is mandatory for information communication. Extensive usage of IoT in Vehicle communication has impacted the research work to develop new routing and data gathering protocols. The growth of internet of things in vehicular communication is discussed. Surveys of the routing protocols are presented. It is interesting to note that present smart vehicles have data sensing and gathering (DSG) modules and data fusing models to improve the services provided to user community. The survey depicts the advancement in the IoT trends up to date till the year 2015. A brief overview of the IoT system design is presented with some typical issues that have to be seen during deployment phase. <s> BIB023 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The double Debye model has been used to understand the dielectric response of different types of biological tissues at terahertz (THz) frequencies but fails in accurately simulating human breast tissue. This leads to limited knowledge about the structure, dynamics, and macroscopic behavior of breast tissue, and hence, constrains the potential of THz imaging in breast cancer detection. The first goal of this paper is to propose a new dielectric model capable of mimicking the spectra of human breast tissue's complex permittivity in THz regime. Namely, a non-Debye relaxation model is combined with a single Debye model to produce a mixture model of human breast tissue. A sampling gradient algorithm of nonsmooth optimization is applied to locate the optimal fitting solution. Samples of healthy breast tissue and breast tumor are used in the simulation to evaluate the effectiveness of the proposed model. Our simulation demonstrates exceptional fitting quality in all cases. 
The second goal is to confirm the potential of using the parameters of the proposed dielectric model to distinguish breast tumor from healthy breast tissue, especially fibrous tissue. Statistical measures are employed to analyze the discrimination capability of the model parameters while support vector machines are applied to assess the possibility of using the combinations of these parameters for higher classification accuracy. The obtained analysis confirms the classification potential of these features. <s> BIB024 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Nanocommunications via Forster Resonance Energy Transfer (FRET) is a promising means of realising collaboration between photoactive nanomachines to implement advanced nanotechnology applications. The method is based on exchange of energy levels between fluorescent molecules by the FRET phenomenon which intrinsically provides a virtual nanocommunication link. In this work, further to the extensive theoretical studies, we demonstrate the first information transfer through a FRET-based nanocommunication channel. We implement a digital communication system combining macroscale transceiver instruments and a bulk solution of fluorophore nanoantennas. The performance of the FRET-based Multiple-Input and Multiple-Output (MIMO) nanocommunication channel between closely located mobile nanoantennas in the sample solution is evaluated in terms of Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER) obtained for the transmission rates of 50 kbps, 150 kbps and 250 kbps. The results of the performance evaluation are very promising for the development of high-rate and reliable molecular communication networks at nanoscale. <s> BIB025 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> This paper presents experimental study of real human skin material parameter extraction based on terahertz (THz) time-domain spectroscopy in the band 0.1–2.5 THz. Results in this paper show that electromagnetic properties of the human skin distinctively affect the path loss and noise temperature parameters of the communication link, which are vital for channel modeling of in-body nanonetworks. Refractive index and absorption coefficient values are evaluated for dermis layer of the human skin. Repeatability and consistency of the data are accounted for in the experimental investigation and the morphology of the skin tissue is verified using a standard optical microscope. Finally, the results of this paper are compared with the available work in the literature, which shows the effects of dehydration on the path loss and noise temperature. The measured parameters, i.e., the refractive index and absorption coefficient are 2.1 and 18.45 cm−1, respectively, at 1 THz for a real human skin, which are vital for developing and optimizing future in-body nanonetworks. <s> BIB026 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> In diffusion-based molecular communication, information transport is governed by diffusion through a fluid medium. The achievable data rates for these channels are very low compared to the radio-based communication system, since diffusion can be a slow process. 
To improve the data rate, a novel multiple-input multiple-output (MIMO) design for molecular communication is proposed that utilizes multiple molecular emitters at the transmitter and multiple molecular detectors at the receiver (in RF communication these all correspond to antennas). Using particle-based simulators, the channel’s impulse response is obtained and mathematically modeled. These models are then used to determine interlink interference (ILI) and intersymbol interference (ISI). It is assumed that when the receiver has incomplete information regarding the system and the channel state, low complexity symbol detection methods are preferred since the receiver is small and simple. Thus, four detection algorithms are proposed—adaptive thresholding, practical zero forcing with channel models excluding/including the ILI and ISI, and Genie-aided zero forcing. The proposed algorithms are evaluated extensively using numerical and analytical evaluations. <s> BIB027 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> Stochastic resonance (SR) is an intrinsic noise usage system for small-signal sensing found in various living creatures. The noise-enhanced signal transmission and detection system, which is probabilistic but consumes low power, has not been used in modern electronics. We demonstrated SR in a summing network based on a single-walled carbon nanotube (SWNT) device that detects small subthreshold signals with very low current flow. The nonlinear current-voltage characteristics of this SWNT device, which incorporated Cr electrodes, were used as the threshold level of signal detection. The adsorption of redox-active polyoxometalate molecules on SWNTs generated additional noise, which was utilized as a self-noise source. To form a summing network SR device, a large number of SWNTs were aligned parallel to each other between the electrodes, which increased the signal detection ability. The functional capabilities of the present small-size summing network SR device, which rely on dense nanomaterials and exploit i... <s> BIB028 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> This paper focuses on the analysis of cultivated collagen samples at the terahertz (THz) band using double debye model parameter extraction. Based on measured electrical and optical parameters, we propose a model to describe such parameters extracted with a global optimisation method, namely, particle swarm optimisation. Comparing the measured data with ones in the open literature, it is evident that using only cultivated collagen is not sufficient to represent the performance of the epidermis layer of the skin tissue at the THz band of interest. The results show that the differences between the measured data and published ones are as high as 14 and 6 for the real and imaginary values of the dielectric constant, respectively. Our proposed double debye model agrees well with the measured data. <s> BIB029 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> We propose an adaptive sampling algorithm to improve the acquisition efficiency for terahertz time-domain spectroscopy (THz-TDS). 
Most THz-TDS measurements scan the delay line with constant speed and the data acquired have constant time steps. Our algorithm exploits the fact that the useful information within THz signals tends to cluster at certain positions: efficient sampling can be done by adaptively increasing the sample rate in regions containing more interesting features. The algorithm was implemented by programming a linear optical delay line. Depending on the experiment parameters, the sampling time of a pulse can be reduced by a factor of 2–3 with only slight degradation in accuracy, possible sources of error are discussed. We show how adaptive sampling algorithms can improve the acquisition time in applications where the main pulse is the primary concern. <s> BIB030 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The design of communication systems capable of processing and exchanging information through molecules and chemical processes is a rapidly growing interdisciplinary field, which holds the promise to revolutionize how we realize computing and communication devices. While molecular communication (MC) theory has had major developments in recent years, more practical aspects in the design and prototyping of components capable of MC functionalities remain less explored. In this paper, motivated by a bulk of MC literature on information transmission via molecular pulse modulation, the design of a pulse generator is proposed as an MC component able to output a predefined pulse-shaped molecular concentration upon a triggering input. The chemical processes at the basis of this pulse generator are inspired by how cells generate pulse-shaped molecular signals in biology. At the same time, the slow-speed, unreliability, and non-scalability of these processes in cells are overcome with a microfluidic-based implementation based on standard reproducible components with well-defined design parameters. Mathematical models are presented to demonstrate the analytical tractability of each component, and are validated against a numerical finite element simulation. Finally, the complete pulse generator design is implemented and simulated in a standard engineering software framework, where the predefined nature of the output pulse shape is demonstrated together with its dependence on practical design parameters. <s> BIB031 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> IV. ENABLING AND CONCOMITANT TECHNOLOGIES <s> The nervous system holds a central position among the major in-body networks. It comprises of cells known as neurons that are responsible to carry messages between different parts of the body and make decisions based on those messages. In this work, further to the extensive theoretical studies, we demonstrate the first controlled information transfer through an in vivo nervous system by modulating digital data from macro-scale devices onto the nervous system of common earthworms and conducting successful transmissions. The results and analysis of our experiments provide a method to model networks of neurons, calculate the channel propagation delay, create their simulation models, indicate optimum parameters such as frequency, amplitude and modulation schemes for such networks, and identify average nerve spikes per input pulse as the nervous information coding scheme. 
Future studies on neuron characterization and artificial neurons may benefit from the results of our work. <s> BIB032
A. EM Aspects

1) Nano-Devices: Advances in nanotechnology have paralleled developments in Internet and sensing technology. The development route is summarised in Fig. 10. At the same time, because graphene and CNTs have been widely regarded as the future stars of the nanotechnology world ever since their appearance, much attention has been devoted to these novel materials and great advances have been achieved. The antenna, as the basic element of any communication system, was the first component to be investigated in depth, with numerous papers on graphene- or CNT-based antenna designs appearing over the last five years. First, the feasibility of graphene and CNT applications was investigated and wave propagation on a graphene sheet was studied. Then, various antennas were proposed, such as graphene patch antennas of different shapes and CNT dipole antennas BIB006. Furthermore, a graphene-based log-periodic toothed nanoantenna was proposed in BIB021, and a novel graphene-based nanoantenna, which exploits the behaviour of Surface Plasmon Polariton (SPP) waves in semi-finite sized Graphene Nanoribbons (GNRs), was proposed in BIB012. Recently, a beam-reconfigurable multiple-input multiple-output (MIMO) antenna system based on graphene nano-patch antennas was proposed in BIB018, whose radiation pattern can be steered dynamically, leading to different channel state matrices. The tunability underlying several of these designs comes from graphene's chemical-potential-dependent conductivity; a short numerical sketch of this dependence is given below. Meanwhile, graphene-based sensor designs have also been introduced. Reference BIB019 introduces a graphene-based wearable sensor which can detect airborne chemicals and their concentration levels, such as acetone (an indicator of diabetes) or nitric oxide and oxygen (bio-markers for high blood pressure, anemia, or lung disease). Later, a graphene charge detector capable of sensing extremely low concentrations of charges close to its surface was designed to detect HIV-related DNA hybridization at picomolar concentrations. A Stochastic Resonance based (SR-based) electronic device, consisting of single-walled carbon nanotubes (SWNTs) and phosphomolybdic acid (PMo12) molecules, has been developed at Osaka University for use in bio-inspired sensors BIB028. The authors believe that such devices can be used to develop neural networks capable of spontaneous fluctuation.

2) Internet-of-Things: The Internet-of-Things (IoT) refers to a network of devices with Internet connectivity that communicate directly, without human intervention, in order to provide smart services to users BIB022. The Internet-of-Things shares the same development route with nanonetworks, and it is believed that the ultimate goal is to merge both technologies to form the Internet-of-Nano-Things (IoNT) BIB013. It is generally believed that the achievements in IoT can also be applied to nanonetworks with minor modifications. In the IoT, the number of sensors/devices can become enormous BIB022, so many challenges related to addressing and identification of the connected devices arise, just as in nanonetworks. Furthermore, a huge amount of data would be produced by such a high number of sensors, which requires high bandwidth and real-time access. Moreover, the implementation of IoT is complex, as it includes cooperation among massive, distributed, autonomous and heterogeneous components at various levels of granularity and abstraction.
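Returning briefly to the graphene devices discussed under Nano-Devices above, the sketch below evaluates the intraband (Drude-like) term of graphene's Kubo conductivity for a few chemical potentials; this is the frequency- and bias-dependent quantity behind the SPP-based and reconfigurable designs of BIB012, BIB021 and BIB018. The relaxation time, temperature and chemical potentials are assumed, illustrative values, and the interband term is deliberately omitted.

```python
import numpy as np

# Physical constants
e    = 1.602176634e-19    # elementary charge (C)
kB   = 1.380649e-23       # Boltzmann constant (J/K)
hbar = 1.054571817e-34    # reduced Planck constant (J*s)

def sigma_intraband(f, mu_c_eV=0.2, tau=1e-12, T=300.0):
    """Intraband (Drude-like) term of graphene's sheet conductivity (S).
    f: frequency (Hz), mu_c_eV: chemical potential (eV), tau: relaxation time (s).
    """
    w = 2 * np.pi * f
    mu_c = mu_c_eV * e
    prefactor = -1j * e**2 * kB * T / (np.pi * hbar**2 * (w - 1j / tau))
    return prefactor * (mu_c / (kB * T) + 2 * np.log(np.exp(-mu_c / (kB * T)) + 1))

freqs = np.array([0.5e12, 1e12, 2e12, 5e12])          # THz band test frequencies
for mu in (0.1, 0.2, 0.5):                            # tuning via chemical potential (eV)
    s = sigma_intraband(freqs, mu_c_eV=mu)
    print(f"mu_c = {mu:.1f} eV -> |sigma| =", np.round(np.abs(s) * 1e3, 2), "mS")
```

Raising the chemical potential (e.g., by electrostatic gating) increases the conductivity magnitude and shifts the frequencies at which the sheet supports slow SPP waves, which is the knob the tunable and reconfigurable antenna designs above rely on.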
Applications in health BIB009, smart security, and smart cities have found their way to the market and realize the potential benefits of this technology BIB023. In addition, many other IoT applications can be enumerated, such as agriculture, industry, monitoring of natural resources (water, forests, etc.), transport system design, and military applications BIB010. Network densification is considered an enabler for the successful diffusion of IoT services and applications in society. In reality, millions of simultaneous connections would be built in the IoT, involving a variety of devices, connected homes, smart grids and smart transportation systems BIB010. The concepts of cloud and fog computing have been introduced to offer large storage, high computation and networking capabilities BIB020, and a high-level design of a cloud-assisted, intelligent, software-agent-based IoT architecture has also been proposed. Besides the concept of IoNT, the Social Internet of Things (SIoT) has also been proposed recently BIB011. To advocate a common standard, the IoT Global Standards Initiative (IoT-GSI) has been established by the ITU-T.

3) Bio-Tissue Characterization: Characterization of the channel medium is an essential part of investigating the channel; it is therefore important to obtain the parameters of bio-tissues when body-centric communication is under study. Usually, the electromagnetic parameters, i.e., the permittivity ε and permeability µ, are used to describe a medium at microwave and RF frequencies, while at optical frequencies the material is usually described by its refractive index (or index of refraction). Techniques such as the resonant cavity perturbation method, the Transmission-Reflection Method (TRM), and THz Time-Domain Spectroscopy (THz-TDS) have been applied to obtain the dielectric properties of human tissues BIB029. A database of the parameters of human tissues (skin, muscle, blood, bone, etc.) from 10 Hz to 100 GHz has been compiled, mainly on the basis of Gabriel's work BIB001 - BIB002. The THz-TDS system has been studied in depth by E. Pickwell BIB004 - BIB030 and has been applied to measure the dielectric parameters of bio-tissues such as liver BIB007, human colonic tissue BIB008, and human breast tissue BIB005. Both basal cell carcinoma and normal skin were measured by C. Bao to investigate the possibility of detecting skin cancer at an early stage, building on work on skin parameter extraction with a global optimization method BIB014, and a model of human breast tissue in the THz band is studied in BIB024; a double Debye description of tissue permittivity of this kind is sketched below. Recently, the performance of DED samples and collagen has been investigated in BIB026, and the corresponding models have been studied as well, to assess whether collagen and DED samples can be adopted as phantoms during measurements BIB029. More work needs to be done to build the database, and appropriate phantoms should be sought for use in measurement setups.

B. Molecular Aspects

1) Molecular Test-beds: Until now, one fundamental challenge in the application of molecular communication has been that we still do not have well-studied, nano-sized, biologically friendly molecular communication transceivers, despite the existing research efforts in designing and building MC test-beds BIB025, BIB015 - BIB032, and in engineering biological MC systems BIB003.
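As a concrete companion to the bio-tissue characterization discussion above, the sketch below evaluates a generic double Debye permittivity model and converts it into a refractive index and an absorption coefficient at a few THz frequencies. The Debye parameters used are assumed, skin-like placeholders, not the fitted values of BIB014, BIB029 or the other cited works.

```python
import numpy as np

c0 = 2.99792458e8  # speed of light in vacuum (m/s)

def double_debye_eps(f, eps_inf, eps_s, eps_2, tau1, tau2):
    """Double Debye relative permittivity, exp(+j*w*t) convention: eps = eps' - j*eps''."""
    w = 2 * np.pi * f
    return (eps_inf
            + (eps_s - eps_2) / (1 + 1j * w * tau1)
            + (eps_2 - eps_inf) / (1 + 1j * w * tau2))

# Assumed, skin-like placeholder parameters (not fitted measurement values)
params = dict(eps_inf=3.0, eps_s=60.0, eps_2=4.0, tau1=10e-12, tau2=0.2e-12)

for f in (0.5e12, 1.0e12, 2.0e12):
    eps = double_debye_eps(f, **params)
    n_complex = np.sqrt(eps)                 # principal root: n - j*kappa
    n, kappa = n_complex.real, -n_complex.imag
    alpha_cm = 2 * (2 * np.pi * f) * kappa / c0 / 100.0   # absorption coefficient (1/cm)
    print(f"{f/1e12:.1f} THz: n = {n:.2f}, alpha = {alpha_cm:.0f} cm^-1")
```

Parameter-extraction studies such as BIB014 and BIB029 essentially run this evaluation in reverse: they search (e.g., with global or particle swarm optimisation) for the Debye parameters that make the model track measured THz-TDS spectra.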
• Macroscale MC Test-beds: The first macro-scale experimental test-bed for molecular communication was shown in BIB015, where text messages were converted to a binary sequence and transmitted via alcohol particles using time-slotted on-off keying modulation. In this tabletop MC test-bed, message transmission and detection were realized via an alcohol spray and a metal-oxide alcohol sensor, while message generation and interpretation were electronically controlled via Arduino micro-controllers. The authors showed that a transmission data rate of 0.3 bit/s with bit error rates of 0.01 to 0.03 can be achieved with this setup (a simplified simulation of such a link is sketched below). Later on, this SISO test-bed was duplicated to form a multiple-input multiple-output (MIMO) tabletop MC test-bed with multiple sprays and sensors in BIB027, which achieved 1.7 times higher transmission data rates than the SISO test-bed.

• Nanoscale MC Test-bed: The first nanoscale molecular communication based on Förster Resonance Energy Transfer (FRET) was implemented and tested in BIB025, where the information was encoded in the energy states of fluorescent molecules and the energy states were exchanged via FRET.

• Microfluidic MC Test-beds: In BIB016, genetically engineered Escherichia coli (E. coli) bacteria populations housed in micrometer-sized chambers were used as MC transceivers connected via microfluidic pathways; generation and detection of the message molecule (N-(3-oxohexanoyl)-L-homoserine lactone, or C6-HSL) were realized via LuxI enzyme catalysis and the LuxR receptor protein with a fluorescent readout, based on on-off keying (OOK). To improve the achievable data rates of this test-bed beyond OOK, time-elapse communication (TEC) was proposed, which encodes the information in the time interval between two consecutive pulses and showed an order-of-magnitude data-rate improvement. In BIB017, the Hydrodynamic Controlled microfluidic Network (HCN) was proposed, in which the information is encoded and decoded in the distance between consecutive droplets, and droplets carrying information are controlled and transported in the HCN to realize molecular communication. The maximum information rate of the HCN was analyzed, the noise effect in the HCN was simulated using OpenFOAM, and an HCN prototype was fabricated in poly(dimethylsiloxane) (PDMS) polymer. Inspired by the biological circuits of synthetic biology, chemical circuits based on a series of fast chemical reactions were designed in BIB031 to transform a time-varying flow of information molecules from a digital signal into an analog signal inside a purpose-designed microfluidic device. This work provides a novel research direction for performing signal processing with chemical circuits inside microfluidic devices, and also an alternative method for building proof-of-concept analogues of biological circuits with potentially higher speed.
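The sketch below mimics, in a highly simplified way, the tabletop OOK chain referenced above: text is mapped to bits, each '1' bit triggers a concentration pulse, the sensor output is sampled with additive noise, and a per-slot threshold detector recovers the bits. The sensor response shape, noise level and slot length are invented for illustration and inter-slot ISI is ignored, so this is a toy analogue of the test-bed in BIB015 rather than a model of it.

```python
import numpy as np

rng = np.random.default_rng(3)

SLOT_SAMPLES = 20                               # sensor samples per time slot (assumed)
PULSE = np.exp(-np.arange(SLOT_SAMPLES) / 6.0)  # crude, assumed sensor response shape
PEAK = 5.0                                      # mean sensor response to one spray (a.u.)
NOISE_STD = 0.8                                 # assumed sensor noise level

def text_to_bits(msg: str) -> np.ndarray:
    return np.unpackbits(np.frombuffer(msg.encode(), dtype=np.uint8))

def bits_to_text(bits: np.ndarray) -> str:
    return np.packbits(bits).tobytes().decode(errors="replace")

def transmit(bits: np.ndarray) -> np.ndarray:
    """Time-slotted OOK: release one concentration pulse for every '1' bit."""
    clean = np.concatenate([b * PEAK * PULSE for b in bits])
    return clean + rng.normal(0.0, NOISE_STD, clean.size)

def receive(samples: np.ndarray) -> np.ndarray:
    """Energy detector: average each slot and compare against a fixed threshold."""
    slot_means = samples.reshape(-1, SLOT_SAMPLES).mean(axis=1)
    threshold = 0.5 * PEAK * PULSE.mean()
    return (slot_means > threshold).astype(np.uint8)

message = "O CANADA"
decoded = bits_to_text(receive(transmit(text_to_bits(message))))
print(decoded)   # reproduces the message at this assumed noise level
```

Replacing the fixed threshold with equalization, or letting each pulse leak into later slots, immediately reproduces the ISI-limited behaviour discussed in the BER subsection above.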
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Multicellular organisms create complex patterned structures from identical, unreliable components. Learning how to engineer such robust behavior is important to both an improved understanding of computer science and to a better understanding of the natural developmental process. Earlier work by our colleagues and ourselves on amorphous computing demonstrates in simulation how one might build complex patterned behavior in this way. This work reports on our first efforts to engineer microbial cells to exhibit this kind of multicellular pattern directed behavior. We describe a specific natural system, the Lux operon of Vibrio fischeri, which exhibits density dependent behavior using a well characterized set of genetic components. We have isolated, sequenced, and used these components to engineer intercellular communication mechanisms between living bacterial cells. In combination with digitally controlled intracellular genetic circuits, we believe this work allows us to begin the more difficult process of using these communication mechanisms to perform directed engineering of multicellular structures, using techniques such as chemical diffusion dependent behavior. These same techniques form an essential part of our toolkit for engineering with life, and are widely applicable in the field of microbial robotics, with potential applications in medicine, environmental monitoring and control, engineered crop cultivation, and molecular scale fabrication. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> This study presents a self organized control of nano sensor's mobility based on the particle swarm optimization (PSO) algorithm. PSO models the set of potential problem solutions as a swarm of particles moving about in a virtual search space by adapting self organizing concept. The study is based on the premise that the deployment of nanosensors and used for investigating within human body. The spatially distributed nanosensors collect information about the target environment by moving throughout the body. Movement of sensors will be addressed how sensors are perimeter covered to the target environment. There may be two kinds of issues for coverage such that insufficient coverage and covered by too many sensors in certain areas without necessary. If there is not enough coverage, the required information may not get properly. If there are more sensors than necessary, there may be some redundant nodes and it may consume more energy. Therefore, the sensors should move to the next target, which has proper coverage criteria. The simulation results have approved that the proposed scheme effectively constructs a self organized mobility within the optimized coverage area. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Abstract Nanotechnology has the potential to have a significant impact on a number of application areas. The possibility of building components at the nanoscale revolutionized the way we think about systems by enabling myriad possibilities, that were simply impossible otherwise. At the same time, countless challenges were raised in system design. 
One such challenge is to build components that act together to handle complex tasks that require physically separate components to work in unison. To achieve coordination, these components have to be capable of communicating reliably, either with a central controller or amongst themselves. In this research, we propose to build analytical foundations to analyze and design nanonetworks, consisting of individual stations communicating over a wireless medium using nanotransceivers with nanotube antennas. We give a simple nanoreceiver design and analyze its basic limitations. Based on the insights drawn, we propose a communication-theoretic framework to design reliable and robust nanoreceivers. With the basic limitations of the nanocommunications via nanoantennas in mind, it is possible to develop mathematical tools to help construct nanonetworks that execute basic sequential tasks in a reliable manner with minimal amount of communication and computation required. In this paper, we present a communication-theoretic analysis of networks of nanoscale nodes equipped with carbon nanotube-based receivers and transmitters. Our objective is to analyze the performance characteristics of nanoscale nodes and expose their fundamental capabilities and limitations. The presented analysis is intended to serve as the basis of nanonetwork design enabling various applications. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Embedding nanosensors in the environment would add a new dimension to the Internet of Things, but realizing the IoNT vision will require developing new communication paradigms and overcoming various technical obstacles. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Emerging nanotechnology presents great potential to change human society. Nanoscale devices are able to be included with Internet. This new communication paradigm, referred to as Internet of Nanothings (IoNT), demands very short-range connections among nanoscale devices. IoNT raises many challenges to realize it. Current network protocols and techniques may not be directly applied to communicate with nanosensors. Due to the very limited capability of nanodevices, the devices must have simple communication and simple medium sharing mechanism in order to collect the data effectively from nanosensors. Moreover, nanosensors may be deployed at organs of the human body, and they may produce large data. In this process, the data transmission from nanosensors to gateway should be controlled from the energy efficiency point of view. In this paper, we propose a wireless nanosensor network (WNSN) at the nanoscale that would be useful for intrabody disease detection. The proposed conceptual network model is based on On-Off Keying (OOK) protocol and TDMA framework. The model assumes hexagonal cell-based nanosensors deployed in cylindrical shape 3D hexagonal pole. We also present in this paper the analysis of the data transmission efficiency, for the various combinations of transmission methods, exploiting hybrid, direct, and multi-hop methods. 
<s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Nano-communication is considered to become a major building block for many novel applications in the health care and fitness sector. Given the recent developments in the scope of nano machinery, coordination and control of these devices becomes the critical challenge to be solved. In-Body Nano-Communication based on either molecular, acoustic, or RF radio communication in the terahertz band supports the exchange of messages between these in-body devices. Yet, the control and communication with external units is not yet fully understood. In this paper, we investigate the challenges and opportunities of connecting Body Area Networks and other external gateways with in-body nano-devices, paving the road towards more scalable and efficient Internet of Nano Things (IoNT) systems. We derive a novel network architecture supporting the resulting requirements and, most importantly, investigate options for the simulation based performance evaluation of such novel concepts. Our study is concluded by a first look at the resulting security issues considering the high impact of potential misuse of the communication links. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> Terahertz frequency band, 0.1–10THz, is envisioned as one of the possible resources to be utilized for wireless communications in networks beyond 5G. Communications over this band will be feature a number of attractive properties, including potentially terabit-per-second link capacities, miniature transceivers and, potentially, high energy efficiency. Meanwhile, a number of specific research challenges have to be addressed to convert the theoretical estimations into commercially attractive solutions. Due to the diversity of the challenges, the research on THz communications at its early stages was mostly performed by independent communities from different areas. Therefore, the existing knowledge in the field is substantially fragmented. In this paper, an attempt to address this issue and provide a clear and easy to follow introduction to the THz communications is performed. A review on the state-of-the-art in THz communications research is given by identifying the target applications and major open research challenges as well as the recent achievements by industry, academia, and the standardization bodies. The potential of the THz communications is presented by illustrating the basic tradeoffs in typical use cases. Based on the given summary, certain prospective research directions in the field are identified. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 2) Molecular Experiments: <s> The nervous system holds a central position among the major in-body networks. It comprises of cells known as neurons that are responsible to carry messages between different parts of the body and make decisions based on those messages. In this work, further to the extensive theoretical studies, we demonstrate the first controlled information transfer through an in vivo nervous system by modulating digital data from macro-scale devices onto the nervous system of common earthworms and conducting successful transmissions. 
The results and analysis of our experiments provide a method to model networks of neurons, calculate the channel propagation delay, create their simulation models, indicate optimum parameters such as frequency, amplitude and modulation schemes for such networks, and identify average nerve spikes per input pulse as the nervous information coding scheme. Future studies on neuron characterization and artificial neurons may benefit from the results of our work. <s> BIB008
|
• In Vivo Nervous System Experiment: The first controlled information transfer through an in vivo nervous system was demonstrated in BIB008 . Modulated signals were injected into the nervous systems of earthworms at the anterior end and propagated through the earthworms' nerve cord. Although the network of neurons, i.e., the channel response, was treated as a black box, the authors found that the received signals could be decoded by counting the average number of nerve spikes per input pulse at the posterior end. In addition, the MC system was optimized in terms of frequency, amplitude, and modulation scheme, and the authors showed that the data rate can reach 52.6646 bps with a 7.2 × 10−4 bit error rate when employing 4FSK modulation and a square-shaped pulse.

• Biological MC Experiments: The first engineered intercellular MC experiment between living bacterial cells was reported in BIB001 , where the sender, plasmid pSND-1, was constructed to produce the autoinducer chemical (VAI) via LuxI gene expression inside E. coli. The VAI (information messenger) then migrates through the cell membranes and the medium to interact with the LuxR gene of the receiver plasmid pRCV-3 inside E. coli, which produces green fluorescent protein (GFP) for information decoding. Using protein engineering and synthetic biology, a simple MC system based on bacterial quorum sensing (QS) was engineered in , where a multidomain fusion protein with QS molecular signal generation capability was fabricated as the sender, and an E. coli was engineered as the receiver to receive and report this QS signal. These studies demonstrated the great potential of biofabrication of MC devices.

V. ARCHITECTURE OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS

Generally, it is believed that EM and MC nanonetworks should share the same network architecture, with minor differences according to the specific application.

1) Network Deployment: Aligned with the IEEE P1906.1 framework, the authors of provided an overview of nanonetworks, which are divided into nano-routers, nano-nodes, gateways and nano-micro interfaces. The work proposed in BIB005 investigates the ideal number of devices and the optimal edge length relative to the horizontal length of a generic human body organ. The proposed scheme assumes that nanosensors are distributed in a three-dimensional space according to a homogeneous spatial Poisson process, as shown in Fig. 11 . The authors represent the deployment volume as a cylindrical 3D hexagonal pole, arguing that the cylindrical shape is closer to the shape of human body organs. They assume that as many nano-sensors as desired can be placed, with only one active nano-sensor per hexagonal cell, and they propose a duty-cycle scheme for each sensor under this assumption. A cell is defined as the smallest living unit of an organ. The ideal number of nano-sensors is calculated using an equation derived by the authors, which relates the diameter of the cylinder and the width of the organ to the edge length of the cylinder. The work of BIB005 is a step forward in realizing nano-sensor deployment; however, the authors assume that all nano-sensors can recognize their neighbouring nano-sensors. They also assume that the deployment includes routing nodes, yet they do not state how to calculate the number of routers or micro-interfaces, nor how to position these nodes.
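As a point of reference for the deployment model of BIB005 , the following minimal sketch draws nano-sensor positions from a homogeneous spatial Poisson process inside a cylindrical volume. The dimensions and node intensity are illustrative assumptions, and the hexagonal cell partitioning and duty-cycle equations of the original scheme are not reproduced here.

```python
import numpy as np

def poisson_deployment_in_cylinder(radius, height, intensity, seed=0):
    """Homogeneous spatial Poisson deployment of nano-sensors inside a cylinder.

    radius, height : cylinder dimensions in metres (stand-in for a tubular organ segment)
    intensity      : expected number of nano-sensors per cubic metre
    Returns an (N, 3) array of Cartesian positions.
    """
    rng = np.random.default_rng(seed)
    volume = np.pi * radius**2 * height
    n = rng.poisson(intensity * volume)      # Poisson-distributed node count
    r = radius * np.sqrt(rng.random(n))      # sqrt() keeps the density uniform over the cross-section
    theta = 2 * np.pi * rng.random(n)
    z = height * rng.random(n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

# Example: a 2 mm radius, 10 mm long segment with 10^10 nodes/m^3 (placeholder values)
nodes = poisson_deployment_in_cylinder(2e-3, 10e-3, 1e10)
print(nodes.shape[0], "nano-sensors deployed")
```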
Emre et al. BIB003 argue that the first step in network design and deployment is tightly tied to the parameters of the nano-antenna; hence, nano-antenna design is a critical component of network design. The reason is their observation that there is a clear trade-off between the number of different tasks the nanonetwork can execute and the reliability of communication over the network. The authors therefore proposed a network of nano-devices able to carry out binary tasks and proved that it is possible to construct multi-hop nanonetworks using simple individual nodes activated simultaneously over a shared medium without a significant detriment to reliability. The number of nodes depends on the number of complex tasks the nanonetwork must execute. However, the authors did not provide a mechanism for choosing the appropriate number of nano-nodes or interfaces, nor did they analyse the nano-router or the interfaces as they did for the nano-sensors.

Dressler and Fischer BIB006 discussed the requirements and challenges of designing the gateway, i.e., the interface between the nanonetwork and the macro/micro network, to bridge the gateway/interface void. They stated that multiple gateways are required in an IoNT deployment, each associated with one or more nanonetworks. They also suggested that a gateway should operate at the application layer and recognize the right nanonetwork to receive a message, and that a gateway equipped with one or more nano-communication interfaces should contain both a molecular and a terahertz interface. As a molecular gateway may prove to be a significant challenge, a reasonable approach might be to make the gateway an implantable micro device that uses electromagnetic wireless communication to interface the molecular network to the Internet. While the study in BIB006 discussed the requirements and challenges of gateway deployment, it did not provide a solution. Similar to BIB006 , the study presented in BIB004 discussed the challenges and requirements of gateway deployment and concluded that the gateway will be an implantable device equipped to communicate with both the molecular interface and the EM nanonetwork. However, the study remarked that the high ratio of nano-sensors to gateways could lead to swift energy depletion if gateways process information from every nano-sensor; it therefore suggested distributing the sink architecture and developing a two-layered hierarchy consisting of gateways and nanonetworks.

The aforementioned research attempts to address network deployment; however, the proposed schemes provide only partial solutions: some focus on nano-sensor deployment, while others discuss the requirements and challenges of gateway deployment. No all-encompassing solution has been provided in the literature yet. Additionally, deployment that achieves essential goals such as survivability, reliability, accuracy or latency guarantees remains an unexplored area of research in nanonetwork deployment.
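The two-layered gateway/nanonetwork hierarchy suggested in BIB004 and BIB006 can be pictured with the following minimal sketch. The class names, the addressing-by-function convention and the forwarding methods are hypothetical illustrations, not an API from the cited works.

```python
from dataclasses import dataclass, field

@dataclass
class Nanonetwork:
    """A cluster of nano-nodes performing one function (e.g. glucose sensing)."""
    function: str
    readings: list = field(default_factory=list)

    def report(self, value):
        self.readings.append(value)

@dataclass
class Gateway:
    """Implantable micro device bridging in-body nanonetworks and the macro world."""
    clusters: dict = field(default_factory=dict)   # function name -> Nanonetwork

    def attach(self, net: Nanonetwork):
        self.clusters[net.function] = net

    def inbound(self, function: str, command: str):
        # Inbound traffic must identify the target nanonetwork (addressing by function).
        net = self.clusters.get(function)
        return f"forwarded '{command}' to {function}" if net else "unknown nanonetwork"

    def outbound(self):
        # Outbound traffic needs no addressing: the gateway is the single sink and
        # simply aggregates and pushes everything to the external body-area device.
        return {name: net.readings for name, net in self.clusters.items()}

gw = Gateway()
glucose = Nanonetwork("glucose-monitoring")
gw.attach(glucose)
glucose.report(5.4)
print(gw.inbound("glucose-monitoring", "increase sampling rate"))
print(gw.outbound())
```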
2) Network Mobility: Nano-sensors (NS) are dynamic components in their applications: they are forced to move, and each move is dictated by their environment. An environmental NS will move according to wind direction and force, which in turn adjusts its controller association, location and link quality. Comparatively, the motion of a blood-monitoring NS is influenced by its surroundings: the speed and turbidity of the blood flow and the vessel thickness affect the NS link quality, velocity and location. This effect is far more pronounced in nanonetworks than in traditional sensor networks because of the unique nature of the NS and the modulation used in nanonetwork communication. Nanonetworks communicate using TS-OOK, which requires nodes to be highly synchronized, an aspect that can be significantly affected by NS mobility. TS-OOK synchronizes transmissions between sender and receiver by requiring the receiver to listen at fixed time intervals, thereby ensuring that the transmitted bits are received. The distance between receiver and sender has the largest impact on this process and on deciding the instants at which the receiver should listen; this distance may change due to NS movement and might result in missing a transmission. The work in studied the effect of NS movement on the communication link. The authors studied the pulse time-shift, defined as the distance in time between the actual arrival of the signal and its estimated arrival in the absence of movement, taking into account the Doppler effect, the information reduction, and the increase in error rate. They concluded that the Doppler effect is negligible; however, the pulse time-shift can introduce inter-symbol interference (ISI), and NS movement influences the maximum information rate and the attainable error rate. The work presented in provides good insight into the effect of mobility in nanosensor networks; however, the assumptions that the transmitter is static while the receiver is mobile, and that NSs move at the speed of light, may limit the scope of the results and their applicability. Even though NS mobility may pose a major challenge to the practical deployment of nanosensors, this area is still severely under-researched.
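A back-of-envelope illustration of the pulse time-shift discussed above is sketched below. The node speed, symbol period and packet length are assumed values, and the propagation speed is approximated by the speed of light in vacuum rather than a tissue-specific value.

```python
C = 3e8  # propagation speed approximated by the speed of light in vacuum (simplification)

def pulse_time_shift(node_speed, symbol_period, pulse_duration=100e-15):
    """Time shift of a TS-OOK pulse accumulated over one symbol period
    when the receiver drifts away from the transmitter at node_speed (m/s)."""
    displacement = node_speed * symbol_period      # extra distance covered per symbol
    shift = displacement / C                       # extra propagation delay per symbol
    return shift, shift / pulse_duration           # absolute shift and shift in pulse widths

# Illustrative values: blood-flow-like speed of 0.1 m/s and a symbol period of 10 ns
shift, in_pulses = pulse_time_shift(0.1, 10e-9)
print(f"shift per symbol: {shift:.3e} s ({in_pulses:.2e} pulse durations)")

# Drift accumulated over a long packet can eventually exceed the listening window
n_symbols = 10_000_000
print(f"drift after {n_symbols} symbols: {shift * n_symbols / 100e-15:.1f} pulse durations")
```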
Reference BIB007 remarked that there is a pressing need for mobility prediction models; a purely reactive response to NS movement is no longer satisfactory. The authors of BIB002 proposed a self-organised movement control method for nanonetworks. The algorithm uses the localization of a particle and its neighbouring particles to optimise particle locations and improve the movement positions of NS through particle swarm optimisation (PSO). The proposed algorithm cannot be considered a general mobility model for nanosensors, because it targets homogeneous networks, which is not the norm for a nanonetwork: nanonetworks are expected to consist of heterogeneous devices with diverse capabilities. Additionally, the model is based on unit-disk coverage, thereby inheriting the advantages and disadvantages of that method. In , the authors proposed a scheme for handing off mobile sensors to the most appropriate nano-controller in order to conserve energy and reduce the rate of unsuccessful transmissions. The authors presented a TDMA-based MAC protocol with a simple fuzzy logic system to control the mobility procedure. They used locally available metrics at each nano-node, namely the distance of the mobile nano-node from the nano-controller, the traffic load, and the residual energy of the nano-controller, as fuzzy input variables controlling the hand-off decision. The scope of the offered solution is limited by the assumptions of constant nanosensor velocity and unit-disk transmission, similarly to the other proposed schemes. Additionally, the practicality of deployment depends heavily on the trade-off between the accuracy and the complexity of the algorithm. Hence, the problem of NS mobility modelling still stands as an urgent area of research for the practical deployment of nanonetworks.
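To make the hand-off logic above concrete, the following toy sketch scores candidate nano-controllers from the same three locally available metrics. The crisp weighted sum is a simplified stand-in for the fuzzy inference of the cited scheme, and the weights and normalisation constants are hypothetical.

```python
def handoff_score(distance, traffic_load, residual_energy,
                  max_distance=1e-3, weights=(0.4, 0.3, 0.3)):
    """Score a candidate nano-controller from locally available metrics (higher is better).
    A crisp stand-in for the fuzzy inference used in the cited hand-off scheme."""
    w_d, w_l, w_e = weights
    proximity = 1.0 - min(distance / max_distance, 1.0)   # closer controller -> higher score
    idleness  = 1.0 - min(max(traffic_load, 0.0), 1.0)    # lightly loaded controller -> higher score
    energy    = min(max(residual_energy, 0.0), 1.0)       # residual energy as a fraction
    return w_d * proximity + w_l * idleness + w_e * energy

candidates = {
    "controller-A": handoff_score(0.2e-3, 0.7, 0.9),
    "controller-B": handoff_score(0.6e-3, 0.2, 0.5),
}
print(max(candidates, key=candidates.get))  # hand off to the best-scoring controller
```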
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, rising new applications of nanotechnology in the medical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. The latest advancements in graphene- based electronics have opened the door to electromagnetic communication among nano-devices in the terahertz band (0.1-10 THz). This frequency band can potentially provide very large bandwidths, ranging from the entire band to several gigahertz- wide windows, depending on the transmission distance and the molecular composition of the channel. In this paper, the capacity of the terahertz channel is numerically evaluated by using a new terahertz propagation model, for different channel molecular compositions, and under different power allocation schemes. A novel communication technique based on the transmission of ultra-short pulses, less than one picosecond long, is motivated and quantitatively compared to the capacity- optimal power allocation scheme. The results show that for the very short range, up to a few tens of millimeters, the transmission of short pulses offer a realistic and still efficient way to exploit the terahertz channel. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nanotechnology is providing the engineering community with a new set of tools to design and manufacture integrated devices just a few hundred nanometers in total size. Communication among these nano-devices will boost the range of applications of nanotechnology in several fields, ranging from biomedical research to military technology or environmental science. Within the different alternatives for communication in the nanoscale, recent developments in nanomaterials point to the Terahertz band (0.1-10 THz) as the frequency range of operation of future electromagnetic nano-transceivers. This frequency band can theoretically support very large bit-rates in the short range, i.e., for distances below one meter. Due to the limited capabilities of individual nano-devices, pulse-based communications have been proposed for electromagnetic nanonetworks in the Terahertz band. However, the expectedly very large number of nano-devices and the unfeasibility to coordinate them, can make interference a major impairment for the system. In this paper, low-weight channel coding is proposed as a novel mechanism to reduce interference in pulse-based nanonetworks. Rather than utilizing channel codes to detect and correct transmission errors, it is shown that by appropriately choosing the weight of a code, interference can be mitigated. The performance of the proposed scheme is analytically and numerically investigated both in terms of overall interference reduction and achievable information rate, by utilizing a new statistical interference model. 
The results show that this type of network-friendly channel coding schemes can be used to alleviate the interference problem in nanonetworks without compromising the individual user information rate. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nanotechnology is enabling the development of sensing devices just a few hundreds of nanometers in size, which are able to measure new types of events in the nanoscale by exploiting the properties of novel nanomaterials. Wireless communication among these nanosensors will boost the range of applications of nanotechnology in the biomedical, environmental and military fields, amongst others. Within the different alternatives for communication in the nanoscale, recent advancements in nanomaterials point to the Terahertz band (0.1–10.0 THz) as the frequency range of operation of future electronic nano-devices. This still unlicensed band can theoretically support very large transmission bit-rates in the short range, i.e., for distances below one meter. More importantly, the Terahertz band also enables very simple communication mechanisms suited to the very limited capabilities of nanosensors. In this paper, a new communication paradigm called TS-OOK (Time Spread On-Off Keying) for Electromagnetic Wireless Nanosensor Networks (WNSNs) is presented. This new technique is based on the transmission of femtosecond-long pulses by following an on-off keying modulation spread in time. The performance of this scheme is assessed in terms of information capacity for the single-user case as well as aggregated network capacity for the multiuser case. The results show that by exploiting the peculiarities of the Terahertz band, this scheme provides a very simple but robust communication technique for WNSNs. Moreover, it is shown that, due to the peculiar behavior of the noise in the Terahertz band, the single-user capacity and the aggregated network capacity can exceed those of the AWGN channel classical wireless networks, when the appropriate channel codes are used. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless nanosensor networks (WNSNs), which are collections of nanosensors with communication units, can be used for sensing and data collection with extremely high resolution and low power consumption for various applications. In order to realize WNSNs, it is essential to develop energy-efficient communication techniques, since nanonodes are severely energy-constrained. In this paper, a novel minimum energy coding scheme (MEC) is proposed to achieve energy-efficiency in WNSNs. Unlike the existing minimum energy codes, MEC maintains the desired Hamming distance, while minimizing energy, in order to provide reliability. It is analytically shown that, with MEC, codewords can be decoded perfectly for large code distance, if source set cardinality, M is less than inverse of symbol error probability, 1/ps. Performance analysis shows that MEC outperforms popular codes such as Hamming, Reed-Solomon and Golay in average energy per codeword sense. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. 
COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Bio-nanosensors and communication at the nanoscale are a promising paradigm and technology for the development of a new class of ehealth solutions. While recent communication technologies such as mobile and wireless combined with medical sensors have allowed new successful eHealth applications, another level of innovation is required to deliver scalable and cost-effective solutions via developing devices that operate and communicate directly inside the body. This work presents the application of nano technology for the development of miniaturized bio-nanosensors that are able to communicate and exchange information about sensed molecules or chemical compound concentration and therefore draw a global response in the case of health anomalies. Two communication techniques are reviewed: electromagnetic wireless communication in the terahertz band and molecular communication. The characteristics of these two modes of communication are highlighted, and a general architecture for bio-nanosensors is proposed along with examples of cooperation schemes. An implementation of the bio-nanosensor part of the nanomachine is presented along with some experimental results of sensing biomolecules. Finally, a general example of coordination among bio-nanomachines using both communication technologies is presented, and challenges in terms of communication protocols, data transmission, and coordination among nanomachines are discussed. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless NanoSensor Networks (WNSNs), i.e., networks of nanoscale devices with unprecedented sensing capabilities, are the enabling technology of long-awaited applications such as advanced health monitoring systems or surveillance networks for chemical and biological attack prevention. The peculiarities of the Terahertz Band, which is the envisioned frequency band for communication among nano-devices, and the extreme energy limitations of nanosensors, which require the use of nanoscale energy harvesting systems, introduce major challenges in the design of MAC protocols for WNSNs. This paper aims to design energy and spectrum-aware MAC protocols for WNSNs with the objective to achieve fair, throughput and lifetime optimal channel access by jointly optimizing the energy harvesting and consumption processes in nanosensors. Towards this end, the critical packet transmission ratio (CTR) is derived, which is the maximum allowable ratio between the transmission time and the energy harvesting time, below which a nanosensor can harvest more energy than the consumed one, thus achieving perpetual data transmission. Based on the CTR, first, a novel symbol-compression scheduling algorithm, built on a recently proposed pulse-based physical layer technique, is introduced. The symbol-compression solution utilizes the unique elasticity of the inter-symbol spacing of the pulse-based physical layer to allow a large number of nanosensors to transmit their packets in parallel without inducing collisions. In addition, a packet-level timeline scheduling algorithm, built on a theoretical bandwidth-adaptive capacity-optimal physical layer, is proposed with an objective to achieve balanced single-user throughput with infinite network lifetime. 
The simulation results show that the proposed simple scheduling algorithms can enable nanosensors to transmit with extremely high speed perpetually without replacing the batteries. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> It is essential to develop energy-efficient communication techniques for nanoscale wireless communications. In this paper, a new modulation and a novel minimum energy coding scheme (MEC) are proposed to achieve energy efficiency in wireless nanosensor networks (WNSNs). Unlike existing studies, MEC maintains the desired code distance to provide reliability, while minimizing energy. It is analytically shown that, with MEC, codewords can be decoded perfectly for large code distances, if the source set cardinality is less than the inverse of the symbol error probability. Performance evaluations show that MEC outperforms popular codes such as Hamming, Reed-Solomon and Golay in the average codeword energy sense. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless nanosensor networks (WNSNs), which consist of a lot of nanosensors with size of just a few hundred nanometers and are able to detect and sense new types of events at the nanoscale, are promising for a lot of unique applications like intrabody drug delivery systems, air pollution surveillance, etc. One important feature of WNSNs is that the nanosensors are highly energy-constrained, which makes it essential to develop energy efficient protocols for different layers of such networks. This paper focuses on a WNSN with on-off keying (OOK) modulation and explores the problem of transmission energy minimization in it. We first propose a general minimum transmission energy (MTE) coding scheme, which maps m-bit symbols into n-bit codewords with the least number of high-bits and thus results in the lowest energy consumption per symbol for any given m and n. We further determine the optimal setting of symbol length m and codeword length n in the MTE coding scheme so as to achieve the minimum energy consumption per data bit, which serves as the lower bound of transmission energy consumption in such WNSNs. Numerical results are provided to demonstrate the efficiency of the MTE coding scheme. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> The progress of nanotechnology is paving the way to the emerging concept of wireless nanosensor network (WNSN). In fact, it is now possible to create integrated machines at the nano scale, which interact on cooperative basis using wireless communications. The research in this field is still in an embryonal stage and the design of the WNSN protocol suite represents a fundamental issue to address. Therefore, an open source simulation framework for WNSN would be highly beneficial to let research activities converge towards participated design methodologies. In an our recent work, we presented a new NS-3 module, namely Nano-Sim, modeling WNSNs based on electromagnetic communications in the Terahertz band. 
In its preliminary version, Nano-Sim provides a simple network architecture and a protocol suite for such an emerging technology. In this paper, we significantly improved our previous work in several directions. First, we have extended the tool by developing a new routing algorithm and a more efficient MAC protocol. Moreover, focusing the attention on a WNSN operating in a health monitoring scenario, we have investigated how the density of nodes, the transmission range of nanomachines, and the adoption of specific combinations of routing and MAC strategies may affect the network behavior. Finally, a study on Nano-Sim computational requirements has been also carried out, thus demonstrating how the developed module guarantees great achievements in terms of scalability. <s> BIB009 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nanonetworks consist of nano-sized communicating devices which are able to perform simple tasks at the nanoscale. Nanonetworks are the enabling technology of long-awaited applications such as advanced health monitoring systems or high-performance distributed nano-computing architectures. The peculiarities of novel plasmonic nano-transceivers and nano-antennas, which operate in the Terahertz Band (0.1-10 THz), require the development of tailored communication schemes for nanonetworks. In this paper, a modulation and channel access scheme for nanonetworks in the Terahertz Band is developed. The proposed technique is based on the transmission of one-hundred-femtosecond-long pulses by following an asymmetric On-Off Keying modulation Spread in Time (TS-OOK). The performance of TS-OOK is evaluated in terms of the achievable information rate in the single-user and the multi-user cases. An accurate Terahertz Band channel model, validated by COMSOL simulation, is used, and novel stochastic models for the molecular absorption noise in the Terahertz Band and for the multi-user interference in TS-OOK are developed. The results show that the proposed modulation can support a very large number of nano-devices simultaneously transmitting at multiple Gigabits-per-second and up to Terabits-per-second, depending on the modulation parameters and the network conditions. <s> BIB010 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless NanoSensor Networks (WNSNs) will allow novel intelligent nanomaterial-based sensors, or nanosensors, to detect new types of events at the nanoscale in a distributed fashion over extended areas. Two main characteristics are expected to guide the design of WNSNs architectures and protocols, namely, their Terahertz Band wireless communication and their nanoscale energy harvesting process. In this paper, a routing framework for WNSNs is proposed to optimize the use of the harvested energy to guarantee the perpetual operation of the WNSN while, at the same time, increasing the overall network throughput. The proposed routing framework, which is based on a previously proposed medium access control protocol for the joint throughput and lifetime optimization in WNSNs, uses a hierarchical cluster-based architecture that offloads the network operation complexity from the individual nanosensors towards the cluster heads, or nano-controllers. 
This framework is based on the evaluation of the probability of saving energy through a multi-hop transmission, the tuning of the transmission power of each nanosensor for throughput and hop distance optimization, and the selection of the next hop nanosensor on the basis of their available energy and current load. The performance of this framework is also numerically evaluated in terms of energy, capacity, and delay, and compared to that of the single-hop communication for the same WNSN scenario. The results show how the energy per bit consumption and the achievable throughput can be jointly maximized by exploiting the peculiarities of this networking paradigm. <s> BIB011 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> This paper presents the characteristics of electromagnetic waves propagating inside human body at Terahertz frequencies and an initial study of the system performance of nano-network. It has been observed that the path loss is not only the function of distance and frequency but also related to the dielectric loss of human tissues. Numerical results have been compared with analytical studies and a good match has been found which validates the proposed numerical model. Based on the calculation of path losses and noise level for THz wave propagation, the channel capacity is studied to give an insight of future nano-communications within the human body. Results show that at the distance of millimeters, the capacity can reach as high as 100 Terabits per second (Tbps) depending on the environment and exciting pulse types. <s> BIB012 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless Nano-scale Sensor Networks (WNSNs) are very simple and energy restricted networks that operate over terahertz band ranging from 0.1–10 THz, which faces significant molecular absorption noise and attenuation. Given these challenges, reliability, energy efficiency, and simplicity constitute the main criteria in designing communication protocols for WNSNs. Due to its simplicity and energy efficiency, carrier-less pulse based modulation is considered the best candidate for WNSNs. In this paper, we compare the performance of four different carrier-less modulations, PAM, OOK, PPM, and BPSK, in the context of WNSNs operating within the terahertz band. Our study shows that although BPSK is relatively more complex in terms of decoding logic at the receiver, it provides the highest reliability and energy efficiency among all the contenders. PAM has the worst performance in terms of reliability as well as energy efficiency. OOK and PPM have simpler decoding logic, but perform worse than BPSK in both reliability and energy efficiency. <s> BIB013 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nano-communication is considered to become a major building block for many novel applications in the health care and fitness sector. Given the recent developments in the scope of nano machinery, coordination and control of these devices becomes the critical challenge to be solved. 
In-Body Nano-Communication based on either molecular, acoustic, or RF radio communication in the terahertz band supports the exchange of messages between these in-body devices. Yet, the control and communication with external units is not yet fully understood. In this paper, we investigate the challenges and opportunities of connecting Body Area Networks and other external gateways with in-body nano-devices, paving the road towards more scalable and efficient Internet of Nano Things (IoNT) systems. We derive a novel network architecture supporting the resulting requirements and, most importantly, investigate options for the simulation based performance evaluation of such novel concepts. Our study is concluded by a first look at the resulting security issues considering the high impact of potential misuse of the communication links. <s> BIB014 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> The present paper introduces a joint coordinate and routing system (CORONA) which can be deployed dynamically on a 2D ad-hoc nanonetwork. User-selected nodes are used as anchor-points at the setup phase. All nodes then measure their distances, in number of hops, from these anchors, obtaining a sense of geolocation. At operation phase, the routing employs the appropriate subset of anchors, selected by the sender of a packet. CORONA requires minimal setup overhead and simple integer-based calculations only, imposing limited requirements for trustworthy operation. Once deployed, it operates efficiently, yielding a very low packet retransmission and packet loss rate, promoting energy-efficiency and medium multiplexity. <s> BIB015 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Electromagnetic-based Wireless NanoSensor Networks (EM-WNSNs) operating in the TeraHertz (THz) band (0.1 THz--10 THz) has been in focus recently because of potential applications in nano-scale scenarios. However, one major hurdle for advancing nano-scale communications is the lack of suitable networking protocols to address current and future needs of nanonetworks. Working together with routing that finds the path from a source to destination, forwarding is a networking task of sending a packet to the next-hop along its path to the destination. While forwarding has been straightforward in traditional wired networks, forwarding schemes now play a vital role in determining wireless network performance. In this paper, we propose a channel-aware forwarding scheme and compare it against traditional forwarding schemes for wireless sensor networks. To fit the peculiarity of EM-WNSNs, the channel-aware forwarding scheme makes forwarding decision considering the frequency selective pecularities of the THz channel which are undesirable from a networking perspective. It is shown through simulation that the proposed channel-aware forwarding scheme outperforms traditional forwarding schemes in terms of the end-to-end capacity while maintaining comparable performance for delay. <s> BIB016 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. 
COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Wireless networks of nano-nodes will play a critical role in future medical, quality control, environmental monitoring and military applications. Nano-nodes are invisible/marginally visible to the human eye, ranging in size from approximately 100 $\mu \text{m}$ to few nanometers. Nano-networking poses unique challenges, requiring ground-breaking solutions. First, the nano-scale imposes severe restrictions to the computational and communication capabilities of the nodes. Second, nano-nodes are not accessible for programming, configuration and debugging in the classical sense. Thus, a nano-network should be self-configuring, resilient and adaptive to environmental changes. Finally, all nano-networking protocols should be ultra-scalable, since a typical nano-network may comprise billions of nodes. The study contributes a novel paradigm for data dissemination in networking nano-machines, addressing these unique challenges. Relying on innovative analytical results on lattice algebra and nature-inspired processes, a novel data dissemination method is proposed. The nano-nodes exploit their environmental feedback and mature adaptively into network backbone or remain single network users. Such a process can be implemented as an ultra-scalable, low complexity, multi-modal nano-node architecture (physical layer), providing efficient networking and application services at the same time. Requiring existing manufacturing technology, the proposed architecture constitutes the first candidate solution for realizable nano-networking. <s> BIB017 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Abstract Body Area Nano-NETworks (BANNETs) consist of integrated nano-machines, diffused in the human body for collecting diagnostic information and tuning medical treatments. Endowed with communication capabilities, such nano-metric devices can interact with each other and the external micro/macro world, thus enabling advanced health-care services (e.g., therapeutic, monitoring, sensing, and telemedicine tasks). Due to limited computational and communication capabilities of nano-devices, as well as their scarce energy availability, the design of powerful BANNET systems represents a very challenging research activity for upcoming years. Starting from the most significant and recent findings of the research community, this work provides a further step ahead by proposing a hierarchical network architecture, which integrates a BANNET and a macro-scale health-care monitoring system and two different energy-harvesting protocol stacks that regulate the communication among nano-devices during the execution of advanced nano-medical applications. The effectiveness of devised solutions and the comparison with the common flooding-based communication technique have been evaluated through computer simulations. Results highlight pros and cons of considered approaches and pave the way for future activities in the Internet of Nano-Things and nano-medical research fields. <s> BIB018 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. 
COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Abstract This paper focuses on molecular absorption noise caused by molecular absorption in the higher frequency bands, such as THz band (0.1–10 THz). This transmission induced noise has been predicted to exist in the THz band, since the conservation of energy requires the conservation of the absorbed energy in the medium. There exist multiple models for the molecular absorption noise. Most of them focus only on the transformation of the absorbed energy directly into antenna temperature. This paper aims at giving additional perspectives to the molecular absorption noise. It is shown that the molecular absorption noise can be investigated with multiple different approaches, strongly affecting on the predicted strength and behavior of the noise. The full molecular absorption noise model is not given in this paper. Instead, we study the molecular absorption noise from different perspectives and give their derivations and the general ideas behind the noise modeling. <s> BIB019 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Nanosized devices operating inside the human body open up new prospects in the healthcare domain. Invivo wireless nanosensor networks (iWNSNs) will result in a plethora of applications ranging from intrabody health-monitoring to drug-delivery systems. With the development of miniature plasmonic signal sources, antennas, and detectors, wireless communications among intrabody nanodevices will expectedly be enabled at both the terahertz band (0.1-10 THz) as well as optical frequencies (400-750 THz). This result motivates the analysis of the phenomena affecting the propagation of electromagnetic signals inside the human body. In this paper, a rigorous channel model for intrabody communication in iWNSNs is developed. The total path loss is computed by taking into account the combined effect of the spreading of the propagating wave, molecular absorption from human tissues, as well as scattering from both small and large body particles. The analytical results are validated by means of electromagnetic wave propagation simulations. Moreover, this paper provides the first framework necessitated for conducting link budget analysis between nanodevices operating within the human body. This analysis is performed by taking into account the transmitter power, medium path loss, and receiver sensitivity, where both the THz and photonic devices are considered. The overall attenuation model of intrabody THz and optical frequency propagation facilitates the accurate design and practical deployment of iWNSNs. <s> BIB020 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> The envisioned dense nano-network inside the human body at terahertz (THz) frequency suffers a communication performance degradation among nano-devices. The reason for this performance limitation is not only the path loss and molecular absorption noise, but also the presence of multi-user interference and the interference caused by utilising any communication scheme, such as time spread ON—OFF keying (TS-OOK). 
In this paper, an interference model utilising TS-OOK as a communication scheme of the THz communication channel inside the human body has been developed and the probability distribution of signal-to-interference-plus-noise ratio (SINR) for THz communication within different human tissues, such as blood, skin, and fat, has been analyzed and presented. In addition, this paper evaluates the performance degradation by investigating the mean values of SINR under different node densities in the area and the probabilities of transmitting pulses. It results in the conclusion that the interference restrains the achievable communication distance to approximate 1 mm, and more specific range depends on the particular transmission circumstance. Results presented in this paper also show that by controlling the pulse transmission probability and node density, the system performance can be ameliorated. In particular, SINR of in vivo THz communication between the deterministic targeted transmitter and the receiver with random interfering nodes in the medium improves about 10 dB, when the node density decreases one order. The SINR increases approximate 5 and 2 dB, when the pulse transmitting probability drops from 0.5 to 0.1 and 0.9 to 0.5. <s> BIB021 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Abstract Micro and nanorobotics represents one of the most challenging sectors of modern robotics. Through batch fabrication of Micro Electro-Mechanical Systems (MEMS), advanced small scale sensing and actuating tasks in a wide area of applications can be performed. Most miniaturized electro-mechanical devices are characterized by low-power and low-memory capacity. The huge number of modular robots introduces the need to explore novel self-reconfiguration algorithms to optimize movement and communication performances in terms of efficiency, parallelism and scalability. Nano-transceivers and nano-antennas operating in the Terahertz Band are already a well acquainted communication paradigm, enforcing nano-wireless networking that can be directly integrated in MEMS microrobots. Several logical topology shape-shifting algorithms are already implemented and tested in literature, along with performance evaluation on nano-wireless use. This article aims to provide an algorithm to reconnect groups of microrobots, along with a novel movement model for microrobotics ensembles introduced to enforce more realistic simulations. Special emphasis is given on the need of novel movement algorithms for swarms of microrobots. <s> BIB022 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Abstract We discuss the combination of in-body nano communication with the Internet of Things (IoT) as the Internet of Nano Things (IoNT). This combination enables a wide range of new applications and opportunities – particularly in the biomedical domain – but it also entails a number of new challenges. One of many research challenges in functional and non-functional aspects is the addressing and naming of nodes in a nano network. Our study in this area not only includes traditional techniques driven from today’s IoT, but also new unconventional ideas, originating from molecular level communication. 
We come up with a summary of either theoretical, simulated or realized ideas to draw conclusions about implementations and performance potential, with a focus on medical in-body communication scenarios, before we present our concept, Function Centric Nano-Networking (FCNN). FCNN allows us to address groups of interchangeable nano machines in a network by using location information and functional capabilities of the machines. This concept does not rely on the durability and uniqueness of individual nodes. We are comparing the novel concept of FCNN with similar ones and highlight elementary differences between them as well advantages and disadvantages. <s> BIB023 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> VI. COMMUNICATION AND NETWORKING OF EM AND MOLECULAR BODY-CENTRIC NANONETWORKS <s> Researchers consider wireless nanosensor networks (WNSNs) as a revolutionary emerging network paradigm from the point of its diversified applications and contributions to the humanity. Existing research in this field is still in elementary stage and performance enhancement via designing protocol suit represents a potential issue to address for this field. However, most of the studies in the literature mainly focus on lower layers, i.e., Physical and MAC layer protocols leaving upper layers such as Network layer and Transport layer protocols still unexplored. Therefore, in this paper, we explore performance enhancement in WNSNs via modifications in the existing network and transport layers protocols. In this paper, we devise a hierarchical AODV routing protocol and an acknowledgement-based UDP protocol for WNSNs. We perform rigorous simulation in ns−2 to prove the efficacy and efficiency of our proposed mechanisms. Our simulation results reveal significant performance enhancement in wireless nanosensor networks for our proposed protocols. <s> BIB024
|
A. EM-Based Body-Centric Nanonetworks

1) Physical Layer and MAC Layer:

a) Path Loss Model: Studies on THz channel modelling for nano-communication have been conducted in BIB012 - , building on the models developed for the THz channel in air BIB001 - BIB010 . From these studies, it can be concluded that the path loss of a THz wave inside human tissues consists of three parts: the spread path loss PL_spr, the absorption path loss PL_abs and the scattering path loss PL_sca,

PL_total(f, r) = PL_spr(f, r) · PL_abs(f, r) · PL_sca(f, r),

where f is the frequency and r stands for the path length. The spread path loss, caused by the expansion of the wave in the medium, is defined as

PL_spr(f, r) = (4πr/λ_g)² = 4πr² · 4π(n_r f/c)²,

where λ_g = λ_o/n_r represents the wavelength in the medium with free-space wavelength λ_o, and r stands for the transmission distance of the wave. Generally, the electromagnetic power is considered to spread spherically: 4πr² denotes the isotropic expansion term and 4π(n_r f/c)² is the frequency-dependent receiver antenna aperture term. The absorption path loss represents the attenuation due to molecular absorption in the medium; part of the wave energy is converted into internal kinetic energy that excites the molecules of the medium. By inverting the transmittance of the medium τ(f, r), the absorption loss is obtained as

PL_abs(f, r) = 1/τ(f, r) = exp(α(f) r),

where α is the absorption coefficient and r is the distance. The scattering path loss accounts for the loss of signal caused by the deflection of the beam due to non-uniformities of the environment. Taking the human body as an example, it contains a multitude of molecules, cells and organs with various shapes and EM properties. The effect depends not only on the size, shape and EM properties of the particles but also on the wavelength of the transmitted signal. In BIB020 , this phenomenon was discussed in detail and the scattering loss can be written as

PL_sca(f, r) = exp(μ_sca(f) r),

where μ_sca refers to the scattering coefficient and r is the travelling distance. In BIB020 , the effects of all three path loss contributions have been fully discussed for in-body nano-communication, and it is stated that the scattering path loss is almost negligible compared with the absorption path loss in the THz band.
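The relative size of the three terms can be checked with a short numerical sketch. The absorption coefficient, scattering coefficient and refractive index used below are placeholder values for illustration only, not measured tissue parameters.

```python
import math

C = 3e8  # speed of light in vacuum (m/s)

def path_loss_db(f, r, n_r, alpha, mu_sca):
    """Spreading, absorption and scattering path loss (in dB) as defined above."""
    spread = (4 * math.pi * r * n_r * f / C) ** 2   # PL_spr = (4*pi*r/lambda_g)^2
    absorption = math.exp(alpha * r)                # PL_abs = 1/tau = exp(alpha*r)
    scattering = math.exp(mu_sca * r)               # PL_sca = exp(mu_sca*r)
    to_db = lambda x: 10 * math.log10(x)
    return to_db(spread), to_db(absorption), to_db(scattering)

# Placeholder values (NOT measured tissue data): 1 THz, 1 mm path,
# refractive index 2.0, absorption 8000 1/m, scattering 100 1/m
spr, absn, sca = path_loss_db(1e12, 1e-3, 2.0, 8e3, 1e2)
print(f"spreading {spr:.1f} dB, absorption {absn:.1f} dB, scattering {sca:.1f} dB")
```

With these illustrative numbers the scattering term contributes well under 1 dB, consistent with the observation above that it is negligible compared with absorption.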
b) Noise model: The molecular absorption noise is the main noise contribution in the Terahertz band; it is introduced by vibrating molecules that partially re-radiate the energy absorbed from the EM waves . Such noise is therefore dependent on the transmitted signal. In BIB006 , a noise model was investigated, while in BIB021 the noise within human tissues was studied. The total molecular absorption noise p.s.d. S_N can be considered as the sum of the atmospheric noise S_N0, the self-induced noise S_N1 and noise originating from other sources such as devices, S_No:

S_N(r, f) = S_N0(f) + S_N1(r, f) + S_No(f),

with

S_N0(f) = lim_{r→∞} k_B T_0 (1 − exp(−α(f) r)) (c/(√(4π) f_0))²,

S_N1(r, f) = S(f) (1 − exp(−α(f) r)) (c/(4πrf))²,

where r refers to the propagation distance, f stands for the frequency of the EM wave, k_B is the Boltzmann constant, T_0 is the reference temperature of the medium, α(f) is the absorption coefficient, c is the speed of light in vacuum, f_0 is the design centre frequency, and S is the p.s.d. of the transmitted signal. The atmosphere can be seen as an effective black-body radiator in a homogeneously absorbing medium; thus, an absorbing atmosphere at any temperature produces the atmospheric noise BIB019 . This atmospheric noise is also called the background noise and is independent of the transmitted signal. However, the background-noise expression above only describes the special case of a THz wave in air. Without loss of generality, the term k_B T_0 should be replaced with Planck's law, which describes the general radiation of a black body BIB019 . Therefore, the molecular absorption noise contains three main contributors: the background noise S_Nb(r, f), the self-induced noise S_Ns(r, f) and the other noise S_No(r, f):

S_N(r, f) = S_Nb(r, f) + S_Ns(r, f) + S_No(r, f).

Detailed discussions were conducted in BIB021 , where it was found that the molecular absorption noise is the essential contributor to the noise at the receiver. Meanwhile, the self-induced and background noise p.s.d. in human tissues were investigated as well, and the following trends were observed:
• The background noise p.s.d. stays nearly constant for all three tissue types because of the small differences in refractive index.
• The self-induced noise p.s.d. changes slowly with frequency, in contrast to the strong fluctuations observed for THz communication in air .
• The self-induced noise p.s.d. is much larger than the background noise for all three human tissues, leading to the conclusion that the background noise can be neglected in vivo.

c) Modulation Technique: Because of their limited size, nano-devices are power-limited; thus, it is not possible to adopt traditional carrier-based modulation techniques, which would incur excessive energy consumption. In this context, carrier-less pulse-based modulation is investigated in BIB013 . A pulse-based modulation technique named TS-OOK is studied in BIB002 and improved in BIB010 to fully exploit the potential of nano-devices made of graphene. So far, TS-OOK is the most promising communication scheme for resource-constrained nanonetworks. To investigate the collision between symbols in body-centric nano-communication, reference BIB021 studied the feasibility of TS-OOK as a THz-band communication scheme for in-body nanonetworks, considering not only the noise but also the interference. It shows that the received signal power is closely related to the transmitted signal power; thus, the transmitted power must be chosen carefully so that the difference between the received power and the silence pulse is large enough for accurate detection. In BIB003 , TS-OOK is introduced and femtosecond-long pulses are used as the communication signal between nano-devices . Reference BIB010 analysed this pulse-based modulation with a transmitted pulse length of 100 fs; the channel access scheme of nanonetworks in the THz band was also proposed and analysed, covering both the interference-free and the multi-user scenario, and the model was evaluated with COMSOL Multiphysics . The results showed that such modulation schemes are suitable for nanonetworks and that, by choosing suitable parameters, the rates can range from a few Gbps to a few Tbps. Later, Rate Division Time-Spread On-Off Keying (RD TS-OOK) was studied in and the PHysical Layer Aware MAC protocol for Electromagnetic nanonetworks in the Terahertz Band (PHLAME) was first proposed. These two concepts aim to support the extremely high density of nanodevices in nanonetworks and push the network throughput up to tens of Gbps. In 2013, the critical packet transmission ratio (CTR) was derived in BIB006 to introduce an energy- and spectrum-aware MAC protocol that lets nano-sensors transmit at high speed with very little energy consumption.
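A minimal sketch of the TS-OOK idea follows: a 100-fs pulse encodes a '1', silence encodes a '0', and consecutive symbols are spread in time by a factor β = T_s/T_p. The value of β used here is illustrative rather than taken from the cited papers.

```python
PULSE_DURATION = 100e-15          # T_p: 100-fs pulse, as in the cited scheme
SPREADING_FACTOR = 1000           # beta = T_s / T_p (illustrative value)
SYMBOL_PERIOD = SPREADING_FACTOR * PULSE_DURATION

def ts_ook_schedule(bits, start_time=0.0):
    """Return (time, symbol) pairs: a pulse for '1', silence for '0',
    with consecutive symbols spread in time by SYMBOL_PERIOD."""
    events = []
    for i, bit in enumerate(bits):
        t = start_time + i * SYMBOL_PERIOD
        events.append((t, "pulse" if bit == 1 else "silence"))
    return events

# The receiver only needs to sample the channel at these known instants.
for t, sym in ts_ook_schedule([1, 0, 1, 1, 0]):
    print(f"t = {t:.2e} s -> {sym}")
```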
Fixed-length codewords with a constant weight can be used not only to reduce the power consumption, but also to reduce the interference BIB002 . Kocaoglu et al. BIB004 , BIB007 later proposed fixed-length coding methods that maintain the Hamming distance between codewords while minimising the Average Code Weight (ACW). A performance study of fixed-length codes in terms of ACW and code length was conducted in BIB008 . Based on this research, variable-length low-weight codes for OOK modulation were investigated in , which lower the transmission energy while keeping the desired throughput.

2) Network Layer:
• Addressing
The IEEE 1906 standard defines specificity as the technique that enables the reception of a message carrier by a target, and maps it to an address in classical communication systems. However, it does not provide any discussion on how to generate, manage, or assign the specificity component to nano-nodes in molecular or EM nanonetworks. Individualised network addresses and conventional addressing are neither feasible nor practical at the nano-scale of nanonetworks. Therefore, cluster-based addressing is advantageous over node-based addressing: it provides the ability to address a group of nodes with a specific function in monitoring health or in a specific biological organ BIB005 . Additionally, addressing may safely be assumed to be required in the inbound direction within the nanonetwork, to inform a cluster or a nanonetwork performing a specific function (application) about its next action. In the outbound direction, however, no addressing is necessary, since the outbound device is the sink of communication of the nanonetwork; whenever a gateway receives a message from inside, it simply forwards it to that device BIB014 . Hence, conventional addressing is not necessary for nanonetworks: to reach a destination it may be sufficient to know the right direction, since that direction may be the only possible option as discussed above, or any member of the cluster in that direction may be a suitable destination. For example, broadcasting a message in nanonetworks may be a solution for data dissemination because of the low probability of collision in the THz band, owing to the wide bandwidth and short transmission time. A receiver overhearing the message decides whether the message is of interest. This method can be naturally implemented in molecular nanonetworks. Direct connectivity between nano-devices is another example, where guided communication can be provided via antenna aperture, resonant frequency, or impedance matching in EM nanonetworks, and via the shape or affinity of a molecule to a particular target, complementary DNA for hybridisation, etc., in molecular networks. In the literature, several authors proposing routing or MAC protocols for EM nanonetworks assumed that the nano-nodes are assigned addresses without discussing how (e.g., BIB022 , BIB009 ). Few studies discuss nanonetwork addressing. Stelzner et al. BIB023 proposed an addressing scheme that is based on the function of the nanosensor and its location rather than focusing on individual nodes. The authors proposed employing known protocols like IPv6, or overhead-reduced variants like 6LoWPAN, for the control station and gateways. In the proposed scheme, it is irrelevant which specific sensor detects an event or which node executes a service, as long as the right function is performed at the right location.
However, this scheme is challenged when exact and specific quantities are required, as in releasing a certain amount of a drug. Addressing a subset of nodes based on the required quantity, in the absence of individual node addressing, remains an open area of research.
• Routing
One of the most fundamental concerns for body-centric nanonetworks is accurate routing, in order to transmit signals promptly and precisely. Several challenges affect the routing protocol, including energy, complexity, latency and throughput. Given the limited resources of nano-sensors, one of the most important requirements is to reduce energy consumption. There have been a few attempts towards achieving energy efficiency in such networks by multi-hop networking BIB011 - BIB015 . A routing framework for WNSNs is proposed to guarantee perpetual operation while increasing the overall network throughput BIB011 . The framework uses a hierarchical cluster-based architecture. The choice between direct and multi-hop transmission is determined by the probability of energy savings through the transmission process. It is concluded that multi-hop transmission performs better for varying distance. However, only two hop counts are considered, and the performance evaluation would benefit from considering more hops. Besides, the framework mainly focuses on WNSNs and does not address the requirements and constraints of BCNNs. The primary task of a networking protocol is forwarding, i.e., sending packets to the next hop along the path to the destination. In traditional wireless sensor networks (WSNs), multi-hop forwarding schemes, including nearest-hop forwarding, longest-hop forwarding and random forwarding, as well as single-hop end-to-end transmission, are utilised. For long-range THz wireless nano-sensor networks (WNSNs) with absorption-defined windows, a channel-aware forwarding scheme is proposed in BIB016 to overcome the frequency-selective feature. The selection of the next hop is a trade-off between minimising the transmission distance and the hop count. Nevertheless, all relay nodes are assumed to have sufficient energy and computation capacity, which is impractical. Moreover, the authors in BIB015 propose a geographic routing protocol. User-selected nodes are used as anchor points in the setup phase, and all nodes measure their distances from these anchors to obtain their addresses. The routing then employs an appropriate subset of anchors selected by the sender of a packet. However, the proposed scheme is based on a fixed topology, neglecting the mobility and dynamics of nano-nodes. A flood-based data dissemination scheme is introduced in BIB017 . This scheme classifies each node as infrastructure or network user after assessing the reception quality. Only infrastructure nodes can act as re-transmitters, while the remaining nodes revert to a receive-only mode. This approach improves energy efficiency by avoiding unconditional broadcast and relieving severe redundancy and collisions. Nonetheless, this dynamically-forming infrastructure requires topology-dependent optimisation and digital signal processing capabilities at the nano-nodes. The design of BCNN routing protocols thus remains a challenge with no complete solutions, despite the growing research tackling this area. Two kinds of energy-harvesting protocol stacks that regulate the communication among nano-devices are proposed in BIB018 .
The greedy energy-harvesting scheme simply delivers the packet to the node with the higher energy level, while the optimal energy-harvesting scheme selects the node that maximises the overall energy level within each cluster. Both schemes show better performance than the traditional flooding-based scheme. However, the optimal routing strategy cannot easily be employed because of its high computational requirements. Besides, the transmission distance is not taken into consideration, which makes relay-path selection based only on the energy level inappropriate. Recently, a cognitive routing scheme, named the enhanced energy-efficient approach, was proposed for IoNT . An analytic-hierarchy process is implemented as the reasoning element to make cognitive decisions based on observing the dynamically changing topology of the network.

3) Transport Layer: The IEEE P1906.1 standard, in mapping the nanonetwork to the conventional layering system, ignored the transport layer, as shown in Table II . Reliable transmission is a requirement for practical implementation of nanonetworks. Due to the peculiar characteristics of nanonetworks, researchers agree that reliability at either the MAC layer or the transport layer is sufficient, but it is not necessary at both. Hence, the IEEE P1906.1 framework assumes the existence of the MAC layer and the absence of the transport layer. Piro et al. BIB009 implemented two types of MAC protocols, transparent MAC and Smart MAC, in designing their nano simulator. Transparent MAC pushes packets from the network layer to the physical interface without any processing at the MAC layer. Smart MAC enqueues a packet on reception to discover the neighbouring nodes before sending the packet through a handshaking procedure. For transparent MAC, researchers assume that the reliability service is shifted to the transport layer, thereby advocating the existence of a transport layer. The authors of proposed adapting the Optimized Exchange Protocol (OEP), which is part of the IEEE 11073-20601 standard and is particularly important in telemedicine, to provide access points to services from the application layer to the transport layer. The OEP protocol is flexible and lightweight, which makes it suitable for implementation on nano-devices with constrained processing power and storage. However, the authors did not propose any technique for how to adapt or implement the OEP protocol for nanonetworks. Tairin et al. BIB024 proposed an acknowledgement-based UDP protocol to improve the packet delivery ratio in a nanonetwork. The proposed protocol utilises a timeout timer in UDP to double-check whether a packet has been delivered to the destination. The authors evaluated the performance of the protocol via simulation and found that it improved the packet delivery ratio but introduced additional delay to the network. Few proposals have addressed transport layer protocol design, and this area remains largely unexplored in academic and industrial research. The interaction of congestion avoidance and reliability between the MAC layer and the transport layer, along with the trade-off between induced delay and energy consumption, is yet to be explored.
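To make the path-loss and noise expressions at the beginning of this subsection concrete, the short Python sketch below evaluates the three loss terms and the two molecular-absorption-noise contributions for an in-body THz link. All tissue parameters (refractive index, absorption and scattering coefficients) and the transmit p.s.d. are illustrative placeholders rather than measured values, so the printed numbers only indicate the relative size of the terms.

```python
import numpy as np

C0 = 3e8          # speed of light in vacuum (m/s)
K_B = 1.38e-23    # Boltzmann constant (J/K)

def spread_loss(f, r, n_r):
    """Spreading loss: isotropic expansion times the aperture term, (4*pi*r*n_r*f/c)^2."""
    return (4 * np.pi * r * n_r * f / C0) ** 2

def absorption_loss(alpha, r):
    """Absorption loss obtained by inverting the medium transmittance, exp(alpha*r)."""
    return np.exp(alpha * r)

def scattering_loss(mu_sca, r):
    """Scattering loss, exp(mu_sca*r); usually negligible next to absorption at THz."""
    return np.exp(mu_sca * r)

def background_noise_psd(r, alpha, T0, f0):
    """Background noise p.s.d.: k_B*T0*(1-exp(-alpha*r))*(c/(sqrt(4*pi)*f0))^2."""
    return K_B * T0 * (1 - np.exp(-alpha * r)) * (C0 / (np.sqrt(4 * np.pi) * f0)) ** 2

def self_induced_noise_psd(S_tx, f, r, alpha):
    """Self-induced noise p.s.d.: S(f)*(1-exp(-alpha*r))*(c/(4*pi*r*f))^2."""
    return S_tx * (1 - np.exp(-alpha * r)) * (C0 / (4 * np.pi * r * f)) ** 2

if __name__ == "__main__":
    f = 1.0e12        # operating frequency: 1 THz
    r = 5e-3          # link distance: 5 mm inside tissue
    n_r = 2.0         # placeholder refractive index of tissue
    alpha = 800.0     # placeholder absorption coefficient (1/m)
    mu_sca = 10.0     # placeholder scattering coefficient (1/m)

    pl_spr = spread_loss(f, r, n_r)
    pl_abs = absorption_loss(alpha, r)
    pl_sca = scattering_loss(mu_sca, r)
    total_db = 10 * np.log10(pl_spr * pl_abs * pl_sca)

    print(f"spreading loss : {10*np.log10(pl_spr):6.1f} dB")
    print(f"absorption loss: {10*np.log10(pl_abs):6.1f} dB")
    print(f"scattering loss: {10*np.log10(pl_sca):6.1f} dB")
    print(f"total path loss: {total_db:6.1f} dB")

    S_tx = 1e-18      # placeholder transmit p.s.d. (W/Hz)
    n0 = background_noise_psd(r, alpha, T0=310.0, f0=1.0e12)
    n1 = self_induced_noise_psd(S_tx, f, r, alpha)
    print(f"background noise p.s.d.  : {n0:.3e} W/Hz")
    print(f"self-induced noise p.s.d.: {n1:.3e} W/Hz")
```

With such placeholder values the absorption term dominates the distance- and frequency-dependent behaviour, which is consistent with the observation above that scattering is almost negligible in the THz band.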
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> This book is a lucid, straightforward introduction to the concepts and techniques of statistical physics that students of biology, biochemistry, and biophysics must know. It provides a sound basis for understanding random motions of molecules, subcellular particles, or cells, or of processes that depend on such motion or are markedly affected by it. Readers do not need to understand thermodynamics in order to acquire a knowledge of the physics involved in diffusion, sedimentation, electrophoresis, chromatography, and cell motility--subjects that become lively and immediate when the author discusses them in terms of random walks of individual particles. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Molecular communication is a biologically-inspired method of communication with attractive properties for microscale and nanoscale devices. In molecular communication, messages are transmitted by releasing a pattern of molecules at a transmitter, which propagate through a fluid medium towards a receiver. In this paper, molecular communication is formulated as a mathematical communication problem in an information-theoretic context. Physically realistic models are obtained, with sufficient abstraction to allow manipulation by communication and information theorists. Although mutual information in these channels is intractable, we give sequences of upper and lower bounds on the mutual information which trade off complexity and performance, and present results to illustrate the feasibility of these bounds in estimating the true mutual information. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Abstract Abstract Molecular communication is a new communication paradigm that uses molecules for information transmission between nanomachines. Similar to traditional communication systems, several factors constitute limits over the performance of this communication system. One of these factors is the energy budget of the transmitter. It limits the rate at which the transmitter can emit symbols, i.e., produce the messenger molecules. In this paper, an energy model for the communication via diffusion system is proposed. To evaluate the performance of this communication system, first a channel model is developed, and also the probability of correct decoding of the information is evaluated. Two optimization problems are set up for system analysis that focus on channel capacity and data rate. Evaluations are carried out using the human insulin hormone as the messenger molecule and a transmitter device whose capabilities are similar to a pancreatic β -cell. Results show that distance between the transmitter and receiver has a minor effect on the achievable data rate whereas the energy budget’s effect is significant. It is also shown that selecting appropriate threshold and symbol duration parameters are crucial to the performance of the system. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Abstract Currently, Communication via Diffusion (CvD) is one of the most prominent systems in nanonetworks. 
In this paper, we evaluate the effects of two major interference sources, Intersymbol Interference (ISI) and Co-channel Interference (CCI) in the CvD system using different modulation techniques. In the analysis of this paper, we use two modulation techniques, namely Concentration Shift Keying (CSK) and Molecule Shift Keying (MoSK) that we proposed in our previous paper. These techniques are suitable for the unique properties of messenger molecule concentration waves in nanonetworks. Using a two transmitting couple simulation environment, the channel capacity performances of the CvD system utilizing these modulation techniques are evaluated in terms of communication range, distance between interfering sources, physical size of devices, and average transmission power. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> In this letter, a new molecular modulation scheme for nanonetworks is proposed. To evaluate the scheme, a system model based on the Poisson distribution is introduced. The error probability of the proposed scheme as well as that of two previously known schemes, the concentration and molecular shift keying modulations, are derived for the Poisson model by taking into account the error propagation effect of previously decoded symbols. The proposed scheme is shown to outperform the previously introduced schemes. This is due to the fact that the decoding of the current symbol in the proposed scheme does not encounter propagation of error, as the decoding of the current symbol does not depend on the previously transmitted and decoded symbols. Finally, fundamental limits on the probability of error of a practical set of encoders and decoders are derived using information theoretical tools. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Molecular communication is an emerging communication paradigm for biological nanomachines. It allows biological nanomachines to communicate through exchanging molecules in an aqueous environment and to perform collaborative tasks through integrating functionalities of individual biological nanomachines. 
This paper develops the layered architecture of molecular communication and describes research issues that molecular communication faces at each layer of the architecture. Specifically, this paper applies a layered architecture approach, traditionally used in communication networks, to molecular communication, decomposes complex molecular communication functionality into a set of manageable layers, identifies basic functionalities of each layer, and develops a descriptive model consisting of key components of the layer for each layer. This paper also discusses open research issues that need to be addressed at each layer. In addition, this paper provides an example design of targeted drug delivery, a nanomedical application, to illustrate how the layered architecture helps design an application of molecular communication. The primary contribution of this paper is to provide an in-depth architectural view of molecular communication. Establishing a layered architecture of molecular communication helps organize various research issues and design concerns into layers that are relatively independent of each other, and thus accelerates research in each layer and facilitates the design and development of applications of molecular communication. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> The memoryless additive inverse Gaussian noise channel model describing communication based on the exchange of chemical molecules in a drifting liquid medium is investigated for the situation of simultaneously an average-delay and a peak-delay constraint. Analytical upper and lower bounds on its capacity in bits per molecule use are presented. These bounds are shown to be asymptotically tight, i.e., for the delay constraints tending to infinity with their ratio held constant (or for the drift velocity of the fluid tending to infinity), the asymptotic capacity is derived precisely. Moreover, characteristics of the capacity-achieving input distribution are derived that allow accurate numerical computation of capacity. The optimal input appears to be a mixed continuous and discrete distribution. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Within the domain of molecular communications, researchers mimic the techniques in nature to come up with alternative communication methods for collaborating nanomachines. This letter investigates the channel transfer function for molecular communications via diffusion. In nature, information-carrying molecules are generally absorbed by the target node via receptors. Using the concentration function, without considering the absorption process, as the channel transfer function implicitly assumes that the receiver node does not affect the system. In this letter, we propose a solid analytical formulation and analyze the signal metrics (attenuation and propagation delay) for molecular communication via diffusion channel with an absorbing receiver in a 3-D environment. The proposed model and the formulation match well with the simulations without any normalization. 
<s> BIB009 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> This article examines recent research in molecular communications from a telecommunications system design perspective. In particular, it focuses on channel models and state-of-the-art physical layer techniques. The goal is to provide a foundation for higher layer research and motivation for research and development of functional prototypes. In the first part of the article, we focus on the channel and noise model, comparing molecular and radio-wave pathloss formulae. In the second part, the article examines, equipped with the appropriate channel knowledge, the design of appropriate modulation and error correction coding schemes. The third reviews transmitter and receiver side signal processing methods that suppress inter-symbol interference. Taken together, the three parts present a series of physical layer techniques that are necessary to produce reliable and practical molecular communications. <s> BIB010 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> In this paper, we present an analytical model for the diffusive molecular communication (MC) system with a reversible adsorption receiver in a fluid environment. The widely used concentration shift keying (CSK) is considered for modulation. The time-varying spatial distribution of the information molecules under the reversible adsorption and desorption reaction at the surface of a receiver is analytically characterized. Based on the spatial distribution, we derive the net number of newly-adsorbed information molecules expected in any time duration. We further derive the number of newly-adsorbed molecules expected at the steady state to demonstrate the equilibrium concentration. Given the number of newly-adsorbed information molecules, the bit error probability of the proposed MC system is analytically approximated. Importantly, we present a simulation framework for the proposed model that accounts for the diffusion and reversible reaction. Simulation results show the accuracy of our derived expressions, and demonstrate the positive effect of the adsorption rate and the negative effect of the desorption rate on the error probability of reversible adsorption receiver with last transmit bit-1. Moreover, our analytical results simplify to the special cases of a full adsorption receiver and a partial adsorption receiver, both of which do not include desorption. <s> BIB011 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Molecular communication via diffusion (MCvD) is a new field of communication where molecules are used to transfer information. One of the main challenges in MCvD is the intersymbol interference (ISI), which inhibits communication at high data rates. Furthermore, at nanoscale, energy efficiency becomes an essential problem. Before addressing these problems, a pre-determined threshold for the received signal must be calculated to make a decision. In this paper, an analytical technique is proposed to determine the optimum threshold, whereas in the literature, these thresholds are calculated empirically. 
Since the main goal of this paper is to build an MCvD system suitable for operating at high data rates without sacrificing quality, new modulation and filtering techniques are proposed to decrease the effects of ISI and enhance energy efficiency. As a transmitter-based solution, a modulation technique, molecular transition shift keying (MTSK), is proposed in order to increase the data rate by suppressing ISI. Furthermore, for energy efficiency, a power adjustment technique that utilizes the residual molecules is proposed. Finally, as a receiver-based solution, a new energy efficient decision feedback filter (DFF) is proposed as a substitute for the conventional decoders in the literature. The error performance of DFF and MMSE equalizers are compared in terms of bit error rates, and it is concluded that DFF may be more advantageous when energy efficiency is concerned, due to its lower computational complexity. <s> BIB012 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> This paper studies the problem of receiver modeling in molecular communication systems. We consider the diffusive molecular communication channel between a transmitter nano-machine and a receiver nano-machine in a fluid environment. The information molecules released by the transmitter nano-machine into the environment can degrade in the channel via a first-order degradation reaction and those that reach the receiver nano-machine can participate in a reversible bimolecular reaction with receiver receptor proteins. Thereby, we distinguish between two scenarios. In the first scenario, we assume that the entire surface of the receiver is covered by receptor molecules. We derive a closed-form analytical expression for the expected received signal at the receiver, i.e., the expected number of activated receptors on the surface of the receiver. Then, in the second scenario, we consider the case where the number of receptor molecules is finite and the uniformly distributed receptor molecules cover the receiver surface only partially. We show that the expected received signal for this scenario can be accurately approximated by the expected received signal for the first scenario after appropriately modifying the forward reaction rate constant. The accuracy of the derived analytical results is verified by Brownian motion particle-based simulations of the considered environment, where we also show the impact of the effect of receptor occupancy on the derived analytical results. <s> BIB013 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 1) Propagation Channel Model: <s> Information delivery using chemical molecules is an integral part of biology at multiple distance scales and has attracted recent interest in bioengineering and communication theory. Potential applications include cooperative networks with a large number of simple devices that could be randomly located (e.g., due to mobility). This paper presents the first tractable analytical model for the collective signal strength due to randomly placed transmitters in a 3-D large-scale molecular communication system, either with or without degradation in the propagation environment. Transmitter locations in an unbounded and homogeneous fluid are modeled as a homogeneous Poisson point process. 
By applying stochastic geometry, analytical expressions are derived for the expected number of molecules absorbed by a fully absorbing receiver or observed by a passive receiver. The bit error probability is derived under ON/OFF keying and either a constant or adaptive decision threshold. Results reveal that the combined signal strength increases proportionately with the transmitter density, and the minimum bit error probability can be improved by introducing molecule degradation. Furthermore, the analysis of the system can be generalized to other receiver designs and other performance characteristics in large-scale molecular communication systems. <s> BIB014
• In the free-diffusion channel, the information molecules (such as hormones, pheromones, or DNA) move in the fluid medium via Brownian motion. In this case, the propagation is often assumed to follow a Wiener process, and the propagation model can be mathematically described using Fick's second law BIB001 :

$$\frac{\partial C(\vec{r}, t)}{\partial t} = D \nabla^2 C(\vec{r}, t),$$

where the diffusion coefficient D is governed by the Einstein relation as

$$D = \frac{k_B T}{6 \pi \eta r_m},$$

where T is the temperature in kelvin, η is the viscosity of the fluid environment, r_m is the radius of the information molecule, and k_B is the Boltzmann constant. This Einstein relation may lose accuracy in most realistic scenarios, so the diffusion coefficient is usually obtained via experiment .
• In the diffusion-with-drift channel, the propagation model in a 3D environment can be mathematically expressed as [BIB001, Ch. 4]

$$\frac{\partial C}{\partial t} = D \nabla^2 C - v_x \frac{\partial C}{\partial x} - v_y \frac{\partial C}{\partial y} - v_z \frac{\partial C}{\partial z},$$

where v_x, v_y, and v_z are the constant drift velocities in the +x, +y, and +z directions, respectively.
Different from the EM wave propagation model, molecular propagation has the advantages of not suffering from diffraction loss under the shadow of objects, and of not being restricted by the cut-off frequency in pipe, aperture, and mesh environments BIB010 .

2) Noise Model:
• The inherent noise is usually contributed by the random arrival of molecules emitted in previous bit intervals. In the timing channel, the noise N_T is the first arrival time at the receiver boundary, which for a positive drift v > 0 follows the inverse Gaussian distribution

$$N_T \sim \mathcal{IG}\left(\frac{l}{v}, \frac{l^2}{2D}\right),$$

with communication distance l and diffusion coefficient D. In the concentration-encoded channel, the number of left-over molecules carried from previous bits into the current bit duration follows a binomial distribution, and the noise at the n_b-th bit interval due to the previous (n_b − 1) bit intervals is described as a sum of binomially distributed counts of such left-over molecules BIB011 , BIB003 , where N is the number of transmit molecules at the start of the first bit interval, n_b is the number of bit intervals, T_b is the length of one bit interval, d is the distance between the transmitter and the receiver, and F(·, ·, ·) is the fraction of molecules counted at the receiver.
• The external noise usually includes the biochemical noise, the thermal noise, the physical noise, the sampling noise, and the counting noise. The biochemical noise arises from the biochemical interaction between the information molecules/bio-nanomachines and the surrounding molecules and environment. The thermal noise arises from the varied activity levels of thermally activated processes or stochastic thermal motion due to the changing surrounding temperature, and the physical noise is the physical force acting on molecule movement due to the viscosity of the fluid environment BIB007 . The counting noise arises when measuring the molecular concentration at the receiver location and is due to the randomness of molecule movement and the discreteness of the molecules, whereas the sampling noise arises when modulating the molecular concentration at the emission of molecules and is due to the discreteness of the molecules and unwanted perturbations in the emission process .

3) Modulation Techniques: Different from modulation in radio frequency (RF) wireless communication systems, where the information is modulated onto the amplitude, frequency, and phase of the radio waves, molecular communication transmitters modulate the information onto the type/structure, the emission time, and the number of released molecules.
• In the timing channel, the information is modulated on the emission time of molecules, as in BIB002 - BIB008 .
• In the concentration-encoded channel, two types of modulation schemes for binary MC systems were first described in , namely ON-OFF modulation and multilevel amplitude modulation (M-AM). In the ON-OFF modulation scheme, the concentration of information molecules during the bit interval is Q to represent bit-1, and 0 to represent bit-0. In the M-AM scheme, the concentration of information molecules is a continuous sinusoidal wave, whose amplitude and frequency can carry the encoded information. Concentration Shift Keying (CSK) was proposed to modulate the number of information molecules, and Molecule Shift Keying (MoSK) to modulate over different types of information molecules BIB004 . Due to the difficulty of controlling the exact arrival time of molecules undergoing random walks, and the limited number of molecule types available in an MC system, binary CSK modulation based on the number of released molecules has been widely applied BIB004 , BIB009 , BIB011 , BIB014 , , BIB005 , , where the molecule concentration is considered as the signal amplitude. In more detail, in binary CSK the transmitter emits N_1 molecules at the start of the bit interval to represent a bit-1 transmission, and emits N_2 molecules at the start of the bit interval to represent a bit-0 transmission. In most works, N_2 can be set to zero to reduce the energy consumption and make the received signal more distinguishable. Hybrid modulation based on the number as well as the types of released molecules was proposed and studied in BIB006 , BIB012 .

4) Reception Model: For the same single point transmitter located at $\vec{r}$ relative to the centre of a receiver with radius r_r, the received number of molecules differs depending on the type of receiver.
• For the passive receiver, the local point concentration at the centre of the passive receiver at time t, due to a single pulse emission by the transmitter occurring at t = 0, is given as [207, Eq. (4.28)]

$$C_{\Omega_{r_r}}\left(t, \vec{r}\right) = \frac{1}{(4\pi D t)^{3/2}} \exp\left(-\frac{|\vec{r}|^2}{4 D t}\right),$$

where $\vec{r} = [x, y, z]$ and [x, y, z] are the coordinates along the three axes.
• For the fully absorbing receiver with spherical symmetry, the reception process can be described by the boundary condition [208, Eq. (3.64)]

$$D \frac{\partial C(r, t \mid r_0)}{\partial r}\bigg|_{r = r_r} = k \, C(r_r, t \mid r_0),$$

where k is the absorption rate (in length×time⁻¹). The molecule distribution function of the fully absorbing receiver at time t, due to a single pulse emission by the transmitter occurring at t = 0, is presented in [208] as well.
• For the reversible adsorption receiver with spherical symmetry, the boundary condition of the information molecules at its surface is [209, Eq. (4)]

$$D \frac{\partial C(r, t \mid r_0)}{\partial r}\bigg|_{r = r_r} = k_1 C(r_r, t \mid r_0) - k_{-1} C_a(t),$$

where k_1 is the adsorption rate (length×time⁻¹), k_{-1} is the desorption rate (time⁻¹), and C_a(t) is the concentration of adsorbed molecules at the surface; its molecule distribution function was derived in [210, Eq. (8)].
• For the ligand-binding receiver with spherical symmetry, the boundary condition of the information molecules at its surface involves the survival probability S(t | r_0) of an information molecule released at distance r_0, and its molecule distribution function was derived in [BIB013, Eq. (23)].
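As a simple numerical illustration of the free-diffusion propagation and passive-reception expressions above, the following sketch evaluates the expected molecule count at a passive spherical receiver for a single impulsive release under binary CSK and applies a fixed threshold at the peak-concentration sampling time. The count is approximated as the concentration at the receiver centre multiplied by the receiver volume, and all numerical values (molecule budget, diffusion coefficient, distance, threshold) are illustrative placeholders.

```python
import numpy as np

def concentration(N, D, r, t):
    """Expected concentration at distance r and time t for an impulsive
    release of N molecules at the origin at t = 0 (free diffusion, 3-D)."""
    return N / (4 * np.pi * D * t) ** 1.5 * np.exp(-r ** 2 / (4 * D * t))

def expected_count_passive(N, D, r, r_r, t):
    """Expected number of molecules inside a passive spherical receiver of
    radius r_r, approximated as concentration at its centre times its volume."""
    v_obs = 4.0 / 3.0 * np.pi * r_r ** 3
    return concentration(N, D, r, t) * v_obs

if __name__ == "__main__":
    D = 1e-10         # diffusion coefficient (m^2/s), placeholder
    r = 1e-6          # transmitter-receiver distance: 1 micrometre
    r_r = 0.2e-6      # receiver radius, placeholder
    N1, N0 = 2000, 0  # molecules per bit-1 / bit-0 (binary CSK with N_2 = 0)

    t_peak = r ** 2 / (6 * D)   # time at which the expected concentration peaks
    threshold = 0.5 * expected_count_passive(N1, D, r, r_r, t_peak)

    for label, n_tx in (("bit-1", N1), ("bit-0", N0)):
        n_rx = expected_count_passive(n_tx, D, r, r_r, t_peak)
        decision = 1 if n_rx > threshold else 0
        print(f"{label}: expected count at t_peak = {n_rx:8.2f} -> decided {decision}")
    print(f"peak sampling time t_peak = {t_peak * 1e3:.2f} ms")
```

Sampling at the peak time r²/(6D) mirrors the detection approach of BIB005, where the receiver samples when the maximum number of molecules is expected; a full design would also account for the inherent ISI noise described above.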
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Communication between nanoscale devices is an area of considerable importance as it is essential that future devices be able to form nanonetworks and realise their full potential. Molecular communication is a method based on diffusion, inspired by biological systems and useful over transmission distances in the nm to m range. The propagation of messenger molecules via diffusion implies that there is thus a probability that they can either arrive outside of their required time slot or ultimately, not arrive at all. Therefore, in this paper, the use of a error correcting codes is considered as a method of enhancing the performance of future nanonetworks. Using a simple block code, it is shown that it is possible to deliver a coding gain of ∼ 1.7dB at transmission distances of 1 m. Nevertheless, energy is required for the coding and decoding and as such this paper also considers the code in this context. It is shown that these simple error correction codes can deliver a benefit in terms of energy usage for transmission distances of upwards of 25 m for receivers of a 5 m radius. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Molecular communications emerges as a promising scheme for communications between nanoscale devices. In diffusion-based molecular communications, molecules as information symbols diffusing in the fluid environments suffer from molecule crossovers, i.e., the arriving order of molecules is different from their transmission order, leading to intersymbol interference (ISI). In this paper, we introduce a new family of channel codes, called ISI-free codes, which improve the communication reliability while keeping the decoding complexity fairly low in the diffusion environment modeled by the Brownian motion. We propose general encoding/decoding schemes for the ISI-free codes, working upon the modulation schemes of transmitting a fixed number of identical molecules at a time. In addition, the bit error rate (BER) approximation function of the ISI-free codes is derived mathematically as an analytical tool to decide key factors in the BER performance. Compared with the uncoded systems, the proposed ISI-free codes offer good performance with reasonably low complexity for diffusion-based molecular communication systems. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Owing to the limitations of molecular nanomachines, it is essential to develop reliable, yet energy-efficient communication techniques. Two error correction coding techniques are compared under a diffusive molecular communication mechanism, namely, Hamming codes and minimum energy codes (MECs). MECs, which previously have not been investigated in a diffusive channel, maintain the desired code distance to keep reliability while minimising energy. Results show that MECs outperform the Hamming codes, both in aspects of bit error rate and energy consumption. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Future applications for nano-machines, such as drug-delivery and health monitoring, will require robust communications and nanonetworking capabilities. 
This is likely to be enabled via the use of molecules, as opposed to electromagnetic waves, acting as the information carrier. To enhance the reliability of the transmitted data, Euclidean geometry low density parity check (EG-LDPC) and cyclic Reed-Muller (C-RM) codes are considered for use within a molecular communication system for the first time. These codes are compared against the Hamming code to show that an $\boldsymbol{s}=4$ LDPC (integer $\boldsymbol{s}\ge 2$ ) has a superior coding gain of 7.26 dBs. Furthermore, the critical distance and energy cost for a coded system are also taken into account as two other performance metrics. It is shown that when considering the case of nano–to nano-machines communication, a Hamming code with $\boldsymbol{m}=4$ , (integer $\boldsymbol{m}\ge 2$ ) is better for a system operating between $10^{-6}$ and $10^{-3}$ bit error rate (BER) levels. Below these BERs, $\boldsymbol{s}=2$ LDPC codes are superior, exhibiting the lowest energy cost. For communication between nano–to macro-machines, and macro–to nano-machines, $\boldsymbol{s}=3$ LDPC and $\boldsymbol{s}=2$ LDPC are the best options respectively. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Molecular communication (MC) has recently emerged as a novel paradigm for nano-scale communication utilizing molecules as information carriers. In diffusion-based molecular communication, the system performance is constrained by the inter-symbol-interference caused by the crossover of information carrying molecules in consecutive bits. To cope with this, we propose the Reed-Solomon (RS) codes as an error recovery tool, to improve the transmission reliability in diffusion-based MC systems. To quantify the performance improvement due to RS codes, we derive the analytical expression for the approximate bit error probability (BEP) of the diffusion-based MC system with the full absorption receiver. We further develop the particle-based simulation framework to simulate the proposed system with RS code to verify the accuracy of our derived analytical results. Our results show that, as the number of molecules per bit increases, the BEP of the system with RS codes exhibits a substantial improvement than that of non-coded systems. Furthermore, the BEP of the proposed system with RS codes can be greatly improved by increasing the minimum distance of the codeword. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Molecular Communication (MC) is an enabling paradigm for the interconnection of future devices and networks in the biological environment, with applications ranging from bio-medicine to environmental monitoring and control. The engineering of biological circuits, which allows to manipulate the molecular information processing abilities of biological cells, is a candidate technology for the realization of MC-enabled devices. In this paper, inspired by recent studies favoring the efficiency of analog computation over digital in biological cells, an analog decoder design is proposed based on biological circuit components. In particular, this decoder computes the a-posteriori log-likelihood ratio of parity-check-encoded bits from a binary-modulated concentration of molecules. 
The proposed design implements the required L-value and the box-plus operations entirely in the biochemical domain by using activation and repression of gene expression, and reactions of molecular species. Each component of the circuit is designed and tuned in this paper by comparing the resulting functionality with that of the corresponding analytical expression. Despite evident differences with classical electronics, biochemical simulation data of the resulting biological circuit demonstrate very close performance in terms of Mean Squared Error (MSE) and Bit Error Rate (BER), and validate the proposed approach for the future realization of MC components. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Abstract Real-time monitoring of medical test parameters as well as biological and chemical substances inside the human body is an aspiration which might facilitate the control of pathologies and would ensure better effectiveness in diagnostics and treatments. Future Body Area NanoNetworks (BANN) represent an ongoing effort to complement these initiatives, although due to its early stage of development, further research is required. This paper contributes with a hierarchical BANN architecture consisting of two types of nanodevices, namely, nanonodes and a nanorouter, which are conceptually designed using technologically available electronic components. A straightforward communication scheme operating at the THz band for the exchange of information among nanodevices is also proposed. Communications are conducted in a human hand scenario since, unlike other parts of the human body, the negative impact of path loss and molecular absorption noise on the propagation of electromagnetic waves in biological tissues is mitigated. However, data transmission is restricted by the tiny size of nanodevices and their extremely limited energy storing capability. To overcome this concern, nanodevices must be powered through the bloodstream and external ultrasound energy harvesting sources. Under these conditions, the necessary energy and its management have been thoroughly examined and assessed. The results obtained reveal the outstanding ability of nanonodes to recharge, thus enabling each pair of nanonode–nanorouter to communicate every 52 min. This apparently long period is compensated by the considerably high number of nanonodes in the network, which satisfies a quasi-constant monitoring of medical parameter readings. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 5) Coding <s> Inspired by nature, molecular communications (MC), i.e., the use of molecules to encode, transmit, and receive information, stands as the most promising communication paradigm to realize the nanonetworks. Even though there has been extensive theoretical research toward nanoscale MC, there are no examples of implemented nanoscale MC networks. The main reason for this lies in the peculiarities of nanoscale physics, challenges in nanoscale fabrication, and highly stochastic nature of the biochemical domain of envisioned nanonetwork applications. This mandates developing novel device architectures and communication methods compatible with MC constraints. To that end, various transmitter and receiver designs for MC have been proposed in the literature together with numerable modulation, coding, and detection techniques. 
However, these works fall into domains of a very wide spectrum of disciplines, including, but not limited to, information and communication theory, quantum physics, materials science, nanofabrication, physiology, and synthetic biology. Therefore, we believe it is imperative for the progress of the field that an organized exposition of cumulative knowledge on the subject matter can be compiled. Thus, to fill this gap, in this comprehensive survey, we review the existing literature on transmitter and receiver architectures toward realizing MC among nanomaterial-based nanomachines and/or biological entities and provide a complete overview of modulation, coding, and detection techniques employed for MC. Moreover, we identify the most significant shortcomings and challenges in all these research areas and propose potential solutions to overcome some of them. <s> BIB008
Techniques: Similar to traditional wireless communication systems, many coding schemes have been studied for the molecular paradigm to improve transmission reliability. Hamming codes were used as error control coding (ECC) for DMC in BIB001 , where a coding gain of about 1.7 dB is achieved at a transmission distance of 1 µm. Meanwhile, the authors modelled the energy consumption of coding and decoding to show that the proposed coding scheme is energy-inefficient at shorter transmission distances. In their subsequent work, minimum energy codes (MECs) were investigated and shown to outperform the Hamming codes in bit error rate and energy consumption BIB003 . Moreover, the authors of BIB004 compared and evaluated Hamming codes, Euclidean geometry low density parity check (EG-LDPC) codes and cyclic Reed-Muller (C-RM) codes. In order to mitigate the inter-symbol interference (ISI) caused by the overlap of two consecutive symbols in DMC, Reed-Solomon (RS) codes were investigated in BIB005 . Compared with Hamming codes, which can correct one bit error, RS codes are highly effective against burst and random errors. The results showed that the bit error probability (BEP) improves as the number of molecules per bit increases, and can be further improved by increasing the codeword minimum distance. Besides these frequently used wireless communication codes, new coding schemes have been developed to suit MC channel characteristics, such as coding based on the molecular coding (MoCo) distance function and the ISI-free code for DMC channels with drift BIB002 . Further to these, the authors of BIB006 considered coding implementation and designed a parity-check analog decoder using biological components. The decoding process depends on the computation of the a-posteriori log-likelihood ratio, involving L-value and box-plus calculations. The calculations are carried out with the help of chemical reactions and the gene regulation mechanism, whose input-output relation can be described by the Hill function. By carefully choosing its parameters, the Hill function can approximate mathematical operations such as the hyperbolic and logarithmic operations, which finally leads to successful bit decoding. More details on coding schemes for MC can be found in , BIB008 .

Fig. 12: Two nanonetwork schemes that adopt the electromagnetic paradigm as their in-body and body-area communication method: (a) the scheme proposed in ; (b) the scheme proposed in BIB007 .
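To give a flavour of the block-coding approach used in works such as BIB001, the sketch below encodes 4-bit data words with a Hamming(7,4) code and corrects a single bit error by syndrome decoding. It is a generic illustration of the ECC principle rather than a reproduction of the coded molecular channel of BIB001; the "channel" here is simply an arbitrary single bit flip.

```python
import numpy as np

# Generator and parity-check matrices of the Hamming(7,4) code (systematic form).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map a 4-bit data word to a 7-bit codeword."""
    return data4 @ G % 2

def decode(codeword7):
    """Correct at most one bit error via the syndrome, then return the data bits."""
    syndrome = H @ codeword7 % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                codeword7 = codeword7.copy()
                codeword7[pos] ^= 1
                break
    return codeword7[:4]   # systematic code: data bits are the first four

if __name__ == "__main__":
    data = np.array([1, 0, 1, 1])
    tx = encode(data)
    rx = tx.copy()
    rx[2] ^= 1                       # flip one bit to emulate a channel error
    print("data     :", data)
    print("tx  code :", tx)
    print("rx (err) :", rx)
    print("decoded  :", decode(rx))  # the single error is corrected
```

In an MC setting each coded bit would be mapped to a molecule emission (e.g., via binary CSK), and the energy cost of the extra parity bits is precisely what the MEC and low-weight code designs discussed above aim to minimise.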
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> In this study, we propose and simulate a high-sensitivity carbon nanotube sensor, capable of transducing protein-ligand binding, or more generally, macromolecular-recognition into a frequency variation of an electric current. In conjunction with small proteins like streptavidin the nanosensor can reach the sensitivity threshold, i.e. detecting a single molecule binding. For heavier, virus-sized particles the same device can provide a relatively accurate measure of their mass. In a first step, we focus on mechanical issues and characterize the sensor under several aspects both by molecular dynamics and continuous shell theory. The second part focuses on the transduction of the cantilever deflection into an electrical signal, and is achieved through a combination of Green's functions and spatial domain decomposition. The influence of thermal effects on the proper operation of the sensor is also discussed in conjunction with the construction of the current-displacement characteristic. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> Abstract This paper provides an in-depth view on nanosensor technology and electromagnetic communication among nanosensors. First, the state of the art in nanosensor technology is surveyed from the device perspective, by explaining the details of the architecture and components of individual nanosensors, as well as the existing manufacturing and integration techniques for nanosensor devices. Some interesting applications of wireless nanosensor networks are highlighted to emphasize the need for communication among nanosensor devices. A new network architecture for the interconnection of nanosensor devices with existing communication networks is provided. The communication challenges in terms of terahertz channel modeling, information encoding and protocols for nanosensor networks are highlighted, defining a roadmap for the development of this new networking paradigm. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> Approaches, Derivatives and Applications Vasilios Georgakilas,† Michal Otyepka,‡ Athanasios B. Bourlinos,‡ Vimlesh Chandra, Namdong Kim, K. Christian Kemp, Pavel Hobza,‡,§,⊥ Radek Zboril,*,‡ and Kwang S. Kim* †Institute of Materials Science, NCSR “Demokritos”, Ag. Paraskevi Attikis, 15310 Athens, Greece ‡Regional Centre of Advanced Technologies and Materials, Department of Physical Chemistry, Faculty of Science, Palacky University Olomouc, 17. listopadu 12, 771 46 Olomouc, Czech Republic Center for Superfunctional Materials, Department of Chemistry, Pohang University of Science and Technology, San 31, Hyojadong, Namgu, Pohang 790-784, Korea Institute of Organic Chemistry and Biochemistry, Academy of Sciences of the Czech Republic, v.v.i., Flemingovo naḿ. 2, 166 10 Prague 6, Czech Republic <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> In molecular communication, a group of biological nanomachines communicates through exchanging molecules and collectively performs application dependent tasks. 
An open research issue in molecular communication is to establish interfaces to interconnect the molecular communication environment (e.g., inside the human body) and its external environment (e.g., outside the human body). Such interfaces allow conventional devices in the external environment to control the location and timing of molecular communication processes in the molecular communication environment and expand the capability of molecular communication. In this paper, we first describe an architecture of externally controllable molecular communication and introduce two types of interfaces for biological nanomachines; bio-nanomachine to bio-nanomachine interfaces (BNIs) for bio-nanomachines to interact with other biological nanomachines in the molecular communication environment, and inmessaging and outmessaging interfaces (IMIs and OMIs) for bio-nanomachines to interact with devices in the external environment. We then describe a proof-of- concept design and wet laboratory implementation of the IMI and OMI, using biological cells. We further demonstrate, through mathematical modeling and numerical experiments, how an architecture of externally controllable molecular communication with BNIs and IMIs/OMIs may apply to pattern formation, a promising nanomedical application of molecular communication. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> Nano-communication is considered to become a major building block for many novel applications in the health care and fitness sector. Given the recent developments in the scope of nano machinery, coordination and control of these devices becomes the critical challenge to be solved. In-Body Nano-Communication based on either molecular, acoustic, or RF radio communication in the terahertz band supports the exchange of messages between these in-body devices. Yet, the control and communication with external units is not yet fully understood. In this paper, we investigate the challenges and opportunities of connecting Body Area Networks and other external gateways with in-body nano-devices, paving the road towards more scalable and efficient Internet of Nano Things (IoNT) systems. We derive a novel network architecture supporting the resulting requirements and, most importantly, investigate options for the simulation based performance evaluation of such novel concepts. Our study is concluded by a first look at the resulting security issues considering the high impact of potential misuse of the communication links. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> Abstract Real-time monitoring of medical test parameters as well as biological and chemical substances inside the human body is an aspiration which might facilitate the control of pathologies and would ensure better effectiveness in diagnostics and treatments. Future Body Area NanoNetworks (BANN) represent an ongoing effort to complement these initiatives, although due to its early stage of development, further research is required. This paper contributes with a hierarchical BANN architecture consisting of two types of nanodevices, namely, nanonodes and a nanorouter, which are conceptually designed using technologically available electronic components. 
A straightforward communication scheme operating at the THz band for the exchange of information among nanodevices is also proposed. Communications are conducted in a human hand scenario since, unlike other parts of the human body, the negative impact of path loss and molecular absorption noise on the propagation of electromagnetic waves in biological tissues is mitigated. However, data transmission is restricted by the tiny size of nanodevices and their extremely limited energy storing capability. To overcome this concern, nanodevices must be powered through the bloodstream and external ultrasound energy harvesting sources. Under these conditions, the necessary energy and its management have been thoroughly examined and assessed. The results obtained reveal the outstanding ability of nanonodes to recharge, thus enabling each pair of nanonode–nanorouter to communicate every 52 min. This apparently long period is compensated by the considerably high number of nanonodes in the network, which satisfies a quasi-constant monitoring of medical parameter readings. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Requirements and Opportunities <s> The nervous system holds a central position among the major in-body networks. It comprises of cells known as neurons that are responsible to carry messages between different parts of the body and make decisions based on those messages. In this work, further to the extensive theoretical studies, we demonstrate the first controlled information transfer through an in vivo nervous system by modulating digital data from macro-scale devices onto the nervous system of common earthworms and conducting successful transmissions. The results and analysis of our experiments provide a method to model networks of neurons, calculate the channel propagation delay, create their simulation models, indicate optimum parameters such as frequency, amplitude and modulation schemes for such networks, and identify average nerve spikes per input pulse as the nervous information coding scheme. Future studies on neuron characterization and artificial neurons may benefit from the results of our work. <s> BIB007
|
Besides the five components discussed in Sec. II-C, the IEEE P1906.1 framework also defines the interface between the In-Body Network and the Body-Area Network, which is an important element for the practical deployment of nanonetworks, especially for medical applications. However, since the goal of the standard is only to identify the minimum components and corresponding functions required to deploy a nanonetwork, it does not specify which communication paradigm should be adopted inside and outside the human body, nor what interface should carry health parameters from nano-nodes inside the body to external devices. Several groups have specified the communication paradigm and the corresponding interface, using either the EM paradigm or the MC paradigm. One group proposed a network deployment tailored to coronary heart disease monitoring, shown in Fig. 12a. The network consists of two major components: Nanodevice-embedded Drug Eluting Stents (nanoDESs) and a Nano-macro Interface (NM). The nanoDESs are deployed in occluded regions of coronary arteries and are responsible for measuring arterial constriction, communicating the relevant information, and controlling the release of any required drugs. The nanoDESs use the THz band to communicate with an interface that is inserted in the intercostal space of the rib cage of a Coronary Heart Disease (CHD) patient and acts as a gateway between the nanonetwork and the macroworld. Another example that chooses THz communication is presented in BIB006. It proposes a nanoscale communication network consisting of nanonodes circulating in the bloodstream and a nanorouter implanted between the epidermis and dermis of the hand skin, illustrated in Fig. 12b. The nanonodes in the blood vessels collect health parameters and exchange data with the nanorouter over the THz band only when they approach it; in this way, the relatively short distance between the nanonodes and the nanorouter minimizes the negative impact of path loss. Subsequently, the nanorouter transmits the received information, also in the THz band, to a gateway wristband that relays the health data to external devices or the Internet via traditional communication methods. As for the MC paradigm, the authors in BIB004 implemented artificially synthesized materials (ARTs) as an interface. In their wet-laboratory experiments, the ART contains pHrodo molecules, fluorescent dyes that are almost non-fluorescent in neutral solutions but fluorescent in acidic solutions. Therefore, conducting fluorescence microscopy observations and measuring the fluorescence intensity can reveal information from inside the body. All of the above schemes can connect the In-Body Network and the Body-Area Network using the electromagnetic or the molecular paradigm, but several factors make them less practical. First, the nanonodes in BIB006 and the nanoDESs discussed above are non-biological and may interfere with other physiological activities, since the nanonodes have to be injected into blood vessels or enter the human body in a solution that is drunk, and the nanoDESs even have to be surgically placed into the body. Moreover, the injection or insertion of numerous nanonodes into the human body may not be accepted by the public, and some countries have published national laws that strictly regulate the production and marketing of such devices BIB005. Meanwhile, how to recycle these nanonodes is also an open problem.
Second, with regard to the method in BIB004, the need for external equipment, namely a fluorescence microscope, makes the method too complicated for ordinary users to apply. Furthermore, the fluorescence-intensity information still has to be transformed into electromagnetic form for the subsequent transmission to the Internet. The nanoscale is the natural domain of molecules, proteins, DNA, organelles, and major components of cells BIB002. Prior work has investigated three kinds of possible signaling particles and discussed the corresponding biological building blocks that can serve as transmitters and receivers for MC. A physiological process that happens naturally is the neurotransmitter transmission between the presynaptic part and the postsynaptic terminal, depicted in Fig. 13 (in the figure, the red molecules are signaling neurotransmitters enclosed in round-shaped vesicles, and the green molecules are ions that can cause a depolarization of the cell membrane). In response to an excitation of a nerve fiber, the generated action potential moves along the presynaptic part and triggers the release of neurotransmitters (signaling particles) contained in vesicles. The released information molecules diffuse in the environment and can bind to ion channels located at the membrane of the postsynaptic terminal. The bound ion channels then become permeable to certain ions, and the resulting ion influx finally leads to a depolarization of the cell membrane that subsequently propagates as a new action potential along the cell. Undoubtedly, this neurotransmitter delivery establishes an MC link and is far more biological, biocompatible, and less invasive than nanonetwork systems consisting of nanonodes and using the electromagnetic paradigm, since naturally occurring molecular mechanisms eliminate the risks associated with the injection or intake of nano devices. In other words, the molecular paradigm makes up for the drawbacks of the EM-based schemes discussed above, such as BIB006. Moreover, the implementation in BIB007 further demonstrates the feasibility of interpreting some physiological processes as MC systems. In MC, the information is generally modulated onto the molecule concentration, whereas outside the human body the information is usually transmitted via electromagnetic waves, so a chemical-concentration/electromagnetic-wave converter, or interface, is needed. Fortunately, some nanonodes with chemical nanosensors embedded on CNTs or GNRs are able to take on this role BIB001 - BIB003. The mechanism is that specific types of molecules can be adsorbed on top of the CNTs and GNRs, locally changing the number of electrons moving through the carbon lattice and thereby generating an electrical signal BIB002. The advantages of MC and electromagnetic communication discussed so far provide the opportunity, and open the door, to propose a hybrid communication scheme for nanonetwork systems.
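As a concrete, simplified illustration of such a diffusion-based MC link, the following Python sketch models a point source releasing molecules into free three-dimensional diffusion and a receiver that decides a bit by thresholding the local concentration. This is our own toy model, not taken from the cited works; the diffusion coefficient, molecule count, distance, and detection threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the surveyed papers)
D = 1e-10         # diffusion coefficient of the signalling molecule [m^2/s]
N_tx = 1e4        # molecules released per bit '1'
r = 1e-6          # transmitter-receiver distance [m] (synaptic-scale gap)
threshold = 1e18  # detection threshold on local concentration [molecules/m^3]

def concentration(n_released, distance, t, diff=D):
    """Concentration of a point release under free 3-D diffusion (Green's function)."""
    return n_released / (4 * np.pi * diff * t) ** 1.5 * np.exp(
        -distance**2 / (4 * diff * t))

t = np.linspace(1e-6, 1e-2, 1000)   # observation times [s]
c = concentration(N_tx, r, t)

peak = np.argmax(c)
print(f"peak concentration {c[peak]:.3e} molecules/m^3 at t = {t[peak]*1e3:.2f} ms")
print("bit '1' detected" if c.max() > threshold else "bit '0' (below threshold)")
```

Even this crude model reproduces the qualitative behaviour exploited by MC receivers: the concentration pulse peaks after a distance-dependent delay and then decays, so both the detection threshold and the symbol duration must be matched to the diffusion dynamics.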
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> The ability of engineered biological nanomachines to communicate with biological systems at the molecular level is anticipated to enable future applications such as monitoring the condition of a human body, regenerating biological tissues and organs, and interfacing artificial devices with neural systems. From the viewpoint of communication theory and engineering, molecular communication is proposed as a new paradigm for engineered biological nanomachines to communicate with the natural biological nanomachines which form a biological system. Distinct from the current telecommunication paradigm, molecular communication uses molecules as the carriers of information; sender biological nanomachines encode information on molecules and release the molecules in the environment, the molecules then propagate in the environment to receiver biological nanomachines, and the receiver biological nanomachines biochemically react with the molecules to decode information. Current molecular communication research is limited to small-scale networks of several biological nanomachines. Key challenges to bridge the gap between current research and practical applications include developing robust and scalable techniques to create a functional network from a large number of biological nanomachines. Developing networking mechanisms and communication protocols is anticipated to introduce new avenues into integrating engineered and natural biological nanomachines into a single networked system. In this paper, we present the state-of-the-art in the area of molecular communication by discussing its architecture, features, applications, design, engineering, and physical modeling. We then discuss challenges and opportunities in developing networking mechanisms and communication protocols to create a network from a large number of bio-nanomachines for future applications. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> In this paper, we consider a multi-hop molecular communication network consisting of one nanotransmitter, one nanoreceiver, and multiple nanotransceivers acting as relays. We consider three different relaying schemes to improve the range of diffusion-based molecular communication. In the first scheme, different types of messenger molecules are utilized in each hop of the multi-hop network. In the second and third schemes, we assume that two types of molecules and one type of molecule are utilized in the network, respectively. We identify self-interference, backward intersymbol interference (backward-ISI), and forward-ISI as the performance-limiting effects for the second and third relaying schemes. Furthermore, we consider two relaying modes analogous to those used in wireless communication systems, namely full-duplex and half-duplex relaying. We propose the adaptation of the decision threshold as an effective mechanism to mitigate self-interference and backward-ISI at the relay for full-duplex and half-duplex transmission. We derive closed-form expressions for the expected end-to-end error probability of the network for the three considered relaying schemes. 
Furthermore, we derive closed-form expressions for the optimal number of molecules released by the nanotransmitter and the optimal detection threshold of the nanoreceiver for minimization of the expected error probability of each hop. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> This paper studies a three-node network in which an intermediate nano-transceiver, acting as a relay, is placed between a nano-transmitter and a nano-receiver to improve the range of diffusion-based molecular communication. Motivated by the relaying protocols used in traditional wireless communication systems, we study amplify-and-forward (AF) relaying with fixed and variable amplification factor for use in molecular communication systems. To this end, we derive a closed-form expression for the expected end-to-end error probability. Furthermore, we derive a closed-form expression for the optimal amplification factor at the relay node for minimization of an approximation of the expected error probability of the network. Our analytical and simulation results show the potential of AF relaying to improve the overall performance of nano-networks. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> Molecular communications (MC) is a promising paradigm which enables nano-machines to communicate with each other. Due to the severe attenuation of molecule concentrations, there tends to be more errors when the receiver becomes farther from the transmitter. To solve this problem, relaying schemes need to be implemented to achieve reliable communications. In this letter, time-dependent molecular concentrations are utilised as the information carrier, which will be influenced by the noise and channel memory. The emission process is also considered. The relay node (RN) can decode messages, and forward them by sending either the same or a different kind of molecules as the transmitter. The performance is evaluated by deriving theoretical expressions as well as through simulations. Results show that the relaying scheme will bring significant benefits to the communication reliability. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> This paper focuses on the development of a novel radio channel model inside the human skin at the terahertz range, which will enable the interaction among potential nano-machines operating in the inter cellular areas of the human skin. Thorough studies are performed on the attenuation of electromagnetic waves inside the human skin, while taking into account the frequency of operation, distance between the nano-machines and number of sweat ducts. A novel channel model is presented for communication of nano-machines inside the human skin and its validation is performed by varying the aforementioned parameters with a reasonable accuracy. The statistics of error prediction between simulated and modeled data are: mean (μ)= 0.6 dB and standard deviation (σ)= 0.4 dB, which indicates the high accuracy of the prediction model as compared with measurement data from simulation. 
In addition, the results of proposed channel model are compared with terhaertz time-domain spectroscopy based measurement of skin sample and the statistics of error prediction in this case are: μ = 2.10 dB and σ = 6.23 dB, which also validates the accuracy of proposed model. Results in this paper highlight the issues and related challenges while characterizing the communication in such a medium, thus paving the way towards novel research activities devoted to the design and the optimization of advanced applications in the healthcare domain. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> The envisioned dense nano-network inside the human body at terahertz (THz) frequency suffers a communication performance degradation among nano-devices. The reason for this performance limitation is not only the path loss and molecular absorption noise, but also the presence of multi-user interference and the interference caused by utilising any communication scheme, such as time spread ON—OFF keying (TS-OOK). In this paper, an interference model utilising TS-OOK as a communication scheme of the THz communication channel inside the human body has been developed and the probability distribution of signal-to-interference-plus-noise ratio (SINR) for THz communication within different human tissues, such as blood, skin, and fat, has been analyzed and presented. In addition, this paper evaluates the performance degradation by investigating the mean values of SINR under different node densities in the area and the probabilities of transmitting pulses. It results in the conclusion that the interference restrains the achievable communication distance to approximate 1 mm, and more specific range depends on the particular transmission circumstance. Results presented in this paper also show that by controlling the pulse transmission probability and node density, the system performance can be ameliorated. In particular, SINR of in vivo THz communication between the deterministic targeted transmitter and the receiver with random interfering nodes in the medium improves about 10 dB, when the node density decreases one order. The SINR increases approximate 5 and 2 dB, when the pulse transmitting probability drops from 0.5 to 0.1 and 0.9 to 0.5. <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> 14. <s> A preliminary investigation is carried out on the artificial human skin tissues with and without metastatic melanomas using Terahertz Time Domain Spectroscopy (THz-TDS). Both the refractive indexes and absorption coefficients of artificial skin with melanomas are higher than the normal artificial skin samples over the entire frequency range between 0.2 THz to 1.6 THz. The reason is that tumour cells degrade the contraction of fibroblasts causing more water content in malignant tissues. This study quantifies the impact of melanomas on the optical parameters of artificial skin tissue and can help in techniques that will diagnose and prevent tumours at the early stage. <s> BIB007
|
In the proposed hybrid communication network, MC is utilized inside the human body because it is superior to other communication schemes in terms of biocompatibility and non-invasiveness. The blue nano-node in Fig. 14 represents an MC system, and MC systems are grouped to constitute a molecular nanonetwork that is responsible only for a certain area. A molecular nanonetwork is made up either of multiple MC transmitters and receivers, or of an MC transmitter, an MC receiver, and multiple transceivers that act as relays. A biological transmitter first collects health parameters and then modulates and transmits the collected information within the molecular nanonetwork. In order to deliver the information successfully to the outside of the human body, a graphene-based nano-device is implanted into the body. This device mainly consists of a chemical nanosensor, a transceiver, and a battery. The embedded chemical nanosensor detects the concentration information coming from the molecular nanonetworks and converts it into an electrical signal. The THz electromagnetic signal is then transmitted to a nano-micro interface. This interface can be either a dermal display device [229] or a gateway to the Internet. The nano-micro interface is usually equipped with two kinds of antennas: a THz antenna and a micro/macro antenna. The proposed hybrid communication architecture avoids, as far as possible, the use of non-biological nano-nodes inside the body, and at the same time makes in-body health parameters easy to detect from outside. Several enabling technologies enhance the feasibility of the proposed hybrid communication. First, molecular nanonetworks have been well studied (see Sec. III-B2) BIB002, BIB003, BIB004, BIB001. Different relaying and multihop schemes have been proposed and their performance analysed theoretically and numerically, demonstrating that they effectively extend the communication distance and improve communication reliability. Second, in-vivo THz communication, including channel modelling, modulation methods, and channel capacity, has been studied (see Sec. VI-A1) BIB005, BIB006, BIB007. This research not only helps us understand the impact of human tissue on signal propagation but also allows researchers to estimate the received signal level, which is a key indicator for the subsequent information transmission.
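As a rough, hedged illustration of how the received THz signal level for the implanted graphene-based link might be estimated, the sketch below combines spreading loss with an exponential molecular-absorption term, in the spirit of the in-vivo THz channel models cited above. The operating frequency, absorption coefficient, and transmit power are placeholder assumptions, not measured tissue values.

```python
import numpy as np

c0 = 3e8          # speed of light [m/s]
f = 1.0e12        # operating frequency [Hz] (1 THz, assumed)
k_abs = 8000.0    # effective molecular absorption coefficient of tissue [1/m] (placeholder)
p_tx_dbm = -20.0  # transmit power of the nano-device [dBm] (assumed)

def path_loss_db(d, freq=f, k=k_abs):
    """Total path loss in dB: free-space spreading loss plus molecular absorption."""
    spreading = 20 * np.log10(4 * np.pi * freq * d / c0)
    absorption = 10 * np.log10(np.exp(k * d))   # equals k*d*10/ln(10)
    return spreading + absorption

for d_mm in (0.1, 0.5, 1.0, 2.0):
    d = d_mm * 1e-3
    pl = path_loss_db(d)
    print(f"d = {d_mm:4.1f} mm : path loss = {pl:6.1f} dB, "
          f"received power = {p_tx_dbm - pl:7.1f} dBm")
```

The exponential absorption term quickly dominates the spreading loss, which is consistent with the millimetre-scale ranges reported for in-body THz links and explains why the architecture keeps the implanted device close to the nano-micro interface.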
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Security in traditional wireless networks <s> Over the past 5 years, there has been a significant interest in employing terahertz (THz) technology, spectroscopy and imaging for security applications. There are three prime motivations for this interest: (a) THz radiation can detect concealed weapons since many non-metallic, non-polar materials are transparent to THz radiation; (b) target compounds such as explosives and illicit drugs have characteristic THz spectra that can be used to identify these compounds and (c) THz radiation poses no health risk for scanning of people. In this paper, stand-off interferometric imaging and sensing for the detection of explosives, weapons and drugs is emphasized. Future prospects of THz technology are discussed. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Security in traditional wireless networks <s> Nano communication is one of the fastest growing emerging research fields. In recent years, much progress has been achieved in developing nano machines supporting our needs in health care and other scenarios. However, experts agree that only the interaction among nano machines allows to address the very complex requirements in the field. Drug delivery and environmental control are only two of the many interesting application domains, which, at the same time, pose many new challenging problems. Very relevant communication concepts have been investigated such as RF radio communication in the terra hertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano communication security, i.e., will it be possible to protect such systems from manipulation by malicious parties? Our objective is to provide some first insights into the security challenges and to highlight some of the open research challenges in this field. The main observation is that especially for molecular communication existing security and cryptographic solutions might not be applicable. In this context, we coin the term biochemical cryptography that might lead to significant improvements in the field of molecular communication. We also point to relevant problems that have similarities with typical network architectures but also completely new challenges. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Security in traditional wireless networks <s> Incredible improvements in the field of nano-technologies have enabled nano-scale machines that promise new solutions for several applications in biomedical, industry and military fields. Some of these applications require or might exploit the potential advantages of communication and hence cooperative behavior of these nano-scale machines to achieve a common and challenging objective that exceeds the capabilities of a single device. Extensions to known wireless communication mechanisms as well as completely novel approaches have been investigated. Examples include RF radio communication in the terahertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano-communication security, i.e., how we can protect such systems from manipulation by malicious parties? 
Our objective in this paper is to provide some first insights into this new field and to highlight some of the open research challenges. We start from a discussion of classical security objectives and their relevance in nano-networking. Looking at the well-understood field of sensor networks, we derive requirements and investigate if and how available solutions can be applied to nano-communication. Our main observation is that, especially for molecular communication, existing security and cryptographic solutions might not be applicable. In this context, we coin the new term biochemical cryptography that might open a completely new research direction and lead to significant improvements in the field of molecular communication. We point out similarities with typical network architectures where they exist but also highlight completely new challenges where existing solutions do not apply. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Security in traditional wireless networks <s> In wireless body area network (BAN), node authentication is essential for trustworthy and reliable gathering of patient's critical health information. Traditional authentication solutions depend on prior trust among nodes whose establishment would require either key pre-distribution or non-intuitive participation by inexperienced users. Most existing non-cryptographic authentication schemes require advanced hardware or significant modifications to the system software, which are impractical for BANs. In this paper, for the first time, we propose a lightweight body area network authentication scheme BANA. Different from previous work, BANA does not depend on prior-trust among nodes and can be efficiently realized on commercial off-the-shelf low-end sensors. We achieve this by exploiting a unique physical layer characteristic naturally arising from the multi-path environment surrounding a BAN, i.e., the distinct received signal strength (RSS) variation behaviors among on-body channels and between on-body and off-body communication channels. Based on distinct RSS variations, BANA adopts clustering analysis to differentiate the signals from an attacker and a legitimate node. We also make use of multi-hop on-body channel characteristics to enhance the robustness of our authentication mechanism. The effectiveness of BANA is validated through extensive real-world experiments under various scenarios. It is shown that BANA can accurately identify multiple attackers with minimal amount of overhead. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> A. Security in traditional wireless networks <s> This paper presents a study on physical layer authentication problem for in vivo nano networks at terahertz (THz) frequencies. A system model based on envisioned nano network for in vivo body-centric nano communication is considered and distance-dependent pathloss based authentication is performed. Experimental data collected from THz time-domain spectroscopy setup shows that pathloss can indeed be used as a device fingerprint. Furthermore, simulation results clearly show that given a maximum tolerable false alarm rate, detection rate up to any desired level can be achieved within the feasible region of the proposed method. 
It is anticipated that this paper will pave a new paradigm for secured, authenticated nano network for future applications, e.g., drug delivery and Internet of nano-things-based intelligent office. <s> BIB005
|
In traditional wireless networks, communication between legitimate nodes is prone to active and passive attacks by adversaries due to the broadcast nature of the wireless medium. The literature has considered various kinds of attacks, e.g., impersonation attacks, Sybil attacks, replay attacks, sinkhole attacks, jamming, man-in-the-middle attacks, denial-of-service attacks, eavesdropping attacks, and selfish/malicious relays in cooperative communication systems, along with their potential (cryptography-based) solutions. More recently, researchers have started to develop security solutions at the physical layer by exploiting the unique characteristics of the physical/wireless medium. Some of the most significant problems in physical-layer security include intrusion detection/authentication, shared secret key generation, secrecy capacity maximization (for a wiretap channel), artificial noise generation, and the design of friendly jammers (in a cooperative communication system). Keeping this context in mind, we examine the following question: do the aforementioned security solutions hold for nano-scale communication? The answer is negative for MC-based nano networks, because information exchange using molecules instead of EM waves as carriers is a different regime altogether. On the other hand, we find that for EM-based nano networks operating at THz frequencies, some of the aforementioned concepts (if not the solutions) are still meaningful. THz spectroscopy and imaging have also been employed for security screening, e.g., the stand-off detection of concealed weapons, explosives, and illicit drugs BIB001; however, THz-based imaging systems are not the focus of this survey article. The survey articles BIB002, BIB003 review some of the fundamental security mechanisms for THz systems and conclude that traditional crypto-based mechanisms could be ported to THz systems, but that they need to be lightweight due to the limited processing capabilities of THz devices. The so-called BANA protocol proposed by Shi et al. in BIB004 addresses the security needs of the micro-macro link of a body area network. In BIB005, the authors consider a scenario where an on-body nano device communicates with an inside-body nano device, while a malicious node attempts to send malicious/harmful data to the inside-body node. To this end, the authors use the measured pathloss as the fingerprint of the transmitting nano device to perform authentication at the physical layer. Another work presents a device layout consisting of a micro-ring transceiver and a graphene-based panda ring resonator; the molecules are trapped in a whispering-gallery mode, the polarized light is transceived, and the device could be used as a molecular RFID system.
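The path-loss-fingerprint idea of BIB005 can be viewed as a binary hypothesis test: a frame is accepted only if the measured path loss deviates from the stored fingerprint of the legitimate transmitter by less than a threshold chosen for a target false-alarm rate. The sketch below is our own toy model under Gaussian measurement noise; the fingerprint, attacker offset, and noise level are assumed values, not figures from BIB005.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

pl_legit = 70.0     # stored fingerprint: path loss of the legitimate node [dB] (assumed)
pl_attacker = 78.0  # attacker at a different position, hence different path loss [dB] (assumed)
sigma = 2.0         # std-dev of path-loss measurement noise [dB] (assumed)
p_fa = 0.01         # target false-alarm probability

# Two-sided acceptance threshold on |measurement - fingerprint| for the target false-alarm rate
eps = sigma * norm.ppf(1 - p_fa / 2)

n = 100_000
legit_meas = pl_legit + sigma * rng.standard_normal(n)      # legitimate transmissions
attack_meas = pl_attacker + sigma * rng.standard_normal(n)  # attacker transmissions

accept_legit = np.abs(legit_meas - pl_legit) <= eps
accept_attack = np.abs(attack_meas - pl_legit) <= eps

print(f"threshold eps    = {eps:.2f} dB")
print(f"false-alarm rate = {1 - accept_legit.mean():.4f} (target {p_fa})")
print(f"detection rate   = {1 - accept_attack.mean():.4f} (attacker rejected)")
```

The usual trade-off applies: a tighter threshold raises the detection rate but also the false-alarm rate, which is exactly the operating-point choice discussed in BIB005.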
|
A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The microdot is a means of concealing messages (steganography)1 that was developed by Professor Zapp and used by German spies in the Second World War to transmit secret information2. A microdot (“the enemy's masterpiece of espionage”2) was a greatly reduced photograph of a typewritten page that was pasted over a full stop in an innocuous letter2. We have taken the microdot a step further and developed a DNA-based, doubly steganographic technique for sending secret messages. A DNA-encoded message is first camouflaged within the enormous complexity of human genomic DNA and then further concealed by confining this sample to a microdot. <s> BIB001 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Recent research has considered DNA as a medium for ultra-scale computation and for ultra-compact information storage. One potential key application is DNA-based, molecular cryptography systems. We present some procedures for DNA-based cryptography based on one-time-pads that are in principle unbreakable. Practical applications of cryptographic systems based on one-time-pads are limited in conventional electronic media by the size of the one-time-pad; however DNA provides a much more compact storage medium, and an extremely small amount of DNA suffices even for huge one-time-pads. We detail procedures for two DNA one-time-pad encryption schemes: (i) a substitution method using libraries of distinct pads, each of which defines a specific, randomly generated, pair-wise mapping; and (ii) an XOR scheme utilizing molecular computation and indexed, random key strings. These methods can be applied either for the encryption of natural DNA or for artificial DNA encoding binary data. In the latter case, we also present a novel use of chip-based DNA micro-array technology for 2D data input and output. Finally, we examine a class of DNA steganography systems, which secretly tag the input DNA and then hide it within collections of other DNA. We consider potential limitations of these steganographic techniques, proving that in theory the message hidden with such a method can be recovered by an adversary. We also discuss various modified DNA steganography methods which appear to have improved security. <s> BIB002 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The paper presents the principles of bio molecular computation (BMC) and several algorithms for DNA (deoxyribonucleic acid) steganography and cryptography: One-Time-Pad (OTP), DNA XOR OTP and DNA chromosomes indexing. It represents a synthesis of our work in the field, sustained by former referred publications. Experimental results obtained using Matlab Bioinformatics Toolbox and conclusions are ending the work. <s> BIB003 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Incredible improvements in the field of nano-technologies have enabled nano-scale machines that promise new solutions for several applications in biomedical, industry and military fields. 
Some of these applications require or might exploit the potential advantages of communication and hence cooperative behavior of these nano-scale machines to achieve a common and challenging objective that exceeds the capabilities of a single device. Extensions to known wireless communication mechanisms as well as completely novel approaches have been investigated. Examples include RF radio communication in the terahertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano-communication security, i.e., how we can protect such systems from manipulation by malicious parties? Our objective in this paper is to provide some first insights into this new field and to highlight some of the open research challenges. We start from a discussion of classical security objectives and their relevance in nano-networking. Looking at the well-understood field of sensor networks, we derive requirements and investigate if and how available solutions can be applied to nano-communication. Our main observation is that, especially for molecular communication, existing security and cryptographic solutions might not be applicable. In this context, we coin the new term biochemical cryptography that might open a completely new research direction and lead to significant improvements in the field of molecular communication. We point out similarities with typical network architectures where they exist but also highlight completely new challenges where existing solutions do not apply. <s> BIB004 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Nano communication is one of the fastest growing emerging research fields. In recent years, much progress has been achieved in developing nano machines supporting our needs in health care and other scenarios. However, experts agree that only the interaction among nano machines allows to address the very complex requirements in the field. Drug delivery and environmental control are only two of the many interesting application domains, which, at the same time, pose many new challenging problems. Very relevant communication concepts have been investigated such as RF radio communication in the terra hertz band or molecular communication based on transmitter molecules. Yet, one question has not been considered so far and that is nano communication security, i.e., will it be possible to protect such systems from manipulation by malicious parties? Our objective is to provide some first insights into the security challenges and to highlight some of the open research challenges in this field. The main observation is that especially for molecular communication existing security and cryptographic solutions might not be applicable. In this context, we coin the term biochemical cryptography that might lead to significant improvements in the field of molecular communication. We also point to relevant problems that have similarities with typical network architectures but also completely new challenges. <s> BIB005 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Molecular Communication (MC) is an emerging and promising communication paradigm for several multi-disciplinary domains like bio-medical, industry and military.
Differently to the traditional communication paradigm, the information is encoded on the molecules, that are then used as carriers of information. Novel approaches related to this new communication paradigm have been proposed, mainly focusing on architectural aspects and categorization of potential applications. So far, security and privacy aspects related to the molecular communication systems have not been investigated at all and represent an open question that need to be addressed. The main motivation of this paper lies on providing some first insights about security and privacy aspects of MC systems, by highlighting the open issues and challenges and above all by outlining some specific directions of potential solutions. Existing cryptographic methods and security approaches are not suitable for MC systems since do not consider the specific issues and challenges, that need ad-hoc solutions. We will discuss directions in terms of potential solutions by trying to highlight the main advantages and potential drawbacks for each direction considered. We will try to answer to the main questions: 1) why this solution can be exploited in the MC field to safeguard the system and its reliability? 2) which are the main issues related to the specific approach? <s> BIB006 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> The emergence of molecular communication has provided an avenue for developing biological nanonetworks. Synthetic biology is a platform that enables reprogramming cells, which we refer to as Bio-NanoThings, that can be assembled to create nanonetworks. In this paper, we focus on specific Bio-NanoThings, i.e., bacteria, where engineering their ability to emit or sense molecules can result in functionalities, such as cooperative target localization. Although this opens opportunities, e.g., for novel healthcare applications of the future, this can also lead to new problems, such as a new form of bioterrorism. In this paper, we investigate the disruptions that malicious Bio-NanoThings (M-BNTs) can create for molecular nanonetworks. In particular, we introduce two types of attacks: 1) blackhole and 2) sentry attacks. In blackhole attack M-BNTs emit attractant chemicals to draw-in the legitimate Bio-NanoThings (L-BNTs) from searching for their target, while in the sentry attack, the M-BNTs emit repellents to disperse the L-BNTs from reaching their target. We also present a countermeasure that L-BNTs can take to be resilient to the attacks, where we consider two forms of decision processes that includes Bayes’ rule as well as a simple threshold approach. We run a thorough set of simulations to assess the effectiveness of the proposed attacks as well as the proposed countermeasure. Our results show that the attacks can significantly hinder the regular behavior of Bio-NanoThings, while the countermeasures are effective for protecting against such attacks. <s> BIB007 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Eavesdroppers are notoriously difficult to detect and locate in traditional wireless communication systems, especially if they are silent.
We show that in molecular communications, where information molecules undergo random walk (RW) propagation, eavesdropper detection and localization are possible if the eavesdropper is an absorbing receiver. This is due to the fact that the RW process has a finite return probability, and the eavesdropper is a detectable energy sink of which its location can be reverse estimated. <s> BIB008 </s> A Comprehensive Survey on Hybrid Communication for Internet of Nano-Things in Context of Body-Centric Communications <s> C. Security in Molecular based Nanonetworks <s> Molecular communication in nanonetworks is an emerging communication paradigm that uses molecules as information carriers. Achieving a secure information exchange is one of the practical challenges that need to be considered to address the potential of molecular communications in nanonetworks. In this article, we have introduced secure channel into molecular communications to prevent eavesdropping. First, we propose a Diffie Hellman algorithm-based method by which communicating nanomachines can exchange a secret key through molecular signaling. Then, we use this secret key to perform ciphering. Also, we present both the algorithm for secret key exchange and the secured molecular communication system. The proposed secured system is found effective in terms of energy consumption. <s> BIB009
|
For MC networks, traditional crypto-based methods need to be replaced by so-called biochemical crypto techniques, whereby attacks as well as countermeasures are defined by the chemical reactions between molecules BIB004, BIB005. Various bio-inspired approaches to secure MC systems are proposed in BIB006, and different attacks are classified according to the (five) layers of the MC system in Table IV. From the table, we can see that besides the classical attacks, numerous other novel attacks are possible. Two kinds of attacks are discussed in BIB007: blackhole attacks, where malicious bio-nano things emit chemoattractants to draw other bio-nano things towards themselves and thus prevent them from carrying out their localization task, and sentry attacks, where malicious bio-nano things in the vicinity of the target cells emit chemo-repellents so that the legitimate bio-nano things cannot reach their target. References BIB008 and BIB009 consider situations in which an eavesdropper appears and causes trouble; the corresponding solutions are also discussed and evaluated. Additionally, in vesicle-based molecular transport, vesicles act like keys in MC networks and thus inherently support secure communication. Recently, cryptography researchers have worked extensively on DNA-inspired cryptography BIB001 [245] BIB002 BIB003, the crux of which is that DNA computing is a computationally hard problem of biological origin, just as Heisenberg's uncertainty principle is a hard problem of physics origin; it could therefore be applied for cryptographic purposes.
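To make the key-exchange idea of BIB009 concrete at a high level, the toy sketch below runs a textbook Diffie-Hellman exchange; in an actual MC system the exchanged public values would be encoded onto molecular signals (e.g., molecule counts or concentrations), a step that is abstracted away here. The parameters are deliberately small and purely illustrative.

```python
import secrets

# Toy public parameters: 2**127 - 1 is a Mersenne prime, far too small for real security.
p = 2**127 - 1
g = 5

def dh_keypair(prime=p, gen=g):
    """Draw a private exponent and compute the corresponding public value."""
    priv = secrets.randbelow(prime - 2) + 1
    pub = pow(gen, priv, prime)
    return priv, pub

# Nano-machine A and nano-machine B each generate a key pair ...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ... exchange only the public values (in an MC system these would be carried
# by molecular signalling), then derive the same shared secret locally.
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
assert shared_a == shared_b
print("shared secret established:", hex(shared_a)[:20] + "...")
```

The derived secret can then seed a lightweight cipher for subsequent molecular messages, which is the ciphering step described in BIB009.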
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> Data dissemination in dynamic environments such as vehicular networks has been a critical challenge. One of the key characteristics of vehicular networks is the high intermittent connectivity. Recent studies have investigated and proven the feasibility of a content-centric networking paradigm for vehicular networks. Content-centric information dissemination has a potential number of applications in vehicular networking, including advertising, traffic and parking notifications and emergency announcements. It is clear and evident that knowledge about the type of content and its relevance can enhance the performance of data dissemination in VANETs. In this paper we address the problem of information dissemination in vehicular network environments and propose a model and solution based on a content-centric approach of networking. We leverage the expansion properties of interacting nodes in a cluster to be interpreted in terms of social connections among nodes and perform a selective random network coding approach. We compare the reliability performance of our method with a conventional random network coding approach and comment on the complexity of the proposed solution. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> Traditionally, the vehicle has been the extension of the man's ambulatory system, docile to the driver's commands. Recent advances in communications, controls and embedded systems have changed this model, paving the way to the Intelligent Vehicle Grid. The car is now a formidable sensor platform, absorbing information from the environment (and from other cars) and feeding it to drivers and infrastructure to assist in safe navigation, pollution control and traffic management. The next step in this evolution is just around the corner: the Internet of Autonomous Vehicles. Pioneered by the Google car, the Internet of Vehicles will be a distributed transport fabric capable to make its own decisions about driving customers to their destinations. Like other important instantiations of the Internet of Things (e.g., the smart building), the Internet of Vehicles will have communications, storage, intelligence, and learning capabilities to anticipate the customers' intentions. The concept that will help transition to the Internet of Vehicles is the Vehicular Cloud, the equivalent of Internet cloud for vehicles, providing all the services required by the autonomous vehicles. In this article, we discuss the evolution from Intelligent Vehicle Grid to Autonomous, Internet-connected Vehicles, and Vehicular Cloud. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET <s> The peculiarities of the vehicular environment, characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, call into the question the capabilities of existing IP-based networking solutions to support the wide set of initially conceived and emerging vehicular applications. The research community is currently exploring groundbreaking approaches to transform the Internet. Among them, the Information-Centric Networking (ICN) paradigm appears as a promising solution to tackle the aforementioned challenges. 
By leveraging innovative concepts, such as named content, name-based routing, and in-network content caching, ICN well suits scenarios in which applications specify what they search for and not where they expect it to be provided and all that is required is a localized communication exchange. In this chapter, solutions are presented that rely on Content-Centric Networking (CCN), the most studied ICN approach for vehicular networks. The potential of ICN as the key enabler of the emerging vehicular cloud computing paradigm is also discussed. <s> BIB003
|
The research community and academia have explored and successfully deployed VANETs over the last two decades, and the field is currently gaining further popularity due to the rapid increase in the number of vehicles worldwide. VANET enters our daily life by equipping vehicles and their related objects, for example road side units (RSUs), with communication resources that enable vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), infrastructure-to-vehicle (I2V), and, more generically, vehicle-to-everything (V2X) communications. Sensors in vehicles allow the transmission of collected information, such as live video streams, between vehicles, V2X application servers, RSUs, and the cellphones of pedestrians. Vehicles can extend their network awareness beyond their immediate surroundings and obtain a more general view of the local situation. Remote driving permits a V2X application to operate a remote vehicle located in a hazardous zone, or to drive on behalf of people who are unable to drive. For scenarios with predictable routes, such as public transportation, driving based on cloud computing may be used; for this kind of scenario, a platform whose access is based on cloud services may be considered. VANET-related applications have several advantages; for instance, they reduce accidents and the resulting harm to individuals and vehicles BIB003. In addition, they save people time by providing traffic-related data, such as reports of busy and congested roads BIB002. ICN is an appealing candidate solution for vehicular communications due to its numerous benefits. First, it fits well with typical VANET applications, such as route reports and accident messages. These applications are likely to benefit from in-network content caching and forwarding strategies. Second, data caching speeds up content retrieval by keeping copies at different nodes BIB001. In vehicles, caching can typically be deployed at fairly low cost, as the energy demands of ICN nodes are likely to be a small fraction of the overall energy use of a vehicle, which allows high-level computation, uninterrupted data processing, and ample caching space in vehicles. In addition, ICN naturally supports asynchronous information sharing among end users. Besides these advantages, VANET also faces numerous challenges, for example vehicle mobility and network disruptions: if two vehicles are connected in a grid and change their routes after some distance, it becomes challenging for them to keep communicating with each other. To overcome these kinds of issues, a new clean-slate model is required so that a good user quality of experience can be achieved. ICN is believed to be the most suitable paradigm for smooth data transmission without extra retrieval delay in VANET environments.
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> There have been many recent papers on data-oriented or content-centric network architectures. Despite the voluminous literature, surprisingly little clarity is emerging as most papers focus on what differentiates them from other proposals. We begin this paper by identifying the existing commonalities and important differences in these designs, and then discuss some remaining research issues. After our review, we emerge skeptical (but open-minded) about the value of this approach to networking. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> The current Internet architecture was founded upon a host-centric communication model, which was appropriate for coping with the needs of the early Internet users. Internet usage has evolved however, with most users mainly interested in accessing (vast amounts of) information, irrespective of its physical location. This paradigm shift in the usage model of the Internet, along with the pressing needs for, among others, better security and mobility support, has led researchers into considering a radical change to the Internet architecture. In this direction, we have witnessed many research efforts investigating Information-Centric Networking (ICN) as a foundation upon which the Future Internet can be built. Our main aims in this survey are: (a) to identify the core functionalities of ICN architectures, (b) to describe the key ICN proposals in a tutorial manner, highlighting the similarities and differences among them with respect to those core functionalities, and (c) to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Vehicular Ad-hoc Networks (VANETs) are seen as the key enabling technology of Intelligent Transportation Systems (ITS). In addition to safety, VANETs also provide a cost-effective platform for numerous comfort and entertainment applications. A pragmatic solution of VANETs requires synergistic efforts in multidisciplinary areas of communication standards, routings, security and trust. Furthermore, a realistic VANET simulator is required for performance evaluation. There have been many research efforts in these areas, and consequently, a number of surveys have been published on various aspects. In this article, we first explain the key characteristics of VANETs, then provide a meta-survey of research works. We take a tutorial approach to introducing VANETs and gradually discuss intricate details. Extensive listings of existing surveys and research projects have been provided to assess development efforts. The article is useful for researchers to look at the big picture and channel their efforts in an effective way. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Information-centric networking (ICN) is a new communication paradigm that focuses on content retrieval from a network regardless of the storage location or physical representation of this content. In ICN, securing the content itself is much more important than securing the infrastructure or the endpoints. 
To achieve the security goals in this new paradigm, it is crucial to have a comprehensive understanding of ICN attacks, their classification, and proposed solutions. In this paper, we provide a survey of attacks unique to ICN architectures and other generic attacks that have an impact on ICN. It also provides a taxonomy of these attacks in ICN, which are classified into four main categories, i.e., naming, routing, caching, and other miscellaneous related attacks. Furthermore, this paper shows the relation between ICN attacks and unique ICN attributes, and that between ICN attacks and security requirements, i.e., confidentiality, integrity, availability, and privacy. Finally, this paper presents the severity levels of ICN attacks and discusses the existing ICN security solutions. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> In the connected vehicle ecosystem, a high volume of information-rich and safety-critical data will be exchanged by roadside units and onboard transceivers to improve the driving and traveling experience. However, poor-quality wireless links and the mobility of vehicles highly challenge data delivery. The IP address-centric model of the current Internet barely works in such extremely dynamic environments and poorly matches the localized nature of the majority of vehicular communications, which typically target specific road areas (e.g., in the proximity of a hazard or a point of interest) regardless of the identity/address of a single vehicle passing by. Therefore, a paradigm shift is advocated from traditional IP-based networking toward the groundbreaking information- centric networking. In this article, we scrutinize the applicability of this paradigm in vehicular environments by reviewing its core functionalities and the related work. The analysis shows that, thanks to features like named content retrieval, innate multicast support, and in-network data caching, information-centric networking is positioned to meet the challenging demands of vehicular networks and their evolution. Interoperability with the standard architectures for vehicular applications along with synergies with emerging computing and networking paradigms are debated as future research perspectives. <s> BIB005 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Issues <s> Information-Centric Networking (ICN) treats content as a first-class entity — each content has a unique identity and ICN routers forward traffic based on content identity rather than the locations of the content. This provides benefits like dynamic request routing, caching and mobility support. The choice of naming schema (flat vs. hierarchical) is a fundamental design choice in ICN which determines the functional separation between the network layer and the application layer. With hierarchical names, the network layer is cognizant of the semantics of hierarchical names. Name space management is also part of network layer. ICN architectures using flat names leave these to the application layer. The naming schema affects the performance and scalability of the network in terms of forwarding efficiency, routing table size and name space size. This paper provides both qualitative and quantitative comparison on the two naming schemas using these metrics, noting that they are interdependent. 
We seek to understand which naming schema would be better for a high-performance, scalable ICN architecture. <s> BIB006
|
The existing papers, such as BIB003 BIB005 [10], address ICN features in the context of the VANET approach and motivate this article. However, these papers present VANET issues, in part or in whole, in terms of security, routing, mobility, and scalability. In this paper, we investigate ICN-based VANET challenges with respect to security, routing, mobility, naming, and caching, which are serious issues that must be resolved when ICN is deployed in the VANET environment. These modules have received tremendous interest from the VANET research community working towards ICN-based VANET deployment. Security, among other features, is the most vital requirement of a wireless sensor network, and its issues can largely be resolved; however, there are still some scenarios in which ICN may face problems. These issues can be categorized into several modes, such as denial-of-service (DoS) attacks, which may be further divided into numerous types, for example attacks on (a) authentication, (b) availability, and (c) confidentiality. Authentication attacks are further divided into Sybil and impersonation attacks: in the former, the attacker uses multiple identities at the same time, whereas in the latter, the attacker poses as a genuine user. Availability is the most critical element in the VANET environment, and attackers therefore target this aspect; the main aim of attackers in this category is to put users' lives at risk. This element has been treated rigorously in the literature, and interested readers are referred to [10] for an in-depth treatment. In the third module, confidentiality, information must be accessed only by authorized parties; in other words, it must be hidden from unauthorized users. Confidentiality is prone to attacks because information is exchanged over a public network; however, attacks on confidentiality can be largely avoided in ICN, as content requests are based on names rather than IP addresses. ICN routing mechanisms fall into two classes, i.e., name-based routing and name resolution. In name-based routing, a content request is forwarded on the basis of the content name, state related to the request is stored along the publisher-subscriber path, and the content itself is then delivered to the subscriber along the reverse path. Name resolution, on the other hand, is achieved in two steps: first, the content name is mapped to a single IP address or a group of IP addresses; second, a shortest path through the network is followed using a protocol such as Open Shortest Path First (OSPF), and subscribers' requests are forwarded to the content publisher BIB004. The existing Internet was designed for fixed devices, where a node's IP address should belong to the subnet of the network to which it is attached. Nevertheless, the number of non-fixed nodes is growing persistently, and wireless traffic, generated by some 27 billion devices, is expected to account for more than 63% of total IP traffic by 2021. Mobile devices can easily switch networks and change their IP addresses, and therefore offer novel transmission opportunities based on opportunistic and intermittent connectivity BIB002. Nonetheless, such approaches do not achieve uninterrupted connectivity, which has largely become an essential requirement. ICN naming may be divided into two classes, i.e., hierarchical and flat naming. In the hierarchical approach, a name is composed of several hierarchical elements,
Where an element may be a series of characters that is created by subscribers. Hierarchical names are easy to understand but they are non-persistent BIB004 . On the other hand, a flat name approach is more useful as compared to hierarchical one because a hash-table is used to identify the next hop in case of a content request BIB006 . The supremacy of flat names over hierarchical names is such that flat names can be subdivided and therefore parallel processing can be achieved. ICN In-network caching can be achieved using three principles, i.e., democratic, uniform, and pervasive BIB001 . In the democratic principle, all network nodes have equal rights to publish contents if they have cached them already. In the uniform principle, a routing protocol may be used for all contents or all network nodes, if required. Pervasive principle demands that a cached content should be available/provided to all nodes in the network.
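To make the distinction between the two naming classes concrete, the following Python sketch (our own illustration, not taken from the cited works) derives a flat identifier by hashing a hierarchical name and uses a hash table to look up the next hop; all names and node identifiers are hypothetical.

```python
import hashlib

def hierarchical_components(name: str):
    """Split a hierarchical name such as /vanet/roadA/traffic into components."""
    return [c for c in name.split("/") if c]

def flat_name(name: str) -> str:
    """Derive a flat, fixed-length identifier by hashing the full name."""
    return hashlib.sha256(name.encode()).hexdigest()[:16]

# Hypothetical forwarding table: flat identifier -> next-hop node.
next_hop_table = {}

def register(name: str, hop: str):
    next_hop_table[flat_name(name)] = hop

def lookup_next_hop(name: str):
    # Unknown names fall back to broadcast, as is common in vehicular settings.
    return next_hop_table.get(flat_name(name), "flood/broadcast")

register("/vanet/roadA/traffic", "RSU-12")
print(hierarchical_components("/vanet/roadA/traffic"))  # ['vanet', 'roadA', 'traffic']
print(lookup_next_hop("/vanet/roadA/traffic"))          # RSU-12
print(lookup_next_hop("/vanet/roadB/accident"))         # flood/broadcast
```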
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> An Overview of Research Challenges <s> The current Internet architecture was founded upon a host-centric communication model, which was appropriate for coping with the needs of the early Internet users. Internet usage has evolved however, with most users mainly interested in accessing (vast amounts of) information, irrespective of its physical location. This paradigm shift in the usage model of the Internet, along with the pressing needs for, among others, better security and mobility support, has led researchers into considering a radical change to the Internet architecture. In this direction, we have witnessed many research efforts investigating Information-Centric Networking (ICN) as a foundation upon which the Future Internet can be built. Our main aims in this survey are: (a) to identify the core functionalities of ICN architectures, (b) to describe the key ICN proposals in a tutorial manner, highlighting the similarities and differences among them with respect to those core functionalities, and (c) to identify the key weaknesses of ICN proposals and to outline the main unresolved research challenges in this area of networking research. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> An Overview of Research Challenges <s> Content dissemination in Vehicular Ad-hoc Networks has a myriad of applications, ranging from advertising and parking notifications, to traffic and emergency warnings. This heterogeneity requires optimizing content storing, retrieval and forwarding among vehicles to deliver data with short latency and without jeopardizing network resources. In this paper, for a few reference scenarios, we illustrate how approaches that combine Content Centric Networking (CCN) and Floating Content (FC) enable new and efficient solutions to this issue. Moreover, we describe how a network architecture based on Software Defined Networking (SDN) can support both CCN and FC by coordinating distributed caching strategies, by optimizing the packet forwarding process and the availability of floating data items. For each scenario analyzed, we highlight the main research challenges open, and we describe a few possible solutions. <s> BIB002
|
In this section, we present the imperative ICN-based VANET challenges, shown in Figure 2, that need to be addressed and resolved before ICN deployment. The main goal is to attract the attention of the ICN, SDN, Edge, and VANET research communities towards merging these attractive models on one platform. The taxonomy of these models, which is based on the papers BIB002 , is presented in Figure 3. In ICN, contents are forwarded hop-by-hop by in-network nodes, with each node holding three data structures: the Pending Interest Table (PIT), which records the interfaces through which content requests arrive; the Forwarding Information Base (FIB), which maps content names to output interfaces; and the Content Store (CS), which caches contents locally BIB001 .
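The following minimal Python sketch (a simplified illustration, not an implementation of any cited architecture) shows how these three data structures interact when an Interest and the corresponding Data arrive at a node; the face labels and content names are hypothetical.

```python
class ICNNode:
    def __init__(self, fib):
        self.cs = {}       # Content Store: name -> content
        self.pit = {}      # Pending Interest Table: name -> set of incoming faces
        self.fib = fib     # Forwarding Information Base: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        if name in self.cs:                              # cache hit: reply from CS
            return ("data", name, self.cs[name], in_face)
        self.pit.setdefault(name, set()).add(in_face)    # remember the requester
        for prefix, out_face in self.fib.items():        # forward via FIB prefix match
            if name.startswith(prefix):
                return ("interest", name, None, out_face)
        return None                                      # no route known

    def on_data(self, name, content):
        self.cs[name] = content                          # opportunistically cache
        faces = self.pit.pop(name, set())                # satisfy pending requests
        return [("data", name, content, f) for f in faces]

node = ICNNode(fib={"/vanet/": "face-to-RSU"})
print(node.on_interest("/vanet/roadA/traffic", "face-from-car7"))
print(node.on_data("/vanet/roadA/traffic", "congestion ahead"))
print(node.on_interest("/vanet/roadA/traffic", "face-from-car9"))  # now a CS hit
```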
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> With the advances in telecommunications, more and more devices are connected to the Internet and getting smart. As a promising application scenario for carrier networks, vehicular communication has enabled many traffic-related applications. However, the heterogeneity of wireless infrastructures and the inflexibility in protocol deployment hinder the real world application of vehicular communications. SDN is promising to bridge the gaps through unified network abstraction and programmability. In this research, we propose an SDN-based architecture to enable rapid network innovation for vehicular communications. Under this architecture, heterogeneous wireless devices, including vehicles and roadside units, are abstracted as SDN switches with a unified interface. In addition, network resources such as bandwidth and spectrum can also be allocated and assigned by the logically centralized control plane, which provides a far more agile configuration capability. Besides, we also study several cases to highlight the advantages of the architecture, such as adaptive protocol deployment and multiple tenants isolation. Finally, the feasibility and effectiveness of the proposed architecture and cases are validated through traffic-trace-based simulation. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> This paper provides an overview of software-defined “hardware” infrastructures (SDHI). SDHI builds upon the concept of hardware (HW) resource disaggregation. HW resource disaggregation breaks today’s physical server-oriented model where the use of a physical resource (e.g., processor or memory) is constrained to a physical server’s chassis. SDHI extends the definition of of software-defined infrastructures (SDI) and brings greater modularity, flexibility, and extensibility to cloud infrastructures, thus allowing cloud operators to employ resources more efficiently and allowing applications not to be bounded by the physical infrastructure’s layout. This paper aims to be an initial introduction to SDHI and its associated technological advancements. This paper starts with an overview of the cloud domain and puts into perspective some of the most prominent efforts in the area. Then, it presents a set of differentiating use-cases that SDHI enables. Next, we state the fundamentals behind SDI and SDHI, and elaborate why SDHI is of great interest today. Moreover, it provides an overview of the functional architecture of a cloud built on SDHI, exploring how the impact of this transformation goes far beyond the cloud infrastructure level in its impact on platforms, execution environments, and applications. Finally, an in-depth assessment is made of the technologies behind SDHI, the impact of these technologies, and the associated challenges and potential future directions of SDHI. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> SDN-Based Vehicular ICN <s> Information-Centric Networking (ICN) is an appealing architecture that has received a remarkable interest from the research community thanks to its friendly structure. Several projects have proposed innovative ICN models to cope with the Internet practice, which moves from host-centrism to receiver-driven communication. 
A worth mentioning component of these novel models is in-network caching, which provides flexibility and pervasiveness for the upturn of swiftness in data distribution. Because of the rapid Internet traffic growth, cache deployment and content caching have been unanimously accepted as conspicuous ICN issues to be resolved. In this article, a survey of cache management strategies in ICN is presented along with their contributions and limitations, and their performance is evaluated in a simulation network environment with respect to cache hit, stretch ratio, and eviction operations. Some unresolved ICN caching challenges and directions for future research in this networking area are also discussed. <s> BIB003
|
SDN brings a new idea to the Cloud BIB002 by introducing resource disaggregation, where all resources are pooled and supervised by software. Unlike the legacy physical server model, SDN treats hardware resources as discrete and flexible elements. With the deployment of SDN, the rigid client-server arrangement disappears; only the information describing the various hardware resources is encoded and processed. The administrator of a Cloud architecture then deploys a particular client-server environment, for example, logical clients and servers, which brings a high level of flexibility to the Cloud environment. However, this approach requires the physical resources to be connected through a high-speed communication channel in order to cope with the physical disaggregation of resources BIB002 . Moreover, SDN is an evolving idea that separates the data plane from the control plane, where the controller collects information from network nodes and provides an abstract view of the network. In addition, all SDN applications can access the SDN controller, through which they deploy various network services . In the context of ICN-based VANET, SDN provides scalability, manageability, and a global view of the network. Among the different ICN modules, in-network caching is one of the most popular approaches BIB003 , as it improves the content availability ratio and decreases the content retrieval delay. SDN is a suitable choice for improving caching, as it helps in distinguishing diverse kinds of cached contents. However, linking ICN and SDN in the VANET environment is a challenging task due to vehicles' mobility. An SDN-based VANET scheme is proposed in BIB001 , where different wireless nodes are treated as SDN switches that share the network bandwidth under a centralized control plane. This technique seems suitable for SDN-based VANETs, but it does not consider the most promising paradigm, i.e., ICN. Thus, an intelligent and flexible method is required for the ICN-based VANET communication paradigm to share network resources with smooth content mobility and caching.
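As a rough illustration of how a logically centralized controller with a global view could assist ICN caching in a VANET, the sketch below (our own assumption, not a scheme taken from BIB001) lets a controller pick the least-loaded RSU that still has room for a content item; RSU identifiers, the load metric, and capacities are hypothetical.

```python
class Controller:
    def __init__(self):
        self.rsus = {}   # rsu_id -> {"load": float, "free_cache": int}

    def register_rsu(self, rsu_id, free_cache):
        self.rsus[rsu_id] = {"load": 0.0, "free_cache": free_cache}

    def report_load(self, rsu_id, load):
        # Periodic state reports give the controller its global view.
        self.rsus[rsu_id]["load"] = load

    def place_cache(self, content_size):
        """Choose the least-loaded RSU that still has room for the content."""
        candidates = [(info["load"], rsu_id)
                      for rsu_id, info in self.rsus.items()
                      if info["free_cache"] >= content_size]
        if not candidates:
            return None
        _, best = min(candidates)
        self.rsus[best]["free_cache"] -= content_size
        return best

ctrl = Controller()
ctrl.register_rsu("RSU-1", free_cache=100)
ctrl.register_rsu("RSU-2", free_cache=50)
ctrl.report_load("RSU-1", 0.8)
ctrl.report_load("RSU-2", 0.3)
print(ctrl.place_cache(content_size=40))   # RSU-2: less loaded, enough space
```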
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we conduct a survey of research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> We present a Selective Neighbor Caching (SNC) approach for enhancing seamless mobility in ICN architectures. The approach is based on proactively caching information requests and the corresponding items to a subset of proxies that are one hop away from the proxy a mobile is currently connected to. A key contribution of this paper is the definition of a target cost function that captures the tradeoff between delay and cache cost, and a simple procedure for selecting the appropriate subset of neighbors which considers the mobility behavior of users. We present investigations for the steady-state and transient performance of the proposed scheme which identify and quantify its gains compared to proactively caching in all neighbor proxies and to the case where no caching is performed. Moreover, our investigations show how these gains are affected by the delay and cache cost, and the mobility behavior. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> The Internet is straining to meet demands that its design never anticipated, such as supporting billions of mobile devices and transporting huge amounts of multimedia content. 
The publish-subscribe Internet (PSI) architecture, a clean slate information-centric networking approach to the future Internet, was designed to satisfy the current and emerging user demands for pervasive content delivery, which the Internet can no longer handle. This article provides an overview of the PSI architecture, explaining its operation from bootstrapping to information delivery, focusing on its support for network layer caching and seamless mobility, which make PSI an excellent platform for ubiquitous information delivery. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Cloud computing datacenter hosts hundreds of thousands of servers that coordinate users' tasks in order to deliver highly available computing service. These servers consist of multiple memory modules, network cards, storage disks, processors etc…, each of these components while capable of failing. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to performance degradation to users and can result in losses to the business. Fault tolerant is one of efficient modules that keep hardware in operational mode as much as possible. In this paper, we survey the most famous fault tolerance technique in cloud computing, and list numerous FT methods proposed by the research experts in this field. <s> BIB005 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Migration can contribute to efficient resource management in cloud computing environment. Migration is used in many areas such as power reduction, load balancing, and fault tolerance in Cloud Dater Centers (CDCs). However most of the previous works concentrated on the implementation of migration technology itself. Thus, it need to consider metrics which may impact the migration performance and energy efficiency. In this paper we summarize and classify previous approaches of migration in CDCs. Furthermore, we conclude with a discussion of research problems in this area. In the future work, we will study on live migration mechanism to improve the live migration performance and energy efficiency in the variety of CDCs. 
<s> BIB006 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Cloud-Based Vehicular ICN <s> Recently, a series of innovative information-centric networking (ICN) architectures have been designed to better address the shift from host-centric end-to-end communication to requester-driven content retrieval. With the explosive increase of mobile data traffic, the mobility issue in ICN is a growing concern and a number of approaches have been proposed to deal with the mobility problem in ICN. Despite the potential advantages of ICN in mobile wireless environments, several significant research challenges remain to be addressed before its widespread deployment, including consistent routing, local cached content discovery, energy efficiency, privacy, security and trust, and practical deployment. In this paper, we present a brief survey on some of the works that have already been done to achieve mobile ICN, and discuss some research issues and challenges. We identify several important aspects of mobile ICN: overview, mobility enabling technologies, information-centric wireless mobile networks, and research challenges. <s> BIB007
|
Unlike data processing and execution in a local area network (LAN), Cloud computing is a concept in which computation relies on different resources shared across a wide area network (WAN). Usually, the Cloud computing architecture is built on large-scale, interconnected data centers through which users access their desired resources over the Internet BIB005 . At present, companies such as Google, Microsoft, and Amazon operate Cloud data centers BIB001 to store huge amounts of records and host widely used service applications BIB006 . Because of hardware and/or software constraints and security considerations, using data centers to store records is indispensable BIB001 BIB006 . However, service provisioning in this setting becomes a crucial problem. Given the exponential growth in Internet traffic, using vehicles as cloud nodes is an appealing idea, since many vehicles are equipped with caching and sensing capabilities that support driving safety as well as passenger infotainment. Using the cloud as an infotainment-only feature is easy in the current IP-based Internet paradigm; however, linking this feature with the ICN model is challenging due to vehicles' mobility. Using the concept of naming in ICN, seamless mobility can be supported without the complex network administration needed in IP-based networks when the topological or physical locations of mobile nodes change BIB007 . In the last few years, various strategies have been proposed to address this challenge from the viewpoints of publisher and subscriber mobility. Proposals for enabling subscriber mobility generally rely on proactive caching BIB002 and prompt recovery of request/reply exchanges BIB004 . Basically, ICN is a publish-subscribe networking model in which subscribers are mainly interested in the actual contents rather than their locations BIB003 ; its primary focus is on content retrieval that allows subscribers to obtain the requested information. Thus, it is a promising model for resolving several ubiquitous challenges faced by the IP-based infrastructure, for example, mobility and security, among others BIB007 . Mobile cloud services can be integrated with the ICN-based VANET to provide access to the stored information. However, because ICN accesses content by name, a feasible connectivity approach is required for a smooth transition. This facility nevertheless raises several new challenges in the network model, which are discussed in Section 5.
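The publish-subscribe decoupling described above can be pictured with the following minimal sketch (illustrative only, not a cited design): subscribers ask for a name rather than a location, and a publication made later still satisfies waiting requests; the broker abstraction and names are hypothetical.

```python
class PubSubBroker:
    def __init__(self):
        self.published = {}   # name -> content (could live in any cloud node)
        self.waiting = {}     # name -> list of subscriber callbacks

    def publish(self, name, content):
        self.published[name] = content
        # A late publication still satisfies earlier subscriptions.
        for deliver in self.waiting.pop(name, []):
            deliver(name, content)

    def subscribe(self, name, deliver):
        if name in self.published:
            deliver(name, self.published[name])      # already available
        else:
            self.waiting.setdefault(name, []).append(deliver)

broker = PubSubBroker()
broker.subscribe("/vanet/parking/zoneB", lambda n, c: print("got", n, "->", c))
broker.publish("/vanet/parking/zoneB", "12 free slots")   # subscriber served now
```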
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> In many aspects of human activity, there has been a continuous struggle between the forces of centralization and decentralization. Computing exhibits the same phenomenon; we have gone from mainframes to PCs and local networks in the past, and over the last decade we have seen a centralization and consolidation of services and applications in data centers and clouds. We position that a new shift is necessary. Technological advances such as powerful dedicated connection boxes deployed in most homes, high capacity mobile end-user devices and powerful wireless networks, along with growing user concerns about trust, privacy, and autonomy requires taking the control of computing applications, data, and services away from some central nodes (the "core") to the other logical extreme (the "edge") of the Internet. We also position that this development can help blurring the boundary between man and machine, and embrace social computing in which humans are part of the computation and decision making loop, resulting in a human-centered system design. We refer to this vision of human-centered edge-device based computing as Edge-centric Computing. We elaborate in this position paper on this vision and present the research challenges associated with its implementation. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> As a meaningful and typical application scenario of Internet of Things (IoT), Internet of Vehicles (IoV) has attracted a lot of attentions to solve the increasingly severe problem of traffic congestion and safety issues in smart city. Information-centric networks (ICN) is a main stream of next generation network because of its content based forwarding strategy and in-network caching properties. Many existing works have been done to introduce ICN in IoV because of IP-based network architecture's maladjustment of such extremely mobile and dynamic IoV environment. In contrast, ICN is able to sustain packet delivery in unreliable and extreme environment. However, the frequent mobility of the vehicles will consume ICN's network resources to incessantly update the Forward Information Base (FIB) which will further affect the aggregation processing. For example, geographic location based name schema aggregation will be affected dramatically by mobility problem. 
On the other hand, fog computing is an edge computing technology usually integrated with IoT by bringing computation and storage capacity near the underlying networks to provide low-latency and time sensitive services. In this paper, we integrate fog computing into information-centric IoV to provide mobility support by allocating different schema taking account of the data characteristic (e.g., user-shareable data, communication data). Moreover, we use the computation, storage and location-aware capabilities of fog to design a mobility support mechanism for data exchange and communication considering the feature of IoV service (e.g., alarm danger in local, updating traffic information, V2V communication etc.). We evaluate related performances of the proposed mechanism in high mobility environment, compared with original information-centric IoV. The result shows the advantages of a fog computing based information-centric IoV. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Edge Computing-Based Vehicular ICN <s> Internet of Things (IoT) allows billions of physical objects to be connected to collect and exchange data for offering various applications, such as environmental monitoring, infrastructure management, and home automation. On the other hand, IoT has unsupported features (e.g., low latency, location awareness, and geographic distribution) that are critical for some IoT applications, including smart traffic lights, home energy management and augmented reality. To support these features, fog computing is integrated into IoT to extend computing, storage and networking resources to the network edge. Unfortunately, it is confronted with various security and privacy risks, which raise serious concerns towards users. In this survey, we review the architecture and features of fog computing and study critical roles of fog nodes, including real-time services, transient storage, data dissemination and decentralized computation. We also examine fog-assisted IoT applications based on different roles of fog nodes. Then, we present security and privacy threats towards IoT applications and discuss the security and privacy requirements in fog computing. Further, we demonstrate potential challenges to secure fog computing and review the state-of-the-art solutions used to address security and privacy issues in fog computing for IoT applications. Finally, by defining several open research issues, it is expected to draw more attention and efforts into this new architecture. <s> BIB004
|
Edge computing BIB001 is deployed to bring computing resources close to data subscribers. The Edge computing architecture is decentralized and uses network nodes to jointly perform a considerable amount of processing, data caching, and control BIB002 BIB004 . With the help of Edge nodes and their network connections, processing overhead is largely decreased and bandwidth restrictions are overcome for consolidated services . The attraction of Edge computing grows with the provision of on-demand services and the availability of resources near consumer devices, which results in low response times and greater consumer satisfaction . Data sharing in vehicular networks has been a growing concern for the last few years. Three types of data sharing are deemed the most prominent practices in vehicular networks, i.e., warning messages about accidents, reminders for the prevention of vehicle crashes, and notices about road congestion BIB003 . Besides, infotainment content availability for passengers has also attracted the attention of vehicular research forums. In this regard, Edge computing, which brings storage capacity and computational processes near the customers for content retrieval with minimal delay, is being integrated with vehicular networks . However, the integration of ICN with vehicular Edge computing is a challenging issue due to the continuous mobility of vehicles. That is, content stored in an Edge node (a vehicle) and accessed by a passenger in a moving car can be reached relatively easily in the IP-based Internet model, but relating this to ICN, which relies on names rather than IPs, is a far more complicated job. This can be resolved by storing an accessed content at several RSUs, at the cost of redundancy. Yet, according to the Cisco Visual Networking Index (VNI) , mobile traffic by 2021, with 5.5 billion users, will reach seven times the current volume, which further complicates vehicular ICN. In other words, RSU storage would be exhausted in a moment, requiring replacement of already-stored contents; in this scenario, the requesting node (vehicle) will not find the previously cached content. This motivates researchers to design an intelligent and sophisticated mechanism that paves the way to integrate Edge computing and ICN with vehicular networks.
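A simplified way to reason about the RSU-replication idea mentioned above is sketched below (our own assumption, not a standardized mechanism): each RSU keeps a small LRU content store, and replicating a content at several RSUs lets a moving vehicle still find it after a handover, at the cost of redundancy; cache sizes and names are hypothetical.

```python
from collections import OrderedDict

class RSUCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> content, ordered by recency

    def put(self, name, content):
        self.store[name] = content
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:    # evict least recently used
            self.store.popitem(last=False)

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)       # refresh recency on a hit
            return self.store[name]
        return None

rsus = {rid: RSUCache() for rid in ("RSU-A", "RSU-B")}
for rid in rsus:                               # replicate at several RSUs
    rsus[rid].put("/vanet/map/sector7", "map tile")
print(rsus["RSU-B"].get("/vanet/map/sector7"))  # still found after handover
```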
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Research Opportunities <s> Vehicle-to-anything (V2X) communications refer to information exchange between a vehicle and various elements of the intelligent transportation system (ITS), including other vehicles, pedestrians, Internet gateways, and transport infrastructure (such as traffic lights and signs). The technology has a great potential of enabling a variety of novel applications for road safety, passenger infotainment, car manufacturer services, and vehicle traffic optimization. Today, V2X communications is based on one of two main technologies: dedicated short-range communications (DSRC) and cellular networks. However, in the near future, it is not expected that a single technology can support such a variety of expected V2X applications for a large number of vehicles. Hence, interworking between DSRC and cellular network technologies for efficient V2X communications is proposed. This paper surveys potential DSRC and cellular interworking solutions for efficient V2X communications. First, we highlight the limitations of each technology in supporting V2X applications. Then, we review potential DSRC-cellular hybrid architectures, together with the main interworking challenges resulting from vehicle mobility, such as vertical handover and network selection issues. In addition, we provide an overview of the global DSRC standards, the existing V2X research and development platforms, and the V2X products already adopted and deployed in vehicles by car manufactures, as an attempt to align academic research with automotive industrial activities. Finally, we suggest some open research issues for future V2X communications based on the interworking of DSRC and cellular network technologies. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Research Opportunities <s> The developments of connected vehicles are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme. <s> BIB002
|
Generally, VANET communications are based on two key techniques, i.e., cellular networking and dedicated short range communications (DSRC) BIB001 . Through these two technologies, a vehicle obtains information from its own sensors and provides it to other vehicles or RSUs BIB002 . Currently, moving computation, management, and content caching to the Edge, ICN, and Cloud is a growing trend. This shift, however, brings many new system requirements, which are presented in Figure 4 and discussed in the following subsections.
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Mobility <s> 'Where's' in a name? <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Mobility <s> Wireless network virtualization and information-centric networking (ICN) are two promising techniques in software-defined 5G mobile wireless networks. Traditionally, these two technologies have been addressed separately. In this paper we show that integrating wireless network virtualization with ICN techniques can significantly improve the end-to-end network performance. In particular, we propose an information- centric wireless network virtualization architecture for integrating wireless network virtualization with ICN. We develop the key components of this architecture: radio spectrum resource, wireless network infrastructure, virtual resources (including content-level slicing, network-level slicing, and flow-level slicing), and informationcentric wireless virtualization controller. Then we formulate the virtual resource allocation and in-network caching strategy as an optimization problem, considering the gain of not only virtualization but also in-network caching in our proposed information-centric wireless network virtualization architecture. The obtained simulation results show that our proposed information-centric wireless network virtualization architecture and the related schemes significantly outperform the other existing schemes. <s> BIB002
|
Mobility is a critical challenge to ICN deployment across all emerging technologies BIB001 . In ICN, when subscribers change their location, their connectivity moves from one node to another. However, as no IP address is used to route contents, this change is transparent, in contrast to IP, where addresses must change BIB002 . In the VANET environment, content objects must pass through a centralized facilitator (a vehicle equipped with sensors) before reaching the actual subscriber. This is the most crucial module, because contents in VANETs travel along a longer path rather than the best one. Mobility in ICN is achieved through a paradigm known as the publish-subscribe Internet model. In this approach, interested subscribers request particular contents by sending request messages without knowing the content's location, and the publisher responds with the actual content BIB002 . This phenomenon guarantees secure content distribution because publisher and subscriber are decoupled, and it thereby catches the attention of the ICN and VANET communities for integrating these emerging models. However, providing mobility support in ICN-based VANETs requires smart and suitable techniques so that content is routed to a particular destination without extra retrieval delay.
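The subscriber-side handling of mobility can be pictured with the following simplified sketch (our own illustration): after a handover, a vehicle merely re-expresses its still-pending interests through the new attachment point, with no address reconfiguration; class names and RSU identifiers are hypothetical.

```python
class RSU:
    def __init__(self, rid):
        self.rid = rid

    def forward(self, name):
        print(f"{self.rid} forwards interest for {name}")

class MobileConsumer:
    def __init__(self):
        self.pending = set()          # names requested but not yet satisfied

    def request(self, name, attachment):
        self.pending.add(name)
        attachment.forward(name)

    def handover(self, new_attachment):
        for name in self.pending:     # re-issue interests via the new RSU
            new_attachment.forward(name)

    def on_data(self, name):
        self.pending.discard(name)    # interest satisfied, stop re-issuing

car = MobileConsumer()
car.request("/vanet/traffic/junction5", RSU("RSU-1"))
car.handover(RSU("RSU-2"))            # same name, new attachment point
car.on_data("/vanet/traffic/junction5")
```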
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> In this paper we apply the Named Data Networking [8], a newly proposed Internet architecture, to networking vehicles on the run. Our initial design, dubbed V-NDN, illustrates NDN's promising potential in providing a unifying architecture that enables networking among all computing devices independent from whether they are connected through wired infrastructure, ad hoc, or intermittent DTN. This paper describes a prototype implementation of V-NDN and its preliminary performance assessment, and identifies remaining challenges. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> In the connected vehicle ecosystem, a high volume of information-rich and safety-critical data will be exchanged by roadside units and onboard transceivers to improve the driving and traveling experience. However, poor-quality wireless links and the mobility of vehicles highly challenge data delivery. The IP address-centric model of the current Internet barely works in such extremely dynamic environments and poorly matches the localized nature of the majority of vehicular communications, which typically target specific road areas (e.g., in the proximity of a hazard or a point of interest) regardless of the identity/address of a single vehicle passing by. Therefore, a paradigm shift is advocated from traditional IP-based networking toward the groundbreaking information- centric networking. In this article, we scrutinize the applicability of this paradigm in vehicular environments by reviewing its core functionalities and the related work. The analysis shows that, thanks to features like named content retrieval, innate multicast support, and in-network data caching, information-centric networking is positioned to meet the challenging demands of vehicular networks and their evolution. Interoperability with the standard architectures for vehicular applications along with synergies with emerging computing and networking paradigms are debated as future research perspectives. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Naming <s> Information-centric networking (ICN) approaches have been considered as an alternative approach to TCP/IP. Contrary to the traditional IP, the ICN treats content as a first-class citizen of the entire network, where names are given through different naming schemes to contents and are used during the retrieval. Among ICN approaches, content centric networking (CCN) is one of the key protocols being explored for Internet of Things (IoT), names the contents using hierarchical naming. Moreover, CCN follows pull-based strategy and exhibits the communication loop problem because of its broadcasting mode. However, IoT requires both pull and push modes of communication with scalable and secured content names in terms of integrity. In this paper, we propose a hybrid naming scheme that names contents using hierarchical and flat components to support both push and pull communication and to provide both scalability and security, respectively. We consider an IoT-based smart campus scenario and introduce two transmission modes: 1) unicast mode and 2) broadcast mode to address loop problem associated with CCN. 
Simulation results demonstrate that proposed scheme significantly improves the rate of interest transmissions, number of covered hops, name aggregation, and reliability along with addressing the loop problem. <s> BIB003
|
Unlike the sender-receiver approach of the current Internet, contents in ICN are named, where names are either hierarchical or flat BIB003 . In ICN, the name of a content is decoupled from its location so that it can be supplied to any requesting subscriber. Hence, content retrieval follows a receiver-driven approach and does not bind retrieval to a specific content host, as is the case in the IP-based architecture. In the ICN-based VANET, content may be discovered more easily than in the IP-based VANET, because ICN does not require the original server (publisher) to be reachable every time content is requested. Furthermore, content retrieval from different publishers, for instance a map from a common RSU, becomes easier by combining requests for contents with the same names; this simplifies data delivery for incoming requests BIB002 . ICN naming brings significant assistance to vehicular communications by allowing forwarding vehicles to handle contents on the basis of application requirements. Named data transmission provides the ICN-based VANET with robustness to connection interruptions and thereby characterizes the vehicular Internet BIB001 .
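To illustrate how hierarchical names support forwarding and request aggregation, the sketch below (illustrative only, not a cited mechanism) performs a longest-prefix match on a FIB and forwards only the first request for a given name; the prefixes, faces, and vehicle identifiers are hypothetical.

```python
fib = {
    "/vanet": "face-backbone",
    "/vanet/roadA": "face-rsu-roadA",
}

def longest_prefix_match(name):
    """Return the (prefix, face) pair with the longest matching prefix."""
    best = None
    for prefix, face in fib.items():
        if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, face)
    return best

pending = {}   # name -> set of requesting vehicles (aggregation of identical requests)

def on_request(name, vehicle):
    first = name not in pending
    pending.setdefault(name, set()).add(vehicle)
    if first:                                     # only the first request is forwarded
        print("forward via", longest_prefix_match(name))
    else:
        print("aggregated request from", vehicle)

on_request("/vanet/roadA/map", "car-1")   # forwarded via face-rsu-roadA
on_request("/vanet/roadA/map", "car-2")   # aggregated, not forwarded again
```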
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Caching <s> Information-Centric Networking (ICN) is an appealing architecture that has received a remarkable interest from the research community thanks to its friendly structure. Several projects have proposed innovative ICN models to cope with the Internet practice, which moves from host-centrism to receiver-driven communication. A worth mentioning component of these novel models is in-network caching, which provides flexibility and pervasiveness for the upturn of swiftness in data distribution. Because of the rapid Internet traffic growth, cache deployment and content caching have been unanimously accepted as conspicuous ICN issues to be resolved. In this article, a survey of cache management strategies in ICN is presented along with their contributions and limitations, and their performance is evaluated in a simulation network environment with respect to cache hit, stretch ratio, and eviction operations. Some unresolved ICN caching challenges and directions for future research in this networking area are also discussed. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Caching <s> The developments of connected vehicles are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme. <s> BIB002
|
ICN caching is divided into several classes, i.e., (a) off-path caching, which requires extra storage devices; (b) on-path caching, which is achieved opportunistically; (c) homogeneous caching, where two caching nodes co-operate with each other; and (d) heterogeneous caching, in which caching nodes do not co-operate with one another. For understanding these techniques, a recent survey BIB001 provides a detailed explanation of ICN caching strategies along with their contributions and limitations. In vehicular networks, caching is achieved either through RSUs or through another sensing device such as a vehicle. If an RSU is deemed a caching node and a vehicle needs to access its cached data, the data can be acquired easily BIB002 . However, once a vehicle covers some distance and leaves the coverage area of the RSU, it becomes complicated to locate the real cache position. Similarly, if another vehicle serves as the caching node and moves in the opposite direction from the requesting vehicle, locating that caching vehicle is even more challenging. Moreover, cache updating is essential for avoiding traffic congestion and accidents that may be caused by outdated cached content. Therefore, a flawless cache-update strategy accompanied by a fine-grained forwarding mechanism is a basic requirement of the ICN-based VANET environment.
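A minimal sketch of the cache-update concern raised above is given below (our own illustration, not a cited strategy): each cached entry carries a freshness period, and stale safety-related content is discarded instead of being served; the names and freshness values are hypothetical.

```python
import time

class FreshnessCache:
    def __init__(self):
        self.entries = {}   # name -> (content, expiry timestamp)

    def put(self, name, content, freshness_s):
        self.entries[name] = (content, time.time() + freshness_s)

    def get(self, name):
        item = self.entries.get(name)
        if item is None:
            return None
        content, expiry = item
        if time.time() > expiry:        # outdated: drop and force a re-fetch
            del self.entries[name]
            return None
        return content

cache = FreshnessCache()
cache.put("/vanet/roadA/congestion", "heavy traffic", freshness_s=1)
print(cache.get("/vanet/roadA/congestion"))   # fresh copy served
time.sleep(1.1)
print(cache.get("/vanet/roadA/congestion"))   # None: expired, fetch again
```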
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Sensors are distributed across the globe leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the semantic sensor Web (SSW) proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Direct radio-based vehicle-to-vehicle communication can help prevent accidents by providing accurate and up-to-date local status and hazard information to the driver. In this paper, we assume that two types of messages are used for traffic safety-related communication: 1) Periodic messages (ldquobeaconsrdquo) that are sent by all vehicles to inform their neighbors about their current status (i.e., position) and 2) event-driven messages that are sent whenever a hazard has been detected. In IEEE 802.11 distributed-coordination-function-based vehicular networks, interferences and packet collisions can lead to the failure of the reception of safety-critical information, in particular when the beaconing load leads to an almost-saturated channel, as it could easily happen in many critical vehicular traffic conditions. In this paper, we demonstrate the importance of transmit power control to avoid saturated channel conditions and ensure the best use of the channel for safety-related purposes. We propose a distributed transmit power control method based on a strict fairness criterion, i.e., distributed fair power adjustment for vehicular environments (D-FPAV), to control the load of periodic messages on the channel. The benefits are twofold: 1) The bandwidth is made available for higher priority data like dissemination of warnings, and 2) beacons from different vehicles are treated with ldquoequal rights,rdquo and therefore, the best possible reception under the available bandwidth constraints is ensured. We formally prove the fairness of the proposed approach. Then, we make use of the ns-2 simulator that was significantly enhanced by realistic highway mobility patterns, improved radio propagation, receiver models, and the IEEE 802.11p specifications to show the beneficial impact of D-FPAV for safety-related communications. We finally put forward a method, i.e., emergency message dissemination for vehicular environments (EMDV), for fast and effective multihop information dissemination of event-driven messages and show that EMDV benefits of the beaconing load control provided by D-FPAV with respect to both probability of reception and latency. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Vehicular ad hoc networks play a critical role in enabling important active safety applications such as cooperative collision warning. 
These active safety applications rely on continuous broadcast of self-information by all vehicles, which allows each vehicle to track all its neighboring cars in real time. The most pressing challenge in such safety-driven communication is to maintain acceptable tracking accuracy while avoiding congestion in the shared channel. In this article we propose a transmission control protocol that adapts communication rate and power based on the dynamics of a vehicular network and safety-driven tracking process. The proposed solution uses a closed-loop control concept and accounts for wireless channel unreliability. Simulation results confirm that if packet generation rate and associated transmission power for safety messages are adjusted in an on-demand and adaptive fashion, robust tracking is possible under various traffic conditions. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based VANET Reliability <s> Vehicle Safety Communications (VSC) is advancing rapidly towards product development and field testing. While a number of possible solutions have been proposed, the question remains open as how such a system will address the issue of scalability in its actual deployment. This paper presents a design methodology for congestion control in VSC as well as the description and evaluation of a resulting rate adaption oriented protocol named PULSAR. We start with a list of design principles reflecting the state of the art that define why and how vehicles should behave while responding to channel congestion in order to ensure fairness and support the needs of safety applications. From these principles, we derive protocol building blocks required to fulfill the defined objectives. Then, the actual protocol is described and assessed in detail, including a discussion on the intricate features of channel load assessment, rate adaptation and information sharing. A comparison with other state-of-the-art protocols shows that “details matter” with respect to the temporal and spatial dimensions of the protocol outcome. <s> BIB004
|
The deployment of ICN schemes in VANETs confronts highly dynamic and heterogeneous problems when the goals and requirements of such an integration are considered . The differences in techniques (with respect to actuators, sensors, end-to-end diversity, and their functionalities) and in the data collected and consumed in such scenarios would certainly lead to various concerns . For example, vehicular nodes share similar restrictions and prerequisites. One of the most vital problems is the use of technologies that can provide resourceful connectivity over an unreliable network, i.e., a VANET. Nevertheless, one of the most significant points in vehicular communications is how the different semantics of shared and stored contents affect content distribution; in reality, this is a challenging issue that is not covered within the ICN scope BIB001 . Considering the ability of in-network nodes to store forwarded contents in order to serve upcoming requests, a question arises as to whether the ICN framework should participate in any process that enables the association or analysis of the various suppliers of information . In VANETs, the traffic on the wireless medium that results from periodic packet exchange needs to be carefully controlled so as to avoid a decline in the quality of safety-related data at reception . For this reason, various strategies have been proposed in the literature, such as D-FPAV BIB002 , ATC BIB003 , and PULSAR BIB004 , which regulate traffic congestion with a stringent fairness measure that needs to be achieved for security purposes as well as for emergency messages. However, the exchange of control messages in vehicular ICN is a yet-to-be-resolved issue and needs careful consideration when designing adaptive strategies to achieve fair and reliable transmissions.
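In the spirit of the rate-adaptation schemes cited above, though not a faithful implementation of D-FPAV, ATC, or PULSAR, the toy control loop below lowers the beacon rate when the measured channel busy ratio exceeds a target and raises it otherwise; the target, rate bounds, and step size are hypothetical constants chosen only for illustration.

```python
TARGET_CBR = 0.6                  # desired channel busy ratio (assumed)
MIN_RATE, MAX_RATE = 1.0, 10.0    # beacons per second (assumed bounds)

def adapt_rate(current_rate, measured_cbr, step=0.5):
    """Simple additive adaptation of the beacon rate to the channel load."""
    if measured_cbr > TARGET_CBR:
        return max(MIN_RATE, current_rate - step)   # back off under congestion
    return min(MAX_RATE, current_rate + step)       # probe upwards when idle

rate = 10.0
for cbr in [0.9, 0.8, 0.7, 0.55, 0.5]:   # sample channel measurements
    rate = adapt_rate(rate, cbr)
    print(f"CBR={cbr:.2f} -> beacon rate {rate:.1f} Hz")
```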
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> In this paper, we present a service-driven mobility support architecture for Information Centric Networks that provides seamless mobility as an on-demand network service, which can be enabled/disabled based on network capabilities or resource availability. Proposed architecture relies on the ID/Locator split on ICN namespaces to support the use of persistent names and avoids name reconfiguration due to mobility. We implemented the proposed solution over a service-centric CCN platform, with multiple end-hosts running a video conferencing application acting as Consumers and Producers, and observed its capability to support seamless handover. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> This chapter presents a thorough investigation on current vehicular networking architectures (access technologies and overlay networks) and their (r)evolution towards the 5G era. The main driving force behind vehicular networking is to increase safety, with several other applications exploiting this ecosystem for traffic efficiency and infotainment provision. The most prominent existing candidates for vehicular networking are based on dedicated short range communications (DSRC) and cellular (4G) communications. In addition, the maturity of cloud computing has accommodated the invasion of vehicular space with cloud-based services. Nevertheless, current architectures can not meet the latency requirements of Intelligent Transport Systems (ITS) applications in highly congested and mobile environments. The future trend of autonomous driving pushes current networking architectures further to their limits with hard real-time requirements. Vehicular networks in 5G have to address five major challenges that affect current architectures: congestion, mobility management, backhaul networking, air interface and security. As networking transforms from simple connectivity provision, to service and content provision, fog computing approaches with caching and pre-fetching improve significantly the performance of the networks. The cloudification of network resources through software defined networking (SDN)/network function virtualization (NFV) principles, is another promising enabler for efficient vehicular networking in 5G. Finally, new wireless access mechanisms combined with current DSRC and 4G will enable to bring the vehicles in the cloud. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> The proposed 3GPP's 5G Next-generation (NextGen) Core architecture (5GC) enables the ability to introduce new user and control plane functions within the context of network slicing to allow greater flexibility in handling of heterogeneous devices and applications. In this paper, we discuss the integration of such architecture with future networking technologies by focusing on the information centric networking (ICN) technology. For that purpose, we first provide a short description of the proposed 5GC, which is followed by a discussion on the extensions to 5GC's control and user planes to support Protocol Data Unit (PDU) sessions from ICN. To illustrate the value of enabling ICN within 5GC, we focus on two important network services that can be enabled by ICN data networks. 
The first case targets mobile edge computing for a connected car use case, whereas the second case targets seamless mobility support for ICN sessions. We present these discussions in consideration with the procedures proposed by 3GPP's 23.501 and 23.502 technical specifications. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> ICN-Based 5G-Enabled VANET <s> The challenging requirements of 5G, from both the applications and architecture perspectives, motivate the need to explore the feasibility of delivering services over new network architectures. As 5G proposes application-centric network slicing, which enables the use of new data planes realizable over a programmable compute, storage, and transport infrastructure, we consider information- centric networking as a candidate network architecture to realize 5G objectives. This can coexist with end-to-end IP services that are offered today. To this effect, we first propose a 5G-ICN architecture and compare its benefits (i.e., innovative services offered by leveraging ICN features) to current 3GPP-based mobile architectures. We then introduce a general application-driven framework that emphasizes the flexibility afforded by network functions virtualization and software defined networking over which 5G-ICN can be realized. We specifically focus on the issue of how mobility as a service (MaaS) can be realized as a 5G-ICN slice, and give an in-depth overview on resource provisioning and inter-dependencies and coordination among functional 5G-ICN slices to meet the MaaS objectives. The article tries to show the flexibility of delivering services over ICN where virtualization of control and data plane can be used by applications to meet complex service logic execution while creating value to its end users. <s> BIB004
|
The 5G architecture proposed by 3GPP provides the flexibility to introduce new control-plane and user-plane functions in the context of network slicing, which offers greater elasticity in handling various applications and devices. Thus, ICN would benefit 5G from the viewpoint of multi-access edge computing (MEC) in terms of edge computing, edge caching, and session mobility BIB003 . In addition, mobile nodes positioned at the network edge support various delay-sensitive applications, e.g., virtual and augmented reality (VR/AR) and autonomous driving BIB004 . This trend is useful both for low-latency, high-bandwidth applications, e.g., VR/AR, and for non-real-time applications, for instance, IoT communications and video-on-demand (VoD) BIB003 . Furthermore, the caching feature of ICN assists both real-time and non-real-time applications whenever there are temporal or spatial associations among the data objects retrieved by edge subscribers BIB003 . This argument is strengthened by the study conducted in , where it is argued that vehicular named networking encodes geolocation information into content names. This is important because request messages are forwarded toward the geolocations where contents are published, and it is entirely feasible that a request message reaches a vehicle holding the desired content before arriving at that location. Moreover, existing mobile communication deployments handle session mobility through centralized routing techniques, which face severe problems when service demands are replicated. In contrast, ICN's separation of identifiers from locators and its use of persistent names allow it to handle node mobility effectively BIB001 . However, this remains quite challenging in environments with rapid mobility, such as vehicular communications BIB002 . Thus, substantial effort is required in this area from the ICN community working on vehicular communications. A summary of the existing ICN-based vehicular communication proposals is presented in Table 1.
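To make the geolocation-in-names idea concrete, a hypothetical sketch follows; the name layout and tile encoding are assumptions for illustration and are not taken from the cited proposals.

```python
# Hypothetical illustration of encoding geolocation into ICN content names.
# The "/vanet/geo/<tile>/<service>" layout is an assumption for this sketch,
# not a format defined by the cited works.
def make_interest_name(lat, lon, service, tile_size=0.01):
    # Quantize the position into a coarse tile so nearby requests share a name prefix.
    tile = f"{round(lat / tile_size) * tile_size:.2f},{round(lon / tile_size) * tile_size:.2f}"
    return f"/vanet/geo/{tile}/{service}"

# A passing vehicle's content store (illustrative).
cache = {"/vanet/geo/33.69,73.06/traffic": "congestion ahead"}

name = make_interest_name(33.691, 73.058, "traffic")
# The Interest can be satisfied from an on-path vehicle's cache before it
# reaches the target geolocation, as discussed above.
data = cache.get(name)
```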
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Content-centric networking is a new paradigm conceived for future Internet architectures, where communications are driven by contents instead of host addresses. This paradigm has key potentialities to enable effective and efficient communications in the challenging vehicular environment characterized by short-lived connectivity and highly dynamic network topologies. We design CRoWN, a content-centric framework for vehicular ad-hoc networks, which is implemented on top of the IEEE 802.11p standard layers and is fully compliant with them. Performance comparison against the legacy IP-based approach demonstrates the superiority of CRoWN, thus paving the way for content-centric vehicular networking. <s> BIB001 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Content-Centric Networking (CCN) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. This innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. In this paper we extend the CCN framework to efficiently and reliably support content delivery on top of IEEE 802.11p vehicular technology. Achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain CCN model, confirming the effectiveness and efficiency of our design choices. <s> BIB002 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Recently, Information Centric Networking (ICN) has attracted much attention also for mobiles. Unlike host-based communication models, ICN promotes data names as the first-class citizen in the network. However, the current ICN name-based routing requires Interests be routed by name to the nearest replica, implying the Interests are flooded in VANET. This introduces large overhead and consequently degrades wireless network performance. In order to maintain the efficiency of ICN implementation in VANET, we propose an opportunistic geo-inspired content based routing method. Our method utilizes the last encounter information of each node to infer the locations of content holders. With this information, the Interests can be geo-routed instead of being flooded to reduce the congestion level of the entire network. The simulation results show that our proposed method reduces the scope of flooding to less than two hops and improves retrieval rate by 1.42 times over flooding-based methods. <s> BIB003 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> Vehicular information network and Internet of Things (IoT) technologies have been receiving a lot of attention in recent years. As one of the most important and promising IoT areas, a vehicular information network aims to implement a myriad of applications related to vehicles, traffic information, drivers, passengers, and pedestrians. 
However, intervehicular communication (IVC) in a vehicular information network is still based on the TCP/IP protocol stack which is not efficient and scalable. To address the efficiency and scalability issues of the IVC, we leverage the named data networking (NDN) paradigm where the end user only cares about the needed content and pays no attention to the actual location of the content. The NDN model is highly suitable for the IVC scenario with its hierarchical content naming scheme and flexible content retrieval and caching support. We design a novel vehicular information network architecture based on the basic communication principle of NDN. Our proposed architecture aims to improve content naming, addressing, data aggregation, and mobility for IVC in the vehicular information network. In addition, the key parameter settings of the proposed schemes are analyzed in order to help guide their actual deployment. <s> BIB004 </s> Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Routing <s> High-quality multimedia streaming services in Vehicular Ad-hoc Networks (VANETs) are severely hindered by intermittent host connectivity issues. The Information Centric Networking (ICN) paradigm could help solving this issue thanks to its new networking primitives driven by content names rather than host addresses. This unique feature, in fact, enables native support to mobility, in-network caching, nomadic networking, multicast, and efficient content dissemination. In this paper, we focus on exploring the potential social cooperation among vehicles in highways. An ICN-based COoperative Caching solution, namely ICoC, is proposed to improve the quality of experience (QoE) of multimedia streaming services. In particular, ICoC leverages two novel social cooperation schemes, namely partner-assisted and courier-assisted, to enhance information-centric caching. To validate its effectiveness, extensive ns-3 simulations have been executed, showing that ICoC achieves a considerable improvement in terms of start-up delay and playback freezing with respect to a state-of-the-art solution based on probabilistic caching. <s> BIB005
|
Table 1. Summary of existing ICN-based vehicular communication proposals.
Aspect | Proposal | Approach | Open issue
Routing | Amadeo et al. BIB001 | Collision avoidance | Techniques to avoid the explosion of ICN data structures
Routing | Amadeo et al. BIB002 | Selective flooding scheme | Selection of dynamic outgoing interfaces
Naming | Yan et al. BIB004 | Hierarchical naming schemes | Agreement on common naming
Naming | Quan et al. BIB005 | Flat naming scheme | Agreement on common naming
Caching | Yu et al. BIB003 | Caching unsolicited contents | Smart scope-based caching strategies
|
Information-Centric Network-Based Vehicular Communications: Overview and Research Opportunities <s> Lessons Learned <s> Vehicular information network and Internet of Things (IoT) technologies have been receiving a lot of attention in recent years. As one of the most important and promising IoT areas, a vehicular information network aims to implement a myriad of applications related to vehicles, traffic information, drivers, passengers, and pedestrians. However, intervehicular communication (IVC) in a vehicular information network is still based on the TCP/IP protocol stack which is not efficient and scalable. To address the efficiency and scalability issues of the IVC, we leverage the named data networking (NDN) paradigm where the end user only cares about the needed content and pays no attention to the actual location of the content. The NDN model is highly suitable for the IVC scenario with its hierarchical content naming scheme and flexible content retrieval and caching support. We design a novel vehicular information network architecture based on the basic communication principle of NDN. Our proposed architecture aims to improve content naming, addressing, data aggregation, and mobility for IVC in the vehicular information network. In addition, the key parameter settings of the proposed schemes are analyzed in order to help guide their actual deployment. <s> BIB001
|
The ICN-based vehicular network is a part of human-centric communications that relies on private- and public-scale participatory information gathering. This type of data network is driven by the proliferation of high-performance nodes; currently, these nodes include billions of smartphones and are now extending to smart meters, in-vehicle GPS devices, and activity-monitoring wearables available to subscribers BIB001 . On the road, drivers and vehicles can use information such as weather, road, traffic, and health conditions. Vehicular ICN has several applications BIB001 ; for example, vehicles can distribute the collected data via the network through pull-based or push-based mechanisms, and they can interact wirelessly with neighboring vehicles to send or receive road or traffic information. ICN is a name-based communication paradigm that allows contents to be cached at in-network nodes. With technological advancements, the Internet is evolving in every field, which introduces the concept of the Internet of Things (IoT), with things ranging from small body-worn gadgets to large vehicles. In addition, as the Internet shifts from the existing IP-based communication to a name-based model (i.e., ICN), the combination of VANETs with ICN may face various issues, such as intermittent connectivity due to the rapid mobility of vehicles. In this study, we investigated the ICN-based VANET challenges in terms of mobility, security, routing, naming, caching, and 5G communications. Furthermore, due to several other communication challenges, such as security, DoS attacks, and the exponentially increasing number of Internet users, integrating ICN-based VANETs with other architectures, i.e., SDN, Cloud, and Edge, is increasingly necessary. We presented the integration of these architectures with ICN-based VANETs, along with the challenges that may arise when they are deployed together.
|
Secret-Sharing Schemes: A Survey <s> Introduction <s> Every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: If no faults occur, no set of size t n /2 of players gets any additional information (other than the function value), Even if Byzantine faults are allowed, no set of size t n /3 can either disrupt the computation or get additional information. Furthermore, the above bounds on t are tight! <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2 n /3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We suggest a method of controlling the access to a secure database via quorum systems. A quorum system is a collection of sets (quorums) every two of which have a nonempty intersection. Quorum systems have been used for a number of applications in the area of distributed systems. We propose a separation between access servers which are protected and trustworthy, but may be outdated, and the data servers which may all be compromised. The main paradigm is that only the servers in a complete quorum can collectively grant (or revoke) access permission. The method we suggest ensures that after authorization is revoked, a cheating user Alice will not be able to access the data even if many access servers still consider her authorized, and even if the complete raw database is available to her. The method has a low overhead in terms of communication and computation. It can also be converted into a distributed system for issuing secure signatures. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We show that verifiable secret sharing (VSS) and secure multi-party computation (MPC) among a set of n players can efficiently be based on any linear secret sharing scheme (LSSS) for the players, provided that the access structure of the LSSS allows MPC or VSS at all. Because an LSSS neither guarantees reconstructability when some shares are false, nor verifiability of a shared value, nor allows for the multiplication of shared values, an LSSS is an apparently much weaker primitive than VSS or MPC. ::: ::: Our approach to secure MPC is generic and applies to both the information-theoretic and the cryptographic setting. 
The construction is based on 1) a formalization of the special multiplicative property of an LSSS that is needed to perform a multiplication on shared values, 2) an efficient generic construction to obtain from any LSSS a multiplicative LSSS for the same access structure, and 3) an efficient generic construction to build verifiability into every LSSS (always assuming that the adversary structure allows for MPC or VSS at all). ::: ::: The protocols are efficient. In contrast to all previous information-theoretically secure protocols, the field size is not restricted (e.g, to be greater than n). Moreover, we exhibit adversary structures for which our protocols are polynomial in n while all previous approaches to MPC for non-threshold adversaries provably have super-polynomial complexity. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumesHierarchical Identity-Based Encryption (HIBE). <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> Protocols for Generalized Oblivious Transfer(GOT) were introduced by Ishai and Kushilevitz [10]. They built it by reducing GOT protocols to standard 1-out-of-2 oblivious transfer protocols based on private protocols. In our protocols, we provide alternative reduction by using secret sharing schemes instead of private protocols. We therefore show that there exist a natural correspondence between GOT and general secret sharing schemes and thus the techniques and tools developed for the latter can be applied equally well to the former. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Introduction <s> We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption, and decryption time scales linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. ::: ::: We present three constructions within our framework. Our first system is proven selectively secure under a assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear-Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions. <s> BIB008
|
Secret-sharing schemes are a tool used in many cryptographic protocols. A secret-sharing scheme involves a dealer who has a secret, a set of n parties, and a collection A of subsets of parties called the access structure. A secret-sharing scheme for A is a method by which the dealer distributes shares to the parties such that: (1) any subset in A can reconstruct the secret from its shares, and (2) any subset not in A cannot reveal any partial information on the secret. Originally motivated by the problem of secure information storage, secret-sharing schemes have found numerous other applications in cryptography and distributed computing, e.g., Byzantine agreement , secure multiparty computations BIB001 BIB002 BIB005 , threshold cryptography , access control BIB004 , attribute-based encryption BIB006 BIB008 , and generalized oblivious transfer BIB007 .
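As a tiny illustration of requirements (1) and (2) (an example added here, not taken from the survey), consider the simplest access structure whose only authorized set is {p1, p2}; XOR-based 2-out-of-2 sharing realizes it.

```python
# Tiny illustration of properties (1) and (2) for the access structure A = {{p1, p2}}:
# 2-out-of-2 XOR sharing of a one-byte secret.
import secrets

def deal(secret_byte):
    s1 = secrets.randbelow(256)   # share of p1: uniform, independent of the secret
    s2 = s1 ^ secret_byte         # share of p2
    return s1, s2

s1, s2 = deal(0x2A)
assert s1 ^ s2 == 0x2A            # the set {p1, p2} in A reconstructs (property 1)
# Each share alone is uniformly distributed, so {p1} and {p2} (not in A)
# learn nothing about the secret (property 2).
```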
|
Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Certain cryptographic keys, such as a number which makes it possible to compute the secret decoding exponent in an RSA public key cryptosystem,1,5 or the system master key and certain other keys in a DES cryptosystem,3 are so important that they present a dilemma. If too many copies are distributed one might go astray. If too few copies are made they might all be destroyed. A typical cryptosystem will have several volatile copies of an important key in protected memory locations where they will very probably evaporate if any tampering or probing occurs. Since an opponent may be content to disrupt the system by forcing the evaporation of all these copies it is useful to entrust one or more other nonvolatile copies to reliable individuals or secure locations. What must the nonvolatile copies of the keys, or nonvolatile pieces of information from which the keys are reconstructed, be guarded against? The answer is that there are at least three types of incidents: <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons. <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Particular aqueous gels, epoxy resin compositions and optional additives such as diluents, retarders and accelerators are described which produce a practical composition and method for in situ sand consolidation and gravel packing by which a resin coated sand is positioned in a desired location and cured by an internal catalyst to form a porous permeable or plugged consolidated mass. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). 
<s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A secret sharing scheme permits a secret to be shared among participants of an n-element group in such a way that only qualified subsets of participants can recover the secret. If any nonqualified subset has absolutely no information on the secret, then the scheme is called perfect. The share in a scheme is the information that a participant must remember. ::: ::: In [3] it was proved that for a certain access structure any perfect secret sharing scheme must give some participant a share which is at least 50\percent larger than the secret size. We prove that for each n there exists an access structure on n participants so that any perfect sharing scheme must give some participant a share which is at least about $n/\log n$ times the secret size.^1 We also show that the best possible result achievable by the information-theoretic method used here is n times the secret size. ::: ::: ^1 All logarithms in this paper are of base 2. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> A secret sharing scheme permits a secret to be shared among participants in such a way that only qualified subsets of participants can recover the secret, but any nonqualified subset has absolutely no information on the secret. The set of all qualified subsets defines the access structure to the secret. Sharing schemes are useful in the management of cryptographic keys and in multiparty secure protocols. ::: ::: We analyze the relationships among the entropies of the sample spaces from which the shares and the secret are chosen. We show that there are access structures with four participants for which any secret sharing scheme must give to a participant a share at least 50% greater than the secret size. This is the first proof that there exist access structures for which the best achievable information rate (i.e., the ratio between the size of the secret and that of the largest share) is bounded away from 1. The bound is the best possible, as we construct a secret sharing scheme for the above access structures that meets the bound with equality. <s> BIB008 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> Span programs provide a linear algebraic model of computation. Lower bounds for span programs imply lower bounds for formula size, symmetric branching programs, and contact schemes. 
Monotone span programs correspond also to linear secret-sharing schemes. We present a new technique for proving lower bounds for monotone span programs. We prove a lower bound of Ω(m2.5) for the 6-clique function. Our results improve on the previously known bounds for explicit functions. <s> BIB009 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumesHierarchical Identity-Based Encryption (HIBE). <s> BIB010 </s> Secret-Sharing Schemes: A Survey <s> Example 1 (Attribute Based Encryption). <s> We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption, and decryption time scales linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. ::: ::: We present three constructions within our framework. Our first system is proven selectively secure under a assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear-Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions. <s> BIB011
|
Public-key encryption is a powerful mechanism for protecting the confidentiality of stored and transmitted information. Nowadays, in many applications there is a provider that wants to share data according to some policy based on a user's credentials. In an attribute-based encryption system, presented by Sahai and Waters , each user has a set of attributes (i.e., credentials), and the provider will grant permission to decrypt the message if some predicate of the attributes holds (e.g., a user can decode an e-mail if she is a "FRIEND" and "IMPORTANT"). In BIB010 BIB011 , it is shown that if the predicate can be described by an access structure that can be implemented by an efficient linear secret-sharing scheme, then there is an efficient attribute-based encryption system for this predicate. Secret-sharing schemes were introduced by Blakley BIB001 and Shamir for the threshold case, that is, for the case where the subsets that can reconstruct the secret are all the sets whose cardinality is at least a certain threshold. Secret-sharing schemes for general access structures were introduced and constructed by Ito, Saito, and Nishizeki . More efficient schemes were presented in, e.g., BIB002 BIB004 BIB003 BIB006 . Specifically, Benaloh and Leichter BIB002 proved that if an access structure can be described by a small monotone formula then it has an efficient perfect secret-sharing scheme. This was generalized by Karchmer and Wigderson BIB006 , who showed that if an access structure can be described by a small monotone span program then it has an efficient scheme (a special case of this construction appeared before in BIB003 ). A major problem with secret-sharing schemes is that the share size in the best known secret-sharing schemes realizing general access structures is exponential in the number of parties in the access structure. Thus, the known constructions for general access structures are impractical. This is true even for explicit access structures (e.g., access structures whose characteristic function can be computed by a small uniform circuit). On the other hand, the best known lower bounds on the share size for sharing a secret with respect to an access structure (e.g., in BIB008 BIB007 ) are far from the above upper bounds. The best lower bound was proved by Csirmaz BIB007 , who showed that, for every n, there is an access structure with n parties such that sharing ℓ-bit secrets requires shares of length Ω(ℓn/ log n). The question whether there exist more efficient schemes, or whether there exists an access structure that does not have (space-)efficient schemes, remains open. The following is a widely believed conjecture (see, e.g., ): Conjecture 1. There exists an ε > 0 such that for every integer n there is an access structure with n parties, for which every secret-sharing scheme distributes shares of length exponential in the number of parties, that is, 2^{εn}. Proving (or disproving) this conjecture is one of the most important open questions concerning secret sharing. No major progress on proving or disproving this conjecture has been obtained in the last 16 years. It is not known how to prove that there exists an access structure that requires super-polynomial shares (even for an implicit access structure). Most previously known secret-sharing schemes are linear. In a linear scheme, the secret is viewed as an element of a finite field, and the shares are obtained by applying a linear mapping to the secret and several independent random field elements.
For example, the schemes of BIB001 BIB002 BIB004 BIB006 are all linear. For many applications, the linearity is important, e.g., for secure multiparty computation, as will be described in Section 4. Thus, studying linear secret-sharing schemes and their limitations is important. Linear secret-sharing schemes are equivalent to monotone span programs, defined by BIB006 . Super-polynomial lower bounds for monotone span programs and, therefore, for linear secret-sharing schemes were proved in BIB009 . In this survey we will present two unpublished results of Rudich . Rudich considered a Hamiltonian access structure: the parties in this access structure are edges in a complete undirected graph, and a set of edges (parties) is authorized if it contains a Hamiltonian cycle BIB005 . Rudich proved that if NP ≠ coNP, then this access structure does not have a secret-sharing scheme in which the sharing of the secret can be done by a polynomial-time algorithm. As efficient sharing of secrets is essential in applications of secret sharing, Rudich's result implies that there is no practical scheme for the Hamiltonian access structure. Furthermore, Rudich proved that if one-way functions exist and the Hamiltonian access structure has a computational secret-sharing scheme (with efficient sharing and reconstruction), then efficient protocols for oblivious transfer exist. Thus, constructing a computational secret-sharing scheme for the Hamiltonian access structure would solve a major open problem in cryptography, i.e., using Impagliazzo's terminology , it would prove that Minicrypt = Cryptomania.
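Since threshold schemes come up repeatedly below, here is a minimal sketch of the standard polynomial-interpolation (Shamir-style) t-out-of-n construction; the prime and parameters are illustrative assumptions, not values fixed by the survey.

```python
# Minimal sketch of a t-out-of-n threshold scheme via polynomial interpolation
# over a prime field (standard construction; parameters are illustrative).
import secrets

P = 2**61 - 1  # a prime larger than the secret and the number of parties

def share(secret, t, n):
    """Evaluate a random degree-(t-1) polynomial with constant term `secret` at 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at 0 from any t shares given as {x: y}."""
    secret = 0
    for xi, yi in points.items():
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # modular inverse of den
    return secret

shares = share(12345, t=3, n=5)
subset = {x: shares[x] for x in (1, 3, 5)}
assert reconstruct(subset) == 12345
```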
|
Secret-Sharing Schemes: A Survey <s> Definitions <s> Given a set of parties {1, /spl middot//spl middot//spl middot/, n}, an access structure is a monotone collection of subsets of the parties. For a certain domain of secrets, a secret-sharing scheme for an access structure is a method for a dealer to distribute shares to the parties. These shares enable subsets in the access structure to reconstruct the secret, while subsets not in the access structure get no information about the secret. A secret-sharing scheme is ideal if the domains of the shares are the same as the domain of the secrets. An access structure is universally ideal if there exists an ideal secret-sharing scheme for it over every finite domain of secrets. An obvious necessary condition for an access structure to be universally ideal is to be ideal over the binary and ternary domains of secrets. The authors prove that this condition is also sufficient. They also show that being ideal over just one of the two domains does not suffice for universally ideal access structures. Finally, they give an exact characterization for each of these two conditions. > <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Definitions <s> Let ? n be a monotone, nontrivial family of sets over {1, 2, ?,n}. An ? n perfect secret-sharing scheme is a probabilistic mapping of a secret ton shares, such that: ::: ::: Various secret-sharing schemes have been proposed, and applications in diverse contexts were found. In all these cases the set of secrets and the set of shares are finite. ::: ::: In this paper we study the possibility of secret-sharing schemes overinfinite domains. The major case of interest is when the secrets and the shares are taken from acountable set, for example all binary strings. We show that no ? n secret-sharing scheme over any countable domain exists (for anyn ? 2). ::: ::: One consequence of this impossibility result is that noperfect private-key encryption schemes, over the set of all strings, exist. Stated informally, this means that there is no way to encrypt all strings perfectly without revealing information about their length. These impossibility results are stated and proved not only for perfect secret-sharing and private-key encryption schemes, but also for wider classes--weak secret-sharing and private-key encryption schemes. ::: ::: We constrast these results with the case where both the secrets and the shares are real numbers. Simple perfect secret-sharing schemes (and perfect private-key encryption schemes) are presented. Thus, infinity alone does not rule out the possibility of secret sharing. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Definitions <s> We give a unified account of classical secret-sharing goals from a modern cryptographic vantage. Our treatment encompasses perfect, statistical, and computational secret sharing; static and dynamic adversaries; schemes with or without robustness; schemes where a participant recovers the secret and those where an external party does so. We then show that Krawczyk's 1993 protocol for robust computational secret sharing (RCSS) need not be secure, even in the random-oracle model and for threshold schemes, if the encryption primitive it uses satisfies only one-query indistinguishability (ind1), the only notion Krawczyk defines. Nonetheless, we show that the protocol is secure (in the random-oracle model, for threshold schemes) if the encryption scheme also satisfies one-query key-unrecoverability (key1). 
Since practical encryption schemes are ind1+key1 secure, our result effectively shows that Krawczyk's RCSS protocol is sound (in the random-oracle model, for threshold schemes). Finally, we prove the security for a variant of Krawczyk's protocol, in the standard model and for arbitrary access structures, assuming ind1 encryption and a statistically-hiding, weakly-binding commitment scheme. <s> BIB003
|
In this section we define secret-sharing schemes. We supply two definitions and argue that they are equivalent. We start with a definition of secret-sharing as given in BIB002 BIB001 BIB003 .
|
Secret-Sharing Schemes: A Survey <s> Definition 2 (Secret Sharing <s> A "secret sharing system" permits a secret to be shared among n trustees in such a way that any k of them can recover the secret, but any k-1 have complete uncertainty about it. A linear coding scheme for secret sharing is exhibited which subsumes the polynomial interpolation method proposed by Shamir and can also be viewed as a deterministic version of Blakley's probabilistic method. Bounds on the maximum value of n for a given k and secret size are derived for any system, linear or nonlinear. The proposed scheme achieves the lower bound which, for practical purposes, differs insignificantly from the upper bound. The scheme may be extended to protect several secrets. Methods to protect against deliberate tampering by any of the trustees are also presented. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Definition 2 (Secret Sharing <s> Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 
8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 
17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Definition 2 (Secret Sharing <s> A secret sharing scheme permits a secret to be shared among participants in such a way that only qualified subsets of participants can recover the secret, but any nonqualified subset has absolutely no information on the secret. The set of all qualified subsets defines the access structure to the secret. Sharing schemes are useful in the management of cryptographic keys and in multiparty secure protocols. ::: ::: We analyze the relationships among the entropies of the sample spaces from which the shares and the secret are chosen. We show that there are access structures with four participants for which any secret sharing scheme must give to a participant a share at least 50% greater than the secret size. This is the first proof that there exist access structures for which the best achievable information rate (i.e., the ratio between the size of the secret and that of the largest share) is bounded away from 1. The bound is the best possible, as we construct a secret sharing scheme for the above access structures that meets the bound with equality. <s> BIB003
|
Remark 1. In the above definition, we required correctness with probability 1 and perfect privacy: for every two secrets a, b the distributions Π(a, r)_T and Π(b, r)_T are identical. We can relax these requirements and require that the correctness holds with high probability and that the statistical distance between Π(a, r)_T and Π(b, r)_T is small. Schemes that satisfy these relaxed requirements are called statistical secret-sharing schemes. For example, such schemes are designed in . We next give an alternative definition of secret-sharing schemes, originating in BIB001 BIB003 ; this definition uses the entropy function. For this definition we assume that there is some known probability distribution on the domain of secrets K. Any probability distribution on the domain of secrets, together with the distribution scheme Σ, induces, for any A ⊆ {p_1, . . . , p_n}, a probability distribution on the vector of shares of the parties in A. We denote the random variable taking values according to this probability distribution on the vector of shares of A by S_A, and by S the random variable denoting the secret. The privacy in the alternative definition requires that if T ∉ A, then the random variables S and S_T are independent. As is traditional in the secret-sharing literature, we formalize the above two requirements using the entropy function. The support of a random variable X is the set of all values x such that Pr[X = x] > 0. Given a random variable X, the entropy of X is defined as H(X) = − Σ_x Pr[X = x] log Pr[X = x], where the sum is taken over all values x in the support of X, i.e., all values x such that Pr[X = x] > 0. It holds that 0 ≤ H(X) ≤ log |SUPPORT(X)|. Intuitively, H(X) measures the amount of uncertainty in X, where H(X) = 0 if X is deterministic, i.e., there is a value x such that Pr[X = x] = 1, and H(X) = log |SUPPORT(X)| if X is uniformly distributed over SUPPORT(X). Given two random variables X and Y, we consider their concatenation XY and define the conditional entropy H(X|Y) = H(XY) − H(Y); two random variables X and Y are independent iff H(X|Y) = H(X), and the value of Y implies the value of X iff H(X|Y) = 0. For more background on the entropy function, the reader may consult BIB002 . Definition 3 (Secret Sharing - Alternative Definition). We say that a distribution scheme is a secret-sharing scheme realizing an access structure A with respect to a given probability distribution on the secrets, denoted by a random variable S, if the following conditions hold.
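Stated as a sketch in the standard entropy-based form (an assumption consistent with the requirements discussed above, not a verbatim quotation of the survey's definition), the two conditions are:

```latex
% Correctness: the shares of any authorized set B determine the secret.
\forall B \in \mathcal{A}: \quad H(S \mid S_B) = 0
% Privacy: the shares of any unauthorized set T are independent of the secret.
\forall T \notin \mathcal{A}: \quad H(S \mid S_T) = H(S)
```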
|
Secret-Sharing Schemes: A Survey <s> The Monotone Formulae Construction [14] <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> The Monotone Formulae Construction [14] <s> We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1)) (log log n + k/2 + log k + log 1/ϵ), where ϵ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ϵ < 1/(k log n)). An additional advantage of our constructions is their simplicity. <s> BIB002
|
Benaloh and Leichter BIB001 describe a construction of secret-sharing schemes for any access structure based on monotone formulae. The construction of BIB001 generalizes the construction of Ito, Saito, and Nishizeki and is more efficient. However, also in this scheme, for most access structures the length of the shares is exponential in the number of parties even for a one-bit secret. The scheme of Benaloh and Leichter is recursive: it starts with schemes for simple access structures and constructs a scheme for a composition of the access structures. Let A_1 and A_2 be two access structures. We assume that they have the same set of parties {p_1, . . . , p_n}. However, it is possible that some parties are redundant in one of the access structures, that is, there might be parties that do not belong to minimal authorized sets in one of the access structures. We define two new access structures, where B ∈ A_1 ∨ A_2 iff B ∈ A_1 or B ∈ A_2, and B ∈ A_1 ∧ A_2 iff B ∈ A_1 and B ∈ A_2. We assume that for i ∈ {1, 2} there is a secret-sharing scheme Σ_i realizing A_i, where the two schemes have the same domain of secrets K = {0, . . . , m − 1} for some m ∈ N. Furthermore, assume that for every 1 ≤ j ≤ n the share of p_j in the scheme Σ_i is an element of K^{a_{i,j}} for every i ∈ {1, 2}, and denote a_j = a_{1,j} + a_{2,j}. Then there exist secret-sharing schemes realizing A_1 ∨ A_2 and A_1 ∧ A_2 in which the domain of shares of p_j is K^{a_j}:
- To share a secret k ∈ K for the access structure A_1 ∨ A_2, independently share k using the scheme Σ_1 and using the scheme Σ_2.
- To share a secret k ∈ K for the access structure A_1 ∧ A_2, choose k_1 ∈ K with uniform distribution, let k_2 = (k − k_1) mod m, and, for i ∈ {1, 2}, independently share k_i using the scheme Σ_i (realizing A_i). For every set B ∈ A_1 ∧ A_2, the parties in B can reconstruct both k_1 and k_2 and compute k = (k_1 + k_2) mod m. On the other hand, for every set T ∉ A_1 ∧ A_2, the parties in T do not have any information on at least one k_i, hence do not have any information on the secret k.
For example, given an access structure A whose minimal authorized sets are B_1, . . . , B_ℓ, for every 1 ≤ i ≤ ℓ there is a scheme realizing {B_i} with a domain of secrets {0, 1}, where each p_j ∈ B_i gets a one-bit share. Applying the scheme of Benaloh and Leichter recursively, we get the scheme of Ito, Saito, and Nishizeki. The scheme of Benaloh and Leichter can efficiently realize a much richer family of access structures than the access structures that can be efficiently realized by the scheme of Ito, Saito, and Nishizeki. To describe the access structures that can be efficiently realized by Benaloh and Leichter's scheme, it is convenient to view an access structure as a function. We describe each set B ⊆ {p_1, . . . , p_n} by its characteristic vector v_B ∈ {0, 1}^n, whose jth coordinate is 1 iff p_j ∈ B. With an access structure A, we associate the function f_A : {0, 1}^n → {0, 1}, where f_A(v_B) = 1 iff B ∈ A. We say that f_A describes A. As A is monotone, the function f_A is monotone. Furthermore, for two access structures A_1 and A_2, the function f_{A_1 ∨ A_2} is f_{A_1} ∨ f_{A_2} and the function f_{A_1 ∧ A_2} is f_{A_1} ∧ f_{A_2}. Using this observation, the scheme of Benaloh and Leichter can efficiently realize every access structure that can be described by a small monotone formula.
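A minimal sketch of the two composition steps above; the helper "unanimity" scheme, the party names, and the modulus are illustrative choices, not part of the original construction's notation.

```python
# Illustrative sketch of the OR/AND composition over the domain K = Z_m.
# A "scheme" here is a callable mapping a secret in Z_m to a dict {party: share}.
import secrets

M = 2**31 - 1  # example modulus m; any m >= 2 works

def unanimity_scheme(parties):
    """Scheme whose only minimal authorized set is `parties`: additive sharing."""
    def share(secret):
        shares = {p: secrets.randbelow(M) for p in parties[:-1]}
        shares[parties[-1]] = (secret - sum(shares.values())) % M
        return shares
    return share

def share_or(secret, scheme1, scheme2):
    """A1 OR A2: share the same secret independently with both schemes."""
    return scheme1(secret), scheme2(secret)

def share_and(secret, scheme1, scheme2):
    """A1 AND A2: additively split the secret, share each part with one scheme."""
    k1 = secrets.randbelow(M)      # uniform in Z_m
    k2 = (secret - k1) % M         # so that k = (k1 + k2) mod m
    return scheme1(k1), scheme2(k2)

def reconstruct_and(k1, k2):
    """A set authorized in both A1 and A2 recovers k1 and k2 and adds them."""
    return (k1 + k2) % M

s1 = unanimity_scheme(["p1", "p2"])
s2 = unanimity_scheme(["p3", "p4"])
or_shares = share_or(7, s1, s2)    # {p1,p2} or {p3,p4} can reconstruct
and_shares = share_and(7, s1, s2)  # only {p1,p2,p3,p4} together can
```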
|
Secret-Sharing Schemes: A Survey <s> 4 <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB001
|
Lemma 1. Let A be an access structure and assume that f_A can be computed by a monotone formula in which, for every 1 ≤ j ≤ n, the variable x_j appears a_j times. Then, for every m ∈ N, A can be realized with domain of secrets Z_m by the scheme of BIB001 . The resulting scheme has information ratio max_{1≤j≤n} a_j. Any monotone Boolean function over n variables can be computed by a monotone formula. Thus, every access structure can be realized by the scheme of BIB001 . However, for most monotone functions, the size of the smallest monotone formula computing them is exponential in n; i.e., the information ratio of the resulting scheme is exponential in the number of parties.
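As a small worked illustration of Lemma 1 (the access structure is chosen for illustration; the same structure reappears in the multi-linear example later in the survey):

```latex
% Minimal authorized sets {p1,p2}, {p2,p3}, {p3,p4}:
f_{\mathcal{A}}(x_1,x_2,x_3,x_4) = (x_1 \wedge x_2) \vee (x_2 \wedge x_3) \vee (x_3 \wedge x_4).
% Occurrences: a_1 = a_4 = 1 and a_2 = a_3 = 2, so the scheme of Lemma 1
% realizes this access structure with information ratio \max_j a_j = 2.
```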
|
Secret-Sharing Schemes: A Survey <s> 3.5 <s> In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> 3.5 <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB002
|
The Monotone Span Programs Construction BIB001 BIB002
All the above constructions are linear, that is, the distribution scheme is a linear mapping. More formally, in a linear secret-sharing scheme over a finite field F, the secret is an element of the field, the random string is a vector over the field such that each coordinate of this vector is chosen independently with uniform distribution from the field, and each share is a vector over the field such that each coordinate of this vector is some fixed linear combination of the secret and the coordinates of the random string. Example 2. Consider the scheme for A_ustcon described in Section 3.2. This scheme is linear over the field with two elements F_2. In particular, the randomness is a vector r_2, . . . , r_{|V|−1} of |V| − 2 random elements in F_2, and the share of an edge (v_1, v_2), for example, is (k + r_2) mod 2, that is, the linear combination in which the coefficients of k and r_2 are 1 and all other coefficients are zero. To model a linear scheme, we use monotone span programs, which are, basically, the matrices describing the linear mapping of the linear scheme. The monotone span program also defines the access structure which the secret-sharing scheme realizes. In the rest of the paper, vectors are denoted by bold letters (e.g., r) and, according to the context, vectors are either row vectors or column vectors (i.e., if we write rM, then r is a row vector; if we write Mr, then r is a column vector). We next prove that this scheme is private. If T ∉ A, then the rows of M_T do not span the vector e_1, i.e., rank(M_T) is smaller than the rank of the matrix obtained by adding the row e_1 to M_T.
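As a minimal sketch of how a monotone span program yields shares, assuming the standard construction sketched above (shares are the coordinates of M·r over F_2, with the secret in the first coordinate of r); the matrix and labeling below are a hypothetical two-party example, not one from the survey.

```python
# Illustrative sketch: turning a monotone span program over F_2 into shares.
import secrets

def msp_shares(M, row_labels, secret_bit):
    """M: list of rows over F_2; row_labels[i]: party owning row i."""
    b = len(M[0])
    r = [secret_bit] + [secrets.randbelow(2) for _ in range(b - 1)]
    shares = {}
    for row, party in zip(M, row_labels):
        val = sum(m * x for m, x in zip(row, r)) % 2  # inner product over F_2
        shares.setdefault(party, []).append(val)
    return shares

# Hypothetical MSP: rows (1,1) and (0,1); only both rows together span e1 = (1,0),
# so this is a 2-out-of-2 scheme.
M = [[1, 1], [0, 1]]
shares = msp_shares(M, ["p1", "p2"], secret_bit=1)
# p1 and p2 jointly reconstruct: share_p1 XOR share_p2 = secret.
```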
Remark 2 (Historical Notes).
Brickell BIB001 in 1989 implicitly defined monotone span programs for the case that each party labels exactly one row, and proved Claim 2. Karchmer and Wigderson BIB002 in 1993 explicitly defined span programs and monotone span programs. They considered them as a computational model, and their motivation was proving lower bounds for modular branching programs. Karchmer and Wigderson showed that monotone span programs imply (linear) secret-sharing schemes. Beimel proved that linear secret-sharing schemes imply monotone span programs. Thus, linear secret-sharing schemes are equivalent to monotone span programs, and lower bounds on the size of monotone span programs imply the same lower bounds on the information ratio of linear secret-sharing schemes.

Example 4. We next describe the linear secret-sharing scheme for A_ustcon, presented in Section 3.2, as a monotone span program. In this access structure, we consider a graph with m vertices and n = m(m − 1)/2 edges; each edge is a party. We construct a monotone span program over F_2 which has b = m − 1 columns and a = n rows. For every party (edge) (v_i, v_j), where 1 ≤ i < j ≤ m − 1, there is a unique row in the program labeled by this party; all entries in this row are zero, except for the ith and the jth entries, which are 1. Furthermore, for every party (edge) (v_i, v_m), where 1 ≤ i ≤ m − 1, there is a unique row in the program labeled by this party; all entries in this row are zero, except for the ith entry, which is 1 (this is equivalent to choosing r_m = 0 in Section 3.2). It can be proved that this monotone span program accepts a set of parties (edges) if and only if the set contains a path from v_1 to v_m. To construct a secret-sharing scheme from this monotone span program, we multiply the above matrix by a vector r = (k, r_2, . . . , r_{m−1}); the share of party (v_i, v_j) is the row labeled by (v_i, v_j) multiplied by r, that is, the share is as defined in the scheme for A_ustcon described above.

3.6 Multi-Linear Secret-Sharing Schemes BIB003

In the schemes derived from monotone span programs, the secret is one element of the field. This can be generalized to the case where the secret is a vector over the field. Such schemes, studied by BIB003 , are called multi-linear and are based on a generalization of monotone span programs with several target columns: to share a secret (k_1, . . . , k_c), the dealer chooses random field elements r_{c+1}, . . . , r_b independently and with uniform distribution, sets r = (k_1, . . . , k_c, r_{c+1}, . . . , r_b), and computes the shares Mr. Any multi-target monotone span program is a monotone span program; however, using it to construct a multi-linear secret-sharing scheme results in a scheme with better information ratio.

Consider, for example, the access structure on four parties p_1, p_2, p_3, p_4 whose minimal authorized sets are {p_1, p_2}, {p_2, p_3}, and {p_3, p_4}. It was proved in BIB004 that in any secret-sharing scheme realizing this access structure the information ratio is at least 1.5. We present this lower bound and prove it in Theorem 1. By definition, the information ratio of a linear scheme is integral. We next present a multi-linear secret-sharing scheme realizing this access structure with information ratio 1.5. We first describe a linear scheme whose information ratio is 2. To share a bit k_1 ∈ F_2, the dealer independently chooses two random bits r_1 and r_2 with uniform distribution. The share of p_1 is r_1, the share of p_2 is r_1 ⊕ k_1, the share of p_3 is two bits, r_1 and r_2 ⊕ k_1, and the share of p_4 is r_2. Clearly, this scheme realizes this access structure.
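The linear scheme with information ratio 2 described above is small enough to check exhaustively; the following sketch implements it over F_2 and verifies reconstruction for the three minimal authorized sets (party and function names are ours).

```python
import random

def share(k1):
    """Linear scheme with information ratio 2 for the access structure whose
    minimal authorized sets are {p1,p2}, {p2,p3}, {p3,p4}; the secret k1 is a bit."""
    r1, r2 = random.randrange(2), random.randrange(2)
    return {
        'p1': (r1,),
        'p2': (r1 ^ k1,),
        'p3': (r1, r2 ^ k1),          # p3 holds two bits, hence ratio 2
        'p4': (r2,),
    }

def recover(shares, pair):
    if pair == ('p1', 'p2'):
        return shares['p1'][0] ^ shares['p2'][0]
    if pair == ('p2', 'p3'):
        return shares['p2'][0] ^ shares['p3'][0]
    if pair == ('p3', 'p4'):
        return shares['p3'][1] ^ shares['p4'][0]
    raise ValueError("unauthorized pair")

for k1 in (0, 1):
    s = share(k1)
    assert all(recover(s, pr) == k1 for pr in [('p1', 'p2'), ('p2', 'p3'), ('p3', 'p4')])
print("all minimal authorized sets reconstruct correctly")
```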
Other Constructions
There are many other constructions of secret-sharing schemes for other specific access structures, e.g., hierarchical access structures BIB001 BIB002 BIB004 BIB007 , weighted threshold access structures BIB005 , and more complicated compositions of access structures BIB003 BIB006 .
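For weighted threshold access structures there is also a simple folklore construction, which is much less efficient than the schemes of BIB005: give a party of weight w that many Shamir shares, so that a coalition is authorized exactly when its total weight reaches the threshold. The sketch below implements this naive approach (field, weights, and names are illustrative); the total share size grows with the weight, which is precisely what the dedicated constructions avoid.

```python
import random

P = 2**61 - 1                     # a large prime field (illustrative)

def shamir_shares(secret, t, xs):
    """Degree-(t-1) polynomial with free coefficient `secret`, evaluated at points xs."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in xs}

def weighted_share(secret, weights, threshold):
    """Give party i one Shamir point per unit of weight; any coalition whose total
    weight is at least `threshold` holds >= threshold points and can interpolate."""
    owner, x = {}, 1
    for party, w in weights.items():
        owner[party] = list(range(x, x + w))
        x += w
    values = shamir_shares(secret, threshold, list(range(1, x)))
    return {party: {pt: values[pt] for pt in pts} for party, pts in owner.items()}

def reconstruct(points):
    """Lagrange interpolation at x = 0 from a dict {x: y} of at least `threshold` points."""
    xs, secret = list(points), 0
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + points[xi] * num * pow(den, -1, P)) % P
    return secret

shares = weighted_share(42, {'A': 3, 'B': 2, 'C': 1}, threshold=4)
pooled = {**shares['A'], **shares['B']}               # weight 3 + 2 >= 4: authorized
print(reconstruct(dict(list(pooled.items())[:4])))    # prints 42
```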
Secret Sharing and Secure Multi-Party Computation
Secret-sharing schemes are a basic building block in the construction of many cryptographic protocols. In this section we demonstrate the use of secret-sharing schemes for secure multi-party computation of general functions. For simplicity we concentrate on the case that the parties are honest-but-curious, that is, the parties follow the instructions of the protocol; however, at the end of the protocol some of them might collude and try to deduce information from the messages they got. The protocols that we describe are secure against an all-powerful adversary, that is, they supply information-theoretic security.

We will first show a homomorphic property of Shamir's secret-sharing scheme. Using this property, we show how to use secret sharing to construct a protocol for securely computing the sum of secret inputs. Then, we will show how to securely compute the product of inputs. Combining these protocols we get an efficient protocol for computing any function which can be computed by a small arithmetic circuit. Such protocols with information-theoretic security were first presented in BIB001 BIB002 . The exact protocol we present here is from .

Claim. Let s_{1,1}, . . . , s_{1,n} and s_{2,1}, . . . , s_{2,n} be shares of the secrets k_1 and k_2, respectively, generated by Shamir's scheme with polynomials of degree at most t. Then s_{1,1} + s_{2,1}, . . . , s_{1,n} + s_{2,n} are shares of the secret k_1 + k_2 in the same scheme.

Proof. Let Q_1 and Q_2 be the polynomials of degree at most t generating the shares s_{1,1}, . . . , s_{1,n} and s_{2,1}, . . . , s_{2,n} respectively, that is, Q_i(0) = k_i and Q_i(α_j) = s_{i,j} for i ∈ {1, 2} and 1 ≤ j ≤ n (where α_1, . . . , α_n are defined in Section 3.1). Define Q(x) = Q_1(x) + Q_2(x). This is a polynomial of degree at most t such that Q(0) = k_1 + k_2 and Q(α_j) = s_{1,j} + s_{2,j}; that is, Q generates the shares s_{1,1} + s_{2,1}, . . . , s_{1,n} + s_{2,n} given the secret k_1 + k_2. Similarly, the polynomial Q_1(x) · Q_2(x) is a polynomial of degree at most 2t generating the shares s_{1,1} · s_{2,1}, . . . , s_{1,n} · s_{2,n} given the secret k_1 · k_2.
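The claim above is easy to check empirically: the sketch below shares two secrets with Shamir's scheme over a small prime field, has each party add its two shares locally, and interpolates the sums to recover k_1 + k_2. The field size, the evaluation points α_j = j, and the parameters are illustrative choices.

```python
import random

P = 101                                   # small prime field (illustrative)
N, T = 5, 2                               # n parties, polynomials of degree at most t

def shamir(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(j, i, P) for i, c in enumerate(coeffs)) % P for j in range(1, N + 1)]

def interpolate_at_zero(points):
    """Lagrange interpolation at x = 0 from (x, y) pairs."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

k1, k2 = 17, 30
s1, s2 = shamir(k1), shamir(k2)
sums = [(a + b) % P for a, b in zip(s1, s2)]            # each party adds its shares locally
pts = [(j + 1, sums[j]) for j in range(T + 1)]          # any t+1 parties suffice
print(interpolate_at_zero(pts), (k1 + k2) % P)          # both print 47
```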
Extensions to Other Models
The protocol we described above assumes that the corrupted parties are honestbut-curious. A more realistic assumption is that the parties can deviate from the protocol and send any messages that might help them. Such parties are called malicious. For example, in the multiplication protocol, a party that should share s j can send shares that are not consistent with any secret. Furthermore, in the reconstruction step in the arithmetic circuit protocol, a party can send a "wrong" share. To cope with malicious behavior, the notion of verifiable secret sharing was introduced by Chor et al. . Such schemes were constructed under various assumptions, see for a partial list of such constructions. We will not elaborate on verifiable secret sharing in this survey. In the definition of secure computation we assumed that there is a parameter t, and an adversary can control any coalition of size at most t. This assumes that all parties are as likely to be corrupted. Hirt and Maurer BIB001 considered a more general scenario in which there is an access structure, and the adversary can control any set of parties not in the access structure. That is, they require that any set not in the access structure cannot learn information not implied by the inputs of the parties in the set and the output of the function. Similarly to the requirement that 2t < n in the protocol we described above, secure computation against honest-but-curious parties is possible for general functions iff the union of every two sets not in the access structure does not cover the entire set of parties BIB001 . For every such access structure A, Cramer et al. BIB002 showed that using every linear secret-sharing scheme realizing A, one can construct a protocol for computing any arithmetic circuit such that any set not in the access structure cannot learn any information; the complexity of the protocol is linear in the size of the circuit. Their protocol is similar to the protocol we described above, where for addition gates every party does local computation. Multiplication is also similar, however, the choice of the constants β 1 , . . . , β n is more involved. The protocol of Cramer et al. BIB002 shows the need for general secret-sharing schemes.
Lower Bounds on the Size of the Shares
The best known constructions of secret-sharing schemes for general access structures (e.g., BIB002 BIB003 BIB004 BIB005 ) have information ratio 2^{O(n)}, where n is the number of parties in the access structure. As discussed in the introduction, we conjecture that this is the best possible. Lower bounds for secret-sharing schemes have been proved in, e.g., BIB001 BIB008 BIB006 BIB007 . However, these lower bounds are far from the exponential upper bounds. The best lower bound was proved by Csirmaz BIB007 , who proved that for every n there exists an n-party access structure such that every secret-sharing scheme realizing it has information ratio Ω(n/log n). In Sections 5.2-5.3, we review this proof. For linear secret-sharing schemes the situation is much better: for every n there exist access structures with n parties such that every linear secret-sharing scheme realizing them has super-polynomial, i.e., n^{Ω(log n)}, information ratio BIB009 . In Section 5.5, we present the lower bound proof of .
Stronger Lower Bounds
Starting from the works of Karnin et al. BIB001 and Capocelli et al. BIB004 , the entropy was used to prove lower bounds on the share size in secret-sharing schemes BIB002 BIB003 . In other words, to prove lower bounds on the information ratio of secret-sharing schemes, we use the alternative definition of secret sharing via the entropy function, Definition 3. Towards proving lower bounds, we use properties of the entropy function as well as the correctness and privacy of secret-sharing schemes. This is summarized in Claim 5. To simplify notations, in the sequel we denote H(S A ) by H(A) for any set of parties A ⊆ {p 1 , . . . , p n }.
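As a small sanity check of how these entropy-based definitions are used, the sketch below enumerates the joint distribution of the secret and the shares in the four-party ratio-2 scheme from Section 3.6 (uniform one-bit secret and randomness) and evaluates the relevant entropies: an authorized pair determines the secret, an unauthorized pair learns nothing, and H(share of p_3) = 2·H(S). The helper names are ours.

```python
from collections import Counter
from itertools import product
from math import log2

def H(samples):
    """Shannon entropy of the distribution of `samples` (all outcomes equally likely)."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Enumerate the scheme: the secret k1 and the random bits r1, r2 are uniform.
rows = []
for k1, r1, r2 in product((0, 1), repeat=3):
    rows.append({'k': k1, 'p1': (r1,), 'p2': (r1 ^ k1,), 'p3': (r1, r2 ^ k1), 'p4': (r2,)})

def cond_H(target, given):
    """H(target | given) = H(target, given) - H(given)."""
    joint = H([tuple(row[x] for x in given) + (row[target],) for row in rows])
    return joint - H([tuple(row[x] for x in given) for row in rows])

print(cond_H('k', ['p1', 'p2']))   # 0.0 -> {p1,p2} reconstructs the secret
print(cond_H('k', ['p1', 'p3']))   # 1.0 -> {p1,p3} learns nothing about the secret
print(H([row['p3'] for row in rows]) / H([row['k'] for row in rows]))   # 2.0, the ratio of p3
```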
Limitations of Known Techniques for Lower Bounds
Basically, all known lower bounds for the size of shares in secret-sharing schemes are implied by Claim 5. In other words, they only use the so-called Shannon information inequalities (i.e., the fact that the conditional mutual information is non-negative). Csirmaz BIB001 in 1994 proved that such proofs cannot prove a lower bound of ω(n) on the information ratio. That is, Csirmaz's lower bound is nearly optimal (up to a factor log n) using only Shannon inequalities. In 1998, new information inequalities were discovered by Zhang and Yeung BIB002 . Other information inequalities have been discovered since, see, e.g. . In particular, there are infinitely many independent information inequalities in 4 variables BIB003 . Such inequalities were used in BIB004 to prove lower bounds for secret-sharing schemes. Beimel and Orlov BIB005 proved that all information inequalities with 4 or 5 variables and all known information inequalities in more than 5 variables cannot prove a lower bound of ω(n) on the information ratio of secret-sharing schemes. Thus, new information inequalities with more than 5 variables should be found if we want to improve the lower bounds.
Lower Bounds for Linear Secret Sharing
For linear secret-sharing schemes we can prove much stronger lower bounds than for general secret-sharing schemes. Recall that linear secret-sharing schemes are equivalent to monotone span programs, so we first state the results using monotone span programs. Lower bounds for monotone span programs were presented in BIB003 BIB002 ; the best known lower bound is n^{Ω(log n)} as proved in . We present here an alternative proof of . We start with a simple observation.

Observation 1. Let A be a (monotone) access structure, let B ∈ A, and let C ⊆ {p_1, . . . , p_n} be a set such that {p_1, . . . , p_n} \ C ∉ A. Then B ∩ C ≠ ∅.

The observation follows from the fact that if B ∩ C = ∅, then B ⊆ {p_1, . . . , p_n} \ C, contradicting the fact that B ∈ A and {p_1, . . . , p_n} \ C ∉ A.

To prove the lower bound, Gál and Pudlák choose a collection of unauthorized sets that satisfies some properties; they use this collection to construct a matrix over F, and prove that the rank of the matrix over F is a lower bound on the size of every monotone span program over F realizing A. Let B = {B_1, . . . , B_ℓ} be the collection of minimal authorized sets in A, and let C be a collection of pairs (C_0, C_1) of disjoint sets of parties. To prove the lower bound, Gál and Pudlák use a collection C such that, for every i and j, exactly one of the two conditions of Theorem 4 holds.
A bipartite graph G = (U, V, E) satisfies the isolated neighbor property for t if for every two disjoint sets A, B ⊆ U with |A| = |B| = t there is at least one vertex v ∈ V such that v is a neighbor of all vertices in A and v is not a neighbor of any vertex in B. For a set A ⊆ U, let N(A) denote the set of vertices of V that are neighbors of all vertices in A.
Let G = (U, V, E) be a bipartite graph satisfying the isolated neighbor property for t, where the vertices of the graph are parties, i.e., U ∪ V = {p_1, . . . , p_n}. We define an access structure N_G with |U| + |V| parties whose minimal authorized sets are the sets A ∪ N(A) where A ⊂ U and |A| = t.

Example 8. Consider the graph described in Figure 1. This is a trivial graph satisfying the isolated neighbor property for t = 2. For example, consider the disjoint sets {p_1, p_2} and {p_3, p_4}; vertex p_5 is a neighbor of all the vertices in the first set while it is not a neighbor of any vertex in the second set. The access structure N_G defined for this graph is the access structure defined in Example 6.

Lemma. Let G = (U, V, E) be a bipartite graph satisfying the isolated neighbor property for t. Then the size of every monotone span program over any field accepting N_G is at least the number of subsets of U of size t.

Proof. We prove the lemma using Theorem 4. We take C to be all the pairs (C_0, C_1) where C_0 ⊆ U with |C_0| = t and C_1 = {v ∈ V : (u, v) ∉ E for every u ∈ C_0}, that is, C_1 contains all vertices of V that are not neighbors of any vertex in C_0. We first claim that the collection C satisfies the unique intersection property for N_G: fix a pair (C_0, C_1) ∈ C and let T = {p_1, . . . , p_n} \ (C_0 ∪ C_1). We need to show that T ∉ N_G, that is, T does not contain any minimal authorized set. Let A ⊆ U ∩ T be any set such that |A| = t. Since A ⊆ T and C_0 ∩ T = ∅, the sets A and C_0 are disjoint and |A| = |C_0| = t, so by the isolated neighbor property there is a vertex v ∈ V such that v ∈ N(A) and v ∈ C_1, that is, v ∉ T. In other words, T does not contain any minimal authorized set A ∪ N(A). Thus, by Theorem 4, the size of every monotone span program accepting N_G is at least rank_F(D). In this case, for every A, C_0 such that |A| = |C_0| = t, the entry corresponding to A ∪ N(A) and (C_0, C_1) is zero if A ∩ C_0 = ∅ and is one otherwise. That is, D is the (n, t)-disjointness matrix, which has full rank over every field (see, e.g., Example 2.12). BIB003 The rank of D is, thus, the number of minimal authorized sets in N_G, namely the number of subsets of U of size t.

As there exist graphs which satisfy the isolated neighbor property for t = Ω(log n), e.g., the Paley graph BIB001 , we derive the promised lower bound.

Theorem 5. For every n, there exists an access structure N_n such that every monotone span program over any field accepting it has size n^{Ω(log n)}. As monotone span programs are equivalent to linear secret-sharing schemes BIB002 , the same lower bound applies to linear secret-sharing schemes.
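For small parameters, the rank claim at the heart of this argument can be checked directly. The sketch below builds the matrix whose entry for a pair (A, C_0) of t-subsets is 0 when A ∩ C_0 = ∅ and 1 otherwise, exactly as in the proof, and computes its rank over F_2 by Gaussian elimination; the parameters u = 6, t = 2 are an illustrative choice.

```python
from itertools import combinations

def rank_gf2(matrix):
    """Rank over F_2 via Gaussian elimination on a copy of the rows."""
    rows = [row[:] for row in matrix]
    rank, ncols = 0, len(matrix[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

u, t = 6, 2                                    # illustrative parameters
subsets = list(combinations(range(u), t))       # the sets A (rows) and C_0 (columns)
D = [[0 if set(A).isdisjoint(C0) else 1 for C0 in subsets] for A in subsets]
print(len(subsets), rank_gf2(D))                # 15 15: D has full rank for these parameters
```

Full rank means that the lower bound of the lemma is the number of t-subsets of U, matching the count of minimal authorized sets of N_G.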
Oblivious-Transfer Protocols from Secret-Sharing
To appreciate the result presented below we start with some background. Cryptographic protocols are built based on some assumptions. These assumptions can be specific (e.g., factoring is hard) or generic (e.g., there exist one-way functions or there exist trapdoor permutations). The minimal generic assumption is the existence of one-way functions. This assumption implies, for example, that pseudorandom generators, private-key encryption systems, and digital signatures exist BIB003 . However, many other tasks are not known to follow from one-way functions. Impagliazzo and Rudich BIB002 showed that using blackbox reductions one cannot construct oblivious-transfer protocols based on one-way functions.

The next result of Rudich shows how to construct oblivious-transfer protocols based on one-way functions and an efficient secret-sharing scheme for A_ham. By Theorem 6, we cannot hope for a perfect secret-sharing scheme for A_ham. However, if one can construct computational secret-sharing schemes realizing A_ham based on one-way functions, then we get that one-way functions imply oblivious-transfer protocols. This would solve a major open problem in cryptography, i.e., using Impagliazzo's terminology , it would prove that Minicrypt = Cryptomania. As Rudich's result uses a non-blackbox reduction, such a construction bypasses the impossibility result of BIB002 .

Preliminaries. In this survey we will not define computational secret-sharing schemes; the definition can be found in BIB005 . In such schemes we require that the sharing and reconstruction are done in time polynomial in the secret length and the number of parties in the access structure. Furthermore, we require that a polynomial-time adversary controlling the parties of an unauthorized set cannot distinguish between shares of one secret and shares of another secret. Rudich considers schemes for A_ham, where the requirement on efficient reconstruction is quite weak: any authorized subset E can efficiently reconstruct the secret given that the set knows the Hamiltonian cycle in E. Thus, this weaker requirement avoids problems arising from the NP-completeness of the Hamiltonian problem.

Next, we recall the notion of 1-out-of-2 oblivious transfer BIB001 . This is a protocol between two parties, a sender holding two bits b_0, b_1 and a receiver holding an index i ∈ {0, 1}. At the end of the protocol, the receiver should hold b_i without gaining any knowledge on the other bit b_{1−i}. The sender should not be able to deduce any information on i. Intuitively, the sender sends exactly one bit to the receiver; however, it is oblivious to which bit it sends. As in Section 4, we consider honest-but-curious parties. As the result of BIB002 already applies to this setting, constructing oblivious-transfer protocols for honest-but-curious parties is already interesting. Furthermore, by a transformation of , any such protocol can be transformed into a protocol secure against malicious parties assuming that one-way functions exist. We are ready to state and prove Rudich's result.

Theorem 7 (Rudich). Assume that one-way functions exist and that there is an efficient computational secret-sharing scheme realizing A_ham. Then there is an oblivious-transfer protocol (secure against honest-but-curious parties).

Proof. Let Gen be a pseudorandom generator stretching ℓ bits to 2ℓ bits. By , if one-way functions exist, then such Gen exists. Define the language L_Gen = {y : ∃x Gen(x) = y}. Clearly, L_Gen ∈ NP. Let f be a polynomial-time reduction from L_Gen to Hamiltonian, that is, f can be computed in polynomial time and y ∈ L_Gen iff G = f(y) ∈ Hamiltonian.
Such an f exists with the property that a witness for y can be efficiently translated to a witness for G = f (y); that is, given y ∈ L Gen , a witness x for it (i.e., Gen(x) = y), and G = f (y), one can find in polynomial time a Hamiltonian cycle in G. The next protocol is an oblivious-transfer protocol (for honest-but-curious parties): Receiver's input: i ∈ {0, 1} and security parameter 1^ℓ. Sender's input: b 0 , b 1 and security parameter 1^ℓ. Instructions for the receiver: -Choose at random x 1 ∈ {0, 1}^ℓ and compute y 1 = Gen(x 1 ). -Choose at random y 0 ∈ {0, 1}^{2ℓ}. -Compute G i = f (y 1 ) and G 1−i = f (y 0 ), and send G 0 , G 1 to the sender. Instructions for the sender: -Let G 0 = (V 0 , E 0 ) and G 1 = (V 1 , E 1 ) be the graphs that the receiver sends. -For j ∈ {0, 1}, share the bit b j using the scheme for the Hamiltonian access structure A ham for the complete graph with |V j | vertices, and send the shares of the parties corresponding to the edges in E j to the receiver. Instructions for the receiver: Compute a Hamiltonian cycle in G i from x 1 and y 1 , and reconstruct b i from the shares of this cycle for the graph G i . The privacy of the receiver is protected since the sender cannot efficiently distinguish between a string sampled according to the uniform distribution on {0, 1}^{2ℓ} and an output of the generator on a string sampled uniformly in {0, 1}^ℓ. In particular, the sender cannot efficiently distinguish between the outputs of the reduction f on two such strings. The privacy of the sender is protected against an honest-but-curious receiver since with probability at least 1 − 1/2^ℓ the string y 0 is not in the range of Gen; thus, G 1−i has no Hamiltonian cycle, that is, E 1−i is an unauthorized set. In this case, the secret b 1−i cannot be efficiently computed from the shares of E 1−i . If we hope to construct an oblivious-transfer protocol using the approach of Theorem 7, then we should construct an efficient computational scheme for the Hamiltonian access structure based on the assumption that one-way functions exist. For feasibility purposes it would be interesting to construct a computational secret-sharing scheme for Hamiltonicity based on stronger cryptographic assumptions, e.g., that trapdoor permutations exist.
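To make the message flow of Rudich's construction concrete, here is a minimal Python sketch of the three moves of the protocol. It deliberately does not implement the hard ingredient: the pseudorandom generator `prg`, the Karp reduction `reduce_to_ham` (the function f above), the computational Hamiltonian sharing `share_ham`, the witness translation `cycle_from_witness`, and the reconstruction procedure `reconstruct` are assumed to be supplied from outside, and their names are placeholders introduced for illustration, not calls to any real library.

```python
import secrets

def receiver_setup(i, ell, prg, reduce_to_ham):
    """Receiver's first move. prg: assumed PRG from ell bits to 2*ell bits;
    reduce_to_ham: assumed Karp reduction f from L_Gen to Hamiltonicity,
    returning a graph as (num_vertices, frozenset_of_edges)."""
    x1 = secrets.randbits(ell)              # seed; kept secret together with y1
    y1 = prg(x1)                            # in the range of Gen, so f(y1) is Hamiltonian
    y0 = secrets.randbits(2 * ell)          # with high probability outside the range of Gen
    graphs = [None, None]
    graphs[i] = reduce_to_ham(y1)           # the receiver knows a cycle here (via x1)
    graphs[1 - i] = reduce_to_ham(y0)       # with high probability has no Hamiltonian cycle
    return (x1, y1), graphs                 # only `graphs` is sent to the sender

def sender_respond(b0, b1, graphs, share_ham):
    """Sender's move: for j in {0,1}, share b_j with the assumed computational
    scheme for A_ham on the complete graph with |V_j| vertices, and return only
    the shares belonging to the edges present in G_j."""
    replies = []
    for b, (n_vertices, edges) in zip((b0, b1), graphs):
        all_shares = share_ham(b, n_vertices)            # dict: edge -> share
        replies.append({e: all_shares[e] for e in edges})
    return replies

def receiver_recover(i, witness, graphs, replies, cycle_from_witness, reconstruct):
    """Receiver's last move: derive a Hamiltonian cycle in G_i from (x1, y1)
    and reconstruct b_i from the shares held by that cycle's edges."""
    cycle_edges = cycle_from_witness(witness, graphs[i])
    return reconstruct({e: replies[i][e] for e in cycle_edges})
```

The receiver learns b i because the cycle's edges form an authorized set of G i, while the shares sent for G 1−i are, with overwhelming probability, those of an unauthorized set.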
|
Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Secret Sharing from the perspective of threshold schemes has been well-studied over the past decade. Threshold schemes, however, can only handle a small fraction of the secret sharing functions which we may wish to form. For example, if it is desirable to divide a secret among four participants A, B, C, and D in such a way that either A together with B can reconstruct the secret or C together with D can reconstruct the secret, then threshold schemes (even with weighting) are provably insufficient.This paper will present general methods for constructing secret sharing schemes for any given secret sharing function. There is a natural correspondence between the set of "generalized" secret sharing functions and the set of monotone functions, and tools developed for simplifying the latter set can be applied equally well to the former set. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons. <s> BIB002 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> A linear algebraic model of computation the span program, is introduced, and several upper and lower bounds on it are proved. These results yield applications in complexity and cryptography. The proof of the main connection, between span programs and counting branching programs, uses a variant of Razborov's general approximation method. > <s> BIB003 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> A secret-sharing scheme enables a dealer to distribute a secret among $n$ parties such that only some predefined authorized sets of parties will be able to reconstruct the secret from their shares. The (monotone) collection of authorized sets is called an access structure, and is freely identified with its characteristic monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. A family of secret-sharing schemes is called efficient if the total length of the n shares is polynomial in n. Most previously known secret-sharing schemes belonged to a class of linear schemes, whose complexity coincides with the monotone span program size of their access structure. Prior to this work there was no evidence that nonlinear schemes can be significantly more efficient than linear schemes, and in particular there were no candidates for schemes efficiently realizing access structures which do not lie in NC. ::: The main contribution of this work is the construction of two efficient nonlinear schemes: (1) A scheme with perfect privacy whose access structure is conjectured not to lie in NC, and (2) a scheme with statistical privacy whose access structure is conjectured not to lie in P/poly. 
Another contribution is the study of a class of nonlinear schemes, termed quasi-linear schemes, obtained by composing linear schemes over different fields. While these schemes are (superpolynomially) more powerful than linear schemes, we show that they cannot efficiently realize access structures outside NC. <s> BIB004 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Monotone span programs are a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. A fundamental question is how the choice of the field in which the algebraic operations are performed effects the power of the span program. In this paper we prove that the power of monotone span programs over finite fields of different characteristics is incomparable; we show a super-polynomial separation between any two fields with different characteristics, answering an open problem of Pudlak and Sgall (1998). Using this result we prove a super-polynomial lower bound for monotone span programs for a function in uniform - /spl Nscr/;/spl Cscr/;/sup 2/ (and therefore in /spl Pscr/;), answering an open problem of Babai, Wigderson, and Gal (1999). Finally, we show that quasi-linear schemes, a generalization of linear secret sharing schemes introduced in Beimel and Ishai (2001), are stronger than linear secret sharing schemes. In particular, this proves, without any assumptions, that non-linear secret sharing schemes are more efficient than linear secret sharing schemes. <s> BIB005 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Secret sharing is a very important primitive in cryptography and distributed computing. In this work, we consider computational secret sharing (CSS) which provably allows a smaller share size (and hence greater efficiency) than its information-theoretic counterparts. Extant CSS schemes result in succinct share-size and are in a few cases, like threshold access structures, optimal. However, in general, they are not efficient (share-size not polynomial in the number of players n), since they either assume efficient perfect schemes for the given access structure (as in [10]) or make use of exponential (in n) amount of public information (like in [5]). In this paper, our goal is to explore other classes of access structures that admit of efficient CSS, without making any other assumptions. We construct efficient CSS schemes for every access structure in monotone P. As of now, most of the efficient information-theoretic schemes known are for access structures in algebraic NC 2. Monotone P and algebraic NC 2 are not comparable in the sense one does not include other. Thus our work leads to secret sharing schemes for a new class of access structures. In the second part of the paper, we introduce the notion of secret sharing with a semi-trusted third party, and prove that in this relaxed model efficient CSS schemes exist for a wider class of access structures, namely monotone NP. <s> BIB006 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> Monotone span programs represent a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. A fundamental question regarding them is how the choice of the field in which the algebraic operations are performed affects the power of the span program. 
In this paper we prove that the power of monotone span programs over finite fields of different characteristics is incomparable; we show a superpolynomial separation between any two fields with different characteristics, solving an open problem of Pudlak and Sgall [Algebraic models of computation and interpolation for algebraic proof systems, in Proof Complexity and Feasible Arithmetic, DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 39, P. W. Beame and S. Buss, eds., AMS, Providence, RI, 1998, pp. 279--296]. Using this result we prove a superpolynomial lower bound for monotone span programs for a function in uniform-${\cal N}C^2$ (and therefore in ${\cal P}$), solving an open problem of Babai, Gal, and Wigderson [Combinatorica, 19 (1999), pp. 301--319]. (All previous superpolynomial lower bounds for monotone span programs were for functions not known to be in ${\cal P}$.) Finally, we show that quasi-linear secret sharing schemes, a generalization of linear secret sharing schemes introduced in Beimel and Ishai [On the power of nonlinear secret-sharing, in Proceedings of the 16th Annual IEEE Conference on Computational Complexity, 2001, pp. 188--202], are stronger than linear secret sharing schemes. In particular, this proves, without any assumptions, that nonlinear secret sharing schemes are more efficient than linear secret sharing schemes. <s> BIB007 </s> Secret-Sharing Schemes: A Survey <s> Summary and Open Problems <s> In a secret sharing scheme, a dealer has a secret key. There is a finite set P of participants and a set ? of subsets of P. A secret sharing scheme with ? as the access structure is a method which the dealer can use to distribute shares to each participant so that a subset of participants can determine the key if and only if that subset is in ?. The share of a participant is the information sent by the dealer in private to the participant. A secret sharing scheme is ideal if any subset of participants who can use their shares to determine any information about the key can in fact actually determine the key, and if the set of possible shares is the same as the set of possible keys. In this paper, we show a relationship between ideal secret sharing schemes and matroids. <s> BIB008
|
In this survey we consider secret-sharing schemes, a basic tool in cryptography. We show several constructions of secret-sharing schemes, starting from the scheme of . We then describe its generalization by BIB001 , showing that if an access structure can be described by a small monotone formula, then it has an efficient secret-sharing scheme. We also show the construction of secret-sharing schemes from monotone span programs BIB002 BIB003 . Monotone span programs are equivalent to linear secret-sharing schemes and are equivalent to schemes where the reconstruction is linear . As every monotone formula can be transformed into a monotone span program of the same size, the monotone span program construction is a generalization of the construction of BIB001 . Furthermore, there are functions that have small monotone span programs and do not have small monotone formulae , thus, this is a strict generalization. Finally, we present the multi-linear construction of secret-sharing schemes. All the constructions presented in Section 3 are linear over a finite field (some of the schemes work also over finite groups, e.g., the scheme of Benaloh and Leichter). The linearity of a scheme is important in many applications, as we demonstrated in Section 4 for the construction of secure multiparty protocols for general functions. Thus, it is interesting to understand the access structures that have efficient linear secret-sharing schemes. The access structures that can be efficiently realized by linear and multi-linear secret-sharing schemes are characterized by functions that have polynomial-size monotone span programs, or, more generally, multi-target monotone span programs. We would like to consider the class of access structures that can be realized by linear secret-sharing schemes with polynomial share length. As this discussion is asymptotic, we consider a sequence of access structures {A n } n∈N , where A n has n parties. As linear algebra can be computed in NC (informally, NC is the class of problems that can be solved by parallel algorithms with polynomially many processors and poly-logarithmic running time), every sequence of access structures that has efficient linear secret-sharing schemes can be recognized by NC algorithms. For example, if P ≠ NC, then access structures recognized by monotone P-complete problems do not have efficient linear secret-sharing schemes. The limitations of linear secret-sharing schemes raise the question of whether there are more powerful non-linear secret-sharing schemes. Beimel and Ishai BIB004 have constructed non-linear schemes for access structures that are not known to be in P (e.g., for an access structure related to the quadratic residuosity problem over N = pq). Thus, non-linear schemes are probably stronger than linear schemes. Furthermore, Beimel and Ishai defined quasi-linear schemes, which are compositions of linear schemes over different fields. Beimel and Weinreb BIB005 showed, without any assumptions, that quasi-linear schemes are stronger than linear schemes, that is, there exists an access structure that has quasi-linear schemes with constant information ratio while every linear secret-sharing scheme realizing this access structure has super-polynomial information ratio. However, Beimel and Ishai BIB004 proved that if an access structure has an efficient quasi-linear scheme, then it can be recognized by an NC algorithm. Thus, the class of access structures realized by efficient quasi-linear schemes is also limited.
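To illustrate the formula-based construction of BIB001 mentioned above, the following small Python sketch realizes the DNF formula (A ∧ B) ∨ (C ∧ D): an AND gate is handled by additive sharing (all parties of the clause are needed) and an OR gate by giving each clause an independent sharing of the same secret. The modulus Q and the party names are illustrative choices only, not part of any particular reference implementation.

```python
import secrets

Q = 2**128  # shares and secret live in the group of integers modulo Q (any finite group works)

def share_and(secret, parties):
    """AND gate: additive sharing -- every party of the clause is needed to recover."""
    shares = {p: secrets.randbelow(Q) for p in parties[:-1]}
    shares[parties[-1]] = (secret - sum(shares.values())) % Q
    return shares

def share_or(secret, clauses):
    """OR gate: hand each clause an independent additive sharing of the same secret."""
    result = {}
    for clause in clauses:
        for party, share in share_and(secret, clause).items():
            result.setdefault(party, []).append((tuple(clause), share))
    return result

def reconstruct(shares, authorized_clause):
    """Add up the shares that the members of one satisfied clause hold for that clause."""
    key = tuple(authorized_clause)
    return sum(share for party in authorized_clause
               for clause, share in shares[party] if clause == key) % Q

# Access structure of the example in the abstract of BIB001: {A,B} or {C,D} can reconstruct.
shares = share_or(42, [["A", "B"], ["C", "D"]])
assert reconstruct(shares, ["A", "B"]) == 42
assert reconstruct(shares, ["C", "D"]) == 42
```

The share of a party grows with the number of clauses it appears in, which is why this construction is efficient exactly when the monotone formula describing the access structure is small.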
Another non-linear construction of secret-sharing schemes is an unpublished result of Yao (see also BIB006 ). Yao showed that if an access structure can be described by a small monotone circuit, then it has an efficient computational secret-sharing scheme. This generalizes the results of BIB001 showing that if an access structure can be described by a small monotone formula, then it has an efficient perfect secret-sharing scheme. We will not describe the construction of Yao in this survey. An additional topic that we will not cover in this survey is ideal secret-sharing schemes. By Lemma 2, the size of the share of each party is at least the size of the secret. An ideal secret-sharing scheme is a scheme in which the size of the share of each party is exactly the size of the secret. For example, Shamir's scheme is ideal. An access structure is ideal if it has an ideal scheme over some finite domain of secrets. For example, threshold access structures are ideal, while the access structure described in Example 5 is not ideal. Brickell BIB002 considered ideal schemes and constructed ideal schemes for some access structures, e.g., for hierarchical access structures. Brickell and Davenport BIB008 showed an interesting connection between ideal access structures and matroids: if an access structure is ideal, then it is a matroid port; if an access structure is a matroid port of a representable matroid, then the access structure is ideal. Following this work, many works have constructed ideal schemes and have studied ideal access structures and matroids. For example, Martí-Farré and Padró BIB007 showed that if an access structure is not a matroid port, then the information ratio of every secret-sharing scheme realizing it is at least 1.5 (compared to information ratio 1 for ideal schemes).
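For concreteness, the short Python sketch below implements Shamir's t-out-of-n scheme over a prime field; it is ideal in the sense used above because each share is a single field element, exactly the size of the secret. The particular prime and the parameters in the example are arbitrary illustrative choices.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; both the secret and every share are elements of GF(P)

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them determine it (Shamir's scheme)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    # party j (1 <= j <= n) receives the degree-(t-1) polynomial evaluated at x = j
    return {j: sum(c * pow(j, k, P) for k, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from at least t shares."""
    secret = 0
    for j, y in shares.items():
        num, den = 1, 1
        for m in shares:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        secret = (secret + y * num * pow(den, -1, P)) % P
    return secret

all_shares = share(1234, t=3, n=5)
subset = {j: all_shares[j] for j in (1, 3, 5)}   # any 3 of the 5 shares suffice
assert reconstruct(subset) == 1234
```

Fewer than t shares leave the secret information-theoretically undetermined, which is exactly the perfect privacy property discussed throughout the survey.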
|
Secret-Sharing Schemes: A Survey <s> Question 3. <s> Weighted threshold functions with positive weights are a natural generalization of unweighted threshold functions. These functions are clearly monotone. However, the naive way of computing them is adding the weights of the satisfied variables and checking if the sum is greater than the threshold; this algorithm is inherently non-monotone since addition is a non-monotone function. In this work we by-pass this addition step and construct a polynomial size logarithmic depth unbounded fan-in monotone circuit for every weighted threshold function, i.e., we show that weighted threshold functions are in mAC^1. (To the best of our knowledge, prior to our work no polynomial monotone circuits were known for weighted threshold functions.) Our monotone circuits are applicable for the cryptographic tool of secret sharing schemes. Using general results for compiling monotone circuits (Yao, 1989) and monotone formulae (Benaloh and Leichter, 1990) into secret sharing schemes, we get secret sharing schemes for every weighted threshold access structure. Specifically, we get: (1) information-theoretic secret sharing schemes where the size of each share is quasi-polynomial in the number of users, and (2) computational secret sharing schemes where the size of each share is polynomial in the number of users. <s> BIB001 </s> Secret-Sharing Schemes: A Survey <s> Question 3. <s> In this work we study linear secret sharing schemes for s-t connectivity in directed graphs. In such schemes the parties are edges of a complete directed graph, and a set of parties (i.e., edges) can reconstruct the secret if it contains a path from node s to node t. We prove that in every linear secret sharing scheme realizing the st-con function on a directed graph with n edges the total size of the shares is Ω(n^1.5). This should be contrasted with s-t connectivity in undirected graphs, where there is a scheme with total share size n. Our result is actually a lower bound on the size of monotone span programs for st-con, where a monotone span program is a linear-algebraic model of computation equivalent to linear secret sharing schemes. Our results imply the best known separation between the power of monotone and non-monotone span programs. Finally, our results imply the same lower bounds for matching. <s> BIB002
|
Prove that there exists an explicit access structure such that the information ratio of every linear secret-sharing scheme realizing it is 2^Ω(n) . In this survey, we describe linear and multi-linear secret-sharing schemes. It is known that multi-linear schemes are more efficient than linear schemes for small access structures, e.g., . However, the possible improvement of multi-linear schemes over linear schemes is open. There are interesting access structures for which we do not know whether they have efficient schemes. The first access structure is the directed connectivity access structure, whose parties are edges in a complete directed graph and whose authorized sets are the sets of edges containing a path from v 1 to v m . As there is a small monotone circuit for this access structure, by Yao's result mentioned above it has an efficient computational scheme. It is not known whether this access structure can be described by a small monotone span program, and it is open whether it has an efficient perfect scheme. In BIB002 , it was proved that every monotone span program accepting the directed connectivity access structure has size Ω(n^{3/2}). In comparison, the undirected connectivity access structure has an efficient perfect scheme [15] (see Section 3.2). The second access structure for which we do not know whether it has an efficient scheme is the perfect matching access structure. The parties of this access structure are edges in a complete undirected graph, and the authorized sets are the sets of edges containing a perfect matching. It is not even known whether this access structure has an efficient computational scheme, as every monotone circuit for perfect matching has super-polynomial size. We remark that an efficient scheme for this access structure implies an efficient scheme for the directed connectivity access structure. The third interesting family of access structures is weighted threshold access structures. In such an access structure each party has a weight and there is some threshold; a set of parties is authorized if the sum of the weights of the parties in the set is bigger than the threshold. For these access structures there is an efficient computational scheme BIB001 and a perfect scheme with shares of length n^O(log n). It is open whether these access structures have a perfect scheme with polynomial-size shares. Furthermore, it is open whether they can be described by polynomial-size monotone formulae.
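The three families just described are easiest to grasp as monotone predicates on sets of parties. The snippet below is illustrative only: it defines the access structures as predicates, not secret-sharing schemes for them. It covers weighted threshold and undirected s-t connectivity (where the parties are edges); the directed variant is obtained by dropping the line that adds the reverse edge.

```python
from collections import defaultdict

def weighted_threshold(weights, threshold):
    """Weighted threshold access structure: a set of parties is authorized
    iff its total weight exceeds the threshold."""
    return lambda parties: sum(weights[p] for p in parties) > threshold

def st_connectivity(s, t):
    """Connectivity access structure: the parties are edges, and a set of
    edges is authorized iff it contains a path from s to t (checked by DFS)."""
    def authorized(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)          # drop this line for the directed variant
        stack, seen = [s], {s}
        while stack:
            u = stack.pop()
            if u == t:
                return True
            stack.extend(adj[u] - seen)
            seen |= adj[u]
        return False
    return authorized

wt = weighted_threshold({"A": 3, "B": 2, "C": 1}, threshold=4)
assert wt({"A", "B"}) and not wt({"B", "C"})

conn = st_connectivity("s", "t")
assert conn({("s", "a"), ("a", "t")}) and not conn({("s", "a"), ("b", "t")})
```

Both predicates are monotone: adding parties to an authorized set never makes it unauthorized, which is the defining property of an access structure.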
|
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> There is an urgent need to reduce the growing backlog of forensic examinations in Digital Forensics Laboratories (DFLs). Currently, DFLs routinely create forensic duplicates and perform in-depth forensic examinations of all submitted media. This approach is rapidly becoming untenable as more cases involve increasing quantities of digital evidence. A more efficient and effective three-tiered strategy for performing forensic examinations will enable DFLs to produce useful results in a timely manner at different phases of an investigation, and will reduce unnecessary expenditure of resources on less serious matters. The three levels of forensic examination are described along with practical examples and suitable tools. Realizing that this is not simply a technical problem, we address the need to update training and establish thresholds in DFLs. Threshold considerations include the likelihood of missing exculpatory evidence and seriousness of the offense. We conclude with the implications of scaling forensic examinations to the investigation. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Digital triage is a pre-digital-forensic phase that sometimes takes place as a way of gathering quick intelligence. Although effort has been undertaken to model the digital forensics process, little has been done to date to model digital triage. This work discuses the further development of a model that does attempt to address digital triage the Partially-automated Crime Specific Digital Triage Process model. The model itself will be presented along with a description of how its automated functionality was implemented to facilitate model testing. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> In enterprise environments, digital forensic analysis generates data volumes that traditional forensic methods are no longer prepared to handle. Triaging has been proposed as a solution to systematically prioritize the acquisition and analysis of digital evidence. We explore the application of automated triaging processes in such settings, where reliability and customizability are crucial for a successful deployment. 
We specifically examine the use of GRR Rapid Response (GRR) - an advanced open source distributed enterprise forensics system - in the triaging stage of common incident response investigations. We show how this system can be leveraged for automated prioritization of evidence across the whole enterprise fleet and describe the implementation details required to obtain sufficient robustness for large scale enterprise deployment. We analyze the performance of the system by simulating several realistic incidents and discuss some of the limitations of distributed agent based systems for enterprise triaging. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> In many police investigations today, computer systems are somehow involved. The number and capacity of computer systems needing to be seized and examined is increasing, and in some cases it may be necessary to quickly find a single computer system within a large number of computers in a network. To investigate potential evidence from a large quantity of seized computer system, or from a computer network with multiple clients, triage analysis may be used. In this work we first define triage based on the medical definition. From this definition, we describe a PXE-based client-server environment that allows for triage tasks to be conducted over the network from a central triage server. Finally, three real world cases are described in which the proposed triage solution was used. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Recently, digital evidence has been playing an increasingly important role in criminal cases. The seizure of Hard Disk Drives (HDDs) and creation of images of entire disk drives have become a best practice by law enforcement agencies. In most criminal cases, however, the incriminatory information found on an HDD is only a small portion of the entire HDD and the remaining information is not relevant to the case. For this reason, demands for the regulation of excessive search and seizure of defendants' innocuous information have been increasing and gaining strength. 
Some courts have even ruled out inadmissible digital evidence gathered from sites where the scope of a warrant has been exceeded, considering it to be a violation of due process. In order to protect the privacy of suspects, a standard should be made restricting excessive search and seizure. There are, however, many difficulties in selectively identifying and collecting digital evidence at a crime scene, and it is not realistic to expect law enforcement officers to search and collect completely only case-relevant evidence. Too much restriction can cause severe problems in investigations and may result in law enforcement authorities missing crucial evidence. Therefore, a model needs to be established that can assess and regulate excessive search and seizure of digital evidence in accordance with a reasonable standard that considers practical limitations. Consequently, we propose a new approach that balances two conflicting values: human rights protection versus the achievement of effective investigations. In this new approach, a triage model is derived from an assessment of the limiting factors of on-site search and seizure. For the assessment, a survey that provides information about the level of law enforcement, such as the available labor, equipment supply, technical limitations, and time constraints, was conducted using current field officers. A triage model that can meet the legal system's demand for privacy protection and which supports decision making by field officers that can have legal effects was implemented. Since the demands of each legal system and situation of law enforcement vary from country to country, the triage model should be established individually for each legal system. Along with experiment of our proposed approach, this paper presents a new triage model that is designed to meet the recent requirements of the Korean legal system for privacy protection from, specifically, a Korean perspective. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Digital forensic triage is poorly defined and poorly understood. The lack of clarity surrounding the process of triage has given rise to legitimate concerns. By trying to define what triage actually is, one can properly engage with the concerns surrounding the process. This paper argues that digital forensic triage has been conducted on an informal basis for a number of years in digital forensic laboratories, even where there are legitimate objections to the process. Nevertheless, there are clear risks associated with the process of technical triage, as currently practised. The author has developed and deployed a technical digital forensic previewing process that negates many of the current concerns regarding the triage process and that can be deployed in any digital forensic laboratory at very little cost. This paper gives a high-level overview of how the system works and how it can be deployed in the digital forensic laboratory. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. 
Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The investigation of fraud in business has been a staple for the digital forensics practitioner since the introduction of computers in business. Much of this fraud takes place in the retail industry. When trying to stop losses from insider retail fraud, triage, i.e. the quick identification of sufficiently suspicious behaviour to warrant further investigation, is crucial, given the amount of normal, or insignificant behaviour. <s> BIB010 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, internet of things (IoT) is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications. The interconnection of these embedded devices including smart objects, is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid. The main research challenge in Internet of things (IoT) for the forensic investigators is based size of the objects of forensic interest, relevancy, blurry network boundaries and edgeless networks, especially on method for conducting the investigation. The aim of this paper is to identify the best approach by designing a novel model to conduct the investigation situations for digital forensic professionals and experts. There was existing research works which introduce models for identifying the objects of forensics interest in investigations, but there were no rigorous testing for accepting the approach. Currently in this work, an integrated model is designed based on triage model and 1-2-3 zone model for volatile based data preservation. <s> BIB011 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> The BitTorrent client application is a popular utility for sharing large files over the Internet. Sometimes, this powerful utility is used to commit cybercrimes, like sharing of illegal material or illegal sharing of legal material. In order to help forensics investigators to fight against these cybercrimes, we carried out an investigation of the artifacts left by the BitTorrent client. We proposed a methodology to locate the artifacts that indicate the BitTorrent client activity performed. Additionally, we designed and implemented a tool that searches for the evidence left by the BitTorrent client application in a local computer running Windows. The tool looks for the four files holding the evidence. The files are as follows: *.torrent, dht.dat, resume.dat, and settings.dat. The tool decodes the files, extracts important information for the forensic investigator and converts it into XML format. 
The results are combined into a single result file. <s> BIB012 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Introduction <s> Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by a same modus operandi or same source, and thus support forensic intelligence efforts. Inspired by previous research work about digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise reproducibility and comparability of images. Different filters and comparison metrics have been evaluated and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters or their combination to extract profiles from images, and then the comparison of profiles with a Canberra distance-based metric provides the most accurate classification of documents. The method appears also to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a first fast triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II). (C) 2016 Elsevier Ireland Ltd. All rights reserved. <s> BIB013
|
The volume of data for forensic investigation is constantly growing. This is a result of continuing technological development: the scale and bounds of the Internet change rapidly and social networks have come into everyday use. Storage capacity expands to new areas as smartphones become part of the Internet and cloud storage services are offered. The digital forensic process is very time consuming, because it requires the examination of all available data volumes collected from the cybercrime scene. The digital forensic process commences with the collection, duplication, and authentication of every piece of digital media prior to examination. Moreover, every action taken has to adhere to the rules of legitimacy so that the obtained digital evidence can be presented in court. However, life is very dynamic, and situations arise in which some information about a possible cybercrime has to be obtained as promptly as possible without adhering to the rules of long legal scrutiny. Of course, the information obtained in such a way cannot be directly used in court; however, quick access to such knowledge can speed up the future process of digital forensics and, in some situations, can even save somebody's life. Therefore, such actions are justifiable. A process that takes place prior to the standard forensic methodology is called digital triage. It can provide valuable intelligence without subjecting digital evidence to a full examination. This quick intelligence can be used in the field to guide the search and seizure, and in the laboratory to determine whether a medium is worth examining. The term "triage" comes from the field of medicine, in which it refers to situations when, because of limited resources, injured people are ranked according to the necessity to receive treatment. Such ranking ensures the least damage to patients when resources are limited BIB004 . Rogers et al. , the authors of the first field triage model in computer forensics, define triage as a process of ranking objects in terms of importance or priority. Casey et al. BIB001 define triage in digital forensics as part of the forensic examination process. The forensic examination is described as a three-tier strategy consisting of three levels: (i) survey/triage forensic inspection, (ii) preliminary forensic examination, and (iii) in-depth forensic examination. The first stage, in which many potential sources of digital evidence are reviewed for specific information, is alternatively referred to as survey or triage. The same idea, that triage is part of the forensic examination, is supported in later works BIB012 BIB005 . Casey underlines that triage is effective for prioritizing, but it is not a substitute for a more thorough review. Casey argues that triage is a technical process, which can be performed outside a laboratory by professionals with basic training and limited oversight. Categorizing digital triage as a technical process makes it clearer that the information has not undergone rigorous quality assessment and that its legitimacy has not been evaluated. There are many other definitions of triage, which differ slightly depending on the attributed qualities BIB005 BIB002 BIB009 BIB006 . The diversity of triage definitions reflects the variety of views and indicates the immaturity of the field. However, this is not the main problem. The focus should be on the question of whether digital triage is a forensic process. As Cantrell et al.
BIB003 state, "Digital triage is not a forensic process by definition". It is not clear to which definition Cantrell et al. BIB003 refer. It is possible to suppose that it is the definition by Rogers et al. . However, other definitions exist, and the statement is not true in all cases BIB005 BIB006 BIB007 . Koopman and James BIB006 , and Roussev et al. BIB005 use the term "digital forensic triage". If digital triage is not a forensic process, then the term "forensic" cannot be used together with the term "digital triage", because it is misleading. Hong et al. BIB007 introduce a triage model that is adapted to the requirements of the Korean legal system. Consequently, the proposed triage model adheres to the rules of the forensic process. Moreover, Hong et al. BIB007 suggest establishing a triage model individually for the legal system of each specific country. To summarize the diversity of views on digital triage, we stress the following features: 1. Digital triage is a technical process that provides information for the forensic examination, but does not involve the evaluation of digital evidence. 2. The goal of digital triage is to rapidly review many potential sources of digital evidence for specific information and to prioritize the digital media so as to make the subsequent analysis easier. 3. The term "forensic" cannot be used together with the term "digital triage" if the process of digital triage does not adhere to the rules of the forensic process specific to the country. Digital triage comes in two forms: live and post-mortem. The post-mortem form of triage, which is conducted on the digital image, is not always recognized as triage. We consider both forms of digital triage to be equally important. Live triage raises many concerns, because it is conducted on the live system, and the destruction of likely evidence is possible. However, live digital triage has several advantages: 1. It enables a rapid extraction of intelligence that can be used for suspect interrogation. 2. It can capture data that would be lost if the computer were shut down. The primary concern inherent to both forms of digital triage is that evidential data can remain unnoticed BIB008 . Pollitt argues that the process of digital triage in the context of forensics is an admission of failure. However, he recognizes that for now a better approach does not exist. Moreover, the term "triage" has become the common word for the initial and rapid step in different areas of forensic investigation. For example, it is used in the retail industry BIB010 , in the Internet of things BIB011 , and in the examination of false identity and travel documents BIB013 . We review the research works related to digital triage, dividing the review into four sections: live triage, post-mortem triage, mobile device triage, and triage tools. The largest section is on triage tools; this abundance of research highlights the practical need for such tools. In the next section, we review the models and methods of live triage.
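Since all the definitions above reduce triage to ranking sources of evidence by priority, the following purely illustrative Python sketch shows the skeleton of such a ranking step. The indicators, weights, and device names are hypothetical and are not taken from any of the cited models or tools.

```python
# Hypothetical weights for quick indicators gathered during a triage scan;
# both the indicator names and the weights are illustrative only.
INDICATOR_WEIGHTS = {
    "known_hash_hits": 10,      # matches against a database of known contraband hashes
    "keyword_hits": 3,          # case-specific keyword matches
    "encryption_present": 5,    # encrypted containers or volumes detected
    "recent_user_activity": 2,  # activity within the period of interest
}

def triage_score(indicators):
    """Combine the quick indicators of one device into a single priority score."""
    return sum(INDICATOR_WEIGHTS[name] * value for name, value in indicators.items())

def prioritize(devices):
    """Rank seized devices (highest score first) to decide which media to examine first."""
    return sorted(devices, key=lambda d: triage_score(d["indicators"]), reverse=True)

devices = [
    {"id": "laptop-01", "indicators": {"known_hash_hits": 0, "keyword_hits": 8,
                                       "encryption_present": 1, "recent_user_activity": 1}},
    {"id": "usb-07",    "indicators": {"known_hash_hits": 4, "keyword_hits": 1,
                                       "encryption_present": 0, "recent_user_activity": 0}},
]
for rank, device in enumerate(prioritize(devices), start=1):
    print(rank, device["id"], triage_score(device["indicators"]))
```

Real triage models differ in which indicators they collect and how the scores feed the legal process, but the prioritization step they share has essentially this shape.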
|
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> This paper describes the Advanced Forensic Format (AFF), which is designed as an alternative to current proprietary disk image formats. AFF offers two significant benefits. First, it is more flexible because it allows extensive metadata to be stored with images. Second, AFF images consume less disk space than images in other formats (e.g., EnCase images). This paper also describes the Advanced Disk Imager, a new program for acquiring disk images that compares favorably with existing alternatives. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> Recently, digital evidence has been playing an increasingly important role in criminal cases. The seizure of Hard Disk Drives (HDDs) and creation of images of entire disk drives have become a best practice by law enforcement agencies. In most criminal cases, however, the incriminatory information found on an HDD is only a small portion of the entire HDD and the remaining information is not relevant to the case. For this reason, demands for the regulation of excessive search and seizure of defendants' innocuous information have been increasing and gaining strength. Some courts have even ruled out inadmissible digital evidence gathered from sites where the scope of a warrant has been exceeded, considering it to be a violation of due process. In order to protect the privacy of suspects, a standard should be made restricting excessive search and seizure. There are, however, many difficulties in selectively identifying and collecting digital evidence at a crime scene, and it is not realistic to expect law enforcement officers to search and collect completely only case-relevant evidence. Too much restriction can cause severe problems in investigations and may result in law enforcement authorities missing crucial evidence. Therefore, a model needs to be established that can assess and regulate excessive search and seizure of digital evidence in accordance with a reasonable standard that considers practical limitations. Consequently, we propose a new approach that balances two conflicting values: human rights protection versus the achievement of effective investigations. 
In this new approach, a triage model is derived from an assessment of the limiting factors of on-site search and seizure. For the assessment, a survey that provides information about the level of law enforcement, such as the available labor, equipment supply, technical limitations, and time constraints, was conducted using current field officers. A triage model that can meet the legal system's demand for privacy protection and which supports decision making by field officers that can have legal effects was implemented. Since the demands of each legal system and situation of law enforcement vary from country to country, the triage model should be established individually for each legal system. Along with experiment of our proposed approach, this paper presents a new triage model that is designed to meet the recent requirements of the Korean legal system for privacy protection from, specifically, a Korean perspective. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> This paper addresses the increasing resources overload being experienced by law enforcement digital forensics units with the proposal to introduce triage template pipelines into the investigative process, enabling devices and the data they contain to be examined according to a number of prioritised criteria. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Models and Methods of Live Triage <s> Digital Forensics is being actively researched and performed in various areas against changing IT environment such as mobile phone, e-commerce, cloud service and video surveillance. Moreover, it is necessary to research unified digital evidence management for correlation analysis from diverse sources. Meanwhile, various triage approaches have been developed to cope with the growing amount of digital evidence being encountered in criminal cases, enterprise investigations and military contexts. Despite of debating over whether triage inspection is necessary or not, it will be essential to develop a framework for managing scattered digital evidences. 
Models and Methods of Live Triage
Rogers et al. introduce a model for the field triage process in computer forensics and name it the Cyber Forensic Field Triage Process Model (CFFTPM). The CFFTPM has six phases: planning, triage, usage/user profiles, chronology/timeline, internet activity, and case-specific evidence. Each phase has several sub-tasks and considerations that vary according to the specifics of the case and the operating system under investigation. The CFFTPM originates from child pornography cases. Nevertheless, it is general enough to be applicable to other types of cases; however, the model cannot be considered the ultimate solution for every case. It is also important to note that the proposed model does not preclude transporting the system to a laboratory environment for a more thorough investigation.

Cantrell et al. BIB002 discuss a proposed model for digital triage. The proposed model is a linear framework, except for the preservation phase, which is an investigative principle maintained throughout all the phases. The first phase, planning and readiness, occurs before the onsite investigation. The next phase, live forensics, is included as an optional step, depending on the need and expertise, and it must occur prior to the following phases because volatile memory can be lost very quickly. The middle three phases (the computer profile phase, the crime potential phase, and the presentation phase) are intended to be an automated process, coded as a computer program or script using existing tools. The last phase, the triage examination phase, is optional depending on the need. The triage examination should be an automated process that is guided by the examiner using predefined templates specific to each case.

Hong et al. BIB003 propose a theoretical framework for implementing a triage model. The requirement for the triage model is to consider the limiting factors of the onsite search and seizure. The framework consists of three phases: assessment, triage model, and reassessment. The proposed framework is based on the assumption that reassessments are performed periodically according to changes in the search and the conditions of the onsite seizure. To establish a triage model, a questionnaire consisting of 48 questions, which are provided in the paper, was prepared; it was answered by 58 respondents in total. The paper presents an extensive discussion of the results. After assessing the results of the questionnaire, a new triage model is proposed. The triage process is divided into four steps: planning, execution, categorization, and decision. Whether information is collected properly depends mostly on the execution step. The execution step prioritizes the file types for the search according to three types of crime: personal general crime, personal high-tech crime, and corporate general crime. Next, the file search is conducted in the following order: timeline of interest; filename- or content-based keyword search; and file/directory path-based search. Another important procedure in the execution step is the detection of suspicious files. The proposed triage model can be applied only to personal computers, and it is tailored to the Korean legal system requirements for privacy protection.
Overill et al. BIB004 propose an attractive idea of introducing triage template pipelines into the investigative process for the most popular types of digital crimes, enabling digital evidence to be examined according to a number of prioritised criteria. Each specific digital crime has its own template of prioritised devices and data, based on the cost-effectiveness criteria of front-loading probative value and back-loading resource utilisation. The authors declare that about 80% of all digital crimes in Hong Kong are accounted for by just five types of crime. However, they do not enumerate these types of crime. The authors state that "the work this far has addressed the set of five digital crime templates"; however, examples of templates for only two digital crimes are provided, namely the Distributed Denial of Service (DDoS) template diagram and the Peer-to-Peer (P2P) template diagram. Moreover, the construction of these example templates is not discussed in detail. An advantage of the triage template pipeline approach over triage tools is that the evidential recovery process can be terminated as soon as it becomes apparent that the probative value criterion has been fulfilled. Therefore, the triage time can be shorter in some cases. The essence of the proposed triage template pipelines is formalized common sense.

Roussev et al. BIB005 argue for and analyze forensic triage as a real-time computation problem, which is allotted limited time and resources. One hour is considered an acceptable time limit for triage. The authors assume that an increase in performance can be achieved if the acquisition and processing start and complete at almost the same time. It means that the processing should be as fast as the data cloning. The suitability of the most common open-source implementations and of the most common forensic procedures to fit into the time constraints is investigated experimentally. The authors state that the triage investigation can be carried out in the field and in the laboratory. For the fieldwork, they consider an 8-core workstation, and for the laboratory, they consider a 48-core server. The obtained results show that only a few basic methods, like file metadata extraction, crypto-hashing, and registry extraction, can fit into the time budget in the workstation triage. To increase the performance of file acquisition, Roussev et al. BIB005 implement a Latency-Optimized Target Acquisition (LOTA) scheme. The main idea of this scheme is that the metadata of a filesystem is parsed to make an inverse map from blocks to files before cloning the target. This procedure allows sequential scanning of blocks and reconstruction of the files. The LOTA scheme enables an improvement of a factor of two for files larger than 1 MB and a factor of 100 for smaller files. It is recommended to use the scheme in the forensic environment routinely. The authors advocate employing parallel computations to obtain higher processing rates.

Lim and Lee BIB006 describe a unified evidence container, XeBag, for storing diverse digital evidence from different sources. The XeBag can be used for selective evidence collection and searching on a live system. The file structure of XeBag is based on well-known compression file formats, PKZip and WinRAR. To record forensic metadata, an Extensible Markup Language (XML) document is additionally included for each stored object. The XML format is a popular data exchange format and therefore enables easy access to the data. The authors provide a description of a video surveillance system to show how its digital evidence is stored in and can be retrieved from the unified evidence container XeBag.
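The container idea can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the actual XeBag format) of packing an evidence file together with a per-object XML metadata record into a standard ZIP archive using Python's standard library; the file names, XML element names, and metadata fields are illustrative assumptions.

import hashlib
import zipfile
from datetime import datetime, timezone
from xml.etree import ElementTree as ET

def add_evidence(container_path, evidence_path, collector):
    """Append an evidence file and an XML metadata record to a ZIP-based container."""
    with open(evidence_path, "rb") as f:
        data = f.read()

    # Forensic metadata for the stored object (illustrative fields only).
    meta = ET.Element("evidence")
    ET.SubElement(meta, "source_path").text = evidence_path
    ET.SubElement(meta, "md5").text = hashlib.md5(data).hexdigest()
    ET.SubElement(meta, "size").text = str(len(data))
    ET.SubElement(meta, "collected_at").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(meta, "collector").text = collector

    with zipfile.ZipFile(container_path, "a", zipfile.ZIP_DEFLATED) as z:
        name = evidence_path.replace("\\", "/").lstrip("/")
        z.writestr("objects/" + name, data)                      # the evidence itself
        z.writestr("metadata/" + name + ".xml", ET.tostring(meta))  # its XML description

if __name__ == "__main__":
    add_evidence("case042.xebag.zip", "/var/log/auth.log", collector="examiner-01")

Because the result is an ordinary ZIP archive with XML descriptors, it can be opened with standard tools, which mirrors the accessibility argument made by the authors.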
Grier and Richard III BIB008 introduce a new approach, called sifting collectors, for imaging only selected regions of disk drives. The sifting collectors create a sector-by-sector, bit-for-bit exact image of the disk regions that have forensic value. The forensic image is produced in the Advanced Forensics Format v3 BIB001 and is fully compatible with existing forensic tools. The selection of the regions that have forensic value is based on profiles. The authors do not expect that examiners can prepare the profiles themselves; therefore, the profiles must be created and stored in a library. The sifting collectors first collect the metadata according to the defined profile. Then they interpret the metadata, determine the sectors of interest, and assemble them in disk order. As a result, the method is not suitable for unknown filesystems. If a profile cannot be defined, the proposed alternative is to include a person in the scanning loop to decide what is relevant. The implemented prototype targets the New Technology File System (NTFS) and uses the Master File Table as its primary source. The conducted experiment shows a speed-up of 3 to 13 times in comparison with the forensic image acquisition tool Sleuthkit [24] for the test cases. The absolute values of the runtimes are not provided. The accuracy of the region selection is between 54% and 95% for the considered test cases; faster acquisition comes at the cost of lower accuracy. One important limitation of sifting collectors is their susceptibility to steganography and anti-forensics.

Penrose et al. BIB009 present an approach for fast contraband file detection on the device itself. The approach is based on scanning clusters, calculating their hashes, and comparing them against a database. The cluster size is 4 KiB. A Bloom filter is used to store the cluster hashes of the contraband files. The Bloom filter reduces the size of the database of block-level Message-Digest Algorithm 5 (MD5) hashes by an order of magnitude; however, it costs a small false positive rate. The designed Bloom filter is 1 GiB in size and uses eight hash functions. A larger Bloom filter enables faster access to the hashes of the contraband files. The performed experiment shows that the approach achieves 99.9% accuracy scanning for contraband files in minutes. Some false positives are encountered; however, the scan returns positive results for all the contraband files present. The experiment was conducted in a legitimate computing environment. The authors draw the conclusion that this type of case can be further investigated in a forensically sound environment.
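To make the mechanism concrete, the sketch below shows the general idea of block-level hash triage with a Bloom filter: 4 KiB clusters are read from a device or image, hashed with MD5, and tested against a pre-built filter of known-contraband cluster hashes. The tiny Bloom filter implementation, its parameters, and the optional sampling step are illustrative simplifications and not the design of Penrose et al.

import hashlib

class BloomFilter:
    """Very small Bloom filter: k hash positions derived from the SHA-256 of the item."""
    def __init__(self, size_bits=8 * 1024 * 1024, num_hashes=8):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item).digest()
        for i in range(self.k):
            chunk = digest[4 * i:4 * i + 4]          # four bytes per hash function
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

CLUSTER = 4096  # 4 KiB clusters, as in the surveyed approach

def scan_device(path, known_hashes, sample_every=1):
    """Yield offsets of clusters whose MD5 hash appears in the filter of known content."""
    with open(path, "rb") as dev:
        offset = 0
        while True:
            cluster = dev.read(CLUSTER)
            if len(cluster) < CLUSTER:
                break
            if (offset // CLUSTER) % sample_every == 0:
                if hashlib.md5(cluster).digest() in known_hashes:
                    yield offset
            offset += CLUSTER

In practice the filter would be built offline from a reference corpus and sized to keep the false positive rate acceptable; sampling only a fraction of the clusters is what makes a minutes-long on-device scan feasible.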
Turnbull and Randhawa BIB010 describe an ontology-based approach to assist examiner-led triage. The purpose of the approach is to enable a less technically skilled user to run a triage tool. This is implemented by collecting low-level artifacts and inferring hypotheses from the collected facts. The approach is oriented to automatically deriving events from the base forensic artefacts. A Resource Description Framework (RDF) is used as the basis of the ontology. The representative feature of the approach is that multiple layered ontologies are designed over the same dataset. The description of the ontologies used is vague. The authors find some advantages in RDF; however, they recognize that the Web Ontology Language (OWL) could provide more possibilities. The authors suggest that the approach is applicable to the extraction of information from social networks, though no evidence of such an application can be found in the paper. The system implemented as a proof of concept consists of a knowledge base, data ingestors, reasoners, and a visualiser. The visualiser is hardcoded into the ontology used. Neither test nor real cases are provided. To conclude, the idea of the approach is attractive; however, the description and the development are immature.

Hitchcock et al. BIB011 introduce a Digital Field Triage (DFT) model to offload some of the initial tasks performed in the field by forensic examiners to non-digital evidence specialists. The primary goals of the model are twofold: (i) to increase the efficiency of an investigation by providing digital evidence in a timely manner; and (ii) to decrease the backlog of files at a forensic laboratory. The proposed model is based on Rogers et al. and has four phases: planning, assessment, reporting, and threshold. The DFT model has inherent risks associated with it, namely the management, training, and supporting tools. The management and ongoing training are integral parts of the success of the DFT model, and the tools must support the management. For the DFT to work, there are three fundamental concepts:
1. DFT must work with a supervising examiner
2. DFT must maintain the forensic integrity of the digital evidence
3. A DFT assessment does not replace the forensic analysis
Therefore, the DFT model is not a replacement for full analysis but is part of the overall strategy of handling digital evidence. The first version of the DFT model was implemented in Canada six years ago. The implementation achieved the goals pursued by the model; however, persistent attention needs to be paid to the risks associated with the model.

Leimich et al. BIB012 propose a variation of cloud forensic methodology tailored to a live analysis of Random-Access Memory (RAM) for the Hadoop Distributed File System (HDFS). The aim of the methodology is to minimize the disruption to the data center after a data breach. Hadoop is a Java-implemented system developed for UNIX-based operating systems. It is a master/slave distributed architecture for storing and processing big data. The HDFS consists of DataNodes (slaves), which store the data, and a NameNode (master), which manages the DataNodes. The methodology is oriented to the acquisition of the NameNode contents in order to pinpoint the affected DataNodes. The forensic analysis of the DataNodes is out of the scope of the proposed methodology. The methodology contains nine phases: preparation, live acquisition of the NameNode, initial cluster reconnaissance, checkpointing via a forensic workstation, live artefact analysis, establishing 'suspect' transactions and mapping them to data blocks, targeted dead acquisition of the DataNodes, data reconstruction, and reporting. To test the validity of the methodology, a small HDFS cluster with one master and three slaves was configured with a single scenario of deleted data. The data reconstruction phase is not carried out. The experiment confirms that the methodology enables locating the deleted data blocks. Leimich et al. BIB012 also discuss the ability to implement the proposed methodology in a forensic tool in compliance with the National Institute of Standards and Technology (NIST) Computer Forensic Tool Testing criteria.
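The reconnaissance step of such a methodology can be approximated in a few lines. The sketch below is only a rough illustration, not the authors' procedure: it searches a previously acquired NameNode memory dump for HDFS block identifiers (which typically take the form blk_<number>) that occur near a file name of interest, so that the DataNodes holding those blocks can be targeted for acquisition. The script name and window size are assumptions.

import re
import sys

BLOCK_ID = re.compile(rb"blk_-?\d{4,}")

def find_candidate_blocks(dump_path, filename, window=4096):
    """Scan a raw NameNode RAM dump for block IDs appearing near a file name of interest."""
    blocks = set()
    with open(dump_path, "rb") as dump:
        data = dump.read()          # simplification: a real dump would be read in chunks
    for match in re.finditer(re.escape(filename), data):
        start = max(0, match.start() - window)
        end = min(len(data), match.end() + window)
        for blk in BLOCK_ID.finditer(data, start, end):
            blocks.add(blk.group().decode())
    return sorted(blocks)

if __name__ == "__main__":
    # Usage: python namenode_recon.py namenode_ram.raw deleted_report.csv
    for block_id in find_candidate_blocks(sys.argv[1], sys.argv[2].encode()):
        print(block_id)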
Montasari extends the model of Rogers et al. by dividing all phases into two stages and introducing new sub-tasks into the phases. The single planning activity is assigned to the first stage; the planning should be carried out before attending the site. Montasari considers many models of the forensic process, not just triage models, because, according to the author, the model proposed by Rogers et al. is the only one that exists for the onsite triage process. The author selects activities that are appropriate for the triage process from the other models. Therefore, several sub-tasks are added to the model of the forensic field triage process, and the model is presented in a more detailed and categorized way. Additionally, the model is extended by a set of investigative principles joined into a group under the name of "Overriding Principles", which are an additional contribution of the paper. These principles are as follows:
1. To preserve the chain of custody
2. To maintain an accurate audit trail
3. To maintain restricted access control
4. To maintain effective case management
5. To maintain the information flow

Peersman et al. present an approach that incorporates artificial intelligence and machine learning techniques (support vector machines) to automatically label new Child Sexual Abuse (CSA) media. The approach employs two stages for labelling the unknown CSA files. The first stage uses text categorization techniques to determine whether a file contains CSA content based on its filename. The text categorization applies the following features: predefined keywords, forms of explicit language use, and expressions relating to children and family relations in English, French, German, Italian, Dutch, and Japanese. Additionally, all patterns of two, three, and four consecutive characters are extracted from the filenames. The second stage takes the files from the first stage and examines the visual content of images and audio files. It bases the decision on multi-modal features, which consist of the following representations: colour-correlograms, skin features, visual words and visual pyramids, and audio words for audio files. The conducted experiment shows a false positive rate of 20.3% after the first stage. The second stage reduces the false positive rate to 7.9% for images and 4.3% for videos. The approach is implemented in the iCOP toolkit [30], which performs live forensic analysis on a P2P network. Therefore, the proposed approach is designed for a proactive monitoring activity. To label the most pertinent CSA media candidates, an examiner can log in to the iCOP canvas, which automatically arranges the results. Additionally, the approach can be adapted to the identification of new CSA media during a reactive investigation. The approach is implemented in the Gnutella P2P network.
Quick and Choo BIB013 develop the idea of data reduction introduced in BIB007. The authors present a methodology to reduce the data volume using selective imaging. The methodology suggests selecting only the key files and data. Windows, Apple, and Linux operating systems and their filesystems are considered. A forensic examiner makes the decision to include or exclude particular file types. The decision is based on the relevance to the case of the data contained in these file types. The other possibility considered for reducing the data volume is thumbnailing of video, movie, and picture files; thumbnailing significantly reduces large image files. Once the file types are selected and some thumbnails are loaded into the forensic software, the logical image file is created. The presented methodology can be applied using common digital forensic tools. The methodology is applied to test as well as real-world data, and many results of the experiments that illustrate the viability of the methodology are provided. In general, the observed time is 14 min on average to collect a logical image and process it in the Internet Evidence Finder, whereas the processing of a full forensic image takes 8 h 4 min on average. The presented methodology can be applied to either write-blocked physical media or a forensic image.
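A minimal sketch of the selective-imaging idea is given below: walk a mounted, read-only copy of the target, copy only files whose types are judged relevant, and record a hash for each collected file. The extension list, output layout, and hashing choice are illustrative assumptions rather than the authors' exact selection profile.

import csv
import hashlib
import os
import shutil

# Illustrative selection profile: documents, pictures, mail, browser data, logs.
KEY_EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".pdf", ".jpg", ".png",
                  ".eml", ".pst", ".sqlite", ".log", ".dat"}

def collect_subset(source_root, output_dir, manifest_path):
    """Copy key file types into a logical collection and record MD5 hashes in a manifest."""
    os.makedirs(output_dir, exist_ok=True)
    with open(manifest_path, "w", newline="") as mf:
        writer = csv.writer(mf)
        writer.writerow(["source_path", "md5", "size_bytes"])
        for dirpath, _, filenames in os.walk(source_root):
            for name in filenames:
                if os.path.splitext(name)[1].lower() not in KEY_EXTENSIONS:
                    continue
                src = os.path.join(dirpath, name)
                rel = os.path.relpath(src, source_root)
                dst = os.path.join(output_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                with open(src, "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                shutil.copy2(src, dst)          # preserve timestamps where possible
                writer.writerow([src, digest, os.path.getsize(src)])

if __name__ == "__main__":
    collect_subset("/mnt/evidence_ro", "./logical_subset", "./subset_manifest.csv")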
Methods of Post-Mortem Triage
Marturana and Tacconi BIB004 summarize the research works BIB001 BIB002 delivered at conferences and present a model intended for both live and post-mortem triage using machine learning techniques. The presented model consists of the following four steps: forensic acquisition, feature extraction and normalization, context and priority definition, and data classification. For such a model, there are two main challenges: the definition of crime-related features and the collection of a consistent set of classified samples related to the investigated crimes. The crime-related features are defined for two case studies, copyright infringement and child pornography exchange. Guidelines for using the classifiers are provided. The experiment is mostly directed at the comparison of the classifiers used in the last step of the model. No conclusion is made as to which classifier is best suited for the investigated cases. The presented statistical approach has proven to be valid for ranking the digital evidence related to copyright infringement and child pornography exchange. However, for this approach to be viable, it is necessary to have a deep understanding of the possible relations between the crime under investigation and the potential digital evidence.

McClelland and Marturana BIB006 extend the research presented by Marturana and Tacconi BIB004. The authors investigate the impact of feature manipulation on the accuracy of the classification. Weights are assigned to the features using two approaches, automatic and manual. The automated feature weights are quantified using the Kullback-Leibler measure. The manual weights are determined on the basis of the contribution of surveyed digital forensic experts. The Naïve Bayes classifier is used for the experiment. The only improvement is achieved in the child pornography case.

Horsman et al. BIB007 extend the ideas presented in BIB003 and discuss a Case-Based Reasoning Forensic Triager (CBR-FT), a method for retrieving evidential data based on the locations of digital evidence in past cases. The CBR-FT maintains a knowledge base for gathering previous experience. Each location on the system stored in the knowledge base is assigned an evidence relevance rating (ERR), which is used as a prior probability in the Bayesian model to determine the priority of a particular location for searching. The model enables calculating a primary relevance figure (PRF) for each location. The search is carried out in two stages: in the first stage, only locations with a PRF above 0.5 are used, while the second stage is optional. If the examiner suspects that additional evidence may exist, s/he proceeds to the second stage, during which the examiner focuses on identifying similar patterns in cases stored in the CBR-FT knowledge base. The CBR-FT knowledge base must cover enough cases to reflect its target population correctly; this is the first restriction on the application of the method. The study focuses on fraud offences and constructs a fraud knowledge base from 47 prior investigations. The experiment, which uses precision and recall rates, shows that CBR-FT is more effective than the commercial application EnCase Portable [38]. However, an additional shortcoming of this study is that it focuses only on offences of fraud.
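The prioritization idea can be sketched as follows. The example below is a simplified stand-in for the CBR-FT calculation, not the paper's actual model: each location carries a relevance rating derived from how often it held evidence in past cases, and locations whose resulting figure exceeds 0.5 are searched in the first stage. The knowledge-base entries and the simple frequency-based rating are assumptions for illustration.

# Toy knowledge base: (location, cases_in_which_evidence_was_found, cases_examined).
KNOWLEDGE_BASE = [
    ("C:/Users/*/AppData/Roaming/Thunderbird", 31, 47),
    ("C:/Users/*/Documents",                   40, 47),
    ("C:/Windows/Prefetch",                    12, 47),
    ("C:/Users/*/AppData/Local/Temp",          20, 47),
]

def evidence_relevance_rating(hits, total_cases):
    """Fraction of past cases in which the location contained relevant evidence."""
    return hits / total_cases

def stage_one_locations(knowledge_base, threshold=0.5):
    """Rank locations by relevance and keep those above the stage-one threshold."""
    rated = [(loc, evidence_relevance_rating(h, n)) for loc, h, n in knowledge_base]
    rated.sort(key=lambda item: item[1], reverse=True)
    return [(loc, round(prf, 2)) for loc, prf in rated if prf > threshold]

if __name__ == "__main__":
    for location, prf in stage_one_locations(KNOWLEDGE_BASE):
        print(prf, location)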
Bashir and Khan BIB008 suggest a triage framework oriented to analyzing and resolving an attack. The framework contains the usual steps that belong to a general investigative process, and the term "triage" refers to a certain part of the framework. The main idea of the triage framework is to create a blacklist database that contains a list of previously known attacks with details on how to resolve them. Every attack is characterized by six attributes: identifier, name, description, status, signature, and countermeasures. The key attribute is the signature, a placeholder that stores unique signatures of cyber-attacks in the form of MD5 hashes. If the signature of any of the affected files is found in the blacklist database, then the attack is known, and the blacklist database also holds the answer to how to resolve it. However, if the attack is unknown, there is no triage process; a detailed analysis follows. The blacklist database is updated periodically on the basis of new knowledge and new attacks.

Dalins et al. BIB009 introduce a crawl and search method that can be used for digital triage. The proposed method adapts the Monte Carlo Tree Search strategy used in games to filesystem search; the result is called Monte Carlo Filesystem Search (MCFS). The original random selection is leveraged with non-binary scoring to keep the search guided. Three file scoring methods are introduced, each built on the previous one: a simple scorer, a type-of-interest scorer, and a similarity-based scorer. Other customizations are made to deliver better performance: the integration of domain knowledge to enhance the guided search, the use of the proprietary Microsoft PhotoDNA algorithm to measure the similarity of images, and skin tone detection to identify exposed skin, which is a usual component of child pornography. The experiment is carried out on real data obtained from the Australian Federal Police. The data, provided as forensic images, are related to the possession and online trading of child pornography. The experiment shows that the proposed MCFS is an effective method for large and complex tree structures of the filesystem hierarchy. The search efficiency can be improved by around a third compared to an uninformed depth-first search. However, the integration of domain knowledge and the skin tone detection scoring showed lower results than expected, and additional investigation is necessary to improve these customizations. In general, the proposed method is promising, since many performance limitations arise due to the complicated filesystem design BIB005.
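The sampling idea behind such a crawl can be illustrated with a toy script. The following sketch is not the authors' MCFS algorithm; it only demonstrates the general mechanism of random playouts over the directory tree, non-binary file scoring, and backpropagation of scores so that later playouts favour promising subtrees. The scorer, weights, and parameters are invented for illustration.

import os
import random
from collections import defaultdict

# Illustrative non-binary scorer: file extensions and name keywords of investigative interest.
INTERESTING_EXT = {".jpg": 2.0, ".png": 2.0, ".mp4": 3.0, ".zip": 1.0, ".db": 1.5}
KEYWORDS = ("private", "hidden", "backup")

def score_file(path):
    name = os.path.basename(path).lower()
    score = INTERESTING_EXT.get(os.path.splitext(name)[1], 0.1)
    score += sum(1.0 for kw in KEYWORDS if kw in name)
    return score

def mcfs(root, playouts=500, epsilon=0.3):
    """Toy Monte-Carlo crawl: random playouts with score backpropagation on directories."""
    visits = defaultdict(int)      # directory -> number of playouts passing through it
    rewards = defaultdict(float)   # directory -> accumulated reward
    found = {}                     # file path -> score

    def average(node):
        return rewards[node] / visits[node] if visits[node] else 0.0

    for _ in range(playouts):
        path, current = [root], root
        while os.path.isdir(current):
            try:
                children = [e.path for e in os.scandir(current) if not e.is_symlink()]
            except OSError:
                children = []
            if not children:
                break
            # Explore a random child, or exploit the subtree with the best average score so far.
            if random.random() < epsilon:
                current = random.choice(children)
            else:
                current = max(children, key=average)
            path.append(current)
        reward = 0.0
        if os.path.isfile(current):
            reward = found.setdefault(current, score_file(current))
        for node in path:
            if os.path.isdir(node):
                visits[node] += 1
                rewards[node] += reward
    return sorted(found.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for file_path, file_score in mcfs("/mnt/target_image_ro")[:20]:
        print(round(file_score, 1), file_path)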
Fahdi et al. BIB010 investigate the possibility of utilizing the Self-Organising Map (SOM) technique to automatically cluster notable artefacts that are relevant to the case. A SOM is a neural network that generates a mapping from high-dimensional input data onto a regular two-dimensional array of nodes based upon their similarity in an unsupervised manner. The approach is based on using metadata from several sources, such as the file system, email, and the Internet, as the input to the SOM clustering. Moreover, the approach is oriented towards the investigation of suspects' systems rather than victims' systems. Several pre-processing options are employed before the application of the approach. These options include the creation of the file list, expanding compound files, data carving, an entropy test for encryption, and a known file search. The results of data carving are not included in the SOM file list, since data carving should not be deployed during triage: it tends to generate a lot of data due to high false positive rates BIB005. The experiment shows that the use of the approach as a triage step to verify the existence of notable files allows identifying 38.6% of the notable files at a cost of 1.3% of noise files. It is possible to expand the network size to increase the percentage of notable files, however, at the cost of picking up more noise files. Most of the analysis takes a relatively trivial amount of time for small data sets (several GB); however, it takes an hour on average to process a large data set (0.5 TB). The appeal of the approach is that the only examiner interaction required in this process is the selection of the crime category. With further research and refinement, the approach can be a building block for a triage tool for investigating simpler and technically more trivial cases, which represent a large proportion of forensic examiners' daily activities.
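A compact sketch of metadata clustering with a SOM is shown below. It assumes the third-party minisom package and a file-metadata table that has already been extracted and normalised; the chosen features, grid size, and training parameters are illustrative and do not reproduce the authors' configuration.

import numpy as np
from minisom import MiniSom  # third-party package: pip install minisom

# Illustrative, already-normalised metadata rows: [size, modified_time, created_time, ext_code]
file_paths = ["report.docx", "holiday.jpg", "cache0001.dat", "ledger.xlsx"]
features = np.array([
    [0.12, 0.80, 0.78, 0.10],
    [0.55, 0.81, 0.80, 0.40],
    [0.01, 0.05, 0.05, 0.90],
    [0.15, 0.83, 0.79, 0.15],
])

som = MiniSom(10, 10, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.random_weights_init(features)
som.train_random(features, num_iteration=1000)

# Group files by their winning cell; files in the same cell are candidates for joint review.
clusters = {}
for path, row in zip(file_paths, features):
    clusters.setdefault(som.winner(row), []).append(path)

for cell, members in sorted(clusters.items()):
    print(cell, members)

In an investigation the feature matrix would come from the extracted file-system, email, and Internet metadata, and the examiner would inspect the cells that attract known notable artefacts.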
Triage of Mobile Devices
Mislan et al. BIB001 discuss the onsite triage process for mobile devices. The following steps are suggested for an on-scene triage investigation of mobile devices:
1. Initiate the chain of custody
2. Isolate the device from the network
3. Disable the security features
4. Extract the limited data
5. Review the extracted data
6. Preview the removable storage media
All the steps are discussed in detail. The process of the investigation should be well documented in order to validate the results. Mobile device technicians, who are less experienced than forensic examiners, are expected to perform the onsite triage. The basic requirements for automated onsite triage tools are outlined; in short, they are simplicity of use, an audit trail, and access control. The legal allowances for examining mobile devices in the United States are considered as well.

Walls et al. BIB002 introduce an investigative tool, DEC0DE, for recovering information from mobile phones with unknown storage formats. The main idea is that the data formats from known phone models can be leveraged for recovering information from new phone models. The evaluation focuses on feature phones, i.e., phones with less capability than smartphones. DEC0DE takes the physical image of a mobile phone as input; this is the first limitation of the tool, because image acquisition is outside its scope. The second limitation is the assumption that the owner of the phone has left the data in plaintext format. The next shortcoming is that the extracted results are limited to address books and call log records. The contribution of the paper is a technique for empirical mobile phone data analysis. The technique consists of two steps: removal of known data and recovery of information from the remaining data; the latter step is called the inference process. Block hash filtering accomplishes the first step. The second step adapts techniques from natural language processing, namely context-free grammars, and uses probabilistic finite state machines to encode typical data structures. The Viterbi algorithm is applied twice over the created finite state machines. Finally, a decision tree classifier is used to remove potential false positives. The development is based on the following four models: Nokia 3200B, LG G4015, Motorola v551, and Samsung SGH-T309. The performance of DEC0DE's inference engine is evaluated against two metrics, recall and precision. The experiment conducted on phones that have not been seen previously shows an average recall of 93% and precision of 52% for address books, and an average recall of 97% and precision of 80% for call logs.

Marturana et al. BIB003 discuss the application of machine learning algorithms for the digital triage of mobile phones. The triage stage is introduced between the stages of acquisition and analysis. The extracted data are first preprocessed in order to clean the data, remove redundant attributes, and normalize values. Several classification algorithms are used to show the ability to classify whether a mobile phone was used to commit a pedophilia crime. The attention is devoted to the performance of the classification algorithms. The research is the first step towards the post-mortem forensic triage of mobile phones.
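The classification step of such an approach can be sketched briefly. The example below is a generic illustration using scikit-learn, not the authors' experiment: each phone is described by a handful of invented crime-related features (counts of images, chat contacts, keyword hits, and videos), and a classifier trained on previously labelled devices predicts whether a newly seized phone is likely to be crime-related.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Invented feature rows per device: [num_images, num_chat_contacts, keyword_hits, num_videos]
X = np.array([
    [1200, 45, 9, 60], [300, 10, 0, 5], [2500, 80, 14, 150], [150, 5, 0, 2],
    [900, 30, 6, 40],  [50, 3, 0, 1],   [1800, 60, 11, 90],  [400, 12, 1, 8],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = likely crime-related, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)
clf = SVC(kernel="linear").fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("prediction for a new device:", clf.predict([[1100, 50, 8, 70]]))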
Three recovery engines, DEC0DE BIB002 , Bulk Extractor BIB004 , and Strings, a common UNIX utility for identifying strings of printable characters in a file, are used as the suppliers of the forensic images. Therefore, LIFTR should operate in concert with the recovery engine, as it augments the results obtained by the engine. The basic idea is that the recovery engine returns many items unrelated to the investigated crime, since it does not consider the semantics behind the recovered content. Varma et al. BIB005 explore the filesystem of Android phones and learn the rules by which information is stored. These learnt rules and the feedback from the examiner form the basis for prioritizing information. The examiner labels the information units relevant to the investigated crime at the page level. The labeling is performed iteratively over several rounds. All the information is ranked based on a combination of the examiner's feedback, the actual content, and storage system locality information. To test the validity of the approach, an open-source prototype of LIFTR is implemented. LIFTR's ranking algorithm is evaluated against 13 previously owned Android smart phones; the set includes nine phones with the Yaffs filesystem [46] . To improve the results, the authors wrote a special Yaffs parser to identify expired pages, which are important to information relevance. The experiment shows that LIFTR's ranking improves the score of a standard information retrieval metric from 0.0 to an average of 0.88.

Guido et al. BIB006 introduce a differential acquisition technique that can be used for forensic image acquisition of mobile devices for triage purposes. The advantage of the introduced technique is its runtime, which is several times shorter than that of the compared commercial tools and techniques. The main idea is to use precomputed baseline hashes, so that only data for blocks that do not match the baseline hashes needs to be sent to the server. A prototype named Hawkeye is implemented; Hawkeye uses the MD5 algorithm for hashing. Several other improvements are implemented to reduce the runtime further: threading (10 threads by default) and a special comparison function for the zero block. Hawkeye runs on Android devices in recovery mode. The experiment is performed with 16 GB Samsung Galaxy S3 smartphones (Samsung, Seoul, South Korea). The acquisition techniques of the tool can be applied to other platforms, such as iOS (Apple Inc., Cupertino, CA, USA), as well.
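Both DEC0DE's known-data removal and Hawkeye's differential imaging rest on the same primitive: hashing fixed-size blocks and comparing the digests against a precomputed baseline. The following minimal Python sketch illustrates that primitive only; the block size, the baseline file format, and the helper names are our own illustrative assumptions, not details taken from either tool.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size; the real tools pick their own granularity

def load_baseline(path):
    """Load a set of hex digests of known/baseline blocks (one per line)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def unknown_blocks(image_path, baseline):
    """Yield (offset, data) for blocks whose hash is not in the baseline.

    DEC0DE-style filtering would pass only these blocks to the inference
    engine; Hawkeye-style differential acquisition would transfer only
    these blocks to the acquisition endpoint.
    """
    with open(image_path, "rb") as img:
        offset = 0
        while True:
            block = img.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.md5(block).hexdigest()  # MD5 chosen here because Hawkeye reportedly uses it
            if digest not in baseline and any(block):  # all-zero blocks are skipped as uninteresting
                yield offset, block
            offset += len(block)

if __name__ == "__main__":
    baseline = load_baseline("baseline_hashes.txt")   # hypothetical baseline file
    for off, blk in unknown_blocks("phone_image.dd", baseline):
        print(f"unknown block at offset {off} ({len(blk)} bytes)")
```

In a differential-acquisition setting, only the yielded blocks would be sent over the wire, which is what limits the transfer time when most of the device matches the baseline.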
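The relevance feedback loop in LIFTR can be pictured with a far simpler scoring scheme than the one the authors describe. The sketch below only illustrates the general idea (score recovered pages by the terms they share with examiner-labeled pages and re-rank after each round of feedback); the tokenizer, the scoring formula, and the data structures are our own simplification, not LIFTR's algorithm.

```python
from collections import Counter

def tokenize(page_text):
    """Crude whitespace tokenizer; real systems use page structure as well."""
    return [t for t in page_text.lower().split() if len(t) > 2]

def rank_pages(pages, relevant_ids):
    """Rank recovered pages by term overlap with examiner-labeled pages.

    pages: dict page_id -> recovered text; relevant_ids: ids marked relevant.
    Returns page ids sorted from most to least promising.
    """
    profile = Counter()
    for pid in relevant_ids:
        profile.update(tokenize(pages[pid]))

    def score(pid):
        return sum(profile[t] for t in set(tokenize(pages[pid])))

    return sorted(pages, key=score, reverse=True)

# One feedback round: the examiner labels a page, then the rest is re-ranked.
pages = {
    1: "john smith 555 0101 call log entry",
    2: "jfif exif camera thumbnail data",
    3: "smith meeting tonight call me",
}
print(rank_pages(pages, relevant_ids={1}))  # pages sharing terms with the labeled page float to the top
```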
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> This paper explores the use of purpose-built functions and cryptographic hashes of small data blocks for identifying data in sectors, file fragments, and entire files. It introduces and defines the concept of a ''distinct'' disk sector-a sector that is unlikely to exist elsewhere except as a copy of the original. Techniques are presented for improved detection of JPEG, MPEG and compressed data; for rapidly classifying the forensic contents of a drive using random sampling; and for carving data based on sector hashes. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> Digital triage is a pre-digital-forensic phase that sometimes takes place as a way of gathering quick intelligence. Although effort has been undertaken to model the digital forensics process, little has been done to date to model digital triage. This work discuses the further development of a model that does attempt to address digital triage the Partially-automated Crime Specific Digital Triage Process model. The model itself will be presented along with a description of how its automated functionality was implemented to facilitate model testing. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories. The time it takes to complete these stages is quickly becoming a serious problem. Digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations with the current methodologies. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can assist the examiner with intelligence that can be used to prioritize and lead the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The ever growing capacity of hard drives poses a severe problem to forensic practitioners who strive to deal with digital investigations in a timely manner. Therefore, the on-the-spot digital investigation paradigm is emerging as a new standard to select only that evidence which is important for the case being investigated. In the light of this issue, we propose an incident response tool which is able to speed up the investigation by finding crime-related evidence in a faster way compared with the traditional state-of-the-art post-mortem analysis tools. The tool we have implemented is called Live Data Forensic System (LDFS). LDFS is an on-the-spot live forensic toolkit, which can be used to collect and analyze relevant data in a timely manner and to perform a triage of a Microsoft Windows-based system. 
Particularly, LDFS demonstrates the ability of the tool to automatically gather evidence according to general categories, such as live data, Windows Registry, file system metadata, instant messaging services clients, web browser artifacts, memory dump and page file. In addition, unified analysis tools of ELF provide a fast and effective way to obtain a picture of the system at the time the analysis is done. The result of the analysis from different categories can be easily correlated to provide useful clues for the sake of the investigation. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> The number of forensic examinations being performed by digital forensic laboratories is rising, and the amount of data received for each examination is increasing significantly. At the same time, because forensic investigations are results oriented, the demand for timely results has remained steady, and in some instances has increased. In order to keep up with these growing demands, digital forensic laboratories are being compelled to rethink the overall forensic process. This work dismantles the barriers between steps in prior digital investigation process models and concentrates on supporting key decision points. In addition to increasing efficiency of forensic processes, one of the primary goals of these efforts is to enhance the comprehensiveness and investigative usefulness of forensic results. The purpose of honing digital forensic processes is to empower the forensic examiner to focus on the unique and interesting aspects of their work, allowing them to spend more time addressing the probative questions in an investigation, enabling them to be decision makers rather than tool runners, and ultimately increase the quality of service to customers. This paper describes a method of evaluating the complete forensic process performed by examiners, and applying this approach to developing tools that recognize the interconnectivity of examiner tasks across a digital forensic laboratory. Illustrative examples are provided to demonstrate how this approach can be used to increase the overall efficiency and effectiveness of forensic examination of file systems, malware, and network traffic. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> Bulk data analysis eschews file extraction and analysis, common in forensic practice today, and instead processes data in "bulk," recognizing and extracting salient details ("features") of use in the typical digital forensics investigation. This article presents the requirements, design and implementation of the bulk_extractor, a high-performance carving and feature extraction tool that uses bulk data analysis to allow the triage and rapid exploitation of digital media. Bulk data analysis and the bulk_extractor are designed to complement traditional forensic approaches, not replace them. The approach and implementation offer several important advances over today's forensic tools, including optimistic decompression of compressed data, context-based stop-lists, and the use of a "forensic path" to document both the physical location and forensic transformations necessary to reconstruct extracted evidence. 
The bulk_extractor is a stream-based forensic tool, meaning that it scans the entire media from beginning to end without seeking the disk head, and is fully parallelized, allowing it to work at the maximum I/O capabilities of the underlying hardware (provided that the system has sufficient CPU resources). Although bulk_extractor was developed as a research prototype, it has proved useful in actual police investigations, two of which this article recounts. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Roussev and Quates <s> In many police investigations today, computer systems are somehow involved. The number and capacity of computer systems needing to be seized and examined is increasing, and in some cases it may be necessary to quickly find a single computer system within a large number of computers in a network. To investigate potential evidence from a large quantity of seized computer system, or from a computer network with multiple clients, triage analysis may be used. In this work we first define triage based on the medical definition. From this definition, we describe a PXE-based client-server environment that allows for triage tasks to be conducted over the network from a central triage server. Finally, three real world cases are described in which the proposed triage solution was used. <s> BIB007
Cantrell and Dampier BIB002 present the implementation of the automated phases in the partially-automated digital triage process model BIB003 . The implementation is carried out as a series of Perl scripts that combine original and open source tools. The Linux distribution CAINE [49] , installed to a USB drive, is chosen as the development and testing environment in order to provide a form of boot media and to incorporate full onsite capability. The Windows registry is obtained by using the open source tool RegRipper [50] . The final report is provided in the form of HyperText Markup Language (HTML) pages. The tool is implemented to search the Web browser history of Internet Explorer only. The initial testing is done on a series of 300 GB drives. The runtimes are not provided.

Lim et al. BIB004 introduce a Live Data Forensic System (LDFS) designed to collect and analyze live data on Microsoft Windows-based systems. The LDFS consists of two separate tools, LDFS collection and LDFS analysis. The LDFS collection system gathers volatile and non-volatile data such as the memory dump, page file, web browser artifacts, instant messaging clients' data, Windows Registry, and file system metadata. The distinctive feature of the LDFS collection system is that it can decode the encoded chat logs of the BuddyBuddy, Yahoo, and MissLee messenger clients. The physical memory dump and the dump of all active processes are performed by means of third-party applications; these applications are chosen because they make the fewest changes to the investigated system. The XML collection report holds all the collected items with their MD5 and Secure Hash Algorithm 1 (SHA1) hash values. The LDFS collection system is tested against five different types of Windows OSs (Microsoft, Redmond, WA, USA). Several experiments are conducted to test the performance of the system; the largest collection time does not exceed 49 min. The LDFS analysis module has the capability to analyze all the collected data; however, it has not been fully implemented yet. Lim et al. BIB004 argue that the input data and its trustworthiness are of paramount importance in live forensic analysis. However, it is not clear whether any defense against subversion of the collection process is implemented in the LDFS collection system.

Casey et al. BIB005 discuss the need for and possibilities of honing the digital forensic processes to obtain timely results. Many tasks in the forensic processes are not resource limited, and rethinking the overall organization of the forensic processes can yield greater improvements than considering the tasks separately. Therefore, improving the complete forensic process is oriented towards two areas, namely, dismantling the barriers between the tasks of the forensic process and providing useful information to support the key decisions. The efforts discussed in this paper focus on processing data from three primary sources: (i) filesystems, (ii) malware, and (iii) network traffic. Many triage tools analyze filesystems, and the analysis reveals that the main bottleneck in this process is the disk Input/Output (I/O) speed. Using the results of the analysis, Casey et al. BIB005 provide the following guidelines for triage and forensic data extraction tools to improve efficiency: 1. A tool can simultaneously deliver data into multiple extraction operations and create the forensic duplicate 2. A tool can store extracted information in both the XML format and an SQLite database 3.
A tool should provide a user-friendly interface to facilitate the viewing, sorting, and classification of files. Additionally, tool developers should consult their customers about each step of the development. For malware, the main suggestion is that the tool should first determine whether the file has been seen before. Next, the automatic malware processing tool developed by the Defense Cyber Crime Center (DC3) is presented as an illustrative example. However, no suggestions are provided for the network traffic tools; the suite of tools PCAPFAST, developed by DC3, is provided as an example of a suitable network traffic tool.

Garfinkel BIB006 extends the research work presented by Garfinkel et al. BIB001 and introduces a forensic tool, bulk_extractor, devoted to the initial part of an investigation. The bulk_extractor is based on the analysis of bulk data: it scans raw disk images or any data dump for useful patterns (emails, credit card numbers, Internet Protocol (IP) addresses, etc.). It uses multiple scanners tailored to specific patterns, and heuristics to reduce false positives and noise. The identified patterns are stored in feature files. When processing is complete, the bulk_extractor creates a feature histogram for each feature file (a minimal sketch of this kind of bulk pattern scanning is given below). To improve the speed of processing, the bulk_extractor takes advantage of available multi-core capabilities. It also detects and decompresses compressed data, and a lot of attention is devoted to this decompression. The feature is not usual for triage tools, because it consumes a lot of processing time; however, it is very useful for a forensic tool. The performance of the bulk_extractor is compared to the commercial tool EnCase. The results indicate that the bulk_extractor extracts email addresses from a 42 GB forensic disk image 10 times faster than EnCase, taking 44 min. The processing time of the bulk_extractor is between 1 and 8 h per piece of media, depending on the size and complexity of the subject data. This processing time does not meet the triage requirements. The bulk_extractor is successfully applied to 250 GB hard disk drives in two real cases; the processing time is 2.5 h for the first case and 2 h for the second. In general, the bulk_extractor is nice to have; however, it is not a triage tool.

Koopmans and James BIB007 introduce an automated network triage (ANT) solution designed for a client-server environment. The purpose of the solution is to sort the analyzed systems by their likely relevance to the investigated case. The ANT is developed on the basis of the Preboot eXecution Environment (PXE) protocol and is composed of a network server that runs various services, and the clients, which are the systems to be analyzed, in a physically isolated network. The ANT server boots a suspected computer over the network. The authors provide many technical details that explain the specific steps: what software to use and how to boot the seized computers. The interface is developed in the Personal Home Page (PHP) programming language. The data for triage are as follows:
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Remote live forensics has recently been increasingly used in order to facilitate rapid remote access to enterprise machines. We present the GRR Rapid Response Framework (GRR), a new multi-platform, open source tool for enterprise forensic investigations enabling remote raw disk and memory access. GRR is designed to be scalable, opening the door for continuous enterprise wide forensic analysis. This paper describes the architecture used by GRR and illustrates how it is used routinely to expedite enterprise forensic investigations. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> In enterprise environments, digital forensic analysis generates data volumes that traditional forensic methods are no longer prepared to handle. Triaging has been proposed as a solution to systematically prioritize the acquisition and analysis of digital evidence. We explore the application of automated triaging processes in such settings, where reliability and customizability are crucial for a successful deployment. We specifically examine the use of GRR Rapid Response (GRR) - an advanced open source distributed enterprise forensics system - in the triaging stage of common incident response investigations. We show how this system can be leveraged for automated prioritization of evidence across the whole enterprise fleet and describe the implementation details required to obtain sufficient robustness for large scale enterprise deployment. We analyze the performance of the system by simulating several realistic incidents and discuss some of the limitations of distributed agent based systems for enterprise triaging. <s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Digital forensic triage is poorly defined and poorly understood. The lack of clarity surrounding the process of triage has given rise to legitimate concerns. By trying to define what triage actually is, one can properly engage with the concerns surrounding the process. This paper argues that digital forensic triage has been conducted on an informal basis for a number of years in digital forensic laboratories, even where there are legitimate objections to the process. Nevertheless, there are clear risks associated with the process of technical triage, as currently practised. The author has developed and deployed a technical digital forensic previewing process that negates many of the current concerns regarding the triage process and that can be deployed in any digital forensic laboratory at very little cost. This paper gives a high-level overview of how the system works and how it can be deployed in the digital forensic laboratory. 
<s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Considering that a triage related task may essentially make-or-break a digital investigation and the fact that a number of triage tools are freely available online but there is currently no mature framework for practically testing and evaluating them, in this paper we put three open source triage tools to the test. In an attempt to identify common issues, strengths and limitations we evaluate them both in terms of efficiency and compliance to published forensic principles. Our results show that due to the increased complexity and wide variety of system configurations, the triage tools should be made more adaptable, either dynamically or manually (depending on the case and context) instead of maintaining a monolithic functionality. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Bulk data analysis eschews file extraction and analysis, common in forensic practice today, and instead processes data in "bulk," recognizing and extracting salient details ("features") of use in the typical digital forensics investigation. This article presents the requirements, design and implementation of the bulk_extractor, a high-performance carving and feature extraction tool that uses bulk data analysis to allow the triage and rapid exploitation of digital media. Bulk data analysis and the bulk_extractor are designed to complement traditional forensic approaches, not replace them. The approach and implementation offer several important advances over today's forensic tools, including optimistic decompression of compressed data, context-based stop-lists, and the use of a "forensic path" to document both the physical location and forensic transformations necessary to reconstruct extracted evidence. The bulk_extractor is a stream-based forensic tool, meaning that it scans the entire media from beginning to end without seeking the disk head, and is fully parallelized, allowing it to work at the maximum I/O capabilities of the underlying hardware (provided that the system has sufficient CPU resources). Although bulk_extractor was developed as a research prototype, it has proved useful in actual police investigations, two of which this article recounts. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. 
Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> This paper describes a five-phase, multi-threaded bootable approach to digital forensic triage, which is implemented in a product called Forensics2020. The first phase collects metadata for every logical file on the hard drive of a computer system. The second phase collects EXIF camera data from each image found on the hard drive. The third phase analyzes and categorizes each file based on its header information. The fourth phase parses each executable file to provide a complete audit of the software applications on the system; a signature is generated for every executable file, which is later checked against a threat detection database. The fifth and final phase hashes each file and records its hash value. All five phases are performed in the background while the first responder interacts with the system. This paper assesses the forensic soundness of Forensics2020. The tool makes certain changes to a hard drive that are similar to those made by other bootable forensic examination environments, although the changes are greater in number. The paper also describes the lessons learned from developing Forensics2020, which can help guide the development of other forensic triage tools. <s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Purpose – The purpose of this paper is to propose a novel approach that automates the visualisation of both quantitative data (the network) and qualitative data (the content) within emails to aid the triage of evidence during a forensics investigation. Email remains a key source of evidence during a digital investigation, and a forensics examiner may be required to triage and analyse large email data sets for evidence. Current practice utilises tools and techniques that require a manual trawl through such data, which is a time-consuming process. Design/methodology/approach – This paper applies the methodology to the Enron email corpus, and in particular one key suspect, to demonstrate the applicability of the approach. Resulting visualisations of network narratives are discussed to show how network narratives may be used to triage large evidence data sets. Findings – Using the network narrative approach enables a forensics examiner to quickly identify relevant evidence within large email data sets. Within... <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. 
A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Imagine the following scenario: an inexperienced law enforcement officer enters a crime scene and – on finding a USB key on a potential suspect – inserts it into a nearby Windows desktop computer hoping to find some information which may help an ongoing investigation. The desktop crashes and all data on the USB key and on the Windows desktop has now been potentially compromised. However, the law enforcement officer in question is using a Virtual Crime Scene Simulator and has just learned a valuable lesson. This paper discusses the development and initial user evaluation of a Virtual Crime Scene Simulator that includes the ability to interact with and perform live triage of commonly-found digital devices. Based on our experience of teaching digital evidence handling, we aimed to create a realistic virtual environment that integrates many different aspects of the digital and physical crime scene processing, such as physical search activities, triage of digital devices, note taking and form filling, interaction with suspects at the scene, as well as search team training. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> A list of keywords to search for 2. A list of preferred file names or extensions 3. A list of preferred directories 4. A hash database that contains the hashes of files of interest 5. A hash database index file <s> Abstract Large email data sets are often the focus of criminal and civil investigations. This has created a daunting task for investigators due to the extraordinary size of many of these collections. Our work offers an interactive visual analytic alternative to the current, manually intensive methodology used in the search for evidence in large email data sets. These sets usually contain many emails which are irrelevant to an investigation, forcing investigators to manually comb through information in order to find relevant emails, a process which is costly in terms of both time and money. To aid the investigative process we combine intelligent preprossessing, a context aware visual search, and a results display that presents an integrated view of diverse information contained within emails. This allows an investigator to reduce the number of emails that need to be viewed in detail without the current tedious manual search and comb process. <s> BIB010
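To make the bulk data analysis idea behind bulk_extractor concrete, the short sketch below scans a raw image for two kinds of features (email addresses and IPv4-looking strings), writes them to feature files with their offsets, and builds a histogram. It is a toy, single-threaded illustration of the concept only; the regular expressions, file names, and chunk size are our own assumptions, and bulk_extractor's real scanners, decompression, and parallelism are far more elaborate.

```python
import re
from collections import Counter

# Very rough patterns for illustration only; real scanners are much more careful.
FEATURES = {
    "email": re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "ipv4": re.compile(rb"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),
}

def scan_image(image_path, chunk_size=16 * 1024 * 1024):
    """Scan a raw image in chunks, record (offset, feature) hits, and build histograms.

    Note: features that span chunk boundaries are missed in this toy version.
    """
    histograms = {name: Counter() for name in FEATURES}
    with open(image_path, "rb") as img, \
         open("email.txt", "w") as email_out, open("ipv4.txt", "w") as ip_out:
        outputs = {"email": email_out, "ipv4": ip_out}
        base = 0
        while True:
            chunk = img.read(chunk_size)
            if not chunk:
                break
            for name, pattern in FEATURES.items():
                for m in pattern.finditer(chunk):
                    value = m.group().decode("ascii", "replace")
                    outputs[name].write(f"{base + m.start()}\t{value}\n")
                    histograms[name][value] += 1
            base += len(chunk)
    return histograms

if __name__ == "__main__":
    hist = scan_image("disk_image.dd")  # hypothetical image path
    print(hist["email"].most_common(10))
```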
Three real cases, assessing the likelihood that suspicious computers actually pose a threat, are investigated very successfully; the runtimes of the three cases are within 10 min. The runtimes are very short; however, it is not clear why they are so short, and an explanation is not provided. Moreover, Horsman et al. BIB006 state that hashing and keyword searching approaches can limit the effectiveness of digital triage because they are too restrictive. The limitations of the ANT solution are the following: there is no possibility to boot from an external source, and encrypted data cannot be analysed.

Moser and Cohen BIB002 discuss the use of triage in quite a different context than the traditional criminal case investigation: incident response. The authors consider the use of the GRR Rapid Response (GRR) system, an agent-based open source distributed enterprise forensics system. Moser and Cohen BIB002 give an overview of the components of the GRR system; a more detailed description is available elsewhere BIB001 . This method lowers the total time cost of triage analysis by distributing the task to the system agents. The main attention is directed towards the reliability of the agents. Constant monitoring of the resources used, memory and the central processing unit (CPU), ensures the reliability of the agents. The investigation consists of three phases: planning, collection, and analysis. The experiment is carried out on many corporate workstations and laptops on which the GRR agents are installed. The goal of the experiment is to examine representative cases of a typical enterprise investigation performed by an incident response team. Four cases are analyzed. The majority of agents pick up artifacts in the first few minutes after the start. Nevertheless, the GRR continues running for up to 24 h so that, if missing machines come back online later, the artifacts will still be detected. The case of the autorun key comparison required an extensive manual analysis; therefore, improvement is necessary for such cases.

Shaw and Browne BIB003 argue that digital forensic triage has been conducted on an informal basis for several years. The authors introduce the concepts of administrative and technical triage. Administrative triage assesses the circumstances of a new case before starting an examination of the evidence. Shaw and Browne BIB003 discuss and summarize the weaknesses of digital triage. Enhanced previewing is suggested as an alternative to digital triage. The Linux forensic distribution CAINE [49] , installed on a compact disc (CD), is chosen as a base for the implementation. The bootable CD is remastered to include existing open source forensic tools and to add new analysis software. A high-level overview of how the system works is presented. The possibilities of deploying the enhanced previewing in the digital forensic laboratory are analyzed. The weaknesses of the enhanced previewing are as follows: the case management becomes more complicated, and the system is not suitable for field use at all. The authors doubt "whether the Enhanced Previewing process is a subset of technical triage or whether it is a distinct process only loosely related to technical triage". We are inclined to state that the enhanced previewing is not a subset of technical triage, because the processing time of the enhanced previewing would be quite long. We base our conclusion on the provided description of the system. Shiaeles et al.
BIB004 review three open source triage tools and suggest ways to improve them. The TriageIR, TR3Secure, and Kludge tools are tested on various Microsoft Windows versions. There is currently no mature framework for practically testing and evaluating triage tools; the authors do not suggest such a framework either, and instead evaluate the tools as best they can. The first principle to assess is access to volatile data. The next principle to assess is the adherence of the tools to forensic principles ensuring the admissibility of the collected evidence in court. An experiment shows that no single considered tool is better than the others; all the tools have their strengths and weaknesses. The preferred solution is to have several tools and maintain a profile of the tool capabilities. The recommendations for improving the tools are as follows: 1. The tools should be made more adaptable, either dynamically or manually 2. Disabling Prefetch on Windows systems will result in fewer system alterations 3. The tools should record and undo all registry changes that they perform on the examined system 4. The tools should collect the Internet activity artifacts of all known browsers

Woods et al. present open source software for automated analysis and visualization of disk images created as part of the BitCurator project [57] . The goal of the presented software is to assist in triage tasks. The data for analysis are obtained from the open source forensic tools fiwalk [58] and bulk_extractor BIB005 . The fiwalk tool recognizes and interprets the content of filesystems contained in disk images and produces an XML report. The bulk_extractor tool reads the raw contents of the disk image and reports on various features. The BitCurator reporting tools produce Portable Document Format (PDF) reports on the filesystem and on each feature separately. If the input datasets are large, it is possible to configure the reporting tools to produce the report for a subset of the filesystem or a subset of features. The time required to process a given disk image with the forensic tools fiwalk and bulk_extractor is within the range of tens of minutes. The limiting factor in terms of time is the BitCurator reporting tools, which may have to process an extremely large XML filesystem report and text feature reports. The BitCurator project freely distributes these reporting tools in a variety of ways for practitioners and researchers to use.

Baggili et al. BIB007 present a five-phase, multi-threaded bootable tool, Forensics2020, for forensic triage. The tool is loaded from a bootable Windows Pre-installation Environment using a USB stick. Phases proceed in sequence; however, while the tool is working, the examiner can interact with it to see the results up to that point and to request certain types of data. The first phase collects logical files and their metadata. The second phase analyses every image for Exchangeable Image File Format (EXIF) data. The third phase explores and classifies each file based on its header. The fourth phase parses executable files for audit and threat purposes. The fifth phase hashes each file and takes the longest time of all the phases (a minimal sketch of this kind of metadata collection and file hashing is given later in this subsection). The experiment is carried out to assess the efficacy and forensic soundness of Forensics2020. In sum, 26.33 TB of data from 57 computers are analyzed. The total time required to complete the process is 10,356 s.
The tool makes certain changes to the hard drive; however, the changes are greater in number than those of similar Linux-based tools. Two lessons can be learned from the development of Forensics2020. Firstly, a multi-threaded, multi-stage tool allows the examiner to interact with the evidence while the system is performing the forensic processing. Secondly, the mounting of the hard drive by a bootable tool influences the perception of forensic soundness.

Haggerty et al. BIB008 propose an approach to automate the visualization of quantitative and qualitative email data to assist the triage of digital evidence during a forensic investigation. The quantitative information, which is retrieved from the email, refers to the network events and actor relationships; the qualitative information refers to the bodies of the emails themselves. The authors have developed the TagSNet software to implement the proposed approach. The software provides two views: a network of the actors and a tag view of the keywords found in the email bodies (a small sketch of building such an actor network from email data is given at the end of this subsection). Both views are interactive in that the forensic examiner may move the actors and text around. The experiment is carried out on the Enron email data. The average time to process and visualize the email data is about 10 min. However, the visualization is not aimed at answering the investigative questions; it only helps the forensic examiner to triage email data more quickly than in the manual mode.

Vidas et al. describe a free forensic tool, OpenLV, which can be deployed in the field and in the laboratory. It is noteworthy that over the past years it has been used under the name "LiveView". The interface of the tool is oriented to examiners with little training. OpenLV asks for configuration information and creates a virtual machine out of a forensic image or physical disk. The virtual machine enables booting up the image and gaining an interactive environment without modifying the underlying image. The tool natively supports only the dd/raw image format; other formats require third party software that can be integrated into the tool. The tool is Windows centric, and limited Linux support has been added. Additionally, OpenLV helps to remove the barrier of passwords for Windows users. The authors claim that "OpenLV aims to meet the demand for an easy-to-use triage tool"; however, neither an example nor a reference is provided for how OpenLV is used for triage purposes.

Conway et al. BIB009 discuss the development of a Virtual Crime Scene Simulator (VCSS) that can perform a live triage of digital devices. Training is important for law enforcement officers; therefore, the tool has a clear field of application. The VCSS is an open source project implemented as a game, with Unity3D [63] chosen as the base platform. The virtual environment includes a three-dimensional (3D) representation of a house with four rooms, a hallway, and outside scenery. The crime scene contains the following items: furniture, various hardware devices, and an avatar for interrogation. The following in-game actions are possible: live examination of the various digital devices, interrogation of the avatar, and other actions related to the crime scene. Full device interaction is implemented in the Windows version only. The trainer can add new logic by modifying the existing JavaScript. Law enforcement officers from a developing country used the VCSS for training, and the participants rated the educational value of the application highly.
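The metadata-collection and hashing phases of a tool such as Forensics2020 can be approximated by a small multi-threaded directory walk. The sketch below is a generic illustration under our own assumptions (SHA-256, a thread pool, and a hypothetical mount point "/mnt/evidence"); it is not Forensics2020's implementation.

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def file_record(path):
    """Collect basic metadata and a hash for one file (phase 1 and phase 5 style)."""
    try:
        st = os.stat(path)
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return {"path": path, "size": st.st_size,
                "mtime": st.st_mtime, "sha256": h.hexdigest()}
    except OSError as err:            # unreadable files are reported, not fatal
        return {"path": path, "error": str(err)}

def walk_and_hash(root, workers=8):
    """Walk the mounted evidence tree and hash files with a pool of worker threads."""
    paths = (os.path.join(d, name)
             for d, _, files in os.walk(root) for name in files)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(file_record, paths)

if __name__ == "__main__":
    for rec in walk_and_hash("/mnt/evidence"):   # hypothetical mount point
        print(rec)
```

Because the records are produced incrementally, a front end could already display and sort partial results while the walk is still running, which is the interaction model the multi-threaded, multi-stage design aims for.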
Hegarty and Haggerty present the SlackStick approach to identifying files of interest for the forensic examiner on a live system. The approach is based on file signatures. To create the signature of a file, a block within the original file is chosen, which may be from anywhere within the file, except for the first and the last blocks. Several predetermined bytes of this block are chosen to represent the file. The number of bytes can be chosen by balancing the tradeoff between false positives and false negatives: a higher number of bytes decreases the likelihood of false positives. The SlackStick software, written in Python under the Slax operating system, runs from an external device. SlackStick reads the memory blocks on the target machine sequentially to generate block signatures for comparison with the signature library (a minimal sketch of this signature check is given at the end of this subsection). If a match is found, a report that includes the matched signature and the physical location of the file on the storage media is generated. The authors conducted an experiment in which it took a dozen seconds to analyze a 1 GB partition containing 2194 JPEG images. Signatures are generated by selecting 11 bytes within the second block of each target file. Neither false positives nor false negatives are found, and as the number of signatures increases, no measurable impact on performance is observed.

Further, van Beek et al. introduce the distributed digital forensic system HANSKEN [67] , the successor of the operational digital forensic system XIRAF . The goal of HANSKEN is to speed up the processing of big data. The three forensic drivers for the system are as follows: minimization of the case lead time, maximization of the trace coverage, and specialization of the people involved. These drivers justify the building of a distributed big data forensic platform. To mitigate the threats associated with a big data platform, the development of HANSKEN is based on eight design principles, enumerated in the order of priority: 1. Security, 2. Privacy, 3. Transparency, 4. Multi tenancy, 5. Future proof, 6. Data retention, 7. Reliability, and 8. High availability. The first three principles are sociological, while the other five are business principles and define the system boundaries. The system uses its own forensic image format. The authors justify the need for their own format; however, it could be a limitation of the system, especially for future development. HANSKEN stores the data compressed and encrypted; the encryption ensures restricted access to the data. The process of extracting data from a forensic image starts as soon as the first bits of the image are uploaded to the system. Such an approach reflects a sound organization of the forensic processes that improves the efficiency of the forensic investigation. The authors admit that triage is a valuable approach for ordering the processing of images, not for leaving images unprocessed. Such a form of triage is planned to be included in HANSKEN. The system is implemented on the Hadoop realization of MapReduce. HANSKEN was planned to be put into production at the end of 2015.

Koven et al. BIB010 further explore and develop the idea of email data visualization BIB008 . The authors present a visual email search tool, InVEST. Firstly, the tool preprocesses the email data to create indexes for various email fields. Duplicate information and junk data are excluded from indexing.
Next, the user starts the search process with defined keywords. The search results are presented in five different visual views. The visual views enable better understanding and interpretation of the search results as well as finding the relationships between the search entities. The diverse views show different relationships between search entities and present the contextual information found within these results. All the views support refining the search results using filtering and expanding; the process of filtering and expanding is iterated until the search is successful. An experiment is carried out on the Enron email data set, and two case studies are successfully investigated. Koven et al. BIB010 used the term "triage" in the title of the paper. The term "triage" is used in the sense of a tool that allows selecting, from the whole email set, a subset of the emails related to a particular subject. However, the time spent on the selection can be quite long, as the process of selecting the email subset is interactive and heavily involves the user. The authors present an example that "the time to make the discovery and exploration including the skimming of at least 30 of the discovered emails was approximately 1 h". Therefore, the use of the tool in the triage process is quite unlikely, unless the data captured is only in the form of email.
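The email-oriented triage aids discussed above (TagSNet and InVEST) both start from the same raw material: sender and recipient relationships and the words in message bodies. The sketch below shows, under our own simplifying assumptions, how such an actor network and a keyword tally might be derived from an mbox file with Python's standard library; it mirrors neither tool's actual design, and the mailbox file name is hypothetical.

```python
import mailbox
from collections import Counter, defaultdict

def build_views(mbox_path):
    """Return (actor_network, keyword_counts) derived from an mbox file."""
    network = defaultdict(Counter)   # sender -> Counter of recipients
    keywords = Counter()
    for msg in mailbox.mbox(mbox_path):
        sender = (msg.get("From") or "").strip()
        recipients = [r.strip() for r in (msg.get("To") or "").split(",") if r.strip()]
        for r in recipients:
            network[sender][r] += 1
        body = msg.get_payload(decode=True) or b""   # multipart bodies are skipped here
        for word in body.decode("utf-8", "replace").lower().split():
            if word.isalpha() and len(word) > 3:
                keywords[word] += 1
    return network, keywords

if __name__ == "__main__":
    net, words = build_views("enron_subset.mbox")   # hypothetical mailbox file
    print(words.most_common(20))                    # candidate tag view terms
    for sender, counts in list(net.items())[:5]:    # strongest actor links
        print(sender, counts.most_common(3))
```

The two returned structures correspond roughly to the two kinds of views the papers describe: the network dictionary feeds an actor graph, while the keyword counts feed a tag or search view.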
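The SlackStick check itself is simple enough to sketch in a few lines. The fragment below reads a device or image block by block and compares a few fixed byte positions of each block against a signature library; the block size, the byte offsets, and the library format are illustrative assumptions rather than the authors' exact parameters.

```python
BLOCK_SIZE = 4096
# Eleven assumed byte positions within a block; the real offsets are predetermined by the examiner.
OFFSETS = (17, 63, 128, 255, 511, 777, 1024, 1500, 2048, 3000, 4000)

def block_signature(block):
    """Concatenate the bytes at the predetermined offsets within a block."""
    return bytes(block[o] for o in OFFSETS if o < len(block))

def scan_device(device_path, signature_library):
    """Report blocks whose signature matches a known file of interest."""
    with open(device_path, "rb") as dev:
        block_no = 0
        while True:
            block = dev.read(BLOCK_SIZE)
            if not block:
                break
            sig = block_signature(block)
            if sig in signature_library:
                print(f"match {signature_library[sig]} at block {block_no} "
                      f"(offset {block_no * BLOCK_SIZE})")
            block_no += 1

if __name__ == "__main__":
    # signature bytes -> label, built beforehand from the second block of each target file
    library = {}
    scan_device("partition.img", library)   # hypothetical image path
```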
Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Forensic study of mobile devices is a relatively new field, dating from the early 2000s. The proliferation of phones (particularly smart phones) on the consumer market has caused a growing demand for forensic examination of the devices, which could not be met by existing Computer Forensics techniques. As a matter of fact, Law enforcement are much more likely to encounter a suspect with a mobile device in his possession than a PC or laptop and so the growth of demand for analysis of mobiles has increased exponentially in the last decade. Early investigations, moreover, consisted of live analysis of mobile devices by examining phone contents directly via the screen and photographing it with the risk of modifying the device content, as well as leaving many parts of the proprietary operating system inaccessible. The recent development of Mobile Forensics, a branch of Digital Forensics, is the answer to the demand of forensically sound examination procedures of gathering, retrieving, identifying, storing and documenting evidence of any digital device that has both internal memory and communication ability [1]. Over time commercial tools appeared which allowed analysts to recover phone content with minimal interference and examine it separately. By means of such toolkits, moreover, it is now possible to think of a new approach to Mobile Forensics which takes also advantage of "Data Mining" and "Machine Learning" theory. This paper is the result of study concerning cell phones classification in a real case of pedophilia. Based on Mobile Forensics "Triaging" concept and the adoption of self-knowledge algorithms for classifying mobile devices, we focused our attention on a viable way to predict phone usage's classifications. Based on a set of real sized phones, the research has been extensively discussed with Italian law enforcement cyber crime specialists in order to find a viable methodology to determine the likelihood that a mobile phone has been used to commit the specific crime of pedophilia, which could be very relevant during a forensic investigation. <s> BIB001 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don't match between call logs and address book entries on the same phone. 
<s> BIB002 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> A novel concept for improving the trustworthiness of results obtained from digital investigations is presented. Case Based Reasoning Forensic Auditor (CBR-FA) is a method by which results from previous digital forensic examinations are stored and reused to audit current digital forensic investigations. CBR-FA provides a method for evaluating digital forensic investigations in order to provide a practitioner with a level of reassurance that evidence that is relevant to their case has not been missed. The structure of CBR-FA is discussed as are the methodologies it incorporates as part of its auditing functionality. <s> BIB003 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Over the past few years, the number of crimes related to the worldwide diffusion of digital devices with large storage and broadband network connections has increased dramatically. In order to better address the problem, law enforcement specialists have developed new ideas and methods for retrieving evidence more effectively. In accordance with this trend, our research aims to add new pieces of information to the automated analysis of evidence according to Machine Learning-based “post mortem” triage. The scope consists of some copyright infringement court cases coming from the Italian Cybercrime Police Unit database. We draw our inspiration from this “low level” crime which is normally sat at the bottom of the forensic analyst's queue, behind higher priority cases and dealt with the lowest priority. The present work aims to bring order back in the analyst's queue by providing a method to rank each queued item, e.g. a seized device, before being analyzed in detail. The paper draws the guidelines for drive-under-triage classification (e.g. hard disk drive, thumb drive, solid state drive etc.), according to a list of crime-dependent features such as installed software, file statistics and browser history. The model, inspired by the theory of Data Mining and Machine Learning, is able to classify each exhibit by predicting the problem dependent variable (i.e. the class) according to the aforementioned crime-dependent features. In our research context the “class” variable identifies with the likelihood that a drive image may contain evidence concerning the crime and, thus, the associated item must receive an high (or low) ranking in the list. <s> BIB004 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> There are two main reasons the processing speed of current generation digital forensic tools is inadequate for the average case: a) users have failed to formulate explicit performance requirements; and b) developers have failed to put performance, specifically latency, as a top-level concern in line with reliability and correctness. In this work, we formulate forensic triage as a real-time computation problem with specific technical requirements, and we use these requirements to evaluate the suitability of different forensic methods for triage purposes. Further, we generalize our discussion to show that the complete digital forensics process should be viewed as a (soft) real-time computation with well-defined performance requirements. 
We propose and validate a new approach to target acquisition that enables file-centric processing without disrupting optimal data throughput from the raw device. We evaluate core forensic processing functions with respect to processing rates and show their intrinsic limitations in both desktop and server scenarios. Our results suggest that, with current software, keeping up with a commodity SATA HDD at 120 MB/s requires 120-200 cores. <s> BIB005 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> This paper addresses the increasing resources overload being experienced by law enforcement digital forensics units with the proposal to introduce triage template pipelines into the investigative process, enabling devices and the data they contain to be examined according to a number of prioritised criteria. <s> BIB006 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The global diffusion of smartphones and tablets, exceeding traditional desktops and laptops market share, presents investigative opportunities and poses serious challenges to law enforcement agencies and forensic professionals. Traditional Digital Forensics techniques, indeed, may be no longer appropriate for timely analysis of digital devices found at the crime scene. Nevertheless, dealing with specific crimes such as murder, child abductions, missing persons, death threats, such activity may be crucial to speed up investigations. Motivated by this, the paper explores the field of Triage, a relatively new branch of Digital Forensics intended to provide investigators with actionable intelligence through digital media inspection, and describes a new interdisciplinary approach that merges Digital Forensics techniques and Machine Learning principles. The proposed Triage methodology aims at automating the categorization of digital media on the basis of plausible connections between traces retrieved (i.e. digital evidence) and crimes under investigation. As an application of the proposed method, two case studies about copyright infringement and child pornography exchange are then presented to actually prove that the idea is viable. The term ''feature'' will be regarded in the paper as a quantitative measure of a ''plausible digital evidence'', according to the Machine Learning terminology. In this regard, we (a) define a list of crime-related features, (b) identify and extract them from available devices and forensic copies, (c) populate an input matrix and (d) process it with different Machine Learning mining schemes to come up with a device classification. We perform a benchmark study about the most popular mining algorithms (i.e. Bayes Networks, Decision Trees, Locally Weighted Learning and Support Vector Machines) to find the ones that best fit the case in question. Obtained results are encouraging as we will show that, triaging a dataset of 13 digital media and 45 copyright infringement-related features, it is possible to obtain more than 93% of correctly classified digital media using Bayes Networks or Support Vector Machines while, concerning child pornography exchange, with a dataset of 23 cell phones and 23 crime-related features it is possible to classify correctly 100% of the phones. In this regards, methods to reduce the number of linearly independent features are explored and classification results presented. 
<s> BIB007 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The volume of digital forensic evidence is rapidly increasing, leading to large backlogs. In this paper, a Digital Forensic Data Reduction and Data Mining Framework is proposed. Initial research with sample data from South Australia Police Electronic Crime Section and Digital Corpora Forensic Images using the proposed framework resulted in significant reduction in the storage requirements — the reduced subset is only 0.196 percent and 0.75 percent respectively of the original data volume. The framework outlined is not suggested to replace full analysis, but serves to provide a rapid triage, collection, intelligence analysis, review and storage methodology to support the various stages of digital forensic examinations. Agencies that can undertake rapid assessment of seized data can more effectively target specific criminal matters. The framework may also provide a greater potential intelligence gain from analysis of current and historical data in a timely manner, and the ability to undertake research of trends over time. <s> BIB008 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The role of triage in digital forensics is disputed, with some practitioners questioning its reliability for identifying evidential data. Although successfully implemented in the field of medicine, triage has not established itself to the same degree in digital forensics. This article presents a novel approach to triage for digital forensics. Case-Based Reasoning Forensic Triager (CBR-FT) is a method for collecting and reusing past digital forensic investigation information in order to highlight likely evidential areas on a suspect operating system, thereby helping an investigator to decide where to search for evidence. The CBR-FT framework is discussed and the results of twenty test triage examinations are presented. CBR-FT has been shown to be a more effective method of triage when compared to a practitioner using a leading commercial application. <s> BIB009 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The evolution of modern digital devices is outpacing the scalability and effectiveness of Digital Forensics techniques. Digital Forensics Triage is one solution to this problem as it can extract evidence quickly at the crime scene and provide vital intelligence in time critical investigations. Similarly, such methodologies can be used in a laboratory to prioritize deeper analysis of digital devices and alleviate examination backlog. Developments in Digital Forensics Triage methodologies have moved towards automating the device classification process and those which incorporate Machine Learning principles have proven to be successful. Such an approach depends on crime-related features which provide a relevant basis upon which device classification can take place. In addition, to be an accepted and viable methodology it should be also as accurate as possible. Previous work has concentrated on the issues of feature extraction and classification, where less attention has been paid to improving classification accuracy through feature manipulation. In this regard, among the several techniques available for the purpose, we concentrate on feature weighting, a process which places more importance on specific features. 
A twofold approach is followed: on one hand, automated feature weights are quantified using Kullback-Leibler measure and applied to the training set whereas, on the other hand, manual weights are determined with the contribution of surveyed digital forensic experts. Experimental results of manual and automatic feature weighting are described which conclude that both the techniques are effective in improving device classification accuracy in crime investigations. <s> BIB010 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> When forensic triage techniques designed for feature phones are applied to smart phones, these recovery techniques return hundreds of thousands of results, only a few of which are relevant to the investigation. We propose the use of relevance feedback to address this problem: a small amount of investigator input can efficiently and accurately rank in order of relevance, the results of a forensic triage tool. We present LIFTR, a novel system for prioritizing information recovered from Android phones. We evaluate LIFTR's ranking algorithm on 13 previously owned Android smart phones and three recovery engines -- DEC0DE, Bulk Extractor, and Strings? using a standard information retrieval metric, Normalized Discounted Cumulative Gain (NDCG). LIFTR's initial ranking improves the NDCG scores of the three engines from 0.0 to an average of 0.73; and with as little as 5 rounds of feedback, the ranking score in- creases to 0.88. Our results demonstrate the efficacy of relevance feedback for quickly locating useful information among the large amount of irrelevant data returned by current recovery techniques. Further, our empirical findings show that a significant amount of important user information persists for weeks or even months in the expired space of a phone's memory. This phenomenon underscores the importance of using file system agnostic recovery techniques, which are the type of techniques that benefit most from LIFTR. <s> BIB011 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> This paper describes a five-phase, multi-threaded bootable approach to digital forensic triage, which is implemented in a product called Forensics2020. The first phase collects metadata for every logical file on the hard drive of a computer system. The second phase collects EXIF camera data from each image found on the hard drive. The third phase analyzes and categorizes each file based on its header information. The fourth phase parses each executable file to provide a complete audit of the software applications on the system; a signature is generated for every executable file, which is later checked against a threat detection database. The fifth and final phase hashes each file and records its hash value. All five phases are performed in the background while the first responder interacts with the system. This paper assesses the forensic soundness of Forensics2020. The tool makes certain changes to a hard drive that are similar to those made by other bootable forensic examination environments, although the changes are greater in number. The paper also describes the lessons learned from developing Forensics2020, which can help guide the development of other forensic triage tools. 
<s> BIB012 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Purpose – The purpose of this paper is to propose a novel approach that automates the visualisation of both quantitative data (the network) and qualitative data (the content) within emails to aid the triage of evidence during a forensics investigation. Email remains a key source of evidence during a digital investigation, and a forensics examiner may be required to triage and analyse large email data sets for evidence. Current practice utilises tools and techniques that require a manual trawl through such data, which is a time-consuming process. Design/methodology/approach – This paper applies the methodology to the Enron email corpus, and in particular one key suspect, to demonstrate the applicability of the approach. Resulting visualisations of network narratives are discussed to show how network narratives may be used to triage large evidence data sets. Findings – Using the network narrative approach enables a forensics examiner to quickly identify relevant evidence within large email data sets. Within... <s> BIB013 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> We present a new approach to digital forensic evidence acquisition and disk imaging called sifting collectors that images only those regions of a disk with expected forensic value. Sifting collectors produce a sector-by-sector, bit-identical AFF v3 image of selected disk regions that can be mounted and is fully compatible with existing forensic tools and methods. In our test cases, they have achieved an acceleration of >3× while collecting >95% of the evidence, and in some cases we have observed acceleration of up to 13×. Sifting collectors challenge many conventional notions about forensic acquisition and may help tame the volume challenge by enabling examiners to rapidly acquire and easily store large disks without sacrificing the many benefits of imaging. <s> BIB014 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> The sharp rise in consumer computing, electronic and mobile devices and data volumes has resulted in increased workloads for digital forensic investigators and analysts. The number of crimes involving electronic devices is increasing, as is the amount of data for each job. This is becoming unscaleable and alternate methods to reduce the time trained analysts spend on each job are necessary.This work leverages standardised knowledge representations techniques and automated rule-based systems to encapsulate expert knowledge for forensic data. The implementation of this research can provide high-level analysis based on low-level digital artefacts in a way that allows an understanding of what decisions support the facts. Analysts can quickly make determinations as to which artefacts warrant further investigation and create high level case data without manually creating it from the low-level artefacts. Extraction and understanding of users and social networks and translating the state of file systems to sequences of events are the first uses for this work.A major goal of this work is to automatically derive 'events' from the base forensic artefacts. Events may be system events, representing logins, start-ups, shutdowns, or user events, such as web browsing, sending email. 
The same information fusion and homogenisation techniques are used to reconstruct social networks. There can be numerous social network data sources on a single computer; internet cache can locate?Facebook, LinkedIn, Google Plus caches; email has address books and copies of emails sent and received; instant messenger has friend lists and call histories. Fusing these into a single graph allows a more complete, less fractured view for an investigator.Both event creation and social network creation are expected to assist investigator-led triage and other fast forensic analysis situations. <s> BIB015 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> An issue that continues to impact digital forensics is the increasing volume of data and the growing number of devices. One proposed method to deal with the problem of "big digital forensic data": the volume, variety, and velocity of digital forensic data, is to reduce the volume of data at either the collection stage or the processing stage. We have developed a novel approach which significantly improves on current practice, and in this paper we outline our data volume reduction process which focuses on imaging a selection of key files and data such as: registry, documents, spreadsheets, email, internet history, communications, logs, pictures, videos, and other relevant file types. When applied to test cases, a hundredfold reduction of original media volume was observed. When applied to real world cases of an Australian Law Enforcement Agency, the data volume further reduced to a small percentage of the original media volume, whilst retaining key evidential files and data. The reduction process was applied to a range of real world cases reviewed by experienced investigators and detectives and highlighted that evidential data was present in the data reduced forensic subset files. A data reduction approach is applicable in a range of areas, including: digital forensic triage, analysis, review, intelligence analysis, presentation, and archiving. In addition, the data reduction process outlined can be applied using common digital forensic hardware and software solutions available in appropriately equipped digital forensic labs without requiring additional purchase of software or hardware. The process can be applied to a wide variety of cases, such as terrorism and organised crime investigations, and the proposed data reduction process is intended to provide a capability to rapidly process data and gain an understanding of the information and/or locate key evidence or intelligence in a timely manner. <s> BIB016 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Computer forensics faces a range of challenges due to the widespread use of computing technologies. Examples include the increasing volume of data and devices that need to be analysed in any single case, differing platforms, use of encryption and new technology paradigms (such as cloud computing and the Internet of Things). Automation within forensic tools exists, but only to perform very simple tasks, such as data carving and file signature analysis. Investigators are responsible for undertaking the cognitively challenging and time-consuming process of identifying relevant artefacts. 
Due to the volume of cyber-dependent (e.g., malware and hacking) and cyber-enabled (e.g., fraud and online harassment) crimes, this results in a large backlog of cases. With the aim of speeding up the analysis process, this paper investigates the role that unsupervised pattern recognition can have in identifying notable artefacts. A study utilising the Self-Organising Map (SOM) to automatically cluster notable artefacts was devised using a series of four cases. Several SOMs were created - a File List SOM containing the metadata of files based upon the file system, and a series of application level SOMs based upon metadata extracted from files themselves (e.g., EXIF data extracted from JPEGs and email metadata extracted from email files). A total of 275 sets of experiments were conducted to determine the viability of clustering across a range of network configurations. The results reveal that more than 93.5% of notable artefacts were grouped within the rank-five clusters in all four cases. The best performance was achieved by using a 10ź×ź10 SOM where all notables were clustered in a single cell with only 1.6% of the non-notable artefacts (noise) being present, highlighting that SOM-based analysis does have the potential to cluster notable versus noise files to a degree that would significantly reduce the investigation time. Whilst clustering has proven to be successful, operationalizing it is still a challenge (for example, how to identify the cluster containing the largest proportion of notables within the case). The paper continues to propose a process that capitalises upon SOM and other parameters such as the timeline to identify notable artefacts whilst minimising noise files. Overall, based solely upon unsupervised learning, the approach is able to achieve a recall rate of up to 93%. <s> BIB017 </s> Methods and Tools of Digital Triage in Forensic Context: Survey and Future Directions <s> Lessons Learned from the Review <s> Abstract Large email data sets are often the focus of criminal and civil investigations. This has created a daunting task for investigators due to the extraordinary size of many of these collections. Our work offers an interactive visual analytic alternative to the current, manually intensive methodology used in the search for evidence in large email data sets. These sets usually contain many emails which are irrelevant to an investigation, forcing investigators to manually comb through information in order to find relevant emails, a process which is costly in terms of both time and money. To aid the investigative process we combine intelligent preprossessing, a context aware visual search, and a results display that presents an integrated view of diverse information contained within emails. This allows an investigator to reduce the number of emails that need to be viewed in detail without the current tedious manual search and comb process. <s> BIB018
|
To summarize the field of live triage, the noteworthy research focuses are as follows:
1. The framing of triage as a real-time computation problem with limited time and resources, presented by Roussev et al. BIB005 . The idea is that performance gains can be achieved if acquisition and processing start and complete at almost the same time. The implementation of the forensic system HANSKEN demonstrates the viability of this idea.
2. The selective imaging approaches to reduce data volume, presented by Grier and Richard III BIB014 and Quick and Choo BIB016 BIB008 . The approaches differ in how they select the regions that have forensic value: Grier and Richard III BIB014 state that profiles must be created and stored in a library, whereas Quick and Choo BIB008 suggest thumbnailing video, movie, and picture files.
3. The introduction of triage template pipelines into the investigative process for the most popular types of digital crimes, presented by Overill et al. BIB006 . However, the authors do not enumerate these types of crimes and provide only the DDoS and P2P template diagrams without discussing the details.
4. The artificial intelligence approaches presented by Turnbull and Randhawa BIB015 and Peersman et al. . Turnbull and Randhawa BIB015 describe an approach that assists a less technically skilled user in running a triage tool. Peersman et al. present an approach to automatically label new child sexual abuse media.
To summarize the field of post-mortem triage, the noteworthy research focuses are as follows:
1. Storing and reusing the knowledge of past cases, presented by Horsman et al. BIB009 BIB003 and Bashir and Khan [39] .
2. The use of machine learning techniques, presented by Marturana and Tacconi BIB007 BIB001 BIB004 , McClelland and Marturana BIB010 , and Fahdi et al. BIB017 . The trend is promising because such techniques have proven valuable in many research areas; however, the presented research works are still immature.
To summarize the field of triage of mobile devices, there is a single noteworthy research achievement:
1. The information recovery engine DEC0DE, offered by Walls et al. BIB002 , and the information prioritization system LIFTR, which uses the data obtained from DEC0DE, offered by Varma et al. BIB011 .
To summarize the field of triage tools, the noteworthy research achievements are as follows:
1. The method of similarity digests, offered by Roussev and Quates .
2. The online GRR Rapid Response system used for incident response, offered by Moser and Cohen [1] .
3. The multi-threaded bootable tool Forensics2020, which allows the examiner to interact with the system while the tool is processing data, offered by Baggili et al. BIB012 .
4. The visualization of email data, offered by Haggerty et al. BIB013 . Koven et al. BIB018 also presented an approach to email data visualization; however, the reported runtimes are quite long and the tool is therefore not suitable for triage purposes.
5. The SlackStick approach to identifying files of interest, in which several predetermined bytes are chosen to represent a file, offered by Hegarty and Haggerty [64] .
6. The distributed digital forensic system HANSKEN that works on a big data platform, offered by van Beek et al. .
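To illustrate the machine-learning trend noted above (e.g., the classification of seized devices from crime-related features in BIB007 and BIB010), the following minimal sketch trains and cross-validates two of the classifier families benchmarked in those works on synthetic feature vectors. It is an illustrative approximation assuming a scikit-learn environment, not the exact pipeline of the cited studies.

# Minimal sketch of ML-based triage classification (illustrative; synthetic data).
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each row describes one seized device; each column is a crime-related
# feature count (e.g., P2P clients installed, media files, known hash hits).
X = [
    [12, 340, 3],
    [0,   15, 0],
    [7,  210, 1],
    [1,    9, 0],
    [15, 410, 5],
    [0,    4, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = likely relevant to the case, 0 = likely not relevant

for name, clf in [("Bayes", GaussianNB()), ("SVM", SVC(kernel="linear"))]:
    scores = cross_val_score(clf, X, y, cv=3)
    print(name, "mean cross-validated accuracy:", scores.mean())

In practice, the feature extraction step (populating X from seized media) dominates the effort and largely determines the resulting classification quality.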
|
Network service orchestration standardization: A technology survey <s> Requirements <s> Operator interviews and anecdotal evidence suggest that an operator's ability to manage a network decreases as the network becomes more complex. However, there is currently no way to systematically quantify how complex a network's design is nor how complexity may impact network management activities. In this paper, we develop a suite of complexity models that describe the routing design and configuration of a network in a succinct fashion, abstracting away details of the underlying configuration languages. Our models, and the complexity metrics arising from them, capture the difficulty of configuring control and data plane behaviors on routers. They also measure the inherent complexity of the reachability constraints that a network implements via its routing design. Our models simplify network design and management by facilitating comparison between alternative designs for a network. We tested our models on seven networks, including four university networks and three enterprise networks. We validated the results through interviews with the operators of five of the networks, and we show that the metrics are predictive of the issues operators face when reconfiguring their networks. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Telecom providers struggle with low service flexibility, increasing complexity and related costs. Although "cloud" has been an active field of research, there is currently little integration between the vast networking assets and data centres of telecom providers. UNIFY considers the entire network, from home networks up to data centre, as a "unified production environment" supporting virtualization, programmability and automation and guarantee a high level of agility for network operations and for deploying new, secure and quality services, seamlessly instantiatable across the entire infrastructure. UNIFY focuses on the required enablers and will develop an automated, dynamic service creation platform, leveraging fine-granular service chaining. A service abstraction model and a proper service creation language and a global orchestrator, with novel optimization algorithms, will enable the automatic optimal placement of networking, computing and storage components across the infrastructure. New management technologies based on experience from DCs, called Service Provider DevOps, will be developed and integrated into the orchestration architecture to cope with the dynamicity of services. The applicability of a universal node based on commodity hardware will be evaluated in order to support both network functions and traditional data centre workloads, with an investigation of the need of hardware acceleration. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning, spanning multiple and heterogeneous optical network domains. This heterogeneity is, not only due to the diverse data transmission and switching technologies, but also due to the different options of control plane techniques. 
In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular, the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints. In this article, some of the recent activities regarding the Software-Defined Networking (SDN) orchestration are reviewed to address such a multi-domain control plane interworking problem. Specifically, three different models, including the single SDN controller model, multiple SDN controllers in mesh, and multiple SDN controllers in a hierarchical setting, are presented for the DC interconnection network with multiple SDN/ OpenFlow domains or multiple OpenFlow/ Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility and procedures of SDN orchestration for the end-to-end service provisioning in multi-domain data center optical networks. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Network Function Virtualization (NFV) enables to implement network functions in software, high-speed packet processing functions which traditionally are dominated by hardware implementations. Virtualized Network Functions (NFs) may be deployed on generic-purpose servers, e.g., in datacenters. The latter enables flexibility and scalability which previously were only possible for web services deployed on cloud platforms. The merit of NFV is challenged by control challenges related to the selection of NF implementations, discovery and reservation of sufficient network and server resources, and interconnecting both in a way which ful fills SLAs related to reliability and scalability. This paper details the role of a scalable orchestrator in charge of finding and reserving adequate resources. The latter will steer network and cloud control and management platforms to actually reserve and deploy requested services. We highlight the role of involved interfaces, propose elements of algorithmic components, and will identify major blocks in orchestration time in a proof of concept prototype which accounts for most functional parts in the considered architecture. Based on these evaluations, we propose several architectural enhancements in order to implement a highly scalable network orchestrator for carrier and cloud networks. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Software Defined Networking (SDN) and Network Function Virtualization (NFV) provide an alluring vision of how to transform broadcast, contribution and content distribution networks. In our laboratory we assembled a multi-vendor, multi-layer media network environment that used SDN controllers and NFV-based applications to schedule, coordinate, and control media flows across broadcast and contribution network infrastructure. — This paper will share our experiences of investigating, designing and experimenting in order to build the next generation broadcast and contribution network. We will describe our experience of dynamic workflow automation of high-bandwidth broadcast and media services across multi-layered optical network environment using SDN-based technologies for programmatic forwarding plane control and orchestration of key network functions hosted on virtual machines. 
Finally, we will outline the prospects for the future of how packet and optical technologies might continue to scale to support the transport of increasingly growing broadcast media. <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Requirements <s> Network virtualization is an emerging technique that enables multiple tenants to share an underlying physical infrastructure, isolating the traffic running over different virtual infrastructures/tenants. This technique aims to improve network utilization, while reducing the complexities in terms of network management for operators. Applied to this context, software-defined networking (SDN) paradigm can ease network configurations by enabling network programability and automation, which reduces the amount of operations required from both service and infrastructure providers. SDN techniques are decreasing vendor lock-in issues due to specific configuration methods or protocols. Application-based network operations (ABNO) are a toolbox of key network functional components with the goal of offering application-driven network management. Service provisioning using ABNO may involve direct configuration of data plane elements or delegate it to several control plane modules. We validate the applicability of ABNO to multitenant virtual networks in multitechnology optical domains based on two scenarios, in which multiple control plane instances are orchestrated by the architecture. Congestion detection and failure recovery are chosen to demonstrate fast recalculation and reconfiguration, while hiding the configurations in the physical layer from the upper layer. <s> BIB006
|
Service orchestration is a complex high-level control system and relevant research efforts have proposed a wide range of goals for a service orchestrator. We identify the following functional properties:
Coordination: Operator infrastructures comprise a wide range of network and computation systems providing a diverse set of resources, including network bandwidth, CPU and storage. Effective deployment of a network service depends on their coordinated configuration. The network manager must provision network resources and modify the forwarding policy of the network to ensure ordering and connectivity between the service NFs. This process becomes complex when considering the different control capabilities and interfaces across network technologies found in the metropolitan, access and wide area layers of the operator network. Furthermore, the network manager must configure the devices that will host the service NFs, either in software or hardware. The service orchestrator is responsible for abstracting the management and configuration heterogeneity of the different technologies and administrative domains BIB006 BIB002 .
Automation: Existing infrastructures incur a significant operational workload for the configuration, troubleshooting and management of network services. Network technologies typically provide different configuration interfaces in each network layer and require manual and repetitive configuration by network managers to deploy a network service BIB001 . In addition, vertical integration of network devices requires extensive human intervention to deploy and manage a network service in a multi-vendor and multi-technology environment. A key goal for service orchestration is to minimize human intervention during the deployment and management of network services. Efforts in programmable network control and NFV, like SDN, ABNO and the ETSI NFV MANO, provide low-level automation capabilities, which can be exploited by the service orchestrator to synthesize high-level automated service deployment and management mechanisms BIB003 .
Resource Provision and Monitoring: The specification of network services contains complex SLA guarantees, which complicate network management. For example, allocating resources that meet service delivery guarantees is an NP-hard problem from the perspective of the operator, and the re-optimization of a large network can take days. In parallel, existing service deployment approaches rely on static resource allocations and require resources to be provisioned for the worst-case service load scenarios. A key goal for service orchestration is to enable dynamic and flexible resource control and monitoring mechanisms, which converge resource control across the underlying technologies and abstract their heterogeneity [10, BIB004 ].
Efforts towards service orchestration are still limited. Relevant architecture and interface specifications define mechanisms for effective automation and programmability of individual resource types, like the SDN and ABNO paradigms for network resources and the NFV MANO for compute and storage resources. Nonetheless, these architectures remain low-level and provide only partial control over the infrastructure towards service orchestration.
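As an informal illustration of the coordination and resource requirements listed above, the snippet below shows one possible shape of a service request that an orchestrator would have to map onto network, compute and storage resources. The field names are illustrative assumptions rather than part of any standard discussed in this survey.

# Hypothetical service request descriptor; field names are illustrative only.
service_request = {
    "name": "enterprise-vpn-with-firewall",
    "nf_chain": ["vFirewall", "vDPI", "vRouter"],   # ordered service NFs
    "sla": {
        "bandwidth_mbps": 500,     # guaranteed end-to-end bandwidth
        "latency_ms": 20,          # maximum one-way latency
        "availability": 0.999,     # uptime target
    },
    "endpoints": ["branch-office-A", "datacenter-1"],
}

def admit(request):
    """Basic sanity checks an orchestrator might run before admission control."""
    assert request["nf_chain"], "a service must contain at least one NF"
    assert request["sla"]["bandwidth_mbps"] > 0
    return True

print(admit(service_request))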
Service orchestration initiatives from network operators and vendors BIB005 propose the development of a new orchestration layer above the existing individual control mechanisms, which will capitalize on their low-level automation and flexibility capabilities to support a service-oriented control abstraction exposed to the OSS/BSS, as depicted in Figure 1. In terms of network control, the service orchestrator can access low-level forwarding interfaces, as well as high-level control interfaces implementing standardized forwarding control mechanisms, like Segment Routing and Service Function Chaining, through the network controller. In parallel, NF management across the operator datacenters can be achieved through a dual-layer control and management stack, as suggested by relevant NF management architectures. The lower layer contains the Virtual Infrastructure Manager (VIM), which manages and configures the virtualization policy of compute and storage resources. The top layer contains the VNF Manager (VNFM), responsible for the configuration, control and monitoring of individual NFs. The service orchestrator will operate on top of these two management services (network and IT, see Figure 1) and will be responsible for exploiting their functionality to provide network service delivery, given the policy of the operator, channeled through the OSS. The effectiveness of the service orchestrator highly depends on the granularity and flexibility of the underlying control interfaces. This paper surveys standardization efforts for infrastructure control in an effort to discuss the existing opportunities and challenges towards service orchestration.
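The dual-layer stack described above can be summarized with a simplified orchestrator skeleton: the orchestrator asks a VIM to allocate compute resources for each NF, a VNFM to configure the resulting instances, and a network controller to chain them with the requested SLA. All class and method names below are assumptions for illustration and do not correspond to a particular MANO or SDN product API.

# Hypothetical skeleton of a service orchestrator on top of a network
# controller (SDN NBI), a VIM (compute/storage) and a VNF manager.
class StubVIM:
    def allocate(self, nf_name):
        print("VIM: allocating VM for", nf_name)
        return nf_name + "-vm"

class StubVNFM:
    def configure(self, instance):
        print("VNFM: configuring", instance)

class StubNetworkController:
    def connect(self, src, dst, sla):
        print("NOS: provisioning path", src, "->", dst, "with SLA", sla)

class ServiceOrchestrator:
    def __init__(self, network_ctrl, vim, vnfm):
        self.network_ctrl, self.vim, self.vnfm = network_ctrl, vim, vnfm

    def deploy(self, request):
        # 1. Allocate compute resources for every NF in the chain.
        instances = [self.vim.allocate(nf) for nf in request["nf_chain"]]
        # 2. Configure each NF instance through the VNF manager.
        for inst in instances:
            self.vnfm.configure(inst)
        # 3. Stitch endpoints and NF instances into an ordered chain.
        hops = [request["endpoints"][0]] + instances + [request["endpoints"][1]]
        for src, dst in zip(hops, hops[1:]):
            self.network_ctrl.connect(src, dst, request["sla"])
        return instances

orch = ServiceOrchestrator(StubNetworkController(), StubVIM(), StubVNFM())
orch.deploy({"nf_chain": ["vFirewall", "vDPI"],
             "sla": {"bandwidth_mbps": 500, "latency_ms": 20},
             "endpoints": ["branch-A", "dc-1"]})

A production orchestrator would additionally perform admission control, placement optimization and continuous monitoring, which are omitted here for brevity.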
|
Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> The cellular industry is evaluating architectures to distribute the signal processing in radio access networks. One of the options is to process the signals of all base stations on a shared pool of compute resources in a central location. In this centralized architecture, the existing base stations will be replaced with just the antennas and a few other active RF components, and the remainder of the digital processing including the physical layer will be carried out in a central location. This model has potential benefits that include a reduction in the cost of operating the network due to fewer site visits, easy upgrades, and lower site lease costs, and an improvement in the network performance with joint signal processing techniques that span multiple base stations. Further there is a potential to exploit variations in the processing load across base stations, to pool the base stations into fewer compute resources, thereby allowing the operator to either reduce energy consumption by turning the remaining processors off or reducing costs by provisioning fewer compute resources. We focus on this aspect in this paper. Specifically, we make the following contributions in the paper. Based on real-world data, we characterise the potential savings if shared homogeneous compute resources are used to process the signals from multiple base stations in the centralized architecture. We show that the centralized architecture can potentially result in savings of at least 22 % in compute resources by exploiting the variations in the processing load across base stations. These savings are achievable with statistical guarantees on successfully processing the base station's signals. We also design a framework that has two objectives: (i) partitioning the set of base stations into groups that are simultaneously processed on a shared homogeneous compute platform for a given statistical guarantee, and (ii) scheduling the set of base stations allocated to a platform in order to meet their real-time processing requirements. This partitioning and scheduling framework saves up to 19 % of the compute resources for a probability of failure of one in 100 million. We refer to this solution as CloudIQ. Finally we implement and extensively evaluate the CloudIQ framework with a 3GPP compliant implementation of 5 MHz LTE. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> Driven by the need to cope with exponentially growing mobile data traffic and to support new traffic types from massive numbers of machine-type devices, academia and industry are thinking beyond the current generation of mobile cellular networks to chalk a path towards fifth generation (5G) mobile networks. Several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloud RANs, application of SDN principles, exploiting new and unused portions of spectrum, use of massive MIMO and full-duplex communications. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems. Towards this end, we present OpenAirInterface (OAI) as a suitably flexible platform. 
In addition, we discuss the use of OAI in the context of several widely mentioned 5G research directions. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Radio Access Network (RAN) <s> Past years have witnessed the surge of mobile broadband Internet traffic due to the broad adoption of a number of major technical advances in new wireless technologies and consumer electronics. In this respect, mobile networks have greatly increased their availability to meet the exponentially growing capacity demand of modern mobile applications and services. The upcoming scenario in the near future lays down the possibility of a continuum of communications thanks also to the deployment of so called small cells. Conventional cellular networks and the small cells will form the foundation of this pervasive communication system. Therefore, future wireless systems must carry the necessary scalability and seamless operation to accommodate the users and integrate the macro cells and small cells together. In this work we propose the V-Cell concept and architecture. V-Cell is potentially leading to a paradigm shift when approaching the system designs that allows to overcome most of the limitations of physical layer techniques in conventional wireless networks. <s> BIB003
|
The 3G standards split the mobile RAN into two functional blocks: the Remote Radio Head (RRH), which receives and transmits the wireless signal and applies the appropriate signal transformations and amplification, and the Base Band Unit (BBU), which runs the MAC protocol and coordinates neighboring cells. The channel between these two entities has high bandwidth and ultra-low latency requirements, and the two systems are typically co-located in production deployments. Nonetheless, this design choice increases the operator cost to deploy and operate its RAN. BBUs are expensive components which increase the overall acquisition cost of a base station, while the BBU cooling requirements make the RAN a significant contributor to the aggregate power consumption of the operator . Recent trends in RAN design separate the two components by moving the BBU to the central office of the operator; an architectural paradigm commonly termed Cloud-RAN (C-RAN). C-RAN significantly reduces deployment and operational costs and improves the elasticity and resilience of the RAN. In parallel, the centralization of multiple RRHs under the control of a single BBU improves resource utilization and cell handovers, and minimizes inter-cell interference. Multiple interfaces, architectures and testbeds currently provide the technological capabilities to run and test C-RAN systems BIB001 BIB002 , while vendors already provide production-ready virtualized BBU appliances [17] . In addition, novel control abstractions can converge RAN control with underlying transport technologies and enable flexible deployment strategies BIB003 . A challenge for C-RAN architectures is the high multi-Gb/s bandwidth requirements and strict sub-millisecond latency and jitter demands for the links between the RRH and the datacenter [19] . These connectivity guarantees exhibit significant variability (from a few Mb/s to 30 Gb/s) within the course of a day, reflecting the varying loads of mobile cells, as well as the signal modulation and channel configuration. To provide flexible and on-demand front-haul connectivity with strong latency guarantees, operators require novel orchestration mechanisms supporting dynamic and multi-technology resource management. In addition, effective RAN virtualization requires a framework for the management and monitoring of BBU instances to provide service resiliency. The service orchestrator can monitor the performance of the BBU VNF instances and adjust the compute resource allocation, the VNF replication degree and the load distribution policy. In parallel, the orchestrator can improve front-haul efficiency by mapping the connectivity requirements between the BBU and the RRH into the network resource allocation policy. The 3rd Generation Partnership Project (3GPP) is actively exploring the applicability of NFV technologies to a range of mobile network use-cases, like fault-management and performance monitoring, and has defined a set of management requirements in the RAN, the Mobile Core Network and the IP Multimedia Subsystem (IMS) . In parallel, the 5G Public Private Partnership (5G PPP), within its effort to standardize the technologies and protocols for the next-generation communication network, defines end-to-end network service orchestration as a core design goal .
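Assuming that BBU load and front-haul utilization can be polled through the monitoring interfaces mentioned above, a simple hypothetical orchestration loop could scale the BBU pool and resize front-haul capacity as sketched below; the functions are placeholders, not a real C-RAN API.

# Hypothetical scaling loop for a virtualized BBU pool (illustrative only).
import random
import time

SCALE_OUT_LOAD = 0.8     # utilisation threshold for adding a BBU instance
SCALE_IN_LOAD = 0.3      # utilisation threshold for removing an instance
bbu_instances = 2

def measure_bbu_load():
    # Placeholder for a monitoring query (e.g., per-instance CPU utilisation).
    return random.uniform(0.1, 1.0)

def request_fronthaul_bandwidth(gbps):
    # Placeholder for a network-controller call that resizes front-haul paths.
    print("requesting", gbps, "Gb/s of front-haul capacity")

for _ in range(5):                      # a few iterations instead of a daemon loop
    load = measure_bbu_load()
    if load > SCALE_OUT_LOAD:
        bbu_instances += 1
        request_fronthaul_bandwidth(10 * bbu_instances)
    elif load < SCALE_IN_LOAD and bbu_instances > 1:
        bbu_instances -= 1
        request_fronthaul_bandwidth(10 * bbu_instances)
    print("load=%.2f, BBU instances=%d" % (load, bbu_instances))
    time.sleep(0.1)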
|
Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> Content distribution networks (CDNs) are a mechanism to deliver content to end users on behalf of origin Web sites. Content distribution offloads work from origin servers by serving some or all of the contents of Web pages. We found an order of magnitude increase in the number and percentage of popular origin sites using CDNs between November 1999 and December 2000.In this paper we discuss how CDNs are commonly used on the Web and define a methodology to study how well they perform. A performance study was conducted over a period of months on a set of CDN companies employing the techniques of DNS redirection and URL rewriting to balance load among their servers. Some CDNs generally provide better results than others when we examine results from a set of clients. The performance of one CDN company clearly improved between the two testing periods in our study due to a dramatic increase in the number of distinct servers employed in its network. More generally, the results indicate that use of a DNS lookup in the critical path of a resource retrieval does not generally result in better server choices being made relative to client response time in either average or worst case situations. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> Today a spectrum of solutions are available for istributing content over the Internet, ranging from commercial CDNs to ISP-operated CDNs to content-provider-operated CDNs to peer-to-peer CDNs. Some deploy servers in just a few large data centers while others deploy in thousands of locations or even on millions of desktops. Recently, major CDNs have formed strategic alliances with large ISPs to provide content delivery network solutions. Such alliances show the natural evolution of content delivery today driven by the need to address scalability issues and to take advantage of new technology and business opportunities. In this paper we revisit the design and operating space of CDN-ISP collaboration in light of recent ISP and CDN alliances. We identify two key enablers for supporting collaboration and improving content delivery performance: informed end-user to server assignment and in-network server allocation. We report on the design and evaluation of a prototype system, NetPaaS, that materializes them. Relying on traces from the largest commercial CDN and a large tier-1 ISP, we show that NetPaaS is able to increase CDN capacity on-demand, enable coordination, reduce download time, and achieve multiple traffic engineering goals leading to a win-win situation for both ISP and CDN. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. 
Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated. In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the various advantages of the utilization of caching content in 5G mobile networks. Furthermore, we conclude the article by exploring new relevant opportunities and challenges. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> We propose joint bandwidth provisioning and base station caching for video delivery in software-defined PONs. Performance evaluation via custom simulation models reveals 30% increase in served video requests and 50% reduction in service response delays. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Content Delivery Network (CDN) <s> High quality video streaming has become an essential part of many consumers' lives.We designed OpenCache; an OpenFlow-assisted in-network caching service.OpenCache benefits last mile environments by improving network utilisation.OpenCache increases the Quality of Experience for the end-user.OpenCache was evaluated on a large pan-European OpenFlow testbed with clear benefits. High quality online video streaming, both live and on-demand, has become an essential part of many consumers' lives. The popularity of video streaming, however, places a burden on the underlying network infrastructure. This is because it needs to be capable of delivering significant amounts of data in a time-critical manner to users. The Video-on-Demand (VoD) distribution paradigm uses a unicast independent flow for each user request. This results in multiple duplicate flows carrying the same video assets that only serve to exacerbate the burden placed upon the network. In this paper we present OpenCache: a highly configurable, efficient and transparent in-network caching service that aims to improve the VoD distribution efficiency by caching video assets as close to the end-user as possible. OpenCache leverages Software Defined Networking technology to benefit last mile environments by improving network utilisation and increasing the Quality of Experience for the end-user. Our evaluation on a pan-European OpenFlow testbed uses adaptive bitrate video to demonstrate that with the use of OpenCache, streaming applications play back higher quality video and experience increased throughput, higher bitrate, and shorter start up and buffering times. <s> BIB005
|
CDN services provide efficient distribution of static content on behalf of third-party Internet applications BIB001 . They rely on a well-provisioned and highly-available network of cache servers and allow end-users to retrieve static content with low latency by automatically redirecting them to an appropriate cache server, based on the user location, the caching policy and the cache load. CDN traffic currently constitutes a large portion of operator traffic volumes and providers, like Akamai, serve 15-30% of the global Internet traffic . The CDN service chain is simple and consists of a load-balancing function and a cache function, as depicted in Figure 2 . The greatest challenges in the deployment of such a service are the aggregate network data volumes of the service and the large number of network end-points. As a result, temporal variations in CDN traffic patterns can have a dramatic effect on the traffic matrix of the operator and affect Internet service delivery. In parallel, CDN-ISP integration lacks support for dynamic resource provisioning, which is required to gracefully manage the dynamic traffic patterns. Connectivity relies on fixed-capacity peering relationships through popular IXPs or CDN-operated peering locations , which must be provisioned for the worst-case scenario. The current design of CDN services introduces an interesting joint optimization problem between operators and CDN service providers. Such a CDN service can bring content closer to the user, enable dynamic deployment of caching NFs in the central offices of the operator and enforce network resource guarantees. The service can provide sufficient elasticity for the CDN caching layer, while the ISP can reduce core network load. Similar approaches have been proposed in the context of mobile operators: mobile CDNs have emerged to provide faster access to mobile apps, facilitate mobile video streaming and support dynamic content BIB003 BIB004 . In parallel, new network control architectures based on SDN and NFV principles enable CDN services to localize users and offload the redirection task to the network forwarding policy BIB005 BIB002 . These approaches provide an innovative environment to improve CDN functionality, but require a flexible control mechanism to integrate CDN services and infrastructures. A service orchestrator can autonomously adapt the CDN service deployment plan to the CDN load characteristics, using a policy specification from the CDN provider. In parallel, the orchestrator can monitor traffic volumes to infer content locality and hotspot development and deploy NF caches close to the end-user to improve latency and network efficiency.
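The hotspot-driven cache deployment sketched in the previous paragraph can be illustrated with a few lines of code: per-region request counters are inspected at each monitoring interval and a cache NF is instantiated in any region whose demand crosses a threshold. The deployment call is a hypothetical placeholder for an orchestrator action.

# Illustrative hotspot detection for CDN cache placement (hypothetical API).
from collections import Counter

REQUESTS_THRESHOLD = 10000       # requests per interval that justify a local cache
deployed_caches = set()

def deploy_cache_nf(region):
    # Placeholder for an orchestrator call that instantiates a cache NF
    # in the central office serving the given region.
    print("deploying cache NF in", region)
    deployed_caches.add(region)

# Per-region request counts observed during the last monitoring interval.
observed = Counter({"metro-north": 18500, "metro-south": 4200, "rural-east": 900})

for region, requests in observed.items():
    if requests > REQUESTS_THRESHOLD and region not in deployed_caches:
        deploy_cache_nf(region)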
|
Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities. They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> This document describes the use of RSVP (Resource Reservation Protocol), including all the necessary extensions, to establish label-switched paths (LSPs) in MPLS (Multi-Protocol Label Switching). Since the flow along an LSP is completely identified by the label applied at the ingress node of the path, these paths may be treated as tunnels. A key application of LSP tunnels is traffic engineering with MPLS as specified in RFC 2702. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> The routers in an Autonomous System (AS) must distribute the information they learn about how to reach external destinations. Unfortunately, today's internal Border Gateway Protocol (iBGP) architectures have serious problems: a "full mesh" iBGP configuration does not scale to large networks and "route reflection" can introduce problems such as protocol oscillations and persistent loops. Instead, we argue that a Routing Control Platform (RCP) should collect information about external destinations and internal topology and select the BGP routes for each router in an AS. RCP is a logically-centralized platform, separate from the IP forwarding plane, that performs route selection on behalf of routers and communicates selected routes to the routers using the unmodified iBGP protocol. RCP provides scalability without sacrificing correctness. In this paper, we present the design and implementation of an RCP prototype on commodity hardware. Using traces of BGP and internal routing data from a Tier-1 backbone, we demonstrate that RCP is fast and reliable enough to drive the BGP routing decisions for a large network. We show that RCP assigns routes correctly, even when the functionality is replicated and distributed, and that networks using RCP can expect comparable convergence delays to those using today's iBGP architectures. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> This document specifies the Path Computation Element (PCE) Communication Protocol (PCEP) for communications between a Path Computation Client (PCC) and a PCE, or between two PCEs. Such interactions include path computation requests and path computation replies as well as notifications of specific states related to the use of a PCE in the context of Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineering.
PCEP is designed to be flexible and extensible so as to easily allow for the addition of further messages and objects, should further requirements be expressed in the future. [STANDARDS-TRACK] <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Peer-to-peer applications, such as file sharing, real-time communication, and live media streaming, use a significant amount of Internet resources. Such applications often transfer large amounts of data in direct peer-to-peer connections. However, they usually have little knowledge of the underlying network topology. As a result, they may choose their peers based on measurements and statistics that, in many situations, may lead to suboptimal choices. This document describes problems related to optimizing traffic generated by peer-to-peer applications and associated issues such optimizations raise in the use of network-layer information. <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK] <s> BIB006 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> We present our experiences to date building ONOS (Open Network Operating System), an experimental distributed SDN control platform motivated by the performance, scalability, and availability requirements of large operator networks. We describe and evaluate two ONOS prototypes. The first version implemented core features: a distributed, but logically centralized, global network view; scale-out; and fault tolerance. The second version focused on improving performance. Based on experience with these prototypes, we identify additional steps that will be required for ONOS to support use cases such as core network traffic engineering and scheduling, and to become a usable open source, distributed network OS platform that the SDN community can build upon. <s> BIB007 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> Networking is undergoing a transformation throughout our industry. The shift from hardware driven products with ad hoc control to Software Defined Networks is now well underway. In this paper, we adopt the perspective of the Promise Theory to examine the current state of networking technologies so that we might see beyond specific technologies to principles for building flexible and scalable networks. Today's applications are increasingly distributed planet-wide in cloud-like hosting environments. Promise Theory's bottom-up modelling has been applied to server management for many years and lends itself to principles of self-healing, scalability and robustness.
<s> BIB008 </s> Network service orchestration standardization: A technology survey <s> Software Defined Networking (SDN) <s> As IP networks grow more complicated, these networks require a new interaction mechanism between customers and their networks based on intent rather than detailed specifics. An intent-based language is needed to enable customers to easily describe their diverse intent for network connectivity to the network management systems. This document describes the problem the Intent-Based NEtwork Modelling (IB-NEMO) language is trying to solve, a summary of the use cases that demonstrate this problem, and a proposed scope of work. Part of the scope is the validation of the language as a minimal (or reduced) subset. The IB-NEMO language consists of commands exchanged between an application and a network manager/controller. Some would call this boundary between the application and the network management system a northbound interface (NBI). IB-NEMO focuses on creating a minimal subset of the total possible Intent-Based commands to pass across this NBI. By creating a minimal subset (about 20% of the total possible) of all intent commands, IB-NEMO can be a simple Intent interface for most applications (hopefully 80%). Part of the validation of this command language is to provide test cases where a set of commands are used to convey information for a use case which results in a particular data model in the network controller. <s> BIB009
|
SDN is a recent network paradigm aiming for automated, flexible and user-controlled network forwarding and management. SDN is motivated by earlier network programmability efforts, including Active Networks BIB001 , ForCES , RCP BIB003 and Tempest . Unlike most earlier network programmability architectures, which explored clean-slate designs of data plane protocols, SDN maintains backwards compatibility with existing network technologies. SDN design is driven by four major design goals: i) network control and data plane separation; ii) logical control centralization; iii) open and flexible interfaces between control layers; and iv) network programmability. SDN standardization efforts are primarily driven by the Open Networking Foundation (ONF), while the IRTF SDNRG WG explores complementary standards for the higher control layers. Similar standardization activities take place within various SDOs, namely the Broadband Forum (broadband network applications) and the International Telecommunication Union (ITU) study groups (SG) 11 (SDN signaling), SG 13 (SDN applications in future networks), SG 15 (transport network applications of SDN) and SG 17 (applications of SDN for secure services), but efforts in these SDOs are currently in early stages and provide initial problem statements and requirements analyses. Figure 3 presents an architectural model of an SDN control stack. The architecture separates the control functionalities into three distinct layers. The data plane is the bottom layer and contains all the network devices of the infrastructure. Data plane devices are designed to efficiently perform a restricted set of low-level traffic monitoring and packet manipulation functions and have limited control intelligence. Each device implements one or more southbound interfaces (SBIs), which enable control of the forwarding and resource allocation policy from external entities. SBIs can be categorized into control interfaces, like OpenFlow and PCE BIB004 , designed to manipulate the device forwarding policy, and management interfaces, like NETCONF BIB006 and OF-CONFIG , designed to provide remote device configuration, monitoring and fault management. SDN functionality is not limited to networks supporting new clean-slate programmable interfaces and includes SBIs based on existing control protocols, like routing protocols. The control plane is the middle layer of the architecture and contains the Network Operating System (NOS), the focal point of the architecture. A NOS aggregates and centralizes control of multiple data plane devices and synthesizes new high-level Northbound Interfaces (NBIs) for management applications. For example, existing NOS implementations provide topology monitoring and resource virtualization services and enable high-level policy specification languages, among other functionalities. Furthermore, a NOS aggregates control policy requirements from management applications and provides them with accurate network state information. The NOS is responsible for analyzing policy requests from individual management applications, ensuring conformance with the administrative domain policy, detecting and mitigating policy conflicts between management applications and translating these requests into appropriate data plane device configurations.
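As an informal example of how a management application could consume a NOS northbound interface, the sketch below retrieves the controller's topology view and requests connectivity between two service endpoints over a RESTful NBI. The endpoint paths and JSON fields are illustrative assumptions and do not reproduce the API of a specific controller such as ONOS or ODL.

# Sketch of a management application consuming a hypothetical RESTful NBI.
import json
import requests   # third-party HTTP client, assumed to be available

NOS_URL = "http://nos.example.net:8181/nbi/v1"   # hypothetical NOS endpoint

def get_topology():
    # Retrieve the network-wide topology view maintained by the NOS.
    return requests.get(NOS_URL + "/topology", timeout=5).json()

def request_connectivity(src_host, dst_host, bandwidth_mbps):
    # Ask the NOS to program the data plane so that two service endpoints are
    # connected with a bandwidth guarantee; translation to per-device rules
    # and conflict detection are left to the NOS.
    intent = {"source": src_host, "destination": dst_host,
              "bandwidth_mbps": bandwidth_mbps}
    resp = requests.post(NOS_URL + "/intents", data=json.dumps(intent),
                         headers={"Content-Type": "application/json"},
                         timeout=5)
    return resp.status_code

if __name__ == "__main__":
    try:
        print(get_topology())
        print(request_connectivity("nf-cache-1", "gw-edge-3", 200))
    except requests.exceptions.RequestException as exc:
        print("NOS not reachable in this sketch:", exc)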
A key element for the scalability of the architecture is the logical centralization of network control; a control plane can consist of multiple NOS instances, each controlling an overlapping network segment, which use synchronization mechanisms, typically termed eastbound and westbound interfaces, to converge on a common network-wide view of the network state and policy between NOS instances. This way, an SDN control domain can recover from multiple NOS instance failures and the control load can be distributed across the remaining instances. Finally, the application plane is the top layer of the architecture and contains specialized applications that use NBIs to implement high-level NFs, like load balancing and resource management. A detailed presentation of the standardization, research and implementation efforts in the SDN community is given in . For the rest of this section we focus on NBI standardization efforts. NBIs are crucial for service orchestration, since they enable control and monitoring of service connectivity and network resource utilization, as well as flexible fault management. Nonetheless, NBI standardization is limited and existing control interface and mechanism design is driven by NOS development efforts. NBIs can be organized into two broad categories. The first category contains low-level information modeling NBIs. Information models unify the state representation of data plane devices and abstract the heterogeneity of SBIs. Network information models have been developed before the introduction of the SDN paradigm by multiple SDOs, like the ITU and the Distributed Management Task Force (DMTF) . Relevant to the SDN paradigm is the ONF information modeling working group (WG), which develops the Common Information Model (CoreModel) specifications. The CoreModel is hierarchical and includes a core model, which provides a basic abstraction for data plane forwarding elements, and technology-specific forwarding and application-specific models, which extend the core model abstraction. CoreModel specifications exploit object inheritance and allow control applications to acquire abstract network connectivity information and, in parallel, access technology-specific attributes of individual network devices. CoreModel adoption is limited and existing NOSes employ custom information models. The second NBI category contains high-level and innovative control abstractions, exploring interfaces beyond the typical match-action-forward model. These interfaces are typically implemented as NOS management applications, use the information model to implement their control logic and are consumed by external entities, like the Operations Support System (OSS), the service orchestrator and other control applications. Effectively, these interfaces manifest the reference points between the Network and Service Orchestrator components (Figure 1 ). For the rest of this section we elaborate on NBI formal specifications, as well as NBI designs developed in production NOSes. We elaborate on legacy control interfaces implemented in SDN environments, as well as interfaces supported by the ONOS BIB007 and OpenDaylight (ODL) projects, the most popular and mature open-source NOS implementations. Path Computation. Path Computation Element (PCE) is a control technology which addresses resource and forwarding control limitations in label-switched technologies. Generalized Multi-Protocol Label Switching (GMPLS) and Multi-Protocol Label Switching (MPLS) technologies follow a distributed approach for path establishment.
Switches use traffic engineering extensions to routing protocols, like OSPF-TE , to collect network resource and topology information. A path request triggers a label switch to compute an end-to-end path to the destination network using its topology information and to provision the path using signaling protocols, like RSVP-TE BIB002 . A significant limitation of MPLS path computation is the increased computational load on the co-processor of edge label switches in large networks, while limited visibility between network layers or across administrative domains can lead to sub-optimal path selections. PCE proposes a centralized path computation architecture and defines a protocol which allows the network controller to receive path requests from the NMS and to configure paths across individual network forwarding elements. PCE control can be used by the service orchestrator to provision connectivity between the NF nodes. The ONOS PCEP project 1 enables ONOS to serve Path Computation Client (PCC) requests and to manage label switched paths (LSPs) and MPLS-TE tunnels. In addition, the PCEP project develops a path computation mechanism for the ONOS tunneling subsystem and provides tunnels as a system resource. Tunnel establishment support, both as L2 and L3 VPNs, is available to applications through a RESTful NBI, and applications are distinguished as tunnel providers and tunnel consumers. LSP computation relies on network topology information, stored in a traffic engineering database (TED) and populated by an Interior Gateway Protocol (IGP). This information remains local within an Autonomous System (AS), limiting path computation to a single administrative domain. The IETF Inter-Domain Routing WG defines a mechanism to share link-state information across domains using the Network Layer Reachability Information (NLRI) field of the BGP protocol, standardized in the BGP-LS protocol extensions . The ONOS BGP-LS project introduces support for the BGP-LS protocol (peering and link state information support) as an SBI to complement the ONOS PCEP project . The BGP-LS/PCEP module 2 of the ODL project implements support for the aforementioned protocols as a control application. Furthermore, the module supports additional PCE extensions, like stateful PCE , PCEP for segment routing ( § 5.4), and secure transport for PCEP (PCEPS) . Stateful PCE introduces time, sequence and resource usage synchronization within and across PCEP sessions, allowing dynamic LSP management. Furthermore, PCEPS adds security extensions to the control channel of the PCE protocol. ALTO. Application Layer Traffic Optimization (ALTO) BIB005 is an IETF WG developing specifications that allow end-user applications to access accurate network performance information. Distributed network applications, like peer-to-peer and content distribution, can improve their peer-selection logic using network path information towards alternative service end-points. This better-than-random decision improves the performance of bandwidth-intensive or latency-sensitive applications, while the network provider can improve link utilization across its network. The ALTO protocol enables a service orchestrator to monitor the network of the operator and make informed service deployment decisions. ODL provides an ALTO server module 2 with a RESTful ALTO NBI.
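As a sketch of how an orchestrator or application might consume ALTO information, the Python fragment below ranks candidate service end-points using a simplified cost map; the dictionary layout and PID names are illustrative stand-ins for the ALTO network-map and cost-map resources, not the protocol's actual JSON schema.

```python
# Simplified ALTO-style maps: end-points are grouped into provider-defined
# identifiers (PIDs) and a cost map gives routing costs between PIDs.
network_map = {"10.0.1.7": "pid-eu", "10.0.2.9": "pid-us", "10.0.3.3": "pid-asia"}
cost_map = {
    "pid-eu": {"pid-eu": 1, "pid-us": 10, "pid-asia": 25},
}

def best_endpoint(client_ip, candidates):
    """Pick the candidate end-point with the lowest ALTO cost from the client."""
    client_pid = network_map[client_ip]
    return min(candidates, key=lambda ip: cost_map[client_pid][network_map[ip]])

print(best_endpoint("10.0.1.7", ["10.0.2.9", "10.0.3.3"]))   # -> 10.0.2.9
```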
Virtual Tenant Networks. Virtual Tenant Networks (VTNs) is a network virtualization architecture developed by NEC. VTN develops an abstraction that logically disassociates the specification of virtual overlay networks from the topology of the underlying network infrastructure. Effectively, users can define any network topology and the VTN management system will map the requested topology over the physical topology. VTN enables seamless service deployment for the service orchestrator, by decoupling the deployment plan from the underlying infrastructure. The VTN abstraction is extensively supported by the ODL project 2 . Locator/ID Separation. The IETF Locator/ID Separation Protocol (LISP) is a network architecture addressing the scalability problems of routing systems at Internet scale. LISP proposes a dual addressing mechanism, which decouples the location of a host from its unique identifier. LISP-aware end-hosts require only a unique destination end-point identifier (EID) to transmit a packet, while intermediate routing nodes use a distributed mapping service to translate EIDs to Routing Locators (RLOCs), identifiers of the network of the destination host. A packet is sent to an edge LISP router in the EID domain, where a LISP header with the RLOC address of the destination network is added. The packet is then routed across the underlay network to the destination EID domain. The LISP architecture provides a scalable mechanism for NF connectivity and mobility. ODL provides a LISP flow mapping module 2 . The module uses an SBI to acquire RLOC and EID information from the underlying network and exposes this information through a RESTCONF NBI. In addition, the NBI allows applications, like load balancers, to create custom overlay networks. The module is currently compatible with the Service Function Chain (SFC) ( § 5.3) functionality and holds future integration plans with group-based policy mechanisms. Real-time media. The ONF currently has a dedicated WG exploring standardization requirements for SDN NBIs. At the time of writing, the group has released an NBI specification for a Real Time Media control protocol, in collaboration with the International Multimedia Telecommunication Consortium (IMTC). The protocol allows end-user applications to communicate with the local network controller, discover available resources and assign individual flows to specific quality of experience (QoE) classes, through a RESTful API. ONF is currently developing a proof-of-concept implementation of the API as part of the ASPEN project . Intent-based networking. Intent-based networking is a popular SDN NBI paradigm exploring the applicability of declarative policy languages in network management. Unlike traditional imperative policy languages, intent-based policies describe to the NOS the set of acceptable network states and leave low-level network configuration and adaptation to the NOS. As a result, intents are invariant to network parameters like link outages and vendor variance, because they lack any implementation details. In addition, intents are portable across controllers, thus simplifying application integration and run-time complexity, but this requires a common NBI across platforms, which is currently an active goal for multiple SDO WGs. The IETF has adopted the NEMO specifications BIB009 , an intent-based networking policy language. NEMO is a Domain Specific Language (DSL), following the declarative programming paradigm. NEMO applications do not define the underlying mechanisms for data storage and manipulation, but rather describe their goals.
The language defines three major abstractions: an end-point, which describes a network end-point; a connection, which describes connectivity requirements between network end-points; and an operation, which describes packet operations. Huawei is currently leading an implementation initiative, based on ODL and the OPNFV project . In parallel, the ONF has recently organized a WG to standardize a common intent model. The group aims to fulfill two objectives: i) describe the architecture and requirements of intent implementations across controllers and define portable intent expressions, and ii) develop a community-approved information model which unifies intent interfaces across controllers. The respective standard is coupled with the development of the Boulder framework , an open-source and portable intent framework which can integrate with all major SDN NOSes. Boulder organizes intents through a grammar which consists of subjects, predicates and targets. The language can be extended to include constraints and conditions. The reference Boulder implementation has established compatibility with ODL through the Network Intent Composition (NIC) project, while ONOS support is currently under development. Group-Based Policy (GBP) is an alternative intent-based networking paradigm, developed by the ODL project. Based upon promise theory BIB008 , GBP separates application concerns and simplifies dependency mapping, thus allowing greater automation during the consolidation and deployment of multiple policy specifications. The GBP abstraction models policy using the notions of end-points and end-point groups and provides language primitives to control the communication between them. Developers can specify through GBP their application requirements and the relationships between different tiers of their application, while remaining agnostic to the topology and capabilities of the underlying network. The ODL GBP module provides an NBI 2 which leverages the low-level control of several network virtualization technologies, like OpenStack Neutron and SFC ( § 5.3).
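To illustrate the intent abstractions discussed above, the sketch below models a Boulder-style intent (subject, predicate, target) and a GBP-style contract between two end-point groups, then expands them into low-level allow rules; the class names and the "compilation" step are hypothetical and do not reflect the actual Boulder, NEMO or ODL GBP interfaces.

```python
from dataclasses import dataclass

@dataclass
class Intent:                 # Boulder-style grammar: subject-predicate-target
    subject: str
    predicate: str
    target: str

@dataclass
class EndpointGroup:          # GBP-style grouping of end-points
    name: str
    members: list

def compile_contract(consumer: EndpointGroup, provider: EndpointGroup, port: int):
    """Expand a group-level 'allow' contract into per-end-point rules."""
    return [("allow", c, p, port) for c in consumer.members for p in provider.members]

web = EndpointGroup("web", ["10.0.0.1", "10.0.0.2"])
db = EndpointGroup("db", ["10.0.1.1"])

intent = Intent(subject="web", predicate="may-reach", target="db")
rules = compile_contract(web, db, port=5432)
print(intent, rules, sep="\n")
```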
|
Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Services such as content distribution, distributed databases, or ::: inter-data center connectivity place a set of new requirements on the ::: operation of networks. They need on-demand and application-specific ::: reservation of network connectivity, reliability, and resources (such ::: as bandwidth) in a variety of network applications (such as point-to- ::: point connectivity, network virtualization, or mobile back-haul) and ::: in a range of network technologies from packet (IP/MPLS) down to ::: optical. An environment that operates to meet these types of ::: requirements is said to have Application-Based Network Operations ::: (ABNO). ABNO brings together many existing technologies and may be ::: seen as the use of a toolbox of existing components enhanced with a ::: few new elements. This document describes an architecture and ::: framework for ABNO, showing how these components fit together. It ::: provides a cookbook of existing technologies to satisfy the ::: architecture and meet the needs of the applications. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Transport networks provide reliable delivery of data between two end points. Today's most advanced transport networks are based on Wavelength Switching Optical Networks (WSON) and offer connections of 10Gbps up to 100Gbps. However, a significant disadvantage of WSON is the rigid bandwidth granularity because only single, large chunks of bandwidth can be assigned matching the available fixed wavelengths resulting in considerable waste of network resources. Elastic Optical Networks (EON) provides spectrum-efficient and scalable transport by introducing flexible granular grooming in the optical frequency domain. EON provides arbitrary contiguous concatenation of optical spectrum that allows creation of custom-sized bandwidth. The allocation is performed according to the traffic volume or user request in a highly spectrum-efficient and scalable manner. The Adaptive Network Manager (ANM) concept appears as a necessity for operators to dynamically configure their infrastructure based on user requirements and network conditions. This work introduces the ANM and defines ANM use cases, and its requirements, and proposes an architecture for ANM that is aligned with solutions being developed by the industry. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> ABNO architecture is proposed in IETF as a framework which enables network automation and programmability thanks to the utilization of standard protocols and components. This work not only justifies the architecture but also presents the first experimental demonstration. <s> BIB003 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Huge amount of algorithmic research is being done in the field of optical networks, including Routing and Spectrum Allocation (RSA), elastic operations, spectrum defragmentation, and other re-optimization algorithms. Frequently, those algorithms are developed and executed on simulated environments, where many assumptions are done about network control and management issues. Those issues are relevant, since they might prevent algorithms to be deployed in real scenarios. 
To completely validate network-related algorithms, we developed an extensible control and management plane test-bed, named as iONE, for single layer and multilayer flexgrid-based optical networks. iONE is based on the Applications-Based Network Operations (ABNO) architecture currently under standardization by the IETF. iONE enables deploying and assessing the designed algorithms by defining workflows. This paper presents the iONE test-bed architecture, describes its components, and experimentally demonstrates its operation with a specific use-case. <s> BIB004 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Traditionally, routing systems have implemented routing and signaling ::: (e.g., MPLS) to control traffic forwarding in a network. Route ::: computation has been controlled by relatively static policies that ::: define link cost, route cost, or import and export routing policies. ::: Requirements have emerged to more dynamically manage and program ::: routing systems due to the advent of highly dynamic data-center ::: networking, on-demand WAN services, dynamic policy-driven traffic ::: steering and service chaining, the need for real-time security threat ::: responsiveness via traffic control, and a paradigm of separating ::: policy-based decision-making from the router itself. These ::: requirements should allow controlling routing information and traffic ::: paths and extracting network topology information, traffic statistics, ::: and other network analytics from routing systems. This document ::: proposes meeting this need via an Interface to the Routing System ::: (I2RS). <s> BIB005 </s> Network service orchestration standardization: A technology survey <s> Application-Based Network Operations (ABNO) <s> Abstract As current traffic growth is expected to strain capacity of today׳s metro network, novel content distribution architectures where contents are placed closer to the users are being investigated. In that regard, telecom operators can deploy datacenters (DCs) in metro areas, thus reducing the impact of the traffic between users and DCs. In this paper, a hierarchical content distribution architecture for the telecom cloud is investigated: core DCs placed in geographically distributed locations, are interconnected through permanent “per content provider” (CP) virtual network topologies (CP-VNT); additionally, metro DCs need to be interconnected with the core DCs. CP׳s data is replicated in the core DCs through the CP-VNTs, while metro-to-core (M2C) anycast connections are established periodically for content synchronization. Since network failures might disconnect the CP-VNTs, recovery mechanisms are proposed to reconnect both topologies and anycast connections. Topology creation, anycast provisioning, and recovery problems are first formally stated and modeled as Integer Linear Programs (ILP) and heuristic algorithms are proposed. Exhaustive simulation results show significant improvements in both supported traffic and restorability. Workflows to implement the algorithms within the Applications-based Network Operations (ABNO) architecture and extensions for PCEP are proposed. Finally, the architecture is experimentally validated in UPC's SYNERGY test-bed running our ABNO-based iONE architecture. <s> BIB006
|
The evolution of the SDN paradigm has highlighted that clean-slate design approaches are prone to protocol and interface proliferation, which can limit the evolvability and interoperability of a deployment. ABNO BIB001 is an alternative modular control architecture standard, published as an Area Director-sponsored RFC document, which reuses existing standards to provide connectivity services. ABNO provides, by design, network orchestration capabilities for multi-technology and multi-domain environments, since it relies on production protocols developed and adopted to fulfill these requirements. The architecture enables network applications to automatically provision network paths and access network state information, controlled by an operator-defined network policy. ABNO consists of eight functional blocks, presented in Figure 4 along with their interfaces, but production deployments are not required to implement all of the components. A core element of the architecture is the ABNO controller. The controller allows applications and the NMS/OSS to specify end-to-end path requirements and access path state information. A path request triggers the controller to inspect the current network connectivity and resource allocations, and to provision a path which fulfills the resource requirements and does not violate the network policy. In addition, the controller is responsible for re-optimizing paths at run-time, taking into consideration other path requests, routing state and network errors. The architecture contains an OAM handler to collect network errors from all network layers. The OAM handler monitors the network and collects error notifications from network devices, using interfaces like IPFIX and NETCONF, which are correlated in order to synthesize high-level error reports for the ABNO controller and the NMS. In addition, the ABNO architecture integrates with the network routing policy through an Interface to the Routing System (I2RS) client. I2RS BIB005 is an IETF WG that develops an architecture for real-time and event-based application interaction with the routing system of network devices. Furthermore, the WG has developed a detailed information model that allows external applications to monitor the RIB of a forwarding device. As a result, the I2RS client of the ABNO architecture aggregates information from network routers in order to adapt its routing policy, and it can modify routing tables to reflect path availability. Path selection is provided by a PCE controller, while a provisioning manager is responsible for path deployment and configuration using existing control plane protocols, like OpenFlow and NETCONF. It is important to highlight that these functional blocks may be omitted in a production deployment and the architecture proposes multiple overlapping control channels. In addition, the architecture contains an optional Virtual Network Topology Manager (VNTM), which can provision connectivity in the network physical layer, for example by configuring virtual links in WDM networks. Topology discovery is a key requirement for the path selection algorithm of the PCE controller and the ABNO architecture uses multiple databases to store relevant information. The Traffic-Engineering Database (TED) is a required database for any ABNO deployment and contains the network topology along with link resource and capability information. The database is populated with information through traffic engineering extensions in the routing protocol.
Optionally, the architecture suggests support for an LSP database, which stores information for established paths, and a database that stores associative information between physical links and network paths, for link capacity prediction during virtual link provisioning over optical technologies. A critical element for production deployment is the ability of the ABNO architecture to employ a common policy for all path selection decisions. The ABNO architecture incorporates a Policy Agent which is controlled by the NMS/OSS. The policy agent authenticates requests, maintains accounting information and enforces policy restrictions on the path selection algorithm. The policy agent is a focal point in the architecture and any decision by the ABNO controller, the PCE controller and the ALTO server requires a check against the active network policy. In addition to the ABNO control interfaces, the architecture provides additional application interfaces which expose network state information through an ALTO server. The server uses the ALTO protocol to provide accurate path capacity and load information to applications and to assist the application server selection process and performance monitoring. A number of ABNO-based implementations exist, detailing how the architecture has been used to orchestrate resources in complex network environments, including iONE BIB004 for content distribution in the telecom cloud BIB006 , and the Adaptive Network Manager BIB003 for coordinating operations in flex-grid optical and packet networks BIB002 . The large telecom vendor Infinera and the network operator Telefonica also provided a joint demonstration of orchestrating and provisioning bandwidth services in real time (Network as a Service, NaaS) across a multi-vendor IP/MPLS and optical transport network, using a variety of APIs .
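The following Python sketch mimics the ABNO workflow described above, namely a policy check, a PCE-style shortest-path computation over a toy TED, and a provisioning step, with every component reduced to a stub; the example graph, the policy rule and the "provisioning" call are assumptions made for illustration and are not the actual ABNO component interfaces.

```python
import heapq

# Toy TED: adjacency list with link "costs" (e.g., a TE metric).
ted = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}

def pce_shortest_path(graph, src, dst):
    """Dijkstra over the TED, standing in for the PCE controller."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neigh, weight in graph[node].items():
            if neigh not in seen:
                heapq.heappush(queue, (cost + weight, neigh, path + [neigh]))
    return None

def policy_agent_allows(src, dst):
    return True        # placeholder: a real policy agent would check the request

def handle_path_request(src, dst):
    if not policy_agent_allows(src, dst):
        return "rejected by policy"
    result = pce_shortest_path(ted, src, dst)
    if result is None:
        return "no path"
    cost, path = result
    # Provisioning manager stub: would push configuration via OpenFlow/NETCONF.
    return f"provisioned path {path} (cost {cost})"

print(handle_path_request("A", "D"))   # -> provisioned path ['A', 'B', 'C', 'D'] (cost 3)
```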
|
Network service orchestration standardization: A technology survey <s> Service Function Chain (SFC) <s> This document describes requirements for conveying information between Service Function Chaining (SFC) control elements (including management components) and SFC functional elements. Also, this document identifies a set of control interfaces to interact with SFC-aware elements to establish, maintain or recover service function chains. This document does not specify protocols nor extensions to existing protocols. This document exclusively focuses on SFC deployments that are under the responsibility of a single administrative entity. Inter-domain considerations are out of scope. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Service Function Chain (SFC) <s> This document defines a YANG data model that can be used to configure and manage Service Function Chains. <s> BIB002
|
SFC is a recently formed IETF WG which aims to define the architectural principles and protocols for the deployment and management of NF forwarding graphs. An SFC deployment operates as a network overlay, logically separating the control plane of the service from the control of the underlying network. The overlay functionality is implemented by specialized forwarding elements, using a new network header. Figure 6 presents an example deployment scenario of an SFC domain. An administrative network domain can contain one or more SFC domains. An SFC domain is a set of SFC-enabled network devices sharing a common information context. The information context contains state regarding the deployed service graphs, the available paths for each service graph and classification information mapping incoming traffic to a service path. An SFC-specific header is appended to all packets at the edges of the SFC domain by an SFC-Classifier. The SFC-Classifier assigns incoming traffic to a service path by appending an appropriate SFC header to each packet. For outgoing traffic, the SFC-Classifier is responsible for removing any SFC headers and forwarding each packet appropriately. Once the packet is within the SFC domain, it is forwarded by the classifier to an SF Forwarder (SFF), an element responsible for forwarding traffic to an SF according to the service function ordering. Finally, the architecture is designed to accommodate both SFC-aware and legacy NFs. The main difference between them is that SFC-aware NFs can parse and manipulate SFC headers. For legacy NFs, the architecture defines a specialized element to manipulate SFC headers on behalf of the service function, the SFC-Proxy. The network overlay of the SFC architecture is realized through a new protocol layer, the Network Service Header (NSH) [83] . NSH contains information which defines the position of a packet in the service path, using a service path identifier and a service index, and carries metadata between service functions regarding policy and post-service delivery. Highly relevant for service orchestration are the control and management interfaces of the SFC architecture. At the time of writing, the SFC WG is exploring the SFC control channel requirements, and initial design goals BIB001 define four main control interfaces. C1 is the control channel of the SFC-Classifier and allows manipulation of the classification policy which assigns incoming traffic to specific service paths. This control interface can be used to load balance traffic between service paths and optimize resource utilization. C2 is a control channel for the SFF forwarding policy and exposes monitoring information, like latency and load. C3 is the control protocol used to aggregate status, liveness and performance information from each SFC-aware service function. Finally, the controller can use the C4 protocol to configure SFC-Proxies with respect to NSH header manipulation before and after a packet traverses an SFC-unaware NF. In parallel, the WG has proposed a set of YANG models to implement the proposed control interfaces BIB002 . Furthermore, the WG has also specified a set of YANG models for the management interface of an SFC controller BIB001 . This interface provides information about the liveness of individual SFC paths, topological information for the underlying SFC infrastructure, performance counters and control of the fault and error management strategies.
In addition, the management interface allows external applications to re-optimize service paths and to control the load balancing policy. At the time of writing, multiple open-source platforms introduce SFC support. The Open vSwitch soft-switch has introduced SFC support both in the data plane and in the control plane (through OpenFlow extensions). The OpenStack cloud management platform exploits the Open vSwitch SFC support and implements a high-level SFC control interface . Furthermore, the ONOS controller currently supports SFC functionality using VTN overlays, while ODL implements SFC support using LISP tunnels. In addition, the ONF has released recommendations for an L4-L7 SFC architecture which uses OpenFlow as the SBI of the SFC controller and explores the applicability of, and required extensions to, the OpenFlow abstraction to improve support for SFF elements.
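As a rough illustration of the NSH service-path fields mentioned above, the Python sketch below packs a 24-bit service path identifier (SPI) and an 8-bit service index (SI) into four bytes and decrements the index the way an SFF conceptually would; it deliberately omits the NSH base header and metadata fields, so it is not a wire-compatible rendering of the header in [83].

```python
import struct

def pack_service_path(spi: int, si: int) -> bytes:
    """Pack the 24-bit service path identifier and the 8-bit service index."""
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path(data: bytes):
    value, = struct.unpack("!I", data)
    return value >> 8, value & 0xFF          # (spi, si)

def sff_forward(data: bytes) -> bytes:
    """Emulate an SFF hop: decrement the service index after a service function."""
    spi, si = unpack_service_path(data)
    if si == 0:
        raise ValueError("service index exhausted - drop packet")
    return pack_service_path(spi, si - 1)

hdr = pack_service_path(spi=42, si=3)         # classifier assigns path 42, 3 hops left
print(unpack_service_path(sff_forward(hdr)))  # -> (42, 2)
```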
|
Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Network operators anticipate the offering of an increasing variety of cloud-based services with stringent Service Level Agreements. Technologies currently supporting IP networks however lack the flexibility and scalability properties to realize such evolution. In this article, we present Segment Routing (SR), a new network architecture aimed at filling this gap, driven by use-cases defined by network operators. SR implements the source routing and tunneling paradigms, letting nodes steer packets over paths using a sequence of instructions (segments) placed in the packet header. As such, SR allows the implementation of routing policies without per-flow entries at intermediate routers. This paper introduces the SR architecture, describes its related ongoing standardization efforts, and reviews the main use-cases envisioned by network operators. <s> BIB001 </s> Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Segment Routing (SR) leverages the source routing paradigm. A node ::: steers a packet through an ordered list of instructions, called ::: segments. A segment can represent any instruction, topological or ::: service-based. A segment can have a semantic local to an SR node or ::: global within an SR domain. SR allows to enforce a flow through any ::: topological path while maintaining per-flow state only at the ingress ::: nodes to the SR domain. Segment Routing can be directly applied to the ::: MPLS architecture with no change on the forwarding plane. A segment is ::: encoded as an MPLS label. An ordered list of segments is encoded as a ::: stack of labels. The segment to process is on the top of the stack. ::: Upon completion of a segment, the related label is popped from the ::: stack. Segment Routing can be applied to the IPv6 architecture, with a ::: new type of routing header. A segment is encoded as an IPv6 address. ::: An ordered list of segments is encoded as an ordered list of IPv6 ::: addresses in the routing header. The active segment is indicated by ::: the Destination Address of the packet. The next active segment is ::: indicated by a pointer in the new routing header. <s> BIB002 </s> Network service orchestration standardization: A technology survey <s> Segment Routing (SR) <s> Segment Routing can be applied to the IPv6 data plane using a new type ::: of Routing Extension Header called the Segment Routing Header. This ::: document describes the Segment Routing Header and how it is used by ::: Segment Routing capable nodes. <s> BIB003
|
Segment Routing (SR) BIB001 is an architecture for the instantiation of service graphs over a network infrastructure using source routing mechanisms, standardized by the IETF Source Packet Routing in Networking (SPRING) WG BIB002 . SR is a data plane technology and uses existing protocols to store instructions (segments) for the packet path in its header. SR segments can have local or global semantics, and the architecture defines three segment types: a node segment forwards a packet over the shortest path towards a network node, an adjacency segment forwards the packet through a specific router port and a service segment introduces service differentiation on a service path. Currently, the SR architecture has defined a set of extensions for the IPv6 BIB003 and MPLS protocols, which define protocol-compliant mechanisms to store the segment stack and the active segment pointer in the protocol header. In addition, to enable dynamic adaptation of the forwarding policy, the architecture defines a set of control operations for forwarding elements to manipulate the packet segment list and to update established paths dynamically. The selection of the packet path is implemented on the edge routers of the SR domain. The architecture specifies multiple path selection mechanisms, including static configurations, distributed shortest-path selection algorithms and programmatic control of segment paths using SDN SBIs. The network IGP can be used to provide segment visibility between routers, and a YANG management interface is defined for SR segment information retrieval and SR routing entry control. SR provides a readily available framework to instantiate service forwarding graphs. A forwarding graph can be implemented as a segment stack and existing VNFs can be integrated with the architecture by introducing appropriate support for the MPLS and IPv6 SR extensions. In comparison to SFC, SR provides a simpler architecture which does not require the deployment of new network elements. Nonetheless, SFC provides wider protocol support and its architecture is designed to support different data plane technologies, while SR is closely aligned with MPLS technologies. SR support has been introduced in both major SDN NOSes. The ONOS project has introduced support for SR to implement CORD, a flexible central office architecture designed to simplify network service management . Similarly, ODL supports SR functionality using MPLS labels and the PCE SBI module. In parallel, Cisco has introduced SR support in recent IOS XR versions .
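A toy Python sketch of the segment-list mechanism: the ingress edge router pushes an ordered list of segments (plain tuples here, standing in for MPLS labels or IPv6 segment addresses) and each hop consumes the active segment; the segment names and forwarding loop are purely illustrative and do not follow the MPLS or IPv6 encodings discussed above.

```python
# Segment types from the SR architecture: node, adjacency and service segments.
def node(dest):        return ("node", dest)     # shortest path towards dest
def adjacency(port):   return ("adj", port)      # forward out of a specific port
def service(nf):       return ("svc", nf)        # steer through a service function

def ingress_push(packet, segments):
    """Edge router encodes the whole service path as a segment stack."""
    packet["segments"] = list(segments)
    return packet

def process_hop(packet):
    """Each SR-capable hop acts on the active (top) segment and pops it."""
    seg_type, value = packet["segments"].pop(0)
    print(f"hop handles {seg_type} segment -> {value}")
    return packet

pkt = ingress_push({"payload": "data"},
                   [node("R3"), service("firewall"), adjacency("R5:port2"), node("R7")])
while pkt["segments"]:
    pkt = process_hop(pkt)
```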
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Neurologically impaired infants have immature, damaged, or abnormally developed nervous systems that may cause abnormalities of sucking and swallowing, among other problems. Sucking abnormalities usually present as absence of the sucking response, weakness or incoordination of sucking and swallowing, or some combination of these problems. More investigation of the responses of these infants to various stimuli and training techniques is greatly needed. Although training neurologically impaired infants to breastfeed may present a challenge to even the most experienced neonatal nurse, physician, or therapist, most infants improve and can learn to suckle at the breast. If a mother has intended to nurse her infant, she should be encouraged to do so, even when the infant has abnormalities of sucking, except in the rare and most severely affected infants who remain dependent on gavage or gastrostomy feedings. Various techniques of stimulating, positioning, and progressive weaning to the breast can be helpful in teaching mother and infant to breastfeed. Encouraging support should be provided by all professionals involved with the mother and infant, as well as by a team experienced in helping with such problems. Most importantly, mother and staff must be patient, because the rewards for both the infant and mother are worth the effort. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> The sucking patterns of 42 healthy full-term and 44 preterm infants whose gestational age at birth was 30.9 +/- 2.1 weeks were compared using the Kron Nutritive Sucking Apparatus for a 5-minute period. The measured pressures were used to calculate six characteristics of the sucking response: maximum pressure generated, amount of nutrient consumed per suck, number of sucks per episode, the duration or width of each suck, the length of time between sucks, and the length of time between sucking episodes. The maximum pressure of the term infant (100.3 +/- 35) was higher, p less than .05, than the maximum pressure of the preterm infant (84 +/- 33). Term infants also consumed more formula per suck (45.3 +/- 20.3 vs. 37.6 +/- 15.9, p less than .05). In addition, they had more sucks/episode (13.6 +/- 8.7 vs. 7.7 +/- 4.1, p less than .001) and maintained the pressure longer for a wider suck width (0.49 +/- 0.1 vs. 0.45 +/- 0.08, p less than .05). Sucking profiles of the preterm infant are significantly different from the full-term infant. These sucking profiles can be developed as a clinically useful tool for nursing practice. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. 
The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> OBJECTIVE ::: To determine the prevalence and nature of feeding difficulties and oral motor dysfunction among a representative sample of 49 children with cerebral palsy (12 to 72 months of age). ::: ::: ::: STUDY DESIGN ::: A population survey was undertaken by means of a combination of interview and home observational measures. ::: ::: ::: RESULTS ::: Sucking (57%) and swallowing (38%) problems in the first 12 months of life were common, and 80% had been fed nonorally on at least one occasion. More than 90% had clinically significant oral motor dysfunction. One in three (36.2%) was severely impaired and therefore at high risk of chronic undernourishment. There was a substantial discrepancy between the lengthy duration of mealtimes reported by mothers and those actually observed in the home (mean, 19 minutes 21 seconds; range, 5 minutes 21 seconds to 41 minutes 39 seconds). In 60% of the children, severe feeding problems preceded the diagnosis of cerebral palsy. ::: ::: ::: CONCLUSIONS ::: Using a standardized assessment of oral motor function, we found the majority of children to have clinically significant oral motor dysfunction. Contrary to maternal report, mealtimes were relatively brief, and this, combined with the severity of oral motor dysfunction, made it difficult for some children to achieve a satisfactory nutritional intake. The study illustrates the importance of observing feeding, preferably in the home. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Abstract The American Academy of Pediatrics (AAP) recently released a policy statement on the issue of hospital discharge of the high-risk neonate., The statement has been developed, to the extent possible, on the basis of published, scientifically derived information. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Human newborns appear to regulate sucking pressure when bottle feeding by employing, with similar precision, the same principle of control evidenced by adults in skilled behavior, such as reaching (Lee et al., 1998a). In particular, the present study of 12 full-term newborn infants indicated that the intraoral sucking pressures followed an internal dynamic prototype – an intrinsic τ-guide. The intrinsic τ-guide, a recent hypothesis of general tau theory is a time-varying quantity, τg, assumed to be generated within the nervous system. 
It corresponds to some quantity (e.g., electrical charge), changing with a constant second-order temporal derivative from a rest level to a goal level, in the sense that τg equals τ of the gap between the quantity and its goal level at each time t. (τ of a gap is the time-to-closure of the gap at the current closure-rate.) According to the hypothesis, the infant senses τp, the τ of the gap between the current intraoral pressure and its goal level, and regulates intraoral pressure so that τp and τg remain coupled in a constant ratio, k; i.e., τp=kτg. With k in the range 0–1, the τ-coupling would result in a bell-shaped rate of change pressure profile, as was, in fact, found. More specifically, the high mean r2 values obtained when regressing τp on τg, for both the increasing and decreasing suction periods of the infants’ suck, supported a strong τ-coupling between τp and τg. The mean k values were significantly higher in the increasing suction period, indicating that the ending of the movement was more forceful, a finding which makes sense given the different functions of the two periods of the suck. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Finding ways to consistently prepare preterm infants and their families for more timely discharge must continue as a focus for everyone involved in the care of these infants in the neonatal intensive care unit. The gold standards for discharge from the neonatal intensive care unit are physiologic stability (especially respiratory stability), consistent weight gain, and successful oral feeding, usually from a bottle. Successful bottle-feeding is considered the most complex task of infancy. Fostering successful oral feeding in preterm infants requires consistently high levels of skilled nursing care, which must begin with accurate assessment of feeding readiness and thoughtful progression to full oral feeding. 
This comprehensive review of the literature provides an overview of the state of the science related to feeding readiness and progression in the preterm infant. The theoretical foundation for feeding readiness and factors that appear to affect bottle-feeding readiness, progression, and success are presented in this article. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> The development of feeding and swallowing is the result of a complex interface between the developing nervous system, various physiological systems, and the environment. The purpose of this article is to review the neurobiology, development, and assessment of feeding and swallowing during early infancy. In recent years, there have been exciting advances in our understanding of the physiology and neurological control of feeding and swallowing. These advances may prove useful in furthering our understanding of the pathophysiology of dysphagia in infancy. Progress in developing standardized, reliable, and valid measures of oral sensorimotor and swallowing function in infancy has been slow. However, there have been significant advances in the instrumental analysis of feeding and swallowing disorders in infancy, including manometric analyses of sucking and swallowing, measures of respiration during feeding, videofluoroscopic swallow evaluations, ultrasonography, and flexible endoscopic examination of swallowing. Further efforts are needed to develop clinical evaluative measures of dysphagia in infancy. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. 
<s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Preterm infants often have difficulties in learning how to suckle from the breast or how to drink from a bottle. As yet, it is unclear whether this is part of their prematurity or whether it is caused by neurological problems. Is it possible to decide on the basis of how an infant learns to suckle or drink whether it needs help and if so, what kind of help? In addition, can any predictions be made regarding the relationship between these difficulties and later neurodevelopmental outcome? We searched the literature for recent insights into the development of sucking and the factors that play a role in acquiring this skill. Our aim was to find a diagnostic tool that focuses on the readiness for feeding or that provides guidelines for interventions. At the same time, we searched for studies on the relationship between early sucking behavior and developmental outcome. It appeared that there is a great need for a reliable, user-friendly and noninvasive diagnostic tool to study sucking in preterm and full-term infants. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Abstract Neonatal motor behavior predicts both current neurological status and future neurodevelopmental outcomes. For speech pathologists, the earliest observable patterned oromotor behavior is su... <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> Preterm infants often display difficulty establishing oral feeding in the weeks following birth. This article aims to provide an overview of the literature investigating the development of feeding skills in preterm infants, as well as of interventions aimed at assisting preterm infants to develop their feeding skills. Available research suggests that preterm infants born at a lower gestational age and/or with a greater degree of morbidity are most at risk of early feeding difficulties. Respiratory disease was identified as a particular risk factor. Mechanisms for feeding difficulty identified in the literature include immature or dysfunctional sucking skills and poor suck–swallow–breath coordination. Available evidence provides some support for therapy interventions aimed at improving feeding skills, as well as the use of restricted milk flow to assist with maintaining appropriate ventilation during feeds. Further research is needed to confirm these findings, as well as to answer remaining clinical questions. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> BACKGROUND ::: One of the most challenging milestones for preterm infants is the acquisition of safe and efficient feeding skills. The majority of healthy full term infants are born with skills to coordinate their suck, swallow and respiration. 
However, this is not the case for preterm infants who develop these skills gradually as they transition from tube feeding to suck feeds. For preterm infants the ability to engage in oral feeding behaviour is dependent on many factors. The complexity of factors influencing feeding readiness has led some researchers to investigate the use of an individualised assessment of an infant's abilities. A limited number of instruments that aim to indicate an individual infant's readiness to commence either breast or bottle feeding have been developed. ::: ::: ::: OBJECTIVES ::: To determine the effects of using a feeding readiness instrument when compared to no instrument or another instrument on the outcomes of time to establish full oral feeding and duration of hospitalisation. ::: ::: ::: SEARCH METHODS ::: We used the standard methods of the Cochrane Neonatal Review Group, including a search of the Cochrane Central Register of Controlled Trials (The Cochrane Library 2010, Issue 2), MEDLINE via EBSCO (1966 to July 2010), EMBASE (1980 to July 2010), CINAHL via EBSCO (1982 to July 2010), Web of Science via EBSCO (1980 to July 2010) and Health Source (1980 to July 2010). Other sources such as cited references from retrieved articles and databases of clinical trials were also searched. We did not apply any language restriction. We updated this search in March 2012. ::: ::: ::: SELECTION CRITERIA ::: Randomised and quasi-randomised trials comparing a formal instrument to assess a preterm infant's readiness to commence suck feeds with either no instrument (usual practice) or another feeding readiness instrument. ::: ::: ::: DATA COLLECTION AND ANALYSIS ::: The standard methods of the Cochrane Neonatal Review Group were used. Two authors independently screened potential studies for inclusion. No studies were found that met our inclusion criteria. ::: ::: ::: MAIN RESULTS ::: No studies met the inclusion criteria. ::: ::: ::: AUTHORS' CONCLUSIONS ::: There is currently no evidence to inform clinical practice, with no studies meeting the inclusion criteria for this review. Research is needed in this area to establish an evidence base for the clinical utility of implementing the use of an instrument to assess feeding readiness in the preterm infant population. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Introduction <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. 
Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. ::: ::: ::: LIMITATIONS ::: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. ::: ::: ::: CONCLUSIONS ::: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB016
|
A recent report of the World Health Organization (WHO) describes how the rate of preterm births is increasing worldwide. This finding is particularly relevant since prematurity is the leading cause of newborn death, and because premature newborns represent a large and ever-increasing population at high risk for chronic diseases and neurodevelopmental problems. Feeding support is one of the strategies reported to reduce deaths among premature infants. Such an intervention requires specifically designed tools to assess oral feeding ability, so as to provide clinicians with new devices that may be used for routine clinical monitoring and decision-making. Several studies BIB008 BIB014 BIB015 stress the importance of introducing oral feeding for preterm infants as early as during the stay in the Neonatal Intensive Care Unit (NICU), highlighting the need for evidence-based clinical tools to assess infants' oral feeding readiness. The need for a reliable assessment of feeding ability is further highlighted by the American Academy of Pediatrics, which includes the attainment of independent oral feeding among the essential criteria for hospital discharge BIB005. The acquisition of efficient Nutritive Sucking (NS) skills is a fundamental and challenging milestone for newborns. It is essential during the first six months of life and requires the complex coordination of three different processes: sucking, swallowing and breathing. The development of such precocious motor skills depends on intact brainstem pathways and cranial nerves; hence, immaturity of the Central Nervous System (CNS) can affect oral motor functions BIB004 and/or cause the inability to successfully perform oral feeding BIB001 BIB002 BIB009 BIB003. NS is one of the most precocious goal-directed actions evident in a newborn's movement repertoire, and it may provide an opportunity to investigate mechanisms of fine motor control in the neonate, as reported by Craig and Lee in BIB006. For these reasons, sucking skills can provide valuable insights into the infant's neurological status and future development BIB012 BIB013 BIB010 BIB016 BIB007. Moreover, since sucking control involves oral motor structures similar to those required for coherent speech production, early sucking problems have also been suggested as predictors of significant delays in the emergence or development of speech-language skills. The importance of early sucking monitoring has been confirmed over the years, and the need for reliable instruments for neonatal sucking assessment is stressed in several works BIB008 BIB015 BIB016 BIB011, even though no standardized instrumental assessment tool exists as yet. NS assessment is in fact part of the clinical evaluation, but it is not carried out objectively. With few objective criteria for assessing its progress in the hospital, and no organized home follow-up care, poor feeding skills may go undetected for too long. Notwithstanding the ongoing development of tools for the assessment of NS, a common approach to this issue is still lacking, which causes variability in the measurements, as highlighted by several authors BIB009 BIB016 BIB011. Such heterogeneity is one of the causes of the discrepant findings reported in the literature, and a major challenge in applying them to clinical practice, as reported by Slattery et al. in 2012 [15].
The use of standard pre-discharge assessment tools may foster the development of common quantitative criteria to assist clinicians in planning clinical interventions. Such devices, or simplified versions of them, might also be adopted for follow-up, as remote monitoring of infants at home after discharge. Section 2 provides a detailed survey of the main quantities and indices measured and/or estimated to characterize sucking skills and their development. Section 3 presents the main characteristics of the technological sensing solutions adopted to measure the previously identified quantities and indices. Finally, we discuss the main functional specifications required of a proper feeding assessment device, and the main advantages and weaknesses of the adopted sensing systems, considering their application to clinical practice or to at-home monitoring as post-discharge assessment tools.
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Preliminary Definitions <s> The sucking rhythms of infants with a benign perinatal course are compared to those of infants with a history of perinatal distress. The ontogenesis of sucking rhythms, and the sucking patterns of children with major congenital malformations of the brain and various metabolic disorders are described. The analysis of rhythms of non-nutritive sucking discriminates to a statistically significant degree between normal infants and infants with a history of perinatal distress who have no gross neurological signs. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Preliminary Definitions <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB002
|
Sucking is one of the first oromotor behaviors to occur in the womb. There are two basic forms of sucking: Non-Nutritive Sucking (NNS), when no nutrient is involved, and Nutritive Sucking (NS), when a nutrient such as milk is ingested from a bottle or breast. A nutritive suck is characterized by the rhythmic alternation of Suction (S), i.e., the creation of a negative Intraoral Pressure (IP) through the depression of jaw and tongue, and Expression (E), i.e., the generation of a positive Expression Pressure (EP) through the compression of the nipple between the tongue and the hard palate. This S/E alternation allows the infant to generate the extraction pressure that draws the fluid from the feeding vessel into the oral cavity. From birth throughout the first six months of life, infants obtain their primary food through NS. During this process, the infant must control the oral sucking pressures to optimize the milk flow from the feeding vessel into the mouth, and to move the expressed milk to the back of the mouth prior to swallowing. The amount of milk entering the mouth dictates the swallow event, which in turn interrupts breathing. Hence, during NS, Sucking (Sk), Swallowing (Sw) and Breathing (B) are closely dependent on each other. This dependence represents another strong difference between NS and NNS: during NNS the demands on swallowing are minimal (the infant only has to handle its own secretions), and respiration can operate independently. Safety in NS implies proper coordination of Sk, Sw and B to avoid aspiration, as the anatomical pathways for air and nutrients share the same pharyngeal tract. During the Sw phase, airflow falls to zero, where it remains for an average duration of 530 ms, to be rapidly restored after this time. This period of flow cessation between functionally significant airflows is usually referred to as "swallow apnea". In full-term healthy infants, the NS process is characterized by a burst-pause sucking pattern: a burst consists of a series of suck events, occurring with a typical frequency of 1 Hz BIB001, and is separated from the following burst by a pause of at least 2 s. This burst-pause pattern evolves during feeding in three stages: continuous, intermittent and paused. At the beginning of a feeding period, infants suck vigorously and continuously, with a stable rhythm and long bursts (continuous sucking phase). This phase is generally followed by an intermittent phase, in which sucks are less vigorous, bursts are shorter and pauses are longer (intermittent sucking phase). The final paused phase is characterized by weak sucks and very short, sporadic bursts. Figure 1 reports a typical 10 s pressure burst: experimental data acquired on healthy subjects and reported in BIB002 showed that intraoral pressure lies in the range [−140, +15] mmHg. The bandwidth of the pressure signal, estimated by computing its Power Spectral Density (PSD) by means of the Welch overlapped segmented average, may be considered well below 20 Hz. Moreover, in a coordinated cycle of NS, a 1:1:1 relational pattern among sucking (S/E), swallowing and breathing is expected, creating a rhythmic unit in which breathing appears uninterrupted (no asphyxia or choking signs).
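To make the signal characteristics above concrete, the short Python sketch below estimates the Power Spectral Density of an intraoral pressure trace with the Welch overlapped segmented average and checks that most of the signal power lies below 20 Hz, as expected for NS recordings. It is only an illustrative sketch: the sampling rate, the variable names and the synthetic ~1 Hz test signal (kept within the reported [−140, +15] mmHg range) are assumptions, not data or code from the cited studies.

```python
# Illustrative sketch (assumed names and synthetic data, not from the cited works):
# estimate the PSD of an intraoral pressure (IP) trace with Welch's overlapped
# segmented average and verify that the signal bandwidth is well below 20 Hz.
import numpy as np
from scipy.signal import welch

fs_hz = 100.0                      # assumed sampling rate (well above 2 x 20 Hz)
t = np.arange(0, 60, 1 / fs_hz)    # one minute of recording

# Synthetic IP trace: ~1 Hz suck rhythm with negative peaks near -100 mmHg,
# plus a small amount of measurement noise.
ip_mmHg = -50.0 - 50.0 * np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None) \
          + np.random.randn(t.size)

# Welch PSD: Hann window, 10 s segments, default 50% overlap and mean removal.
f, psd = welch(ip_mmHg, fs=fs_hz, window='hann', nperseg=int(10 * fs_hz))

# Fraction of total power below 20 Hz (expected to be close to 1 for NS signals)
# and dominant non-DC frequency (expected near the ~1 Hz sucking rhythm).
power_below_20 = np.trapz(psd[f <= 20], f[f <= 20]) / np.trapz(psd, f)
print(f"Fraction of power below 20 Hz: {power_below_20:.3f}")
print(f"Dominant frequency: {f[np.argmax(psd[1:]) + 1]:.2f} Hz")
```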
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVE ::: To describe the bottle-feeding histories of preterm infants and determine physical indices related to and predictive of bottle-feeding initiation and progression. ::: ::: ::: DESIGN ::: Ex post facto. ::: ::: ::: SETTING ::: Academic medical center. 
::: ::: ::: PARTICIPANTS ::: A convenience sample of 40 preterm infants without concomitant cardiac, gastrointestinal, or cognitive impairment. ::: ::: ::: MAIN OUTCOME MEASURES ::: Postconceptional age at first bottle-feeding, full bottle-feeding, and discharge. ::: ::: ::: RESULTS ::: The morbidity rating, using the Neonatal Medical Index (NMI), was most strongly correlated with postconceptional age at first bottle-feeding (r = .34, p < .05), full bottle-feeding (r = .65, p < .01), and discharge (r = .55, p < .05). The morbidity rating also accounted for 12%, 42%, and 30% of the variance in postconceptional age at first bottle-feeding, full bottle-feeding, and discharge, respectively. ::: ::: ::: CONCLUSIONS ::: The NMI may be a useful tool for predicting the initiation and progression of bottle-feeding in preterm infants. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. 
Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. 
::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. 
Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> The development of feeding and swallowing is the result of a complex interface between the developing nervous system, various physiological systems, and the environment. The purpose of this article is to review the neurobiology, development, and assessment of feeding and swallowing during early infancy. In recent years, there have been exciting advances in our understanding of the physiology and neurological control of feeding and swallowing. 
These advances may prove useful in furthering our understanding of the pathophysiology of dysphagia in infancy. Progress in developing standardized, reliable, and valid measures of oral sensorimotor and swallowing function in infancy has been slow. However, there have been significant advances in the instrumental analysis of feeding and swallowing disorders in infancy, including manometric analyses of sucking and swallowing, measures of respiration during feeding, videofluoroscopic swallow evaluations, ultrasonography, and flexible endoscopic examination of swallowing. Further efforts are needed to develop clinical evaluative measures of dysphagia in infancy. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. 
::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 gnormalh infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven gnormalh infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. 
Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten gnormalh infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in gnormalh infants ( p Conclusions The DLT method can be used to evaluate feeding performance without any special device. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. 
<s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). ::: ::: ::: CONCLUSION ::: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Nutritive Sucking Behavior Monitoring and Assessment: Measured Quantities and Principal Sucking Parameters <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. 
The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. ::: ::: ::: LIMITATIONS ::: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. ::: ::: ::: CONCLUSIONS ::: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB020
|
The ability to suck nutritively is not always completely mature at birth and may require time to develop. For immature infants, the developmental complexity of the feeding process can cause a series of difficulties in the initiation and progression of bottle feeding, which is the most frequent indicator of discharge readiness adopted by healthcare personnel BIB003. Bottle feeding has indeed been widely investigated for this reason, and because it allows standardization or control of some feeding characteristics across infants (e.g., liquid composition, nipple hole size, and hydrostatic pressure of milk) BIB009. For the same reasons, this review focuses on the tools adopted for the assessment of infants' NS skills during bottle feeding. The adoption of instrumental measures for this early assessment (as opposed to non-instrumental observational methods) is increasing, because of the growing interest in standardized, reliable, and valid measures of oral sensorimotor function in infancy BIB011. Indeed, such instrumental measures of early oral feeding ability have been reported to be more sensitive and specific in predicting later neurodevelopmental outcomes than non-instrumental observational tools BIB020, whose psychometric properties are still debated. The literature reports the instrumental assessment of NS behavior and of its development through the analysis of a wide variety of indices that can be extracted from the measurements. Identifying the most significant indices may help lead future research to focus on their investigation and on the establishment of normative data for the detection of deviations from the norm. We have therefore focused on providing a survey of the principal indices used for the assessment of NS behavior, as well as of the quantities measured to extract them. Table 1 reports the most significant indices adopted for the instrumental assessment of infants' NS behavior during bottle feeding. The indices are grouped in three main categories according to the final objective of the assessment: (i) to evaluate the level of maturation of oral feeding skills in preterm infants BIB009 BIB005 BIB016 BIB006 BIB013 BIB010 BIB014 BIB002 BIB007 BIB001; (ii) to evaluate or characterize the level of maturation of oral feeding skills in term infants BIB018 BIB008 BIB019 BIB004; and (iii) to provide early detection of later neurodevelopmental outcomes BIB017 BIB012 BIB001 BIB015. Table 2 lists the different physical quantities that have been measured to monitor the NS process and from which the evaluation indices have been extracted. Both tables are organized so as to separate the different components of the NS process, i.e., sucking, swallowing, breathing, and nutrient consumption. For the assessment of preterm infants' maturation in terms of sucking skills during bottle feeding, several indices have been adopted. The organization of sucks into bursts and the establishment of a stable temporal pattern are important developmental steps in the maturation of the sucking component BIB006. Some descriptive parameters represent important indices for the evaluation of this maturation, i.e., the number of sucks per burst and the percentage of sucks in bursts. Moreover, the number of sucks composing the first burst has turned out to be a useful indicator of the feeding outcome BIB013.
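The descriptive indices just mentioned can be obtained directly from the time stamps of detected suck events once a pause criterion is fixed. The following Python sketch is a hypothetical implementation, assuming suck times have already been extracted (e.g., as negative IP peaks) and adopting the ≥2 s pause and ≥3 sucks-per-burst conventions used in several of the reviewed studies; the function and variable names are illustrative, not taken from any cited work.

```python
# Hypothetical helper (assumed names/criteria): group detected suck event times
# into bursts and compute the descriptive indices discussed above.
import numpy as np

def burst_indices(suck_times_s, pause_threshold_s=2.0, min_sucks_per_burst=3):
    """Sucks per burst, % of sucks in bursts and size of the first burst."""
    suck_times_s = np.asarray(suck_times_s, dtype=float)
    if suck_times_s.size == 0:
        return {}
    # A new burst starts wherever the inter-suck interval exceeds the pause threshold.
    gaps = np.diff(suck_times_s)
    burst_starts = np.concatenate(([0], np.where(gaps > pause_threshold_s)[0] + 1))
    runs = np.split(suck_times_s, burst_starts[1:])
    # Keep only runs long enough to count as bursts (>= 3 sucks is a common choice).
    bursts = [r for r in runs if r.size >= min_sucks_per_burst]
    sucks_in_bursts = sum(b.size for b in bursts)
    return {
        "n_bursts": len(bursts),
        "mean_sucks_per_burst": sucks_in_bursts / len(bursts) if bursts else 0.0,
        "percent_sucks_in_bursts": 100.0 * sucks_in_bursts / suck_times_s.size,
        "sucks_in_first_burst": int(bursts[0].size) if bursts else 0,
    }

# Example: three bursts of 8, 6 and 4 sucks at ~1 Hz, separated by 5 s pauses.
example_times = np.concatenate([np.arange(8), 13 + np.arange(6), 24 + np.arange(4)])
print(burst_indices(example_times))
```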
In addition to these descriptive parameters, several temporal parameters appear to be consistent indicators of preterm infants' maturation, such as: sucking frequency (sucks per min), burst duration, inter-burst width, inter-suck width, and an index of rhythmic stability referred to as the Coefficient of Variation of the Sucking process (COV_Sk). This index is adopted by several studies to assess maturational patterns in terms of rhythmicity BIB016 BIB006 BIB014, and it is defined as follows:

$$COV_X = \frac{SD(I_X)}{\overline{I}_X} \qquad (1)$$

where $SD(I_X)$ is the standard deviation and $\overline{I}_X$ the mean value of $I_X$, the time interval between two consecutive events of the considered process $X$ (e.g., the interval between consecutive sucks). All these indices can be calculated from any measured quantity that allows the identification and temporal characterization of suck events, without distinction between the suction and expression components. For example, these parameters can be estimated even through measures of intranipple pressure or chin movements, as in BIB006 BIB013. On the other hand, the specific measurement of the suction component (IP) is frequently used for the assessment of NS skills. It allows the estimation of all the indices already mentioned, as well as of the maximum suction amplitude the infant is able to generate (IP amplitude), which is reported as an indicator of the preterm infant's suction maturation BIB009 BIB005 BIB016 BIB010. However, the maturational process of preterm infants' oral-motor skills has been shown to be characterized by developmental stages defined according to indices of both the expression and suction components BIB005. Preterm infants develop and establish first the expression component, then suction, and finally the rhythmic S/E alternation. Hence, measures of both sucking pressures (IP/EP) are needed to estimate some significant indicators of this maturational progress BIB005 BIB016: S and E rhythmicity, the S:E ratio, the time interval between S and E (S-E interval), and the IP and EP amplitudes. The maturational level of sucking skills in term infants appears to be completely assessable through a set of descriptive and temporal indices that do not require the measurement of both sucking
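As a worked illustration of Equation (1), the sketch below computes COV_Sk and the related temporal indices (sucking frequency and mean inter-suck width) from hypothetical suck event times; the same function applies unchanged to swallow or breath events. Names and example values are assumptions introduced here for illustration only.

```python
# Hedged sketch of Equation (1): coefficient of variation of the inter-event
# interval as an index of rhythmic stability (smaller = more stable rhythm).
import numpy as np

def cov_of_intervals(event_times_s):
    """COV_X = SD(I_X) / mean(I_X), with I_X the interval between consecutive events."""
    intervals = np.diff(np.asarray(event_times_s, dtype=float))
    if intervals.size < 2 or intervals.mean() == 0:
        return np.nan
    return intervals.std(ddof=1) / intervals.mean()

def temporal_indices(suck_times_s):
    """Sucking frequency (sucks/min), mean inter-suck width (s) and COV_Sk."""
    suck_times_s = np.asarray(suck_times_s, dtype=float)
    intervals = np.diff(suck_times_s)
    duration_min = (suck_times_s[-1] - suck_times_s[0]) / 60.0
    return {
        "sucks_per_min": suck_times_s.size / duration_min if duration_min > 0 else np.nan,
        "mean_inter_suck_width_s": float(intervals.mean()),
        "COV_Sk": cov_of_intervals(suck_times_s),
    }

# A stable ~1 Hz rhythm (small jitter) yields a COV_Sk close to zero, whereas an
# irregular suck pattern yields a clearly larger value.
rng = np.random.default_rng(0)
stable_times = np.cumsum(1.0 + 0.05 * rng.standard_normal(30))
irregular_times = np.cumsum(rng.uniform(0.5, 2.0, size=30))
print(temporal_indices(stable_times)["COV_Sk"], temporal_indices(irregular_times)["COV_Sk"])
```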
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: To determine the prevalence and nature of feeding difficulties and oral motor dysfunction among a representative sample of 49 children with cerebral palsy (12 to 72 months of age). ::: ::: ::: STUDY DESIGN ::: A population survey was undertaken by means of a combination of interview and home observational measures. ::: ::: ::: RESULTS ::: Sucking (57%) and swallowing (38%) problems in the first 12 months of life were common, and 80% had been fed nonorally on at least one occasion. More than 90% had clinically significant oral motor dysfunction. One in three (36.2%) was severely impaired and therefore at high risk of chronic undernourishment. There was a substantial discrepancy between the lengthy duration of mealtimes reported by mothers and those actually observed in the home (mean, 19 minutes 21 seconds; range, 5 minutes 21 seconds to 41 minutes 39 seconds). In 60% of the children, severe feeding problems preceded the diagnosis of cerebral palsy. ::: ::: ::: CONCLUSIONS ::: Using a standardized assessment of oral motor function, we found the majority of children to have clinically significant oral motor dysfunction. Contrary to maternal report, mealtimes were relatively brief, and this, combined with the severity of oral motor dysfunction, made it difficult for some children to achieve a satisfactory nutritional intake. The study illustrates the importance of observing feeding, preferably in the home. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. 
However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. 
In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. 
The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. 
Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. 
Coefficients of variation (COVs) of breath-breath (BR-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage of apneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants ≤35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants). The COV of the SW-BR interval was 0.693 [SE 0.059] at <35wks' PMA. Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. ::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 'normal' infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven 'normal' infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure.
Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten 'normal' infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in 'normal' infants. Conclusions The DLT method can be used to evaluate feeding performance without any special device. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> ABSTRACT: Objectives: The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated. Methods: The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor).
The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). ::: ::: ::: CONCLUSION ::: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> BACKGROUND ::: Preterm infants often have difficulty in achieving a coordinated sucking pattern. To analyze the correlation between preterm infants with disorganized sucking and future development, weekly studies were performed of 27 preterm infants from initiation of bottle feeding until a normal sucking pattern was recognized. ::: ::: ::: METHODS ::: A total of 27 preterm infants without brain lesion participated in the present study. Neonatal Oral Motor Assessment Scale (NOMAS) was utilized to evaluate the sucking pattern. Infants who were initially assessed as having disorganized sucking on NOMAS and regained a normal sucking pattern by 37 weeks old were assigned to group I; infants with a persistent disorganized sucking pattern after 37 weeks were assigned to group II. The mental (MDI) and psychomotor (PDI) developmental indices of Bayley Scales of Infant Development, second edition were used for follow-up tests to demonstrate neurodevelopment at 6 months and 12 months of corrected age. 
::: ::: ::: RESULTS ::: At 6 months follow up, subjects in group I had a significantly higher PDI score than group II infants (P= 0.04). At 12 months follow up, group I subjects had a significantly higher score on MDI (P= 0.03) and PDI (P= 0.04). There was also a higher rate for development delay in group II at 6 months (P= 0.05). ::: ::: ::: CONCLUSION ::: NOMAS-based assessment for neonatal feeding performance could be a helpful tool to predict neurodevelopmental outcome at 6 and 12 months. Close follow up and early intervention may be necessary for infants who present with a disorganized sucking pattern after 37 weeks post-conceptional age. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> AIM ::: Early sucking and swallowing problems may be potential markers of neonatal brain injury and assist in identifying those infants at increased risk of adverse outcomes, but the relation between early sucking and swallowing problems and neonatal brain injury has not been established. The aim of the review was, therefore, to investigate the relation between early measures of sucking and swallowing and neurodevelopmental outcomes in infants diagnosed with neonatal brain injury and in infants born very preterm (<32wks) with very low birthweight (<1500g), at risk of neonatal brain injury. ::: ::: ::: METHOD ::: We conducted a systematic review of English-language articles using CINAHL, EMBASE, and MEDLINE OVID (from 1980 to May 2011). Additional studies were identified through manual searches of key journals and the works of expert authors. Extraction of data informed an assessment of the level of evidence and risk of bias for each study using a predefined set of quality indicators. ::: ::: ::: RESULTS ::: A total of 394 abstracts were generated by the search but only nine studies met the inclusion criterion. Early sucking and swallowing problems were present in a consistent proportion of infants and were predictive of neurodevelopmental outcome in infancy in five of the six studies reviewed. ::: ::: ::: LIMITATIONS ::: The methodological quality of studies was variable in terms of research design, level of evidence (National Health and Medical Research Council levels II, III, and IV), populations studied, assessments used and the nature and timing of neurodevelopmental follow-up. ::: ::: ::: CONCLUSIONS ::: Based upon the results of this review, there is currently insufficient evidence to clearly determine the relation between early sucking and swallowing problems and neonatal brain injury. Although early sucking and swallowing problems may be related to later neurodevelopmental outcomes, further research is required to delineate their value in predicting later motor outcomes and to establish reliable measures of early sucking and swallowing function. <s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking <s> OBJECTIVE ::: To examine the association between sucking patterns and the quality of fidgety movements in preterm infants. ::: ::: ::: STUDY DESIGN ::: We studied the sucking patterns and fidgety movements of 44 preterm infants (gestational age <35 weeks) longitudinally from 34 weeks' postmenstrual age up to 14 weeks postterm. We used the Neonatal Oral-Motor Assessment Scale during feeding and scored the sucking patterns as normal or abnormal. Abnormal sucking patterns were categorized into arrhythmic sucking and uncoordinated sucking. 
At 14 weeks postterm, we scored the quality of fidgety movements from videotapes as normal, abnormal, or absent. ::: ::: ::: RESULTS ::: The postmenstrual age at which sucking patterns became normal (median, 48 weeks; range, 34 to >50 weeks) was correlated with the quality of fidgety movements (Spearman ρ = -0.33; P = .035). The percentage per infant of normal and uncoordinated sucking patterns was also correlated with the quality of fidgety movements (ρ = 0.31; P = .048 and ρ = -0.33; P = .032, respectively). Infants with uncoordinated sucking patterns had a higher rate of abnormal fidgety movements (OR, 7.5; 95% CI, 1.4-40; P = .019). ::: ::: ::: CONCLUSION ::: The development of sucking patterns in preterm infants was related to the quality of fidgety movements. Uncoordinated sucking patterns were associated with abnormal fidgety movements, indicating that uncoordinated sucking, swallowing, and breathing may represent neurologic dysfunction. <s> BIB020
|
Measures adopted in the reviewed studies for each monitored function (with the corresponding references):
- Sucking: IP: BIB008 BIB016; EP/IP: BIB010 BIB004 BIB014 BIB009 BIB006; Intranipple pressure: BIB005 BIB007; Chin movements: BIB012; Throat-eye (S) and jaw-eye (E) distance: BIB013
- Swallowing: Pharyngeal pressure: BIB008 BIB005 BIB011 BIB007; Hyoid bone movements: BIB014 BIB009; Swallow sounds
- Breathing: Nasal airflow/Thoracic movements: BIB008 BIB005; Thoracic movements: BIB014 BIB009 BIB011
- Nutrient Consumption: Total transferred nutrient: BIB008 BIB004 BIB006 BIB007; Minute transferred volume: BIB009 BIB001; Transferred milk weight: BIB014
In term infants, several sucking indices have also been estimated from recordings of sucking pressures BIB016 BIB007 BIB017. Almost all of these indices have already been mentioned. An additional one is introduced in BIB016 to quantify the sucking variability through a measure of the suck-to-suck fluctuation in amplitude. The authors refer to it as the inconsistency index and define it as the SD of the ratios of amplitudes of successive sucks within bursts. Moreover, an index of sucking intensity is defined as the mean maximum sucking pressure divided by the mean suck duration, and it appears to be correlated with the efficiency of the sucking pattern. However, as reported in Table 1, some significant indices for the assessment of oral feeding maturation also concern other components of the NS process. Immature NS reflects not only sucking ability, but also the coordination of sucking with swallowing and respiration. Among the principal indices adopted for the evaluation of these coordination skills in preterm infants are the coefficients of variation calculated from the breath-breath and swallow-swallow intervals (COV_B and COV_Sw), which allow the analysis of the feeding-related respiratory and swallowing rhythms BIB011. Another significant index is the percentage of apneic swallows, i.e., the number of swallows occurring in series of at least three swallows not associated with breathing events, divided by the total number of swallows. This index appears to be a clear indicator of maturation in bottle-fed preterm infants, as reported in BIB003, stressing the importance of ventilatory control during feeding. However, the maturation of this aspect is not yet complete at term gestation, hence this index of deglutition apnea is a relevant indicator for the assessment of term infants as well. Moreover, a safe coordination between Sw and B is reported as an important developmental achievement for immature preterm infants, and it is usually assessed as the percent occurrence of a specific Sw-B interface (e.g., Inspiration-Swallow-Expiration, I-S-E) BIB008 BIB014 BIB011. This index is also an important indicator for the assessment of the term infant's feeding pattern, along with the sucking-breathing (Sk-B) interface and the Sk:Sw:B ratio. All these parameters, which evaluate the preterm infant's ability to establish a mature coordination between sucking, swallowing and breathing, can be estimated from different measures of swallowing and breathing (see Table 2), which allow the detection of the relevant events (swallows, inspirations, expirations). For both preterm and term infants, oral feeding performance is usually assessed through indices of sucking efficiency (usually defined as the nutrient volume per suck) and the rate of nutrient intake (intake volume divided by feeding duration), calculated using the different measures of nutrient consumption adopted and reported in Table 2. An alternative definition of the sucking efficiency is the average milk intake per suck divided by the average effect (pressure × duration) per suck.
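To make the computation of these indices concrete, the following Python sketch derives the inconsistency index, the coefficients of variation of the event rhythms (e.g., COV_B and COV_Sw) and the percentage of apneic swallows from event-level data. It is only an illustrative implementation of the definitions given above: it assumes that suck amplitudes and swallow/breath event times (in seconds) have already been segmented from the recorded signals, and the function names are ours rather than those of any cited system.

```python
# Illustrative sketch (not the software of any cited device): variability and
# coordination indices computed from already-segmented feeding events.
import numpy as np

def inconsistency_index(suck_amplitudes):
    """SD of the ratios of amplitudes of successive sucks within a burst."""
    a = np.asarray(suck_amplitudes, dtype=float)
    return float(np.std(a[1:] / a[:-1]))

def coefficient_of_variation(event_times):
    """COV (SD/mean) of the intervals between successive events, e.g.,
    swallow-swallow (COV_Sw) or breath-breath (COV_B) intervals."""
    intervals = np.diff(np.asarray(event_times, dtype=float))
    return float(np.std(intervals) / np.mean(intervals))

def percent_apneic_swallows(swallow_times, breath_times, min_run=3):
    """Percentage of swallows belonging to runs of at least `min_run`
    swallows with no interposed breath (apneic swallows)."""
    swallows = np.sort(np.asarray(swallow_times, dtype=float))
    breaths = np.sort(np.asarray(breath_times, dtype=float))
    apneic, run = 0, 1
    for prev, curr in zip(swallows[:-1], swallows[1:]):
        if np.any((breaths > prev) & (breaths < curr)):  # a breath breaks the run
            if run >= min_run:
                apneic += run
            run = 1
        else:
            run += 1
    if run >= min_run:
        apneic += run
    return 100.0 * apneic / len(swallows)
```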
The bolus size (volume per swallow) is another index of nutrient consumption that makes it possible to assess feeding performance in relation to the swallowing pattern. Nutritive sucking has also been considered an early motor marker for the prediction of later neurodevelopmental outcomes in infants BIB019. Some of the sucking indices already mentioned (Sk frequency, number of sucks per burst, IP amplitude) have turned out to be predictors of later neurological outcomes BIB015. Moreover, with measures of both the suction and expression components (IP and EP), the newborn's sucking pattern has been classified according to the rhythmicity and amplitude of both components, allowing prediction of later neurological development BIB010. S and E have also been assessed through measurements of throat and jaw movements BIB013 (see Section 3 for additional details). The eye-jaw and eye-throat distances have proved useful in identifying differences in feeding performance between healthy infants and infants with neurological disorders. Nutrient consumption is another important factor whose monitoring allows the estimation of significant indices with predictive value. The newborn's feeding behavior, assessed through the milk intake rate (mL/min), has been shown to correlate with later neurodevelopmental outcomes BIB001. No measurements of the other components (breathing, swallowing) of the nutritive sucking process have been carried out for this purpose. The predictive potential of sucking assessment was also confirmed by other authors BIB002 BIB018 BIB020, whose studies are not reported in this work since they adopted non-instrumental assessment tools. The importance of the instrumental monitoring of NS has also been demonstrated in the case of neurodisabled infants with Down's syndrome: the use of sucking pressure waveforms (IP and EP measures) can be helpful in the examination of the development of sucking behavior, intraoral movements and therapeutic effects BIB006. Moreover, problems with sucking and swallowing can be observed in children with cerebral palsy (CP) within the first 12 months of life, often preceding the diagnosis BIB002. These observations emphasize the importance of monitoring feeding behavior, preferably at home, and of taking a careful feeding history.
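As a worked example of the oral feeding performance indices recalled above, the short sketch below computes the rate of nutrient intake, the sucking efficiency and the average bolus size from aggregate quantities measured over a feeding session; the function and argument names, as well as the numbers in the comment, are purely illustrative.

```python
# Illustrative computation of feeding performance indices from session totals.
def feeding_performance(volume_ml, duration_min, n_sucks, n_swallows):
    """Rate of intake (mL/min), sucking efficiency (mL/suck), bolus size (mL/swallow)."""
    return {
        "rate_of_intake_ml_per_min": volume_ml / duration_min,
        "sucking_efficiency_ml_per_suck": volume_ml / n_sucks,
        "bolus_size_ml_per_swallow": volume_ml / n_swallows,
    }

# Example: 30 mL transferred in 10 min with 400 sucks and 150 swallows
# gives 3.0 mL/min, 0.075 mL/suck and 0.20 mL/swallow.
print(feeding_performance(30.0, 10.0, 400, 150))
```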
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. 
<s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BR-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage of apneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants ≤35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants). The COV of the SW-BR interval was 0.693 [SE 0.059] at <35wks' PMA. Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Sucking Behavior Monitoring and Assessment: Technological Solutions and Methods Adopted in Research Studies <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB004
|
In the previous section, we reported the complex set of indices and quantities used to objectively assess infant oral feeding. The measurement of such a heterogeneous set of indices requires several technological solutions that can be grouped into three categories: (i) measuring systems to monitor the sucking process (Table 3); (ii) measuring systems to monitor the swallowing and breathing processes (Table 4); and (iii) measuring systems to monitor nutrient consumption (Table 5). Swallowing and breathing are considered together because several authors BIB004 BIB002 BIB003 BIB001 demonstrated that oral feeding performance in preterm infants depends mainly on their coordination. Table 3. Overview of the measuring systems used to monitor the sucking process: measurands, sensors and measurement procedures.
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. 
Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. 
Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. 
Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. 
It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. 
<s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 'normal' infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven 'normal' infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten 'normal' infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in 'normal' infants. Conclusions The DLT method can be used to evaluate feeding performance without any special device. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited.
Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Suction <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB015
|
| Sucking component | Measured variable | Sensor | Measuring configuration | References |
|---|---|---|---|---|
| Suction | Intraoral Pressure | PT | PT embedded into a catheter and placed at the tip of the nipple | BIB006 BIB014 BIB009 BIB002 BIB005 |
| Suction | Intraoral Pressure | PT | PT connected to a catheter, whose opposite tip ends into the oral cavity lumen | BIB001 BIB011 BIB007 BIB010 BIB008 BIB012 BIB013 BIB004 BIB003 |
| Suction | Intraoral Pressure | PT | PT placed between the nipple and a flow-limiting device (restriction orifice or a capillary tube) | BIB015 |
| Suction | Throat movements | Videocamera and markers (DLT) | Digital video camera at 1 m from the infant's face; markers placed on the lateral angle of the eye and on the throat | BIB012 |
| Expression | Expression Pressure | PT | PT connected to a polyethylene catheter, connected to a catheter of compressible silicone rubber placed on the nipple | BIB014 BIB009 BIB002 BIB005 |
| Expression | Expression Pressure | PT | PT connected to the lumen of the nipple by means of a silicone catheter; one-way valve placed between the nipple chamber and the nutrient reservoir | BIB011 BIB008 BIB013 |
| Expression | Jaw movements | Videocamera and markers (DLT) | Digital video camera at 1 m from the infant's face; markers placed on the lateral angle of the eye and on the tip of the jaw | [44] |
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Energetics and mechanics of sucking in preterm and term neonates were determined by simultaneous records of intraoral pressure, flow, volume, and work of individual sucks. Nine term infants (mean postconceptional age: 38.6 +/- 0.7 SD weeks; mean postnatal age: 18.4 +/- 6.1 SD days) and nine preterm infants (mean postconceptional age: 35.2 +/- 0.7 SD weeks; mean postnatal age: 21.9 +/- 5.4 SD days) were studied under identical feeding conditions. Preterm infants generated significantly lower peak pressure (mean values of 48.5 cm H2O compared with 65.5 cm H2O in term infants; P less than 0.01), and the volume ingested per such was generally less than or equal to 0.5 mL. Term infants demonstrated a higher frequency of sucking, a well-defined suck-pause pattern, and a higher minute consumption of formula. Energy and caloric expenditure estimations revealed significantly lower work performed by preterm infants for isovolumic feeds (1190 g/cm/dL in preterm infants compared with 2030 g.cm/dL formula ingested in term infants; P less than 0.01). Furthermore, work performed by term infants was disproportionately higher for volumes greater than or equal to 0.5 mL ingested. This study indicates that preterm infants expend less energy than term infants to suck the same volume of feed and also describes an objective technique to evaluate nutritive sucking during growth and development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The sucking patterns of 42 healthy full-term and 44 preterm infants whose gestational age at birth was 30.9 +/- 2.1 weeks were compared using the Kron Nutritive Sucking Apparatus for a 5-minute period. The measured pressures were used to calculate six characteristics of the sucking response: maximum pressure generated, amount of nutrient consumed per suck, number of sucks per episode, the duration or width of each suck, the length of time between sucks, and the length of time between sucking episodes. The maximum pressure of the term infant (100.3 +/- 35) was higher, p less than .05, than the maximum pressure of the preterm infant (84 +/- 33). Term infants also consumed more formula per suck (45.3 +/- 20.3 vs. 37.6 +/- 15.9, p less than .05). In addition, they had more sucks/episode (13.6 +/- 8.7 vs. 7.7 +/- 4.1, p less than .001) and maintained the pressure longer for a wider suck width (0.49 +/- 0.1 vs. 0.45 +/- 0.08, p less than .05). Sucking profiles of the preterm infant are significantly different from the full-term infant. These sucking profiles can be developed as a clinically useful tool for nursing practice. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. 
This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Incoordination of sucking, swallowing, and breathing might lead to the decreased ventilation that accompanies bottle feeding in infants, but the precise temporal relationship between these events has not been established. Therefore, we studied the coordination of sucks, swallows, and breaths in healthy infants (8 full-term and 5 preterm). Respiratory movements and airflow were recorded as were sucks and swallows (intraoral and intrapharyngeal pressure). Sucks did not interrupt breathing or decrease minute ventilation during nonnutritive sucking. Minute ventilation during bottle feedings was inversely related to swallow frequency, with elimination of ventilation as the swallowing frequency approached 1.4/s. Swallows were associated with a 600-ms period of decreased respiratory initiation and with a period of airway closure lasting 530 +/- 9.8 (SE) ms. Occasional periods of prolonged airway closure were observed in all infants during feedings. Respiratory efforts during airway closure (obstructed breaths) were common. The present findings indicate that the decreased ventilation observed during bottle feedings is primarily a consequence of airway closure associated with the act of swallowing, whereas the decreased ventilatory efforts result from respiratory inhibition during swallows. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. 
Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. 
<s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To measure infant nutritive sucking reproducibly, nipple flow resistance must be controlled. Previous investigators have accomplished this with flow-limiting venturis, which has two limitations: flow resistance is highly dependent on fluid viscosity and older infants often reject the venturi nipple. This report describes the validation of calibrated-orifice nipples for the measurement of infant nutritive sucking. The flow characteristics of two infant formulas and water through these nipples were not different; those through venturi nipples were (analysis of variance; p < 0.0001). Flow characteristics did not differ among calibrated-orifice nipples constructed from three commercial nipple styles, indicating that the calibrated-orifice design is applicable to different types of baby bottle nipples. Among 3-month-old infants using calibrated-orifice nipples, acceptability was high, and sucking accounted for 85% of the variance in fluid intake during a feeding. We conclude that calibrated-orifice nipples are a valid and acceptable tool for the measurement of infant nutritive sucking. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. 
The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> As a consequence of the fragility of various neural structures, preterm infants born at a low gestation and/or birthweight are at an increased risk of developing motor abnormalities. The lack of a reliable means of assessing motor integrity prevents early therapeutic intervention. In this paper, we propose a new method of assessing neonatal motor performance, namely the recording and subsequent analysis of intraoral sucking pressures generated when feeding nutritively. By measuring the infant's control of sucking in terms of a new development of tau theory, normal patterns of intraoral motor control were established for term infants. Using this same measure, the present study revealed irregularities in sucking control of preterm infants. When these findings were compared to a physiotherapist's assessment six months later, the preterm infants who sucked irregularly were found to be delayed in their motor development. Perhaps a goal-directed behaviour such as sucking control that can be measured objectively at a very young age, could be included as part of the neurological assessment of the preterm infant. More accurate classification of a preterm infant's movement abnormalities would allow for early therapeutic interventions to be realised when the infant is still acquiring the most basic of motor functions. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> The aim of this study was to gain a better understanding of the development of sucking behavior in infants with Down's syndrome. The sucking behavior of 14 infants with Down's syndrome was consecutively studied at 1, 4, 8 and 12 mo of age. They were free from complications that may cause sucking difficulty. The sucking pressure, expression pressure, frequency and duration were measured. 
In addition, an ultrasound study during sucking was performed in sagittal planes. Although levels of the sucking pressure and duration were in the normal range, significant development occurred with time. Ultrasonographic images showed deficiency in the smooth peristaltic tongue movement. ::: ::: ::: ::: Conclusion: The sucking deficiency in Down's syndrome may result from not only hypotonicity of the perioral muscles, lips and masticatory muscles, but also deficiency in the smooth tongue movement. This approach using the sucking pressure waveform and ultrasonography can help in the examination of the development of sucking behavior, intraoral movement and therapeutic effects. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). 
Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. 
<s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Finding ways to consistently prepare preterm infants and their families for more timely discharge must continue as a focus for everyone involved in the care of these infants in the neonatal intensive care unit. The gold standards for discharge from the neonatal intensive care unit are physiologic stability (especially respiratory stability), consistent weight gain, and successful oral feeding, usually from a bottle. Successful bottle-feeding is considered the most complex task of infancy. Fostering successful oral feeding in preterm infants requires consistently high levels of skilled nursing care, which must begin with accurate assessment of feeding readiness and thoughtful progression to full oral feeding. This comprehensive review of the literature provides an overview of the state of the science related to feeding readiness and progression in the preterm infant. The theoretical foundation for feeding readiness and factors that appear to affect bottle-feeding readiness, progression, and success are presented in this article. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. 
A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB018 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract Aim The sucking pattern of term infants is composed of a rhythmic alteration of expression and suction movements. The aim is to evaluate if direct linear transformation (DLT) method could be used for the assessment of infant feeding. Subject and methods A total of 10 gnormalh infants and two infants with neurological disorders were studied using DLT procedures and expression/suction pressure recordings. Feeding pattern of seven gnormalh infants were evaluated simultaneously recording DLT and pressures. The other infants were tested non-simultaneously. We placed markers on the lateral angle of the eye, tip of the jaw, and throat. The faces of infants while sucking were recorded in profile. The jaw and throat movements were calculated using the DLT procedure. Regression analysis was implemented to investigate the relationship between suction and expression pressures and eye–jaw and eye–throat movement. All regression analyses investigated univariate relationships and adjusted for other covariates. Results Ten gnormalh infants demonstrated higher suction pressure than expression pressure, and their throat movements were larger than jaw movements. Two infants with neurological problems did not generate suction pressure and demonstrated larger movements in their jaw than throat. The simultaneous measurement ( n = 7) showed a significant correlation, not only between eye–jaw distance and the expression pressure, but also between eye–throat distance and suction pressure. The change in the eye–jaw distance was smaller than the changes in the eye–throat distance in gnormalh infants ( p Conclusions The DLT method can be used to evaluate feeding performance without any special device. 
<s> BIB019 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. <s> BIB020 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB021 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. 
Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB022 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB023 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Sucking Process <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB024
|
The literature suggests several methods to monitor the Sk process. Such methods rely on Pressure Transducers (PTs), optical motion capture systems, and resistive strain gauges to monitor EP and IP, chin, throat, and jaw movements (see Table 3). PTs are usually adopted to measure both IP and EP. In particular, the measurement of IP is always performed using PTs, but adopting two different nutrient delivery systems. In the first one, a common bottle nipple is used (Figure 2): a catheter is applied to the tip of the nipple for IP measurement, while the nutrient flows from the lumen of the nipple to the infant's mouth through the orifice normally present on the nipple tip. This configuration has been adopted in several studies, which have used two different sensing solutions for IP measurement, depending on the position of the transducer and on its type, as illustrated in Figure 2: a small pressure catheter (e.g., Millar Mikro-Tip SPR-524) is placed directly at the nipple tip BIB010 BIB022 BIB015 BIB005 BIB008 , or a semiconductor PT is connected to the end of a catheter whose tip is placed into the oral cavity BIB003 BIB018 BIB011 BIB016 BIB012 BIB019 BIB021 BIB007 BIB006 . Some studies BIB011 BIB007 BIB006 which use the first configuration specify that the catheter is filled with fluid for a more robust pressure measurement, less sensitive to artifacts; the others using the same configuration do not specify this. The nipple can also be standardized and calibrated so that it responds to a given differential pressure (the difference between intranipple and intraoral pressure) with a known and acceptable milk flow rate, as in BIB018 BIB016 BIB012 BIB009 . In the second configuration (see Figure 3), the nipple is modified to embed a tube for nutrient delivery within the nipple tip, and a second tube is connected to a PT to measure IP. The nipple in this case does not completely resemble an ordinary one: it is not filled with fluid, so expression movements cannot influence the nutrient's release. This configuration implies that the nutrient flows only when the infant develops an appropriate IP. One of the earliest works studying Sk describes the use of a capillary tube as a flow meter. The system adopted in this study is composed of a stoppered burette connected to a capillary tube and then to a nipple. To guarantee a constant delivery pressure equal to the atmospheric one, an opening is placed on a side arm of the burette and always kept at the same height as the nipple. The flow-limiting capillary tube regulates the nutrient flow, introducing a known linear relation between IP and flow throughout the range of infant sucking pressures (taken as 0 to −300 mmHg). Since such an arrangement may be considered a closed hydraulic system, any increase or decrease of pressure applied to the nipple is transmitted to every part of the connected system. In particular, since the capillary can be treated as a concentrated resistance, the main pressure drop occurs along it, and the pressure measured downstream of it equals the desired IP. Figure 3a shows the described configuration, where the PT is placed between the capillary and the nipple. A similar capillary system has also been adopted in later studies BIB002 BIB023 , where the PT is specified to be connected to the oral cavity by means of a second catheter inserted within the nipple, as Figure 3b illustrates.
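To make the use of such a calibrated flow-limiting element concrete, the following minimal sketch (Python) estimates the nutrient flow and the cumulative intake from an IP recording, assuming a purely linear pressure-flow calibration of the capillary. The calibration coefficient, sampling rate and synthetic pressure trace are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def intake_from_ip(ip_mmHg, fs_hz, k_ml_per_min_per_mmHg):
    """Estimate nutrient flow and cumulative intake from an intraoral pressure (IP)
    recording, assuming the flow-limiting capillary provides a linear pressure-flow
    relation (flow = k * |IP|) over the 0 to -300 mmHg sucking-pressure range."""
    suction = np.clip(-np.asarray(ip_mmHg, dtype=float), 0.0, 300.0)  # keep only negative IP, bounded
    flow_ml_per_min = k_ml_per_min_per_mmHg * suction                 # linear calibration (assumed)
    dt_min = 1.0 / (fs_hz * 60.0)                                     # sample duration in minutes
    volume_ml = np.cumsum(flow_ml_per_min) * dt_min                   # pressure-time integral
    return flow_ml_per_min, volume_ml

# Example on a synthetic 2 Hz sucking burst sampled at 100 Hz (illustrative values only)
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ip = -60.0 * np.clip(np.sin(2 * np.pi * 2.0 * t), 0.0, None)  # suction peaks of -60 mmHg
flow, volume = intake_from_ip(ip, fs, k_ml_per_min_per_mmHg=0.05)
print(f"estimated intake over 10 s: {volume[-1]:.2f} mL")
```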
In both of the capillary systems described above, the nipple is stiffened with silicone rubber in order to prevent nutrient delivery through expression movements. With this arrangement, the nutrient flow rate through the tube can be calibrated, so that a certain intraoral pressure provides a known flow rate and the consumption is proportional to the pressure-time integral. This calibration is possible thanks to a proper configuration of the feeding system, which eliminates two influencing factors: the hydrostatic pressure caused by the height of the nutrient level above the infant's mouth, and the gradual vacuum increase inside a sealed nutrient reservoir as the milk flows out. Particular attention has to be paid to limit the effects of these two factors, which alter the net pressure forcing the liquid into the mouth, thus hampering the feeding performance BIB010 . Figure 3c shows another feeding apparatus adopting such expedients. An open reservoir is used to avoid vacuum creation, and the nutrient level is constantly maintained at the height of the catheter tip to eliminate any hydrostatic pressure. This measuring system, adopted in BIB001 , embeds two catheters: one for nutrient delivery into the oral cavity, and a second one ending in the same chamber as the nipple, which is in direct communication with the oral cavity thanks to some holes in the nipple tip. This nutrient delivery system allows the establishment of a linear flow rate in the catheter over a range of 1 to 100 cm H2O of infant sucking pressure. Lang and colleagues BIB024 developed a solution very similar to an ordinary feeding bottle, embedding a nutrient delivery tube, which enables a higher level of portability (see Figure 4a). They use a modified commercial bottle (VentAire feeding bottle, produced by Playtex) where a flow chamber is inserted between the milk reservoir and the outlet. The chamber has an inlet flow-restriction orifice and an anti-backflow valve. The inlet diameter is very small with respect to the outlet, offering a higher resistance to milk flow. The pressure inside the milk reservoir is maintained at the atmospheric value thanks to a gas-permeable, fluid-impermeable membrane. The shape of the bottle reduces the effect of the hydrostatic pressure, allowing easy adjustment of the milk level to that of the infant's mouth. Such a system allows the suction pressure to be monitored by measuring the pressure changes inside the chamber. The system may be modeled by the equivalent electronic circuit reported in Figure 4b. The inlet and outlet diameters are represented by two electrical resistances, R_IN and R_OUT respectively; the PT measuring the pressure (voltage) inside the flow chamber with respect to the atmospheric pressure (GND) is modeled by a voltmeter connected to the measuring node M; finally, the sucking pressure is represented by a voltage generator (Sk). The voltage measured at node M will be V_M = V_Sk · R_IN/(R_IN + R_OUT) BIB017 ; since R_IN >> R_OUT (due to the geometry), V_M may be reasonably assumed to be equal to V_Sk. For the EP measurement, a silicone rubber tube can be placed on the outer surface of the nipple of a feeding bottle BIB022 BIB015 BIB005 BIB008 (see Figure 5a). One end of the catheter (the extremity inside the mouth) is closed, while the other end is connected to a PT by means of a polyethylene catheter. This measuring system presents a limitation due to the rapid reaching of a plateau when the catheter is fully compressed.
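As a numerical illustration of the equivalent-circuit relation V_M = V_Sk · R_IN/(R_IN + R_OUT) described above for the device of Lang and colleagues, the short Python sketch below computes the fraction of the sucking pressure actually seen at the measuring node M for different inlet-to-outlet resistance ratios; the ratios are arbitrary example values, not the actual geometry of the device.

```python
def chamber_pressure_fraction(r_in, r_out):
    """Fraction of the sucking pressure seen at node M of the equivalent
    circuit: V_M / V_Sk = R_IN / (R_IN + R_OUT)."""
    return r_in / (r_in + r_out)

# The larger the inlet-to-outlet resistance ratio, the more closely the
# chamber pressure tracks the actual sucking pressure.
for ratio in (1, 10, 100, 1000):
    frac = chamber_pressure_fraction(r_in=float(ratio), r_out=1.0)
    print(f"R_IN/R_OUT = {ratio:>4}: V_M = {frac:.3f} * V_Sk (error {100 * (1 - frac):.1f}%)")
```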
Alternatively, a PT can be connected to the lumen of the nipple by a silicone catheter to measure EP. In particular, EP is measured using this configuration and adding a one-way valve between the nipple chamber and the nutrient reservoir BIB018 BIB012 BIB021 (see Figure 5b). Such a valve isolates the interior of the nipple from the milk reservoir during the expression phase, ensuring that the nipple is always full. The same configuration without the valve allows monitoring of the intra-nipple pressure changes due to the sucking events, with no S/E distinction BIB013 BIB020 BIB014 BIB004 . Moreover, McGowan et al. BIB006 estimate the net pressure forcing the nutrient out of the nipple as the difference between the intra-nipple pressure and the negative intraoral pressure measured outside the nipple (see Figure 6). The two different sucking components can also be monitored through the measurement of throat and jaw movements. Such movements are assessed adopting two different technological solutions in BIB003 BIB019 . One includes the use of a strain-gauge transducer attached between the infant's forehead and the chin BIB003 , to measure jaw movements associated with mouthing movements. In BIB019 the authors use a camera placed 1 m from the infant and at 90° with respect to the front of the face. Three markers are placed on the infant's face (see Figure 7): one on the lateral Eye Angle (EA), one on the Tip of the Jaw (TJ), and the last one on the Throat (T). To estimate the distance of the markers in the object plane, the Direct Linear Transformation (DLT) method is used. Such a method allows the definition of a linear transformation between the object space and the image-plane reference frame. Considering a point T in the object space, with coordinates (x, y, z), it is mapped onto the image plane as a point t, expressed in the image reference frame as (u, v, d). We can write: (u - u_0, v - v_0, d - d_0)^T = c·R·(x - x_0, y - y_0, z - z_0)^T, where x_0, y_0, z_0 and u_0, v_0, d_0 represent the coordinates of the Projection Center (PC) in the object reference frame and in the image reference frame, respectively; c represents a scale factor and R a transformation matrix which allows the projection from one space to the other. The elements of this matrix are estimated thanks to a preliminary calibration procedure, after which the camera should not be moved. The authors simultaneously recorded IP, EP and two anatomical distances, i.e., the eye-throat and the eye-jaw distance, and proved their correlation with suction and expression pressures, respectively.
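A minimal sketch of the two steps just described is given below (Python/NumPy): applying an already calibrated DLT-style transformation to map marker positions from the object space to the image reference frame, and computing the eye-jaw and eye-throat distances frame by frame. The function names, array shapes, and the idea of passing pre-estimated R, c, and projection-center coordinates are assumptions for illustration; the actual calibration procedure of BIB019 is not reproduced here.

```python
import numpy as np

def project_to_image(points_obj, R, c, obj_center, img_center):
    """Map points from the object space to the image reference frame using the
    calibrated DLT-style relation  t - t0 = c * R * (T - T0).

    points_obj : (n, 3) marker coordinates in the object space
    R          : (3, 3) transformation matrix estimated during calibration
    c          : scalar scale factor
    obj_center : (3,) projection-center coordinates in the object frame (x0, y0, z0)
    img_center : (3,) projection-center coordinates in the image frame (u0, v0, d0)
    """
    points_obj = np.asarray(points_obj, dtype=float)
    offset = (np.asarray(R, dtype=float) @ (points_obj - np.asarray(obj_center)).T).T
    return np.asarray(img_center, dtype=float) + c * offset

def eye_marker_distances(eye, jaw, throat):
    """Per-frame eye-jaw and eye-throat distances from (n_frames, 2) marker tracks;
    their changes are expected to follow expression and suction activity, respectively."""
    eye, jaw, throat = (np.asarray(a, dtype=float) for a in (eye, jaw, throat))
    return np.linalg.norm(jaw - eye, axis=1), np.linalg.norm(throat - eye, axis=1)
```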
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> The purpose of this study was to examine the concurrent validity of the Whitney strain gage for the measurement of nutritive sucking in preterm infants. Ten preterm infants were studied continuously during at least one entire bottle feeding per week, from admission into the study until discharge from the nursery. Sucking was measured simultaneously by an adapted nipple and the Whitney gage. The two instruments were compared on the following measures: number of sucking bursts, number of sucks per burst, and duration of bursts and pauses between bursts. Total percent agreement for the occurrence of a sucking burst was 99.3% (K = .99). Sucks per burst varied from 2 to 113, with 89.3% of the pairs of sucking bursts differing by < or = 1 suck per burst. The mean absolute difference between the two instruments for the duration of sucking bursts and pauses was .64 s and .72 s, respectively. These results demonstrate the concurrent validity of the Whitney gage for measurement of sucking events in preterm infants. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> ABSTRACTPURPOSEThe purpose of this study was to examine the effect of prefeeding non-nutritive sucking (NNS) on breathing, nutritive sucking (NS), and behavioral characteristics of bottle feeding.SUBJECTSThe convenience sample was composed of 10 preterm infants who were 33 to 40 weeks postconceptual <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Figure 7. <s> OBJECTIVE ::: This study examined the relationship between the number of sucks in the first nutritive suck burst and feeding outcomes in preterm infants. The relationships of morbidity, maturity, and feeding experience to the number of sucks in the first suck burst were also examined. ::: ::: ::: METHODS ::: A non-experimental study of 95 preterm infants was used. Feeding outcomes included proficiency (percent consumed in first 5 min of feeding), efficiency (volume consumed over total feeding time), consumed (percent consumed over total feeding), and feeding success (proficiency >or=0.3, efficiency >or=1.5 mL/min, and consumed >or=0.8). Data were analyzed using correlation and regression analysis. ::: ::: ::: RESULTS AND CONCLUSIONS ::: There were statistically significant positive relationships between number of sucks in the first burst and all feeding outcomes-proficiency, efficiency, consumed, and success (r=0.303, 0.365, 0.259, and tau=0.229, P<.01, respectively). The number of sucks in the first burst was also positively correlated to behavior state and feeding experience (tau=0.104 and r=0.220, P<.01, respectively). Feeding experience was the best predictor of feeding outcomes; the number of sucks in the first suck burst also contributed significantly to all feeding outcomes. The findings suggest that as infants gain experience at feeding, the first suck burst could be a useful indicator for how successful a particular feeding might be. <s> BIB003
|
Position of the marker on the throat region, for DLT method application, is determined by first locating three facial markers: the external eye angle (A), the tip of the jaw (B) and the throat region (C). In other works BIB003 BIB001 , mercury-in-rubber strain gauges are used to monitor chin movements. Such sensors are connected to a plethysmograph that detects the changes in electrical resistance of the gauges as they are stretched by sucking activity. The strain gauge is kept under tension during measurements, being stretched by at least 10% to 20% beyond its resting length before application. Such a setup has proven reliable for sucking monitoring, even distinguishing chewing on the nipple and other non-sucking activity from true sucking BIB002 .
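For completeness, the small Python sketch below shows how such a resistance recording could be turned into a chin-movement (strain) signal and a suck count. The gauge factor, detection threshold and refractory interval are illustrative assumptions, not parameters reported in the cited works.

```python
import numpy as np

def strain_from_resistance(r_ohm, r_rest_ohm, gauge_factor=2.0):
    """Convert a stretched-gauge resistance recording into strain using the
    standard relation dR/R = GF * strain (GF ~ 2 assumed here)."""
    return (np.asarray(r_ohm, dtype=float) - r_rest_ohm) / (r_rest_ohm * gauge_factor)

def count_sucks(strain, threshold, min_gap_samples):
    """Count sucking events as upward threshold crossings of the chin-movement
    signal, ignoring crossings closer than min_gap_samples to the previous one."""
    strain = np.asarray(strain, dtype=float)
    above = strain > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    kept, last = [], -min_gap_samples
    for i in onsets:
        if i - last >= min_gap_samples:
            kept.append(i)
            last = i
    return len(kept), np.asarray(kept)
```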
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Incoordination of sucking, swallowing, and breathing might lead to the decreased ventilation that accompanies bottle feeding in infants, but the precise temporal relationship between these events has not been established. Therefore, we studied the coordination of sucks, swallows, and breaths in healthy infants (8 full-term and 5 preterm). Respiratory movements and airflow were recorded as were sucks and swallows (intraoral and intrapharyngeal pressure). Sucks did not interrupt breathing or decrease minute ventilation during nonnutritive sucking. Minute ventilation during bottle feedings was inversely related to swallow frequency, with elimination of ventilation as the swallowing frequency approached 1.4/s. Swallows were associated with a 600-ms period of decreased respiratory initiation and with a period of airway closure lasting 530 +/- 9.8 (SE) ms. Occasional periods of prolonged airway closure were observed in all infants during feedings. Respiratory efforts during airway closure (obstructed breaths) were common. The present findings indicate that the decreased ventilation observed during bottle feedings is primarily a consequence of airway closure associated with the act of swallowing, whereas the decreased ventilatory efforts result from respiratory inhibition during swallows. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. 
To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. 
Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The coordination between swallowing and respiration is essential for safe feeding, and noninvasive feeding-respiratory instrumentation has been used in feeding and dysphagia assessment. Sometimes there are differences of interpretation of the data produced by the various respiratory monitoring techniques, some of which may be inappropriate for observing the rapid respiratory events associated with deglutition. Following a review of each of the main techniques employed for recording resting, pre-feeding, feeding, and post-feeding respiration on different subject groups (infants, children, and adults), a critical comparison of the methods is illustrated by simultaneous recordings from various respiratory transducers. As a result, a minimal combination of instruments is recommended which can provide the necessary respiratory information for routine feeding assessments in a clinical environment. <s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Twenty healthy preterm infants (gestational age 26 to 33 weeks, postmenstrual age [PMA] 32.1 to 39.6 weeks, postnatal age [PNA] 2.0 to 11.6 weeks) were studied weekly from initiation of bottle feeding until discharge, with simultaneous digital recordings of pharyngeal and nipple (teat) pressure and nasal thermistor and thoracic strain gauge readings. The percentage of sucks aggregated into 'runs' (defined as > or = 3 sucks with < or = 2 seconds between suck peaks) increased over time and correlated significantly with PMA (r=0.601, p<0.001). The length of the sucking-runs also correlated significantly with PMA (r=0.613, p<0.001). 
The stability of sucking rhythm, defined as a function of the mean/SD of the suck interval, was also directly correlated with increasing PMA (r=0.503, p=0.002), as was increasing suck rate (r=0.379, p<0.03). None of these measures was correlated with PNA. Similarly, increasing PMA, but not PNA, correlated with a higher percentage of swallows in runs (r=0.364, p<0.03). Stability of swallow rhythm did not change significantly from 32 to 40 weeks' PMA. In low-risk preterm infants, increasing PMA is correlated with a faster and more stable sucking rhythm and with increasing organization into longer suck and swallow runs. Stable swallow rhythm appears to be established earlier than suck rhythm. The fact that PMA is a better predictor than PNA of these patterns lends support to the concept that these patterns are innate rather than learned behaviors. Quantitative assessment of the stability of suck and swallow rhythms in preterm infants may allow prediction of subsequent feeding dysfunction as well as more general underlying neurological impairment. Knowledge of the normal ontogeny of the rhythms of suck and swallow may also enable us to differentiate immature (but normal) feeding patterns in preterm infants from dysmature (abnormal) patterns, allowing more appropriate intervention measures. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. 
<s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. ::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. 
It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> The premature infant has limited ability to integrate the swallowing-breathing cycle during feeding. The aim of this study was to assess the pattern of swallowing between the period of tube-bottle (TBF) and bottle (BF) feeding by means of cervical auscultation in premature infants. Twenty-three premature infants were enrolled (mean gestational age 34.7 +/- 1.7 weeks). Audiosignal recordings were made during TBF and BF with a small microphone set in front of the cricoid cartilage. The following parameters were calculated for 2 min and reported at 1 min: the percentage of time involved in swallowing (ST), the numbers of swallows (SN) and swallowing bursts (SB) and swallowing groups (SG). Individual histograms were established to show the individual pattern of swallowing behaviour and the distribution of groups, bursts and swallows over 2 min. Mean (STm), (SNm), (SBm), (SGm) values were calculated (+/- S.D.). Statistical analysis was used to compare the means and to establish correlations between parameters and curves. (STm), (SNm) and (SBm) increased significantly during BF compared with TBF for all premature infants and during follow-up. The histograms showed that in BF the groups were high in bursts. These findings and the histograms for each infant will allow determination of transition to bottle feeding without risk corresponding to the stage of maturation of swallowing function. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> To study the coordination of respiration and swallow rhythms we assessed feeding episodes in 20 preterm infants (gestational age range at birth 26-33wks; postmenstrual age [PMA] range when studied 32-40wks) and 16 term infants studied on days 1 to 4 (PMA range 37-41wks) and at 1 month (PMA range 41-45wks). A pharyngeal pressure transducer documented swallows and a thoracoabdominal strain gauge recorded respiratory efforts. Coefficients of variation (COVs) of breath-breath (BBr-BR) and swallow-breath (SW-BR) intervals during swallow runs, percentage ofapneic swallows (at least three swallows without interposed breaths), and phase of respiration relative to swallowing efforts were analyzed. Percentage of apneic swallows decreased with increasing PMA (16.6% [SE 4.7] in preterm infants s35wks' PMA; 6.6% [SE 1.6] in preterms >35wks; 1.5% [SE 0.4] in term infants; p 35wks' PMA; 0.693 [SE 0.059] at <35wks' PMA). Phase relation between swallowing and respiration stabilized with increasing PMA, with decreased apnea, and a significant increase in percentage of swallows occurring at end-inspiration. These data indicate that unlike the stabilization of suck and suck-swallow rhythms, which occur before about 36 weeks' PMA, improvement in coordination of respiration and swallow begins later. Coordination of swallow-respiration and suck-swallow rhythms may be predictive of feeding, respiratory, and neurodevelopmental abnormalities. 
<s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Abstract A dynamical systems approach to infant feeding problems is presented. A theoretically motivated analysis of coordination among sucking, swallowing, and breathing is at the heart of the approach. Current views in neonatology and allied medical disciplines begin their analysis of feeding problems with reference to descriptive phases of moving fluid from the mouth to the gut. By contrast, in a dynamical approach, sucking, swallowing, and breathing are considered as a synergy characterized by more or less stable coordination patterns. Research with healthy and at-risk groups of infants is presented to illustrate how coordination dynamics distinguish safe swallowing from patterns of swallowing and breathing that place premature infants at risk for serious medical problems such as pneumonia. Coordination dynamics is also the basis for a new medical device: a computer-controlled milk bottle that controls milk flow on the basis of the infant's coordination patterns. The device is designed so that infants... <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Swallowing and Breathing Processes <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB014
|
The evaluation indices for swallowing and breathing concern rhythmicity and the interfacing of the two processes, rather than their "extent" (see Table 1 ). Solutions of different complexity have been adopted to monitor them. Sw events are often detected by monitoring the pharyngeal pressure with a PT connected to a tube inserted transnasally as far as the pharynx BIB009 BIB007 BIB012 BIB008 BIB003 BIB001 . Alternatively, a PT connected to a small drum placed on the hyoid region of the infant's neck is used to monitor the hyoid bone movements related to swallowing BIB014 BIB010 BIB004 : the upward movement of this bone, caused by Sw, results in a biphasic pressure wave in the drum, and the peak pressure deflection is generally used as the marker of Sw. Other authors BIB002 BIB013 use microphones placed on the infant's throat to record swallow sounds. Such microphones need to be small enough to be effectively applied to infants, and their bandwidth should cover the range from 100 Hz to 8 kHz . Moreover, they need to be shielded from external noise, and possible interference, such as babbling, has to be filtered out. The acoustic technique for recording swallowing in premature infants is further investigated in BIB011 .

The breathing process is monitored by measuring nasal airflow and/or respiratory movements. The different sensing solutions for respiratory monitoring during feeding in infants are shown in Figure 8 . Nasal thermistors or thermocouples placed below the nostrils are used to measure airflow; however, on their own they cannot distinguish inspiration from expiration, so additional sensors are required. To clearly identify the flow direction, a PT connected to the nostrils by a soft catheter can be used BIB002 . Catheter and thermistor are embedded in a rigid tool (see Figure 9b) that is kept in the infant's nostrils during feeding and can record very low airflows (discrimination threshold below 0.5 L/min) without adding any significant resistance to the flow. Another solution that measures both the airflow and its direction is a miniaturized pneumotachograph connected to a pressure transducer and placed in a nostril. Such a nasal flowmeter has proven suitable for preterm infants BIB005 because of its low dead space (less than 0.11 mL), low resistance (0.1 mm H2O/mL·s), light weight (0.2 g) and compact design. Airflow monitoring is also widely performed with thermistors because of their rapid response to flow changes BIB002 BIB009 BIB007 BIB008 BIB013 ; however, they are prone to artifacts caused by temperature equilibration when airflow stops BIB006 . To avoid this problem, many authors monitor breathing by measuring thoracic movements (see Figure 9a). Such measurements reflect changes in lung inflation through chest and abdominal movements and give the precise timing of the end of inspiration and expiration, although they do not provide quantitative measures such as tidal volume or minute ventilation. Mercury-in-rubber or piezo-resistive strain gauges (respiratory bands) have been used to measure chest movements BIB009 BIB007 BIB012 BIB003 , as well as PTs connected to a drum taped at the thoraco-abdominal junction BIB014 BIB010 .

Figure 9. Devices used for breathing monitoring. (a) Nasal thermistor or thermocouple applied below the nostrils for nasal airflow measurement; pressure drum or strain gauge band on the chest for the measurement of respiratory movements; (b) Rigid tool applied into the nostrils: the thermistor and the PT are used to assess the airflow and its direction, respectively.
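Whatever the sensing hardware, processing a recorded pharyngeal (or drum) pressure trace typically reduces to detecting swallow peaks and then computing rhythm statistics such as swallow rate, swallow-swallow intervals and their coefficient of variation. The following Python sketch illustrates such a pipeline; it is a minimal example rather than the method of any cited study, and the filter cut-off, peak prominence and minimum inter-swallow interval are illustrative assumptions that would need tuning on annotated recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_swallows(pressure, fs, min_prominence=5.0, min_interval_s=0.4):
    """Detect swallow events as peaks in a pharyngeal/drum pressure trace.

    pressure       : 1-D array of pressure samples (e.g., in cmH2O).
    fs             : sampling frequency in Hz.
    min_prominence : minimum peak prominence accepted as a swallow (illustrative).
    min_interval_s : minimum separation between two distinct swallows, in seconds.
    """
    # Low-pass filter (10 Hz cut-off, illustrative): swallow pressure waves are
    # slow events, so higher-frequency noise is removed before peak detection.
    b, a = butter(2, 10.0 / (fs / 2.0), btype="low")
    smoothed = filtfilt(b, a, pressure)

    # The peak pressure deflection is used as the marker of each swallow.
    peaks, _ = find_peaks(smoothed,
                          prominence=min_prominence,
                          distance=max(1, int(min_interval_s * fs)))

    t_swallows = peaks / fs                 # swallow times, s
    intervals = np.diff(t_swallows)         # swallow-swallow intervals, s
    duration_min = len(pressure) / fs / 60.0
    stats = {
        "n_swallows": int(len(peaks)),
        "swallow_rate_per_min": len(peaks) / duration_min if duration_min > 0 else float("nan"),
        # Coefficient of variation of the interval, a common rhythmic-stability index.
        "interval_cov": float(np.std(intervals) / np.mean(intervals)) if len(intervals) > 1 else float("nan"),
    }
    return t_swallows, stats
```

The same peak-based logic can be applied, with different thresholds, to the envelope of a throat-microphone signal, which is one reason the choice among the sensing solutions above is driven more by practicality and artifact robustness than by the downstream analysis.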
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Energetics and mechanics of sucking in preterm and term neonates were determined by simultaneous records of intraoral pressure, flow, volume, and work of individual sucks. Nine term infants (mean postconceptional age: 38.6 +/- 0.7 SD weeks; mean postnatal age: 18.4 +/- 6.1 SD days) and nine preterm infants (mean postconceptional age: 35.2 +/- 0.7 SD weeks; mean postnatal age: 21.9 +/- 5.4 SD days) were studied under identical feeding conditions. Preterm infants generated significantly lower peak pressure (mean values of 48.5 cm H2O compared with 65.5 cm H2O in term infants; P less than 0.01), and the volume ingested per such was generally less than or equal to 0.5 mL. Term infants demonstrated a higher frequency of sucking, a well-defined suck-pause pattern, and a higher minute consumption of formula. Energy and caloric expenditure estimations revealed significantly lower work performed by preterm infants for isovolumic feeds (1190 g/cm/dL in preterm infants compared with 2030 g.cm/dL formula ingested in term infants; P less than 0.01). Furthermore, work performed by term infants was disproportionately higher for volumes greater than or equal to 0.5 mL ingested. This study indicates that preterm infants expend less energy than term infants to suck the same volume of feed and also describes an objective technique to evaluate nutritive sucking during growth and development. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Non-invasive, sensitive equipment was designed to record nasal air flow, the timing and volume of milk flow, intraoral pressure and swallowing in normal full-term newborn babies artificially fed under strictly controlled conditions. Synchronous recordings of these events are presented in chart form. Interpretation of the charts, with the aid of applied anatomy, suggests an hypothesis of the probable sequence of events during an ideal feeding cycle under the test conditions. This emphasises the importance of complete coordination between breathing, sucking and swallowing. The feeding respiratory pattern and its relationship to the other events was different from the non-nutritive respiratory pattern. The complexity of the coordinated patterns, the small bolus size which influenced the respiratory pattern, together with the coordination of all these events when milk was present in the mouth, emphasise the importance of the sensory mechanisms. The discussion considers (1) the relationship between these results, those reported by other workers under other feeding conditions and the author's (WGS) clinical experience, (2) factors which appear to be essential to permit conventional bottle feeding and (3) the importance of the coordination between the muscles of articulation, by which babies obtain their nourishment in relation to normal development and maturation. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> During feeding, infants have been found to decrease ventilation in proportion to increasing swallowing frequency, presumably as a consequence of neural inhibition of breathing and airway closure during swallowing. 
To what extent infants decrease ventilatory compromise during feeding by modifying feeding behavior is unknown. We increased swallowing frequency in infants by facilitating formula flow to study potential ventilatory sparing mechanisms. We studied seven full-term healthy infants 5-12 days of age. Nasal air flow and tidal volume were recorded with a nasal flowmeter. Soft fluid-filled catheters in the oropharynx and bottle recorded swallowing and sucking activity, and volume changes in the bottle were continuously measured. Bottle pressure was increased to facilitate formula flow. Low- and high-pressure trials were then compared. With the change from low to high pressure, consumption rate increased, as did sucking and swallowing frequencies. This change reversed on return to low pressure. Under high-pressure conditions, we saw a decrease in minute ventilation as expected. With onset of high pressure, sucking and swallowing volumes increased, whereas duration of airway closure during swallows remained constant. Therefore, increased formula consumption was associated with reduced ventilation, a predictable consequence of increased swallowing frequency. However, when consumption rate was high, the infant also increased swallowing volume, a tactic that is potentially ventilatory sparing as a lower swallowing frequency is required to achieve the increased consumption rate. As well, when consumption rate is low, the sucking-to-swallowing ratio increases, again potentially conserving ventilation by decreasing swallowing frequency much more than if the sucking-to-swallowing ratio was constant. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Abstract To gain a better understanding of the development of sucking behavior in low birth weight infants, the aims of this study were as follows: (1) to assess these infants' oral feeding performance when milk delivery was unrestricted, as routinely administered in nurseries, versus restricted when milk flow occurred only when the infant was sucking; (2) to determine whether the term sucking pattern of suction/expression was necessary for feeding success; and (3) to identify clinical indicators of successful oral feeding. Infants (26 to 29 weeks of gestation) were evaluated at their first oral feeding and on achieving independent oral feeding. Bottle nipples were adapted to monitor suction and expression. To assess performance during a feeding, proficiency (percent volume transferred during the first 5 minutes of a feeding/total volume ordered), efficiency (volume transferred per unit time), and overall transfer (percent volume transferred) were calculated. Restricted milk flow enhanced all three parameters. Successful oral feeding did not require the term sucking pattern. Infants who demonstrated both a proficiency ≥30% and efficiency ≥1.5 ml/min at their first oral feeding were successful with that feeding and attained independent oral feeding at a significantly earlier postmenstrual age than their counterparts with lower proficiency, efficiency, or both. 
Thus a restricted milk flow facilitates oral feeding in infants younger than 30 weeks of gestation, the term sucking pattern is not necessary for successful oral feeding, and proficiency and efficiency together may be used as reliable indicators of early attainment of independent oral feeding in low birth weight infants.(J Pediatr 1997;130:561-9) <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> The purpose of this study was to compare the mechanics of sucking for 48 term infants with four different nipple units: Gerber Newborn (Gerber Products Company, Fremont, Mich.), Playtex (Playtex Products, Westport, Conn.), Evenflo (Evenflo Products Co., Canton, Ga.), and Gerber NUK. At 24 hours after birth, infants were assigned randomly to one of the nipple units and were studied twice with that nipple unit. A customized data acquisition system was used to measure and record the following variables: intraoral suction, sucking frequency, work, power, milk flow, milk volume per suck, and oxygen saturation. Although no statistically significant differences among the nipple units were noted for intraoral suction, sucking frequency, power, and oxygen saturation, the data revealed that the Playtex nipple unit was accompanied by higher peak milk flow and greater volume of milk per suck ( p <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. 
<s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> To quantify parameters of rhythmic suckle feeding in healthy term infants and to assess developmental changes during the first month of life, we recorded pharyngeal and nipple pressure in 16 infants at 1 to 4 days of age and again at 1 month. Over the first month of life in term infants, sucks and swallows become more rapid and increasingly organized into runs. Suck rate increased from 55/minute in the immediate postnatal period to 70/minute by the end of the first month (p<0.001). The percentage of sucks in runs of ≧3 increased from 72.7% (SD 12.8) to 87.9% (SD 9.1; p=0.001). Average length of suck runs also increased over the first month. Swallow rate increased slightly by the end of the first month, from about 46 to 50/minute (p=0.019), as did percentage of swallows in runs (76.8%, SD 14.9 versus 54.6%, SD 19.2;p=0.002). Efficiency of feeding, as measured by volume of nutrient per suck (0.17, SD 0.08 versus 0.30, SD 0.11cc/suck; p=0.008) and per swallow (0.23, SD 0.11 versus 0.44, SD 0.19 cc/swallow; p=0.002), almost doubled over the first month. The rhythmic stability of swallow-swallow, suck-suck, and suck-swallow dyadic interval, quantified using the coefficient of variation of the interval, was similar at the two age points, indicating that rhythmic stability of suck and swallow, individually and interactively, appears to be established by term. Percentage of sucks and swallows in 1:1 ratios (dyads), decreased from 78.8% (SD 20.1) shortly after birth to 57.5% (SD 25.8) at 1 month of age (p=0.002), demonstrating that the predominant 1:1 ratio of suck to swallow is more variable at 1 month, with the addition of ratios of 2:1, 3:1, and so on, and suggesting that infants gain the ability to adjust feeding patterns to improve efficiency. Knowledge of normal development in term infants provides a gold standard against which rhythmic patterns in preterm and other high-risk infants can be measured, and may allow earlier identification of infants at risk of neurodevelopmental delay and feeding disorders. <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> UNLABELLED ::: Safe oral feeding of infants necessitates the coordination of suck-swallow-breathe. Healthy full-term infants demonstrate such skills at birth. But, preterm infants are known to have difficulty in the transition from tube to oral feeding. ::: ::: ::: AIM ::: To examine the relationship between suck and swallow and between swallow and breathe. It is hypothesized that greater milk transfer results from an increase in bolus size and/or swallowing frequency, and an improved swallow-breathe interaction. ::: ::: ::: METHODS ::: Twelve healthy preterm (<30 wk of gestation) and 8 full-term infants were recruited. Sucking (suction and expression), swallowing, and respiration were recorded simultaneously when the preterm infants began oral feeding (i.e. taking 1-2 oral feedings/d) and at 6-8 oral feedings/d. The full-term infants were similarly monitored during their first and 2nd to 4th weeks. Rate of milk transfer (ml/min) was used as an index of oral feeding performance. Sucking and swallowing frequencies (#/min), average bolus size (ml), and suction amplitude (mmHg) were measured. 
::: ::: ::: RESULTS ::: The rate of milk transfer in the preterm infants increased over time and was correlated with average bolus size and swallowing frequency. Average bolus size was not correlated with swallowing frequency. Bolus size was correlated with suction amplitude, whereas the frequency of swallowing was correlated with sucking frequency. Preterm infants swallowed preferentially at different phases of respiration than those of their full-term counterparts. ::: ::: ::: CONCLUSION ::: As feeding performance improved, sucking and swallowing frequency, bolus size, and suction amplitude increased. It is speculated that feeding difficulties in preterm infants are more likely to result from inappropriate swallow-respiration interfacing than suck-swallow interaction. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> OBJECTIVES ::: Our objectives were to establish normative maturational data for feeding behavior of preterm infants from 32 to 36 weeks of postconception and to evaluate how the relation between swallowing and respiration changes with maturation. ::: ::: ::: STUDY DESIGN ::: Twenty-four infants (28 to 31 weeks of gestation at birth) without complications or defects were studied weekly between 32 and 36 weeks after conception. During bottle feeding with milk flowing only when infants were sucking, sucking efficiency, pressure, frequency, and duration were measured and the respiratory phase in which swallowing occurs was also analyzed. Statistical analysis was performed by repeated-measures analysis of variance with post hoc analysis. ::: ::: ::: RESULTS ::: The sucking efficiency significantly increased between 34 and 36 weeks after conception and exceeded 7 mL/min at 35 weeks. There were significant increases in sucking pressure and frequency as well as in duration between 33 and 36 weeks. Although swallowing occurred mostly during pauses in respiration at 32 and 33 weeks, after 35 weeks swallowing usually occurred at the end of inspiration. ::: ::: ::: CONCLUSIONS ::: Feeding behavior in premature infants matured significantly between 33 and 36 weeks after conception, and swallowing infrequently interrupted respiration during feeding after 35 weeks after conception. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. 
::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Measuring Systems to Monitor Nutrient Consumption <s> The Sucking Efficiency (SEF) is one of the main parameters used to monitor and assess the sucking pattern development in infants. Since Nutritive Sucking (NS) is one of the earliest motor activity performed by infants, its objective monitoring may allow to assess neurological and motor development of newborns. This work proposes a new ecological and low-cost method for SEF monitoring, specifically designed for feeding bottles. The methodology, based on the measure of the hydrostatic pressure exerted by the liquid at the teat base, is presented and experimentally validated at different operative conditions. Results show how the proposed method allows to estimate the minimum volume an infant ingests during a burst of sucks with a relative error within the range of [3-7]% depending on the inclination of the liquid reservoir. <s> BIB012
|
Nutrient consumption is often estimated by measuring the residual nutrient volume at the end of the feeding session (total consumption) or at variable time intervals. This measurement is frequently performed by observing the liquid level in a graduated reservoir BIB006 BIB008 or by using a balance BIB010 BIB004 . Many authors do not even mention how the measurement is taken, despite reporting the total ingested nutrient volume BIB009 BIB007 . Measurements of nutrient consumption at very close time intervals have also been adopted BIB011 BIB003 BIB012 . The volume of delivered milk can be estimated by measuring, with a PT, the changes in air pressure inside a closed bottle (vacuum build-up) while the liquid flows out, as reported in BIB011 . A PT can also be used to measure the hydrostatic pressure of the remaining liquid column in a cylindrical reservoir in order to estimate the residual volume of liquid, as presented in BIB012 . In this work, the PT was connected to an air-filled catheter ending at the base of an inverted bottle where the liquid column lay. The sensing system also included an accelerometer to estimate the bottle tilt and correct for its influence on the hydrostatic pressure. The same principle was adopted by Al-Sayed BIB003 , who measured the hydrostatic pressure of the residual volume of milk, but in a more controlled setup where the reservoir was fixed and could not be tilted. This approach made it possible to measure the residual nutrient volume in the bottle at variable time intervals, whenever there was no sucking activity. The nutrient flow has also been measured directly with dedicated flow meters BIB002 BIB001 BIB005 . In BIB005 , the authors use an ultrasonic flow transducer to measure the liquid flow between the feeding bottle and the tip of the feeding nipple. In the other two studies, the milk flow is estimated from measurements of the airflow entering the reservoir to fill the void left by the milk, using a pneumotachometer BIB001 or a thermistor BIB002 . In addition to all these methods, many studies, as already described in Section 3.1, estimated the milk flow rate through a calibration of the nutrient delivery system.
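As a concrete illustration of the hydrostatic principle exploited in BIB012 and BIB003 , the short Python sketch below converts a base pressure reading and a tilt angle into a residual volume for an idealized cylindrical reservoir. It is a simplified model, not the cited implementations: the liquid density, the pressure port located at the centre of the base, and a tilt moderate enough that the free surface intersects only the lateral wall are all assumptions of the example.

```python
import math

RHO_MILK = 1030.0   # approximate density of infant formula, kg/m^3 (assumed value)
G = 9.81            # gravitational acceleration, m/s^2

def residual_volume_ml(p_base_pa, base_area_m2, tilt_rad=0.0):
    """Estimate the residual liquid volume in a tilted cylindrical reservoir.

    p_base_pa    : hydrostatic pressure measured at the centre of the base, Pa.
    base_area_m2 : internal cross-sectional area of the reservoir, m^2.
    tilt_rad     : tilt of the reservoir axis from the vertical, rad
                   (e.g., estimated from an accelerometer).
    """
    # P = rho * g * h, with h the vertical depth of the port below the free surface.
    # For a cylinder tilted by an angle theta, the liquid length along the axis is
    # h / cos(theta), so the volume is A * P / (rho * g * cos(theta)).
    axial_length_m = p_base_pa / (RHO_MILK * G * math.cos(tilt_rad))
    return axial_length_m * base_area_m2 * 1e6   # convert m^3 to mL

# Example: 400 Pa at the base of a 5 cm internal-diameter bottle tilted by 20 degrees.
area = math.pi * (0.05 / 2.0) ** 2
print(round(residual_volume_ml(400.0, area, math.radians(20.0)), 1), "mL")
```

Sampling this estimate only during sucking pauses, as noted above for BIB003 , avoids the dynamic pressure fluctuations generated by the sucking activity itself, which would otherwise corrupt the hydrostatic reading.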
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Abstract Two studies were performed to test the ability of the newborn human infant to separate the “suction” and “expression” components of the sucking response. An experimental nipple and nutrient delivery system were devised which permitted the delivery of nutrient as a function of the occurrence of either response component and which permitted independent measurement of the components. Thirty infants, 2 to 5 days old, were studied during two successive feedings, 4 hours apart. The results indicated that newborn infants were able to modify the components of the sucking response when performance of these components led to nutritive consequences. When Ss had to express at one of two different pressure levels in order to obtain nutrient they showed significant shifts in expression amplitudes. The results indicate that some learning might have occurred as a result of the experimental manipulations. However, the infant's ability to adapt his sucking behavior seems to be well established before the fifth day of life. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> In 100 bottle-fed preterm infants feeding efficiency was studied by quantifying the volume of milk intake per minute and the number of teat insertions per 10 ml of milk intake. These variables were related to gestational age and to number of weeks of feeding experience. Feeding efficiency was greater in infants above 34 weeks gestational age than in those below this age. There was a significant correlation between feeding efficiency and the duration of feeding experience at most gestational ages between 32 and 37 weeks. A characteristic adducted and flexed arm posture was observed during feeding: it changed along with feeding experience. A neonatal feeding score was devised that allowed the quantification of the early oral feeding behavior. The feeding score correlated well with some aspects of perinatal assessment, with some aspects of the neonatal neurological evaluation and with developmental assessment at 7 months of age. These findings are a stimulus to continue our study into the relationships between feeding behaviour and other aspects of early development, especially of neurological development. <s> BIB002 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The purpose of this investigation was to quantify normal nutritive sucking, using a microcomputer-based instrument which replicated the infant's customary bottle-feeding routine. 86 feeding sessions were recorded from infants ranging between 1.5 and 11.5 months of age. Suck height, suck area and percentage of time spent sucking were unrelated to age. Volume per suck declined with age, as did intersuck interval, which corresponded to a more rapid sucking rate. This meant that volume per minute of sucking time was fairly constant. The apparatus provided an objective description of the patterns of normal nutritive sucking in infants to which abnormal sucking patterns may be compared. <s> BIB003 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Milk flow achieved during feeding may contribute to the ventilatory depression observed during nipple feeding. 
One of the important determinants of milk flow is the size of the feeding hole. In the first phase of the study, investigators compared the breathing patterns of 10 preterm infants during bottle feeding with two types of commercially available (Enfamil) single-hole nipples: one type designed for term infants and the other for preterm infants. Reductions in ventilation, tidal volume, and breathing frequency, compared with prefeeding control values, were observed with both nipple types during continuous and intermittent sucking phases; no significant differences were observed for any of the variables. Unlike the commercially available, mechanically drilled nipples, laser-cut nipple units showed a markedly lower coefficient of variation in milk flow. In the second phase of the study, two sizes of laser-cut nipple units, low and high flow, were used to feed nine preterm infants. Significantly lower sucking pressures were observed with high-flow nipples as compared with low-flow nipples. Decreases in minute ventilation and breathing frequency were also significantly greater with high-flow nipples. These results suggest that milk flow contributes to the observed reduction in ventilation during nipple feeding and that preterm infants have limited ability to self-regulate milk flow. <s> BIB004 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The maturation of deglutition apnoea time was investigated in 42 bottle-fed preterm infants, 28 to 37 weeks gestation, and in 29 normal term infants as a comparison group. Deglutition apnoea times reduced as infants matured, as did the number and length of episodes of multiple-swallow deglutition apnoea. The maturation appears related to developmental age (gestation) rather than feeding experience (postnatal age). Prolonged (>4 seconds) episodes of deglutition apnoea remained significantly more frequent in preterm infants reaching term postconceptual age compared to term infants. However, multiple-swallow deglutition apnoeas also occurred in the term comparison group, showing that maturation of this aspect is not complete at term gestation. The establishment of normal data for maturation should be valuable in assessing infants with feeding difficulties as well as for evaluation of neurological maturity and functioning of ventilatory control during feeding. <s> BIB005 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The coordination between swallowing and respiration is essential for safe feeding, and noninvasive feeding-respiratory instrumentation has been used in feeding and dysphagia assessment. Sometimes there are differences of interpretation of the data produced by the various respiratory monitoring techniques, some of which may be inappropriate for observing the rapid respiratory events associated with deglutition. Following a review of each of the main techniques employed for recording resting, pre-feeding, feeding, and post-feeding respiration on different subject groups (infants, children, and adults), a critical comparison of the methods is illustrated by simultaneous recordings from various respiratory transducers. As a result, a minimal combination of instruments is recommended which can provide the necessary respiratory information for routine feeding assessments in a clinical environment. 
<s> BIB006 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The purpose of this study was to compare the mechanics of sucking for 48 term infants with four different nipple units: Gerber Newborn (Gerber Products Company, Fremont, Mich.), Playtex (Playtex Products, Westport, Conn.), Evenflo (Evenflo Products Co., Canton, Ga.), and Gerber NUK. At 24 hours after birth, infants were assigned randomly to one of the nipple units and were studied twice with that nipple unit. A customized data acquisition system was used to measure and record the following variables: intraoral suction, sucking frequency, work, power, milk flow, milk volume per suck, and oxygen saturation. Although no statistically significant differences among the nipple units were noted for intraoral suction, sucking frequency, power, and oxygen saturation, the data revealed that the Playtex nipple unit was accompanied by higher peak milk flow and greater volume of milk per suck ( p <s> BIB007 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> To measure infant nutritive sucking reproducibly, nipple flow resistance must be controlled. Previous investigators have accomplished this with flow-limiting venturis, which has two limitations: flow resistance is highly dependent on fluid viscosity and older infants often reject the venturi nipple. This report describes the validation of calibrated-orifice nipples for the measurement of infant nutritive sucking. The flow characteristics of two infant formulas and water through these nipples were not different; those through venturi nipples were (analysis of variance; p < 0.0001). Flow characteristics did not differ among calibrated-orifice nipples constructed from three commercial nipple styles, indicating that the calibrated-orifice design is applicable to different types of baby bottle nipples. Among 3-month-old infants using calibrated-orifice nipples, acceptability was high, and sucking accounted for 85% of the variance in fluid intake during a feeding. We conclude that calibrated-orifice nipples are a valid and acceptable tool for the measurement of infant nutritive sucking. <s> BIB008 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> It is acknowledged that the difficulty many preterm infants have in feeding orally results from their immature sucking skills. However, little is known regarding the development of sucking in these infants. The aim of this study was to demonstrate that the bottle-feeding performance of preterm infants is positively correlated with the developmental stage of their sucking. Infants' oral-motor skills were followed longitudinally using a special nipple/bottle system which monitored the suction and expression/compression component of sucking. The maturational process was rated into five primary stages based on the presence/absence of suction and the rhythmicity of the two components of sucking, suction and expression/compression. This five-point scale was used to characterize the developmental stage of sucking of each infant. Outcomes of feeding performance consisted of overall transfer (percent total volume transfered/volume to be taken) and rate of transfer (ml/min). Assessments were conducted when infants were taking 1-2, 3-5 and 6-8 oral feedings per day. 
Significant positive correlations were observed between the five stages of sucking and postmenstrual age, the defined feeding outcomes, and the number of daily oral feedings. Overall transfer and rate of transfer were enhanced when infants reached the more mature stages of sucking. ::: ::: We have demonstrated that oral feeding performance improves as infants' sucking skills mature. In addition, we propose that the present five-point sucking scale may be used to assess the developmental stages of sucking of preterm infants. Such knowledge would facilitate the management of oral feeding in these infants. <s> BIB009 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> An earlier study demonstrated that oral feeding of premature infants (<30 wk gestation) was enhanced when milk was delivered through a self-paced flow system. The aims of this study were to identify the principle(s) by which this occurred and to develop a practical method to implement the self-paced system in neonatal nurseries. Feeding performance, measured by overall transfer, duration of oral feedings, efficiency, and percentage of successful feedings, was assessed at three time periods, when infants were taking 1-2, 3-5, and 6-8 oral feedings/day. At each time period, infants were fed, sequentially and in a random order, with a self-paced system, a standard bottle, and a test bottle, the shape of which allowed the elimination of the internal hydrostatic pressure. In a second study, infants were similarly fed with the self-paced system and a vacuum-free bottle which eliminated both hydrostatic pressure and vacuum within the bottle. The duration of oral feedings, efficiency, and percentage of successful feedings were improved with the self-paced system as compared to the standard and test bottles. The results were similar in the comparison between the self-paced system and the vacuum-free bottle. Elimination of the vacuum build-up naturally occurring in bottles enhances the feeding performance of infants born <30 wk gestation as they are transitioned from tube to oral feeding. The vacuum-free bottle is a tool which caretakers can readily use in neonatal nurseries. <s> BIB010 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> This study aimed to determine whether neonatal feeding performance can predict the neurodevelopmental outcome of infants at 18 months of age. We measured the expression and sucking pressures of 65 infants (32 males and 33 females, mean gestational age 37.8 weeks [SD 0.5]; range 35.1 to 42.7 weeks and mean birthweight 2722g [SD 92]) with feeding problems and assessed their neurodevelopmental outcome at 18 months of age. Their diagnoses varied from mild asphyxia and transient tachypnea to Chiari malformation. A neurological examination was performed at 40 to 42 weeks postmenstrual age by means of an Amiel-Tison examination. Feeding performance at 1 and 2 weeks after initiation of oral feeding was divided into four classes: class 1, no suction and weak expression; class 2, arrhythmic alternation of expression/suction and weak pressures; class 3, rhythmic alternation, but weak pressures; and class 4, rhythmic alternation with normal pressures. Neurodevelopmental outcome was evaluated with the Bayley Scales of Infant Development-II and was divided into four categories: severe disability, moderate delay, minor delay, and normal. 
We examined the brain ultrasound on the day of feeding assessment, and compared the prognostic value of ultrasound and feeding performance. There was a significant correlation between feeding assessment and neurodevelopmental outcome at 18 months (p < 0.001). Improvements of feeding pattern at the second evaluation resulted in better neurodevelopmental outcome. The sensitivity and specificity of feeding assessment were higher than those of ultrasound assessment. Neonatal feeding performance is, therefore, of prognostic value in detecting future developmental problems. <s> BIB011 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Aim: Safe and successful oral feeding requires proper maturation of sucking, swallowing and respiration. We hypothesized that oral feeding difficulties result from different temporal development of the musculatures implicated in these functions. ::: ::: Methods: Sixteen medically stable preterm infants (26 to 29 weeks gestation, GA) were recruited. Specific feeding skills were monitored as indirect markers for the maturational process of oral feeding musculatures: rate of milk intake (mL/min); percent milk leakage (lip seal); sucking stage, rate (#/s) and suction/expression ratio; suction amplitude (mmHg), rate and slope (mmHg/s); sucking/swallowing ratio; percent occurrence of swallows at specific phases of respiration. Coefficients of variation (COV) were used as indices of functional stability. Infants, born at 26/27- and 28/29-week GA, were at similar postmenstrual ages (PMA) when taking 1–2 and 6–8 oral feedings per day. ::: ::: Results: Over time, feeding efficiency and several skills improved, some decreased and others remained unchanged. Differences in COVs between the two GA groups demonstrated that, despite similar oral feeding outcomes, maturation levels of certain skills differed. ::: ::: Conclusions: Components of sucking, swallowing, respiration and their coordinated activity matured at different times and rates. Differences in functional stability of particular outcomes confirm that maturation levels depend on infants' gestational rather than PMA. <s> BIB012 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> ABSTRACT:Objectives:The relationship between the pattern of sucking behavior of preterm infants during the early weeks of life and neurodevelopmental outcomes during the first year of life was evaluated.Methods:The study sample consisted of 105 preterm infants (postmenstrual age [PMA] at birth = 30. <s> BIB013 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> AIM ::: To obtain a better understanding of the changes in feeding behaviour from 1 to 6 months of age. By comparing breast- and bottle-feeding, we intended to clarify the difference in longitudinal sucking performance. ::: ::: ::: METHODS ::: Sucking variables were consecutively measured for 16 breast-fed and eight bottle-fed infants at 1, 3 and 6 months of age. ::: ::: ::: RESULTS ::: For breast-feeding, number of sucks per burst (17.8 +/- 8.8, 23.8 +/- 8.3 and 32.4 +/- 15.3 times), sucking burst duration (11.2 +/- 6.1, 14.7 +/- 8.0 and 17.9 +/- 8.8 sec) and number of sucking bursts per feed (33.9 +/- 13.9, 28.0 +/- 18.2 and 18.6 +/- 12.8 times) at 1, 3 and 6 months of age respectively showed significant differences between 1 and 6 months of age (p < 0.05). 
The sucking pressure and total number of sucks per feed did not differ among different ages. Bottle-feeding resulted in longer sucking bursts and more sucks per burst compared with breast-feeding in each month (p < 0.05). CONCLUSION: The increase in the amount of ingested milk with maturation resulted from an increase in bolus volume per minute as well as the higher number of sucks continuously for both breast- and bottle-fed infants. <s> BIB014 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> We report quantitative measurements of ten parameters of nutritive sucking behavior in 91 normal full-term infants obtained using a novel device (an Orometer) and a data collection/analytical system (Suck Editor). The sucking parameters assessed include the number of sucks, mean pressure amplitude of sucks, mean frequency of sucks per second, mean suck interval in seconds, sucking amplitude variability, suck interval variability, number of suck bursts, mean number of sucks per suck burst, mean suck burst duration, and mean interburst gap duration. For analyses, test sessions were divided into 4 × 2-min segments. In single-study tests, 36 of 60 possible comparisons of ten parameters over six pairs of 2-min time intervals showed a p value of 0.05 or less. In 15 paired tests in the same infants at different ages, 33 of 50 possible comparisons of ten parameters over five time intervals showed p values of 0.05 or less. Quantification of nutritive sucking is feasible, showing statistically valid results for ten parameters that change during a feed and with age. These findings suggest that further research, based on our approach, may show clinical value in feeding assessment, diagnosis, and clinical management. <s> BIB015 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Perioral movements and sucking pattern during bottle feeding with a novel, experimental teat are similar to breastfeeding <s> BIB016 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> Feeding by sucking is one of the first activities of daily life performed by infants. Sucking plays a fundamental role in neurological development and may be considered a good early predictor of neuromotor development. In this paper, a new method for ecological assessment of infants' nutritive sucking behavior is presented and experimentally validated. Preliminary data on healthy newborn subjects are first acquired to define the main technical specifications of a novel instrumented device. This device is designed to be easily integrated in a commercially available feeding bottle, allowing clinical method development for screening large numbers of subjects. The new approach proposed allows: 1) accurate measurement of intra-oral pressure for neuromotor control analysis and 2) estimation of milk volume delivered to the mouth within variation between estimated and reference volumes. <s> BIB017 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Discussion <s> The Sucking Efficiency (SEF) is one of the main parameters used to monitor and assess the sucking pattern development in infants. Since Nutritive Sucking (NS) is one of the earliest motor activity performed by infants, its objective monitoring may allow to assess neurological and motor development of newborns.
This work proposes a new ecological and low-cost method for SEF monitoring, specifically designed for feeding bottles. The methodology, based on the measure of the hydrostatic pressure exerted by the liquid at the teat base, is presented and experimentally validated at different operative conditions. Results show how the proposed method allows to estimate the minimum volume an infant ingests during a burst of sucks with a relative error within the range of [3-7]% depending on the inclination of the liquid reservoir. <s> BIB018
|
Oral feeding is a complex process requiring a mature sucking ability and an especially mature coordination of sucking with breathing and swallowing. This overview of the scientific literature on the topic highlighted the absence of a unique technological sensing solution to assess such skills. In the literature, oral feeding behavior is assessed by monitoring sucking, breathing, swallowing, and nutrient consumption through a wide set of quantities and indices. Such monitoring has proved potentially useful for the assessment of oral feeding pattern maturation in preterm infants and in term infants, and as a prognostic tool for predicting later neurodevelopmental outcomes. To study preterm infants' feeding behavior, intraoral and expression pressures are two fundamental quantities. In particular, S/E coordination and rhythmicity are the most commonly investigated features and are significant for characterizing and assessing the development of sucking skills and, possibly, their immaturity. The relevance of these two components has also been confirmed in the case of neurodisabled infants, supporting the use of S/E monitoring for the assessment of immature sucking patterns. On the contrary, sucking skills of term infants can be assessed without distinction between IP and EP measures. Regarding the prognostic value of infants' oral feeding skills, it seems that the assessment of the sucking pattern alone is sufficient to predict later neurodevelopmental outcomes. However, the reported studies dealing with this issue BIB013 BIB011 BIB005 are rather recent and encourage further research focused on breathing and swallowing processes as well BIB002. For both preterm and term infants, Sw-B coordination represents a challenging milestone to attain in the development of feeding skills. Sw-B rhythmicity and its integration into the sucking process are fundamental, highlighting the importance of measuring systems for the detection of Sw and B events and their temporal pattern, rather than for their quantitative characterization. Volume consumption was monitored in most of the studies, and feeding efficiency, as well as feeding rate and bolus size, appear to be important indices for evaluating the changing feeding performance in both preterm and term infants. Such a heterogeneous set of quantities and indices has been assessed with different technological solutions. Due to the differences related to the application field (preterm assessment; term assessment; prognostic tool), each solution should be carefully assessed according to the specific application requirements, which should always aim at a balanced compromise between reliability, invasiveness and practicability. The distinction between suction and expression components of sucking seems to be specific to the assessment of preterm infants. PTs are the most commonly adopted sensing solution for the measurement of intraoral and expression pressures. However, the simultaneous monitoring of both sucking components imposes special constraints. In particular, the measuring configuration including a tube for nutrient delivery (Figures 3 and 4) cannot be adopted, because it removes the contribution of the expression component to the extraction of nutrient and drives infants to modify their sucking response BIB001, forcing a higher IP. This solution can be adopted if the only goal is to investigate suction ability, since it forces the newborn to rely on it.
Moreover, the specific application to preterm infants imposes additional constraints, due to the compromised motor system of this population. The presence of an additional resistance to the nutrient flow, caused by the tube restriction (see the configurations in Figure 3a,b and Figure 4), makes such measuring solutions unsuitable when applied to preterm infants: additional resistance implies additional sucking effort to extract the same amount of nutrient from the bottle. Furthermore, the acceptability of the feeding apparatus is essential for every application. A nutrient delivery tube within the nipple tip may compromise the nipple's mouth feel, even for full-term and/or healthy infants accustomed to feeding from standard nipples on commercial bottles: they often refuse anything other than their usual nipple style BIB008. For this reason, the relative simplicity of the common orifice nipples adapted for IP monitoring, as shown in Figure 2, is more advantageous. With such a configuration, infants' expression movements can alter the flow rate through the nipple, and so the simultaneous measurement of EP, which is fundamental for preterm infants' assessment, makes sense. Both sensing solutions illustrated in Figure 2 are applicable to any feeding apparatus and nipple, without requiring a particular design: they can be embedded in both a clinical and a portable domestic assessment tool. They can also be adopted for IP monitoring during breastfeeding BIB014 BIB016. The sensing solution in which the pressure waveform is directed to the PT by means of a catheter (Figure 2a and Figure 3) should always rely on fluid-filled catheters (free of air bubbles), because air-filled lines do not respond to rapid pressure changes and underestimate peak negative pressures. However, the PT directly placed into the infant's mouth (see Figure 2b) is more advantageous in several respects (higher accuracy, no time delay, no motion artifacts) and also avoids the need for a fluid-filled system (greater ease of use), but it implies higher costs. Regarding the measurement of EP using PTs, both reported methods (Figure 5) appear to be suitable for clinical and domestic use, as they can be incorporated in a standard feeding apparatus. However, the one measuring EP from the nipple lumen is recommended, since the other presents limitations due to a plateau in the system response corresponding to a full compression of the catheter. Optical motion capture systems may also be considered for E and S monitoring through jaw and throat movements. The advantage of such a monitoring approach is its complete non-invasiveness: any feeding apparatus can be adopted (depending on clinicians' or parents' decision), as no sensing elements are required (it can even be adopted for breastfeeding monitoring BIB016). Notwithstanding this advantage, its practicability is very low, as it would require specialized personnel, a structured environment and a precise calibration procedure, and it would be time consuming. These reasons make this kind of monitoring system impractical for both clinical and domestic post-discharge applications. Moreover, mouthing movements (jaw movements) are not directly linked to effective nutrient expression, as infants could have an ineffective seal around the teat, which would prevent them from feeding properly. If no S/E distinction is necessary, sucking events can be monitored.
The intranipple pressure can be easily recorded by adopting a non-invasive and practical sensing solution that can even be embedded in a common feeding apparatus. Sucking movements can also be recorded by means of mercury-in-rubber strain gauges on the infant's chin. The advantage of this latter solution is that it does not require any special sensors to be applied to the feeding apparatus, which can consequently be freely selected by the user (it could also be used for breastfeeding monitoring). However, it is moderately invasive as the sensing element has to be placed on the infant's face. This may produce additional stress in preterm newborns, who often show hypersensitivity of the facial area due to frequent, necessary aversive oral and/or nasal procedures BIB012. Two additional aspects should be taken into consideration in the definition of the main requirements of a standardized assessment tool for feeding: the hydrostatic pressure and the gradual build-up in negative pressure inside the bottle. In most feeding apparatuses used to enable measures of feeding behavior, particular expedients were required to avoid these two factors, which might hamper the feeding performance of immature infants BIB009, so as to permit the standardization of feeding across infants and the generalization of results. The adopted expedients often imply solutions with low practicability that require a structured environment, and are thus not suited for post-discharge use (see the schematic representation in Figure 3). More effort is required to design and develop measuring feeding tools that are easy to use in the usual post-discharge environment as well. While the vacuum problem can be easily avoided using a commercial vented bottle, as in BIB015 (Figure 4) or in BIB017, the hydrostatic pressure might also represent an important parameter to record in the case of daily home monitoring of sucking behavior, when the infant has to face this factor while bottle-feeding. Some of the described apparatuses BIB003 BIB018 can be adopted for such an application: they report sensing solutions based on PTs measuring the pressure at the base of the nutrient column. Moreover, these factors suggest that another important aspect in the design of a standardized feeding assessment tool is its shape BIB010, as it can determine the extent of the influence of hydrostatic pressure on feeding (see Figure 4). Sw-B coordination is a challenging milestone in the development of oral feeding skills in term infants, and even more so in preterm ones, given their greater immaturity and their at-risk condition. Therefore, its careful assessment at discharge from the NICU is strongly recommended. Respiratory monitoring during feeding is quite thorny because of the fast response time required of the sensors (breathing events are faster during NS) and because of movement artifacts. Moreover, it is essential, particularly in premature infants, that any respiratory measurement be acceptable to the subject without imposing additional stress. The main techniques for recording respiratory events in a clinical environment during feeding were critically compared and analyzed in BIB006, taking into consideration the above-mentioned issues. The use of a PT in a nasal cannula just inside the nostrils, as shown in Figure 9b, and of an abdominal PT (Figure 9a), can be considered suitable for clinical monitoring of respiratory events, because of their minimal invasiveness.
The latter can be considered preferable in the NICU, where the subject may also receive oxygen via a nasal cannula, so any measure relying on nasal airflow would be useless. Quantitative information can be obtained using a nasal thermistor embedded into a tube into which the flow stream has to be channelled. However, it does not provide information about the airflow direction and it may also impose additional stress, especially on a hypersensitive premature infant. A preferable solution is a pneumotachograph connected to a PT placed in a miniaturized cannula to be inserted in a nostril: it can measure both airflow and its direction. However, as already said, these sensing solutions (measuring nasal airflow) may turn out to be impracticable in NICU applications. Considering a domestic post-discharge application requiring high ease of use and portability, none of the respiratory monitoring solutions reported in Section 3 would be easily practicable: none of them is embedded in the feeding apparatus, and they all imply the use of an additional dedicated apparatus. In a post-discharge setting, this may stress the user, who is generally less inclined than a clinician to apply any additional element to the infant's body. The same applies to the swallowing measuring systems described, as all of them imply the use of additional sensing tools. The use of PTs connected to the pharynx by means of a trans-nasal catheter has often been adopted, but it can be set up only in NICUs because it is highly invasive. A less invasive sensing solution for swallowing monitoring is the use of a microphone or a pressure drum placed on the infant's neck. Nutrient consumption is mainly recorded at the beginning and at the end of feeding (or at specific time intervals) by weighing the bottle or checking the level of the nutrient on the graduated reservoir. Even though such methods are quite simple and non-invasive, their main drawback is that they enable only a global estimation of the ingested volume and do not allow for continuous monitoring of milk volume intake. They also rule out an energetic analysis of the sucking process. Moreover, as the rate of milk flow during bottle feeding plays a crucial role in feeding-related ventilatory changes of both term and preterm infants BIB004, its assessment through reliable measures appears to be important both for infants' clinical evaluation and for post-discharge monitoring. To this aim, the use of air-flow sensors (thermistors or pneumotachometers) mounted on top of the inverted nutrient reservoir may represent a practicable sensing solution, to be further investigated in order to increase the level of integration and develop a portable feeding tool (it also requires solving the vacuum build-up problem). The solutions using PTs to measure nutrient consumption may also be easily adopted at home for remote continuous monitoring of the infant's development, but further research is needed for their validation in the field BIB017 BIB018. The same portability can be obtained by adopting an ultrasonic flow sensor, as in BIB007, but this is an expensive solution. As described in Section 3, several systems adopted a calibration procedure in order to obtain a linear relation between the suction pressure and the consequent flow. However, they require particular expedients to eliminate hydrostatic pressure and vacuum in the nutrient reservoir, in order to make the flow depend solely on suction (see the configurations in Figure 3).
This affects the ease of use and portability of the system, so it is not a recommended solution when both sucking components (S/E) need to be monitored, as already discussed. On the contrary, an interesting approach is based on estimating the net pressure causing the nutrient release, by measuring the pressure gradient driving the flow through a calibrated teat. Two PTs, easily embeddable into the feeding teat, can be used for this aim (see Figure 6), allowing both the measurement of sucking pressures and the estimation of milk flow. Such a sensing solution allows a calibration that does not require the absence of hydrostatic pressure or vacuum in the bottle. Moreover, it is not dependent on the nutrient viscosity, which can vary among commercial formulas and breast milk, as the flow rate through an orifice is known to be independent of fluid viscosity.
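The pressure-gradient approach just described lends itself to a simple reconstruction of flow and ingested volume. The following minimal Python sketch illustrates the general idea under stated assumptions: an orifice-type relation Q = k·sqrt(ΔP) with a calibration constant obtained on the bench for the specific teat; the function and variable names (estimate_flow_and_volume, k_teat, etc.) are illustrative and do not reproduce the implementation of the cited works.

```python
import numpy as np

def estimate_flow_and_volume(p_teat, p_mouth, fs, k_teat):
    """Estimate milk flow (mL/s) and cumulative ingested volume (mL) from the
    pressure gradient across a calibrated teat sampled at fs (Hz).

    An orifice-type relation Q = k_teat * sqrt(dP) is assumed, with k_teat
    (mL/s per kPa^0.5) obtained from a bench calibration of the specific teat;
    the relation is independent of the nutrient viscosity. Only gradients
    pushing milk towards the mouth are counted (no backflow assumed)."""
    dp = np.asarray(p_teat, dtype=float) - np.asarray(p_mouth, dtype=float)  # kPa
    dp = np.clip(dp, 0.0, None)
    flow = k_teat * np.sqrt(dp)              # mL/s
    volume = np.cumsum(flow) / fs            # mL ingested up to each sample
    return flow, volume

# toy usage: a synthetic 2 Hz suction pattern sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
intraoral = -2.0 * np.clip(np.sin(2 * np.pi * 2 * t), 0, None)   # kPa, negative peaks
flow, vol = estimate_flow_and_volume(p_teat=np.zeros_like(t),
                                     p_mouth=intraoral, fs=fs, k_teat=1.5)
print(f"estimated intake over 10 s: {vol[-1]:.1f} mL")
```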
|
Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Conclusions <s> Preterm infants often have difficulties in learning how to suckle from the breast or how to drink from a bottle. As yet, it is unclear whether this is part of their prematurity or whether it is caused by neurological problems. Is it possible to decide on the basis of how an infant learns to suckle or drink whether it needs help and if so, what kind of help? In addition, can any predictions be made regarding the relationship between these difficulties and later neurodevelopmental outcome? We searched the literature for recent insights into the development of sucking and the factors that play a role in acquiring this skill. Our aim was to find a diagnostic tool that focuses on the readiness for feeding or that provides guidelines for interventions. At the same time, we searched for studies on the relationship between early sucking behavior and developmental outcome. It appeared that there is a great need for a reliable, user-friendly and noninvasive diagnostic tool to study sucking in preterm and full-term infants. <s> BIB001 </s> Technological Solutions and Main Indices for the Assessment of Newborns' Nutritive Sucking: A Review <s> Conclusions <s> Abstract Neonatal motor behavior predicts both current neurological status and future neurodevelopmental outcomes. For speech pathologists, the earliest observable patterned oromotor behavior is su... <s> BIB002
|
The acquisition of efficient nutritive sucking skills is a fundamental and challenging milestone for every newborn, and even more so for premature ones, as it requires the complex coordination of sucking, swallowing and breathing processes, which is usually not yet developed in premature infants at birth. Such skills and their development, if monitored, may provide objective parameters for the assessment of infants' well-being and health status, and allow predictions about later neurodevelopmental outcomes. A specifically designed tool to assess infants' oral feeding ability may provide clinicians with new devices for prognosis, diagnosis and routine clinical monitoring of newborn patients. However, such a standardized instrumental tool does not exist yet, and clinical evaluation of feeding ability is not carried out objectively, running the risk of ignoring poor feeding skills for too long. This work carries out a critical analysis of the main instrumental solutions adopted up to now for infant NS monitoring. The first step was to identify the main application fields where objective NS assessment may contribute to improving the level of healthcare assistance: preterm assessment, term and full term assessment, and early diagnosis of later neurological dysfunctions. Different guidelines may be useful in the design and development of a measurement tool suitable for the two principal environments where it will be used: a clinical setting (the NICU in particular), and a domestic environment for post-discharge monitoring. In the latter, an instrument for monitoring NS and its development should meet two main functional requirements, besides offering reliable and valid measures: portability and ease of use. In addition, non-invasiveness is strongly required in a post-discharge environment, and is always preferable when dealing with preterm infants as well. As previously discussed, some sensing solutions proposed for sucking monitoring that meet these requirements are based on the use of common PTs and thin catheters in a standard nipple. Sensing solutions adopting PTs also seemed suitable for estimating nutrient consumption, because of their simplicity. They deserve further investigation so that their reliability in the field can be demonstrated. Further attention should also be devoted to nutrient consumption estimation by means of air-flow sensors, given the advantages previously discussed. There is a need for sensitive, quantitative, and efficient analyses of sucking skills, first of all among preterm infants in the NICU BIB001. The analytical tools for suck assessment in most NICUs are based on subjective judgment. There are obvious limitations to this approach, including reliability within and between examiners and an inability to access the fine structure of pressure dynamics, variability of suck patterning, and developmental progression BIB002. The requirements for application in clinical settings do not strictly include portability. However, the need for a univocal assessment instrument should promote the development of the cited sensing solutions even for clinical application, provided that the reliability and validity of the measuring instrument are guaranteed. Further research should focus on the integration of the proper set of sensors for sucking monitoring into a practical feeding apparatus, and on its validation even in the case of untrained users.
Concerning the monitoring of swallowing and breathing processes during NS, some of the sensing solutions described in the literature proved applicable to clinical monitoring. However, none of them seemed to be easily embeddable in a feeding tool for practical and easy use in a domestic setting, where the user is more demanding. This suggests orienting further research efforts towards the design of sensing solutions for breathing and swallowing recording that can be embedded in a simple feeding apparatus, or towards the analysis of the domestic practicability of some of the less invasive solutions proposed. The challenges and limitations discussed in this review warrant further studies to overcome them, in order to obtain a valid and objective tool for standardizing infants' oral feeding assessment. The use of standard pre-discharge assessment devices may foster the establishment of common quantitative criteria useful to assist clinicians in planning clinical interventions. Such devices, or a simplified version of them, might also be adopted for patients' follow-up, e.g., for remote monitoring of infants at home after discharge, as everyday feeding problems can be an early symptom of disability. Besides the instrumental solution, the standardization of infants' oral feeding assessment will require considerable work to collect the amount of data necessary to define normative indices. Moreover, the interpretation of this huge amount of data will require further research to develop ad hoc algorithms for data analysis.
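As an illustration of the kind of ad hoc analysis called for above, the sketch below extracts a few of the sucking parameters commonly reported in the literature (number of sucks, number of bursts, sucks per burst, burst duration, inter-suck interval; cf. the Orometer parameters in BIB015) from an intraoral pressure trace. The thresholding rule, the default parameter values and the function name are illustrative assumptions, not a validated clinical algorithm.

```python
import numpy as np

def suck_burst_stats(iop, fs, suck_threshold=-5.0, burst_gap_s=2.0):
    """Segment an intraoral pressure trace (iop, mmHg; negative = suction)
    into sucks and bursts and return summary statistics.

    A suck is counted at each downward crossing of suck_threshold; sucks
    separated by more than burst_gap_s seconds belong to different bursts.
    Both parameters are illustrative and would need clinical tuning."""
    iop = np.asarray(iop, dtype=float)
    below = iop < suck_threshold
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1     # suck onsets (samples)
    if onsets.size == 0:
        return {"n_sucks": 0, "n_bursts": 0}

    gaps = np.diff(onsets) / fs                              # inter-suck intervals (s)
    burst_breaks = np.flatnonzero(gaps > burst_gap_s) + 1
    bursts = np.split(onsets, burst_breaks)
    durations = [(b[-1] - b[0]) / fs for b in bursts if len(b) > 1]

    return {
        "n_sucks": int(onsets.size),
        "n_bursts": len(bursts),
        "mean_sucks_per_burst": float(np.mean([len(b) for b in bursts])),
        "mean_burst_duration_s": float(np.mean(durations)) if durations else 0.0,
        "mean_suck_interval_s": float(np.mean(gaps)) if gaps.size else 0.0,
    }
```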
|
A survey on Hamilton cycles in directed graphs <s> Introduction <s> Abstract A theorem is proved that is, in a sense to be made precise, the best possible generalization of the theorems of Dirac, Posa, and Bondy that give successively weaker sufficient conditions for a graph to be Hamiltonian. Some simple corollaries are deduced concerning Hamiltonian paths, n -Hamiltonian graphs, and Hamiltonian bipartite graphs. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Introduction <s> The theory of directed graphs has developed enormously over recent decades, yet this book (first published in 2000) remains the only book to cover more than a small fraction of the results. New research in the field has made a second edition a necessity. Substantially revised, reorganised and updated, the book now comprises eighteen chapters, carefully arranged in a straightforward and logical manner, with many new results and open problems. As well as covering the theoretical aspects of the subject, with detailed proofs of many important results, the authors present a number of algorithms, and whole chapters are devoted to topics such as branchings, feedback arc and vertex sets, connectivity augmentations, sparse subdigraphs with prescribed connectivity, and also packing, covering and decompositions of digraphs. Throughout the book, there is a strong focus on applications which include quantum mechanics, bioinformatics, embedded computing, and the travelling salesman problem. Detailed indices and topic-oriented chapters ease navigation, and more than 650 exercises, 170 figures and 150 open problems are included to help immerse the reader in all aspects of the subject. Digraphs is an essential, comprehensive reference for undergraduate and graduate students, and researchers in mathematics, operations research and computer science. It will also prove invaluable to specialists in related areas, such as meteorology, physics and computational biology. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Introduction <s> In this paper we give an approximate answer to a question of Nash-Williams from 1970: we show that for every \alpha > 0, every sufficiently large graph on n vertices with minimum degree at least (1/2 + \alpha)n contains at least n/8 edge-disjoint Hamilton cycles. More generally, we give an asymptotically best possible answer for the number of edge-disjoint Hamilton cycles that a graph G with minimum degree \delta must have. We also prove an approximate version of another long-standing conjecture of Nash-Williams: we show that for every \alpha > 0, every (almost) regular and sufficiently large graph on n vertices with minimum degree at least $(1/2 + \alpha)n$ can be almost decomposed into edge-disjoint Hamilton cycles. <s> BIB003
|
The decision problem of whether a graph has a Hamilton cycle is NP-complete and so a satisfactory characterization of Hamiltonian graphs seems unlikely. Thus it makes sense to ask for degree conditions which ensure that a graph has a Hamilton cycle. One such result is Dirac's theorem , which states that every graph on n ≥ 3 vertices with minimum degree at least n/2 contains a Hamilton cycle. This is strengthened by Ore's theorem : If G is a graph with n ≥ 3 vertices such that every pair x ≠ y of non-adjacent vertices satisfies d(x) + d(y) ≥ n, then G has a Hamilton cycle. Dirac's theorem can also be strengthened considerably by allowing many of the vertices to have small degree: Pósa's theorem states that a graph on n ≥ 3 vertices has a Hamilton cycle if its degree sequence d_1 ≤ ... ≤ d_n satisfies d_i ≥ i + 1 for all i < (n − 1)/2 and if additionally d_{⌈n/2⌉} ≥ ⌈n/2⌉ when n is odd. Again, this is best possible – none of the degree conditions can be relaxed. Chvátal's theorem BIB001 is a further generalization. It characterizes all those degree sequences which ensure the existence of a Hamilton cycle in a graph: suppose that the degrees of the graph G are d_1 ≤ ... ≤ d_n. If n ≥ 3 and d_i ≥ i + 1 or d_{n−i} ≥ n − i for all i < n/2 then G is Hamiltonian. This condition on the degree sequence is best possible in the sense that for any degree sequence d_1 ≤ ... ≤ d_n violating this condition there is a corresponding graph with no Hamilton cycle whose degree sequence dominates d_1, ..., d_n. These four results are among the most general and well-known Hamiltonicity conditions. There are many more – often involving additional structural conditions like planarity. The survey gives an extensive overview (which concentrates on undirected graphs). In this survey, we concentrate on recent progress for directed graphs. Though the problems are equally natural for directed graphs, it is usually much more difficult to obtain satisfactory results. Additional results beyond those discussed here can be found in the corresponding chapter of the monograph BIB002 . In Section 2, we discuss digraph analogues and generalizations of the above four results. The next section is devoted to oriented graphs – these are obtained from undirected graphs by orienting the edges (and thus are digraphs without 2-cycles). Section 4 is concerned with tournaments. Section 5 is devoted to several generalizations of the notion of a Hamilton cycle, e.g. pancyclicity and k-ordered Hamilton cycles. The final section is devoted to the concept of 'robust expansion'. This has been useful in proving many of the recent results discussed in this survey. We will give a brief sketch of how it can be used. In this paper, we also use this notion (and several results from this survey) to obtain a new result (Theorem 18) which gives further support to Kelly's conjecture on Hamilton decompositions of regular tournaments. In a similar vein, we use a result of BIB003 to deduce that the edges of every sufficiently dense regular (undirected) graph can be covered by Hamilton cycles which are almost edge-disjoint (Theorem 21).
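Since Chvátal's condition is stated purely in terms of the sorted degree sequence, it can be checked mechanically. The short Python sketch below is illustrative only: it tests whether d_i ≥ i + 1 or d_{n−i} ≥ n − i holds for all i < n/2, i.e. whether the theorem applies to a given degree sequence; it does not itself search for a Hamilton cycle.

```python
def satisfies_chvatal(degrees):
    """Check Chvatal's condition on a degree sequence: after sorting so that
    d_1 <= ... <= d_n, require d_i >= i + 1 or d_{n-i} >= n - i for every
    i < n/2.  If this holds and n >= 3, the graph is Hamiltonian."""
    d = sorted(degrees)
    n = len(d)
    if n < 3:
        return False
    for i in range(1, (n + 1) // 2):          # all 1-based i with i < n/2
        if d[i - 1] < i + 1 and d[n - i - 1] < n - i:
            return False
    return True

print(satisfies_chvatal([3, 3, 3, 3, 3, 3]))  # degree sequence of K_{3,3} -> True
print(satisfies_chvatal([1, 2, 2, 2, 3]))     # triangle with a pendant path -> False
```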
|
A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in directed graphs <s> Diaphragms for electrolytic cells are prepared by depositing onto a cathode screen, discrete thermoplastic fibers. The fibers are highly branched, and which, when deposited form an entanglement or network thereof, which does not require bonding or cementing. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in directed graphs <s> This result is best possible for k = 3 since the Petersen graph is a nonhamiltonian, 2-connected, 3-regular graph on 10 vertices. It is essentially best possible for k > 4 since there exist non-hamiltonian, 2-connected, kregular graphs on 3k + 4 vertices for k even, and 3k + 5 vertices for all k. Examples of such graphs are given in [ 1, 3 1. The problem of determining the values of k for which all 2-connected, k-regular graphs on n vertices are hamiltonian was first suggested by G. Szekeres. Erdijs and Hobbs [ 3 ] proved that such graphs are hamiltonian if n < 2k + ck”*, where c is a positive constant. Subsequently, Bollobas and Hobbs [ 1 ] showed that G is hamiltonian if n < +k. We shall in fact prove a result slightly stronger than Theorem 1. <s> BIB002
|
2.1. Minimum degree conditions. For an analogue of Dirac's theorem in directed graphs it is natural to consider the minimum semidegree δ^0(G) of a digraph G, which is the minimum of its minimum outdegree δ^+(G) and its minimum indegree δ^-(G). (Here a directed graph may have two edges between a pair of vertices, but in this case their directions must be opposite.) The corresponding result is a theorem of Ghouila-Houri BIB001 . Theorem 1 (Ghouila-Houri BIB001 ). Every strongly connected digraph on n vertices with δ^+(G) + δ^-(G) ≥ n contains a Hamilton cycle. In particular, every digraph with δ^0(G) ≥ n/2 contains a Hamilton cycle. (When referring to paths and cycles in directed graphs we usually mean that these are directed, without mentioning this explicitly.) For undirected regular graphs, Jackson BIB002 showed that one can reduce the degree condition in Dirac's theorem considerably if we also impose a connectivity condition, i.e. every 2-connected d-regular graph on n vertices with d ≥ n/3 contains a Hamilton cycle. Hilbig improved the degree condition to n/3 − 1 unless G is the Petersen graph or another exceptional graph. The example in Figure 1 shows that the degree condition cannot be reduced any further. Clearly, the connectivity condition is necessary. We believe that a similar result should hold for directed graphs too. Conjecture 2. Every strongly 2-connected d-regular digraph on n vertices with d ≥ n/3 contains a Hamilton cycle. Replacing each edge in Figure 1 with two oppositely oriented edges shows that the degree condition cannot be reduced. Moreover, it is not hard to see that the strong 2-connectivity cannot be replaced by just strong connectivity.
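The hypotheses of Theorem 1 are easy to verify computationally. The following illustrative Python sketch (a digraph is assumed to be given as a dict mapping each vertex to the set of its out-neighbours; the function names are ours) computes the minimum out- and indegree and checks strong connectivity with two depth-first searches. It only checks whether Ghouila-Houri's hypothesis holds; it does not find a Hamilton cycle.

```python
def degrees(adj):
    """adj maps each vertex to the set of its out-neighbours."""
    outdeg = {v: len(adj[v]) for v in adj}
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    return outdeg, indeg

def strongly_connected(adj):
    """Every vertex reachable from a fixed root in G and in the reverse of G."""
    def reachable(a, root):
        seen, stack = {root}, [root]
        while stack:
            for w in a[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(a)
    root = next(iter(adj))
    reverse = {v: {u for u in adj if v in adj[u]} for v in adj}
    return reachable(adj, root) and reachable(reverse, root)

def ghouila_houri_applies(adj):
    outdeg, indeg = degrees(adj)
    return (strongly_connected(adj)
            and min(outdeg.values()) + min(indeg.values()) >= len(adj))

# bidirected 4-cycle: delta^+ + delta^- = 4 = n and strongly connected,
# so Theorem 1 applies (and indeed the 4-cycle itself is a Hamilton cycle)
example = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(ghouila_houri_applies(example))   # True
```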
|
A survey on Hamilton cycles in directed graphs <s> 2.2. <s> Abstract In this article we prove that a sufficient condition for an oriented strongly connected graph with n vertices to be Hamiltonian is: (1) for any two nonadjacent vertices x and y d + (x)+d − (x)+d + (y)+d − (y)⩽sn−1 . <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> We describe a new type of sufficient condition for a digraph to be Hamiltonian. Conditions of this type combine local structure of the digraph with conditions on the degrees of non-adjacent vertices. The main difference from earlier conditions is that we do not require a degree condition on all pairs of non-adjacent vertices. Our results generalize the classical conditions by Ghouila-Houri and Woodall. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> In \cite{suffcond} the following extension of Meyniels theorem was conjectured: If $D$ is a digraph on $n$ vertices with the property that $d(x)+d(y)\geq 2n-1$ for every pair of non-adjacent vertices $x,y$ with a common out-neighbour or a common in-neighbour, then $D$ is Hamiltonian. We verify the conjecture in the special case where we also require that $\min\{d^+(x)+d^-(y),d^-(x)+d^+(y)\}\geq n-1$ for all pairs of vertices $x,y$ as above. This generalizes one of the results in \cite{suffcond}. Furthermore we provide additional support for the conjecture above by showing that such a digraph always has a factor (a spanning collection of disjoint cycles). Finally we show that if $D$ satisfies that $d(x)+d(y)\geq\frac{5}{2}n-4$ for every pair of non-adjacent vertices $x,y$ with a common out-neighbour or a common in-neighbour, then $D$ is Hamiltonian. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 2.2. <s> The theory of directed graphs has developed enormously over recent decades, yet this book (first published in 2000) remains the only book to cover more than a small fraction of the results. New research in the field has made a second edition a necessity. Substantially revised, reorganised and updated, the book now comprises eighteen chapters, carefully arranged in a straightforward and logical manner, with many new results and open problems. As well as covering the theoretical aspects of the subject, with detailed proofs of many important results, the authors present a number of algorithms, and whole chapters are devoted to topics such as branchings, feedback arc and vertex sets, connectivity augmentations, sparse subdigraphs with prescribed connectivity, and also packing, covering and decompositions of digraphs. Throughout the book, there is a strong focus on applications which include quantum mechanics, bioinformatics, embedded computing, and the travelling salesman problem. Detailed indices and topic-oriented chapters ease navigation, and more than 650 exercises, 170 figures and 150 open problems are included to help immerse the reader in all aspects of the subject. Digraphs is an essential, comprehensive reference for undergraduate and graduate students, and researchers in mathematics, operations research and computer science. It will also prove invaluable to specialists in related areas, such as meteorology, physics and computational biology. <s> BIB004
|
Ore-type conditions. Woodall proved the following digraph version of Ore's theorem, which generalizes Ghouila-Houri's theorem. d^+(x) denotes the outdegree of a vertex x, and d^-(x) its indegree. Theorem 3 (Woodall ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d^+(x) + d^-(y) ≥ n for every pair x ≠ y of vertices for which there is no edge from x to y, then G has a Hamilton cycle. Woodall's theorem in turn is generalized by Meyniel's theorem, where the degree condition is formulated in terms of the total degree of a vertex. Here the total degree d(x) of x is defined as d(x) = d^+(x) + d^-(x). Theorem 4 (Meyniel BIB001 ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n − 1 for all pairs of non-adjacent vertices in G, then G has a Hamilton cycle. The following conjecture of Bang-Jensen, Gutin and Li BIB002 would strengthen Meyniel's theorem by requiring the degree condition only for dominated pairs of vertices (a pair of vertices is dominated if there is a vertex which sends an edge to both of them). Conjecture 5 (Bang-Jensen, Gutin and Li BIB002 ). Let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n − 1 for all dominated pairs of non-adjacent vertices in G, then G has a Hamilton cycle. An extremal example F can be constructed as in Figure 2 . To see that F has no Hamilton cycle, note that every Hamilton path in F − x has to start at z. Also, note that the only non-adjacent (dominated) pairs of vertices are z together with a vertex u in K and these satisfy d(z) + d(u) = 2n − 2. Some support for the conjecture is given e.g. by the following result of Bang-Jensen, Guo and Yeo BIB003 : if we also assume the degree condition for all pairs of non-adjacent vertices which have a common outneighbour, then G has a 1-factor, i.e. a union of vertex-disjoint cycles covering all the vertices of G. Figure 2. An extremal example for Conjecture 5: let F be the digraph obtained from the complete digraph K = K^↔_{n−3} and a complete digraph on 3 other vertices x, y, z as follows: remove the edge from x to z, add all edges in both directions between x and K and all edges from y to K. There are also a number of degree conditions which involve triples or 4-sets of vertices, see e.g. the corresponding chapter in BIB004 .
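The extremal example F described above is small enough to check exhaustively. The following illustrative Python sketch builds F for a given n following the description of Figure 2 and confirms by brute force that it contains no Hamilton cycle (feasible only for small n; the code is ours, not part of the survey).

```python
from itertools import permutations

def build_F(n):
    """Extremal example for Conjecture 5: K is a complete digraph on n-3
    vertices; x, y, z form a complete digraph minus the edge x->z; x is joined
    to K in both directions, y only sends edges to K, z is not joined to K."""
    K = list(range(n - 3))
    x, y, z = n - 3, n - 2, n - 1
    edges = {(u, v) for u in K for v in K if u != v}
    edges |= {(x, y), (y, x), (y, z), (z, y), (z, x)}      # no (x, z)
    edges |= {(x, u) for u in K} | {(u, x) for u in K}
    edges |= {(y, u) for u in K}
    return n, edges

def has_hamilton_cycle(n, edges):
    # fix vertex 0 as the start of the cycle to cut down the symmetry
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all((cycle[i], cycle[(i + 1) % n]) in edges for i in range(n)):
            return True
    return False

n, edges = build_F(7)
print(has_hamilton_cycle(n, edges))   # expected: False
```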
|
A survey on Hamilton cycles in directed graphs <s> 2.3. <s> Proof. Let G satisfy the hypothesis of Theorem 1. Clearly, G contains a circuit ; let C be the longest one . If G has no Hamiltonian circuit, there is a vertex x with x ~ C . Since G is s-connected, there are s paths starting at x and terminating in C which are pairwise disjoint apart from x and share with C just their terminal vertices x l, X2, . . ., x s (see [ 11, Theorem 1) . For each i = 1, 2, . . ., s, let y i be the successor of x i in a <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> The main subjects of this survey paper are Hamitonian cycles, cycles of prescirbed lengths, cycles in tournaments, and partitions, packings, and coverings by cycles. Several unsolved problems and a bibiligraphy are included. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> Abstract We give a survey of results and conjectures concerning sufficient conditions in terms of connectivity and independence number for which a graph or digraph has various path or cyclic properties, for example hamilton path/cycle, hamilton connected, pancyclic, path/cycle covers, 2-cyclic. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chv\'atal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB005 </s> A survey on Hamilton cycles in directed graphs <s> 2.3. <s> We show that for each $\beta > 0$, every digraph $G$ of sufficiently large order $n$ whose outdegree and indegree sequences $d_1^+ \leq \cdots \leq d_n^+$ and $d_1^- \leq \cdots \leq d_n^-$ satisfy $d_i^+, d_i^- \geq \min{\{i + \beta n, n/2\}}$ is Hamiltonian. In fact, we can weaken these assumptions to (i) $d_i^+ \geq \min{\{i + \beta n, n/2\}}$ or $d^-_{n - i - \beta n} \geq n-i$, (ii) $d_i^- \geq \min{\{i + \beta n, n/2\}}$ or $d^+_{n - i - \beta n} \geq n-i$, and still deduce that $G$ is Hamiltonian. This provides an approximate version of a conjecture of Nash-Williams from 1975 and improves a previous result of Kuhn, Osthus, and Treglown. <s> BIB006
|
Degree sequences forcing Hamilton cycles in directed graphs. Nash-Williams raised the question of a digraph analogue of Chvátal's theorem quite soon after the latter was proved: for a digraph G it is natural to consider both its outdegree sequence d^+_1 ≤ ... ≤ d^+_n and its indegree sequence d^-_1 ≤ ... ≤ d^-_n. Conjecture 6 (Nash-Williams ). Suppose that G is a strongly connected digraph on n ≥ 3 vertices such that for all i < n/2 (i) d^+_i ≥ i + 1 or d^-_{n−i} ≥ n − i, and (ii) d^-_i ≥ i + 1 or d^+_{n−i} ≥ n − i. Then G contains a Hamilton cycle. It is even an open problem whether the conditions imply the existence of a cycle through any pair of given vertices (see BIB002 ). The following example shows that the degree condition in Conjecture 6 would be best possible in the sense that for all n ≥ 3 and all k < n/2 there is a non-Hamiltonian strongly connected digraph G on n vertices which satisfies the degree conditions except that d^-_k ≥ k in the kth pair of conditions. To see this, take an independent set I of size k < n/2 and a complete digraph K of order n − k. Pick a set X of k vertices of K and add all possible edges (in both directions) between I and X. The digraph G thus obtained is strongly connected, not Hamiltonian, and the sequence k, ..., k, n − k − 1, ..., n − k − 1, n − 1, ..., n − 1 (in which k appears k times, n − k − 1 appears n − 2k times and n − 1 appears k times) is both the out- and indegree sequence of G. In contrast to the undirected case there exist examples with a similar degree sequence to the above but whose structure is quite different (see BIB004 and BIB006 ). This is one of the reasons which makes the directed case much harder than the undirected one. In BIB006 , the following approximate version of Conjecture 6 for large digraphs was proved. Theorem 7 (Christofides, Keevash, Kühn and Osthus BIB006 ). For every β > 0 there exists an integer n_0 = n_0(β) such that the following holds. Suppose that G is a digraph on n ≥ n_0 vertices such that for all i < n/2 (i) d^+_i ≥ min{i + βn, n/2} or d^-_{n−i−βn} ≥ n − i, and (ii) d^-_i ≥ min{i + βn, n/2} or d^+_{n−i−βn} ≥ n − i. Then G contains a Hamilton cycle. This improved a recent result in BIB004 , where the degrees in the first parts of these conditions were not 'capped' at n/2. The earlier result in BIB004 was derived from a result in BIB005 on the existence of a Hamilton cycle in an oriented graph satisfying a certain expansion property. Capping the degrees at n/2 makes the proof far more difficult: the conditions of Theorem 7 only imply a rather weak expansion property and there are many types of digraphs which almost satisfy the conditions but are not Hamiltonian. The following weakening of Conjecture 6 was posed earlier by Nash-Williams . It would yield a digraph analogue of Pósa's theorem. Conjecture 8 (Nash-Williams ). Let G be a digraph on n ≥ 3 vertices such that d^+_i, d^-_i ≥ i + 1 for all i < (n − 1)/2, and such that additionally d^+_{⌈n/2⌉}, d^-_{⌈n/2⌉} ≥ ⌈n/2⌉ when n is odd. Then G contains a Hamilton cycle. The previous example shows the degree condition would be best possible in the same sense as described there. The assumption of strong connectivity is not necessary in Conjecture 8, as it follows from the degree conditions. Theorem 7 immediately implies a corresponding approximate version of Conjecture 8. In particular, for half of the vertex degrees (namely those whose value is n/2), the result matches the conjectured value. 2.4. Chvátal-Erdős type conditions. Another sufficient condition for Hamiltonicity in undirected graphs which is just as fundamental as those listed in the introduction is the Chvátal-Erdős theorem BIB001 : suppose that G is an undirected graph with n ≥ 3 vertices, for which the vertex-connectivity number κ(G) and the independence number α(G) satisfy κ(G) ≥ α(G), then G has a Hamilton cycle. Currently, there is no digraph analogue of this. Given a digraph G, let α_0(G) denote the size of the largest set S so that S induces no edge and let α_2(G) be the size of the largest set S so that S induces no cycle of length 2. So α_0(G) ≤ α_2(G).
α_0(G) is probably the more natural extension of the independence number to digraphs. However, even the following basic question (already discussed e.g. in BIB001 ) is still open.
|
A survey on Hamilton cycles in directed graphs <s> Question 9. <s> Abstract We give a survey of results and conjectures concerning sufficient conditions in terms of connectivity and independence number for which a graph or digraph has various path or cyclic properties, for example hamilton path/cycle, hamilton connected, pancyclic, path/cycle covers, 2-cyclic. <s> BIB001
|
Is there a function f_0(k) so that every digraph with κ(G) ≥ f_0(k) and α_0(G) ≤ k contains a Hamilton cycle? Here the connectivity κ(G) of a digraph is defined to be the size of the smallest set of vertices S so that G − S is either not strongly connected or consists of a single vertex. The following result shows that the analogous function for α_2(G) does exist. Theorem 10 (Jackson ). If G is a digraph with then G has a Hamilton cycle. The proof involves a 'reduction' of the problem to the undirected case. As observed by Thomassen and Chakroun (see BIB001 again), there are non-Hamiltonian digraphs with κ(G) = α_2(G) = 2 and κ(G) = α_2(G) = 3. But it could well be that every digraph satisfying κ(G) ≥ α_2(G) ≥ 4 has a Hamilton cycle. Even the following weaker conjecture is still wide open. Conjecture 11 (Jackson and Ordaz BIB001 ). If G is a digraph with κ(G) ≥ α_2(G) + 1, then G contains a Hamilton cycle. (In fact, they even conjectured that G as above is pancyclic.) Since the problem seems very difficult, even (say) a bound on κ which is polynomial in α_2 in Theorem 10 would be interesting.
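For very small digraphs the quantities appearing in these Chvátal-Erdős type conditions can be computed directly from their definitions. The sketch below (illustrative Python, exponential-time brute force, intended only to make the definitions of α_0(G), α_2(G) and κ(G) concrete) takes a digraph given as a dict of out-neighbour sets.

```python
from itertools import combinations

def alpha0_alpha2(adj):
    """alpha_0: largest S inducing no edge; alpha_2: largest S inducing no 2-cycle."""
    V = list(adj)
    a0 = a2 = 1
    for r in range(2, len(V) + 1):
        for S in combinations(V, r):
            no_edge = all(v not in adj[u] and u not in adj[v]
                          for u, v in combinations(S, 2))
            no_2cycle = all(not (v in adj[u] and u in adj[v])
                            for u, v in combinations(S, 2))
            if no_edge:
                a0 = max(a0, r)
            if no_2cycle:
                a2 = max(a2, r)
    return a0, a2

def strongly_connected(adj):
    def reach(a, root):
        seen, stack = {root}, [root]
        while stack:
            for w in a[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    root = next(iter(adj))
    rev = {v: {u for u in adj if v in adj[u]} for v in adj}
    return len(reach(adj, root)) == len(adj) == len(reach(rev, root))

def kappa(adj):
    """Size of a smallest vertex set whose removal leaves a digraph that is
    not strongly connected or consists of a single vertex."""
    V = list(adj)
    for r in range(len(V)):
        for S in combinations(V, r):
            rest = {v: adj[v] - set(S) for v in V if v not in S}
            if len(rest) <= 1 or not strongly_connected(rest):
                return r
    return len(V) - 1

# bidirected triangle (complete digraph on 3 vertices): alpha_0 = alpha_2 = 1, kappa = 2
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(alpha0_alpha2(tri), kappa(tri))
```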
|
A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> The screen or shade for the window is of flexible material so that when in open position it can be pleated. Opposite sides of the screen have tapes sewed on near the edges and male snap fasteners are spaced from each other on the tapes. These fasteners snap into holes in plastic glides which travel along plastic tracks. Either converted by screws or integral with the tracks are angle sealing strips, one flange being on the outside and arranged to prevent air movement from the inside of the screen to the outside, or vice versa, when the screen is closed. Either attached by screws to the track and sealing strip assembly or integral therewith is a mounting rail which is attached by screws to the building or support. A slanting roof is shown having several windows equipped with independently glided screens. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Hamilton cycles in oriented graphs <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB003
|
Recall that an oriented graph is a directed graph with no 2-cycles. Results on oriented graphs seem even more difficult to obtain than results for the digraph case (the Caccetta-Häggkvist conjecture on the girth of oriented graphs of large minimum outdegree is a notorious example of this kind). In particular, most problems regarding Hamiltonicity of such graphs were open until recently and many open questions still remain. 3.1. Minimum degree conditions. Thomassen raised the natural question of determining the minimum semidegree that forces a Hamilton cycle in an oriented graph. Thomassen initially believed that the correct minimum semidegree bound should be n/3 (this bound is obtained by considering a 'blow-up' of an oriented triangle). However, Häggkvist BIB001 later gave a construction which gives a lower bound of ⌈(3n − 4)/8⌉ − 1: For n of the form n = 4m + 3 where m is odd, we construct G on n vertices as in Figure 3 . Since every path which joins two vertices in D has to pass through B, it follows that every cycle contains at least as many vertices from B as it contains from D. As |D| > |B| this means that one cannot cover all the vertices of G by disjoint cycles. This construction can be extended to arbitrary n (see BIB003 ). The following result exactly matches this bound and improves earlier ones of several authors, e.g. BIB001 . In particular, the proof builds on an approximate version which was proved in BIB002 . Theorem 12 (Keevash, Kühn and Osthus BIB003 ). There exists an integer n_0 so that any oriented graph G on n ≥ n_0 vertices with minimum semidegree δ^0(G) ≥ (3n − 4)/8 contains a Hamilton cycle. Jackson conjectured that for regular oriented graphs one can significantly reduce the degree condition. The disjoint union of two regular tournaments on n/2 vertices shows that this would be best possible. Note that the degree condition is smaller than the one in Conjecture 2. We believe that it may actually be possible to reduce the degree condition even further if we assume that G is strongly 2-connected: is it true that for each d > 2, every d-regular strongly 2-connected oriented graph G on n ≤ 6d vertices has a Hamilton cycle? A suitable orientation of the example in Figure 1 shows that this would be best possible.
|
A survey on Hamilton cycles in directed graphs <s> 3.2. <s> The screen or shade for the window is of flexible material so that when in open position it can be pleated. Opposite sides of the screen have tapes sewed on near the edges and male snap fasteners are spaced from each other on the tapes. These fasteners snap into holes in plastic glides which travel along plastic tracks. Either converted by screws or integral with the tracks are angle sealing strips, one flange being on the outside and arranged to prevent air movement from the inside of the screen to the outside, or vice versa, when the screen is closed. Either attached by screws to the track and sealing strip assembly or integral therewith is a mounting rail which is attached by screws to the building or support. A slanting roof is shown having several windows equipped with independently glided screens. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> Let D be an oriented graph of order n ≧ 9 and minimum degree n − 2. This paper proves that D is pancyclic if for any two vertices u and v, either uv ≅ A(D), or dD+(u) + dD−(v) ≧ n − 3. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> 3.2. <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chv\'atal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB004
|
Ore-type conditions. Häggkvist BIB001 also made the following conjecture which is closely related to Theorem 12. Given an oriented graph G, let δ(G) denote the minimum degree of G (i.e. the minimum number of edges incident to a vertex) and set δ*(G) := δ(G) + δ^+(G) + δ^-(G). Conjecture 14 (Häggkvist BIB001 ). Every oriented graph G on n vertices with δ*(G) > (3n − 3)/2 contains a Hamilton cycle. (Note that this conjecture does not quite imply Theorem 12 as it results in a marginally greater minimum semidegree condition.) In BIB003 , Conjecture 14 was verified approximately, i.e. if δ*(G) ≥ (3/2 + o(1))n, then G has a Hamilton cycle (note this implies an approximate version of Theorem 12). The same methods also yield an approximate version of Ore's theorem for oriented graphs. Theorem 15 (Kelly, Kühn and Osthus BIB003 ). For every α > 0 there exists an integer n_0 = n_0(α) such that every oriented graph G of order n ≥ n_0 with d^+(x) + d^-(y) ≥ (3/4 + α)n whenever G does not contain an edge from x to y contains a Hamilton cycle. The construction in Figure 3 shows that the bound is best possible up to the term αn. It would be interesting to obtain an exact version of this result. Song BIB002 proved that every oriented graph on n ≥ 9 vertices with δ(G) ≥ n − 2 and d^+(x) + d^-(y) ≥ n − 3 whenever G does not contain an edge from x to y is pancyclic (i.e. G contains cycles of all possible lengths). In BIB002 he also claims (without proof) that the condition is best possible for infinitely many n as G may fail to contain a Hamilton cycle otherwise. Note that Theorem 15 implies that this claim is false. 3.3. Degree sequence conditions and Chvátal-Erdős type conditions. In BIB004 a construction was described which showed that there is no satisfactory analogue of Pósa's theorem for oriented graphs: as soon as we allow a few vertices to have a degree somewhat below 3n/8, then one cannot guarantee a Hamilton cycle. The question of exactly determining all those degree sequences which guarantee a Hamilton cycle remains open though. It is also not clear whether there may be a version of the Chvátal-Erdős theorem for oriented graphs.
|
A survey on Hamilton cycles in directed graphs <s> Tournaments <s> We obtain several sufficient conditions on the degrees of an oriented graph for the existence of long paths and cycles. As corollaries of our results we deduce that a regular tournament contains an edge-disjoint Hamilton cycle and path, and that a regular bipartite tournament is hamiltonian. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> The so-called Kelly conjecture1 A proof of the Kelly conjecture for large k has been announced by R. Haggkvist at several conferences and in [5] but to this date no proof has been published.states that every regular tournament on 2k+1 vertices has a decomposition into k-arc-disjoint hamiltonian cycles. In this paper we formulate a generalization of that conjecture, namely we conjecture that every k-arc-strong tournament contains k arc-disjoint spanning strong subdigraphs. We prove several results which support the conjecture:If D = (V, A) is a 2-arc-strong semicomplete digraph then it contains 2 arc-disjoint spanning strong subdigraphs except for one digraph on 4 vertices.Every tournament which has a non-trivial cut (both sides containing at least 2 vertices) with precisely k arcs in one direction contains k arc-disjoint spanning strong subdigraphs. In fact this result holds even for semicomplete digraphs with one exception on 4 vertices.Every k-arc-strong tournament with minimum in- and out-degree at least 37k contains k arc-disjoint spanning subdigraphs H1, H2, . . . , Hk such that each Hi is strongly connected.The last result implies that if T is a 74k-arc-strong tournament with speci.ed not necessarily distinct vertices u1, u2, . . . , uk, v1, v2, . . . , vk then T contains 2k arc-disjoint branchings $$F^{ - }_{{u_{1} }} ,F^{ - }_{{u_{2} }} ,...,F^{ - }_{{u_{k} }} ,F^{ + }_{{v_{1} }} ,F^{ + }_{{v_{2} }} ,...,F^{ + }_{{v_{k} }}$$ where $$F^{ - }_{{u_{i} }}$$ is an in-branching rooted at the vertex ui and $$F^{ + }_{{v_{i} }}$$ is an out-branching rooted at the vertex vi, i=1,2, . . . , k. This solves a conjecture of Bang-Jensen and Gutin [3].We also discuss related problems and conjectures. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> We show that every sufficiently large regular tournament can a lmost completely be decomposed into edge-disjoint Hamilton cycles. More precisely, for each � > 0 every regular tournament G of sufficiently large ordern contains at least (1/2 �)n edge-disjoint Hamilton cycles. This gives an approximate solution to a conjecture of Kelly from 1968. Our result also extends to almost regular tournaments. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Tournaments <s> In this paper we give an approximate answer to a question of Nash-Williams from 1970: we show that for every \alpha > 0, every sufficiently large graph on n vertices with minimum degree at least (1/2 + \alpha)n contains at least n/8 edge-disjoint Hamilton cycles. More generally, we give an asymptotically best possible answer for the number of edge-disjoint Hamilton cycles that a graph G with minimum degree \delta must have. We also prove an approximate version of another long-standing conjecture of Nash-Williams: we show that for every \alpha > 0, every (almost) regular and sufficiently large graph on n vertices with minimum degree at least $(1/2 + \alpha)n$ can be almost decomposed into edge-disjoint Hamilton cycles. <s> BIB004
|
A tournament is an orientation of a complete graph. It has long been known that tournaments enjoy particularly strong Hamiltonicity properties: Camion showed that we only need to assume strong connectivity to ensure that a tournament has a Hamilton cycle. Moon strengthened this by proving that every strongly connected tournament is even pancyclic. It is easy to see that a minimum semidegree of n/4 forces a tournament on n vertices to be strongly connected, leading to a better degree condition for Hamiltonicity than that of (3n − 4)/8 for the class of all oriented graphs. 4.1. Edge-disjoint Hamilton cycles and decompositions. A Hamilton decomposition of a graph or digraph G is a set of edge-disjoint Hamilton cycles which together cover all the edges of G. Not many examples of graphs with such decompositions are known. One can construct a Hamilton decomposition of a complete graph if and only if its order is odd (this was first observed by Walecki in the late 19th century). Tillson proved that a complete digraph G on n vertices has a Hamilton decomposition if and only if n ≠ 4, 6. The following conjecture of Kelly from 1968 (see Moon ) would be a far-reaching generalization of Walecki's result: Conjecture 16 (Kelly). Every regular tournament on n vertices can be decomposed into (n − 1)/2 edge-disjoint Hamilton cycles. In BIB003 we proved an approximate version of Kelly's conjecture. Moreover, the result holds even for oriented graphs G which are not quite regular and whose 'underlying' undirected graph is not quite complete. Theorem 17 (Kühn, Osthus and Treglown BIB003 ). For every η 1 > 0 there exist n 0 = n 0 (η 1 ) and η 2 = η 2 (η 1 ) > 0 such that the following holds. Suppose that G is an oriented graph on n ≥ n 0 vertices such that δ 0 (G) ≥ (1/2 − η 2 )n. Then G contains at least (1/2 − η 1 )n edge-disjoint Hamilton cycles. We also proved that the condition on the minimum semidegree can be relaxed to δ 0 (G) ≥ (3/8 + η 2 )n. This is asymptotically best possible since the construction described in Figure 3 is almost regular. Some earlier support for Kelly's conjecture was provided by Thomassen [63] , who showed that the edges of every regular tournament can be covered by at most 12n Hamilton cycles. In this paper, we improve this to an asymptotically best possible result. We will give a proof (which relies on Theorem 17) in Section 6.1. Theorem 18. For every ξ > 0 there exists an integer n 0 = n 0 (ξ) so that every regular tournament G on n ≥ n 0 vertices contains a set of (1/2 + ξ)n Hamilton cycles which together cover all the edges of G. Kelly's conjecture has been generalized in several ways, e.g. Bang-Jensen and Yeo BIB002 conjectured that every k-edge-connected tournament has a decomposition into k spanning strong digraphs. A bipartite version of Kelly's conjecture was also formulated by Jackson BIB001 . Thomassen made the following conjecture which replaces the assumption of regularity by high connectivity. Conjecture 19 (Thomassen ). For every k ≥ 2 there is an integer f (k) so that every strongly f (k)-connected tournament has k edge-disjoint Hamilton cycles. A conjecture of Erdős (see ) which is also related to Kelly's conjecture states that almost all tournaments G have at least δ 0 (G) edge-disjoint Hamilton cycles. Similar techniques as in the proof of the approximate version of Kelly's conjecture were used at the same time in BIB004 to prove approximate versions of two long-standing conjectures of Nash-Williams on edge-disjoint Hamilton cycles in (undirected) graphs.
One of these results states that one can almost decompose any dense regular graph into Hamilton cycles. Theorem 20 (Christofides, Kühn and Osthus BIB004 ). For every η > 0 there is an integer n 0 = n 0 (η) so that every d-regular graph on n ≥ n 0 vertices with d ≥ (1/2 + η)n contains at least (d − ηn)/2 edge-disjoint Hamilton cycles. In Section 6.1 we deduce the following analogue of Theorem 18: Theorem 21. For every ξ > 0 there is an integer n 0 = n 0 (ξ) so that every d-regular graph G on n ≥ n 0 vertices with d ≥ (1/2 + ξ)n contains a set of at most (d + ξn)/2 Hamilton cycles which together cover all the edges of G.
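The constants in Conjecture 16 and in Theorems 20 and 21 are exactly what a simple edge count predicts (this observation is ours, added for orientation): a regular tournament on n vertices has n(n − 1)/2 edges and every Hamilton cycle has n edges, so a Hamilton decomposition must consist of precisely (n − 1)/2 cycles. Likewise, a d-regular graph has dn/2 edges, so it contains at most d/2 edge-disjoint Hamilton cycles and needs at least d/2 Hamilton cycles to cover all of its edges; Theorems 20 and 21 match these bounds up to the error terms ηn/2 and ξn/2.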
|
A survey on Hamilton cycles in directed graphs <s> Counting Hamilton cycles in tournaments. <s> Solving an old conjecture of Szele we show that the maximum number of directed Hamiltonian paths in a tournament onn vertices is at mostc · n3/2· n!/2n−1, wherec is a positive constant independent ofn. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Counting Hamilton cycles in tournaments. <s> Let $P(n)$ and $C(n)$ denote, respectively, the maximum possible numbers of Hamiltonian paths and Hamiltonian cycles in a tournament on n vertices. The study of $P(n)$ was suggested by Szele [14], who showed in an early application of the probabilistic method that $P(n) \geq n!2^{-n+1}$, and conjectured that $\lim ( {P(n)}/ {n!} )^{1/n}= 1/2.$ This was proved by Alon [2], who observed that the conjecture follows from a suitable bound on $C(n)$, and showed $C(n) <O(n^{3/2}(n-1)!2^{-n}).$ Here we improve this to $C(n)<O\big(n^{3/2-\xi}(n-1)!2^{-n}\big),$ with $\xi = 0.2507$… Our approach is mainly based on entropy considerations. <s> BIB002
|
One of the earliest results on tournaments (and the probabilistic method) was obtained by Szele , who showed that the maximum number P (n) of Hamilton paths in a tournament on n vertices satisfies P (n) = O(n!/2^{3n/4}) and P (n) ≥ n!/2^{n−1} =: f (n). The lower bound is obtained by considering a random tournament. The best upper bound is due to Friedgut and Kahn BIB002 who showed that P (n) = O(n^c f (n)), where c is slightly less than 5/4. The best current lower bound is due to Wormald , who showed that P (n) ≥ (2.855 + o(1))f (n). So in particular, the maximum is not attained by random tournaments. Also, he conjectured that this bound is very close to the correct value. Similarly, one can define the maximum number C(n) of Hamilton cycles in a tournament on n vertices. Note that by considering a random tournament again, we obtain C(n) ≥ (n − 1)!/2^n =: g(n). Unsurprisingly, C(n) and P (n) are very closely related, e.g. we have P (n) ≥ nC(n). In particular, the main result in BIB002 states that C(n) = O(n^c g(n)), where c is the same as above. This implies the above bound on P (n), since Alon BIB001 observed that P (n) ≤ 4C(n + 1). Also, Wormald showed that C(n) ≥ (2.855 + o(1))g(n). (Note this also follows by combining Alon's observation with the lower bound on P (n) in .) Of course, in general it does not make sense to ask for the minimum number of Hamilton paths or cycles in a tournament. However, the question does make sense for regular tournaments. Friedgut and Kahn BIB002 asked whether the number of Hamilton cycles in a regular tournament is always at least Ω(g(n)). The best result towards this was recently obtained by Cuckler , who showed that every regular tournament on n vertices contains at least n!/(2 + o(1))^n Hamilton cycles. This also answers an earlier question of Thomassen. Asking for the minimum number of Hamilton paths in a tournament T also makes sense if we assume that T is strongly connected. Busch determined this number exactly by showing that an earlier construction of Moon is best possible. The related question on the minimum number of Hamilton cycles in a strongly 2-connected tournament is still open (see ).
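The lower bounds f (n) and g(n) above come from a one-line expectation calculation (Szele's probabilistic argument, sketched here for convenience): orient each edge of the complete graph on n vertices independently with probability 1/2 in each direction. Each of the n! linear orderings of the vertices spans a directed Hamilton path with probability 2^{−(n−1)}, and each of the (n − 1)! possible directed Hamilton cycles appears with probability 2^{−n}. Hence the expected numbers of Hamilton paths and cycles in a random tournament are

n!/2^{n−1} = f (n) and (n − 1)!/2^n = g(n),

so some tournament attains at least each of these values.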
|
A survey on Hamilton cycles in directed graphs <s> 4.3. <s> We prove that with three exceptions, every tournament of order n contains each oriented path of order n. The exceptions are the antidirected paths in the 3-cycle, in the regular tournament on 5 vertices, and in the Paley tournament on 7 vertices. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> 4.3. <s> Sumner@?s universal tournament conjecture states that any tournament on 2n-2 vertices contains a copy of any directed tree on n vertices. We prove an asymptotic version of this conjecture, namely that any tournament on (2+o(1))n vertices contains a copy of any directed tree on n vertices. In addition, we prove an asymptotically best possible result for trees of bounded degree, namely that for any fixed @D, any tournament on (1+o(1))n vertices contains a copy of any directed tree on n vertices with maximum degree at most @D. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> 4.3. <s> Sumner's universal tournament conjecture states that any tournament on $2n-2$ vertices contains any directed tree on $n$ vertices. In this paper we prove that this conjecture holds for all sufficiently large $n$. The proof makes extensive use of results and ideas from a recent paper by the same authors, in which an approximate version of the conjecture was proved. <s> BIB003
|
4.3. Sumner's universal tournament conjecture. Sumner's universal tournament conjecture states that every tournament on 2n − 2 vertices contains every directed tree on n vertices. In BIB002 an approximate version of this conjecture was proved and subsequently in BIB003 , the conjecture was proved for all large trees (see e.g. BIB002 for a discussion of numerous previous results). The proof in BIB003 builds on several structural results proved in BIB002 . Theorem 22 (Kühn, Mycroft and Osthus BIB002 BIB003 ). There is an integer n 0 such that for all n ≥ n 0 every tournament G on 2n − 2 vertices contains any directed tree T on n vertices. While this result is not directly related to the main topic of the survey (i.e. Hamilton cycles), there are several connections. Firstly, just as with many of the new results in the other sections, the concept of a robust expander is crucial in the proof of Theorem 22. Secondly, the proof of Theorem 22 also makes direct use of the fact that a robust expander contains a Hamilton cycle (Theorem 30). Suitable parts of the tree T are embedded by considering a random walk on (the blow-up of) such a Hamilton cycle. In BIB002 , we also proved that if T has bounded maximum degree, then it suffices if the tournament G has (1 + α)n vertices. This is best possible in the sense that the 'error term' αn cannot be completely omitted in general. But it seems possible that it can be reduced to a constant which depends only on the maximum degree of T . If T is an orientation of a path, then the error term can be omitted completely: Havet and Thomassé BIB001 proved that every tournament on at least 8 vertices contains every possible orientation of a Hamilton path (for arbitrary orientations of Hamilton cycles, see Section 5.2).
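Returning to the bound in Sumner's conjecture itself, a standard example (easy to verify, though not spelled out above) shows that 2n − 2 cannot be replaced by 2n − 3: let T be the out-star on n vertices, i.e. a root with n − 1 edges directed towards its leaves. Embedding T requires a vertex of outdegree at least n − 1, but in a regular tournament on 2n − 3 vertices every vertex has outdegree exactly (2n − 4)/2 = n − 2. Hence no regular tournament on 2n − 3 vertices contains T.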
|
A survey on Hamilton cycles in directed graphs <s> Generalizations <s> We show that for each \ell\geq 4 every sufficiently large oriented graph G with \delta^+(G), \delta^-(G) \geq \lfloor |G|/3 \rfloor +1 contains an \ell-cycle. This is best possible for all those \ell\geq 4 which are not divisible by 3. Surprisingly, for some other values of \ell, an \ell-cycle is forced by a much weaker minimum degree condition. We propose and discuss a conjecture regarding the precise minimum degree which forces an \ell-cycle (with \ell \geq 4 divisible by 3) in an oriented graph. We also give an application of our results to pancyclicity and consider \ell-cycles in general digraphs. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Generalizations <s> Abstract We prove that every digraph on n vertices with minimum out-degree 0.3465 n contains an oriented triangle. This improves the bound of 0.3532 n of Hamburger, Haxell and Kostochka. The main tool for our proof is the theory of flag algebras developed recently by Razborov. <s> BIB002
|
In this section, we discuss several natural ways of strengthening the notion of a Hamilton cycle. 5.1. Pancyclicity. Recall that a graph (or digraph) is pancyclic if it contains a cycle of every possible length. Dirac's theorem implies that a graph on n ≥ 3 vertices is pancyclic if it has minimum degree greater than n/2. (To see this, remove a vertex x and apply Dirac's theorem to the remaining subgraph to obtain a cycle of length n − 1. Then consider the neighbourhood of x on this cycle to obtain cycles of all possible lengths through x.) Similarly, one can use Ghouila-Houri's theorem to deduce that every digraph on n vertices with minimum semidegree greater than n/2 is pancyclic. In both cases, the complete bipartite (di-)graph whose vertex class sizes are as equal as possible shows that the bound is best possible. More generally, the same trick also works for Meyniel's theorem: let G be a strongly connected digraph on n ≥ 2 vertices. If d(x) + d(y) ≥ 2n + 1 for all pairs of non-adjacent vertices x ≠ y in G, then G is pancyclic. (Indeed, the conditions imply that either G contains a strongly connected tournament or contains a vertex x with d(x) > n, in which case we can proceed as above.) If n is even, the bound 2n + 1 is best possible. If n is odd, it follows from a result of Thomassen [60] that one can improve it to 2n. For oriented graphs the minimum semidegree threshold which guarantees pancyclicity turns out to be (3n − 4)/8, i.e. the same threshold as for Hamiltonicity (see BIB001 ). The above trick of removing a vertex does not work here. Instead, to obtain 'long' cycles one can modify the proof of Theorem 12. A triangle is guaranteed by results on the Caccetta-Häggkvist conjecture, e.g. a very recent result of Hladký, Král and Norine BIB002 states that every oriented graph on n vertices with minimum semidegree at least 0.347n contains a 3-cycle. Short cycles of length ℓ ≥ 4 can be guaranteed by a result in BIB001 which states that for all n ≥ 10^{10} ℓ every oriented graph G on n vertices with δ 0 (G) ≥ ⌊n/3⌋ + 1 contains an ℓ-cycle. This is best possible for all those ℓ ≥ 4 which are not divisible by 3. Surprisingly, for some other values of ℓ, an ℓ-cycle is forced by a much weaker minimum degree condition. In particular, the following conjecture was made in BIB001 . Conjecture 23 (Kelly, Kühn and Osthus BIB001 ). Let ℓ ≥ 4 be a positive integer and let k be the smallest integer that is greater than 2 and does not divide ℓ. Then there exists an integer n 0 = n 0 (ℓ) such that every oriented graph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ ⌊n/k⌋ + 1 contains an ℓ-cycle. The extremal examples for this conjecture are always 'blow-ups' of cycles of length k. Possibly one can even weaken the condition by requiring only the outdegree of G to be large. It is easy to see that the only values of k that can appear in Conjecture 23 are of the form k = p^s with k ≥ 3, where p is a prime and s a positive integer.
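For concreteness, here are a few instances of Conjecture 23, obtained simply by reading off the definition of k: if 3 does not divide ℓ then k = 3 and the conjectured threshold ⌊n/3⌋ + 1 matches the result quoted above; if ℓ = 15 then 3 divides 15 but 4 does not, so k = 4 and the conjectured threshold is ⌊n/4⌋ + 1; if ℓ = 12 then 3 and 4 divide 12 but 5 does not, so k = 5 and the threshold drops further to ⌊n/5⌋ + 1.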
|
A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> Sufficient conditions are given for the existence of an oriented path with given end vertices in a tournament. As a consequence a conjecture of Rosenfeld is established. This states that if n is large enough, then every non-strongly oriented cycle of order n is contained in every tournament of order n. It is well known and easy to see that every tournament has a directed hamilton path. Rosenfeld [8] conjectured that if n is large enough, then any oriented path of order n is contained in any tournament of order n. This has been established for alternating paths by Griinbaum [5] and Rosenfeld [8], for paths with two blocks (a block being a maximal directed subpath) by Alspach and Rosenfeld [1] and Straight [10], for paths where the ith block has length at least i + 1 by Alspach and Rosenfeld [1] and, curiously, for all paths if n is a power of 2 by Forcade [4]. Reid and Wormald [7] have shown that every oriented path of order n is contained in every tournament of order 3n/2. It is easy to show that a tournament has a strongly oriented hamilton cycle if and only if it is strongly connected. Rosenfeld in [9] conjectured that any non-strongly oriented cycle of order n is contained in any tournament of order n, provided n is large enough. This has been verified for cycles with a block of length n 1 by Griinbaum, for alternating cycles by Rosenfeld [9] and Thomassen [11], and for cycles with just two blocks by Benhocine and Wojda [2]. (It has also been shown by Heydemann, Sotteau and Thomassen [6] that every digraph of order n with (n 1) (n 2) + 3 edges contains every non-strong oriented cycle of order n.) In this paper we prove both conjectures (the first is of course a consequence of the second). "Large enough" in this case means at least 2128, or about 1039. In fact it seems that the path conjecture is true for n > 8 (indeed there are probably just three pairs (P, T) with P t T) and that the cycle conjecture is true for n > 9. We stress that we make no attempt to give a small lower bound, but aim to establish the conjecture in the shortest time possible. The main result is Theorem 14, which rests on Lemmas 9, 11 and 13. Roughly speaking, Lemma 13 proves the conjecture if the cycle has two separate and fair sized blocks (as do most cycles), Lemma 9 proves it if the cycle has a huge block (as in Griinbaum's result) and Lemma 11 takes care of cycles with many small blocks (such as alternating cycles). The proof of the conjecture is in the third section. In the first two sections we establish results of independent interest concerning the existence of oriented paths with specified end vertices. Apart from the odd detail, they are as follows: let P Received by the editors August 30, 1982 and, in revised form, January 20, 1985 and July 5, 1985. 1980 Mathematics Subject Classification. Primary 05C20; Secondary 05C45. (?)1986 American Mathematical Society 0002-9947/86 $1.00 + $.25 per page <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We show that a directed graph of order n will contain n-cycles of every orientation, provided each vertex has indegree and outdegree at least (1/2 + n-1/6)n and n is sufficiently large. © 1995 John Wiley & Sons, Inc. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We prove that with three exceptions, every tournament of order n contains each oriented path of order n. 
The exceptions are the antidirected paths in the 3-cycle, in the regular tournament on 5 vertices, and in the Paley tournament on 7 vertices. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We prove that every tournament of order n?68 contains every oriented Hamiltonian cycle except possibly the directed one when the tournament is reducible. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Arbitrary orientations. <s> We use a randomised embedding method to prove that for all \alpha>0 any sufficiently large oriented graph G with minimum in-degree and out-degree \delta^+(G),\delta^-(G)\geq (3/8+\alpha)|G| contains every possible orientation of a Hamilton cycle. This confirms a conjecture of H\"aggkvist and Thomason. <s> BIB005
|
As mentioned earlier, the most natural notion of a cycle in a digraph is to have all edges directed consistently. But it also makes sense to ask for Hamilton cycles where the edges are oriented in some prescribed way, e.g. to ask for an 'antidirected' Hamilton cycle where consecutive edges have opposite directions. Surprisingly, it turns out that both for digraphs and oriented graphs the minimum degree threshold which guarantees a 'consistent' Hamilton cycle is approximately the same as that which guarantees an arbitrary orientation of a Hamilton cycle. Theorem 24 (Häggkvist and Thomason BIB002 ). There exists an n 0 so that every digraph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ n/2 + n^{5/6} contains every orientation of a Hamilton cycle. In , they conjectured an analogue of this for oriented graphs, which was recently proved by Kelly. Theorem 25 (Kelly BIB005 ). For every α > 0 there exists an integer n 0 = n 0 (α) such that every oriented graph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ (3/8 + α)n contains every orientation of a Hamilton cycle. The proof of this result uses Theorem 12 as well as the notion of expanding digraphs. Interestingly, Kelly observed that the thresholds for various orientations do not coincide exactly: for instance, if we modify the example in Figure 3 so that all classes have the same odd size, then the resulting oriented graph has minimum semidegree (3n − 4)/8 but no antidirected Hamilton cycle. Thomason BIB001 showed that for large tournaments strong connectivity ensures every possible orientation of a Hamilton cycle. More precisely, he showed that for n ≥ 2^{128}, every tournament on n vertices contains all possible orientations of a Hamilton cycle, except possibly the 'consistently oriented' one. (Note that this also implies that every large tournament contains every orientation of a Hamilton path, i.e. a weaker version of the result in BIB003 mentioned earlier.) The bound on n was later reduced to 68 by Havet BIB004 . Thomason conjectured that the correct bound is n ≥ 9.
|
A survey on Hamilton cycles in directed graphs <s> k-ordered <s> A Hamiltonian graph $G$ of order $n$ is $k$-ordered, $2\leq k \leq n$, if for every sequence $v_1, v_2, \ldots ,v_k$ of $k$ distinct vertices of $G$, there exists a Hamiltonian cycle that encounters $v_1, v_2, \ldots , v_k$ in this order. In this paper, answering a question of Ng and Schultz, we give a sharp bound for the minimum degree guaranteeing that a graph is a $k$-ordered Hamiltonian graph under some mild restrictions. More precisely, we show that there are $\varepsilon, n_0> 0$ such that if $G$ is a graph of order $n\geq n_0$ with minimum degree at least $\lceil \frac{n}{2} \rceil + \lfloor \frac{k}{2} \rfloor - 1$ and $2\leq k \leq \eps n$, then $G$ is a $k$-ordered Hamiltonian graph. It is also shown that this bound is sharp for every $2\leq k \leq \lfloor \frac{n}{2} \rfloor$. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> k-ordered <s> For a positive integer k, a graph G is k-ordered hamiltonian if for every ordered sequence of k vertices there is a hamiltonian cycle that encounters the vertices of the sequence in the given order. It is shown that if G is a graph of order n with 3 ≤ k ≤ n-2, and deg(u) + deg(v) ≥ n + (3k - 9)-2 for every pair u, v of nonadjacent vertices of G, then G is k-ordered hamiltonian. Minimum degree conditions are also given for k-ordered hamiltonicity. © 2003 Wiley Periodicals, Inc. J Graph Theory 42: 199–210, 2003 <s> BIB002
|
5.3. k-ordered Hamilton cycles. Suppose that we require our (Hamilton) cycle to visit several vertices in a specific order. More formally, we say that a graph G is k-ordered if for every sequence s 1 , . . . , s k of distinct vertices of G there is a cycle which encounters s 1 , . . . , s k in this order. G is k-ordered Hamiltonian if it contains a Hamilton cycle with this property. Kierstead, Sárközy and Selkow BIB001 determined the minimum degree which forces an (undirected) graph to be k-ordered Hamiltonian. Theorem 26 (Kierstead, Sárközy and Selkow BIB001 ). For all k ≥ 2, every graph on n ≥ 11k − 3 vertices of minimum degree at least ⌈n/2⌉ + ⌊k/2⌋ − 1 is k-ordered Hamiltonian. The extremal example consists of two cliques intersecting in k − 1 vertices if k is even and two cliques intersecting in k − 2 vertices if k is odd. The case when n is not too large compared to k is still open. The corresponding Ore-type problem was solved in BIB002 . Here the Ore-type result does not imply the Dirac-type result above. Many variations and stronger notions have been investigated (see e.g. again). Directed graphs form a particularly natural setting for this kind of question. The following result gives a directed analogue of Theorem 26. Theorem 27 (Kühn, Osthus and Young ). For every k ≥ 3 there is an integer n 0 = n 0 (k) such that every digraph G on n ≥ n 0 vertices with minimum semidegree δ 0 (G) ≥ ⌈(n + k)/2⌉ − 1 is k-ordered Hamiltonian. Note that if n is even and k is odd the bound on the minimum semidegree is slightly larger than in the undirected case. However, it is best possible in all cases. In fact, if the minimum semidegree is smaller, it turns out that G need not even be k-ordered. Again, the family of extremal examples turns out to be much richer than in the undirected case. Note that every Hamiltonian digraph is 2-ordered Hamiltonian, so the case when k ≤ 2 in Theorem 27 is covered by Ghouila-Houri's theorem. It would be interesting to obtain an Ore-type or an oriented version of Theorem 27.
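With the bound of Theorem 27 as stated above, the parity remark can be checked directly (our calculation): if n is even and k is odd then ⌈(n + k)/2⌉ − 1 = (n + k − 1)/2, while the undirected bound of Theorem 26 is ⌈n/2⌉ + ⌊k/2⌋ − 1 = (n + k − 3)/2, so the directed bound exceeds the undirected one by exactly 1; for the other three combinations of parities of n and k the two expressions coincide.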
|
A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Our main aim is to show that for every e > 0 and k ∈ N there is an n(e, k) such that if T is a tournament of order n ≥n(e, k) and in T every vertex has indegree at least (14+e)n and at most (34−e)n then T contains the kth power of a Hamilton cycle. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> In this paper, the following theorem and some related problems are investigated.THEOREM. Let T be a 2-connected n-tournament with n ≥ 6. Then T contains two vertex-disjoint cycles of lengths k and n − k for any integer k with n − 3 ≥ k ≥ 3, unless T is isomorphic to the 7-tournament which contains no transitive 4-tournament. <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Let k be a positive integer. A strong digraph G is termed k-connected if the removal of any set of fewer than k vertices results in a strongly connected digraph. The purpose of this paper is to show that every k-connected tournament with at least 8k vertices contains k vertex-disjoint directed cycles spanning the vertex set. This result answers a question posed by Bollobas. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> Packing and decomposition of combinatorial objects such as graphs, digraphs, and hypergraphs by smaller objects are central problems in combinatorics and combinatorial optimization. Their study combines probabilistic, combinatorial, and algebraic methods. In addition to being among the most fascinating purely combinatorial problems, they are often motivated by algorithmic applications. There is a considerable number of intriguing fundamental problems and results in this area, and the goal of this paper is to survey the state-of-the-art. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Factors with prescribed cycle lengths. <s> An oriented graph is a directed graph which can be obtained from a simple undirected graph by orienting its edges. In this paper we show that any oriented graph G on n vertices with minimum indegree and outdegree at least (1/2-o(1))n contains a packing of cyclic triangles covering all but at most 3 vertices. This almost answers a question of Cuckler and Yuster and is best possible, since for n = 3 mod 18 there is a tournament with no perfect triangle packing and with all indegrees and outdegrees (n-1)/2 or (n-1)/2 \pm 1. Under the same hypotheses, we also show that one can embed any prescribed almost 1-factor, i.e. for any sequence n_1,...,n_t with n_1+...+n_t < n-O(1) we can find a vertex-disjoint collection of directed cycles with lengths n_1,...,n_t. In addition, under quite general conditions on the n_i we can remove the O(1) additive error and find a prescribed 1-factor. <s> BIB005
|
5.4. Factors with prescribed cycle lengths. Another natural way of generalizing Dirac's theorem is to ask for a certain set of vertex-disjoint cycles in G which together cover all the vertices of G (note this also generalizes the notion of pancyclicity). For large undirected graphs, Abbasi determined the minimum degree which guarantees k vertex-disjoint cycles in a graph G whose (given) lengths are n 1 , . . . , n k , where the n i sum up to n and where the order n of G is sufficiently large. As in the case of Hamilton cycles, the corresponding questions for directed and oriented graphs appear much harder than in the undirected case and again much less is known. Keevash and Sudakov BIB005 recently obtained the following result. Theorem 28 (Keevash and Sudakov BIB005 ). There exist positive constants c, C and an integer n 0 so that whenever G is an oriented graph on n ≥ n 0 vertices with minimum semidegree at least (1/2 − c)n and whenever n 1 , . . . , n t are such that n 1 + · · · + n t ≤ n − C, then G contains vertex-disjoint cycles of length n 1 , . . . , n t . In general, one cannot take C = 0. In the case of triangles (i.e. when all the n i = 3), they show that one can choose C = 3. This comes very close to proving a recent conjecture formulated independently by Cuckler and Yuster BIB004 , which states that every regular tournament on n = 6k + 3 vertices contains vertex-disjoint triangles covering all the vertices of the tournament. Similar questions were also raised earlier by Song BIB002 . For instance, given t, he asked for the smallest integer f (t) so that all but a finite number of strongly f (t)-connected tournaments T satisfy the following: Let n be the number of vertices of T and let n 1 + · · · + n t = n. Then T contains vertex-disjoint cycles of length n 1 , . . . , n t . Chen, Gould and Li BIB003 proved the weaker result that every sufficiently large t-connected tournament G contains t vertex-disjoint cycles which together cover all the vertices of G. This proved a conjecture of Bollobás. 5.5. Powers of Hamilton cycles. Komlós, Sárközy and Szemerédi showed that every sufficiently large graph G on n vertices with minimum degree at least kn/(k + 1) contains the kth power of a Hamilton cycle. Extremal examples are complete (k + 1)-partite graphs with classes of almost equal size. It appears likely that the situation for digraphs is similar. However, just as for ordinary Hamilton cycles, it seems that for oriented graphs the picture is rather different. (Both for digraphs and oriented graphs, the most natural definition of the kth power of a cycle is a cyclically ordered set of vertices so that every vertex sends an edge to the next k vertices in the ordering.) Conjecture 29 (Treglown [64] ). For every ε > 0 there is an integer n 0 = n 0 (ε) so that every oriented graph G on n ≥ n 0 vertices with minimum semidegree at least (5/12 + ε)n contains the square of a Hamilton cycle. A construction which shows that the constant 5/12 cannot be improved is given in Figure 4 (in the figure, an edge drawn between two of the vertex classes indicates an orientation of the complete bipartite graph between them such that within each class the in- and outdegrees of the vertices differ by at most one). We claim that the square of any Hamilton cycle would have to visit a vertex of B in between two visits of E. Since |B| < |E|, this shows that the graph does not contain the square of a Hamilton cycle. To prove the claim, suppose that F is a squared Hamilton cycle and consider a vertex e of F which lies in E. Then the predecessor of e lies in C or B, so without loss of generality we may assume that it is a vertex c 1 ∈ C.
Again, the predecessor of c 1 lies in C or B (since it must lie in the common inneighbourhood of C and E), so without loss of generality we may assume that it is a vertex c 2 ∈ C. The predecessor of c 2 can now lie in A, B or C. If it lies in B we are done again; if it is a vertex c 3 ∈ C, we consider its predecessor, which can again only lie in A, B or C. Since F must visit all vertices, it follows that we eventually arrive at a predecessor a ∈ A whose successor on F is some vertex c ∈ C. The predecessor of a on F must lie in the common inneighbourhood of a and c, so it must lie in B, as required. For the case of tournaments, the problem was solved asymptotically by Bollobás and Häggkvist BIB001 . Given a tournament T of large order n with minimum semidegree at least n/4 + εn, they proved that (for fixed k) T contains the kth power of a Hamilton cycle. So asymptotically, the semidegree threshold for an ordinary Hamilton cycle in a tournament is the same as that for the kth power of a Hamilton cycle.
|
A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We prove that any strong tournament with minimum outdegree at least 3k+3 has at least 4 K k! distinct Hamilton circuits and that every regular tournament of order n can be covered by a collection of 12n Hamilton circuits. <s> BIB001 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We provide an NC algorithm for nding Hamilton cycles in directed graphs with a certain robust expansion property. This property captures several known criteria for the existence of Hamilton cycles in terms of the degree sequence and thus we provide algorithmic proofs of (i) an ‘oriented’ analogue of Dirac’s theorem and (ii) an approximate version (for directed graphs) of Chv atal’s theorem. Moreover, our main result is used as a tool <s> BIB002 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each \alpha>0 every sufficiently large oriented graph G with \delta^+(G),\delta^-(G)\ge 3|G|/8+ \alpha |G| contains a Hamilton cycle. This gives an approximate solution to a problem of Thomassen. In fact, we prove the stronger result that G is still Hamiltonian if \delta(G)+\delta^+(G)+\delta^-(G)\geq 3|G|/2 + \alpha |G|. Up to the term \alpha |G| this confirms a conjecture of H\"aggkvist. We also prove an Ore-type theorem for oriented graphs. <s> BIB003 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> Let G be a simple graph on n vertices. A conjecture of Bollobas and Eldridge [5] asserts that if δ(G) ≥ kn−1 k+1 then G contains any n vertex graph H with ∆(H) = k. We prove a strengthened version of this conjecture for bipartite, bounded degree H, for sufficiently large n. This is the first result on this conjecture for expander graphs of arbitrary (but bounded) degree. An important tool for the proof is a new version of the Blow-up Lemma. <s> BIB004 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each \eta>0 every digraph G of sufficiently large order n is Hamiltonian if its out- and indegree sequences d^+_1\le ... \le d^+_n and d^- _1 \le ... \le d^-_n satisfy (i) d^+_i \geq i+ \eta n or d^-_{n-i- \eta n} \geq n-i and (ii) d^-_i \geq i+ \eta n or d^+_{n-i- \eta n} \geq n-i for all i<n/2. This gives an approximate solution to a problem of Nash-Williams concerning a digraph analogue of Chv\'atal's theorem. In fact, we prove the stronger result that such digraphs G are pancyclic. <s> BIB005 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that every sufficiently large oriented graph with minimum in- and outdegree at least (3n-4)/8 contains a Hamilton cycle. This is best possible and solves a problem of Thomassen from 1979. <s> BIB006 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> In this paper we prove a sufficient condition for the existence of a Hamilton cycle, which is applicable to a wide variety of graphs, including relatively sparse graphs. In contrast to previous criteria, ours is based on only two properties: one requiring expansion of ``small'' sets, the other ensuring the existence of an edge between any two disjoint ``large'' sets. We also discuss applications in positional games, random graphs and extremal graph theory. 
<s> BIB007 </s> A survey on Hamilton cycles in directed graphs <s> Robustly expanding digraphs <s> We show that for each $\beta > 0$, every digraph $G$ of sufficiently large order $n$ whose outdegree and indegree sequences $d_1^+ \leq \cdots \leq d_n^+$ and $d_1^- \leq \cdots \leq d_n^-$ satisfy $d_i^+, d_i^- \geq \min{\{i + \beta n, n/2\}}$ is Hamiltonian. In fact, we can weaken these assumptions to (i) $d_i^+ \geq \min{\{i + \beta n, n/2\}}$ or $d^-_{n - i - \beta n} \geq n-i$, (ii) $d_i^- \geq \min{\{i + \beta n, n/2\}}$ or $d^+_{n - i - \beta n} \geq n-i$, and still deduce that $G$ is Hamiltonian. This provides an approximate version of a conjecture of Nash-Williams from 1975 and improves a previous result of Kuhn, Osthus, and Treglown. <s> BIB008
|
Roughly speaking, a graph is an expander if for every set S of vertices the neighbourhood N (S) of S is significantly larger than S itself. A number of papers have recently demonstrated that there is a remarkably close connection between Hamiltonicity and expansion (see e.g. BIB007 ). The following notion of robustly expanding (dense) digraphs was introduced in BIB005 . Let 0 < ν ≤ τ < 1. Given any digraph G on n vertices and S ⊆ V (G), the ν-robust outneighbourhood RN + ν,G (S) of S is the set of all those vertices x of G which have at least νn inneighbours in S. G is called a robust (ν, τ )-outexpander if |RN + ν,G (S)| ≥ |S| + νn for all S ⊆ V (G) with τ n < |S| < (1 − τ )n. As the name suggests, this notion has the advantage that it is preserved even if we delete some vertices and edges from G. We will also use the more traditional (and weaker) notion of a (ν, τ )-outexpander, which means |N + (S)| ≥ |S| + νn for all S ⊆ V (G) with τ n < |S| < (1 − τ )n. Theorem 30 (Kühn, Osthus and Treglown BIB005 ). Let n 0 be a positive integer and ν, τ, η be positive constants such that 1/n 0 ≪ ν ≤ τ ≪ η < 1. Let G be a digraph on n ≥ n 0 vertices with δ 0 (G) ≥ ηn which is a robust (ν, τ )-outexpander. Then G contains a Hamilton cycle. Theorem 30 is used in BIB005 to give a weaker version of Theorem 7 (i.e. without the degrees capped at n/2). In the same paper it is also applied to prove a conjecture of Thomassen regarding a weak version of Conjecture 16 (Kelly's conjecture). One can also use it to prove e.g. Theorem 15 and thus an approximate version of Theorem 12. (Indeed, as proved in BIB003 , the degree conditions of Theorem 15 imply expansion; the proof for robust expansion is similar.) As mentioned earlier, it is also used as a tool in the proof of Theorem 22. Finally, we will also use it in the next subsection to prove Theorem 18. In BIB005 , Theorem 30 was deduced from a result in BIB006 . The proof of the result in BIB006 (and a similar approach in BIB003 ) in turn relied on Szemerédi's regularity lemma and a (rather technical) version of the Blow-up lemma due to Csaba BIB004 . A (parallel) algorithmic version of Theorem 30 was also proved in BIB002 . Below, we give a brief sketch of a proof of Theorem 30 which avoids any use of the Blow-up lemma and is based on an approach in BIB008 . The density of a bipartite graph G with vertex classes A and B is defined to be d(A, B) = e(A, B)/(|A||B|), where e(A, B) denotes the number of edges between A and B. Given ε > 0, we say that G is ε-regular if for all subsets X ⊆ A and Y ⊆ B with |X| ≥ ε|A| and |Y | ≥ ε|B| we have that |d(X, Y ) − d(A, B)| < ε. We also say that G is (ε, d)-super-regular if it is ε-regular and furthermore every vertex a ∈ A has degree at least d|B| and similarly for every b ∈ B. These definitions generalize naturally to non-bipartite (di-)graphs. We also need the result that every super-regular digraph contains a Hamilton cycle (Lemma 31): for every d > 0 there exist ε = ε(d) > 0 and an integer n 0 such that every (ε, d)-super-regular digraph on n ≥ n 0 vertices contains a Hamilton cycle. Lemma 31 is a special case e.g. of a result of Frieze and Krivelevich , who proved that an (ε, d)-super-regular digraph on n vertices has almost dn edge-disjoint Hamilton cycles if n is large. Here we also give a sketch of a direct proof of Lemma 31. We first prove that G contains a 1-factor. Consider the auxiliary bipartite graph whose vertex classes A and B are copies of V (G) with an edge between a ∈ A and b ∈ B if there is an edge from a to b in G. One can show that this bipartite graph has a perfect matching (by Hall's marriage theorem), which in turn corresponds to a 1-factor in G.
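For completeness, here is one way to verify Hall's condition for this auxiliary bipartite graph (our sketch, using that G is (ε, d)-super-regular with ε ≪ d). Let ∅ ≠ S ⊆ A. If |S| ≤ dn then |N (S)| ≥ dn ≥ |S|, since every vertex has at least dn outneighbours. If dn < |S| ≤ (1 − ε)n, then there are no edges between S and B \ N (S); if we had |B \ N (S)| ≥ εn, this would contradict ε-regularity (the density between two sets of size at least εn must be within ε of the overall density, which is at least d > ε), so in fact |N (S)| > (1 − ε)n ≥ |S|. Finally, if |S| > (1 − ε)n, then every b ∈ B has at least dn > n − |S| inneighbours and so has an inneighbour in S, giving N (S) = B. Thus |N (S)| ≥ |S| in all cases, and Hall's theorem yields a perfect matching, i.e. a 1-factor of G.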
It is now not hard to prove the lemma using the 'rotation-extension' technique: Choose a 1-factor of G. Now remove an edge of a cycle in this 1-factor and let P be the resulting path. If the final vertex of P has any outneighbours on another cycle C of the 1-factor, we can extend P into a longer path which includes the vertices of C (and similarly for the initial vertex of P ). We repeat this as long as possible (and one can always ensure that the extension step can be carried out at least once). So we may assume that all outneighbours of the final vertex of P lie on P and similarly for the initial vertex of P . Together with the ε-regularity this can be used to find a cycle with the same vertex set as P . Eventually, we arrive at a Hamilton cycle. Sketch proof of Theorem 30. Choose ε, d to satisfy 1/n 0 ≪ ε ≪ d ≪ ν. The first step is to apply a directed version of Szemerédi's regularity lemma to G. This gives us a partition of the vertices of G into clusters V 1 , . . . , V k and an exceptional set V 0 so that |V 0 | ≤ εn and all the clusters have size m. Now define a 'reduced' digraph R whose vertices are the clusters V 1 , . . . , V k and with an edge from V i to V j if the bipartite graph spanned by the edges from V i to V j is ε-regular and has density at least d. Then one can show (see Lemma 14 in [46] ) that R is still a (ν/2, 2τ )-outexpander (this is the point where we need the robustness of the expansion in G) with minimum semidegree at least ηk/2. This in turn can be used to show that R has a 1-factor F (using the same auxiliary bipartite graph as in the proof of Lemma 31). By removing a small number of vertices from the clusters, we can also assume that the bipartite subgraphs spanned by successive clusters on each cycle of F are super-regular, i.e. have high minimum degree. For simplicity, assume that the cluster size is still m. Moreover, since G is an expander, we can find a short path in G between clusters of different cycles of F and also between any pair of exceptional vertices. However, we need to choose such paths without affecting any of the useful structures that we have found so far. For this, we will consider paths which 'wind around' cycles in F before moving to another cycle. More precisely, a shifted walk from a cluster A to a cluster B is a walk W (A, B) of the form W (A, B) = X 1 C 1 X − 1 X 2 C 2 X − 2 · · · X t C t X − t X t+1 , where X 1 = A, X t+1 = B, C i is the cycle of F containing X i , and for each 1 ≤ i ≤ t, X − i is the predecessor of X i on C i and the edge X − i X i+1 belongs to R. (Here 'X i C i X − i ' means that the walk winds around the cycle C i from X i all the way to X − i .) We say that W as above traverses t cycles (even if some C i appears several times in W ). We also say that the clusters X 2 , . . . , X t+1 are the entry clusters (as this is where W 'enters' a cycle C i ) and the clusters X − 1 , . . . , X − t are the exit clusters (as this is where W 'exits' a cycle C i ). Note that: (i) apart from its final cluster X t+1 , a shifted walk visits the clusters of each cycle of F an equal number of times; (ii) for any clusters A and B there is a shifted walk from A to B which does not traverse too many cycles. Indeed, the expansion property implies that the number of clusters one can reach by traversing t cycles is at least tνk/2 as long as this is significantly less than the total number k of clusters. Now we will 'join up' the exceptional vertices using shifted walks. For this, write V 0 = {a 1 , . . . , a ℓ }. For each exceptional vertex a i choose a cluster T i so that a i has many outneighbours in T i . Similarly choose a cluster U i so that a i has many inneighbours in U i and so that (iii) no cluster appears too often as a T i or a U i . Given a cluster X, let X − be the predecessor of X on the cycle of F which contains X and let X + be its successor.
Form a 'walk' W on V 0 ∪ V (R) which starts at a 1 , then moves to T 1 , then follows a shifted walk from T 1 to U + 2 , then it winds around the entire cycle of F containing U + 2 until it reaches U 2 . Then W moves to a 2 , then to a 3 and so on, using shifted walks as above, until it has visited all the exceptional vertices. Proceeding similarly, we can ensure that W has the following properties: (a) W is a closed walk which visits all of V 0 and all of V (R). (b) For any cycle of F , its clusters are visited the same number of times by W . (c) Every cluster appears at most m/10 times as an entry or exit cluster. (b) follows from (i) and (c) follows from (ii) and (iii). The next step towards a Hamilton cycle would be to find a cycle C in G which corresponds to W (i.e. each occurrence of a cluster in W is replaced by a distinct vertex of G lying in this cluster). Unfortunately, the fact that V 0 may be much larger than the cluster size m implies that there may be clusters which are visited more than m times by W , which makes it impossible to find such a C. So we will apply a 'short-cutting' technique to W which avoids 'winding around' the cycles of F too often. For this, we now fix edges in G corresponding to all those edges of W that do not lie within a cycle of F . These edges of W are precisely the edges in W at the exceptional vertices as well as all the edges of the form AB where A is used as an exit cluster by W and B is used as an entry cluster by W . For each edge a i T i at an exceptional vertex we choose an edge a i x, where x is an outneighbour of a i in T i . We similarly choose an edge ya i from U i to a i for each U i a i . We do this in such a way that all these edges are disjoint outside V 0 . For each occurrence of AB in W , where A is used as an exit cluster by W and B is used as an entry cluster, we choose an edge ab from A to B in G so that all these edges are disjoint from each other and from the edges chosen for the exceptional vertices (we use (c) here). Given a cluster A, let A entry be the set of all those vertices in A which are the final vertex of an edge of G fixed so far and let A exit be the set of all those vertices in A which are the initial vertex of an edge of G fixed so far. So A entry ∩ A exit = ∅. Let G A be the bipartite graph whose vertex classes are A \ A exit and A + \ A + entry and whose edges are all the edges from A \ A exit to A + \ A + entry in G. Since W consists of shifted walks, it is easy to see that the vertex classes of G A have equal size. Moreover, it is possible to carry out the previous steps in such a way that G A is super-regular (here we use (c) again). This in turn means that G A has a perfect matching M A . These perfect matchings (for all clusters A) together with all the edges of G fixed so far form a 1-factor C of G. It remains to transform C into a Hamilton cycle. We claim that for any cluster A, we can find a perfect matching M ′ A in G A so that if we replace M A in C with M ′ A , then all vertices of G A will lie on a common cycle in the new 1-factor C. To prove this claim we proceed as follows. For every a ∈ A + \ A + entry , we move along the cycle C a of C containing a (starting at a) and let f (a) be the first vertex on C a in A \ A exit . Define an auxiliary digraph J on A + \ A + entry by joining a to b whenever f (a)b is an edge of G A (one can check that f is a bijection from A + \ A + entry to A \ A exit ). Since G A is super-regular, it follows that J is also super-regular. By Lemma 31, J has a Hamilton cycle, which clearly corresponds to a perfect matching M ′ A in G A with the desired property.
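To spell out the last step (with J defined as above; this elaboration is ours): if a 1 a 2 · · · a s is a Hamilton cycle of J, then M ′ A := {f (a i )a i+1 : 1 ≤ i ≤ s} (indices taken modulo s) is a perfect matching in G A . After replacing M A by M ′ A , the successor of f (a i ) in the 1-factor is a i+1 , so the segments of the old cycles running from a i to f (a i ) are linked one after another into a single cycle, which in particular contains all vertices of G A .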
We now repeatedly apply the above claim to every cluster. Since A entry ∩ A exit = ∅ for each cluster A, this ensures that all vertices which lie in clusters on the same cycle of F will lie on the same cycle of the new 1-factor C. Since by (a) W visits all clusters, this in turn implies that all the non-exceptional vertices will lie in the same cycle of C. Since the exceptional vertices form an independent set in C, it follows that C is actually a Hamilton cycle. Proof of Theorem 18. Choose new constants η 1 , ν, τ such that 1/n 0 ≪ η 1 ≪ ν ≤ τ ≪ ξ. Consider any regular tournament G on n ≥ n 0 vertices. Apply Theorem 17 to G in order to obtain a collection C of at least (1/2 − η 1 )n edge-disjoint Hamilton cycles. Let F be the undirected graph consisting of all those edges of G which are not covered by the Hamilton cycles in C. Note that F is k-regular for some k ≤ 2η 1 n. By Vizing's theorem the edges of F can be coloured with at most ∆(F ) + 1 ≤ 3η 1 n colours and thus F can be decomposed into at most 3η 1 n matchings. Split each of these matchings into at most 1/η_1^{1/2} edge-disjoint matchings, each containing at most η_1^{1/2} n edges. So altogether this yields a collection M of at most 3η_1^{1/2} n matchings covering all edges of F . It is enough to show that for each M ∈ M there exists a Hamilton cycle of G which contains all the edges in M . So consider any M ∈ M. As observed in BIB005 (see the proof of Corollary 16 there), any regular tournament is a robust (ν, τ )-outexpander. Let D be the digraph obtained from G by 'contracting' all the edges in M , i.e. by successively replacing each edge xy ∈ M with a vertex v xy whose inneighbourhood is the inneighbourhood of x and whose outneighbourhood is the outneighbourhood of y. Using that M consists of at most η_1^{1/2} n edges, one can check that D is still a robust (ν/2, 2τ )-outexpander with minimum semidegree at least n/3 (say). So Theorem 30 implies that D contains a Hamilton cycle, which corresponds to a Hamilton cycle of G containing all edges of M , as required. Note that we cannot simply apply Theorem 12 instead of Theorem 30 at the end of the proof, because D need not be an oriented graph. However, instead of using Theorem 30, one can also use the following result of Thomassen BIB001 : for every set E of n/24 independent edges in a regular tournament on n vertices, there is a Hamilton cycle which contains all edges in E. Theorem 21 can be proved in a similar way, using Ghouila-Houri's theorem instead of Theorem 30. Proof of Theorem 21. Choose a new constant η such that 1/n 0 ≪ η ≪ ξ and apply Theorem 20 to find a collection of at least (d − ηn)/2 edge-disjoint Hamilton cycles. Let F denote the subgraph of G consisting of all edges not lying in these Hamilton cycles. Then F is k-regular for some k ≤ ηn. Choose a collection M of matchings covering all edges of F as in the proof of Theorem 18. So each matching consists of at most η^{1/2} n edges. As before, for each M ∈ M it suffices to find a Hamilton cycle of G containing all edges of M . Let D ′ be the digraph obtained from G by orienting each edge in M and replacing each edge in E(G) \ M with two edges, one in each direction. Let D be the digraph obtained from D ′ by 'contracting' the edges in M as in the proof of Theorem 18. Then D has minimum semidegree at least |D|/2 and thus contains a Hamilton cycle by Ghouila-Houri's theorem (Theorem 1). This Hamilton cycle corresponds to a Hamilton cycle in G containing all edges of M , as required.
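In both proofs, the regularity of the leftover graph F and the bound on its degree follow from a short count (ours): in the proof of Theorem 18 each vertex of the regular tournament G has total degree n − 1 and each of the at least (1/2 − η 1 )n Hamilton cycles in C uses exactly one out-edge and one in-edge at every vertex, so F is regular of degree

k = (n − 1) − 2|C| ≤ (n − 1) − (1 − 2η 1 )n < 2η 1 n;

the same count with d in place of n − 1 gives k ≤ ηn in the proof of Theorem 21.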
|