Target_Summary_ID
stringclasses
200 values
Target_Sentence_Index
stringlengths
4
7
External
stringclasses
2 values
Target_Sentence
stringlengths
19
592
Original_Abstract
stringclasses
200 values
t177
t177_4
no
This review found that ORS ≤ 270 mOsm/L appears to be as effective as ORS ≥ 310 mOsm/L at rehydrating people with cholera, but may lead to low blood salt levels.
Oral rehydration solution (ORS) is used to treat the dehydration caused by diarrhoeal diseases, including cholera. ORS formulations with an osmolarity (a measure of solute concentration) of ≤ 270 mOsm/L (ORS ≤ 270) are safe and more effective than ORS formulations with an osmolarity of ≥ 310 mOsm/L (ORS ≥ 310) for treating non‐cholera diarrhoea. As cholera causes rapid electrolyte loss, it is important to know if these benefits are similar for people suffering from cholera. Objectives To compare the safety and efficacy of ORS ≤270 with ORS ≥ 310 for treating dehydration due to cholera. Search methods We searched the Cochrane Infectious Disease Group Specialized Register (April 2011), CENTRAL ( The Cochrane Library Issue 4, 2011), MEDLINE (1966 to April 2011), EMBASE (1974 to April 2011), and LILACS (1982 to April 2011). We also contacted organizations and searched reference lists. Selection criteria Randomized controlled trials comparing ORS ≤ 270 with ORS ≥ 310 for treating adults and children with acute diarrhoea due to cholera. Data collection and analysis Two reviewers independently applied eligibility criteria, assessed trial quality, and extracted data. We pooled dichotomous data using risk ratio (RR), pooled continuous data using mean difference (MD) or the standardized mean difference (SMD), and presented the results with 95% confidence intervals (CI). For glucose‐based ORS, seven trials (718 participants) met the inclusion criteria. Biochemical hyponatraemia (blood sodium levels < 130 mmol/L) was more common with ORS ≤ 270 (RR 1.67, CI 1.09 to 2.57; 465 participants, four trials), while a higher level of severe biochemical hyponatraemia (blood sodium levels < 125 mmol/L) in the same group was not significant (RR 1.58, CI 0.62 to 4.04; 465 participants, four trials). No instances of symptomatic hyponatraemia or death were noted in the trials that intended to record them. We found no statistically significant difference in the need for unscheduled intravenous infusion. Analyses separating children and adults showed no obvious trends. Two trials also examined rice‐based ORS. In the ORS ≤ 270 group, duration of diarrhoea was shorter (MD ‐11.42 hours, CI ‐13.80 to ‐9.04; 102 participants, two trials). In people with cholera, ORS ≤ 270 is associated with biochemical hyponatraemia when compared with ORS ≥ 310, but there are no differences in terms of other outcomes. Although this risk does not appear to be associated with any serious consequences, the total patient experience in existing trials is small. Under wider practice conditions, especially where patient monitoring is difficult, caution is warranted. 23 April 2019 No update planned Research area no longer active This is not a current research question.
t177
t177_5
yes
More research is needed to better understand these potential safety issues.
Oral rehydration solution (ORS) is used to treat the dehydration caused by diarrhoeal diseases, including cholera. ORS formulations with an osmolarity (a measure of solute concentration) of ≤ 270 mOsm/L (ORS ≤ 270) are safe and more effective than ORS formulations with an osmolarity of ≥ 310 mOsm/L (ORS ≥ 310) for treating non‐cholera diarrhoea. As cholera causes rapid electrolyte loss, it is important to know if these benefits are similar for people suffering from cholera. Objectives To compare the safety and efficacy of ORS ≤270 with ORS ≥ 310 for treating dehydration due to cholera. Search methods We searched the Cochrane Infectious Disease Group Specialized Register (April 2011), CENTRAL ( The Cochrane Library Issue 4, 2011), MEDLINE (1966 to April 2011), EMBASE (1974 to April 2011), and LILACS (1982 to April 2011). We also contacted organizations and searched reference lists. Selection criteria Randomized controlled trials comparing ORS ≤ 270 with ORS ≥ 310 for treating adults and children with acute diarrhoea due to cholera. Data collection and analysis Two reviewers independently applied eligibility criteria, assessed trial quality, and extracted data. We pooled dichotomous data using risk ratio (RR), pooled continuous data using mean difference (MD) or the standardized mean difference (SMD), and presented the results with 95% confidence intervals (CI). For glucose‐based ORS, seven trials (718 participants) met the inclusion criteria. Biochemical hyponatraemia (blood sodium levels < 130 mmol/L) was more common with ORS ≤ 270 (RR 1.67, CI 1.09 to 2.57; 465 participants, four trials), while a higher level of severe biochemical hyponatraemia (blood sodium levels < 125 mmol/L) in the same group was not significant (RR 1.58, CI 0.62 to 4.04; 465 participants, four trials). No instances of symptomatic hyponatraemia or death were noted in the trials that intended to record them. We found no statistically significant difference in the need for unscheduled intravenous infusion. Analyses separating children and adults showed no obvious trends. Two trials also examined rice‐based ORS. In the ORS ≤ 270 group, duration of diarrhoea was shorter (MD ‐11.42 hours, CI ‐13.80 to ‐9.04; 102 participants, two trials). In people with cholera, ORS ≤ 270 is associated with biochemical hyponatraemia when compared with ORS ≥ 310, but there are no differences in terms of other outcomes. Although this risk does not appear to be associated with any serious consequences, the total patient experience in existing trials is small. Under wider practice conditions, especially where patient monitoring is difficult, caution is warranted. 23 April 2019 No update planned Research area no longer active This is not a current research question.
t178
t178_1
no
In most parts of the world there are increasing numbers of older adults, and memory complaints and conditions such as Alzheimer's disease and other forms of dementia are becoming increasingly common as a result.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_2
no
Most individuals with memory difficulties will first seek out care or be identified in the healthcare system through their primary care health care providers, which may include family physicians or nurses.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_3
no
Therefore, there is a need for tools that could identify individuals who may have dementia or significant memory problems.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_4
yes
These tools should also be able to rule out dementia in those individuals with memory complaints who do not have dementia or significant memory problems.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_5
yes
Such tools in primary care must be relatively easy to use, quick to administer, and accurate so as to be feasible to use in primary care while at the same time not overdiagnose or underdiagnose dementia.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_6
yes
The Mini‐Cog, a brief cognitive screening tool, has been suggested as a possible screening test for dementia in primary care as it has been reported to be accurate and relatively easy to administer in primary care settings.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_7
yes
The Mini‐Cog consists of a memory task that involves recall of three words and an evaluation of a clock drawing task.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_8
no
The purpose of our review was to compare the accuracy of the Mini‐Cog for diagnosing dementia of any type in primary care settings when compared to in‐depth evaluation conducted by dementia specialists.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_9
yes
We included studies that evaluated individuals with any potential severity of dementia and regardless of whether previous cognitive testing had been completed prior to the Mini‐Cog.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_10
no
Overall, our review identified four studies conducted in primary care settings that compared the accuracy of the Mini‐Cog to detailed assessment of dementia by dementia specialists.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_11
yes
Of the four studies included in the review, all except one study had limitations in how the Mini‐Cog was evaluated, which may have led to an overestimation of the accuracy of the Mini‐Cog in the remaining studies.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_12
yes
Notably, the most problematic issue in study quality related to how participants were selected to participate in research studies, which may have further contributed to an overestimation of the accuracy of the Mini‐Cog in most of the studies included in our review.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_13
yes
Holsinger found that the Mini‐Cog had a sensitivity of 76%, indicating that the Mini‐Cog failed to detect up to 24% of individuals who have dementia (e.g. false negatives).
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_14
yes
In this same study, the specificity of the Mini‐Cog was 73% indicating that up to 27% of individuals may be incorrectly identified as having dementia on the Mini‐Cog when these individuals do not actually have an underlying dementia (e.g. false positives).
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t178
t178_15
no
We conclude that at the present time there is not enough evidence to support the routine use of the Mini‐Cog as a screening test for dementia in primary care and additional studies are required before concluding that the Mini‐Cog is useful in this setting.
Alzheimer's disease and other forms of dementia are becoming increasingly common with the aging of most populations. The majority of individuals with dementia will first present for care and assessment in primary care settings. There is a need for brief dementia screening instruments that can accurately diagnose dementia in primary care settings. The Mini‐Cog is a brief, cognitive screening test that is frequently used to evaluate cognition in older adults in various settings. Objectives To determine the diagnostic accuracy of the Mini‐Cog for diagnosing Alzheimer’s disease dementia and related dementias in a primary care setting. Search methods We searched the Cochrane Dementia and Cognitive Improvement Register of Diagnostic Test Accuracy Studies, MEDLINE, Embase and four other databases, initially to September 2012. Since then, four updates to the search were performed using the same search methods, and the most recent was January 2017. We used citation tracking (using the databases' ‘related articles’ feature, where available) as an additional search method and contacted authors of eligible studies for unpublished data. Selection criteria We only included studies that evaluated the Mini‐Cog as an index test for the diagnosis of Alzheimer's disease dementia or related forms of dementia when compared to a reference standard using validated criteria for dementia. We only included studies that were conducted in primary care populations. Data collection and analysis We extracted and described information on the characteristics of the study participants and study setting. Using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS‐2) criteria we evaluated the quality of studies, and we assessed risk of bias and applicability of each study for each domain in QUADAS‐2. Two review authors independently extracted information on the true positives, true negatives, false positives, and false negatives and entered the data into Review Manager 5 (RevMan 5). We then used RevMan 5 to determine the sensitivity, specificity, and 95% confidence intervals. We summarized the sensitivity and specificity of the Mini‐Cog in the individual studies in forest plots and also plotted them in a receiver operating characteristic plot. We also created a 'Risk of bias' and applicability concerns graph to summarize information related to the quality of included studies. There were a total of four studies that met our inclusion criteria, including a total of 1517 total participants. The sensitivity of the Mini‐Cog varied between 0.76 to 1.00 in studies while the specificity varied between 0.27 to 0.85. The included studies displayed significant heterogeneity in both methodologies and clinical populations, which did not allow for a meta‐analysis to be completed. Only one study ( Holsinger 2012 ) was found to be at low risk of bias on all methodological domains. The results of this study reported that the sensitivity of the Mini‐Cog was 0.76 and the specificity was 0.73. We found the quality of all other included studies to be low due to a high risk of bias with methodological limitations primarily in their selection of participants. There is a limited number of studies evaluating the accuracy of the Mini‐Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini‐Cog, and methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini‐Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of Mini‐Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
t179
t179_1
no
Malignant ascites is the build‐up of fluid within the abdominal cavity caused by underlying cancer.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_2
no
Women with advanced ovarian cancer and some women with advanced uterine cancer (also known as womb cancer) often need drainage for malignant ascites to alleviate discomfort.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_3
no
Guidelines to advise healthcare professionals involved in the drainage of ascites are usually produced locally and are generally based on clinicians' experience.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_4
no
We searched for studies up to Novemeber 2019 that compared different ways of managing the drainage of fluid collected in the abdomen of women with gynaecological cancer (cancer that starts in a woman's reproductive organs).This updated review included one randomised controlled trial (RCT: a type of study in which people are randomly assigned to receive different treatments) involving 245 women that compared drainage combined with catumaxomab (a medicine used to treatment malignant ascites) versus drainage alone.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_5
no
However, the results were insufficient to assess the difference between these treatments.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_6
no
Although women receiving drainage combined with catumaxomab had better quality of life (the general well‐being of a person) for longer compared to drainage alone, we are very unsure of this evidence due to the small number of participants and trials.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t179
t179_7
no
There were some side effects in the drainage plus catumaxomab group (e.g. pain, low white blood cell count), but they were not well reported.
Ascites is the accumulation of fluid within the abdominal cavity. Most women with advanced ovarian cancer and some women with advanced endometrial cancer need repeated drainage for ascites. Guidelines to advise those involved in the drainage of ascites are usually produced locally and are generally not evidence‐based. Managing drains that improve the efficacy and quality of the procedure is key in making recommendations that could improve the quality of life (QoL) for women at this critical period of their lives. Objectives To evaluate the effectiveness and adverse events of different interventions for the management of malignant ascites drainage in the palliative care of women with gynaecological cancer. Search methods We searched CENTRAL, MEDLINE, and Embase to 4 November 2019. We checked clinical trial registries, grey literature, reports of conferences, citation lists of included studies, and key textbooks for potentially relevant studies. Selection criteria We included randomised controlled trials (RCTs) of women with malignant ascites with gynaecological cancer. If studies also included women with non‐gynaecological cancer, we planned to extract data specifically for women with gynaecological cancers or request the data from trial authors. If this was not possible, we planned to include the study only if at least 50% of participants were diagnosed with gynaecological cancer. Data collection and analysis Two review authors independently selected studies, extracted data, evaluated the quality of the included studies, compared results, and assessed the certainty of the evidence using Cochrane methodology. In the original 2010 review, we identified no relevant studies. This updated review included one RCT involving 245 participants that compared abdominal paracentesis and intraperitoneal infusion of catumaxomab versus abdominal paracentesis alone. The study was at high risk of bias in almost all domains. The data were not suitable for analysis. The median time to the first deterioration of QoL ranged from 19 to 26 days in participants receiving paracentesis alone compared to 47 to 49 days among participants receiving paracentesis with catumaxomab infusion (very low‐certainty evidence). Adverse events were only reported among participants receiving catumaxomab infusion. The most common severe adverse events were abdominal pain and lymphopenia (157 participants; very low‐certainty evidence). There were no data on the improvement of symptoms, satisfaction of participants and caregivers, and cost‐effectiveness. Currently, there is insufficient evidence to recommend the most appropriate management of drainage for malignant ascites among women with gynaecological cancer, as there was only very low‐certainty evidence from one small RCT at overall high risk of bias.
t180
t180_1
yes
Basilar skull fracture (7% to 15.8% of all skull fractures) places the central nervous system in contact with bacteria from the nose and throat and may be associated with cerebrospinal fluid leakage (occurring in 2% to 20.8% of patients).
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_2
yes
Blood or watery discharge from the nose or ears, bruising behind the ear or around the eyes, hearing loss, inability to perceive odours or facial asymmetry may lead physicians to the diagnosis of basilar skull fracture.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_3
no
Patients with a basilar skull fracture may develop meningitis and some doctors give antibiotics in an attempt to reduce this risk.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_4
no
This review examined five randomised controlled trials, comprising a total of 208 participants with basilar skull fracture, which compared those who received preventive antibiotic therapy with those who did not receive antibiotics, to establish how many participants developed meningitis.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_5
no
The available data did not support the use of prophylactic antibiotics, as there is no proven benefit of such therapy.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_6
yes
There was a possible adverse effect of increased susceptibility to infection with more pathogenic (disease‐causing) organisms.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t180
t180_7
no
We suggest that research is needed to address this question, as there are too few studies available on this subject and they have overall design shortcomings and small combined numbers of participants studied.
Basilar skull fractures predispose patients to meningitis because of the possible direct contact of bacteria in the paranasal sinuses, nasopharynx or middle ear with the central nervous system (CNS). Cerebrospinal fluid (CSF) leakage has been associated with a greater risk of contracting meningitis. Antibiotics are often given prophylactically, although their role in preventing bacterial meningitis has not been established. Objectives To evaluate the effectiveness of prophylactic antibiotics for preventing meningitis in patients with basilar skull fractures. Search methods We searched CENTRAL (2014, Issue 5), MEDLINE (1966 to June week 1, 2014), EMBASE (1974 to June 2014) and LILACS (1982 to June 2014). We also performed an electronic search of meeting proceedings from the American Association of Neurological Surgeons (1997 to September 2005) and handsearched the abstracts of meeting proceedings of the European Association of Neurosurgical Societies (1995, 1999 and 2003). Selection criteria Randomised controlled trials (RCTs) comparing any antibiotic versus placebo or no intervention. We also identified non‐RCTs to perform a separate meta‐analysis in order to compare results. Data collection and analysis Three review authors independently screened and selected trials, assessed risk of bias and extracted data. We sought clarification with trial authors when needed. We pooled risk ratios (RRs) for dichotomous data with their 95% confidence intervals (CIs) using a random‐effects model. We assessed the overall quality of evidence using the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) approach. In this update we did not identify any new trials for inclusion. We included five RCTs with 208 participants in the review and meta‐analysis. We also identified 17 non‐RCTs comparing different types of antibiotic prophylaxis with placebo or no intervention in patients with basilar skull fractures. Most trials presented insufficient methodological detail. All studies included meningitis in their primary outcome. When we evaluated the five included RCTs, there were no significant differences between antibiotic prophylaxis groups and control groups in terms of reduction of the frequency of meningitis, all‐cause mortality, meningitis‐related mortality and need for surgical correction in patients with CSF leakage. There were no reported adverse effects of antibiotic administration, although one of the five RCTs reported an induced change in the posterior nasopharyngeal flora towards potentially more pathogenic organisms resistant to the antibiotic regimen used in prophylaxis. We performed a subgroup analysis to evaluate the primary outcome in patients with and without CSF leakage. We also completed a meta‐analysis of all the identified controlled non‐RCTs (enrolling a total of 2168 patients), which produced results consistent with the randomised data from the included studies. Using the GRADE approach, we assessed the quality of trials as moderate. Currently available evidence from RCTs does not support prophylactic antibiotic use in patients with basilar skull fractures, whether there is evidence of CSF leakage or not. Until more research is available, the effectiveness of antibiotics in patients with basilar skull fractures cannot be determined because studies published to date are flawed by biases. Large, appropriately designed RCTs are needed.
t181
t181_1
yes
Neck pain (NP) is defined as pain, muscle tension, or stiffness localized in the neck and may originate from many structures, including the spine or soft tissues.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_2
yes
Risk factors include age, gender, a history of pain, poor posture, repetitive strain, and social and psychological factors.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_3
yes
NP is experienced by people of all ages and both genders and is an important cause of medical expenses, work absenteeism, and disability.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_4
yes
Current management of NP includes a range of different treatments such as reassurance, education, promotion of a timely return to normal activities, appropriate use of painkillers, and exercises.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_5
no
There remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for these patients.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_6
yes
CBT is a psychological technique that encompasses a wide set of interventions conducted by health professionals.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_7
yes
It includes cognitive and behavioural modifications of specific activities to reduce the impact of pain as well as physical and psychosocial disability and to overcome dangerous barriers to physical and psychosocial recovery.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_8
no
Review Question We therefore reviewed the evidence about the effect of CBT on pain, disability, psychological factors, and quality of life among patients with subacute and chronic NP.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_9
no
Specifically, we compared CBT versus no treatment, CBT versus other types of interventions, and CBT in addition to another intervention (e.g. physiotherapy) versus the other intervention alone.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_10
no
We included 10 randomised trials (836 participants).
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_11
yes
Two studies included subjects with subacute NP (337 participants), while the other eight studies included participants with chronic NP (499 participants).
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_12
yes
CBTwas compared to no treatment (225 participants) or to other types of treatments (506 participants), or combined with another intervention (e.g.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_13
yes
physiotherapy) and compared to the other intervention alone (200 participants).
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_14
yes
The interventions were carried out at primary and secondary health care centres.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_15
no
With regard to chronic NP, CBT was statistically significantly better than no treatment at improving pain, disability, and quality of life, but these effects could not be considered clinically meaningful.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_16
no
No differences between CBT and other types of interventions (e.g. medication, education, physiotherapy, manual therapy, and exercises) were found in terms of pain and disability; there was moderate quality evidence that CBT was better than other interventions in improving fear of movement.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t181
t181_17
no
No difference was found in terms of disability and fear of movement.
Although research on non‐surgical treatments for neck pain (NP) is progressing, there remains uncertainty about the efficacy of cognitive‐behavioural therapy (CBT) for this population. Addressing cognitive and behavioural factors might reduce the clinical burden and the costs of NP in society. Objectives To assess the effects of CBT among individuals with subacute and chronic NP. Specifically, the following comparisons were investigated: (1) cognitive‐behavioural therapy versus placebo, no treatment, or waiting list controls; (2) cognitive‐behavioural therapy versus other types of interventions; (3) cognitive‐behavioural therapy in addition to another intervention (e.g. physiotherapy) versus the other intervention alone. Search methods We searched CENTRAL, MEDLINE, EMBASE, CINAHL, PsycINFO, SCOPUS, Web of Science, and PubMed, as well as ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform up to November 2014. Reference lists and citations of identified trials and relevant systematic reviews were screened. Selection criteria We included randomised controlled trials that assessed the use of CBT in adults with subacute and chronic NP. Data collection and analysis Two review authors independently assessed the risk of bias in each study and extracted the data. If sufficient homogeneity existed among studies in the pre‐defined comparisons, a meta‐analysis was performed. We determined the quality of the evidence for each comparison with the GRADE approach. We included 10 randomised trials (836 participants) in this review. Four trials (40%) had low risk of bias, the remaining 60% of trials had a high risk of bias. The quality of the evidence for the effects of CBT on patients with chronic NP was from very low to moderate. There was low quality evidence that CBT was better than no treatment for improving pain (standard mean difference (SMD) ‐0.58, 95% confidence interval (CI) ‐1.01 to ‐0.16), disability (SMD ‐0.61, 95% CI ‐1.21 to ‐0.01), and quality of life (SMD ‐0.93, 95% CI ‐1.54 to ‐0.31) at short‐term follow‐up, while there was from very low to low quality evidence of no effect on various psychological indicators at short‐term follow‐up. Both at short‐ and intermediate‐term follow‐up, CBT did not affect pain (SMD ‐0.06, 95% CI ‐0.33 to 0.21, low quality, at short‐term follow‐up; MD ‐0.89, 95% CI ‐2.73 to 0.94, low quality, at intermediate‐term follow‐up) or disability (SMD ‐0.10, 95% CI ‐0.40 to 0.20, moderate quality, at short‐term follow‐up; SMD ‐0.24, 95% CI‐0.54 to 0.07, moderate quality, at intermediate‐term follow‐up) compared to other types of interventions. There was moderate quality evidence that CBT was better than other interventions for improving kinesiophobia at intermediate‐term follow‐up (SMD ‐0.39, 95% CI ‐0.69 to ‐0.08, I 2 = 0%). Finally, there was very low quality evidence that CBT in addition to another intervention did not differ from the other intervention alone in terms of effect on pain (SMD ‐0.36, 95% CI ‐0.73 to 0.02) and disability (SMD ‐0.10, 95% CI ‐0.56 to 0.36) at short‐term follow‐up. For patients with subacute NP, there was low quality evidence that CBT was better than other interventions at reducing pain at short‐term follow‐up (SMD ‐0.24, 95% CI ‐0.48 to 0.00), while no difference was found in terms of effect on disability (SMD ‐0.12, 95% CI ‐0.36 to 0.12) and kinesiophobia. None of the included studies reported on adverse effects. With regard to chronic neck pain, CBT was found to be statistically significantly more effective for short‐term pain reduction only when compared to no treatment, but these effects could not be considered clinically meaningful. When comparing both CBT to other types of interventions and CBT in addition to another intervention to the other intervention alone, no differences were found. For patients with subacute NP, CBT was significantly better than other types of interventions at reducing pain at short‐term follow‐up, while no difference was found for disability and kinesiophobia. Further research is recommended to investigate the long‐term benefits and risks of CBT including for the different subgroups of subjects with NP.
t182
t182_1
yes
Raynaud's phenomenon is a disorder whereby blood vessels in the fingers and toes constrict and reduce blood flow, causing pain and discolouration.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_2
no
This is usually in response to cold exposure or emotional stress.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_3
yes
In a small number of cases, Raynaud's phenomenon is associated with an underlying disease but, for most people, it is idiopathic (of uncertain cause, or 'primary').
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_4
yes
Primary Raynaud's phenomenon is extremely common (especially in women), with one UK study suggesting that over 15% of the population are affected.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_5
yes
For people with primary Raynaud's phenomenon who do not respond to conservative measures (e.g.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_6
yes
keeping warm), calcium channel blockers represent the first line in drug treatment.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_7
yes
Calcium channel blockers (sometimes called calcium antagonists) are drugs that affect the way calcium passes into certain muscle cells and they are the most commonly prescribed medication for primary Raynaud's phenomenon.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_8
yes
Although overall all the trials were classed as being at low or unclear risk of bias, the sample size of the included trials was small and there was unclear reporting of outcomes.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_9
no
Two different calcium channel blockers were included: nifedipine and nicardipine.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_10
no
Comparisons in six trials were with placebo and in one trial with both placebo and another type of drug (although only data relating to the calcium channel blocker and placebo were used in this case).
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_11
no
Treatment with oral calcium channel blockers was found to be minimally effective in primary Raynaud's phenomenon, reducing the frequency of attacks by around 1.7 attacks per person per week.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_12
no
One trial provided information on duration of attacks reporting no difference between the calcium channel blocker and placebo groups .
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_13
no
Oral calcium channel blockers had no effect on severity scores in the two trials in which these were assessed.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_14
no
Only two trials reported preference scores (whereby participants are asked which treatment they prefer) specifically in those with primary Raynaud's phenomenon, and in only one of these was there a between‐treatment group difference (participants preferred nifedipine to placebo).
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_15
no
Physiological measurements (for example measurement of finger blood flow) were performed in five trials, data could not be combined as the methods were too different, no differences found between calcium channel blocker and placebo treatment were seen in any trial.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t182
t182_16
no
Treatment with calcium channel blockers was associated with a number of adverse events including headaches, flushing and ankle swelling.
Calcium channel blockers are the most commonly prescribed drugs for people with primary Raynaud's phenomenon. Primary Raynaud's phenomenon is a common condition characterised by an exaggerated vasospastic response to cold or emotion: classically the digits (fingers and toes) turn white, then blue, then red. This is an update of the review first published in 2014. Objectives To assess the effects of different calcium channel blockers for primary Raynaud's phenomenon as determined by attack rates, severity scores, participant‐preference scores and physiological measurements. Search methods For this update the Cochrane Vascular Trial Search Co‐ordinator searched the Specialised Register (last searched January 2016) and the Cochrane Register of Studies (CENTRAL) (2015, Issue 12). In addition the TSC searched clinical trials databases. Selection criteria Randomised controlled trials evaluating the effects of oral calcium channel blockers for the treatment of primary Raynaud's phenomenon. Data collection and analysis Three review authors independently assessed the trials for inclusion and their quality, and extracted the data. Data extraction included adverse events. We contacted trial authors for missing data. We included seven randomised trials with 296 participants. Four trials examined nifedipine and the remainder nicardipine. Comparisons were with placebo in six trials and with both dazoxiben and placebo in one trial (only the nifedipine versus placebo data were used within this review). Treatment with oral calcium channel blockers was minimally effective in primary Raynaud's phenomenon at decreasing the frequency of attacks (standardised mean difference of 0.23; 95% confidence interval (CI) 0.08 to 0.38, P = 0.003). This translates to 1.72 (95% CI 0.60 to 2.84) fewer attacks per week on calcium channel blockers compared to placebo. One trial provided details on duration of attacks reporting no statistically significant difference between the nicardipine and placebo groups (no P value reported). Only two trials provided any detail of statistical comparisons of (unvalidated) severity scores between treatment groups: one of these trials (60 participants) reported a mean severity score of 1.55 on placebo and 1.36 on nicardipine, difference 0.2 (95% CI of difference 0 to 0.4, no P value reported) and the other trial (three participants only with primary Raynaud's phenomenon) reported a median severity score of 2 on both nicardipine and placebo treatment (P > 0.999) suggesting little effect on severity. Participant‐preference scores were included in four trials, but in only two were results specific to participants with primary Raynaud's phenomenon, and scoring systems differed between trials: scores differed between treatments in only one trial, in which 33% of participants on placebo and 73% on nifedipine reported improvement in symptoms (P < 0.001). Physiological measurements were included as outcome measures in five trials (different methodologies were used in each): none of these trials found any statistically significant between‐treatment group differences. Treatment with calcium channel blockers appeared to be associated with a number of adverse reactions, including headaches, flushing and oedema (swelling). Overall, the trials were classed as being at low or unclear risk of bias; and the quality of the evidence presented was moderate for number of attacks, very low for duration of attacks, high for severity scores and low for patient preference scores. The randomised controlled trials included in this review provide moderate quality evidence that oral calcium channel blockers are minimally effective in the treatment of primary Raynaud's phenomenon as measured by the frequency of attacks and high‐quality evidence that they have little effect on severity. We are unable to comment on duration of attacks or on patient preference due to the very low and low quality of evidence as a result of small sample sizes in the included studies and the variable data quality of outcome measures.
t183
t183_1
yes
Modern technologies have created new platforms for advancing medical education.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_2
yes
E‐learning has gained popularity due to the potential benefits of personalised instruction, allowing learners to tailor the pace and content of courses to their individual needs, increasing the accessibility of information to remote learners, decreasing costs and facilitating frequent content updates.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_3
yes
Previous reviews have not identified differences, but they were limited by the type of participants included (mix of licensed health professionals and medical students) and study types evaluated (randomised together with non‐randomised trials).
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_4
no
The review authors identified 16 relevant studies from 10 different countries, providing data on 5679 participants (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants).
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_5
yes
Companies funded three studies, whereas government agencies financed six.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_6
yes
One study with 847 health professionals found little or no difference between e‐learning and traditional learning on patient outcomes at one year, and two studies with 950 health professionals suggested little to no difference in health professionals' behaviours at 3 to 12 months, as the certainty of the evidence was low.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_7
no
We are uncertain whether e‐learning improves or reduces health professionals' skills at 0 to 12 weeks' follow‐up, based on the results of six studies with 2912 participants and very low certainty of evidence.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t183
t183_8
no
E‐learning may also make little or no difference on health professionals' knowledge, based on the results from 11 studies with 3236 participants at 0 to 12 weeks follow‐up, as the certainty of the evidence was low.
The use of e‐learning, defined as any educational intervention mediated electronically via the Internet, has steadily increased among health professionals worldwide. Several studies have attempted to measure the effects of e‐learning in medical practice, which has often been associated with large positive effects when compared to no intervention and with small positive effects when compared with traditional learning (without access to e‐learning). However, results are not conclusive. Objectives To assess the effects of e‐learning programmes versus traditional learning in licensed health professionals for improving patient outcomes or health professionals' behaviours, skills and knowledge. Search methods We searched CENTRAL, MEDLINE, Embase, five other databases and three trial registers up to July 2016, without any restrictions based on language or status of publication. We examined the reference lists of the included studies and other relevant reviews. If necessary, we contacted the study authors to collect additional information on studies. Selection criteria Randomised trials assessing the effectiveness of e‐learning versus traditional learning for health professionals. We excluded non‐randomised trials and trials involving undergraduate health professionals. Data collection and analysis Two authors independently selected studies, extracted data and assessed risk of bias. We graded the certainty of evidence for each outcome using the GRADE approach and standardised the outcome effects using relative risks (risk ratio (RR) or odds ratio (OR)) or standardised mean difference (SMD) when possible. We included 16 randomised trials involving 5679 licensed health professionals (4759 mixed health professionals, 587 nurses, 300 doctors and 33 childcare health consultants). When compared with traditional learning at 12‐month follow‐up, low‐certainty evidence suggests that e‐learning may make little or no difference for the following patient outcomes: the proportion of patients with low‐density lipoprotein (LDL) cholesterol of less than 100 mg/dL (adjusted difference 4.0%, 95% confidence interval (CI) −0.3 to 7.9, N = 6399 patients, 1 study) and the proportion with glycated haemoglobin level of less than 8% (adjusted difference 4.6%, 95% CI −1.5 to 9.8, 3114 patients, 1 study). At 3‐ to 12‐month follow‐up, low‐certainty evidence indicates that e‐learning may make little or no difference on the following behaviours in health professionals: screening for dyslipidaemia (OR 0.90, 95% CI 0.77 to 1.06, 6027 patients, 2 studies) and treatment for dyslipidaemia (OR 1.15, 95% CI 0.89 to 1.48, 5491 patients, 2 studies). It is uncertain whether e‐learning improves or reduces health professionals' skills (2912 health professionals; 6 studies; very low‐certainty evidence), and it may make little or no difference in health professionals' knowledge (3236 participants; 11 studies; low‐certainty evidence). Due to the paucity of studies and data, we were unable to explore differences in effects across different subgroups. Owing to poor reporting, we were unable to collect sufficient information to complete a meaningful 'Risk of bias' assessment for most of the quality criteria. We evaluated the risk of bias as unclear for most studies, but we classified the largest trial as being at low risk of bias. Missing data represented a potential source of bias in several studies. When compared to traditional learning, e‐learning may make little or no difference in patient outcomes or health professionals' behaviours, skills or knowledge. Even if e‐learning could be more successful than traditional learning in particular medical education settings, general claims of it as inherently more effective than traditional learning may be misleading.
t184
t184_1
yes
Women have different lengths of labour, with first labours lasting on average eight hours (and unlikely to last more than 18 hours) and second and subsequent labours lasting an average of five hours and unlikely to last more than 12 hours.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_2
yes
Assessment of progress in labour takes into account not just cervical dilatation, but also descent and rotation of the fetal head and the strength, duration and frequency of contractions.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_3
yes
Some evidence suggests that up to one‐third of women in their first labour experience delay.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_4
no
They are often given a synthetic version of the hormone oxytocin to increase uterine contractions and shorten labour.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_5
yes
Surprisingly for such a routine treatment, the ideal dose at which it should be given is not known, although some comparisons suggest that higher‐dose regimens of oxytocin could shorten labour and reduce the chance of caesarean section with an increase in the numbers of women having a spontaneous vaginal birth compared with lower‐dose regimens.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_6
yes
However, there are potentially harmful side effects as oxytocin may cause the uterus to contract too quickly, and the baby to become distressed.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_7
yes
Clinicians attempt to mitigate these side effects by adjusting the dose of oxytocin with the contractions to reduce the chances of the baby being distressed in labour.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_8
no
From the four randomised controlled trials involving 644 pregnant women that we included in this review, results indicate that a higher dose of oxytocin (4‐7 mU per minute, compared with 1‐2 mU per minute) reduced the length of labour and the rate of caesarean sections with increased spontaneous vaginal births, but the studies did not provide enough evidence on possible differences between the high‐ and low‐dose regimens on adverse events including hyperstimulation of the uterus, and outcomes for the newborn infant.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_9
yes
Only one trial reported on the possible effect on women.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t184
t184_10
no
While the current evidence is promising and suggests that the high‐dose regimens reduce the length of labour and the rate of caesarean sections, this evidence is not strong enough to recommend that high‐dose regimens are used routinely for women delayed in labour.
A major cause of failure to achieve spontaneous vaginal birth is delay in labour due to presumed inefficient uterine action. Oxytocin is given to increase contractions and high‐dose regimens may potentially increase the number of spontaneous vaginal births, but as oxytocin can cause hyperstimulation of the uterus, there is a possibility of increased adverse events. Objectives To compare starting dose and increment dose of oxytocin for augmentation for women delayed in labour to determine whether augmentation by high‐dose regimens of oxytocin improves labour outcomes and to examine the effect on both maternal/neonatal outcomes and women's birth experiences. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2013) and reference lists of retrieved studies. Selection criteria We included all randomised and quasi‐randomised controlled trials for women in delayed labour requiring augmentation by oxytocin comparing high‐dose regimens (defined as starting dose and increment of equal to or more than 4 mU per minute) with low‐dose regimens (defined as starting dose and an increment of less than 4 mU per minute). Increase interval: between 15 and 40 minutes. The separation of low‐ and high‐dose regimens is based on an arbitrary decision. Data collection and analysis Four review authors undertook assessment of trial eligibility, risk of bias, and data extraction independently. We included four studies involving 644 pregnant women. Three studies were randomised controlled trials and one trial was a quasi‐randomised study. A higher dose of oxytocin was associated with a significant reduction in length of labour reported from one trial (mean difference (MD) ‐3.50 hours; 95% confidence interval (CI) ‐6.38 to ‐0.62; one trial, 40 women). There was a decrease in the rate of caesarean section (risk ratio (RR) 0.62; 95% CI 0.44 to 0.86 four trials, 644 women) and an increase in the rate of spontaneous vaginal birth in the high‐dose group (RR 1.35; 95% CI 1.13 to 1.62, three trials, 444 women), although for both of these outcomes there were inconsistencies between studies in the size of effect. When we carried out sensitivity analysis (temporarily removing a study at high risk of bias) the differences between groups were no longer statistically significant There were no significant differences between high‐ and low‐dose regimens for instrumental vaginal birth, epidural analgesia, hyperstimulation, postpartum haemorrhage, chorioamnionitis or women's perceptions of experiences. For neonatal outcomes, there was no significant difference between groups for Apgar scores, umbilical cord pH, admission to special care baby unit, or neonatal mortality. The following outcomes were not evaluated in the included studies: perinatal mortality, uterine rupture, abnormal cardiotocography, women's pyrexia, dystocia and neonatal neurological morbidity. Higher‐dose regimens of oxytocin (4 mU per minute or more) were associated with a reduction in the length of labour and in caesarean section, and an increase in spontaneous vaginal birth. However, there is insufficient evidence to recommend that high‐dose regimens are advised routinely for women with delay in the first stage of labour. Further research should evaluate the effect of high‐dose regimens of oxytocin for women delayed in labour and should include maternal and neonatal outcomes as well as the effects on women.
t185
t185_1
no
Body dysmorphic disorder (BDD) is a condition characterised by a distressing and disabling preoccupation with an imagined or slight defect in appearance.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_2
yes
This causes people with this disorder either significant distress or disrupts their daily functioning (or both).
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_3
yes
There has been a growing recognition that BDD is common, and is associated with significant illness and disability.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_4
yes
There is also some evidence that it may respond to pharmacotherapy and psychotherapy.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_5
no
Our systematic review of randomised controlled trials assesses the effects of drug treatment or psychotherapy when used on their own or in combination.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_6
no
We found five eligible trials, including three of psychotherapy (cognitive behavioural therapy (CBT) and exposure and response prevention (ERP)) and two of medication (the serotonin reuptake inhibitors (SRIs) fluoxetine and clomipramine).
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_7
no
In the only placebo‐controlled medication trial included in our review, people with BDD treated with fluoxetine were more likely to respond (56%, 19 out of 34) than those allocated placebo (18%, 6 out of 33).
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_8
no
Symptoms became less severe after treatment with both medication and psychotherapy.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_9
yes
Adverse events were mild to moderate in severity and none of the people in the active treatment groups were reported to have dropped out of the studies because of treatment‐emergent adverse events.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_10
yes
There is preliminary evidence from one trial of CBT that the effects of CBT may persist once treatment has ended.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_11
yes
Treatment response in the medication trials was not effected by the degree to which people had insight into their condition.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t185
t185_12
yes
Although few controlled trials have been done, and those that have been conducted were small, indicating that our findings should be used with caution unless confirmed by larger studies (some of which are ongoing), the results suggest that treatment with both medication or psychotherapy can be effective in treating the symptoms of body dysmorphic disorder.
Body dysmorphic disorder (BDD) is a prevalent and disabling preoccupation with a slight or imagined defect in appearance. Trials have investigated the use of serotonin reuptake inhibitors (SRIs) and cognitive behaviour therapy (CBT) for BDD. Objectives To assess the efficacy of pharmacotherapy, psychotherapy or a combination of both treatment modalities for body dysmorphic disorder. Search methods We searched the Cochrane Depression, Anxiety and Neurosis Trial Register (December 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library Issue 4, 2007), MEDLINE (January 1966 to December 2007), and PsycINFO (1967 to December 2007). Ongoing and unpublished trials were located through searching the metaRegister of Controlled Trials, the CRISP and WHO ICTRP search portals (databases searched in December 2007), and through contacting key researchers and pharmaceutical companies. Additional studies were located through study reference lists. Selection criteria Randomised controlled trials (RCTs) of patients meeting DSM or ICD diagnostic criteria for BDD, in which the trials compare pharmacotherapy, psychotherapy or multi‐modal treatment groups with active or non‐active control groups. Short or long‐term trials were eligible. Data collection and analysis Two review authors independently assessed RCTs for inclusion in the review, collated trial data, and assessed trial quality. Investigators were contacted to obtain missing data. Summary effect sizes for dichotomous and continuous outcomes were calculated using a random effects model and heterogeneity was assessed. Two pharmacotherapy and three psychotherapy trials were eligible for inclusion in the review, with data from four short‐term RCTs (169 participants) available for analysis. Response data from a single placebo‐controlled trial of fluoxetine suggested overall superiority of medication relative to placebo (relative risk (RR) 3.07, 95% CI 1.4 to 6.72, n = 67). Symptom severity was also significantly reduced in the RCTs of fluoxetine and clomipramine (relative to desipramine), as well as in the two CBT trials (WMD ‐44.96, 95% CI ‐54.43 to ‐35.49, n = 73). A low relapse rate (4/22) was demonstrated in one trial of CBT. Results from the small number of available RCTs suggest that SRIs and CBT may be useful in treating patients with BDD. The findings of these studies need to be replicated. In addition, future controlled studies in other samples, such as adolescents, and using other selective SRIs, as well as a range of psychological therapy approaches and modalities (alone and in combination), are essential in supplementing the sparse data currently available.
t186
t186_1
yes
Allogeneic hematopoietic stem cell transplantation is a procedure in which a portion of a healthy donor's stem cells (cells that can develop into various types of blood cells) or bone marrow is obtained and prepared for intravenous infusion.
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.
t186
t186_2
yes
Hematopoietic stem cells are taken from a healthy donor and transplanted into the patient (recipient).
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.
t186
t186_3
no
People undergoing allogeneic hematopoietic stem cell transplantation are at risk of developing graft‐versus‐host disease (GVHD).
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.
t186
t186_4
yes
GVHD results when the transplanted cells from the donor (graft) attack the recipient's (host) body cells because they perceive the recipient's body as foreign.
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.
t186
t186_5
no
Mycophenolate mofetil and methotrexate are two drugs often used to suppress the human body's reaction against the graft (immune response) and prevent GVHD.
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.
t186
t186_6
yes
We conducted a systematic review of three randomized controlled trials (RCTs, which are clinical studies where people are randomly put into one of two or more treatment groups) that compared mycophenolate mofetil versus methotrexate for use in preventing GVHD among 174 participants.
Allogeneic hematopoietic stem cell transplantation (allo‐HCT) is associated with improved outcomes for people with various hematologic diseases; however, the morbidity and mortality resulting from acute and subsequently chronic graft‐versus‐host disease (GVHD) pose a serious challenge to wider applicability of allo‐HCT. Intravenous methotrexate in combination with a calcineurin inhibitor, cyclosporine or tacrolimus, is a widely used regimen for the prophylaxis of acute GVHD, but the administration of methotrexate is associated with a number of adverse events. Mycophenolate mofetil, in combination with a calcineurin inhibitor, has been used extensively in people undergoing allo‐HCT. Conflicting results regarding various clinical outcomes following allo‐HCT have been observed when comparing mycophenolate mofetil‐based regimens against methotrexate‐based regimens for acute GVHD prophylaxis. Objectives Primary objective: to assess the effect of mycophenolate mofetil versus methotrexate for prevention of acute GVHD in people undergoing allo‐HCT. Secondary objectives: to evaluate the effect of mycophenolate mofetil versus methotrexate for overall survival, prevention of chronic GVHD, incidence of relapse, treatment‐related harms, nonrelapse mortality, and quality of life. Search methods We searched Cochrane Central Register of Controlled Trials (CENTRAL) and MEDLINE from inception to March 2014. We handsearched conference abstracts from the last two meetings (2011 and 2012) of relevant societies in the field. We searched ClinicalTrials.gov, Novartis clinical trials database (www.novctrd.com), Roche clinical trial protocol registry (www.roche‐trials.com), Australian New Zealand Clinical Trials Registry (ANZCTR), and the metaRegister of Controlled Trials for ongoing trials. Selection criteria Two review authors independently reviewed all titles/abstracts and selected full‐text articles for inclusion. We included all references that reported results of randomized controlled trials (RCTs) of mycophenolate mofetil versus methotrexate for the prophylaxis of GVHD among people undergoing allo‐HCT in this review. Data collection and analysis Two review authors independently extracted data on outcomes from all studies and compared prior to data entry and analysis. We expressed results as risk ratios (RR) and 95% confidence intervals (CI) for dichotomous outcomes and hazard ratios (HR) and 95% CIs for time‐to‐event outcomes. We pooled the individual study effects using the random‐effects model. Estimates lower than one indicate that mycophenolate mofetil was favored over methotrexate. We included three trials enrolling 177 participants (174 participants analyzed). All participants in the trials by Keihl et al. and Bolwell et al. received cyclosporine while all participants enrolled in the trial by Perkins et al. received tacrolimus. However, the results did not differ by the type of calcineurin inhibitor employed (cyclosporine versus tacrolimus). There was no evidence for a difference between mycophenolate mofetil versus methotrexate for the outcomes of incidence of acute GVHD (RR 1.25; 95% CI 0.75 to 2.09; P value = 0.39, very low quality evidence), overall survival (HR 0.73; 95% CI 0.45 to 1.17; P value = 0.19, low‐quality evidence), median days to neutrophil engraftment (HR 0.77; 95% CI 0.51 to 1.17; P value = 0.23, low‐quality evidence), incidence of relapse (RR 0.84; 95% CI 0.52 to 1.38; P value = 0.50, low‐quality evidence), non‐relapse mortality (RR 1.21; 95% CI 0.62 to 2.36; P value = 0.57, low‐quality evidence), and incidence of chronic GVHD (RR 0.92; 95% CI 0.65 to 1.30; P value = 0.62, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate improved platelet engraftment period (HR 0.87; 95% CI 0.81 to 0.93; P value < 0.0001, low‐quality evidence). There was low‐quality evidence that mycophenolate mofetil compared with methotrexate resulted in decreased incidence of severe mucositis (RR 0.48; 95% CI 0.32 to 0.73; P value = 0.0006, low‐quality evidence), use of parenteral nutrition (RR 0.48; 95% CI 0.26 to 0.91; P value = 0.02, low‐quality evidence), and medication for pain control (RR 0.76; 95% CI 0.63 to 0.91; P value = 0.002, low‐quality evidence). Overall heterogeneity was not detected in the analysis except for the outcome of neutrophil engraftment. None of the included studies reported any outcomes related to quality of life. Overall quality of evidence was low. The use of mycophenolate mofetil compared with methotrexate for primary prevention of GVHD seems to be associated with a more favorable toxicity profile, without an apparent compromise on disease relapse, transplant‐associated mortality, or overall survival. The effects on incidence of GVHD between people receiving mycophenolate mofetil compared with people receiving methotrexate were uncertain. There is a need for additional high‐quality RCTs to determine the optimal GVHD prevention strategy. Future studies should take into account a comprehensive view of clinical benefit, including measures of morbidity, symptom burden, and healthcare resource utilization associated with interventions.