Target_Summary_ID (stringlengths 2-4) | Target_Sentence (stringlengths 438-4.17k) | Original_Abstract (stringlengths 2.14k-7.26k)
---|---|---|
t0 | The skin patch and the vaginal (birth canal) ring are two methods of birth control. Both methods contain the hormones estrogen and progestin. The patch is a small, thin, adhesive square that is applied to the skin. The contraceptive vaginal ring is a flexible, lightweight device that is inserted into the vagina. Both methods release drugs like those in birth control pills. These methods could be used more consistently than pills because they do not require a daily dose. This review looked at how well the methods worked to prevent pregnancy, if they caused bleeding problems, if women used them as prescribed, and how safe they were. Through February 2013, we did computer searches for randomized controlled trials of the skin patch or vaginal ring compared to pills for birth control. Pills included types with both estrogen and progestin. We wrote to researchers to find other trials. We found 18 trials. Of six patch trials, five compared the marketed patch to birth control pills and one studied a patch being developed. Of 12 ring trials, 11 looked at the marketed ring and pills while one studied a ring being developed. The methods compared had similar pregnancy rates. Patch users reported using their method more consistently than the pill group did. Only half of the patch studies had data on pregnancy or whether the women used the method correctly. However, most of the ring studies had those data. Patch users were more likely than pill users to drop out early from the trial. Ring users were not more likely to drop out early. Compared to pill users, users of the marketed patch had more breast discomfort, painful periods, nausea, and vomiting. Ring users had more vaginal irritation and discharge than pill users but less nausea, acne, irritability, depression, and emotional changes. Ring users often had fewer bleeding problems than pill users. The quality of information was classed as low for the patch trials and moderate for the ring studies. 
Lower quality was due to not reporting how groups were assigned or not having good outcome measures. Other issues were high losses and taking assigned women out of the analysis. Studies of the patch and ring should provide more detail on whether women used the method correctly. | The delivery of combination contraceptive steroids from a transdermal contraceptive patch or a contraceptive vaginal ring offers potential advantages over the traditional oral route. The transdermal patch and vaginal ring could require a lower dose due to increased bioavailability and improved user compliance. Objectives To compare the contraceptive effectiveness, cycle control, compliance (adherence), and safety of the contraceptive patch or the vaginal ring versus combination oral contraceptives (COCs). Search methods Through February 2013, we searched MEDLINE, POPLINE, CENTRAL, LILACS, ClinicalTrials.gov, and ICTRP for trials of the contraceptive patch or the vaginal ring. Earlier searches also included EMBASE. For the initial review, we contacted known researchers and manufacturers to identify other trials. Selection criteria We considered randomized controlled trials comparing a transdermal contraceptive patch or a contraceptive vaginal ring with a COC. Data collection and analysis Data were abstracted by two authors and entered into RevMan. For dichotomous variables, the Peto odds ratio (OR) with 95% confidence intervals (CI) was calculated. For continuous variables, the mean difference was computed. We also assessed the quality of evidence for this review. We found 18 trials that met our inclusion criteria. Of six patch studies, five examined the marketed patch containing norelgestromin plus ethinyl estradiol (EE); one studied a patch in development that contains levonorgestrel (LNG) plus EE. Of 12 vaginal ring trials, 11 examined the marketed ring containing etonogestrel plus EE; one studied a ring being developed that contains nestorone plus EE.
Contraceptive effectiveness was not significantly different for the patch or ring versus the comparison COC. Compliance data were limited. Patch users showed better compliance than COC users in three trials. For the norelgestromin plus EE patch, ORs were 2.05 (95% CI 1.83 to 2.29) and 2.76 (95% CI 2.35 to 3.24). In the levonorgestrel plus EE patch report, patch users were less likely to have missed days of therapy (OR 0.36; 95% CI 0.25 to 0.51). Of four vaginal ring trials, one found ring users had more noncompliance (OR 3.99; 95% CI 1.87 to 8.52), while another showed more compliance with the regimen (OR 1.67; 95% CI 1.04 to 2.68). More patch users discontinued early than COC users. ORs from two meta‐analyses were 1.59 (95% CI 1.26 to 2.00) and 1.56 (95% CI 1.18 to 2.06) and another trial showed OR 2.57 (95% CI 0.99 to 6.64). Patch users also had more discontinuation due to adverse events than COC users. Users of the norelgestromin‐containing patch reported more breast discomfort, dysmenorrhea, nausea, and vomiting. In the levonorgestrel‐containing patch trial, patch users reported less vomiting, headaches, and fatigue. Of 11 ring trials with discontinuation data, two showed the ring group discontinued less than the COC group: OR 0.32 (95% CI 0.16 to 0.66) and OR 0.52 (95% CI 0.31 to 0.88). Ring users were less likely to discontinue due to adverse events in one study (OR 0.32; 95% CI 0.15 to 0.70). Compared to the COC users, ring users had more vaginitis and leukorrhea but less vaginal dryness. Ring users also reported less nausea, acne, irritability, depression, and emotional lability than COC users. For cycle control, only one trial showed a significant difference. Women in the patch group were less likely to have breakthrough bleeding and spotting. Seven ring studies had bleeding data; four trials showed the ring group generally had better cycle control than the COC group. Effectiveness was not significantly different for the methods compared.
Pregnancy data were available from half of the patch trials but two‐thirds of ring trials. The patch could lead to more discontinuation than the COC. The patch group had better compliance than the COC group. Compliance data came from half of the patch studies and one‐third of the ring trials. Patch users had more side effects than the COC group. Ring users generally had fewer adverse events than COC users but more vaginal irritation and discharge. The main reasons for downgrading were lack of information on the randomization sequence generation or allocation concealment, the outcome assessment methods, high losses to follow up, and exclusions after randomization. |
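The methods section of this abstract says dichotomous outcomes were analysed as Peto odds ratios with 95% CIs in RevMan. As an illustration of that statistic only, here is a minimal sketch for a single 2×2 table; the counts are invented for the example and are not taken from the trials:

```python
import math

def peto_odds_ratio(a, b, c, d):
    """Peto odds ratio with 95% CI for one 2x2 table.
    a: events in treatment, b: non-events in treatment,
    c: events in control,   d: non-events in control."""
    n1, n2 = a + b, c + d              # group sizes
    n = n1 + n2                        # total participants
    events = a + c                     # total events across both groups
    expected = n1 * events / n         # expected treatment events under H0
    # hypergeometric variance of the observed treatment event count
    v = n1 * n2 * events * (n - events) / (n ** 2 * (n - 1))
    ln_or = (a - expected) / v         # Peto log odds ratio: (O - E) / V
    half = 1.96 / math.sqrt(v)         # since var(ln OR) = 1 / V
    return math.exp(ln_or), math.exp(ln_or - half), math.exp(ln_or + half)

# invented counts: 10/100 discontinuations in one group vs 20/100 in the other
or_, lo, hi = peto_odds_ratio(10, 90, 20, 80)
```

The Peto method approximates the log odds ratio by (O − E)/V and works best with low event rates and similarly sized groups, which is why meta-analysis software offers it alongside other odds-ratio estimators.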
t1 | Excess body weight has become a health problem around the world. Being overweight or obese may affect how well some birth control methods work to prevent pregnancy. Hormonal birth control includes pills, the skin patch, the vaginal ring, implants, injectables, and hormonal intrauterine contraception (IUC). Until 4 August 2016, we did computer searches for studies of hormonal birth control among women who were overweight or obese. We looked for studies that compared overweight or obese women with women of normal weight or body mass index (BMI). The formula for BMI is weight (kg) / height (m)². We included all study designs. For the original review, we wrote to investigators to find other studies we might have missed. With 8 studies added in this update, we had 17 with a total of 63,813 women. We focus here on 12 studies with high, moderate, or low quality results. Most did not show more pregnancies for overweight or obese women. Two of five studies using birth control pills found differences between BMI groups. In one, overweight women had a higher pregnancy risk. The other found a lower pregnancy rate for obese women versus nonobese women. The second study also tested a new skin patch. Obese women in the patch group had a higher pregnancy rate. Of five implant studies, two showed differences among weight groups. They studied the older six‐capsule implant. One study showed a higher pregnancy rate in years 6 and 7 combined for women weighing 70 kg or more. The other reported pregnancy differences in year 5 among the lower weight groups only. Results for other methods of birth control did not show overweight or obesity related to pregnancy rate. Those methods included an injectable, hormonal IUC, and the two‐rod and single‐rod implants. These studies generally did not show an association of BMI or weight with the effect of hormonal methods. We found few studies for most methods.
Studies using BMI rather than weight can show whether body fat is related to how well birth control prevents pregnancy. The methods studied here work very well when used according to directions. | Obesity has reached epidemic proportions around the world. Effectiveness of hormonal contraceptives may be related to metabolic changes in obesity or to greater body mass or body fat. Hormonal contraceptives include oral contraceptives (OCs), injectables, implants, hormonal intrauterine contraception (IUC), the transdermal patch, and the vaginal ring. Given the prevalence of overweight and obesity, the public health impact of any effect on contraceptive efficacy could be substantial. Objectives To examine the effectiveness of hormonal contraceptives in preventing pregnancy among women who are overweight or obese versus women with a lower body mass index (BMI) or weight. Search methods Until 4 August 2016, we searched for studies in PubMed (MEDLINE), CENTRAL, POPLINE, Web of Science, ClinicalTrials.gov, and ICTRP. We examined reference lists of pertinent articles to identify other studies. For the initial review, we wrote to investigators to find additional published or unpublished studies. Selection criteria All study designs were eligible. The study could have examined any type of hormonal contraceptive. Reports had to contain information on the specific contraceptive methods used. The primary outcome was pregnancy. Overweight or obese women must have been identified by an analysis cutoff for weight or BMI (kg/m²). Data collection and analysis Two authors independently extracted the data. One entered the data into RevMan and a second verified accuracy. The main comparisons were between overweight or obese women and women of lower weight or BMI. We examined the quality of evidence using the Newcastle‐Ottawa Quality Assessment Scale. Where available, we included life‐table rates.
We also used unadjusted pregnancy rates, relative risk (RR), or rate ratio when those were the only results provided. For dichotomous variables, we computed an odds ratio with 95% confidence interval (CI). With 8 studies added in this update, 17 met our inclusion criteria and had a total of 63,813 women. We focus here on 12 studies that provided high, moderate, or low quality evidence. Most did not show a higher pregnancy risk among overweight or obese women. Of five COC studies, two found BMI to be associated with pregnancy but in different directions. With an OC containing norethindrone acetate and ethinyl estradiol (EE), pregnancy risk was higher for overweight women, i.e. with BMI ≥ 25 versus those with BMI < 25 (reported relative risk 2.49, 95% CI 1.01 to 6.13). In contrast, a trial using an OC with levonorgestrel and EE reported a Pearl Index of 0 for obese women (BMI ≥ 30) versus 5.59 for nonobese women (BMI < 30). The same trial tested a transdermal patch containing levonorgestrel and EE. Within the patch group, obese women in the "treatment‐compliant" subgroup had a higher reported Pearl Index than nonobese women (4.63 versus 2.15). Of five implant studies, two that examined the six‐capsule levonorgestrel implant showed differences in pregnancy by weight. One study showed higher weight was associated with higher pregnancy rate in years 6 and 7 combined (reported P < 0.05). In the other, pregnancy rates differed in year 5 among the lower weight groups only (reported P < 0.01) and did not involve women weighing 70 kg or more. Analysis of data from other contraceptive methods indicated no association of pregnancy with overweight or obesity. These included depot medroxyprogesterone acetate (subcutaneous), levonorgestrel IUC, the two‐rod levonorgestrel implant, and the etonogestrel implant. The evidence generally did not indicate an association between higher BMI or weight and effectiveness of hormonal contraceptives. 
However, we found few studies for most contraceptive methods. Studies using BMI, rather than weight alone, can provide information about whether body composition is related to contraceptive effectiveness. The contraceptive methods examined here are among the most effective when used according to the recommended regimen. We considered the overall quality of evidence to be low for the objectives of this review. More recent reports provided evidence of varying quality, while the quality was generally low for older studies. For many trials the quality would be higher for their original purpose rather than the non‐randomized comparisons here. Investigators should consider adjusting for potential confounding related to BMI or contraceptive effectiveness. Newer studies included a greater proportion of overweight or obese women, which helps in examining effectiveness and side effects of hormonal contraceptives within those groups. |
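This row leans on two simple quantities: BMI, from the formula stated in the summary (weight in kg divided by height in m squared), and the Pearl Index used to report pregnancy rates. A minimal sketch of both; the ×1200 woman-months form of the Pearl Index is one common convention, and some trials count 28-day cycles with a factor of 1300 instead:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def pearl_index(pregnancies, woman_months):
    """Pregnancies per 100 woman-years of exposure (x1200 woman-months form).
    Some trials count 28-day cycles and use a factor of 1300 instead."""
    return pregnancies * 1200 / woman_months

# BMI >= 25 is the overweight cutoff and BMI >= 30 the obesity cutoff used above;
# a 70 kg woman at 1.75 m falls just under the overweight cutoff
print(round(bmi(70, 1.75), 1))   # 22.9
```

With these two definitions, the BMI ≥ 30 versus < 30 split and the Pearl Index values (0 versus 5.59) reported in the abstract are directly interpretable.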
t2 | Cluster headaches are excruciating headaches of extreme intensity. They can last for several hours, are usually on one side of the head only, and affect men more than women. Multiple headaches can occur over several days. Fast pain relief is important because of the intense nature of the pain with cluster headache. Triptans are a type of drug used to treat migraine. Although migraine is different from cluster headache, there are reasons to believe that some forms of these drugs could be useful in cluster headache. Triptans can be given by injection under the skin (subcutaneously) or by a spray into the nose (intranasally) to produce fast pain relief. The review found six studies examining two different triptans. Within 15 minutes of using subcutaneous sumatriptan 6 mg, almost 8 in 10 participants had no worse than mild pain, and 5 in 10 were pain‐free. Within 15 minutes of using intranasal zolmitriptan 5 mg, about 3 in 10 had no worse than mild pain, and 1 in 10 was pain‐free. Adverse events were more common with a triptan than with placebo but they were generally of mild to moderate severity. | This is an updated version of the original Cochrane review published in Issue 4, 2010 ( Law 2010 ). Cluster headache is an uncommon, severely painful, and disabling condition, with rapid onset. Validated treatment options are limited; first‐line therapy includes inhaled oxygen. Other therapies such as intranasal lignocaine and ergotamine are not as commonly used and are less well studied. Triptans are successfully used to treat migraine attacks and they may also be useful for cluster headache. Objectives To assess the efficacy and tolerability of the triptan class of drugs compared to placebo and other active interventions in the acute treatment of episodic and chronic cluster headache in adult patients. 
Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, ClinicalTrials.gov, and reference lists for studies from inception to 22 January 2010 for the original review, and from 2009 to 4 April 2013 for this update. Selection criteria Randomised, double‐blind, placebo‐controlled studies of triptans for acute treatment of cluster headache episodes. Data collection and analysis Two review authors independently assessed study quality and extracted data. Numbers of participants with different levels of pain relief, requiring rescue medication, and experiencing adverse events and headache‐associated symptoms in treatment and control groups were used to calculate relative risk and numbers needed to treat for benefit (NNT) and harm (NNH). New searches in 2013 did not identify any relevant new studies. All six included studies used a single dose of triptan to treat an attack of moderate to severe pain intensity. Subcutaneous sumatriptan was given to 131 participants at a 6 mg dose, and 88 at a 12 mg dose. Oral or intranasal zolmitriptan was given to 231 participants at a 5 mg dose, and 223 at a 10 mg dose. Placebo was given to 326 participants. Triptans were more effective than placebo for headache relief and pain‐free responses. By 15 minutes after treatment with subcutaneous sumatriptan 6 mg, 48% of participants were pain‐free and 75% had no pain or mild pain (17% and 32% respectively with placebo). NNTs for subcutaneous sumatriptan 6 mg were 3.3 (95% CI 2.4 to 5.0) and 2.4 (1.9 to 3.2) respectively. Intranasal zolmitriptan 10 mg was of less benefit, with 12% of participants pain‐free and 28% with no or mild pain (3% and 7% respectively with placebo). NNTs for intranasal zolmitriptan 10 mg were 11 (6.4 to 49) and 4.9 (3.3 to 9.2) respectively. Based on limited data, subcutaneous sumatriptan 6 mg was superior to intranasal zolmitriptan 5 mg or 10 mg for rapid (15 minute) responses, which are important in this condition. 
Oral routes of administration are not appropriate. |
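The NNTs quoted in this abstract follow from the absolute risk reduction (NNT = 1/ARR). A quick sketch checking the subcutaneous sumatriptan 6 mg figures; note that the published NNTs (2.4 and 3.3) were computed from exact participant counts, so this approximation from the rounded percentages lands slightly lower:

```python
def nnt(active_rate, placebo_rate):
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1 / (active_rate - placebo_rate)

# subcutaneous sumatriptan 6 mg at 15 minutes, response rates from the abstract
pain_free  = nnt(0.48, 0.17)   # ~3.2 (reported: 3.3, from exact counts)
no_or_mild = nnt(0.75, 0.32)   # ~2.3 (reported: 2.4, from exact counts)
```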
t3 | Sugar‐sweetened beverages (SSBs) are cold and hot drinks with added sugar. Common SSBs are non‐diet soft drinks, regular soda, iced tea, sports drinks, energy drinks, fruit punches, sweetened waters, and sweetened tea and coffee. Research shows that people who drink a lot of SSBs often gain weight. Drinking a lot of SSBs can also increase the risk of diabetes, heart disease, and dental decay. Doctors therefore recommend that children, teenagers and adults drink fewer SSBs. Governments, businesses, schools and workplaces have taken various measures to support healthier beverage choices. We wanted to find out whether the measures taken so far have been successful in helping people to drink fewer SSBs to improve their health. We focused on measures that change the environment in which people make beverage choices. We did not look at studies on educational programmes or on SSB taxes, as these are examined in separate reviews. We searched for all available studies meeting clearly defined criteria to answer this question. We found 58 studies, which included more than one million adults, teenagers and children. Most studies lasted about one year, and were done in schools, stores or restaurants. Some studies used methods that are not very reliable. For example, some studies simply asked participants how much SSB they drank; such self‐reports can be inaccurate because people forget. Some of the findings of our review may therefore change when more and better studies become available. We have found some evidence that some of the measures implemented to help people drink fewer SSBs have been successful, including the following: Labels which are easy to understand, such as traffic‐light labels, and labels which rate the healthfulness of beverages with stars or numbers. Limits to the availability of SSB in schools (e.g. replacing SSBs with water in school cafeterias). Price increases on SSBs in restaurants, stores and leisure centres.
Children’s menus in chain restaurants which include healthier beverages as their standard beverage. Promotion of healthier beverages in supermarkets. Government food benefits (e.g. food stamps) which cannot be used to buy SSBs. Community campaigns focused on SSBs. Measures that improve the availability of low‐calorie beverages at home, e.g. through home deliveries of bottled water and diet beverages. We have also found some evidence that improved availability of drinking water and diet beverages at home can help people lose weight. There are also other measures which may influence how much SSB people drink, but for these the available evidence is less certain. Some, but not all studies found that such measures can have effects which were not intended and which may be negative. Some studies reported that profits of stores and restaurants decreased when the measures were implemented, but other studies showed that profits increased or stayed the same. Children who get free drinking water in schools may drink less milk. Some studies reported that people were unhappy with the measures. We also looked at studies on sugar‐sweetened milk. We found that small prizes for children who chose plain milk in their school cafeteria, as well as emoticon labels, may help children drink less sugar‐sweetened milk. However, this may also drive up the share of milk which is wasted because children choose but do not drink it. Our review shows that measures which change the environment in which people make beverage choices can help people drink less SSB. Based on our findings we suggest that such measures may be used more widely. Government officials, business people and health professionals implementing such measures should work together with researchers to find out more about their effects in the short and long term. | Frequent consumption of excess amounts of sugar‐sweetened beverages (SSB) is a risk factor for obesity, type 2 diabetes, cardiovascular disease and dental caries. 
Environmental interventions, i.e. interventions that alter the physical or social environment in which individuals make beverage choices, have been advocated as a means to reduce the consumption of SSB. Objectives To assess the effects of environmental interventions (excluding taxation) on the consumption of sugar‐sweetened beverages and sugar‐sweetened milk, diet‐related anthropometric measures and health outcomes, and on any reported unintended consequences or adverse outcomes. Search methods We searched 11 general, specialist and regional databases from inception to 24 January 2018. We also searched trial registers, reference lists and citations, scanned websites of relevant organisations, and contacted study authors. Selection criteria We included studies on interventions implemented at an environmental level, reporting effects on direct or indirect measures of SSB intake, diet‐related anthropometric measures and health outcomes, or any reported adverse outcome. We included randomised controlled trials (RCTs), non‐randomised controlled trials (NRCTs), controlled before‐after (CBA) and interrupted‐time‐series (ITS) studies, implemented in real‐world settings with a combined length of intervention and follow‐up of at least 12 weeks and at least 20 individuals in each of the intervention and control groups. We excluded studies in which participants were administered SSB as part of clinical trials, and multicomponent interventions which did not report SSB‐specific outcome data. We excluded studies on the taxation of SSB, as these are the subject of a separate Cochrane Review. Data collection and analysis Two review authors independently screened studies for inclusion, extracted data and assessed the risks of bias of included studies. We classified interventions according to the NOURISHING framework, and synthesised results narratively and conducted meta‐analyses for two outcomes relating to two intervention types. 
We assessed our confidence in the certainty of effect estimates with the GRADE framework as very low, low, moderate or high, and presented ‘Summary of findings’ tables. We identified 14,488 unique records, and assessed 1030 in full text for eligibility. We found 58 studies meeting our inclusion criteria, including 22 RCTs, 3 NRCTs, 14 CBA studies, and 19 ITS studies, with a total of 1,180,096 participants. The median length of follow‐up was 10 months. The studies included children, teenagers and adults, and were implemented in a variety of settings, including schools, retailing and food service establishments. We judged most studies to be at high or unclear risk of bias in at least one domain, and most studies used non‐randomised designs. The studies examine a broad range of interventions, and we present results for these separately. Labelling interventions (8 studies): We found moderate‐certainty evidence that traffic‐light labelling is associated with decreasing sales of SSBs, and low‐certainty evidence that nutritional rating score labelling is associated with decreasing sales of SSBs. For menu‐board calorie labelling reported effects on SSB sales varied. Nutrition standards in public institutions (16 studies): We found low‐certainty evidence that reduced availability of SSBs in schools is associated with decreased SSB consumption. We found very low‐certainty evidence that improved availability of drinking water in schools and school fruit programmes are associated with decreased SSB consumption. Reported associations between improved availability of drinking water in schools and student body weight varied. Economic tools (7 studies): We found moderate‐certainty evidence that price increases on SSBs are associated with decreasing SSB sales. For price discounts on low‐calorie beverages reported effects on SSB sales varied. 
Whole food supply interventions (3 studies): Reported associations between voluntary industry initiatives to improve the whole food supply and SSB sales varied. Retail and food service interventions (7 studies): We found low‐certainty evidence that healthier default beverages in children’s menus in chain restaurants are associated with decreasing SSB sales, and moderate‐certainty evidence that in‐store promotion of healthier beverages in supermarkets is associated with decreasing SSB sales. We found very low‐certainty evidence that urban planning restrictions on new fast‐food restaurants and restrictions on the number of stores selling SSBs in remote communities are associated with decreasing SSB sales. Reported associations between promotion of healthier beverages in vending machines and SSB intake or sales varied. Intersectoral approaches (8 studies): We found moderate‐certainty evidence that government food benefit programmes with restrictions on purchasing SSBs are associated with decreased SSB intake. For unrestricted food benefit programmes reported effects varied. We found moderate‐certainty evidence that multicomponent community campaigns focused on SSBs are associated with decreasing SSB sales. Reported associations between trade and investment liberalisation and SSB sales varied. Home‐based interventions (7 studies): We found moderate‐certainty evidence that improved availability of low‐calorie beverages in the home environment is associated with decreased SSB intake, and high‐certainty evidence that it is associated with decreased body weight among adolescents with overweight or obesity and a high baseline consumption of SSBs. 
Adverse outcomes reported by studies, which may occur in some circumstances, included negative effects on revenue, compensatory SSB consumption outside school when the availability of SSBs in schools is reduced, reduced milk intake, stakeholder discontent, and increased total energy content of grocery purchases with price discounts on low‐calorie beverages, among others. The certainty of evidence on adverse outcomes was low to very low for most outcomes. We analysed interventions targeting sugar‐sweetened milk separately, and found low‐ to moderate‐certainty evidence that emoticon labelling and small prizes for the selection of healthier beverages in elementary school cafeterias are associated with decreased consumption of sugar‐sweetened milk. We found low‐certainty evidence that improved placement of plain milk in school cafeterias is not associated with decreasing sugar‐sweetened milk consumption. The evidence included in this review indicates that effective, scalable interventions addressing SSB consumption at a population level exist. Implementation should be accompanied by high‐quality evaluations using appropriate study designs, with a particular focus on the long‐term effects of approaches suitable for large‐scale implementation. |
t4 | Sometimes, when a healthy pregnant woman gets towards the end of pregnancy, there may be signs that her baby may be having difficulty coping. Some of these babies are born sick, very occasionally they do not survive, or they have problems in their later development. A baby may not be growing normally and so is smaller than expected (this is termed intrauterine growth restriction ‐ IUGR). The baby may show decreased movements, which may indicate the placenta is no longer functioning well. Fetal heart monitoring (known as cardiotocography or CTG) may show up a possible problem. Ultrasound can also measure amniotic fluid and blood flow in order to assess the baby’s well‐being. These problems can arise because the placenta is no longer functioning well, which means the baby may be short of nutrition or oxygen. We asked in this Cochrane review if it is better to induce labour or do a caesarean section (both ways of ensuring the baby is born earlier) rather than letting the pregnancy continue until labour starts by itself. Induction of labour or caesarean section might help these babies by taking them out of the uterus. But intervening early in this way may mean that these babies’ lungs are not mature enough to deal well with the outside world, and it might be better for them to continue inside the uterus. It is not clear which option is best for mothers and babies. We found three trials involving 546 pregnant women and their babies at term. All three trials looked at using induction of labour for an early birth. Two trials looked at babies thought to have growth restriction and one trial looked at babies thought to have a small volume of amniotic fluid (oligohydramnios). All three trials were of reasonable quality and most of the evidence comes from the largest trial which compared babies who were growth restricted. There is no information about funding sources for these trials.
Overall, we found no major differences between these two strategies in terms of the babies’ survival, the numbers of very sick babies, or the numbers of babies with problems in development. We looked at many other outcomes, too, including how many caesarean sections there were and how many operative vaginal births (with forceps or ventouse); these did not differ clearly between the strategies. Research is also needed into better tests to identify babies who are not coping well towards the end of pregnancy. Women should discuss their specific circumstances with their caregivers when coming to a decision. | Fetal compromise in the term pregnancy is suspected when the following clinical indicators are present: intrauterine growth restriction (IUGR), decreased fetal movement (DFM), or when investigations such as cardiotocography (CTG) and ultrasound reveal results inconsistent with standard measurements. Pathological results would necessitate immediate delivery, but the management for ‘suspicious’ results remains unclear and varies widely across clinical centres. There is clinical uncertainty as to how to best manage women presenting with a suspected term compromised baby in an otherwise healthy pregnancy. Objectives To assess, using the best available evidence, the effects of immediate delivery versus expectant management of the term suspected compromised baby on neonatal, maternal and long‐term outcomes. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2015) and reference lists of retrieved studies. Selection criteria Randomised or quasi‐randomised controlled trials comparing expectant management versus planned early delivery for women with a suspected compromised fetus from 37 weeks' gestation or more. Data collection and analysis Two review authors independently assessed trials for inclusion and assessed trial quality. Two review authors independently extracted data. Data were checked for accuracy.
We assessed the quality of the evidence using the GRADE approach. Of the 20 reports identified by the search strategy, we included three trials (546 participants: 269 to early delivery and 277 to expectant management), which met our inclusion criteria. Two of the trials compared outcomes in 492 pregnancies with IUGR of the fetus, and one in 54 pregnancies with oligohydramnios. All three trials were of reasonable quality and at low risk of bias. The level of evidence was graded moderate, low or very low, downgrading mostly for imprecision and for some indirectness. Overall, there was no difference in the primary neonatal outcomes of perinatal mortality (no deaths in either group, one trial, 459 women, evidence graded moderate), major neonatal morbidity (risk ratio (RR) 0.15, 95% confidence interval (CI) 0.01 to 2.81, one trial, 459 women, evidence graded low), or neurodevelopmental disability/impairment at two years of age (RR 2.04, 95% CI 0.62 to 6.69, one trial, 459 women, evidence graded low). There was no difference in the risk of necrotising enterocolitis (one trial, 333 infants) or meconium aspiration (one trial, 459 infants). There was also no difference in the reported primary maternal outcomes: maternal mortality (RR 3.07, 95% CI 0.13 to 74.87, one trial, 459 women, evidence graded low), and significant maternal morbidity (RR 0.92, 95% CI 0.38 to 2.22, one trial, 459 women, evidence graded low). The gestational age at birth was on average 10 days earlier in women randomised to early delivery (mean difference (MD) ‐9.50, 95% CI ‐10.82 to ‐8.18, one trial, 459 women) and women in the early delivery group were significantly less likely to have a baby beyond 40 weeks' gestation (RR 0.10, 95% CI 0.01 to 0.67, one trial, 33 women). Significantly more infants in the planned early delivery group were admitted to intermediate care nursery (RR 1.28, 95% CI 1.02 to 1.61, two trials, 491 infants).
There was no difference in the risk of respiratory distress syndrome (one trial, 333 infants), Apgar score less than seven at five minutes (three trials, 546 infants), resuscitation required (one trial, 459 infants), mechanical ventilation (one trial, 337 infants), admission to neonatal intensive care unit (NICU) (RR 0.88, 95% CI 0.35 to 2.23, three trials, 545 infants, evidence graded very low), length of stay in NICU/SCN (one trial, 459 infants), and sepsis (two trials, 366 infants). Babies in the expectant management group were more likely to be < 2.3rd centile for birthweight (RR 0.51, 95% CI 0.36 to 0.73, two trials, 491 infants); however, there was no difference in the proportion of babies with birthweight < 10th centile (RR 0.98, 95% CI 0.88 to 1.10). There was no difference in any of the reported maternal secondary outcomes including: caesarean section rates (RR 1.02, 95% CI 0.65 to 1.59, three trials, 546 women, evidence graded low), placental abruption (one trial, 459 women), pre‐eclampsia (one trial, 459 women), vaginal birth (three trials, 546 women), assisted vaginal birth (three trials, 546 women), breastfeeding rates (one trial, 218 women), and number of weeks of breastfeeding after delivery (one trial, 124 women). There was an expected increase in induction in the early delivery group (RR 2.05, 95% CI 1.78 to 2.37, one trial, 459 women). No data were reported for the pre‐specified secondary neonatal outcomes of the number of days of mechanical ventilation, moderate‐severe hypoxic ischaemic encephalopathy or need for therapeutic hypothermia. Likewise, no data were reported for secondary maternal outcomes of postnatal infection, maternal satisfaction or views of care. A policy for planned early delivery versus expectant management for a suspected compromised fetus at term does not demonstrate any differences in major outcomes of perinatal mortality, significant neonatal or maternal morbidity or neurodevelopmental disability.
In women randomised to planned early delivery, the gestational age at birth was on average 10 days earlier, women were less likely to have a baby beyond 40 weeks' gestation, they were more likely to be induced and infants were more likely to be admitted to intermediate care nursery. There was also a significant difference in the proportion of babies with a birthweight centile < 2.3rd; however, this did not translate into a reduction in morbidity. The review is informed by only one large trial and two smaller trials assessing fetuses with IUGR or oligohydramnios and therefore cannot be generalised to all term pregnancies with suspected fetal compromise. There are other indications for suspecting compromise in a fetus at or near term, such as maternal perception of DFM, and ultrasound and/or CTG abnormalities. Future randomised trials need to assess effectiveness of timing of delivery for these indications. |
t5 | Rapid tests for diagnosing malaria caused by Plasmodium vivax or other less common parasites. This review summarises trials evaluating the accuracy of rapid diagnostic tests (RDTs) for diagnosing malaria due to Plasmodium vivax or other non‐falciparum species. After searching for relevant studies up to December 2013, we included 47 studies, enrolling 22,862 adults and children. What are rapid tests and why do they need to be able to distinguish Plasmodium vivax malaria? RDTs are simple‐to‐use, point‐of‐care tests, suitable for use in rural settings by primary healthcare workers. RDTs work by using antibodies to detect malaria antigens in the patient's blood. A drop of blood is placed on the test strip where the antibodies and antigen combine to create a distinct line indicating a positive test. Malaria can be caused by any one of five species of Plasmodium parasite, but P. falciparum and P. vivax are the most common. In some areas, RDTs need to be able to distinguish which species is causing the malaria symptoms as different species may require different treatments. Unlike P. falciparum, P. vivax has a liver stage which can cause repeated illness every few months unless it is treated with primaquine. The most common types of RDTs for P. vivax use two test lines in combination; one line specific to P. falciparum, and one line which can detect any species of Plasmodium. If the P. falciparum line is negative and the 'any species' line is positive, the illness is presumed to be due to P. vivax (but could also be caused by P. malariae or P. ovale). More recently, RDTs have been developed which specifically test for P. vivax. What does the research say? RDTs testing for non‐falciparum malaria were very specific (range 98% to 100%), meaning that only 1% to 2% of patients who test positive would actually not have the disease.
However, they were less sensitive (range 78% to 89%), meaning between 11% and 22% of people with non‐falciparum malaria would actually get a negative test result. RDTs which specifically tested for P. vivax were more accurate with a specificity of 99% and a sensitivity of 95%, meaning that only 5% of people with P. vivax malaria would have a negative test result. | In settings where both Plasmodium vivax and Plasmodium falciparum infection cause malaria, rapid diagnostic tests (RDTs) need to distinguish which species is causing the patients' symptoms, as different treatments are required. Older RDTs incorporated two test lines to distinguish malaria due to P. falciparum, from malaria due to any other Plasmodium species (non‐falciparum). These RDTs can be classified according to which antibodies they use: Type 2 RDTs use HRP‐2 (for P. falciparum ) and aldolase (all species); Type 3 RDTs use HRP‐2 (for P. falciparum ) and pLDH (all species); Type 4 use pLDH (from P. falciparum ) and pLDH (all species). More recently, RDTs have been developed to distinguish P. vivax parasitaemia by utilizing a pLDH antibody specific to P. vivax . Objectives To assess the diagnostic accuracy of RDTs for detecting non‐falciparum or P. vivax parasitaemia in people living in malaria‐endemic areas who present to ambulatory healthcare facilities with symptoms suggestive of malaria, and to identify which types and brands of commercial test best detect non‐falciparum and P. vivax malaria. Search methods We undertook a comprehensive search of the following databases up to 31 December 2013: Cochrane Infectious Diseases Group Specialized Register; MEDLINE; EMBASE; MEDION; Science Citation Index; Web of Knowledge; African Index Medicus; LILACS; and IndMED. 
Selection criteria Studies comparing RDTs with a reference standard (microscopy or polymerase chain reaction) in blood samples from a random or consecutive series of patients attending ambulatory health facilities with symptoms suggestive of malaria in non‐falciparum endemic areas. Data collection and analysis For each study, two review authors independently extracted a standard set of data using a tailored data extraction form. We grouped comparisons by type of RDT (defined by the combinations of antibodies used), and combined them in meta‐analysis where appropriate. Average sensitivities and specificities are presented alongside 95% confidence intervals (95% CI). We included 47 studies enrolling 22,862 participants. Patient characteristics, sampling methods and reference standard methods were poorly reported in most studies. RDTs detecting 'non‐falciparum' parasitaemia Eleven studies evaluated Type 2 tests compared with microscopy, 25 evaluated Type 3 tests, and 11 evaluated Type 4 tests. In meta‐analyses, average sensitivities and specificities were 78% (95% CI 73% to 82%) and 99% (95% CI 97% to 99%) for Type 2 tests, 78% (95% CI 69% to 84%) and 99% (95% CI 98% to 99%) for Type 3 tests, and 89% (95% CI 79% to 95%) and 98% (95% CI 97% to 99%) for Type 4 tests, respectively. Type 4 tests were more sensitive than both Type 2 (P = 0.01) and Type 3 tests (P = 0.03). Five studies compared Type 3 tests with PCR; in meta‐analysis, the average sensitivity and specificity were 81% (95% CI 72% to 88%) and 99% (95% CI 97% to 99%) respectively. RDTs detecting P. vivax parasitaemia Eight studies compared pLDH tests to microscopy; the average sensitivity and specificity were 95% (95% CI 86% to 99%) and 99% (95% CI 99% to 100%), respectively. RDTs designed to detect P. vivax specifically, whether alone or as part of a mixed infection, appear to be more accurate than older tests designed to distinguish P. falciparum malaria from non‐falciparum malaria.
Compared to microscopy, these tests fail to detect around 5% of P. vivax cases. This Cochrane Review, in combination with other published information about in vitro test performance and stability in the field, can assist policy‐makers to choose between the available RDTs. 12 April 2019 No update planned Review superseded This Cochrane Review has been superseded by Choi 2019 https://doi.org/10.1002/14651858.CD013218 |
t6 | This summary presents what we know from research about the effect of exercise therapy in JIA. The review shows that in children with JIA, exercise may not lead to any difference in a child's ability to function or move their joints fully, the number of joints with swelling, quality of life, overall wellbeing, pain or aerobic capacity. Aerobic capacity is the amount of oxygen the body consumes during exercise. If a person has low aerobic capacity, it generally means he or she is able to do less physical activity and may tire easily. The number of joints with pain was not measured in these studies. We often do not have precise information about side effects and complications. This is particularly true for rare but serious side effects. No short‐term adverse effects of exercise therapy were found in the studies that make up this review. Juvenile idiopathic arthritis (JIA) is the most common chronic rheumatic disease in children and is an important cause of short‐term and long‐term disability. In JIA the cause of the arthritis is unknown. It generally begins in children younger than age 16 years. It always lasts for at least six weeks. A physician will rule out other conditions that may be causing the symptoms before diagnosing JIA. Several types of exercise therapy are described in this review, for example, physical training programs such as strength training for improving muscle strength and endurance exercise for improving overall fitness (either land based or in a pool). Other studies state that a change of 0.13 on the score of the Childhood Health Assessment Questionnaire (CHAQ) is a clinically important improvement from the perspective of children and their parents. | Exercise therapy is considered an important component of the treatment of arthritis. The efficacy of exercise therapy has been reviewed in adults with rheumatoid arthritis but not in children with juvenile idiopathic arthritis (JIA). 
Objectives To assess the effects of exercise therapy on functional ability, quality of life and aerobic capacity in children with JIA. Search methods The Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Database of Systematic Reviews ( The Cochrane Library ), MEDLINE (January 1966 to April 2007), CINAHL (January 1982 to April 2007), EMBASE (January 1966 to October 2007), PEDro (January 1966 to October 2007), SportDiscus (January 1966 to October 2007), Google Scholar (to October 2007), AMED (Allied and Alternative Medicine) (January 1985 to October 2007), Health Technologies Assessment database (January 1988 to October 2007), ISI Web Science Index to Scientific and Technical Proceedings (January 1966 to October 2007) and the Chartered Society of Physiotherapy website (http://www.cps.uk.org) were searched and references tracked. Selection criteria Randomised controlled trials (RCTs) of exercise treatment in JIA. Data collection and analysis Potentially relevant references were evaluated and all data were extracted by two review authors working independently. Three out of 16 identified studies met the inclusion criteria, with a total of 212 participants. All the included studies fulfilled at least seven of 10 methodological criteria. The outcome data of the following measures were homogenous and were pooled in a meta‐analysis: functional ability (n = 198; WMD ‐0.07, 95% CI ‐0.22 to 0.08), quality of life (CHQ‐PhS: n = 115; WMD ‐3.96, 95% CI ‐8.91 to 1.00) and aerobic capacity (n = 124; WMD 0.04, 95% CI ‐0.11 to 0.19). The results suggest that the outcome measures all favoured the exercise therapy but none were statistically significant. None of the studies reported negative effects of the exercise therapy. Overall, based on 'silver‐level' evidence (www.cochranemsk.org) there was no clinically important or statistically significant evidence that exercise therapy can improve functional ability, quality of life, aerobic capacity or pain. 
The low number of available RCTs limits the generalisability. The included and excluded studies were all consistent about the adverse effects of exercise therapy; no short‐term detrimental effects of exercise therapy were found in any study. Both included and excluded studies showed that exercise does not exacerbate arthritis. The large heterogeneity in outcome measures, as seen in this review, emphasises the need for a standardised assessment or a core set of functional and physical outcome measurements suited for health research to generate evidence about the possible benefits of exercise therapy for patients with JIA. Although the short‐term effects look promising, the long‐term effect of exercise therapy remains unclear. |
t7 | The aim of this Cochrane Review was to find out if adjustable sutures (stitches) are better than non‐adjustable sutures for strabismus (squint) surgery. Cochrane researchers collected and analysed all relevant studies to answer this question and found one study. The review shows that there is an evidence gap on this topic. The Cochrane researchers found only one small study to answer this question and the results were uncertain. Strabismus occurs when the eye deviates (moves) from its normally perfect alignment. This is commonly known as a squint. Strabismus can be corrected by surgery on the muscles surrounding the eye. A variety of surgical techniques are available, including the use of adjustable or non‐adjustable sutures. There is uncertainty as to which of these suture techniques results in a better alignment of the eye and whether there are any disadvantages to the techniques. Cochrane researchers found one relevant study from Egypt. Sixty children under the age of 12 years took part in the study which compared adjustable with non‐adjustable sutures and followed participants for six months. Clinically, there may be a small increased chance of a successful outcome with adjustable sutures, but the results showed no statistical difference. | Strabismus, or squint, can be defined as a deviation from perfect ocular alignment and can be classified in many ways according to its aetiology and presentation. Treatment can be broadly divided into medical and surgical options, with a variety of surgical techniques being available, including the use of adjustable or non‐adjustable sutures for the extraocular muscles. There exists an uncertainty as to which of these techniques produces a better surgical outcome, and an opinion that the adjustable suture technique may be of greater benefit in certain situations. 
Objectives To determine if either an adjustable suture or non‐adjustable suture technique is associated with a more accurate long‐term ocular alignment and to identify specific situations in which it would be of benefit to use a particular method. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 13 June 2017. We contacted experts in the field for further information. Selection criteria We included only randomised controlled trials (RCTs) comparing adjustable to non‐adjustable sutures for strabismus surgery. Data collection and analysis We used standard procedures recommended by Cochrane. Two review authors independently screened search results and extracted data. We graded the certainty of the evidence using the GRADE approach. We identified one RCT comparing adjustable and non‐adjustable sutures in primary horizontal strabismus surgeries in 60 children aged less than 12 years in Egypt. The study was not masked and we judged it at high risk of detection bias. Ocular alignment was defined as orthophoria or a horizontal tropia of 8 prism dioptres (PD) or less at near and far distances. At six months, there may be a small increased chance of ocular alignment with adjustable sutures compared with non‐adjustable sutures clinically; however, the confidence intervals (CIs) were wide and were compatible with an increased chance of ocular alignment in the non‐adjustable sutures group, so there was no statistical difference (risk ratio (RR) 1.18, 95% CI 0.91 to 1.53). We judged this to be low‐certainty evidence, downgrading for imprecision and risk of bias. At six months, 730 per 1000 children in the non‐adjustable sutures group had ocular alignment. The study authors reported that there were no complications during surgery.
The trial did not assess patient satisfaction or resource use and costs. We could reach no reliable conclusions regarding which technique (adjustable or non‐adjustable sutures) produced a more accurate long‐term ocular alignment following strabismus surgery, or in which specific situations one technique is of greater benefit than the other, given the low‐certainty evidence and the role of chance with just the one study. More high‐quality RCTs are needed to obtain clinically valid results and to clarify these issues. Such trials should ideally 1. recruit participants with any type of strabismus or specify the subgroup of participants to be studied, for example, thyroid, paralytic, non‐paralytic, paediatric; 2. randomise all consenting participants to have either adjustable or non‐adjustable surgery prospectively; 3. have at least six months of follow‐up data; and 4. include reoperation rates as an outcome measure. |
t8 | Gout is caused by crystal formation in the joints due to high uric acid levels in the blood. People have attacks of painful, warm and swollen joints, often in the big toe. Some people develop large accumulations of crystal just beneath the skin known as tophi. Cure can be achieved if uric acid levels in blood return to normal for a prolonged time, making the crystal deposits dissolve. Dietary supplements are preparations such as vitamins, essential minerals, prebiotics, etc. Few studies evaluate their benefits and some might not be free of harm. This review found two studies. The first study (120 participants) compared enriched skim milk powder (with peptides with probable anti‐inflammatory effect) to standard skim milk and to lactose powder, and the second study (40 participants) compared vitamin C with allopurinol. In the first study, the enriched milk aimed to reduce the frequency of gout attacks, while in the second study the vitamin C aimed to reduce the uric acid levels in blood. People with gout enrolled in both studies were predominantly middle‐aged men; in the skim milk study, participants' gout appeared severe as they had very frequent attacks and 20% to 43% presented with tophi, while in the vitamin C study, participants appeared similar to ordinary people with gout. Withdrawals due to adverse events: 4 more people out of 100 who consumed enriched skim milk powder discontinued the supplement at three months (4% more withdrawals). Pain reduction, serum uric acid (sUA) levels and physical function were uncertain. Effect on tophus regression was not measured. People who consumed vitamin C showed an sUA level reduction of 0.014 mmol/L after eight weeks (or 2.8% sUA reduction). People who were administered allopurinol showed an sUA level reduction of 0.118 mmol/L after eight weeks (or 23.6% sUA reduction). There were no reports of side effects or withdrawals due to side effects in the vitamin C or allopurinol treatment groups.
Effects of vitamin C on gout attacks, pain reduction, physical function and tophus regression were not measured. We do not have precise information about side effects and complications, but possible side effects may include nausea or diarrhoea. Compared with the commonly used medicine allopurinol, low‐quality evidence from one study indicated the effect of vitamin C in reducing sUA levels is smaller and probably clinically unimportant. Other possible benefits of vitamin C are uncertain, as they were not evaluated in the study. | Dietary supplements are frequently used for the treatment of several medical conditions, both prescribed by physicians or self administered. However, evidence of benefit and safety of these supplements is usually limited or absent. Objectives To assess the efficacy and safety of dietary supplementation for people with chronic gout. Search methods We performed a search in the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and CINAHL on 6 June 2013. We applied no date or language restrictions. In addition, we performed a handsearch of the abstracts from the 2010 to 2013 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) conferences, checked the references of all included studies and trial registries. Selection criteria We considered all published randomised controlled trials (RCTs) or quasi‐RCTs that compared dietary supplements with no supplements, placebo, another supplement or pharmacological agents for adults with chronic gout for inclusion. Dietary supplements included, but were not limited to, amino acids, antioxidants, essential minerals, polyunsaturated fatty acids, prebiotic agents, probiotic agents and vitamins. The main outcomes were reduction in frequency of gouty attacks and trial participant withdrawal due to adverse events. We also considered pain reduction, health‐related quality of life, serum uric acid (sUA) normalisation, function (i.e. 
activity limitation), tophus regression and the rate of serious adverse events. Data collection and analysis We used standard methodological procedures expected by The Cochrane Collaboration. We identified two RCTs (160 participants) that fulfilled our inclusion criteria. As these two trials evaluated different diet supplements (enriched skim milk powder (SMP) and vitamin C) with different outcomes (gout flare prevention for enriched SMP and sUA reduction for vitamin C), we reported the results separately. One trial including 120 participants, at moderate risk of bias, compared SMP enriched with glycomacropeptides (GMP) with unenriched SMP and with lactose over three months. Participants were predominantly men aged in their 50s who had severe gout. The frequency of acute gout attacks, measured as the number of flares per month, decreased in all three groups over the study period. The effects of enriched SMP (SMP/GMP/G600) compared with the combined control groups (SMP and lactose powder) at three months in terms of mean number of gout flares per month were uncertain (mean ± standard deviation (SD) flares per month: 0.49 ± 1.52 in SMP/GMP/G600 group versus 0.70 ± 1.28 in control groups; mean difference (MD) ‐0.21, 95% confidence interval (CI) ‐0.76 to 0.34; low‐quality evidence). The number of withdrawals due to adverse effects was similar in both groups although again the results were imprecise (7/40 in SMP/GMP/G600 group versus 11/80 in control groups; risk ratio (RR) 1.27, 95% CI 0.53 to 3.03; low‐quality evidence). The findings for adverse events were also uncertain (2/40 in SMP/GMP/G600 group versus 3/80 in control groups; RR 1.33, 95% CI 0.23 to 7.66; low‐quality evidence). Gastrointestinal events were the most commonly reported adverse effects.
Pain from self‐reported gout flares (measured on a 10‐point Likert scale) improved slightly more in the SMP/GMP/G600 group compared with controls (mean ± SD reduction ‐1.97 ± 2.28 points in SMP/GMP/G600 group versus ‐0.94 ± 2.25 in control groups; MD ‐1.03, 95% CI ‐1.96 to ‐0.10; low‐quality evidence). This was an absolute reduction of 10% (95% CI 20% to 1% reduction), which may not be of clinical relevance. Results were imprecise for the outcome improvement in physical function (mean ± SD Health Assessment Questionnaire (HAQ)‐II (scale 0 to 3, 0 = no disability): 0.08 ± 0.23 in SMP/GMP/G600 group versus 0.11 ± 0.31 in control groups; MD ‐0.03, 95% CI ‐0.14 to 0.08; low‐quality evidence). Similarly, results for sUA reduction were imprecise (mean ± SD reduction: ‐0.025 ± 0.067 mmol/L in SMP/GMP/G600 group versus ‐0.010 ± 0.069 in control groups; MD ‐0.01, 95% CI ‐0.04 to 0.01; low‐quality evidence). The study did not report tophus regression and health‐related quality of life impact. One trial including 40 participants, at moderate to high risk of bias, compared vitamin C alone with allopurinol and with allopurinol plus vitamin C in a three‐arm trial. We only compared vitamin C with allopurinol in this review. Participants were predominantly middle‐aged men, and their severity of gout was representative of gout in general. The effect of vitamin C on the rate of gout attacks was not assessed. Vitamin C did not lower sUA as much as allopurinol (‐0.014 mmol/L in vitamin C group versus ‐0.118 mmol/L in allopurinol group; MD 0.10, 95% CI 0.06 to 0.15; low‐quality evidence). The study did not assess tophus regression, pain reduction or disability or health‐related quality of life impact. The study reported no adverse events and no participant withdrawal due to adverse events. While dietary supplements may be widely used for gout, this review has shown a paucity of high‐quality evidence assessing dietary supplementation. |
t9 | Priapism (the prolonged painful erection of the penis) is common in males with sickle cell disease. The length of time priapism lasts differs for different types and so does the medical treatment for it. Self‐management approaches may be helpful. We looked for randomised controlled trials of different treatments to find the best option. We found three trials set in Jamaica, Nigeria and the UK involving 102 people. In the trials, four different drug treatments (stilboestrol, sildenafil, ephedrine and etilefrine) were compared to placebo. The trials all looked at whether the treatments reduced how often attacks of priapism occurred. There was no difference between any of the treatments compared to placebo. Due to lack of evidence, we are not able to conclude the best treatment of priapism in sickle cell disease. We considered the quality of evidence to be low to very low as all of the trials were at risk of bias and all had low participant numbers. | Sickle cell disease comprises a group of genetic haemoglobin disorders. The predominant symptom associated with sickle cell disease is pain resulting from the occlusion of small blood vessels by abnormally 'sickle‐shaped' red blood cells. There are other complications, including chronic organ damage and prolonged painful erection of the penis, known as priapism. Severity of sickle cell disease is variable, and treatment is usually symptomatic. Priapism affects up to half of all men with sickle cell disease, however, there is no consistency in treatment. We therefore need to know the best way of treating this complication in order to offer an effective interventional approach to all affected individuals. Objectives To assess the benefits and risks of different treatments for stuttering (repeated short episodes) and fulminant (lasting for six hours or more) priapism in sickle cell disease. 
Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, which comprises references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. We also searched trial registries. Date of the most recent search of the Group's Haemoglobinopathies Trials Register: 15 September 2017. Date of most recent search of trial registries and of Embase: 12 December 2016. Selection criteria All randomised or quasi‐randomised controlled trials comparing non‐surgical or surgical treatment with placebo or no treatment, or with another intervention for stuttering or fulminant priapism. Data collection and analysis The authors independently extracted data and assessed the risk of bias of the trials. Three trials with 102 participants were identified and met the criteria for inclusion in this review. These trials compared stilboestrol to placebo, sildenafil to placebo and ephedrine or etilefrine to placebo, and ranged in duration from two weeks to six months. All of the trials were conducted in an outpatient setting in Jamaica, Nigeria and the UK. None of the trials measured our first primary outcome, detumescence, but all three trials reported on the reduction in frequency of stuttering priapism, our second primary outcome. No significant effect of any of the treatments was seen compared to placebo. Immediate side effects were not found to be significantly different from placebo in the two trials where this information was reported. We considered the quality of evidence to be low to very low as all of the trials were at risk of bias and all had low participant numbers. There is a lack of evidence for the benefits or risks of the different treatments for both stuttering and fulminant priapism in sickle cell disease.
This systematic review has clearly identified the need for well‐designed, adequately‐powered, multicentre randomised controlled trials assessing the effectiveness of specific interventions for priapism in sickle cell disease. |
t10 | Reducing blood pressure with drugs has been a strategy used in patients suffering from an acute event in the heart or in the brain, such as a heart attack or stroke. There is controversy over whether these drugs should be used in the immediate period after these events, and over which type of drug renders the most benefit. This review looked at all studies where patients were randomized to one of these drugs or placebo during this period. One class of blood pressure lowering drug, the so‐called nitrates, demonstrated a reduction in mortality in patients with heart attack. For every 1000 patients treated, 4 to 8 deaths were prevented during the first 2 days of this acute event. The ACE‐inhibitor class also decreased mortality when treatment was continued for 10 days (3 to 5 deaths prevented per 1000). | Acute cardiovascular events represent a therapeutic challenge. Blood pressure lowering drugs are commonly used and recommended in the early phase of these settings. This review analyses randomized controlled trial (RCT) evidence for this approach. Objectives To determine the effect of immediate and short‐term administration of anti‐hypertensive drugs on all‐cause mortality, total non‐fatal serious adverse events (SAE) and blood pressure, in patients with an acute cardiovascular event, regardless of blood pressure at the time of enrollment. Search methods MEDLINE, EMBASE, and the Cochrane clinical trial register from Jan 1966 to February 2009 were searched. Reference lists of articles were also browsed. In case of missing information from retrieved articles, authors were contacted. Selection criteria Randomized controlled trials (RCTs) comparing an anti‐hypertensive drug with placebo or no treatment administered to patients within 24 hours of the onset of an acute cardiovascular event. Data collection and analysis Two reviewers independently extracted data and assessed risk of bias. A fixed‐effects model with 95% confidence intervals (CI) was used.
Sensitivity analyses were also conducted. Sixty‐five RCTs (N=166,206) were included, evaluating four classes of anti‐hypertensive drugs: ACE inhibitors (12 trials), beta‐blockers (20), calcium channel blockers (18) and nitrates (18). Acute stroke was studied in 6 trials (all involving CCBs). Acute myocardial infarction was studied in 59 trials. In the latter setting, immediate nitrate treatment (within 24 hours) reduced all‐cause mortality during the first 2 days (RR 0.81, 95%CI [0.74, 0.89], p<0.0001). No further benefit was observed with nitrate therapy beyond this point. ACE inhibitors did not reduce mortality at 2 days (RR 0.91, 95%CI [0.82, 1.00]), but did after 10 days (RR 0.93, 95%CI [0.87, 0.98], p=0.01). No other blood pressure lowering drug administered as an immediate or short‐term treatment produced a statistically significant mortality reduction at 2, 10 or ≥30 days. There were not enough data studying acute stroke, and there were no RCTs evaluating other acute cardiovascular events. Nitrates reduce mortality (4‐8 deaths prevented per 1000) at 2 days when administered within 24 hours of symptom onset of an acute myocardial infarction. No mortality benefit was seen when treatment continued beyond 48 hours. The mortality benefit of immediate treatment with ACE inhibitors post MI at 2 days did not reach statistical significance, but the effect was significant at 10 days (3‐5 deaths prevented per 1000). There is good evidence for lack of a mortality benefit with immediate or short‐term treatment with beta‐blockers and calcium channel blockers for acute myocardial infarction. |
t11 | Venous leg ulcers are a common and recurring type of chronic wound. Compression therapy (bandages or stockings) is used to treat venous leg ulcers. Dressings which aim to protect the wound and provide an environment that will help it to heal are used underneath compression. Protease‐modulating dressings are one of several types of dressing available. Wounds that are slower to heal are thought to have higher levels of proteases (enzymes that break down proteins). Protease‐modulating dressings are designed to lower protease activity and help wounds to heal. A test to detect high levels of protease activity has also been introduced. A 'test and treat' strategy involves testing for elevated proteases and then using protease‐modulating treatments in ulcers which show elevated protease levels. It is important to know if using both the test and the treatment together can improve healing of leg ulcers. What we found In January 2016 we searched for as many relevant studies as possible that were randomised controlled trials, and which compared a 'test and treat' strategy with another treatment in people with venous leg ulcers. We did not find any eligible randomised studies. We found one ongoing study which might be relevant but could not obtain any more information on this. Research is still needed to find out if it is helpful to test venous leg ulcers for high levels of protease activity and then treat high levels using protease‐modulating treatments. This review is part of a set of reviews investigating different aspects of using protease‐modulating treatments in people with venous leg ulcers. | Venous leg ulcers are a common and recurring type of complex wound. They can be painful, malodorous, prone to infection and slow to heal. Standard treatment includes compression therapy and a dressing. The use of protease‐modulating treatments for venous leg ulcers is increasing. 
These treatments are based on some evidence that a proportion of slow to heal ulcers have elevated protease activity in the wound. Point‐of‐care tests which aim to detect elevated protease activity are now available. A 'test and treat' strategy involves testing for elevated proteases and then using protease‐modulating treatments in ulcers which show elevated protease levels. Objectives To determine the effects on venous leg ulcer healing of a 'test and treat' strategy involving detection of high levels of wound protease activity and treatment with protease‐modulating therapies, compared with alternative treatment strategies such as using the same treatment for all participants or using a different method of treatment selection. Search methods We searched the following electronic databases to identify reports of relevant randomised clinical trials: The Cochrane Wounds Group Specialised Register (January 2016), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 12, 2015); Ovid MEDLINE (1946 to January 2016); Ovid MEDLINE (In‐Process & Other Non‐Indexed Citations January 2016); Ovid EMBASE (1974 to January 2016); EBSCO CINAHL (1937 to January 2016). We also searched three clinical trials registers, reference lists and the websites of regulatory agencies. There were no restrictions with respect to language, date of publication or study setting. Selection criteria Published or unpublished RCTs which assessed a test and treat strategy for elevated protease activity in venous leg ulcers in adults compared with an alternative treatment strategy. The test and treat strategy needed to be the only systematic difference between the groups. Data collection and analysis Two review authors independently performed study selection; we planned that two authors would also assess risk of bias and extract data. We did not identify any studies which met the inclusion criteria for this review.
We identified one ongoing study; it was unclear whether this would be eligible for inclusion. Currently there is no randomised evidence on the impact of a test and treat policy for protease levels on outcomes in people with venous leg ulcers. |
t12 | Low‐back pain (LBP) is very common. While most back pain gets better without medical treatment, about 10% of cases last for three months or more. There are many therapies used to treat the pain and improve the lives of individuals with back pain. Massage is one of these treatments. In total we included 25 RCTs and 3096 participants in this review update. Only one trial included patients with acute LBP (pain duration less than four weeks), while all the others included patients with sub‐acute (four to 12 weeks) or chronic LBP (12 weeks or longer). In three studies, massage was applied using a mechanical device (such as a metal bar to increase the compression to the skin or a vibrating instrument), and in the remaining trials it was done using the hands. Pain intensity and quality were the most common outcomes measured in these studies, followed by back‐related function, such as walking, sleeping, bending and lifting weights. Study funding sources Seven studies did not report the sources of funding; 16 studies were funded by not‐for‐profit organizations. One study reported not receiving any funding, and one study was funded by a College of Massage Therapists. There were eight studies comparing massage to interventions that are not expected to improve outcomes (inactive controls) and 13 studies comparing massage to other interventions expected to improve outcomes (active controls). Massage was better than inactive controls for pain and function in the short‐term, but not in the long‐term follow‐up. Massage was better than active controls for pain both in the short and long‐term follow‐ups, but we found no differences for function, either in the short or long‐term follow‐ups. There were no reports of serious adverse events in any of these trials. The most common adverse event was increased pain intensity, reported in 1.5% to 25% of the participants. | Low‐back pain (LBP) is one of the most common and costly musculoskeletal problems in modern society.
It is experienced by 70% to 80% of adults at some time in their lives. Massage therapy has the potential to minimize pain and speed return to normal function. Objectives To assess the effects of massage therapy for people with non‐specific LBP. Search methods We searched PubMed to August 2014, and the following databases to July 2014: MEDLINE, EMBASE, CENTRAL, CINAHL, LILACS, Index to Chiropractic Literature, and Proquest Dissertation Abstracts. We also checked reference lists. There were no language restrictions. Selection criteria We included only randomized controlled trials of adults with non‐specific LBP classified as acute, sub‐acute or chronic. Massage was defined as soft‐tissue manipulation using the hands or a mechanical device. We grouped the comparison groups into two types: inactive controls (sham therapy, waiting list, or no treatment), and active controls (manipulation, mobilization, TENS, acupuncture, traction, relaxation, physical therapy, exercises or self‐care education). Data collection and analysis We used standard Cochrane methodological procedures and followed Cochrane Back and Neck (CBN) Group guidelines. Two independent authors performed article selection, data extraction and critical appraisal. In total we included 25 trials (3096 participants) in this review update. The majority were funded by not‐for‐profit organizations. One trial included participants with acute LBP, and the remaining trials included people with sub‐acute or chronic LBP (CLBP). The most common types of bias in these studies were performance and measurement bias, because it is difficult to blind participants, massage therapists and outcome assessors. In three trials massage was done with a mechanical device, and the remaining trials used only the hands. We judged the quality of the evidence to be "low" to "very low", and the main reasons for downgrading the evidence were risk of bias and imprecision. There was no suggestion of publication bias.
For acute LBP, massage was found to be better than inactive controls for pain (SMD ‐1.24, 95% CI ‐1.85 to ‐0.64; participants = 51; studies = 1) in the short‐term, but not for function (SMD ‐0.50, 95% CI ‐1.06 to 0.06; participants = 51; studies = 1). For sub‐acute and chronic LBP, massage was better than inactive controls for pain (SMD ‐0.75, 95% CI ‐0.90 to ‐0.60; participants = 761; studies = 7) and function (SMD ‐0.72, 95% CI ‐1.05 to ‐0.39; participants = 725; studies = 6) in the short‐term, but not in the long‐term; however, when compared to active controls, massage was better for pain, both in the short‐term (SMD ‐0.37, 95% CI ‐0.62 to ‐0.13; participants = 964; studies = 12) and long‐term follow‐up (SMD ‐0.40, 95% CI ‐0.80 to ‐0.01; participants = 757; studies = 5), but no differences were found for function (in either the short or long‐term). There were no reports of serious adverse events in any of these trials. Increased pain intensity was the most common adverse event, reported in 1.5% to 25% of the participants. We have very little confidence that massage is an effective treatment for LBP. Participants with acute, sub‐acute and chronic LBP had improvements in pain outcomes with massage only in the short‐term follow‐up. Functional improvement was observed in participants with sub‐acute and chronic LBP when compared with inactive controls, but only for the short‐term follow‐up. There were only minor adverse effects with massage.
t13 | Individuals with mildly elevated blood pressures, but no previous cardiovascular events, make up the majority of those considered for and receiving antihypertensive therapy. The decision to treat this population has important consequences for both the patients (e.g. adverse drug effects, lifetime of drug therapy, cost of treatment, etc.) and any third party payer (e.g. high cost of drugs, physician services, laboratory tests, etc.). In this review, the existing evidence comparing health outcomes between treated and untreated individuals is summarized. Data from the limited number of available trials and participants showed no difference between treated and untreated individuals in heart attack, stroke, and death. About 9% of patients treated with drugs discontinued treatment due to adverse effects. Therefore, the benefits and harms of antihypertensive drug therapy in this population need to be investigated by further research. | People with no previous cardiovascular events or cardiovascular disease represent a primary prevention population. The benefits and harms of treating mild hypertension in primary prevention patients are not known at present. This review examines the existing randomised controlled trial (RCT) evidence. Objectives Primary objective: To quantify the effects of antihypertensive drug therapy on mortality and morbidity in adults with mild hypertension (systolic blood pressure (BP) 140‐159 mmHg and/or diastolic BP 90‐99 mmHg) and without cardiovascular disease. Search methods We searched The Cochrane Central Register of Controlled Trials (CENTRAL) 2013 Issue 9, MEDLINE (1946 to October 2013), EMBASE (1974 to October 2013), ClinicalTrials.gov (all dates to October 2013), and reference lists of articles.
The Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effectiveness (DARE) were searched for previous reviews and meta‐analyses of anti‐hypertensive drug treatment compared to placebo or no treatment trials until the end of 2011. Selection criteria RCTs of at least 1 year's duration. Data collection and analysis The outcomes assessed were mortality, stroke, coronary heart disease (CHD), total cardiovascular events (CVS), and withdrawals due to adverse effects. Of 11 RCTs identified, 4 were included in this review, with 8,912 participants. Treatment for 4 to 5 years with antihypertensive drugs as compared to placebo did not reduce total mortality (RR 0.85, 95% CI 0.63, 1.15). In 7,080 participants, treatment with antihypertensive drugs as compared to placebo did not reduce coronary heart disease (RR 1.12, 95% CI 0.80, 1.57), stroke (RR 0.51, 95% CI 0.24, 1.08), or total cardiovascular events (RR 0.97, 95% CI 0.72, 1.32). Withdrawals due to adverse effects were increased by drug therapy (RR 4.80, 95% CI 4.14, 5.57); absolute risk increase (ARI) 9%. Antihypertensive drugs used in the treatment of adults (primary prevention) with mild hypertension (systolic BP 140‐159 mmHg and/or diastolic BP 90‐99 mmHg) have not been shown to reduce mortality or morbidity in RCTs. Treatment caused 9% of patients to discontinue treatment due to adverse effects. More RCTs are needed in this prevalent population to know whether the benefits of treatment exceed the harms.
t14 | Depression affects 350 million people worldwide, impacting on quality of life, work, relationships and physical health. Medication and talking therapies are not always suitable or available. Dance movement therapy (DMT) uses bodily movements to explore and express emotions with groups or individuals. This is the first review of the effectiveness of DMT for depression and will add to the evidence base regarding depression treatments. Databases were searched for all published and unpublished randomised controlled studies of DMT for depression up to October 2014, with participants of any age, gender or ethnicity. Three studies (147 participants) met inclusion criteria: two of adults (men and women) and one of adolescents (females only). Due to the low number of studies and low quality of evidence, it was not possible to draw firm conclusions about the effectiveness of DMT for depression. It was not possible to compare DMT with medication, talking therapies, physical treatments or to compare types of DMT due to lack of available evidence. Overall, there is no evidence for or against DMT as a treatment for depression. There is some evidence to suggest DMT is more effective than standard care for adults, but this was not clinically significant. DMT is no more effective than standard care for young people. Evidence from just one study of low methodological quality suggested that drop‐out rates from the DMT group did not differ significantly from the control group, and there was no reliable effect in either direction for quality of life or self‐esteem. A large positive effect was observed for social functioning, but since this was from one study of low methodological quality the result is imprecise. | Depression is a debilitating condition affecting more than 350 million people worldwide ( WHO 2012 ) with a limited number of evidence‐based treatments.
Drug treatments may be inappropriate due to side effects and cost, and not everyone can use talking therapies. There is a need for evidence‐based treatments that can be applied across cultures and with people who find it difficult to verbally articulate thoughts and feelings. Dance movement therapy (DMT) is used with people from a range of cultural and intellectual backgrounds, but effectiveness remains unclear. Objectives To examine the effects of DMT for depression with or without standard care, compared to no treatment or standard care alone, psychological therapies, drug treatment, or other physical interventions. Also, to compare the effectiveness of different DMT approaches. Search methods The Cochrane Depression, Anxiety and Neurosis Review Group's Specialised Register (CCDANCTR‐Studies and CCDANCTR‐References) and CINAHL were searched (to 2 Oct 2014) together with the World Health Organization's International Clinical Trials Registry Platform (WHO ICTRP) and ClinicalTrials.gov. The review authors also searched the Allied and Complementary Medicine Database (AMED), the Education Resources Information Center (ERIC) and Dissertation Abstracts (to August 2013), handsearched bibliographies, contacted professional associations, educational programmes and dance therapy experts worldwide. Selection criteria Inclusion criteria were: randomised controlled trials (RCTs) studying outcomes for people of any age with depression as defined by the trialist, with at least one group being DMT. DMT was defined as: participatory dance movement with clear psychotherapeutic intent, facilitated by an individual with a level of training that could be reasonably expected within the country in which the trial was conducted. For example, in the USA this would either be a trainee, or qualified and credentialed by the American Dance Therapy Association (ADTA).
In the UK, the therapist would either be in training with, or accredited by, the Association for Dance Movement Psychotherapy (ADMP, UK). Similar professional bodies exist in Europe, but in some countries (e.g. China) where the profession is in development, a lower level of qualification would mirror the situation some decades previously in the USA or UK. Hence, the review authors accepted a relevant professional qualification (e.g. nursing or psychodynamic therapies) plus a clear description of the treatment that would indicate its adherence to published guidelines including Levy 1992, ADMP UK 2015, Meekums 2002, and Karkou 2006. Data collection and analysis Study methodological quality was evaluated and data were extracted independently by the first two review authors using a data extraction form, the third author acting as an arbitrator. Three studies totalling 147 participants (107 adults and 40 adolescents) met the inclusion criteria. Seventy‐four participants took part in DMT treatment, while 73 comprised the control groups. Two studies included male and female adults with depression. One of these studies included outpatient participants; the other study was conducted with inpatients at an urban hospital. The third study reported findings with female adolescents in a middle‐school setting. All included studies collected continuous data using two different depression measures: the clinician‐completed Hamilton Depression Rating Scale (HAM‐D); and the Symptom Checklist‐90‐R (SCL‐90‐R) (self‐rating scale). Statistical heterogeneity was identified between the three studies. There was no reliable effect of DMT on depression (SMD ‐0.67, 95% CI ‐1.40 to 0.05; very low quality evidence). A planned subgroup analysis indicated a positive effect in adults (two studies, 107 participants), but this failed to meet clinical significance (SMD ‐7.33, 95% CI ‐9.92 to ‐4.73).
One adult study reported drop‐out rates, found to be non‐significant with an odds ratio of 1.82 (95% CI 0.35 to 9.45; low quality evidence). One study measured social functioning, demonstrating a large positive effect (MD ‐6.80, 95% CI ‐11.44 to ‐2.16; very low quality evidence), but this result was imprecise. One study showed no effect in either direction for quality of life (MD 0.30, 95% CI ‐0.60 to 1.20; low quality evidence) or self‐esteem (MD 1.70, 95% CI ‐2.36 to 5.76; low quality evidence). The low‐quality evidence from three small trials with 147 participants does not allow any firm conclusions to be drawn regarding the effectiveness of DMT for depression. Larger trials of high methodological quality are needed to assess DMT for depression, with economic analyses and acceptability measures, and for all age groups. |
t15 | The aim of this Cochrane Review is to find out whether certain antibiotics are more effective in treating scrub typhus. We collected and analysed all relevant studies to answer this question and included seven studies. Tetracycline, doxycycline, azithromycin, and rifampicin are effective antibiotics for scrub typhus treatment that have led to few treatment failures. For specific outcomes, some low‐certainty evidence suggests there may be little or no difference between tetracycline, doxycycline, and azithromycin. Healthcare workers should not use rifampicin as a first‐line treatment. Researchers should standardize the way they diagnose and assess scrub typhus. Scrub typhus is an important cause of fever in Asia. We studied people with scrub typhus diagnosed by health professionals and confirmed by laboratory tests. We compared different antibiotic treatments. We looked at whether choice of antibiotic made a difference in the number of people who experienced failed treatment, and we determined the proportions who had resolution of fever at 48 hours. We found seven relevant studies. Only one study included children younger than 15 years. We are uncertain whether doxycycline compared to tetracycline affects treatment failure, as the certainty of the evidence is very low. Studies looked at resolution of fever within five days. Doxycycline compared to tetracycline may make little or no difference in the proportion of patients with resolution of fever within 48 hours and in time to defervescence. Studies did not formally report serious adverse events. We are uncertain whether macrolides compared to doxycycline affect treatment failure, resolution of fever within five days, time to defervescence, or serious adverse events, as the certainty of the evidence is very low. Macrolides compared to doxycycline may make little or no difference in the proportion of patients with resolution of fever within five days. 
We are uncertain whether rifampicin compared to doxycycline affects treatment failure, proportion of patients with resolution of fever within 48 hours, or time to defervescence, as the certainty of evidence is very low. | Scrub typhus, an important cause of acute fever in Asia, is caused by Orientia tsutsugamushi, an obligate intracellular bacterium. Antibiotics currently used to treat scrub typhus include tetracyclines, chloramphenicol, macrolides, and rifampicin. Objectives To assess and compare the effects of different antibiotic regimens for treatment of scrub typhus. Search methods We searched the following databases up to 8 January 2018: the Cochrane Infectious Diseases Group specialized trials register; CENTRAL, in the Cochrane Library (2018, Issue 1); MEDLINE; Embase; LILACS; and the meta Register of Controlled Trials ( m RCT). We checked references and contacted study authors for additional data. We applied no language or date restrictions. Selection criteria Randomized controlled trials (RCTs) or quasi‐RCTs comparing antibiotic regimens in people with the diagnosis of scrub typhus based on clinical symptoms and compatible laboratory tests (excluding the Weil‐Felix test). Data collection and analysis For this update, two review authors re‐extracted all data and assessed the certainty of evidence. We meta‐analysed data to calculate risk ratios (RRs) for dichotomous outcomes when appropriate, and elsewhere tabulated data to facilitate narrative analysis. We included six RCTs and one quasi‐RCT with 548 participants; they took place in the Asia‐Pacific region: Korea (three trials), Malaysia (one trial), and Thailand (three trials). Only one trial included children younger than 15 years (N = 57). We judged five trials to be at high risk of performance and detection bias owing to inadequate blinding. Trials were heterogenous in terms of dosing of interventions and outcome measures. Across trials, treatment failure rates were low. 
Two trials compared doxycycline to tetracycline. For treatment failure, the difference between doxycycline and tetracycline is uncertain (very low‐certainty evidence). Doxycycline compared to tetracycline may make little or no difference in resolution of fever within 48 hours (risk ratio (RR) 1.14, 95% confidence interval (CI) 0.90 to 1.44; 55 participants; one trial; low‐certainty evidence) and in time to defervescence (116 participants; one trial; low‐certainty evidence). We were unable to extract data for other outcomes. Three trials compared doxycycline versus macrolides. For most outcomes, including treatment failure, resolution of fever within 48 hours, time to defervescence, and serious adverse events, we are uncertain whether study results show a difference between doxycycline and macrolides (very low‐certainty evidence). Macrolides compared to doxycycline may make little or no difference in the proportion of patients with resolution of fever within five days (RR 1.05, 95% CI 0.99 to 1.10; 185 participants; two trials; low‐certainty evidence). Another trial compared azithromycin versus doxycycline or chloramphenicol in children, but we were not able to disaggregate data for the doxycycline/chloramphenicol group. One trial compared doxycycline versus rifampicin. For all outcomes, we are uncertain whether study results show a difference between doxycycline and rifampicin (very low‐certainty evidence). Of note, this trial deviated from the protocol after three out of eight patients who had received doxycycline and rifampicin combination therapy experienced treatment failure. Across trials, mild gastrointestinal side effects appeared to be more common with doxycycline than with comparator drugs. Tetracycline, doxycycline, azithromycin, and rifampicin are effective treatment options for scrub typhus and have resulted in few treatment failures. Chloramphenicol also remains a treatment option, but we could not include this among direct comparisons in this review.
Most available evidence is of low or very low certainty. For specific outcomes, some low‐certainty evidence suggests there may be little or no difference between tetracycline, doxycycline, and azithromycin as treatment options. Given very low‐certainty evidence for rifampicin and the risk of inducing resistance in undiagnosed tuberculosis, clinicians should not regard this as a first‐line treatment option. Clinicians could consider rifampicin as a second‐line treatment option after exclusion of active tuberculosis. Further research should consist of additional adequately powered trials of doxycycline versus azithromycin or other macrolides, trials of other candidate antibiotics including rifampicin, and trials of treatments for severe scrub typhus. Researchers should standardize diagnostic techniques and reporting of clinical outcomes to allow robust comparisons. As of 11 April 2019, this review is up to date: all eligible published studies found in the most recent search (8 January 2018) were included, and four ongoing studies have been identified (see 'Characteristics of ongoing studies' section). |
t16 | We reviewed the evidence on the effects of dietary interventions on pain in children aged between five and 18 years with recurrent abdominal pain (RAP). Recurrent abdominal pain, or RAP, is a term used for unexplained episodes of stomachache or abdominal pain in children. Recurrent abdominal pain is a common condition, and most children are likely to be helped by simple measures. However, a range of treatments have been recommended to relieve abdominal pain, including making changes to the child's eating habits by adding supplements or excluding certain foods. Nineteen studies met our inclusion criteria, including 13 studies of probiotics and four studies of fibre interventions. We also found one study of a diet low in substances known as FODMAPs (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) and one study of a fructose‐restricted diet. All of the studies compared dietary interventions to a placebo or control. The trials were carried out in eight countries and included a total of 1453 participants, aged between five and 18 years. Most children were recruited from outpatient clinics. Most interventions lasted four to six weeks. Probiotics We found evidence from 13 studies suggesting that probiotics might be effective in improving pain in the shorter term. Most studies did not report on other areas such as quality of daily life. We judged this evidence to be of moderate or low quality because some studies were small, showed varying results, or were at risk of bias. Fibre supplements We found no clear evidence of improvement of pain from four studies of fibre supplements. Most studies did not report on other areas such as quality of daily life. There were few studies of fibre supplements, and some of these studies were at risk of bias. Low FODMAP diets We found only one study evaluating the effectiveness of low FODMAP diets in children with RAP. 
We found only one study evaluating the effectiveness of fructose‐restricted diets in children with RAP. We found some evidence suggesting that probiotics may be helpful in relieving pain in children with RAP in the short term. Clinicians may therefore consider probiotic interventions as part of the management strategy for RAP. | This is an update of the original Cochrane review, last published in 2009 (Huertas‐Ceballos 2009). Recurrent abdominal pain (RAP), a category that includes children with irritable bowel syndrome, is a common problem affecting between 4% and 25% of school‐aged children. For the majority of such children, no organic cause for their pain can be found on physical examination or investigation. Many dietary interventions have been suggested to improve the symptoms of RAP. These may involve either excluding ingredients from the diet or adding supplements such as fibre or probiotics. Objectives To examine the effectiveness of dietary interventions in improving pain in children of school age with RAP. Search methods We searched CENTRAL, Ovid MEDLINE, Embase, eight other databases, and two trials registers, together with reference checking, citation searching and contact with study authors, in June 2016. Selection criteria Randomised controlled trials (RCTs) comparing dietary interventions with placebo or no treatment in children aged five to 18 years with RAP or an abdominal pain‐related, functional gastrointestinal disorder, as defined by the Rome III criteria (Rasquin 2006). Data collection and analysis We used standard methodological procedures expected by Cochrane. We grouped dietary interventions together by category for analysis. We contacted study authors to ask for missing information and clarification, when needed. We included 19 RCTs, reported in 27 papers with a total of 1453 participants. Fifteen of these studies were not included in the previous review. All 19 RCTs had follow‐up ranging from one to five months.
Participants were aged between four and 18 years from eight different countries and were recruited largely from paediatric gastroenterology clinics. The mean age at recruitment ranged from 6.3 years to 13.1 years. Girls outnumbered boys in most trials. Fourteen trials recruited children with a diagnosis under the broad umbrella of RAP or functional gastrointestinal disorders; five trials specifically recruited only children with irritable bowel syndrome. The studies fell into four categories: trials of probiotic‐based interventions (13 studies), trials of fibre‐based interventions (four studies), trials of low FODMAP (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) diets (one study), and trials of fructose‐restricted diets (one study). We found that children treated with probiotics reported a greater reduction in pain frequency at zero to three months postintervention than those given placebo (standardised mean difference (SMD) ‐0.55, 95% confidence interval (CI) ‐0.98 to ‐0.12; 6 trials; 523 children). There was also a decrease in pain intensity in the intervention group at the same time point (SMD ‐0.50, 95% CI ‐0.85 to ‐0.15; 7 studies; 575 children). However, we judged the evidence for these outcomes to be of low quality using GRADE due to an unclear risk of bias from incomplete outcome data and significant heterogeneity. We found that children treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (odds ratio (OR) 1.63, 95% CI 1.07 to 2.47; 7 studies; 722 children). The estimated number needed to treat for an additional beneficial outcome (NNTB) was eight, meaning that eight children would need to receive probiotics for one to experience improvement in pain in this timescale. We judged the evidence for this outcome to be of moderate quality due to significant heterogeneity. 
Children with a symptom profile defined as irritable bowel syndrome treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (OR 3.01, 95% CI 1.77 to 5.13; 4 studies; 344 children). Children treated with probiotics were more likely to experience improvement in pain at three to six months postintervention compared to those receiving placebo (OR 1.94, 95% CI 1.10 to 3.43; 2 studies; 224 children). We judged the evidence for these two outcomes to be of moderate quality due to small numbers of participants included in the studies. We found that children treated with fibre‐based interventions were not more likely to experience an improvement in pain at zero to three months postintervention than children given placebo (OR 1.83, 95% CI 0.92 to 3.65; 2 studies; 136 children). There was also no reduction in pain intensity compared to placebo at the same time point (SMD ‐1.24, 95% CI ‐3.41 to 0.94; 2 studies; 135 children). We judged the evidence for these outcomes to be of low quality due to an unclear risk of bias, imprecision, and significant heterogeneity. We found only one study of low FODMAP diets and only one trial of fructose‐restricted diets, meaning no pooled analyses were possible. We were unable to perform any meta‐analyses for the secondary outcomes of school performance, social or psychological functioning, or quality of daily life, as not enough studies included these outcomes or used comparable measures to assess them. With the exception of one study, all studies reported monitoring children for adverse events; no major adverse events were reported. Overall, we found moderate‐ to low‐quality evidence suggesting that probiotics may be effective in improving pain in children with RAP. Clinicians may therefore consider probiotic interventions as part of a holistic management strategy. 
However, further trials are needed to examine longer‐term outcomes and to improve confidence in estimating the size of the effect, as well as to determine the optimal strain and dosage. Future research should also explore the effectiveness of probiotics in children with different symptom profiles, such as those with irritable bowel syndrome. We found only a small number of trials of fibre‐based interventions, with overall low‐quality evidence for the outcomes. There was therefore no convincing evidence that fibre‐based interventions improve pain in children with RAP. Further high‐quality RCTs of fibre supplements involving larger numbers of participants are required. Future trials of low FODMAP diets and other dietary interventions are also required to facilitate evidence‐based recommendations. |
t17 | Obesity is associated with many health problems and a higher risk of death. Bariatric surgery for obesity is usually only considered when other treatments have failed. We aimed to compare surgical interventions with non‐surgical interventions for obesity (such as drugs, diet and exercise) and to compare different surgical procedures. Bariatric surgery can be considered for people with a body mass index (BMI = kg/m²) greater than 40, or for those with a BMI less than 40 and obesity‐related diseases such as diabetes. We included 22 studies comparing surgery with non‐surgical interventions, or comparing different types of surgery. Altogether 1496 participants were allocated to surgery and 302 participants to non‐surgical interventions. Most studies followed participants for 12 to 36 months; the longest follow‐up was 10 years. The majority of participants were women and, on average, in their early 30s to early 50s. Seven studies compared surgery with non‐surgical interventions. Due to differences in the way that the studies were designed we decided not to generate an average of their results. The direction of the effect indicated that people who had surgery achieved greater weight loss one to two years afterwards compared with people who did not have surgery. Improvements in quality of life and diabetes were also found. No deaths occurred; reoperations in the surgical intervention groups ranged between 2% and 13%, as reported in five studies. Three studies found that gastric bypass (GB) achieved greater weight loss up to five years after surgery compared with adjustable gastric band (AGB): the BMI at the end of the studies was on average five units less. The GB procedure resulted in greater duration of hospitalisation and a greater number of late major complications. AGB required high rates of reoperation for removal of the gastric band. Seven studies compared GB with sleeve gastrectomy (SG).
Overall there were no important differences for weight loss, quality of life, comorbidities and complications, although gastro‐oesophageal reflux disease improved in more patients following GB in one study. One death occurred in the GB group. Serious adverse events occurred in 5% of the GB group and 1% of the SG group, as reported in one study. Two studies reported 7% to 24% of people with GB and 3% to 34% of those with SG requiring reoperations. Two studies found that biliopancreatic diversion with duodenal switch resulted in greater weight loss than GB after two or four years in people with a relatively high BMI. BMI at the end of the studies was on average seven units lower. One death occurred in the biliopancreatic diversion group. Reoperations were higher in the biliopancreatic diversion group (16% to 28%) than the GB group (4% to 8%). One study comparing duodenojejunal bypass with SG versus GB found weight loss outcomes and rates of remission of diabetes and hypertension were similar at 12 months follow‐up. No deaths occurred in either group; reoperation rates were not reported. One study found that BMI was reduced by 10 units more following SG at three years follow‐up compared with AGB. Reoperations occurred in 20% of the AGB group and in 10% of the SG group. One study found no relevant difference in weight‐loss outcomes following gastric imbrication compared with SG. No deaths occurred; 17% of participants in the gastric imbrication group required reoperation. From the information that was available to us about the studies, we were unable to assess how well designed they were. Adverse events and reoperation rates were not consistently reported in the publications of the studies. Most studies followed participants for only one or two years; therefore the long‐term effects of surgery remain unclear. Few studies assessed the effects of bariatric surgery in treating comorbidities in participants with a lower BMI.
There is therefore a lack of evidence for the use of bariatric surgery in treating comorbidities in people who are overweight or who do not meet standard criteria for bariatric surgery. | Bariatric (weight loss) surgery for obesity is considered when other treatments have failed. The effects of the available bariatric procedures compared with medical management and with each other are uncertain. This is an update of a Cochrane review first published in 2003 and most recently updated in 2009. Objectives To assess the effects of bariatric surgery for overweight and obesity, including the control of comorbidities. Search methods Studies were obtained from searches of numerous databases, supplemented with searches of reference lists and consultation with experts in obesity research. Date of last search was November 2013. Selection criteria Randomised controlled trials (RCTs) comparing surgical interventions with non‐surgical management of obesity or overweight or comparing different surgical procedures. Data collection and analysis Data were extracted by one review author and checked by a second review author. Two review authors independently assessed risk of bias and evaluated overall study quality utilising the GRADE instrument. Twenty‐two trials with 1798 participants were included; sample sizes ranged from 15 to 250. Most studies followed participants for 12, 24 or 36 months; the longest follow‐up was 10 years. The risk of bias across all domains of most trials was uncertain; just one was judged to have adequate allocation concealment. All seven RCTs comparing surgery with non‐surgical interventions found benefits of surgery on measures of weight change at one to two years follow‐up. Improvements for some aspects of health‐related quality of life (QoL) (two RCTs) and diabetes (five RCTs) were also found. Five studies reported data on mortality, no deaths occurred. 
Serious adverse events (SAEs) were reported in four studies and ranged from 0% to 37% in the surgery groups and 0% to 25% in the no surgery groups. Between 2% and 13% of participants required reoperations in the five studies that reported these data. Three RCTs found that laparoscopic Roux‐en‐Y gastric bypass (LRYGB) achieved significantly greater weight loss and body mass index (BMI) reduction up to five years after surgery compared with laparoscopic adjustable gastric banding (LAGB). Mean end‐of‐study BMI was lower following LRYGB compared with LAGB: mean difference (MD) ‐5.2 kg/m² (95% confidence interval (CI) ‐6.4 to ‐4.0; P < 0.00001; 265 participants; 3 trials; moderate quality evidence). Evidence for QoL and comorbidities was very low quality. The LRYGB procedure resulted in greater duration of hospitalisation in two RCTs (4/3.1 versus 2/1.5 days) and a greater number of late major complications (26.1% versus 11.6%) in one RCT. In one RCT the LAGB required high rates of reoperation for band removal (9 patients, 40.9%). Open RYGB, LRYGB and laparoscopic sleeve gastrectomy (LSG) led to losses of weight and/or BMI but there was no consistent picture as to which procedure was better or worse in the seven included trials. MD was ‐0.2 kg/m² (95% CI ‐1.8 to 1.3; 353 participants; 6 trials; low quality evidence) in favour of LRYGB. No statistically significant differences in QoL were found (one RCT). Six RCTs reported mortality; one death occurred following LRYGB. SAEs were reported by one RCT and were higher in the LRYGB group (4.5%) than the LSG group (0.9%). Reoperations ranged from 6.7% to 24% in the LRYGB group and 3.3% to 34% in the LSG group. Effects on comorbidities, complications and additional surgical procedures were neutral, except gastro‐oesophageal reflux disease improved following LRYGB (one RCT).
One RCT of people with a BMI 25 to 35 and type 2 diabetes found laparoscopic mini‐gastric bypass resulted in greater weight loss and improvement of diabetes compared with LSG, and had similar levels of complications. Two RCTs found that biliopancreatic diversion with duodenal switch (BDDS) resulted in greater weight loss than RYGB in morbidly obese patients. End‐of‐study mean BMI loss was greater following BDDS: MD ‐7.3 kg/m² (95% CI ‐9.3 to ‐5.4; P < 0.00001; 107 participants; 2 trials; moderate quality evidence). QoL was similar on most domains. In one study between 82% and 100% of participants with diabetes had an HbA1c of less than 5% three years after surgery. Reoperations were higher in the BDDS group (16.1% to 27.6%) than the LRYGB group (4.3% to 8.3%). One death occurred in the BDDS group. One RCT comparing laparoscopic duodenojejunal bypass with sleeve gastrectomy versus LRYGB found BMI, excess weight loss, and rates of remission of diabetes and hypertension were similar at 12 months follow‐up (very low quality evidence). QoL, SAEs and reoperation rates were not reported. No deaths occurred in either group. One RCT comparing laparoscopic isolated sleeve gastrectomy (LISG) versus LAGB found greater improvement in weight‐loss outcomes following LISG at three years follow‐up (very low quality evidence). QoL, mortality and SAEs were not reported. Reoperations occurred in 20% of the LAGB group and in 10% of the LISG group. One RCT (unpublished) comparing laparoscopic gastric imbrication with LSG found no statistically significant difference in weight loss between groups (very low quality evidence). QoL and comorbidities were not reported. No deaths occurred. Two participants in the gastric imbrication group required reoperation. Surgery results in greater improvement in weight loss outcomes and weight‐associated comorbidities compared with non‐surgical interventions, regardless of the type of procedures used.
When compared with each other, certain procedures resulted in greater weight loss and improvements in comorbidities than others. Outcomes were similar between RYGB and sleeve gastrectomy, and both of these procedures had better outcomes than adjustable gastric banding. For people with very high BMI, biliopancreatic diversion with duodenal switch resulted in greater weight loss than RYGB. Duodenojejunal bypass with sleeve gastrectomy and laparoscopic RYGB had similar outcomes, however this is based on one small trial. Isolated sleeve gastrectomy led to better weight‐loss outcomes than adjustable gastric banding after three years follow‐up. This was based on one trial only. Weight‐related outcomes were similar between laparoscopic gastric imbrication and laparoscopic sleeve gastrectomy in one trial. Across all studies adverse event rates and reoperation rates were generally poorly reported. Most trials followed participants for only one or two years, therefore the long‐term effects of surgery remain unclear. |
t18 | When women go to their doctor with a mass that could be ovarian cancer, they are normally referred for surgery, since the mass may need to be removed and examined microscopically in a laboratory in a procedure known as paraffin section histopathology. A third of women with ovarian cancer present with a cyst or mass without any visible evidence of spread elsewhere. However, in these apparently early‐stage cancers (confined to the ovary) surgical staging is required to decide if chemotherapy is required. This staging consists of sampling tissues within the abdomen, including lymph nodes. Different staging strategies exist. One is to perform surgical staging for all women who might have a cancer, to get information about spread. This may result in complications due to additional surgical procedures that may turn out to be unnecessary in approximately two thirds of women. A second strategy is to perform an operation to remove just the suspicious mass and await the paraffin section diagnosis. This may result in needing a further operation in one third of women if cancer is confirmed, putting them at increased risks from another operation. A third strategy is to send the mass to the laboratory during the operation for a quick diagnosis, known as 'frozen section'. This helps the surgeon decide if further surgical treatment is required during a single operation. Frozen section is not as accurate as the traditional slower paraffin section examination, and it entails a risk of incorrect diagnosis, meaning that some women may not have all the samples taken at the initial surgery and may need to undergo a second operation; and others may undergo unnecessary surgical sampling. We searched all available studies reporting use of frozen section in women with suspicious ovarian masses. We excluded studies without an English translation and studies without enough information to allow us to analyse the data. 
We included 38 studies (11,181 women), reporting three types of diagnoses from the frozen section test: cancer, which occurred in an average of 29% of women; borderline tumour, which occurred in 8% of women; and benign tumour, which accounted for the remainder. In a hypothetical group of 1000 patients where 290 have cancer and 80 have a borderline tumour, 261 women would receive a correct diagnosis of a cancer and 706 women would be correctly diagnosed without a cancer based on a frozen section result. However, 4 women would be incorrectly diagnosed as having a cancer where none existed (false positive), and 29 women with cancer would be missed and potentially need further treatment (false negative). If surgeons used a frozen section result of either a cancer or a borderline tumour to diagnose cancer, 280 women would be correctly diagnosed with a cancer and 635 women would be correctly diagnosed without a cancer. However, 75 women would be incorrectly diagnosed as having a cancer, and 10 women with cancer would be missed on the initial test and found to have a cancer after surgery. If the frozen section result reported the mass as benign or malignant, the final diagnosis would remain the same in, on average, 94% and 99% of the cases, respectively. In cases where the frozen section diagnosis was a borderline tumour, there is a chance that the final diagnosis would turn out to be a cancer in, on average, 21% of women. Where the frozen section diagnosis is a borderline tumour, the diagnosis is less accurate than for benign or malignant tumours. Surgeons may choose to perform additional surgery in this group of women at the time of their initial surgery in order to reduce the need for a second operation if the final diagnosis turns out to be a cancer, as it would on average in one out of five of these women. | Women with suspected early‐stage ovarian cancer need surgical staging which involves taking samples from areas within the abdominal cavity and retroperitoneal lymph nodes in order to inform further treatment.
One potential strategy is to surgically stage all women with suspicious ovarian masses, without any histological information during surgery. This avoids incomplete staging, but puts more women at risk of potential surgical over‐treatment. A second strategy is to perform a two‐stage procedure to remove the pelvic mass and subject it to paraffin sectioning, which involves formal tissue fixing with formalin and paraffin embedding, prior to ultrathin sectioning and multiple site sampling of the tumour. Surgeons may then base further surgical staging on this histology, reducing the rate of over‐treatment, but conferring additional surgical and anaesthetic morbidity. A third strategy is to perform a rapid histological analysis on the ovarian mass during surgery, known as 'frozen section'. Tissues are snap frozen to allow fine tissue sections to be cut and basic histochemical staining to be performed. Surgeons can perform or avoid the full surgical staging procedure depending on the results. However, this is a relatively crude test compared to paraffin sections, which take many hours to perform. With frozen section there is therefore a risk of misdiagnosing malignancy and understaging women subsequently found to have a presumed early‐stage malignancy (false negative), or overstaging women without a malignancy (false positive). Therefore it is important to evaluate the accuracy and usefulness of adding frozen section to the clinical decision‐making process. Objectives To assess the diagnostic test accuracy of frozen section (index test) to diagnose histopathological ovarian cancer in women with suspicious pelvic masses as verified by paraffin section (reference standard). Search methods We searched MEDLINE (January 1946 to January 2015), EMBASE (January 1980 to January 2015) and relevant Cochrane registers. 
Selection criteria Studies that used frozen section for intraoperative diagnosis of ovarian masses suspicious of malignancy, provided there were sufficient data to construct 2 x 2 tables. We excluded articles without an available English translation. Data collection and analysis Authors independently assessed the methodological quality of included studies using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS‐2) domains: patient selection, index test, reference standard, flow and timing. Data extraction converted 3 x 3 tables of per patient results presented in articles into 2 x 2 tables, for two index test thresholds. All studies were retrospective, and the majority reported consecutive sampling of cases. Sensitivity and specificity results were available from 38 studies involving 11,181 participants (3200 with invasive cancer, 1055 with borderline tumours and 6926 with benign tumours, determined by paraffin section as the reference standard). The median prevalence of malignancy was 29% (interquartile range (IQR) 23% to 36%, range 11% to 63%). We assessed test performance using two thresholds for the frozen section test. Firstly, we used a test threshold for frozen sections, defining positive test results as invasive cancer and negative test results as borderline and benign tumours. The average sensitivity was 90.0% (95% confidence interval (CI) 87.6% to 92.0%; typical range 71% to 100%), and average specificity was 99.5% (95% CI 99.2% to 99.7%; range 96% to 100%). Similarly, we analysed sensitivity and specificity using a second threshold for frozen section, where both invasive cancer and borderline tumours were considered test positive and benign cases were classified as negative. Average sensitivity was 96.5% (95% CI 95.5% to 97.3%; typical range 83% to 100%), and average specificity was 89.5% (95% CI 86.6% to 91.9%; typical range 58% to 99%).
Results were available from the same 38 studies, including the subset of 3953 participants with a frozen section result of either borderline or invasive cancer, based on final diagnosis of malignancy. Studies with small numbers of disease‐negative cases (borderline cases) had more variation in estimates of specificity. Average sensitivity was 94.0% (95% CI 92.0% to 95.5%; range 73% to 100%), and average specificity was 95.8% (95% CI 92.4% to 97.8%; typical range 81% to 100%). Our additional analyses showed that, if the frozen section showed a benign or invasive cancer, the final diagnosis would remain the same in, on average, 94% and 99% of cases, respectively. In cases where the frozen section diagnosis was a borderline tumour, on average 21% of the final diagnoses would turn out to be invasive cancer. In three studies, the same pathologist interpreted the index and reference standard tests, potentially causing bias. No studies reported blinding pathologists to index test results when reporting paraffin sections. In heterogeneity analyses, there were no statistically significant differences between studies with pathologists of different levels of expertise. In a hypothetical population of 1000 patients (290 with cancer and 80 with a borderline tumour), if a frozen section positive test result for invasive cancer alone was used to diagnose cancer, on average 261 women would have a correct diagnosis of a cancer, and 706 women would be correctly diagnosed without a cancer. However, 4 women would be incorrectly diagnosed with a cancer (false positive), and 29 with a cancer would be missed (false negative). If a frozen section result of either an invasive cancer or a borderline tumour was used as a positive test to diagnose cancer, on average 280 women would be correctly diagnosed with a cancer and 635 would be correctly diagnosed without. However, 75 women would be incorrectly diagnosed with a cancer and 10 women with a cancer would be missed. 
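The hypothetical 1000-woman figures above follow directly from the pooled sensitivity and specificity estimates at each threshold. A minimal sketch of that arithmetic, using the review's rounded summary estimates:

```python
# Reproduces the review's hypothetical 1000-patient cohort figures from the
# pooled sensitivity and specificity, rounding to whole women.
def cohort_counts(n, cases, sensitivity, specificity):
    non_cases = n - cases
    tp = round(sensitivity * cases)      # correctly diagnosed cancers
    fn = cases - tp                      # cancers missed (false negatives)
    tn = round(specificity * non_cases)  # correctly diagnosed without cancer
    fp = non_cases - tn                  # wrongly called cancer (false positives)
    return tp, fn, tn, fp

# Threshold 1: only an invasive-cancer frozen section result is test positive.
# Sensitivity 90.0%, specificity 99.5%; 290 of 1000 women have cancer.
print(cohort_counts(1000, 290, 0.900, 0.995))  # (261, 29, 706, 4)

# Threshold 2: invasive cancer OR borderline tumour is test positive.
# Sensitivity 96.5%, specificity 89.5%.
print(cohort_counts(1000, 290, 0.965, 0.895))  # (280, 10, 635, 75)
```

Both outputs match the counts quoted in the review (261 true positives, 29 missed cancers, 706 true negatives and 4 false positives at the first threshold; 280, 10, 635 and 75 at the second).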
The largest discordance is within the reporting of frozen section borderline tumours. Investigation into factors leading to discordance within centres and standardisation of criteria for reporting borderline tumours may help improve accuracy. Some centres may choose to perform surgical staging in women with frozen section diagnosis of a borderline ovarian tumour to reduce the number of false positives. In their interpretation of this review, readers should evaluate results from studies most typical of their population of patients. |
t19 | The aim of this Cochrane Review was to find out if anti‐vascular endothelial growth factor (called anti‐VEGF) treatment of new blood vessels in people with severe myopia (also known as nearsightedness or shortsightedness) prevents vision loss. Cochrane researchers collected and analysed all relevant studies to answer this question and found six studies. People with severe myopia and growth of new blood vessels at the back of the eye may benefit from treatment with anti‐VEGF. It may prevent vision loss. Side effects (harms) occur rarely. Myopia occurs when the eyeball becomes too long. If the myopia is severe, sometimes the retina (light‐sensitive tissue at the back of the eye) becomes too thin and new blood vessels grow. These new blood vessels can leak and cause vision loss. Anti‐vascular endothelial growth factor (anti‐VEGF) is a drug that may slow down the growth of these new vessels. Doctors can inject anti‐VEGF into the eye of people who have severe myopia and signs of new blood vessels growing at the back of the eye. This may prevent vision loss. The Cochrane researchers found six relevant studies. These studies took place in multiple clinical centres across three continents (Europe, Asia and North America). Three studies compared anti‐VEGF treatment with photodynamic therapy (PDT; a treatment with a light‐sensitive medicine and a light source that destroys abnormal cells); one study compared anti‐VEGF with laser treatment; one study compared anti‐VEGF with no treatment; and two studies compared different types of anti‐VEGF to each other. In some of the studies, the comparison group received anti‐VEGF after a short period, which may mean that the results underestimate the beneficial effect of anti‐VEGF. People with severe myopia who have anti‐VEGF treatment probably achieve better vision than people receiving PDT, laser or no treatment (moderate‐ and low‐certainty evidence).
| Choroidal neovascularisation (CNV) is a common complication of pathological myopia. Once developed, most eyes with myopic CNV (mCNV) experience a progression to macular atrophy, which leads to irreversible vision loss. Anti‐vascular endothelial growth factor (anti‐VEGF) therapy is used to treat diseases characterised by neovascularisation and is increasingly used to treat mCNV. Objectives To assess the effects of anti‐vascular endothelial growth factor (anti‐VEGF) therapy for choroidal neovascularisation (CNV), compared with other treatments, sham treatment or no treatment, in people with pathological myopia. Search methods We searched a number of electronic databases including CENTRAL and Ovid MEDLINE, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic searches for trials. Electronic databases were last searched on 16 June 2016. Selection criteria We included randomised controlled trials (RCTs) and quasi‐RCTs comparing anti‐VEGF therapy with another treatment (e.g. photodynamic therapy (PDT) with verteporfin, laser photocoagulation, macular surgery, another anti‐VEGF), sham treatment or no treatment in participants with mCNV. Data collection and analysis We used standard methodological procedures expected by Cochrane. Two authors independently screened records, extracted data, and assessed risk of bias. We contacted trial authors for additional data. We analysed outcomes as risk ratios (RRs) or mean differences (MDs). We graded the certainty of the evidence using GRADE. The present review included six studies which provided data on the comparison between anti‐VEGF with PDT, laser, sham treatment and another anti‐VEGF treatment, with 594 participants with mCNV.
Three trials compared bevacizumab or ranibizumab with PDT, one trial compared bevacizumab with laser, one trial compared aflibercept with sham treatment, and two trials compared bevacizumab with ranibizumab. Pharmaceutical companies conducted two trials. The trials were conducted at multiple clinical centres across three continents (Europe, Asia and North America). In all these six trials, one eye for each participant was included in the study. When compared with PDT, people treated with anti‐VEGF agents (ranibizumab (one RCT), bevacizumab (two RCTs)), were more likely to regain vision. At one year of follow‐up, the mean visual acuity (VA) in participants treated with anti‐VEGFs was ‐0.14 logMAR better, equivalent of seven Early Treatment Diabetic Retinopathy Study (ETDRS) letters, compared with people treated with PDT (95% confidence interval (CI) ‐0.20 to ‐0.08, 3 RCTs, 263 people, low‐certainty evidence). The RR for proportion of participants gaining 3+ lines of VA was 1.86 (95% CI 1.27 to 2.73, 2 RCTs, 226 people, moderate‐certainty evidence). At two years, the mean VA in people treated with anti‐VEGFs was ‐0.26 logMAR better, equivalent of 13 ETDRS letters, compared with people treated with PDT (95% CI ‐0.38 to ‐0.14, 2 RCTs, 92 people, low‐certainty evidence). The RR for proportion of people gaining 3+ lines of VA at two years was 3.43 (95% CI 1.37 to 8.56, 2 RCTs, 92 people, low‐certainty evidence). People treated with anti‐VEGFs showed no obvious reduction (improvement) in central retinal thickness at one year compared with people treated with PDT (MD ‐17.84 μm, 95% CI ‐41.98 to 6.30, 2 RCTs, 226 people, moderate‐certainty evidence). There was low‐certainty evidence that people treated with anti‐VEGF were more likely to have CNV angiographic closure at 1 year (RR 1.24, 95% CI 0.99 to 1.54, 2 RCTs, 208 people). 
One study allowed ranibizumab treatment as of month 3 in participants randomised to PDT, which may have led to an underestimate of the benefits of anti‐VEGF treatment. When compared with laser photocoagulation, there was more improvement in VA among bevacizumab‐treated people than among laser‐treated people after one year (MD ‐0.22 logMAR, equivalent of 11 ETDRS letters, 95% CI ‐0.43 to ‐0.01, 1 RCT, 36 people, low‐certainty evidence) and after two years (MD ‐0.29 logMAR, equivalent of 14 ETDRS letters, 95% CI ‐0.50 to ‐0.08, 1 RCT, 36 people, low‐certainty evidence). When compared with sham treatment, people treated with aflibercept had better vision at one year (MD ‐0.19 logMAR, equivalent of 9 ETDRS letters, 95% CI ‐0.27 to ‐0.12, 1 RCT, 121 people, moderate‐certainty evidence). The fact that this study allowed for aflibercept treatment at 6 months in the control group might cause an underestimation of the benefit with anti‐VEGF. People treated with ranibizumab had similar improvement in VA recovery compared with people treated with bevacizumab after one year (MD ‐0.02 logMAR, equivalent of 1 ETDRS letter, 95% CI ‐0.11 to 0.06, 2 RCTs, 80 people, moderate‐certainty evidence). Of the included six studies, two studies reported no adverse events in either group and two industry‐sponsored studies reported both systemic and ocular adverse events. In the control group, there were no systemic or ocular adverse events reported in 149 participants. Fifteen people reported systemic serious adverse events among 359 people treated with anti‐VEGF agents (15/359, 4.2%). Five people reported ocular adverse events among 359 people treated with anti‐VEGF agents (5/359, 1.4%). 
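The letter equivalences quoted alongside the logMAR differences above reflect the standard ETDRS relationship of 0.02 logMAR per letter (five letters per 0.1 logMAR line), with fractional letters rounded down as the review does. A quick sanity check of the quoted pairs:

```python
# Converts a logMAR visual-acuity difference into ETDRS letters using the
# standard equivalence of 0.02 logMAR per letter (5 letters per 0.1 logMAR
# line). The review reports whole letters, rounding fractions down.
def logmar_to_letters(logmar_diff):
    hundredths = round(abs(logmar_diff) * 100)  # logMAR in hundredths
    return hundredths // 2                      # 2 hundredths per letter

# Mean differences quoted in this review and their stated letter equivalents:
pairs = [(-0.14, 7), (-0.26, 13), (-0.22, 11), (-0.29, 14), (-0.19, 9), (-0.02, 1)]
for md, expected in pairs:
    assert logmar_to_letters(md) == expected
print("all logMAR/letter equivalences match")
```

Note that ‐0.29 and ‐0.19 logMAR correspond to 14.5 and 9.5 letters exactly, which the review reports as 14 and 9.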
The number of adverse events was low, and the estimate of RR was uncertain regarding systemic serious adverse events (4 RCTs, 15 events in 508 people, RR 4.50, 95% CI 0.60 to 33.99, very low‐certainty evidence) and serious ocular adverse events (4 RCTs, 5 events in 508 people, RR 1.82, 95% CI 0.23 to 14.71, very low‐certainty evidence). There were no reports of mortality or cases of endophthalmitis or retinal detachment. There was sparse reporting of data for vision‐related quality of life (in favour of anti‐VEGF) in only one trial at one year of follow‐up. The studies did not report data for other outcomes, such as percentage of participants with newly developed chorioretinal atrophy. There is low to moderate‐certainty evidence from RCTs for the efficacy of anti‐VEGF agents to treat mCNV at one year and two years. Moderate‐certainty evidence suggests ranibizumab and bevacizumab are equivalent in terms of efficacy. Adverse effects occurred rarely and the trials included here were underpowered to assess these. Future research should be focused on the efficacy and safety of different drugs and treatment regimens, the efficacy on different location of mCNV, as well as the effects on practice in the real world. |
t20 | We wanted to see whether talking therapies reduce drinking in adult users of illicit drugs (mainly opioids and stimulants). We also wanted to find out whether one type of therapy is more effective than another. Drinking alcohol above the low‐risk drinking limits can lead to serious alcohol use problems or disorders. Drinking above those limits is common in people who also have problems with other drugs. It worsens their physical and mental health. Talking therapies aim to identify an alcohol problem and motivate an individual to do something about it. Talking therapies can be given by trained doctors, nurses, counsellors, psychologists, etc. Talking therapies may help reduce alcohol use, but we wanted to find out if they can help people who also have problems with other drugs. We found seven studies that examined five talking therapies among 825 people with drug problems. Cognitive‐behavioural coping skills training (CBCST) is a talking therapy that focuses on changing the way people think and act. The twelve‐step programme is based on theories from Alcoholics Anonymous and aims to motivate the person to develop a desire to stop using drugs or alcohol. Motivational interviewing (MI) helps people to explore and resolve doubts about changing their behaviour. It can be delivered in group, individual and intensive formats. Brief motivational interviewing (BMI) is a shorter MI that takes 45 minutes to three hours. Brief interventions are based on MI but they take only five to 30 minutes and are often delivered by a non‐specialist. Six of the studies were funded by the National Institutes of Health or by the Health Research Board; one study did not report its funding source. We found that the talking therapies led to no differences, or only small differences, for the outcomes assessed. These included abstinence, reduced drinking, and substance use. One study found that there may be no difference between CBCST and the twelve‐step programme.
Three studies found that there may be no difference between brief intervention and usual treatment. Three studies found that there may be no difference between MI and usual treatment or education only. One study found that BMI is probably better at reducing alcohol use than usual treatment (needle exchange), but found no differences in other outcomes. One study found that intensive MI may be somewhat better than standard MI at reducing severity of alcohol use disorder among women, but not among men, and found no differences in other outcomes. It remains uncertain whether talking therapies reduce alcohol and drug use in people who have problems with illicit drugs. | Problem alcohol use is common among people who use illicit drugs (PWID) and is associated with adverse health outcomes. It is also an important factor contributing to a poor prognosis among drug users with hepatitis C virus (HCV) as it impacts on progression to hepatic cirrhosis or opioid overdose in PWID. Objectives To assess the effectiveness of psychosocial interventions to reduce alcohol consumption in PWID (users of opioids and stimulants). Search methods We searched the Cochrane Drugs and Alcohol Group trials register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, and PsycINFO, from inception up to August 2017, and the reference lists of eligible articles. We also searched: 1) conference proceedings (online archives only) of the Society for the Study of Addiction, International Harm Reduction Association, International Conference on Alcohol Harm Reduction and American Association for the Treatment of Opioid Dependence; and 2) online registers of clinical trials: Current Controlled Trials, ClinicalTrials.gov, Center Watch and the World Health Organization International Clinical Trials Registry Platform.
Selection criteria We included randomised controlled trials comparing psychosocial interventions with other psychosocial treatment, or treatment as usual, in adult PWIDs (aged at least 18 years) with concurrent problem alcohol use. Data collection and analysis We used the standard methodological procedures expected by Cochrane. We included seven trials (825 participants). We judged the majority of the trials to have a high or unclear risk of bias. The psychosocial interventions considered in the studies were: cognitive‐behavioural coping skills training (one study), twelve‐step programme (one study), brief intervention (three studies), motivational interviewing (two studies), and brief motivational interviewing (one study). Two studies were considered in two comparisons. There were no data for the secondary outcome, alcohol‐related harm. The results were as follows. Comparison 1: cognitive‐behavioural coping skills training versus twelve‐step programme (one study, 41 participants) There was no significant difference between groups for either of the primary outcomes (alcohol abstinence assessed with Substance Abuse Calendar and breathalyser at one year: risk ratio (RR) 2.38 (95% confidence interval [CI] 0.10 to 55.06); and retention in treatment, measured at end of treatment: RR 0.89 (95% CI 0.62 to 1.29)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was very low. Comparison 2: brief intervention versus treatment as usual (three studies, 197 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured as scores on the Alcohol Use Disorders Identification Test (AUDIT) or Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) at three months: standardised mean difference (SMD) 0.07 (95% CI ‐0.24 to 0.37); and retention in treatment, measured at three months: RR 0.94 (95% CI 0.78 to 1.13)), or for any of the secondary outcomes reported.
The quality of evidence for the primary outcomes was low. Comparison 3: motivational interviewing versus treatment as usual or educational intervention only (three studies, 462 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured as scores on the AUDIT or ASSIST at three months: SMD 0.04 (95% CI ‐0.29 to 0.37); and retention in treatment, measured at three months: RR 0.93 (95% CI 0.60 to 1.43)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was low. Comparison 4: brief motivational intervention (BMI) versus assessment only (one study, 187 participants) More people reduced alcohol use (by seven or more days in the past month, measured at six months) in the BMI group than in the control group (RR 1.67; 95% CI 1.08 to 2.60). There was no difference between groups for the other primary outcome, retention in treatment, measured at end of treatment: RR 0.98 (95% CI 0.94 to 1.02), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was moderate. Comparison 5: motivational interviewing (intensive) versus motivational interviewing (one study, 163 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured using the Addiction Severity Index‐alcohol score (ASI) at two months: MD 0.03 (95% CI 0.02 to 0.08); and retention in treatment, measured at end of treatment: RR 17.63 (95% CI 1.03 to 300.48)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was low. We found low to very low‐quality evidence to suggest that there is no difference in effectiveness between different types of psychosocial interventions to reduce alcohol consumption among people who use illicit drugs, and that brief interventions are not superior to assessment‐only or to treatment as usual.
No firm conclusions can be made because of the paucity of the data and the low quality of the retrieved studies. |
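The risk ratios and confidence intervals quoted throughout the abstract above come from standard two-arm event counts. As a sketch of that computation under the usual log-normal approximation (the counts below are hypothetical, chosen only for illustration; they are not data from the included trials):

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio and 95% CI from two-arm event counts.

    Uses the standard log-normal approximation:
      RR = (a/n1) / (c/n2)
      SE(log RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)
    a, c: event counts; n1, n2: group sizes.
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical illustration: 10/50 events versus 20/50 events.
rr, lo, hi = risk_ratio_ci(10, 50, 20, 50)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The very wide intervals reported for small trials (e.g. RR 2.38, 95% CI 0.10 to 55.06) reflect the large standard error that this formula produces when event counts are tiny.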
t21 | Cognitive impairment is when people have problems remembering, learning, concentrating and making decisions. People with mild cognitive impairment (MCI) generally have more memory problems than other people of their age, but these problems are not severe enough to be classified as dementia. Studies have shown that people with MCI and loss of memory are more likely to develop Alzheimer's disease dementia (approximately 10% to 15% of cases per year) than people without MCI (1% to 2% per year). Currently, the only reliable way of diagnosing Alzheimer's disease dementia is to follow people with MCI and assess cognitive changes over the years. Magnetic resonance imaging (MRI) may detect changes in the brain structures that indicate the beginning of Alzheimer's disease. Early diagnosis of MCI due to Alzheimer's disease is important because people with MCI could benefit from early treatment to prevent or delay cognitive decline. We aimed to assess the diagnostic accuracy of MRI for the early diagnosis of dementia due to Alzheimer's disease in people with MCI. Thirty‐three studies were eligible, in which 3935 participants with MCI were included and followed up for two or three years to see if they developed Alzheimer's disease dementia. About a third of them converted to Alzheimer's disease dementia; the others did not, or developed other types of dementia. The volume of several brain regions was measured with MRI. Most studies (22 studies, 2209 participants) measured the volume of the hippocampus, a region of the brain that is associated primarily with memory. We found that MRI is not accurate enough to identify people with MCI who will develop dementia due to Alzheimer's disease. The correct prediction of Alzheimer's disease would be missed in 81 out of 300 people with MCI (false negatives), and a wrong prediction of Alzheimer's disease would be made in 203 out of 700 people with MCI (false positives).
As a result, people with a false‐negative diagnosis would be falsely reassured and would not prepare themselves to cope with Alzheimer's disease, while those with a false‐positive diagnosis would suffer from the wrongly anticipated diagnosis. The included studies diagnosed Alzheimer's disease dementia by assessing all participants with standard clinical criteria after two or three years' follow‐up. We had some concerns about how the studies were conducted, since the participants were mainly selected from clinical registries and referral centres, and we also had concerns about how studies interpreted MRI. Moreover, the studies were conducted differently from each other, and they used different methods to select people with MCI and perform MRI. The results do not apply to people with MCI in the community, but only to people with MCI who attend memory clinics or referral centres. MRI, as a single test, is not accurate for the early diagnosis of dementia due to Alzheimer's disease in people with MCI since one in three or four participants received a wrong diagnosis of Alzheimer's disease. Future research should not focus on a single test (such as MRI), but rather on combinations of tests to improve an early diagnosis of Alzheimer's disease dementia. | Mild cognitive impairment (MCI) due to Alzheimer's disease is the symptomatic predementia phase of Alzheimer's disease dementia, characterised by cognitive and functional impairment not severe enough to fulfil the criteria for dementia. In clinical samples, people with amnestic MCI are at high risk of developing Alzheimer's disease dementia, with annual rates of progression from MCI to Alzheimer's disease estimated at approximately 10% to 15% compared with the base incidence rates of Alzheimer's disease dementia of 1% to 2% per year. 
Objectives To assess the diagnostic accuracy of structural magnetic resonance imaging (MRI) for the early diagnosis of dementia due to Alzheimer's disease in people with MCI versus the clinical follow‐up diagnosis of Alzheimer's disease dementia as a reference standard (delayed verification). To investigate sources of heterogeneity in accuracy, such as the use of qualitative visual assessment or quantitative volumetric measurements, including manual or automatic (MRI) techniques, or the length of follow‐up, and age of participants. MRI was evaluated as an add‐on test in addition to clinical diagnosis of MCI to improve early diagnosis of dementia due to Alzheimer's disease in people with MCI. Search methods On 29 January 2019 we searched Cochrane Dementia and Cognitive Improvement's Specialised Register and the databases, MEDLINE, Embase, BIOSIS Previews, Science Citation Index, PsycINFO, and LILACS. We also searched the reference lists of all eligible studies identified by the electronic searches. Selection criteria We considered cohort studies of any size that included prospectively recruited people of any age with a diagnosis of MCI. We included studies that compared the diagnostic test accuracy of baseline structural MRI versus the clinical follow‐up diagnosis of Alzheimer's disease dementia (delayed verification). We did not exclude studies on the basis of length of follow‐up. We included studies that used either qualitative visual assessment or quantitative volumetric measurements of MRI to detect atrophy in the whole brain or in specific brain regions, such as the hippocampus, medial temporal lobe, lateral ventricles, entorhinal cortex, medial temporal gyrus, lateral temporal lobe, amygdala, and cortical grey matter. Data collection and analysis Four teams of two review authors each independently reviewed titles and abstracts of articles identified by the search strategy. 
Two teams of two review authors each independently assessed the selected full‐text articles for eligibility, extracted data and solved disagreements by consensus. Two review authors independently assessed the quality of studies using the QUADAS‐2 tool. We used the hierarchical summary receiver operating characteristic (HSROC) model to fit summary ROC curves and to obtain overall measures of relative accuracy in subgroup analyses. We also used these models to obtain pooled estimates of sensitivity and specificity when sufficient data sets were available. We included 33 studies, published from 1999 to 2019, with 3935 participants of whom 1341 (34%) progressed to Alzheimer's disease dementia and 2594 (66%) did not. Of the participants who did not progress to Alzheimer's disease dementia, 2561 (99%) remained stable MCI and 33 (1%) progressed to other types of dementia. The median proportion of women was 53% and the mean age of participants ranged from 63 to 87 years (median 73 years). The mean length of clinical follow‐up ranged from 1 to 7.6 years (median 2 years). Most studies were of poor methodological quality due to risk of bias for participant selection or the index test, or both. Most of the included studies reported data on the volume of the total hippocampus (pooled mean sensitivity 0.73 (95% confidence interval (CI) 0.64 to 0.80); pooled mean specificity 0.71 (95% CI 0.65 to 0.77); 22 studies, 2209 participants). This evidence was of low certainty due to risk of bias and inconsistency. Seven studies reported data on the atrophy of the medial temporal lobe (mean sensitivity 0.64 (95% CI 0.53 to 0.73); mean specificity 0.65 (95% CI 0.51 to 0.76); 1077 participants) and five studies on the volume of the lateral ventricles (mean sensitivity 0.57 (95% CI 0.49 to 0.65); mean specificity 0.64 (95% CI 0.59 to 0.70); 1077 participants). This evidence was of moderate certainty due to risk of bias. 
Four studies with 529 participants analysed the volume of the total entorhinal cortex and four studies with 424 participants analysed the volume of the whole brain. We did not estimate pooled sensitivity and specificity for the volume of these two regions because available data were sparse and heterogeneous. We could not statistically evaluate the volumes of the lateral temporal lobe, amygdala, medial temporal gyrus, or cortical grey matter assessed in small individual studies. We found no evidence of a difference between studies in the accuracy of the total hippocampal volume with regards to duration of follow‐up or age of participants, but the manual MRI technique was superior to automatic techniques in mixed (mostly indirect) comparisons. We did not assess the relative accuracy of the volumes of different brain regions measured by MRI because only indirect comparisons were available, studies were heterogeneous, and the overall accuracy of all regions was moderate. The volume of hippocampus or medial temporal lobe, the most studied brain regions, showed low sensitivity and specificity and did not qualify structural MRI as a stand‐alone add‐on test for an early diagnosis of dementia due to Alzheimer's disease in people with MCI. This is consistent with international guidelines, which recommend imaging to exclude non‐degenerative or surgical causes of cognitive impairment and not to diagnose dementia due to Alzheimer's disease. In view of the low quality of most of the included studies, the findings of this review should be interpreted with caution. Future research should not focus on a single biomarker, but rather on combinations of biomarkers to improve an early diagnosis of Alzheimer's disease dementia. |
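The plain-language counts in the summary above (81 missed diagnoses out of 300 people who progress, 203 wrong diagnoses out of 700 who do not) follow arithmetically from the pooled hippocampal-volume sensitivity (0.73) and specificity (0.71) applied to an illustrative cohort of 1000 people with MCI, about 300 of whom progress. A minimal sketch of that arithmetic (the helper name is mine):

```python
def confusion_counts(sensitivity, specificity, n_positive, n_negative):
    """Expected confusion-matrix counts for a diagnostic test.

    true positives  = sensitivity * n_positive
    false negatives = (1 - sensitivity) * n_positive
    true negatives  = specificity * n_negative
    false positives = (1 - specificity) * n_negative
    """
    tp = round(sensitivity * n_positive)
    fn = n_positive - tp
    tn = round(specificity * n_negative)
    fp = n_negative - tn
    return {"TP": tp, "FN": fn, "TN": tn, "FP": fp}

# Pooled hippocampal-volume accuracy (sensitivity 0.73, specificity 0.71)
# applied to 1000 people with MCI, of whom 300 progress to Alzheimer's
# disease dementia: reproduces the 81 false negatives and 203 false
# positives quoted in the plain-language summary.
print(confusion_counts(0.73, 0.71, 300, 700))
```

The same calculation with the medial temporal lobe estimates (sensitivity 0.64, specificity 0.65) yields even more misclassifications, which is why the review concludes MRI is unsuitable as a stand-alone test.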
t22 | Non‐muscle invasive bladder cancer (NMIBC) is a cancer (tumour) of the inner lining of the bladder that can be removed from the inside using small instruments and a light source, so‐called endoscopic surgery. These tumours can come back over time and spread into the deeper layers of the bladder wall. We know that different types of medicines that we can put into the bladder help prevent this. Investigators have looked at the use of an electrical current to make medicines work better. In this review, we wanted to discover whether using an electrical current was better or worse than not using an electrical current. We found three studies, conducted between 1994 and 2003 with a total of 672 participants, that compared five different ways of giving this treatment. Mitomycin (MMC) was the only medicine used together with electrical current. We are very unsure whether the use of an electrical current to give a course of MMC after endoscopic surgery is better or worse compared to giving a course of Bacillus Calmette‐Guérin (BCG; a vaccine usually used against tuberculosis) or MMC without electrical current. MMC given with electrical current together with BCG given over a long period of time may be better than BCG alone in delaying the tumour from coming back and from spreading into the deeper layer of the bladder wall. Giving one dose of MMC with electrical current before endoscopic surgery may be better than one dose of MMC without electrical current after surgery or surgery alone without further treatment. | Electromotive drug administration (EMDA) is the use of electrical current to improve the delivery of intravesical agents to reduce the risk of recurrence in people with non‐muscle invasive bladder cancer (NMIBC). It is unclear how effective this is in comparison to other forms of intravesical therapy. Objectives To assess the effects of intravesical EMDA for the treatment of NMIBC.
Search methods We performed a comprehensive search using multiple databases (CENTRAL, MEDLINE, EMBASE), two clinical trial registries and a grey literature repository. We searched reference lists of relevant publications and abstract proceedings. We applied no language restrictions. The last search was February 2017. Selection criteria We searched for randomised studies comparing EMDA of any intravesical agent used to reduce bladder cancer recurrence in conjunction with transurethral resection of bladder tumour (TURBT). Data collection and analysis Two review authors independently screened the literature, extracted data, assessed risk of bias and rated quality of evidence (QoE) according to GRADE on a per outcome basis. We included three trials with 672 participants that described five distinct comparisons. The same principal investigator conducted all three trials. All studies used mitomycin C (MMC) as the chemotherapeutic agent for EMDA. 1. Postoperative MMC‐EMDA induction versus postoperative Bacillus Calmette‐Guérin (BCG) induction: based on one study with 72 participants with carcinoma in situ (CIS) and concurrent pT1 urothelial carcinoma, we are uncertain (very low QoE) about the effect of MMC‐EMDA on time to recurrence (risk ratio (RR) 1.06, 95% confidence interval (CI) 0.64 to 1.76; corresponding to 30 more per 1000 participants, 95% CI 180 fewer to 380 more). There was no disease progression in either treatment arm at three months' follow‐up. We are uncertain (very low QoE) about serious adverse events (RR 0.75, 95% CI 0.18 to 3.11). 2. Postoperative MMC‐EMDA induction versus MMC‐passive diffusion (PD) induction: based on one study with 72 participants with CIS and concurrent pT1 urothelial carcinoma, postoperative MMC‐EMDA may (low QoE) reduce disease recurrence (RR 0.65, 95% CI 0.44 to 0.98; corresponding to 147 fewer per 1000 participants, 95% CI 235 fewer to 8 fewer). There was no disease progression in either treatment arm at three months' follow‐up. 
We are uncertain (very low QoE) about the effect of MMC‐EMDA on serious adverse events (RR 1.50, 95% CI 0.27 to 8.45). 3. Postoperative MMC‐EMDA with sequential BCG induction and maintenance versus postoperative BCG induction and maintenance: based on one study with 212 participants with pT1 urothelial carcinoma of the bladder with or without CIS, postoperative MMC‐EMDA with sequential BCG may result (low QoE) in a longer time to recurrence (hazard ratio (HR) 0.51, 95% CI 0.34 to 0.77; corresponding to 181 fewer per 1000 participants, 95% CI 256 fewer to 79 fewer) and time to progression (HR 0.36, 95% CI 0.17 to 0.75; corresponding to 63 fewer per 1000 participants, 95% CI 82 fewer to 24 fewer). We are uncertain (very low QoE) about the effect of MMC‐EMDA on serious adverse events (RR 1.02, 95% CI 0.21 to 4.94). 4. Single‐dose, preoperative MMC‐EMDA versus single‐dose, postoperative MMC‐PD: based on one study with 236 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC‐EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.47, 95% CI 0.32 to 0.69; corresponding to 247 fewer per 1000 participants, 95% CI 341 fewer to 130 fewer) for a median follow‐up of 86 months. We are uncertain (very low QoE) about the effect of MMC‐EMDA on time to progression (HR 0.81, 95% CI 0.00 to 259.93; corresponding to 34 fewer per 1000 participants, 95% CI 193 fewer to 807 more) and serious adverse events (RR 0.79, 95% CI 0.30 to 2.05). 5. Single‐dose, preoperative MMC‐EMDA versus TURBT alone: based on one study with 233 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC‐EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.40, 95% CI 0.28 to 0.57; corresponding to 304 fewer per 1000 participants, 95% CI 390 fewer to 198 fewer) for a median follow‐up of 86 months. 
We are uncertain (very low QoE) about the effect of MMC‐EMDA on time to progression (HR 0.74, 95% CI 0.00 to 247.93; corresponding to 49 fewer per 1000 participants, 95% CI 207 fewer to 793 more) or serious adverse events (HR 1.74, 95% CI 0.52 to 5.77). While the use of EMDA to administer intravesical MMC may result in a delay in time to recurrence in select patient populations, we are uncertain about its impact on serious adverse events in all settings. Common reasons for downgrading the QoE were study limitations and imprecision. A potential role for EMDA‐based administration of MMC may lie in settings where more established agents (such as BCG) are not available. In the setting of low or very low QoE for most comparisons, our confidence in the effect estimates is limited and the true effect sizes may be substantially different from those reported here. |
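The "X fewer per 1000 participants" figures in the abstract above pair each relative effect with an assumed control-group risk, as GRADE summary-of-findings tables do: absolute difference = (RR − 1) × control risk. A small sketch of that conversion; note that the 500-per-1000 control risk used below is back-calculated by me from the reported figures for comparison 1, not stated in the text:

```python
def absolute_effect_per_1000(rr, control_risk_per_1000):
    """Absolute risk difference per 1000 people, given a risk ratio and
    an assumed control-group risk, as in GRADE evidence profiles.
    Negative values mean fewer events with the intervention."""
    return round((rr - 1) * control_risk_per_1000)

# Comparison 1 (MMC-EMDA vs BCG): RR 1.06 (95% CI 0.64 to 1.76) with an
# assumed control risk of 500 per 1000 reproduces the reported
# "30 more per 1000 (180 fewer to 380 more)".
print(absolute_effect_per_1000(1.06, 500))  # 30 more
print(absolute_effect_per_1000(0.64, 500))  # -180, i.e. 180 fewer
print(absolute_effect_per_1000(1.76, 500))  # 380 more
```

Applying the same formula to comparison 2 (RR 0.65 with an implied control risk of 420 per 1000) reproduces the "147 fewer per 1000" reported there.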
t23 | Video communication software like Skype and FaceTime allows counsellors to see and hear people over the Internet to help them quit smoking. Video counselling could help large numbers of people to quit smoking because more than four billion people use the Internet, and video communication software is free. Our main focus was to learn if video counselling delivered individually or to a group could help people quit smoking and to learn how it compared with other types of support to help people quit. We also studied the effect of real‐time video counselling on the number of times people tried to quit, the number of sessions they completed, their satisfaction with the counselling, their relationship or bond with the counsellor and the costs of using video communication to help people quit smoking. We found two studies, with a total of 615 participants. Both studies took place in the USA, and included people from rural areas or women with HIV. Both studies gave one‐to‐one video sessions to individuals. There were eight video sessions in one study and four video sessions in the other study. Both studies compared video counselling to telephone counselling and looked at whether people quit smoking, the number of sessions they completed and their satisfaction with the programme. One study examined the number of times people tried to quit and one study looked at the relationship or bond with the counsellor. It is unclear how video counselling compares with telephone counselling in terms of helping people to quit smoking. People who used video counselling were more likely than those who used telephone counselling to recommend the programme to a friend or someone in their family, but we found no differences in how satisfied they were, the number of video or telephone sessions completed, whether all sessions were completed and in the relationship or bond with the counsellor.
| Real‐time video communication software such as Skype and FaceTime transmits live video and audio over the Internet, allowing counsellors to provide support to help people quit smoking. There are more than four billion Internet users worldwide, and Internet users can download free video communication software, rendering a video counselling approach both feasible and scalable for helping people to quit smoking. Objectives To assess the effectiveness of real‐time video counselling delivered individually or to a group in increasing smoking cessation, quit attempts, intervention adherence, satisfaction and therapeutic alliance, and to provide an economic evaluation regarding real‐time video counselling. Search methods We searched the Cochrane Tobacco Addiction Group Specialised Register, CENTRAL, MEDLINE, PubMed, PsycINFO and Embase to identify eligible studies on 13 August 2019. We searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov to identify ongoing trials registered by 13 August 2019. We checked the reference lists of included articles and contacted smoking cessation researchers for any additional studies. Selection criteria We included randomised controlled trials (RCTs), randomised trials, cluster RCTs or cluster randomised trials of real‐time video counselling for current tobacco smokers from any setting that measured smoking cessation at least six months following baseline. The real‐time video counselling intervention could be compared with a no intervention control group or another smoking cessation intervention, or both. Data collection and analysis Two authors independently extracted data from included trials, assessed the risk of bias and rated the certainty of the evidence using the GRADE approach. We performed a random‐effects meta‐analysis for the primary outcome of smoking cessation, using the most stringent measure of smoking cessation measured at the longest follow‐up. 
Analysis was based on the intention‐to‐treat principle. We considered participants with missing data at follow‐up for the primary outcome of smoking cessation to be smokers. We included two randomised trials with 615 participants. Both studies delivered real‐time video counselling for smoking cessation individually, compared with telephone counselling. We judged one study at unclear risk of bias and one study at high risk of bias. There was no statistically significant treatment effect for smoking cessation (using the strictest definition and longest follow‐up) across the two included studies when real‐time video counselling was compared to telephone counselling (risk ratio (RR) 2.15, 95% confidence interval (CI) 0.38 to 12.04; 2 studies, 608 participants; I 2 = 66%). We judged the overall certainty of the evidence for smoking cessation as very low due to methodological limitations, imprecision in the effect estimate reflected by the wide 95% CIs and inconsistency of cessation rates. There were no significant differences between real‐time video counselling and telephone counselling reported for number of quit attempts among people who continued to smoke (mean difference (MD) 0.50, 95% CI –0.60 to 1.60; 1 study, 499 participants), mean number of counselling sessions completed (MD –0.20, 95% CI –0.45 to 0.05; 1 study, 566 participants), completion of all sessions (RR 1.13, 95% CI 0.71 to 1.79; 1 study, 43 participants) or therapeutic alliance (MD 1.13, 95% CI –0.24 to 2.50; 1 study, 398 participants). Participants in the video counselling arm were more likely than their telephone counselling counterparts to recommend the programme to a friend or family member (RR 1.06, 95% CI 1.01 to 1.11; 1 study, 398 participants); however, there were no between‐group differences on satisfaction score (MD 0.70, 95% CI –1.16 to 2.56; 1 study, 29 participants). There is very little evidence about the effectiveness of real‐time video counselling for smoking cessation. 
The existing research does not suggest a difference between video counselling and telephone counselling for assisting people to quit smoking. However, given the very low GRADE rating due to methodological limitations in the design, imprecision of the effect estimate and inconsistency of cessation rates, the smoking cessation results should be interpreted cautiously. High‐quality randomised trials comparing real‐time video counselling to telephone counselling are needed to increase the confidence of the effect estimate. Furthermore, there is currently no evidence comparing real‐time video counselling to a control group. Such research is needed to determine whether video counselling increases smoking cessation. |
t24 | Lumbar puncture involves getting a sample of spinal fluid through a needle inserted into the lower back. Post‐dural puncture headache (PDPH) is the most common side effect of a lumbar puncture. The symptom of PDPH is a constant headache that gets worse when upright and improves when lying down. Many drugs are used to treat PDPH, so the aim of this review was to assess the effectiveness of these drugs. We included 13 small randomised clinical trials (RCTs), with a total of 479 participants. The trials assessed eight drugs: caffeine, sumatriptan, gabapentin, hydrocortisone, theophylline, adrenocorticotropic hormone, pregabalin and cosyntropin. Caffeine proved to be effective in decreasing the number of people with PDPH and those requiring extra drugs (2 or 3 in 10 with caffeine compared to 9 in 10 with placebo). Gabapentin, theophylline and hydrocortisone also proved to be effective, relieving pain better than placebo or conventional treatment alone. More people had better pain relief with theophylline (9 in 10 with theophylline compared to 4 in 10 with conventional treatment). No important side effects of these drugs were reported. The quality of the studies was difficult to assess due to the lack of information available. | This is an updated version of the original Cochrane review published in Issue 8, 2011, on 'Drug therapy for treating post‐dural puncture headache'. Post‐dural puncture headache (PDPH) is the most common complication of lumbar puncture, an invasive procedure frequently performed in the emergency room. Numerous pharmaceutical drugs have been proposed to treat PDPH but there are still some uncertainties about their clinical effectiveness. Objectives To assess the effectiveness and safety of drugs for treating PDPH in adults and children.
Search methods The searches included the Cochrane Central Register of Controlled Trials (CENTRAL 2014, Issue 6), MEDLINE and MEDLINE in Process (from 1950 to 29 July 2014), EMBASE (from 1980 to 29 July 2014) and CINAHL (from 1982 to July 2014). There were no language restrictions. Selection criteria We considered randomised controlled trials (RCTs) assessing the effectiveness of any pharmacological drug used for treating PDPH. Outcome measures considered for this review were: PDPH persistence of any severity at follow‐up (primary outcome), daily activity limited by headache, conservative supplementary therapeutic option offered, epidural blood patch performed, change in pain severity scores, improvements in pain severity scores, number of days participants stay in hospital, any possible adverse events and missing data. Data collection and analysis Review authors independently selected studies, assessed risk of bias and extracted data. We estimated risk ratios (RR) for dichotomous data and mean differences (MD) for continuous outcomes. We calculated a 95% confidence interval (CI) for each RR and MD. We did not undertake meta‐analysis because the included studies assessed different sorts of drugs or different outcomes. We performed an intention‐to‐treat (ITT) analysis. We included 13 small RCTs (479 participants) in this review (at least 274 participants were women, with 118 parturients after a lumbar puncture for regional anaesthesia). In the original version of this Cochrane review, only seven small RCTs (200 participants) were included. Pharmacological drugs assessed were oral and intravenous caffeine, subcutaneous sumatriptan, oral gabapentin, oral pregabalin, oral theophylline, intravenous hydrocortisone, intravenous cosyntropin and intramuscular adrenocorticotropic hormone (ACTH). Two RCTs reported data for PDPH persistence of any severity at follow‐up (primary outcome). 
Caffeine reduced the number of participants with PDPH at one to two hours when compared to placebo. Treatment with caffeine also decreased the need for a conservative supplementary therapeutic option. Treatment with gabapentin resulted in better visual analogue scale (VAS) scores after one, two, three and four days when compared with placebo and also when compared with ergotamine plus caffeine at two, three and four days. Treatment with hydrocortisone plus conventional treatment showed better VAS scores at six, 24 and 48 hours when compared with conventional treatment alone and also when compared with placebo. Treatment with theophylline showed better VAS scores compared with acetaminophen at two, six and 12 hours and also compared with conservative treatment at eight, 16 and 24 hours. Theophylline also showed a lower mean "sum of pain" when compared with placebo. Sumatriptan and ACTH did not show any relevant effect for this outcome. Theophylline resulted in a higher proportion of participants reporting an improvement in pain scores when compared with conservative treatment. There were no clinically significant drug adverse events. The rest of the outcomes were not reported by the included RCTs or did not show any relevant effect. None of the new included studies have provided additional information to change the conclusions of the last published version of the original Cochrane review. Caffeine has shown effectiveness for treating PDPH, decreasing the proportion of participants with PDPH persistence and those requiring supplementary interventions, when compared with placebo. Gabapentin, hydrocortisone and theophylline have been shown to decrease pain severity scores. Theophylline has also been shown to increase the proportion of participants that report an improvement in pain scores when compared with conventional treatment. There is a lack of conclusive evidence for the other drugs assessed (sumatriptan, adrenocorticotropic hormone, pregabalin and cosyntropin). 
These conclusions should be interpreted with caution, due to the lack of information to allow correct appraisal of risk of bias, the small sample sizes of the studies and also their limited generalisability, as nearly half of the participants were postpartum women in their 30s. |
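The effect measures used throughout the abstract above (risk ratios for dichotomous outcomes, mean differences for continuous outcomes, each with a 95% CI) follow standard formulas. As a minimal Python sketch, using invented 2×2 counts rather than data from any included trial, a risk ratio and its 95% CI are computed on the log scale:

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio (group A vs group B) with a confidence interval
    computed on the log scale, as is conventional for ratio measures."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Approximate standard error of log(RR), via the delta method
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Invented illustration: 3/20 with persistent headache on treatment
# vs 9/20 on placebo (not taken from the review's trials).
rr, lower, upper = risk_ratio_ci(3, 20, 9, 20)
```

The interval is built on the log scale because log(RR) is approximately normally distributed in small samples, then exponentiated back to the ratio scale.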
t25 | We reviewed the evidence about the effect of bracing on pulmonary disorders (lung diseases), disability, back pain, quality of life, and psychological and cosmetic issues in adolescents with idiopathic scoliosis. We looked at randomized controlled trials (RCTs) and prospective controlled cohort studies (CCTs). Scoliosis is a condition where the spine is curved in three dimensions (from the back the spine appears to be shaped like an 's' and the trunk is deformed). It is often idiopathic, which means the cause is unknown. The most common type of scoliosis is generally discovered around 10 years of age or older, and is defined as a curve that measures at least 10° (called a Cobb angle; measured on x‐ray). Because of the unknown cause and the age of diagnosis, it is called adolescent idiopathic scoliosis (AIS). While there are usually no symptoms, the appearance of AIS frequently has a negative impact on adolescents. Increased curvature of the spine can present health risks in adulthood and in older people. Braces are one intervention that may stop further progression of the curve. They generally need to be worn full time, with treatment lasting until the end of growth (most frequently, from a minimum of two to four/five years). However, bracing for this condition is still controversial, and questions remain about how effective it is. This review included seven studies, with a total of 662 adolescents of both genders. Curves from 15° to more than 45° were considered. Elastic, rigid (polyethylene), and very rigid (polycarbonate) braces were studied. Quality of life was not affected during brace treatment (very low quality evidence); quality of life, back pain, and psychological and cosmetic issues did not change in the long term (very low quality evidence).
Rigid bracing seems effective in 20° to 40° curves (low quality evidence), elastic bracing in 15° to 30° curves (low quality evidence), and very rigid bracing in high degree curves above 45° (very low quality evidence); rigid bracing was more successful than elastic bracing (low quality evidence), and a pad pressure control system did not improve results (very low quality evidence). Primary outcomes such as pulmonary disorders, disability, back pain, psychological and cosmetic issues, and quality of life should be better evaluated in the future. Side effects, as well as the usefulness of exercises and other adjunctive treatments to bracing, should be studied too. | Idiopathic scoliosis is a three‐dimensional deformity of the spine. The most common form is diagnosed in adolescence. While adolescent idiopathic scoliosis (AIS) can progress during growth and cause a surface deformity, it is usually not symptomatic. However, in adulthood, if the final spinal curvature surpasses a certain critical threshold, the risk of health problems and curve progression is increased. Objectives To evaluate the efficacy of bracing for adolescents with AIS versus no treatment or other treatments, on quality of life, disability, pulmonary disorders, progression of the curve, and psychological and cosmetic issues. Search methods We searched CENTRAL, MEDLINE, EMBASE, five other databases, and two trials registers up to February 2015 for relevant clinical trials. We also checked the reference lists of relevant articles and conducted an extensive handsearch of grey literature. Selection criteria Randomized controlled trials (RCTs) and prospective controlled cohort studies comparing braces with no treatment, other treatment, surgery, and different types of braces for adolescents with AIS. Data collection and analysis We used standard methodological procedures expected by The Cochrane Collaboration. We included seven studies (662 participants).
Five were planned as RCTs and two as prospective controlled trials. One RCT failed completely; another was continued as an observational study, also reporting the results of the participants who had been randomized. There was very low quality evidence from one small RCT (111 participants) that quality of life (QoL) during treatment did not differ significantly between rigid bracing and observation (mean difference (MD) ‐2.10, 95% confidence interval (CI) ‐7.69 to 3.49). There was very low quality evidence from a subgroup of 77 adolescents from one prospective cohort study showing that QoL, back pain, psychological, and cosmetic issues did not differ significantly between rigid bracing and observation in the long term (16 years). Results of the secondary outcomes showed that there was low quality evidence that rigid bracing compared with observation significantly increased the success rate in 20° to 40° curves at two years' follow‐up (one RCT, 116 participants; risk ratio (RR) 1.79, 95% CI 1.29 to 2.50). There was low quality evidence that elastic bracing increased the success rate in 15° to 30° curves at three years' follow‐up (one RCT, 47 participants; RR 1.88, 95% CI 1.11 to 3.20). There is very low quality evidence from two prospective cohort studies with a control group that rigid bracing increases the success rate (curves not evolving to 50° or above) at two years' follow‐up (one study, 242 participants; RR 1.50, 95% CI 1.19 to 1.89) and at three years' follow‐up (one study, 240 participants; RR 1.75, 95% CI 1.42 to 2.16). There was very low quality evidence from a prospective cohort study (57 participants) that very rigid bracing increased the success rate (no progression of 5° or more, fusion, or waiting list for fusion) in adolescents with high degree curves (above 45°) (one study, 57 adolescents; RR 1.79, 95% CI 1.04 to 3.07 in the intention‐to‐treat (ITT) analysis).
There was low quality evidence from one RCT that a rigid brace was more successful than an elastic brace at curbing curve progression when measured in Cobb degrees in low degree curves (20° to 30°), with no significant differences between the two groups in the subjective perception of daily difficulties associated with wearing the brace (43 girls; risk of success at four years' follow‐up: RR 1.40, 95% CI 1.03 to 1.89). Finally, there was very low quality evidence from one RCT (12 participants) that a rigid brace with a pad pressure control system is no better than a standard brace in reducing the risk of progression. Only one prospective cohort study (236 participants) assessed adverse events: neither the percentage of adolescents with any adverse event (RR 1.27, 95% CI 0.96 to 1.67) nor the percentage of adolescents reporting back pain, the most common adverse event, was different between the groups (RR 0.72, 95% CI 0.47 to 1.10). Due to the important clinical differences among the studies, it was not possible to perform a meta‐analysis. Two studies showed that bracing did not change QoL during treatment (low quality), and QoL, back pain, and psychological and cosmetic issues in the long term (16 years) (very low quality). All included papers consistently showed that bracing prevented curve progression (secondary outcome). However, due to the strength of evidence (from low to very low quality), further research is very likely to have an impact on our confidence in the estimate of effect. The high rate of failure of RCTs demonstrates the huge difficulties in performing RCTs in a field where parents reject randomization of their children. This challenge may prevent us from seeing increases in the quality of the evidence over time.
Other designs need to be implemented and included in future reviews, including 'expertise‐based' trials, prospective controlled cohort studies, and prospective studies conducted according to pre‐defined criteria such as the Scoliosis Research Society (SRS) and the International Society on Scoliosis Orthopaedic and Rehabilitation Treatment (SOSORT) criteria. Future studies should increase their focus on participant outcomes, adverse effects, methods to increase compliance, and usefulness of physiotherapeutic scoliosis specific exercises added to bracing. |
t26 | People are living longer; however, the very old often have many health problems and disabilities which result in them living and eventually dying in care homes. Residents of such homes are highly likely to die there, making these places where palliative care is needed. Palliative care provides relief from pain and other distressing symptoms experienced by people reaching the end of life. Palliative care aims to help people live as actively as possible until death, and to help their families cope with the illness and bereavement. The aim of this review was to see how effective palliative care interventions in care homes are, and to describe the outcome measures used in the studies. We found only three suitable studies (735 participants), all from the USA. There was little evidence that interventions to improve palliative care for older people in care homes improved outcomes for residents. One study found that palliative care increased bereaved family members' perceptions of the quality of care and another found lower discomfort for residents with dementia who were dying. There were problems with both of these findings. Two studies found that palliative care improved some of the ways in which care was given in the care home; however, we do not know if this resulted in better outcomes for residents. There is a need for more high quality research, particularly outside the USA. | Residents of nursing care homes for older people are highly likely to die there, making these places where palliative care is needed. Objectives The primary objective was to determine effectiveness of multi‐component palliative care service delivery interventions for residents of care homes for older people. The secondary objective was to describe the range and quality of outcome measures.
Search methods The grey literature and the following electronic databases were searched: Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness (all issue 1, 2010); MEDLINE, EMBASE, CINAHL, British Nursing Index (1806 to February 2010), Science Citation Index Expanded & AMED (all to February 2010). Key journals were hand searched and a PubMed related articles link search was conducted on the final list of articles. Selection criteria We planned to include Randomised Clinical Trials (RCTs), Controlled Clinical Trials (CCTs), controlled before‐and‐after studies and interrupted time series studies of multi‐component palliative care service delivery interventions for residents of care homes for older people. These usually include the assessment and management of physical, psychological and spiritual symptoms and advance care planning. We did not include individual components of palliative care, such as advance care planning. Data collection and analysis Two review authors independently assessed studies for inclusion, extracted data, and assessed quality and risk of bias. Meta‐analysis was not conducted due to heterogeneity of studies. The analysis comprised a structured narrative synthesis. Outcomes for residents and process of care measures were reported separately. Two RCTs and one controlled before‐and‐after study were included (735 participants). All were conducted in the USA and had several potential sources of bias. Few outcomes for residents were assessed. One study reported higher satisfaction with care and the other found lower observed discomfort in residents with end‐stage dementia. Two studies reported group differences on some process measures.
Both reported higher referral to hospice services in their intervention group; one found fewer hospital admissions and days in hospital in the intervention group, while the other found an increase in do‐not‐resuscitate orders and documented advance care plan discussions. We found few studies, and all were in the USA. Although the results are potentially promising, high quality trials of palliative care service delivery interventions which assess outcomes for residents are needed, particularly outside the USA. These should focus on measuring standard outcomes, assessing cost‐effectiveness, and reducing bias. |
t27 | Acute heart attacks and severe angina (heart pain) are usually due to blockages in the arteries supplying the heart (coronary arteries). These problems are collectively referred to as 'acute coronary syndrome' (ACS). ACS is very common and may lead to severe complications including death. Hyperbaric oxygen therapy (HBOT) involves people breathing pure oxygen at high pressures in a specially designed chamber. It is sometimes used as a treatment to increase the supply of oxygen to the damaged heart in an attempt to reduce the area of the heart that is at risk of dying. We searched the medical literature for any studies that reported the outcome of patients with ACS when treated with HBOT. All studies included patients with heart attack and some also included patients with severe angina. The dose of hyperbaric oxygen was similar in most studies. Overall, we found some evidence that people with ACS are less likely to die or to have major adverse events, and more likely to have rapid relief from their pain, if they receive hyperbaric oxygen therapy as part of their treatment. However, our conclusions are based on relatively small randomised trials. Our confidence in these findings is further reduced because in most of these studies both the patients and researchers were aware of who was receiving HBOT and it is possible a 'placebo effect' has biased the result in favour of HBOT. HBOT was generally well‐tolerated. Some patients complained of claustrophobia when treated in small (single person) chambers and there was no evidence of important toxicity from oxygen breathing in any subject. One individual suffered damage to the eardrum from pressurisation. While HBOT may reduce the risk of dying, time to pain relief and the chance of adverse heart events in people with heart attack and unstable angina, more work is needed to be sure that HBOT should be recommended.
| Acute coronary syndrome (ACS), which includes acute myocardial infarction and unstable angina, is common and may prove fatal. Hyperbaric oxygen therapy (HBOT) will improve oxygen supply to the threatened heart and may reduce the volume of heart muscle that perishes. The addition of HBOT to standard treatment may reduce death rate and other major adverse outcomes. This is an update of a review previously published in May 2004 and June 2010. Objectives The aim of this review was to assess the evidence for the effects of adjunctive HBOT in the treatment of ACS. We compared treatment regimens including adjunctive HBOT against similar regimens excluding HBOT. Where regimens differed significantly between studies this is clearly stated and the implications discussed. All comparisons were made using an intention to treat analysis where this was possible. Efficacy was estimated from randomised trial comparisons but no attempt was made to evaluate the likely effectiveness that might be achieved in routine clinical practice. Specifically, we addressed: Does the adjunctive administration of HBOT to people with acute coronary syndrome (unstable angina or infarction) result in a reduction in the risk of death? Does the adjunctive administration of HBOT to people with acute coronary syndrome result in a reduction in the risk of major adverse cardiac events (MACE), that is: cardiac death, myocardial infarction, and target vessel revascularization by operative or percutaneous intervention? Is the administration of HBOT safe in both the short and long term? Search methods We updated the search of the following sources in September 2014, but found no additional relevant citations since the previous search in June 2010: CENTRAL, MEDLINE, EMBASE, CINAHL and DORCTHIM. Relevant journals were handsearched and researchers in the field contacted. We applied no language restrictions.
Selection criteria Randomised studies comparing the effect on ACS of regimens that include HBOT with those that exclude HBOT. Data collection and analysis Three authors independently evaluated the quality of trials using the guidelines of the Cochrane Handbook and extracted data from included trials. Binary outcomes were analysed using risk ratios (RR) and continuous outcomes using the mean difference (MD) and both are presented with 95% confidence intervals. We assessed the quality of the evidence using the GRADE approach. No new trials were located in our most recent search in September 2014. Six trials with 665 participants contributed to this review. These trials were small and subject to potential bias. Only two reported randomisation procedures in detail and in only one trial was allocation concealed. While only modest numbers of participants were lost to follow‐up, in general there is little information on the longer‐term outcome for participants. Allocation to HBOT was associated with a reduction in the risk of death of around 42% in patients with acute coronary syndrome (RR 0.58, 95% CI 0.36 to 0.92; 5 trials, 614 participants; low quality evidence). In general, HBOT was well‐tolerated. No patients were reported as suffering neurological oxygen toxicity and only a single patient was reported to have significant barotrauma to the tympanic membrane. One trial suggested a significant incidence of claustrophobia (15%) in single occupancy chambers (RR of claustrophobia with HBOT 31.6, 95% CI 1.92 to 521). For people with ACS, there is some evidence from small trials to suggest that HBOT is associated with a reduction in the risk of death, the volume of damaged muscle, the risk of MACE and time to relief from ischaemic pain.
In view of the modest number of patients, methodological shortcomings and poor reporting, this result should be interpreted cautiously, and an appropriately powered trial of high methodological rigour is justified to define those patients (if any) who can be expected to derive most benefit from HBOT. The routine application of HBOT to these patients cannot be justified from this review. |
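The "reduction in the risk of death by around 42%" in the abstract above is simply the complement of the pooled risk ratio. A small illustrative calculation, using only the RR and CI values reported there:

```python
def relative_risk_reduction(rr):
    """Relative risk reduction implied by a risk ratio (RR < 1 favours treatment)."""
    return 1.0 - rr

# Values reported in the abstract: RR 0.58 (95% CI 0.36 to 0.92)
rr, ci_lower, ci_upper = 0.58, 0.36, 0.92
rrr = relative_risk_reduction(rr)  # about 0.42, i.e. "around 42%"
# CI bounds for the reduction come from the opposite bounds of the RR
rrr_ci = (relative_risk_reduction(ci_upper), relative_risk_reduction(ci_lower))
```

Note that because the upper CI bound of the RR is below 1, even the most conservative end of the interval still implies some reduction (about 8%).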
t28 | The aim of this Cochrane Review was to find out what methods of skin preparation before caesarean section were most effective in preventing infection after the operation. We collected and analysed all studies that assessed the effectiveness of antiseptics used to prepare the skin before making an incision (or cut) for the caesarean section. We only included analysis of preparations that were used to prepare the surgical site on the abdomen before caesarean section; we did not look at handwashing by the surgical team, or bathing the mother. Infections of surgical incisions are the third most frequently reported hospital‐acquired infections. Women who give birth by caesarean section are exposed to infection from germs already present on the mother's own skin, or from external sources. The risk of infection following a caesarean section can be 10 times that of vaginal birth. Therefore, preventing infection by properly preparing the skin before the incision is made is an important part of the overall care given to women prior to caesarean birth. An antiseptic is a substance applied to remove bacteria that can cause harm to the mother or baby when they multiply. Antiseptics include iodine or povidone iodine, alcohol, chlorhexidine, and parachlorometaxylenol. They can be applied as liquids or powders, scrubs, paints, swabs, or on impregnated 'drapes' that stick to the skin, which the surgeon then cuts through. Non‐impregnated drapes can also be applied, once the skin has been scrubbed or swabbed, with the aim of reducing the spread of any remaining bacteria during surgery. It is important to know if some of these antiseptics or methods work better than others.
The review looked at what was best for women and babies when it came to important outcomes including: infection of the site where the surgeon cut the woman to perform the caesarean section; inflammation of the lining of the womb (metritis and endometritis); how long the woman stayed in hospital; and any other adverse effects, such as irritation of the woman's skin, or any reported impact on the baby. The evidence suggested that there was probably little or no difference between the various antiseptics in the incidence of surgical site infection, endometritis, skin irritation, or allergic skin reaction in the mother. However, in one study, there was a reduction in bacterial growth on the skin at 18 hours after caesarean section for women who received a skin preparation with chlorhexidine gluconate compared with women who received the skin preparation with povidone iodine, but more data are needed to see if this actually reduces infections for women. The available evidence from the trials that have been conducted was insufficient to tell us the best type of skin preparation for preventing surgical site infection following caesarean section. | The risk of maternal mortality and morbidity (particularly postoperative infection) is higher for caesarean section (CS) than for vaginal birth. With the increasing rate of CS, it is important to minimise the risks to the mother as much as possible. This review focused on different forms and methods of preoperative skin preparation to prevent infection. This review is an update of a review that was first published in 2012, and updated in 2014. Objectives To compare the effects of different antiseptic agents, different methods of application, or different forms of antiseptic used for preoperative skin preparation for preventing postcaesarean infection. 
Search methods For this update, we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform (ICTRP) (27 November 2017), and reference lists of retrieved studies. Selection criteria Randomised and quasi‐randomised trials, evaluating any type of preoperative skin preparation agents, forms, and methods of application for caesarean section. Comparisons of interest in this review were between different antiseptic agents used for CS skin preparation (e.g. alcohol, povidone iodine), different methods of antiseptic application (e.g. scrub, paint, drape), different forms of antiseptic (e.g. powder, liquid), and also between different skin preparations, such as a plastic incisional drape, which may or may not be impregnated with antiseptic agents. Only studies involving the preparation of the incision area were included. This review did not cover studies of preoperative handwashing by the surgical team or preoperative bathing. Data collection and analysis Three review authors independently assessed all potential studies for inclusion, assessed risk of bias, and extracted the data using a predesigned form. We checked data for accuracy. We assessed the quality of the evidence using the GRADE approach. For this update, we included 11 randomised controlled trials (RCTs), with a total of 6237 women who were undergoing CS. Ten trials (6215 women) contributed data to this review. All included studies were individual RCTs. We did not identify any quasi‐ or cluster‐RCTs. The trial dates ranged from 1983 to 2016. Six trials were conducted in the USA, and the remainder in Nigeria, South Africa, France, Denmark, and Indonesia. The included studies were broadly methodologically sound, but raised some specific concerns regarding risk of bias in a number of cases.
Drape versus no drape This comparison investigated the use of a non‐impregnated drape versus no drape, following preparation of the skin with antiseptics. For women undergoing CS, low‐quality evidence suggested that using a drape before surgery compared with no drape may make little or no difference to the incidence of surgical site infection (risk ratio (RR) 1.29, 95% confidence interval (CI) 0.97 to 1.71; 2 trials, 1294 women), or length of stay in the hospital (mean difference (MD) 0.10 day, 95% CI ‐0.27 to 0.46; 1 trial, 603 women). One‐minute alcohol scrub with iodophor drape versus five‐minute iodophor scrub without drape One trial compared an alcohol scrub and iodophor drape with a five‐minute iodophor scrub only, and reported no surgical site infection in either group (79 women, very‐low quality evidence). We were uncertain whether the combination of a one‐minute alcohol scrub and a drape reduced the incidence of endomyometritis when compared with a five‐minute scrub, because the quality of the evidence was very low (RR 1.62, 95% CI 0.29 to 9.16; 1 trial, 79 women). The available evidence from the trials that have been conducted was insufficient to tell us the best type of skin preparation for preventing surgical site infection following caesarean section. More high‐quality research is needed. We found four studies that were still ongoing. We will incorporate the results of these studies into this review in future updates. |
t29 | We wanted to find out if giving hydroxyurea to people with non‐transfusion dependent beta thalassaemia would reduce the need for blood transfusion. Thalassaemia is a genetic blood disorder causing defective adult haemoglobin (the oxygen carrying component of red blood cells). This causes anaemia with different degrees of severity. People with non‐transfusion dependent beta thalassaemia do not depend on regular transfusions for survival, but may require blood transfusion from time to time. Persistent anaemia affects growth, may delay puberty and reduce quality of life. However, transfusion should be avoided, if possible, because it leads to excess iron being deposited in various organs affecting how they function. People with non‐transfusion dependent beta thalassaemia have higher levels of foetal haemoglobin (the main form of haemoglobin found during the development of a baby before birth). After birth, foetal haemoglobin gradually disappears and is replaced by the defective adult haemoglobin. A small amount of foetal haemoglobin remains after birth and is often present in people with non‐transfusion dependent beta thalassaemia. The higher the level of foetal haemoglobin the less transfusion could be needed. Hydroxyurea is an anti‐cancer treatment which increases the level of foetal haemoglobin. Therefore, it might reduce the need for blood transfusion in people with non‐transfusion dependent beta thalassaemia. However, it is not known whether hydroxyurea is effective and safe and if so, which is the best dose and at which age treatment should start. We did not find any randomised controlled trials (where people taking part in the trial have equal chances of being in the treatment or the control group) comparing hydroxyurea with a placebo (a dummy drug) or usual care. However, we found one randomised controlled trial comparing two different doses of hydroxyurea (10 mg/kg/day versus 20 mg/kg/day given for 24 weeks) and included it in this review. 
A total of 61 people took part in this trial. The lower dose of hydroxyurea appeared to increase levels of foetal haemoglobin, but the higher dose did not. We found some evidence that the higher dose was harmful, particularly to the bone marrow. The trial did not look at whether blood transfusions could be given less often or whether the effects of the anaemia were reduced. In the short term, the lower dose does not appear to have any side effects. | Non‐transfusion dependent beta thalassaemia is a subset of inherited haemoglobin disorders characterised by reduced production of the beta globin chain of the haemoglobin molecule leading to anaemia of varying severity. Although blood transfusion is not a necessity for survival, it is required when episodes of chronic anaemia occur. This chronic anaemia can impair growth and affect quality of life. People with non‐transfusion dependent beta thalassaemia suffer from iron overload due to their body's increased capability of absorbing iron from food sources. Iron overload becomes more pronounced in those requiring blood transfusion. People with a higher foetal haemoglobin level have been found to require fewer blood transfusions. Hydroxyurea has been used to increase foetal haemoglobin level; however, its efficacy in reducing transfusion, chronic anaemia complications and its safety need to be established. Objectives To assess the effectiveness, safety and appropriate dose regimen of hydroxyurea in people with non‐transfusion dependent beta thalassaemia (haemoglobin E combined with beta thalassaemia and beta thalassaemia intermedia). Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register, compiled from electronic database searches and handsearching of relevant journals. We also searched ongoing trials registries and the reference lists of relevant articles and reviews. Date of last search: 30 April 2016. 
Selection criteria Randomised or quasi‐randomised controlled trials of hydroxyurea in people with non‐transfusion dependent beta thalassaemia comparing hydroxyurea with placebo or standard treatment or comparing different doses of hydroxyurea. Data collection and analysis Two authors independently applied the inclusion criteria in order to select trials for inclusion. Both authors assessed the risk of bias of trials and extracted the data. A third author verified these assessments. No trials comparing hydroxyurea with placebo or standard care were found. However, we included one randomised controlled trial (n = 61) comparing 20 mg/kg/day with 10 mg/kg/day of hydroxyurea for 24 weeks. Both haemoglobin and foetal haemoglobin levels were lower at 24 weeks in the 20 mg group compared with the 10 mg group, mean difference ‐2.39 (95% confidence interval ‐2.8 to ‐1.98) and mean difference ‐1.5 (95% confidence interval ‐1.83 to ‐1.17), respectively. Major adverse effects were significantly more common in the 20 mg group, for neutropenia risk ratio 9.93 (95% confidence interval 1.34 to 73.97) and for thrombocytopenia risk ratio 3.68 (95% confidence interval 1.13 to 12.07). No difference was reported for minor adverse effects (gastrointestinal disturbances and raised liver enzymes). The effect of hydroxyurea on transfusion frequency was not reported. The overall quality for the outcomes reported was graded as very low mainly because the outcomes were derived from only one small study with an unclear method of allocation concealment. There is no evidence from randomised controlled trials to show whether hydroxyurea has any effect compared with controls on the need for blood transfusion. Administration of 10 mg/kg/day compared to 20 mg/kg/day of hydroxyurea resulted in higher haemoglobin levels and seems safer with fewer adverse effects. It has not been reported whether hydroxyurea is capable of reducing the need for blood transfusion. 
Large well‐designed randomised controlled trials with sufficient duration of follow up are recommended. |
t30 | We reviewed the evidence for the effect of omalizumab on people with asthma when compared with placebo. We focused on whether omalizumab is a beneficial and safe treatment for adults and children with asthma. Asthma is a respiratory condition that affects millions of people worldwide. It is thought that allergy may be an important part of the disease for many people with asthma. Omalizumab is a drug that targets a protein, called IgE, and removes it from free circulation in the body. IgE is centrally involved in allergy. Omalizumab is an expensive drug that is usually given by injection under the skin every two to four weeks. It is licenced for use in asthma sufferers who are not being adequately treated with standard therapy and who require frequent courses or continuous use of oral steroid tablets. We looked for evidence on whether administration of omalizumab is better or worse than giving placebo. Twenty‐five studies, involving 6382 people, were included in this review. These studies lasted between eight and 60 weeks. All of the people included in the studies had asthma, of different severity. Both men and women were included, and some of the studies included children and young people. All studies compared omalizumab versus placebo. In keeping with current medical practice, most studies (21 of 25) used omalizumab given by injection under the skin. Some of the older studies used omalizumab injected into a vein or given by inhalation. Most of the studies were sponsored by the pharmaceutical industry. We found that people receiving omalizumab were less likely to have a flare‐up (‘exacerbation’) of their asthma. For example, on average, 26 of 100 people who were receiving placebo (over a 16 to 60‐week period) had an exacerbation compared with an average of 16 of 100 people receiving omalizumab. People receiving omalizumab were also more likely to be able to reduce the doses of inhaled steroids. 
For example, on average, 21 of 100 people with moderate or severe asthma who were receiving placebo were able to completely stop their inhaled steroids (over a 28 to 32‐week period) compared with an average of 40 of 100 receiving omalizumab. People receiving omalizumab also experienced improvement in their asthma symptoms and in their health‐related quality of life. People receiving omalizumab were no more or less likely to have unwanted side effects overall. However, people receiving omalizumab were more likely to have skin reactions at the site of the injection. Unfortunately, many of the trials in this review included participants with moderate asthma, and this drug is not licenced for this group. More trials need to focus on whether this drug is effective in people with the most severe asthma; evidence for efficacy in this group is poor, in spite of current guidelines. | Asthma is a respiratory (airway) condition that affects an estimated 300 million people worldwide and is associated with significant morbidity and mortality. Omalizumab is a monoclonal antibody that binds and inhibits free serum immunoglobulin E (IgE). It is called an 'anti‐IgE' drug. IgE is an immune mediator involved in clinical manifestations of asthma. A recent update of National Institute for Health and Care Excellence (NICE) guidance in 2013 recommends omalizumab for use as add‐on therapy in adults and children over six years of age with inadequately controlled severe persistent allergic IgE‐mediated asthma who require continuous or frequent treatment with oral corticosteroids. Objectives To assess the effects of omalizumab versus placebo or conventional therapy for asthma in adults and children. Search methods We searched the Cochrane Airways Group Specialised Register of trials for potentially relevant studies. The most recent search was performed in June 2013. We also checked the reference lists of included trials and searched online trial registries and drug company websites. 
Selection criteria Randomised controlled trials examining anti‐IgE administered in any manner for any duration. Trials with co‐interventions were included, as long as they were the same in each arm. Data collection and analysis Two review authors independently assessed study quality and extracted and entered data. Three modes of administration were identified from the published literature: inhaled, intravenous and subcutaneous injection. The main focus of the updated review is subcutaneous administration, as this route is currently used in clinical practice. Subgroup analysis was performed by asthma severity. Data were extracted from published and unpublished sources. In all, 25 trials were included in the review, including 11 new studies since the last update, for a total of 19 that considered the efficacy of subcutaneous anti‐IgE treatment as an adjunct to treatment with corticosteroids. For participants with moderate or severe asthma who were receiving inhaled corticosteroid (ICS) therapy, a significant advantage favoured subcutaneous omalizumab with regard to experiencing an asthma exacerbation (odds ratio (OR) 0.55, 95% confidence interval (CI) 0.42 to 0.60; ten studies, 3261 participants). This represents an absolute reduction from 26% for participants suffering an exacerbation on placebo to 16% on omalizumab, over 16 to 60 weeks. A significant benefit was noted for subcutaneous omalizumab versus placebo with regard to reducing hospitalisations (OR 0.16, 95% CI 0.06 to 0.42; four studies, 1824 participants), representing an absolute reduction in risk from 3% with placebo to 0.5% with omalizumab over 28 to 60 weeks. No separate data on hospitalisations were available for the severe asthma subgroup, and all of these data were reported for participants with the diagnosis of moderate to severe asthma. 
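The absolute risks quoted above follow from applying the odds ratio to the placebo-group risk. As a minimal sketch of that arithmetic (the helper function name is our own illustration, not code from the review):

```python
def risk_with_or(baseline_risk: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline risk and return the treated-group risk.

    The baseline risk is converted to odds, the odds ratio is applied,
    and the resulting odds are converted back to a risk.
    """
    odds = baseline_risk / (1.0 - baseline_risk)
    treated_odds = odds * odds_ratio
    return treated_odds / (1.0 + treated_odds)

# Exacerbations: OR 0.55 applied to the 26% placebo risk gives roughly 16%.
print(round(risk_with_or(0.26, 0.55), 2))   # 0.16
# Hospitalisations: OR 0.16 applied to the 3% placebo risk gives roughly 0.5%.
print(round(risk_with_or(0.03, 0.16), 3))   # 0.005
```

This also illustrates why odds ratios overstate effects when read as risk ratios: at a 26% baseline risk, an OR of 0.55 corresponds to a risk ratio of about 0.62, not 0.55.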
Participants treated with subcutaneous omalizumab were also significantly more likely to be able to withdraw their ICS completely than those treated with placebo (OR 2.50, 95% CI 2.00 to 3.13), and a small but statistically significant reduction in daily inhaled steroid dose was reported for omalizumab‐treated participants compared with those given placebo (weighted mean difference (WMD) ‐118 mcg beclomethasone dipropionate (BDP) equivalent per day, 95% CI ‐154 to ‐84). However, no significant difference between omalizumab and placebo treatment groups was seen in the number of participants who were able to withdraw from oral corticosteroid (OCS) therapy (OR 1.18, 95% CI 0.53 to 2.63). Participants treated with subcutaneous omalizumab as an adjunct to treatment with corticosteroids showed a small but significant reduction in rescue beta2‐agonist medication use compared with placebo (mean difference (MD) ‐0.39 puffs per day, 95% CI ‐0.55 to ‐0.24; nine studies, 3524 participants). This benefit was observed in both the moderate to severe (MD ‐0.58, 95% CI ‐0.84 to ‐0.31) and severe (MD ‐0.30, 95% CI ‐0.49 to ‐0.10) asthma subgroups on a therapy of inhaled corticosteroids; however, no significant difference between subcutaneous omalizumab and placebo was noted for this outcome in participants with severe asthma who were receiving a therapy of inhaled plus oral corticosteroids. Significantly fewer serious adverse events were reported in participants assigned to subcutaneous omalizumab than in those receiving placebo (OR 0.72, 95% CI 0.57 to 0.91; 15 studies, 5713 participants), but more injection site reactions were observed (from 5.6% with placebo to 9.1% with omalizumab). To reflect current clinical practice, discussion of the results is limited to subcutaneous use, and trials involving intravenous and inhaled routes have been archived. 
Omalizumab was effective in reducing asthma exacerbations and hospitalisations as an adjunctive therapy to inhaled steroids and during steroid tapering phases of clinical trials. Omalizumab was significantly more effective than placebo in increasing the numbers of participants who were able to reduce or withdraw their inhaled steroids. Omalizumab was generally well tolerated, although more injection site reactions were seen with omalizumab. Further assessment in paediatric populations is necessary, as is direct double‐dummy comparison with ICS. Although subgroup analyses suggest that participants receiving prednisolone had better asthma control when they received omalizumab, it remains to be tested prospectively whether the addition of omalizumab has a prednisolone‐sparing effect. It is also not clear whether there is a threshold level of baseline serum IgE for optimum efficacy of omalizumab. Given the high cost of the drug, identification of biomarkers predictive of response is of major importance for future research. |
t31 | Stroke is a major cause of disability. Stroke‐related disability can include difficulty with daily tasks such as toileting, washing, and walking. Sometimes disability is so severe that a person becomes dependent on others for performing basic activities (this is known as 'dependence'). Our previous Cochrane Review published in 2012 suggested that SSRI drugs (a class of drug usually used to treat mood problems, which work by changing the level of chemicals in the brain) might improve recovery after stroke, thereby reducing disability and increasing the chance of being independent after a stroke. However, when we looked at only the high‐quality trials, the effect was less convincing. A large trial recruiting more than 3000 participants has now been completed and so it is necessary to update this review. In our main analyses we decided to include only high‐quality trials, that is those which used rigorous methods to avoid biases (such as the person assessing outcome being aware of whether the stroke survivor received the active drug or placebo). In this review, we refer to them as 'low risk of bias' trials. If disability and dependency can be improved by a simple drug, this could have a major impact on quality of life for many stroke survivors. We also wanted to find out whether SSRIs had other benefits, for example improving the severity of any arm or leg weakness, mood, anxiety, quality of life, and also whether SSRIs were associated with side effects such as bleeding or seizures. In total we found 63 trials recruiting 9168 stroke survivors within one year of their stroke. There was a wide age range. About half the trials required participants to have depression to enter the trial. The duration, drug, and dose varied between trials. However, only three of these trials were at low risk of bias; the participants in these trials did not have to be depressed to enter the trial, and they were all recruited soon after the stroke. 
When we combined data from these three studies at low risk of bias, which recruited 3249 participants, SSRIs did not affect disability score or dependency. SSRIs reduced the risk of future depression but increased the risk of problems with the digestive system. | Stroke is a major cause of adult disability. Selective serotonin reuptake inhibitors (SSRIs) have been used for many years to manage depression and other mood disorders after stroke. The 2012 Cochrane Review of SSRIs for stroke recovery demonstrated positive effects on recovery, even in people who were not depressed at randomisation. A large trial of fluoxetine for stroke recovery (fluoxetine versus placebo under supervision) has recently been published, and it is now appropriate to update the evidence. Objectives To determine if SSRIs are more effective than placebo or usual care at improving outcomes in people less than 12 months post‐stroke, and to determine whether treatment with SSRIs is associated with adverse effects. Search methods For this update, we searched the Cochrane Stroke Group Trials Register (last searched 16 July 2018), the Cochrane Controlled Trials Register (CENTRAL, Issue 7 of 12, July 2018), MEDLINE (1946 to July 2018), Embase (1974 to July 2018), CINAHL (1982 to July 2018), PsycINFO (1985 to July 2018), AMED (1985 to July 2018), and PsycBITE (March 2012 to July 2018). We also searched grey literature and clinical trials registers. Selection criteria We included randomised controlled trials (RCTs) that recruited ischaemic or haemorrhagic stroke survivors at any time within the first year. The intervention was any SSRI, given at any dose, for any period, and for any indication. We excluded drugs with mixed pharmacological effects. The comparator was usual care or placebo. 
To be included, trials had to collect data on at least one of our primary (disability score or independence) or secondary outcomes (impairments, depression, anxiety, quality of life, fatigue, healthcare cost, death, adverse events and leaving the trial early). Data collection and analysis We extracted data on demographics, type of stroke, time since stroke, our primary and secondary outcomes, and sources of bias. Two review authors independently extracted data from each trial. We used standardised mean differences (SMDs) to estimate treatment effects for continuous variables, and risk ratios (RRs) for dichotomous effects, with their 95% confidence intervals (CIs). We assessed risks of bias and applied GRADE criteria. We identified a total of 63 eligible trials recruiting 9168 participants, most of which provided data only at end of treatment and not at follow‐up. There was a wide age range. About half the trials required participants to have depression to enter the trial. The duration, drug, and dose varied between trials. Only three of the included trials were at low risk of bias across the key 'Risk of bias' domains. A meta‐analysis of these three trials found little or no effect of SSRI on either disability score: SMD −0.01 (95% CI −0.09 to 0.06; P = 0.75; 2 studies, 2829 participants; moderate‐quality evidence) or independence: RR 1.00 (95% CI 0.91 to 1.09; P = 0.99; 3 studies, 3249 participants; moderate‐quality evidence). We downgraded both these outcomes for imprecision. SSRIs reduced the average depression score (SMD 0.11 lower, 0.19 lower to 0.04 lower; 2 trials, 2861 participants; moderate‐quality evidence), but there was a higher observed number of gastrointestinal side effects among participants treated with SSRIs compared to placebo (RR 2.19, 95% CI 1.00 to 4.76; P = 0.05; 2 studies, 148 participants; moderate‐quality evidence), with no evidence of heterogeneity (I 2 = 0%). For seizures there was no evidence of a substantial difference. 
When we included all trials in a sensitivity analysis, irrespective of risk of bias, SSRIs appeared to reduce disability scores but not dependence. One large trial (FOCUS) dominated the results. We identified several ongoing trials, including two large trials that together will recruit more than 3000 participants. We found no reliable evidence that SSRIs should be used routinely to promote recovery after stroke. Meta‐analysis of the trials at low risk of bias indicates that SSRIs do not improve recovery from stroke. We identified potential improvements in disability only in the analyses which included trials at high risk of bias. A further meta‐analysis of large ongoing trials will be required to determine the generalisability of these findings. |
t32 | The veins of the leg are designed to return blood from the leg upwards towards the heart. Blood is under the force of gravity and, left to itself, would flow downwards. Valves within the veins normally prevent blood from flowing downwards (i.e. backwards). However, if these valves become leaky, pressure within the veins increases. This high pressure causes swelling, thickening and damage to skin, which may break down to form ulcers. Venous leg ulcers are associated with pain and mobility restrictions that affect quality of life. Compression of legs with bandages or medical stockings helps to move the blood upwards, and reduces pressure in the veins and tissues. This treatment has been shown to improve ulcer healing. Compression is unpopular because it can be uncomfortable, and only provides a benefit while the bandages or hosiery are worn. Even with compression treatment, healing of venous ulcers may still take a long time, and ulcers often come back. Traditionally, surgery for venous disease involves removing the veins from the leg. The blood is then diverted through the remaining healthy veins. This reduces the pressure in the veins and helps prevent ulcers that have healed from coming back. Generally, this surgery is performed under a general anaesthetic and involves a period of recovery. Some people, particularly the elderly, are less suitable for general anaesthetic and may be at risk of age‐related complications or a prolonged and difficult recovery. Newer 'keyhole' surgical techniques destroy the veins with heat, and require only local anaesthesia. These treatments have been shown to be as effective as surgery in the treatment of varicose veins in the absence of ulcers, and result in less pain than traditional surgery. Since a general anaesthetic can be avoided, there is also a reduced risk associated with the anaesthetic procedure, and the recovery period is shorter. 
The purpose of this review was to compare the effectiveness of these new, minimally invasive surgical techniques with compression therapy for the management of venous leg ulcers. We wanted to see how well the different treatments work in terms of ulcer healing and recurrence rates. | Venous leg ulcers represent the worst extreme within the spectrum of chronic venous disease. Affecting up to 3% of the adult population, this typically chronic, recurring condition significantly impairs quality of life, and its treatment places a heavy financial burden upon healthcare systems. The current mainstay of treatment for venous leg ulcers is compression therapy, which has been shown to enhance ulcer healing rates. Open surgery on the veins in the leg has been shown to reduce ulcer recurrence rates, but it is an unpopular option and many patients are unsuitable. The efficacy of the newer, minimally‐invasive endovenous thermal techniques has been established in uncomplicated superficial venous disease, and these techniques are now beginning to be used in the management of venous ulceration, though the evidence for this treatment is currently unclear. It is hypothesised that, when used with compression, ablation may further reduce pressures in the leg veins, resulting in improved rates of healing. Furthermore, since long‐term patient concordance with compression is relatively poor, it may prove more popular, effective and cost‐effective to provide a single intervention to reduce recurrence, rather than life‐long treatment with compression. Objectives To determine the effects of superficial endovenous thermal ablation on the healing, recurrence and quality of life of people with active or healed venous ulcers. Search methods In August 2013 we searched Cochrane Wounds Group Specialised Register; The Cochrane Central Register of Controlled Trials (CENTRAL) ( The Cochrane Library ); Ovid MEDLINE; Ovid MEDLINE (In‐Process & Other Non‐Indexed Citations); Ovid EMBASE; and EBSCO CINAHL. 
There were no restrictions on the language of publication but there was a date restriction based on the fact that superficial endovenous thermal ablation is a comparatively new medical technology. Selection criteria Randomised clinical trials comparing endovenous thermal ablative techniques with compression therapy alone for venous leg ulcers were eligible for inclusion. Trials had to report on at least one objective measure of ulcer healing (primary outcome) such as proportion of ulcers healed at a given time point, time to complete healing, change in ulcer size, proportion of ulcers recurring over a given time period, or at a specific point, and ulcer‐free days. Secondary outcomes sought included patient‐reported quality of life, economic data and adverse events. Data collection and analysis Details of potentially eligible studies were extracted and summarised using a data extraction table. Data extraction and validity assessment were performed independently by two review authors, and any disagreements resolved by consensus or by arbitration of a third review author. No eligible randomised controlled trials were identified. There is an absence of evidence regarding the effects of superficial endovenous thermal ablation on ulcer healing, recurrence or quality of life of people with venous leg ulcer disease. The review identified no randomised controlled trials on the effects on ulcer healing, recurrence or quality of life, of superficial endovenous thermal ablation in people with active or healed venous leg ulcers. Adequately‐powered, high quality randomised controlled trials comparing endovenous thermal ablative interventions with compression therapy are urgently required to explore this new treatment strategy. These should measure and report outcomes that include time to ulcer healing, ulcer recurrence, quality of life and cost‐effectiveness. |
t33 | Clostridium difficile ( C. difficile ) is a bacterium that can live harmlessly in the colon, but when an individual takes an antibiotic for another condition, the C. difficile can grow and replace most of the normal bacterial flora that live in the colon. This overgrowth causes C. difficile ‐associated diarrhoea (also known as C. difficile infection ‐ CDI). The symptoms of CDI include diarrhoea, fever and pain. CDI may be only mild but in many cases is very serious and, if untreated, can be fatal. There are many proposed treatments for CDI, but the most common are withdrawing the antibiotic that caused the CDI and prescribing an antibiotic that kills the bacterium. Many antibiotics have been tested in clinical trials for effectiveness and this review studies the comparisons of these antibiotics. This review is an update of a previously published Cochrane review. Methods We searched the medical literature up to 26 January 2017. All randomised trials that compare two different antibiotics, or variations in dosing of a single antibiotic for treatment of CDI were included. Trials comparing antibiotic to placebo (e.g. a sugar pill) or no treatment were sought but, save for one poor quality placebo‐controlled trial, none were found. Trials that compared antibiotics to a non‐antibiotic treatment were not included. Results Twenty‐two studies (total 3215 participants) were included. The majority of studies enrolled participants with mild to moderate CDI who could tolerate oral antibiotics. Sixteen of the included studies excluded participants with severe CDI and few participants with severe CDI were included in the other studies. Twelve different antibiotics were assessed. Most of the studies compared vancomycin or metronidazole with other antibiotics. One small study compared vancomycin to placebo (e.g. sugar pill). There were no other studies that compared antibiotic treatment to a placebo or a no treatment control group. 
Seventeen of the 22 included studies had quality issues. In four studies, vancomycin was found to be superior to metronidazole for achieving sustained symptomatic cure (defined as resolution of diarrhoea and no recurrence of CDI). In two large studies, a new antibiotic, fidaxomicin, was found to be superior to vancomycin. It should be noted that the differences in effectiveness between these antibiotics were not too great and that metronidazole is far less expensive than either vancomycin or fidaxomicin. A pooled analysis of two small studies suggests that teicoplanin may be more effective than vancomycin for achieving symptomatic cure. The quality of the evidence for the other seven antibiotics in this review was very poor because the studies were very small, and many patients dropped out of these studies before completion. One hundred and forty deaths were reported in the studies, all of which were attributed to participants' pre‐existing health problems. The only side effects attributed to antibiotics were rare nausea and temporary elevation of liver enzymes. Recent cost data (July 2016) for a 10 day course of treatment shows that metronidazole 500 mg is the least expensive antibiotic with a cost of USD 13. Vancomycin 125 mg costs USD 1779 compared to fidaxomicin 200 mg at USD 3453.83 or more and teicoplanin at approximately USD 83.67. Conclusion No firm conclusions can be drawn regarding the effectiveness of antibiotic treatment in severe CDI as most studies excluded these patients. The lack of any 'no treatment' control studies does not allow for any conclusions regarding the need for antibiotic treatment in patients with mild CDI beyond withdrawal of the antibiotic that caused CDI. Nonetheless, moderate quality evidence suggests that vancomycin is superior to metronidazole and fidaxomicin is superior to vancomycin. 
The differences in effectiveness between these antibiotics were not too large and the advantage of metronidazole is its far lower cost compared to the other antibiotics. Larger studies are needed to determine if teicoplanin performs as well as the other antibiotics. A trial comparing the two cheapest antibiotics, metronidazole and teicoplanin, would be of interest. | Clostridium difficile ( C. difficile ) is recognized as a frequent cause of antibiotic‐associated diarrhoea and colitis. This review is an update of a previously published Cochrane review. Objectives The aim of this review is to investigate the efficacy and safety of antibiotic therapy for C. difficile‐associated diarrhoea (CDAD), or C. difficile infection (CDI), being synonymous terms. Search methods We searched MEDLINE, EMBASE, CENTRAL and the Cochrane IBD Group Specialized Trials Register from inception to 26 January 2017. We also searched clinicaltrials.gov and clinicaltrialsregister.eu for ongoing trials. Selection criteria Only randomised controlled trials assessing antibiotic treatment for CDI were included in the review. Data collection and analysis Three authors independently assessed abstracts and full text articles for inclusion and extracted data. The risk of bias was independently rated by two authors. For dichotomous outcomes, we calculated the risk ratio (RR) and corresponding 95% confidence interval (95% CI). We pooled data using a fixed‐effect model, except where significant heterogeneity was detected, at which time a random‐effects model was used. The following outcomes were sought: sustained symptomatic cure (defined as initial symptomatic response and no recurrence of CDI), sustained bacteriologic cure, adverse reactions to the intervention, death and cost. Twenty‐two studies (3215 participants) were included. The majority of studies enrolled patients with mild to moderate CDI who could tolerate oral antibiotics. 
Sixteen of the included studies excluded patients with severe CDI and few patients with severe CDI were included in the other six studies. Twelve different antibiotics were investigated: vancomycin, metronidazole, fusidic acid, nitazoxanide, teicoplanin, rifampin, rifaximin, bacitracin, cadazolid, LFF571, surotomycin and fidaxomicin. Most of the studies were active comparator studies comparing vancomycin with other antibiotics. One small study compared vancomycin to placebo. There were no other studies that compared antibiotic treatment to a placebo or a 'no treatment' control group. The risk of bias was rated as high for 17 of 22 included studies. Vancomycin was found to be more effective than metronidazole for achieving symptomatic cure. Seventy‐two per cent (318/444) of metronidazole patients achieved symptomatic cure compared to 79% (339/428) of vancomycin patients (RR 0.90, 95% CI 0.84 to 0.97; moderate quality evidence). Fidaxomicin was found to be more effective than vancomycin for achieving symptomatic cure. Seventy‐one per cent (407/572) of fidaxomicin patients achieved symptomatic cure compared to 61% (361/592) of vancomycin patients (RR 1.17, 95% CI 1.04 to 1.31; moderate quality evidence). Teicoplanin may be more effective than vancomycin for achieving a symptomatic cure. Eighty‐seven per cent (48/55) of teicoplanin patients achieved symptomatic cure compared to 73% (40/55) of vancomycin patients (RR 1.21, 95% CI 1.00 to 1.46; very low quality evidence). For other comparisons, including the one placebo‐controlled study, the quality of evidence was low or very low due to imprecision and in many cases high risk of bias because of attrition and lack of blinding. One hundred and forty deaths were reported in the studies, all of which were attributed by study authors to the co‐morbidities of the participants that led to acquiring CDI. Although many other adverse events were reported during therapy, these were attributed to the participants' co‐morbidities. 
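The risk ratios quoted above can be reproduced from the raw counts. A minimal sketch follows; the helper function is our own illustration, and it uses a simple single-study Wald interval on the log scale, so its confidence limits may differ in the last digit from the pooled meta-analytic intervals reported in the review:

```python
import math

def risk_ratio_ci(events_a: int, total_a: int,
                  events_b: int, total_b: int,
                  z: float = 1.96) -> tuple:
    """Risk ratio of group A versus group B with a Wald 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for a single 2x2 table.
    se_log = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Metronidazole (318/444 cured) versus vancomycin (339/428 cured): RR about 0.90.
rr, lo, hi = risk_ratio_ci(318, 444, 339, 428)
print(round(rr, 2), round(lo, 2), round(hi, 2))   # 0.9 0.84 0.98
```

The point estimate matches the reported RR 0.90, and the lower limit matches the reported 0.84; the upper limit (0.98 here versus 0.97 in the review) differs slightly because the review pools two trials rather than computing a single-table interval.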
The only adverse events directly attributed to study medication were rare nausea and transient elevation of liver enzymes. Recent cost data (July 2016) for a 10 day course of treatment shows that metronidazole 500 mg is the least expensive antibiotic with a cost of USD 13 (Health Warehouse). Vancomycin 125 mg costs USD 1779 (Walgreens for 56 tablets) compared to fidaxomicin 200 mg at USD 3453.83 or more (Optimer Pharmaceuticals) and teicoplanin at approximately USD 83.67 (GBP 71.40, British National Formulary). No firm conclusions can be drawn regarding the efficacy of antibiotic treatment in severe CDI as most studies excluded patients with severe disease. The lack of any 'no treatment' control studies does not allow for any conclusions regarding the need for antibiotic treatment in patients with mild CDI beyond withdrawal of the initiating antibiotic. Nonetheless, moderate quality evidence suggests that vancomycin is superior to metronidazole and fidaxomicin is superior to vancomycin. The differences in effectiveness between these antibiotics were not too large and the advantage of metronidazole is its far lower cost compared to the other two antibiotics. The quality of evidence for teicoplanin is very low. Adequately powered studies are needed to determine if teicoplanin performs as well as the other antibiotics. A trial comparing the two cheapest antibiotics, metronidazole and teicoplanin, would be of interest. |
t34 | We reviewed the evidence about the effect and safety of phytomedicines in people with sickle cell disease of all types, of any age, in any setting. Sickle cell disease is an inherited blood condition caused by defects in the production of haemoglobin. Haemoglobin is the part of the red blood cell that carries oxygen around the body. Sickle cell disease occurs when people inherit faulty genes responsible for producing haemoglobin from both parents. A variety of complications and a reduced life expectancy are linked with sickle cell disease. Phytomedicines are medicines derived from plants in their original state. People with sickle cell disease may encounter them as plant remedies from traditional healers. Their benefits have not been evaluated systematically. Laboratory work has long suggested that these medicines may help to ease the symptoms of sickle cell disease. Two trials (182 participants) and two phytomedicines, Niprisan ® (also known as Nicosan ® ) and Ciklavit ® , were included. This review found that Niprisan ® may help to reduce episodes of sickle cell disease crises associated with severe pain. Ciklavit ® , which has been reported to reduce painful crises in people with sickle cell disease, deserves further study before recommendations can be made regarding its use. The trial of Ciklavit ® also reported a possible adverse effect on the level of anaemia. No serious adverse symptoms or derangement of liver or kidney function were reported for either formulation. More detailed and larger trials of these medicines will need to be carried out before we can make any recommendations about their use. Further research should also assess long‐term outcome measures. | Sickle cell disease, a common recessively inherited haemoglobin disorder, affects people from sub‐Saharan Africa, the Middle East, Mediterranean basin, Indian subcontinent, Caribbean and South America. It is associated with complications and a reduced life expectancy. 
Phytomedicines (medicine derived from plants in their original state) encompass many of the plant remedies from traditional healers which the populations most affected would encounter. Laboratory research and limited clinical trials have suggested positive effects of phytomedicines both in vivo and in vitro. However, there has been little systematic appraisal of their benefits. This is an update of a Cochrane Review first published in 2004, and updated in 2010, 2013, and 2015. Objectives To assess the benefits and risks of phytomedicines in people with sickle cell disease of all types, of any age, in any setting. Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, the International Standard Randomised Controlled Trial Number Register (ISRCTN), the Allied and Complementary Medicine Database (AMED), ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). Dates of most recent searches: Cochrane Cystic Fibrosis and Genetic Disorders Haemoglobinopathies Trials Register: 10 April 2017; ISRCTN: 26 July 2017; AMED: 24 August 2017; ClinicalTrials.gov: 02 August 2017; and the WHO ICTRP: 27 July 2017. Selection criteria Randomised or quasi‐randomised trials with participants of all ages with sickle cell disease, in all settings, comparing the administration of phytomedicines, by any mode to placebo or conventional treatment, including blood transfusion and hydroxyurea. Data collection and analysis Both authors independently assessed trial quality and extracted data. Two trials (182 participants) and two phytomedicines, Niprisan ® (also known as Nicosan ® ) and Ciklavit ® , were included. The Phase IIB (pivotal) trial suggests that Niprisan ® was effective in reducing episodes of severe painful sickle cell disease crisis over a six‐month period (low‐quality evidence). 
It did not affect the risk of severe complications or the level of anaemia (low‐quality evidence). No serious adverse effects were reported. The single trial of Cajanus cajan (Ciklavit ® ) reported a possible benefit to individuals with painful crises (low‐quality evidence), and a possible adverse effect (non‐significant) on the level of anaemia (low‐quality evidence). While Niprisan ® appeared to be safe and effective in reducing severe painful crises over a six‐month follow‐up period, further trials are required to assess its role in the management of people with sickle cell disease and the results of its multicentre trials are awaited. Currently no conclusions can be made regarding the efficacy of Ciklavit ® . Based on the published results for Niprisan ® and in view of the limitations in data collection and analysis of both trials, phytomedicines may have a potential beneficial effect in reducing painful crises in sickle cell disease. This needs to be further validated in future trials. More trials are required on the safety and efficacy of phytomedicines used in managing sickle cell disease. |
t35 | An aneurysm is an abnormal localised widening (dilatation) of an artery. The most common place for such a dilatation is the abdominal aorta. This is the main artery linking the heart to the lower limbs and the organs of the abdomen, and a dilatation here is termed an abdominal aortic aneurysm (AAA). About 4% of men over 55 years of age have an AAA, but it is less common in women. Aneurysms over 55 mm in diameter carry a high risk of rupture, and rupture carries a high risk of death. To reduce the risks, screening programmes using ultrasound scanning have been introduced for selected groups in a number of countries. Patients with aneurysms over 55 mm are then evaluated for elective aneurysm repair. For aneurysms at or below the 55 mm cut‐off, the current treatment is 'watchful waiting', where the aneurysm is repeatedly scanned over time to see if it is enlarging. This review aimed to identify medical treatments which could slow or even reverse aneurysm growth, and thus delay or avoid the need for elective surgery. We identified seven trials involving 1558 participants where the aneurysm diameters of patients randomised to receive medical treatment were compared to those participants given a control medication or surveillance imaging alone. Four trials studied the effects of antibiotics on slowing aneurysm growth, and showed a small protective effect. Three trials studied the effects of beta‐blockers, and demonstrated a very small protective effect. Notably, the beta‐blocker drugs were associated with a large number of adverse effects. It was unclear whether either drug type delayed referral to aneurysm surgery. The accuracy of the results was limited by the low number of participants (especially important when trying to detect small changes in aneurysm growth rates) and some potentially damaging biases. | Screening for abdominal aortic aneurysm (AAA) in selected groups is now performed in England, the USA and Sweden. 
Patients with aneurysms over 55 mm in diameter are generally considered for elective surgical repair. Patients with aneurysm diameters below or equal to 55 mm (termed 'small AAAs') are managed with aneurysm surveillance as there is currently insufficient evidence to recommend surgery in these cases. As more patients are screened, there will be an increasing number of small AAAs identified. There is interest in pharmaceutical interventions (for example angiotensin converting enzyme (ACE) inhibitors, antibiotics, beta‐blockers, statins) which could be given to such patients to delay or reverse aneurysm expansion and reduce the need for elective surgical repair. Objectives To assess the effects of medical treatment on the expansion rate of small abdominal aortic aneurysms. Search methods The Cochrane Peripheral Vascular Diseases Group Trials Search Co‐ordinator searched the Specialised Register (May 2012) and CENTRAL (2012, Issue 5). Clinical trials databases were searched for details of ongoing or unpublished studies. The reference lists of articles retrieved by electronic searches were searched for additional citations. Selection criteria We selected randomised trials in which patients with small AAAs allocated to medical treatment with the intention of retarding aneurysm expansion were compared to patients allocated to a placebo treatment, alternative medical treatment, a different regimen of the same drug or imaging surveillance alone. Data collection and analysis Two authors independently extracted the data and assessed the risk of bias in the trials. Meta‐analyses were used when heterogeneity was considered low. The two primary outcomes were the mean difference (MD) in aneurysm diameter and the odds ratio (OR) calculated to compare the number of individuals referred to AAA surgery in each group over the trial period. 
Seven trials involving 1558 participants were included in this review; 457 were involved in four trials of antibiotic medication, and 1101 were involved in three trials of beta‐blocker medication. Five of the studies were rated at a high risk of bias. Individually, all of the included trials reported non‐significant differences in AAA expansion rates between their intervention and control groups. The two major drug groups were then analysed separately. For AAA expansion it was only possible to combine two of the antibiotic trials in a meta‐analysis. This demonstrated that roxithromycin had a small but significant protective effect (MD ‐0.86 mm; 95% confidence interval (CI) ‐1.57 to ‐0.14). When referral to AAA surgery was compared (including all four antibiotic trials in the meta‐analysis), non‐significantly fewer patients were referred in the intervention groups (OR 0.96; 95% CI 0.59 to 1.57) than the control groups. When only the trials reporting actual elective surgery were included in a subgroup analysis, the result remained statistically non‐significant (OR 1.17; 95% CI 0.57 to 2.42). For the beta‐blocker trials, when all were combined in a meta‐analysis, there was a very small, non‐significant protective effect for propranolol on AAA expansion (MD ‐0.08 mm; 95% CI ‐0.25 to 0.10), and non‐significantly fewer patients were referred to AAA surgery in the propranolol group (OR 0.74; 95% CI 0.52 to 1.05). Bronchospasm and shortness of breath were the main adverse effects from the beta‐blockers. In one trial the adverse effects were reportedly so severe that the trial was stopped early after two years. There is some limited evidence that antibiotic medication may have a slight protective effect in retarding the expansion rates of small AAAs. The quality of the evidence makes it unclear whether this translates into fewer referrals to AAA surgery, owing mainly to the small sample sizes of the studies. 
Antibiotics were generally well tolerated with minimal adverse effects. Propranolol was poorly tolerated by patients in all of the beta‐blocker trials and demonstrated only minimal and non‐significant protective effects. Further research on beta‐blockers for AAA needs to consider the use of drugs other than propranolol. In general, there is surprisingly little high quality evidence on medical treatment for small AAAs, especially in relation to the use of newer beta‐blockers, ACE inhibitors and statins. |
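The pooled figures in the AAA abstract (e.g. MD ‐0.86 mm, 95% CI ‐1.57 to ‐0.14 for roxithromycin) come from inverse‐variance meta‐analysis of per‐trial mean differences. A minimal sketch of fixed‐effect inverse‐variance pooling is below; the per‐trial inputs are hypothetical illustrative numbers, not values taken from the review.

```python
from math import sqrt

def pool_fixed_effect(estimates, z=1.96):
    """Fixed-effect inverse-variance pooling.
    `estimates` is a list of (effect, standard_error) pairs, one per trial."""
    weights = [1 / se ** 2 for _, se in estimates]          # weight = 1 / variance
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se

# Hypothetical per-trial mean differences in aneurysm diameter (mm) with SEs
md, lo, hi = pool_fixed_effect([(-0.9, 0.5), (-0.8, 0.6)])
print(f"MD {md:.2f} mm (95% CI {lo:.2f} to {hi:.2f})")
```

Trials with smaller standard errors (typically larger samples) dominate the pooled estimate, which is why the review's conclusions lean on the few adequately sized trials.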
t36 | Epilepsy is a disorder where recurrent seizures are caused by abnormal electrical discharges in the brain. People with epilepsy may present with various types of immunological abnormalities. Most seizures can be controlled by antiepileptic drugs, but sometimes seizures develop which are resistant to these drugs. People may require other types of treatment, such as intravenous immunoglobulins (IVIg). IVIg is a sterile, purified blood product extracted from the plasma of blood donors. IVIg treatment may represent a valuable approach and its efficacy has important implications for epilepsy management. This review assessed the efficacy of IVIg as a treatment for the control of epilepsy. Only one study (61 participants), which compared the treatment efficacy of IVIg as an add‐on with a placebo add‐on in patients with drug‐resistant epilepsy, was included. Results There is no convincing evidence to support the use of IVIg as a treatment for epilepsy and further randomised controlled trials are needed. Certainty of the evidence The included study was rated at low to unclear risk of bias. Using GRADE methodology, the certainty of the evidence was rated as low. This means that the true effect may be substantially different from what was found. | Epilepsy is a common neurological condition, with an estimated incidence of 50 per 100,000 persons. People with epilepsy may present with various types of immunological abnormalities, such as low serum immunoglobulin A (IgA) levels, lack of the immunoglobulin G (IgG) subclass and identification of certain types of antibodies. Intravenous immunoglobulin (IVIg) treatment may represent a valuable approach and its efficacy has important implications for epilepsy management. This is an update of a Cochrane review first published in 2011 and last updated in 2017. 
Objectives To examine the effects of IVIg on the frequency and duration of seizures, quality of life and adverse effects when used as monotherapy or as add‐on treatment for people with epilepsy. Search methods For the latest update, we searched the Cochrane Register of Studies (CRS Web) (20 December 2018), MEDLINE (Ovid, 1946 to 20 December 2018), Web of Science (1898 to 20 December 2018), ISRCTN registry (20 December 2018), WHO International Clinical Trials Registry Platform (ICTRP, 20 December 2018), the US National Institutes of Health ClinicalTrials.gov (20 December 2018), and reference lists of articles. Selection criteria Randomised or quasi‐randomised controlled trials of IVIg as monotherapy or add‐on treatment in people with epilepsy. Data collection and analysis Two review authors independently assessed the trials for inclusion and extracted data. We contacted study authors for additional information. Outcomes included percentage of people rendered seizure‐free, 50% or greater reduction in seizure frequency, adverse effects, treatment withdrawal and quality of life. We included one study (61 participants). The included study was a randomised, double‐blind, placebo‐controlled, multicentre trial which compared the treatment efficacy of IVIg as an add‐on with a placebo add‐on in patients with drug‐resistant epilepsy. Seizure freedom was not reported in the study. There was no significant difference between IVIg and placebo in 50% or greater reduction in seizure frequency (RR 1.89, 95% CI 0.85 to 4.21; one study, 58 participants; low‐certainty evidence). The study reported a statistically significant effect for global assessment in favour of IVIg (RR 3.29, 95% CI 1.13 to 9.57; one study, 60 participants; low‐certainty evidence). No adverse effects were demonstrated. We found no randomised controlled trials that investigated the effects of IVIg monotherapy for epilepsy. Overall, the included study was rated at low to unclear risk of bias. 
Using GRADE methodology, the certainty of the evidence was rated as low. We cannot draw any reliable conclusions regarding the efficacy of IVIg as a treatment for epilepsy. Further randomised controlled trials are needed. |
t37 | We reviewed the evidence for treatment with nebulised hypertonic saline compared to placebo or other agents for improving mucus clearance in the lungs of people with cystic fibrosis (CF). People with CF produce large amounts of thick mucus which is difficult to clear and blocks up their airways. Chest physiotherapy or medication (e.g. hypertonic saline), or both combined, are used to try to clear this mucus from the airways. Hypertonic saline is water with a concentration of 3% to 7% salt and is inhaled as a fine mist. Trial characteristics We included 17 trials with 966 participants with CF aged between 4 months and 63 years. Eleven trials compared hypertonic saline to isotonic saline (water with 0.12% to 0.9% salt, described as placebo (a dummy treatment)); one trial compared isotonic saline and voluntary cough to hypertonic saline or mannitol 300 mg; three trials compared hypertonic saline to rhDNase (Pulmozyme®); one trial compared hypertonic saline to amiloride; and one trial compared hypertonic saline to Mistabron®. Trials assessed different concentrations of hypertonic saline with different nebulisers and different treatment schedules; the most common treatment was twice‐daily 7% hypertonic saline and the most common nebuliser was ultrasonic. Most trials treated people with a bronchodilator to widen the airways before giving the hypertonic saline. Hypertonic saline 3% to 7% versus placebo In three trials (225 people) lung function improved after four weeks, but only one trial (164 people) reported results after 48 weeks, and showed no difference in lung function. One adult trial reported fewer exacerbations needing antibiotics with hypertonic saline than with placebo, but a trial in children found no difference in this outcome. There was not enough information to properly assess adverse events such as cough, chest tightness, tonsillitis and vomiting. In four trials (80 participants) sputum clearance was better with hypertonic saline. 
One trial in 132 adults with an exacerbation reported uncertain effects of hypertonic saline on short‐term lung function and the time to the next exacerbation after discharge from hospital. Side effects such as cough and wheeze were reported, but there were no serious side effects. Hypertonic saline versus mucus mobilising treatments We could analyse data from two of the three trials comparing hypertonic saline to rhDNase (61 participants). In one trial there was no difference in lung function at three weeks, but the second reported rhDNase led to a greater increase in lung function at 12 weeks in people with moderate to severe disease. One trial (47 participants) reported no difference in the number of exacerbations, but there was increased cough with hypertonic saline compared to rhDNase. There was not enough information to assess other side effects. One trial (12 participants) compared hypertonic saline to amiloride and one (n = 29) to Mistabron®. Neither trial found a difference between treatments in any measures of sputum clearance. The trial comparing hypertonic saline and Mistabron® also reported no differences in how many antibiotic courses were prescribed or in side effects. The trial comparing hypertonic saline to mannitol (12 participants) did not report lung function at relevant time points for this review; there were no differences in sputum clearance, but mannitol was reported to be more 'irritating'. | Impaired mucociliary clearance characterises lung disease in cystic fibrosis (CF). Hypertonic saline enhances mucociliary clearance and may lessen the destructive inflammatory process in the airways. This is an update of a previously published review. Objectives To investigate efficacy and tolerability of treatment with nebulised hypertonic saline on people with CF compared to placebo and or other treatments that enhance mucociliary clearance. 
Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Cystic Fibrosis Trials Register, comprising references identified from comprehensive electronic database searches, handsearches of relevant journals and abstract books of conference proceedings. We also searched ongoing trials databases. Date of most recent searches: 08 August 2018. Selection criteria Randomised and quasi‐randomised controlled trials assessing hypertonic saline compared to placebo or other mucolytic therapy, for any duration or dose regimen in people with CF (any age or disease severity). Data collection and analysis Two authors independently reviewed all identified trials and data, and assessed trial quality. A total of 17 trials (966 participants, aged 4 months to 63 years) were included; 19 trials were excluded, three trials are ongoing and 16 are awaiting classification. We judged 14 of the 17 included trials to have a high risk of bias due to participants' ability to discern the taste of the solutions. Hypertonic saline 3% to 7% versus placebo At four weeks, we found very low‐quality evidence from three placebo‐controlled trials (n = 225) that hypertonic saline (3% to 7%, 10 mL twice‐daily) increased the mean change from baseline of the forced expiratory volume at one second (FEV 1 ) (% predicted) by 3.44% (95% confidence interval (CI) 0.67 to 6.21), but there was no difference between groups in lung clearance index in one small trial (n = 10). By 48 weeks the effect was slightly smaller in one trial (n = 134), 2.31% (95% CI ‐2.72 to 7.34) (low‐quality evidence). No deaths occurred in the trials. Two trials reporting data on exacerbations were not combined as the age difference between the participants in the trials was too great. 
One trial (162 adults) found 0.5 fewer exacerbations requiring antibiotics per person in the hypertonic saline group; the second trial (243 children, average age of two years) found no difference between groups (low‐quality evidence). There was insufficient evidence reported across the trials to determine the rate of different adverse events such as cough, chest tightness, tonsillitis and vomiting (very low‐quality evidence). Four trials (n = 80) found very low‐quality evidence that sputum clearance was better with hypertonic saline. A further trial was performed in adults with an acute exacerbation of lung disease (n = 132). The effects of hypertonic saline on short‐term lung function, 5.10% higher (14.67% lower to 24.87% higher) and the time to the subsequent exacerbation post‐discharge, hazard ratio 0.86 (95% CI 0.57 to 1.30) are uncertain (low‐quality evidence). No deaths were reported. Cough and wheeze were reported but no serious adverse events (very low‐quality evidence). Hypertonic saline versus mucus mobilising treatments Three trials compared a similar dose of hypertonic saline to recombinant deoxyribonuclease (rhDNase); two (61 participants) provided data for inclusion in the review. There was insufficient evidence from one three‐week trial (14 participants) to determine the effects of hypertonic saline on FEV 1 % predicted, mean difference (MD) 1.60% (95% CI ‐7.96 to 11.16) (very low‐quality evidence). In the second trial, rhDNase led to a greater increase in FEV 1 % predicted than hypertonic saline (5 mL twice daily) at 12 weeks in participants with moderate to severe lung disease, MD 8.00% (95% CI 2.00 to 14.00) (low‐quality evidence). One cross‐over trial (47 participants) reported 15 exacerbations during treatment with hypertonic saline and 18 exacerbations in the rhDNase group (low‐quality evidence). 
Increased cough was reported in 13 participants using hypertonic saline and 17 on daily rhDNase in one cross‐over trial of 47 people (low‐quality evidence). There was insufficient evidence to assess rates of other adverse events reported. No deaths were reported. One trial (12 participants) compared hypertonic saline to amiloride and one (29 participants) to sodium‐2‐mercaptoethane sulphonate. Neither trial found a difference between treatments in any measures of sputum clearance; additionally the comparison of hypertonic saline and sodium‐2‐mercaptoethane sulphonate reported no differences in courses of antibiotics or adverse events (very low‐quality evidence). One trial (12 participants) compared hypertonic saline to mannitol but did not report lung function at relevant time points for this review; there were no differences in sputum clearance, but mannitol was reported to be more 'irritating' (very low‐quality evidence). Regular use of nebulised hypertonic saline by adults and children over the age of 12 years with CF results in an improvement in lung function after four weeks (very low‐quality evidence from three trials), but this was not sustained at 48 weeks (low‐quality evidence from one trial). The review did show that nebulised hypertonic saline reduced the frequency of pulmonary exacerbations (although we found insufficient evidence for this outcome in children under six years of age) and may have a small effect on improvement in quality of life in adults. Evidence from one small cross‐over trial in children indicates that rhDNase may lead to better lung function at three months; qualifying this, we highlight that while the study did demonstrate that the improvement in FEV 1 was greater with daily rhDNase, there were no differences seen in any of the secondary outcomes. Hypertonic saline does appear to be an effective adjunct to physiotherapy during acute exacerbations of lung disease in adults. 
However, for the outcomes assessed, the quality of the evidence ranged from very low to at best moderate, according to the GRADE criteria. |
t38 | Deep vein thrombosis (DVT) is a condition in which a blood clot forms in the deep vein of the leg or pelvis. It affects approximately 1 in 1000 people. If it is not treated, the clot can travel in the blood, and block the arteries in the lungs. This life‐threatening condition is called a pulmonary embolism and occurs in approximately 3 to 4 in 10,000 people. Another complication of DVT is post‐thrombotic syndrome (PTS), a condition in which the patient suffers pain, swelling, and changes in the skin of the leg, which can lead to an ulcer. This causes significant disability and diminished quality of life, and is costly to the healthcare system. One way to prevent another blood clot or PTS is to remove the clot. A catheter can be inserted into the vein and the clot removed directly (mechanical thrombectomy), the clot can be broken down through the use of drugs infused into a vein in the foot or directly at the site of the clot using a catheter and X‐ray control (thrombolysis), or a combination of the two procedures can be used (pharmacomechanical thrombectomy). This review aimed to measure how safe and effective pharmacomechanical thrombectomy is, compared to other techniques. There were no randomised controlled trials that met the inclusion criteria of this review (current until December 2015). At present, there is a lack of randomised controlled trials that examine the comparative effectiveness and safety of pharmacomechanical thrombectomy in the management of patients with DVT. Conclusion Further research is required before conclusions can be made. | Deep venous thrombosis (DVT) occurs in approximately one in 1000 adults every year, and has an annual mortality of 14.6%. In particular, iliofemoral DVT can lead to recurrent thrombosis and post‐thrombotic syndrome (PTS), a painful condition which can lead to chronic venous insufficiency, oedema, and ulceration. It causes significant disability, impaired quality of life, and economic burden. 
Early thrombus removal techniques have been advocated in patients with an iliofemoral DVT in order to improve vein patency, prevent valvular dysfunction, and reduce future complications, such as post‐thrombotic syndrome and venous ulceration. One such technique is pharmacomechanical thrombectomy, a combination of catheter‐based thrombectomy and catheter‐directed thrombolysis. Objectives To assess the effects of pharmacomechanical thrombectomy versus anticoagulation (alone or with compression stockings), mechanical thrombectomy, thrombolysis, or other endovascular techniques in the management of people with acute DVT of the iliofemoral vein. Search methods The Cochrane Vascular Information Specialist searched the Specialised Register (last searched December 2015) and the Cochrane Register of Studies (last searched December 2015). We searched clinical trials databases for details of ongoing or unpublished studies and the reference lists of relevant articles retrieved by electronic searches for additional citations. Selection criteria Randomised controlled trials in which patients with an iliofemoral deep vein thrombosis were allocated to receive pharmacomechanical thrombectomy versus anticoagulation, mechanical thrombectomy, thrombolysis (systemic or catheter directed thrombolysis), or other endovascular techniques for the treatment of iliofemoral DVT. Data collection and analysis At least two review authors independently assessed studies identified for potential inclusion. We found no randomised controlled trials that met the eligibility criteria for this review. We identified one ongoing study. There were no randomised controlled trials that assessed the effects of pharmacomechanical thrombectomy versus anticoagulation (alone or with compression stockings), mechanical thrombectomy, thrombolysis, or other endovascular techniques in the management of people with acute DVT of the iliofemoral vein that met the eligibility criteria for this review. 
Further high quality randomised controlled trials are needed. |
t39 | Allowing preterm infants to receive blood from the placenta after birth and before clamping the umbilical cord has health benefits for the baby and is not harmful for the mother. Most babies will start breathing or crying (or both) before the cord is clamped. However, some babies do not establish regular breathing during this time. After clamping the cord, most preterm babies are given some form of breathing support like continuous positive airway pressure (CPAP). CPAP applies continuous low air pressure to keep the airways open in babies who can breathe on their own. The question for this review was whether it was beneficial to start the breathing support before the cord is clamped. Preterm infants born before 32 weeks' gestation (32 weeks from the first day of the woman's last period (menstruation) to the current date) who had clamping of the umbilical cord delayed for 60 seconds after birth were randomly assigned either to a group that received breathing support or to a group that did not. The breathing support was given after birth of the baby and before the cord was clamped. Breathing support was the use of CPAP for infants breathing on their own or applying intermittent airway pressure to expand the lungs in babies not breathing well on their own. Most of the study infants (83%) were delivered by caesarean section. Key results: the single study included in the review did not provide sufficient evidence either for or against the use of breathing support before cord clamping. | Placental transfusion (by means of delayed cord clamping (DCC), cord milking, or cord stripping) confers benefits for preterm infants. It is not known if providing respiratory support to preterm infants before cord clamping improves outcomes. 
Objectives To assess the efficacy and safety of respiratory support provided during DCC compared with no respiratory support during placental transfusion (in the form of DCC, milking, or stripping) in preterm infants immediately after delivery. Search methods We used the standard search strategy of Cochrane Neonatal to search the Cochrane Central Register of Controlled Trials (CENTRAL, 2017, Issue 5), MEDLINE via PubMed (1966 to 19 June 2017), Embase (1980 to 19 June 2017), and CINAHL (1982 to 19 June 2017). We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomized controlled trials and quasi‐randomized trials. Selection criteria Randomized, cluster randomized, or quasi‐randomized controlled trials enrolling preterm infants undergoing DCC, where one of the groups received respiratory support before cord clamping and the control group received no respiratory support before cord clamping. Data collection and analysis All review authors assisted with data collection, assessment, and extraction. Two review authors assessed the quality of evidence using the GRADE approach. We contacted study authors to request missing information. One study fulfilled the review criteria. In this study, 150 preterm infants of less than 32 weeks' gestation undergoing 60 second DCC were randomized to a group who received respiratory support in the form of continuous positive airway pressure (CPAP) or positive pressure ventilation during DCC and a group that did not receive respiratory support during the procedure. Mortality during hospital admission was not significantly different between groups with wide confidence intervals (CI) for magnitude of effect (risk ratio (RR) 1.67, 95% CI 0.41 to 6.73). The study did not report neurodevelopmental disability and death or disability at two to three years of age. 
There were no significant differences between groups in condition at birth (Apgar scores or intubation in the delivery room), use of inotropic agents (RR 1.25, CI 0.63 to 2.49), and receipt of blood transfusion (RR 1.03, 95% CI 0.70 to 1.54). In addition, there were no significant differences in the incidences of any intraventricular haemorrhage (RR 1.50, 95% CI 0.65 to 3.46) and severe intraventricular haemorrhage (RR 1.33, 95% CI 0.31 to 5.75). Several continuous variables were reported in subgroups depending on method of delivery. Unpublished data for each group as a whole were made available and showed that peak haematocrit in the first 24 hours and duration of phototherapy did not differ significantly between groups. Overall, the quality of evidence for several key neonatal outcomes (e.g. mortality and intraventricular haemorrhage) was low because of lack of precision with wide CIs. The results from one study with wide CIs for magnitude of effect do not provide evidence either for or against the use of respiratory support before clamping the umbilical cord. A greater body of evidence is required as many of the outcomes of interest to the review occurred infrequently. Similarly, the one included study cannot answer the question of whether the intervention is or is not harmful. |
t40 | The aim of this Cochrane review was to find out if acupuncture improves pain and function in people with hip osteoarthritis. We collected and analyzed all relevant studies to answer this question and found 6 relevant studies with 413 people. Key messages In people with hip osteoarthritis, at close to 8 weeks: Acupuncture probably results in little or no difference in pain or function compared to sham acupuncture. Acupuncture plus routine primary physician care may improve pain and function compared to routine primary physician care alone. We are uncertain whether acupuncture improves pain and function compared to either advice plus exercise or NSAIDs. We are uncertain whether acupuncture plus patient education improves pain or function compared to patient education alone. Osteoarthritis (OA) is a disease of the joints, and the hip is the second most commonly affected joint. Some drug therapies commonly used for treating hip OA have a risk for side effects. Therefore, it is important to evaluate the effectiveness and safety of non‐drug therapies, including acupuncture. According to traditional acupuncture theory, stimulating the appropriate acupuncture points in the body by inserting very thin needles can reduce pain or improve function. In clinical trials, sham acupuncture is intended to be a placebo for true acupuncture. In sham acupuncture, the patient believes he or she is receiving true acupuncture, but the needles either do not penetrate the skin or are not placed at the correct places on the body, or both. The purpose of the sham acupuncture control is to determine whether improvements from acupuncture are due to patient beliefs in acupuncture, rather than the specific biological effects of acupuncture. However, there is controversy about sham acupuncture. It is believed that some types of sham acupuncture may produce effects that are similar to the effects of true acupuncture. 
After searching for all relevant trials published up to March 2018, we found 6 trials with 413 people. All trials included primarily older participants, with mean age range from 61‐67 years, and mean duration of hip OA pain from 2‐8 years. About two‐thirds of participants were women. Two of the included trials compared acupuncture to sham acupuncture. These two sham‐controlled trials were small but well designed, and of generally high methodological quality. The sham acupuncture control interventions were judged believable, but each sham acupuncture intervention was also judged to have a risk of weak acupuncture‐specific effects. This was due to placement of non‐penetrating needles at the correct acupuncture points in one trial, and use of penetrating needles not inserted at the correct points in the other. A meta‐analysis of these two trials gave moderate‐quality evidence of little or no difference in pain reduction or functional improvement for true acupuncture relative to sham acupuncture. People who received true acupuncture had slight and non‐significant improvements on both pain and function outcomes (a 2‐point greater improvement on a scale of 0‐100 for each), compared to those who received sham acupuncture. Due to the small sample size in the studies, the confidence intervals include both the possibility of moderate benefits and the possibility of no effect of acupuncture. One unblinded trial gave low‐quality evidence that acupuncture as an addition to routine primary physician care is associated with benefits on pain, function, and the physical component of quality of life (but not the mental component). However, these reports of benefits in trial participants who received the additional acupuncture are likely due at least partially to their a priori expectations of a benefit, or their preference to get randomized to acupuncture. Evidence from the 3 other unblinded trials was uncertain.
Possible side effects of acupuncture treatment included minor bruising and bleeding at the site of needle insertion, which were reported in 2 trials. Four trials reported on adverse events, and none reported any serious adverse events attributed to acupuncture. | Hip osteoarthritis (OA) is a major cause of pain and functional limitation. Few hip OA treatments have been evaluated for safety and effectiveness. Acupuncture is a traditional Chinese medical therapy which aims to treat disease by inserting very thin needles at specific points on the body. Objectives To assess the benefits and harms of acupuncture in patients with hip OA. Search methods We searched Cochrane CENTRAL, MEDLINE, and Embase all through March 2018. Selection criteria We included randomized controlled trials (RCTs) that compared acupuncture with sham acupuncture, another active treatment, or no specific treatment; and RCTs that evaluated acupuncture as an addition to another treatment. Major outcomes were pain and function at the short term (i.e. < 3 months after randomization) and adverse events. Data collection and analysis We used standard methodological procedures expected by Cochrane. Six RCTs with 413 participants were included. Four RCTs included only people with OA of the hip, and two included a mix of people with OA of the hip and knee. All RCTs included primarily older participants, with a mean age range from 61 to 67 years, and a mean duration of hip OA pain from two to eight years. Approximately two‐thirds of participants were women. Two RCTs compared acupuncture versus sham acupuncture; the other four RCTs were not blinded. All results were evaluated at short term (i.e. four to nine weeks after randomization). 
In the two RCTs that compared acupuncture to sham acupuncture, the sham acupuncture control interventions were judged believable, but each sham acupuncture intervention was also judged to have a risk of weak acupuncture‐specific effects, due to placement of non‐penetrating needles at the correct acupuncture points in one RCT, and the use of penetrating needles not inserted at the correct points in the other RCT. For these two sham‐controlled RCTs, the risk of bias was low for all outcomes. The combined analysis of two sham‐controlled RCTs gave moderate quality evidence of little or no effect in reduction in pain for acupuncture relative to sham acupuncture. Due to the small sample sizes in the studies, the confidence interval includes both the possibility of moderate benefit and the possibility of no effect of acupuncture (120 participants; Standardized Mean Difference (SMD) ‐0.13, (95% Confidence Interval (CI) ‐0.49 to 0.22); 2.1 points greater improvement with acupuncture compared to sham acupuncture on 100 point scale (i.e., absolute percent change ‐2.1% (95% CI ‐7.9% to 3.6%)); relative percent change ‐4.1% (95% CI ‐15.6% to 7.0%)). Estimates of effect were similar for function (120 participants; SMD ‐0.15, (95% CI ‐0.51 to 0.21)). No pooled estimate, representative of the two sham‐controlled RCTs, could be calculated or reported for the quality of life outcome. The four other RCTs were unblinded comparative effectiveness RCTs, which compared (additional) acupuncture to four different active control treatments. 
There was low quality evidence that addition of acupuncture to the routine primary care that RCT participants were receiving from their physicians was associated with statistically significant and clinically relevant benefits, compared to the routine primary physician care alone, in pain (1 RCT; 137 participants; mean percent difference ‐22.9% (95% CI ‐29.2% to ‐16.6%); relative percent difference ‐46.5% (95% CI ‐59.3% to ‐33.7%)) and function (mean percent difference ‐19.0% (95% CI ‐24.41 to ‐13.59); relative percent difference ‐38.6% (95% CI ‐49.6% to ‐27.6%)). There was no statistically significant difference for mental quality of life and acupuncture showed a small, significant benefit for physical quality of life. The effects of acupuncture compared with either advice plus exercise or NSAIDs are uncertain. We are also uncertain whether acupuncture plus patient education improves pain, function, and quality of life, when compared to patient education alone. In general, the overall quality of the evidence for the four comparative effectiveness RCTs was low to very low, mainly due to the potential for biased reporting of patient‐assessed outcomes due to lack of blinding and sparse data. Information on safety was reported in four RCTs. Two RCTs reported minor side effects of acupuncture, which were primarily minor bruising, bleeding, or pain at needle insertion sites. Four RCTs reported on adverse events, and none reported any serious adverse events attributed to acupuncture. Acupuncture probably has little or no effect in reducing pain or improving function relative to sham acupuncture in people with hip osteoarthritis. Due to the small sample size in the studies, the confidence intervals include both the possibility of moderate benefits and the possibility of no effect of acupuncture. One unblinded trial found that acupuncture as an addition to routine primary physician care was associated with benefits on pain and function. 
However, these reported benefits are likely due at least partially to RCT participants' greater expectations of benefit from acupuncture. Possible side effects associated with acupuncture treatment were minor. |
t41 | Telemedicine uses information technology so that doctors or nurses can communicate with their patients when they are not in the same room. The parents of sick infants who are treated in neonatal intensive care units require a lot of support when their child is ill and when they are taking their baby home. Telemedicine may be able to help the doctors and nurses to improve provision of support to the parents. This review identified one trial, which did not show that telemedicine alters the length of time these infants stay in hospital. However, imprecision in the published data makes it difficult to make firm recommendations either for or against telemedicine. | Telemedicine is the use of electronic communications technology to provide care for patients when distance separates the practitioner and the patient. As the parents and families of infants admitted to the NICU require major support from health professionals in terms of information and time, telemedicine has the potential to increase this support. Objectives To evaluate if the use of telemedicine technology to support families of newborn infants receiving intensive care affects the length of hospital stay and parental/family satisfaction. Search methods We searched the following databases: Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, 2011, Issue 8), MEDLINE (from 1966 to September 2011), EMBASE (1980 to September 2011). We also searched ClinicalTrials.gov (http://www.clinicaltrials.gov) and the EudraCT (http://eudract.emea.eu.int) web sites. We searched the proceedings of conferences of the Canadian Society of Telehealth, American Telemedicine Association, the International Society for Telemedicine, the Annual Conference of The International e‐Health Association, American Medical Informatics Association and MedInfo.
Selection criteria We attempted to identify randomised controlled trials that assessed the use of telemedicine designed to support parents of infants cared for in a Neonatal Intensive Care Unit (NICU) compared with standard support measures. Our primary outcome was the length of hospital stay, and secondary outcomes included parental and staff satisfaction, emergency hospital visits post‐discharge and family utilisation of infant health‐related resources. Data collection and analysis Two review authors independently screened the studies, extracted the data and assessed the risk of bias of the one included study using the standard methods of the Cochrane Neonatal Review Group. We planned to express treatment effects as risk ratio (RR), risk difference (RD), number needed to treat (NNT) and mean difference (MD) where appropriate, using a fixed‐effect model. A single study was included for analysis in this review. This study compared the use of telemedicine (Baby Carelink) for parents and families of infants in the NICU with a control group without access to this programme and assessed the length of hospital stay for the infants and family satisfaction in multiple components of infant care. The study showed no difference in the length of hospital stay (average length of stay: telemedicine group: 68.5 days (standard deviation (SD) 28.3 days), control group: 70.6 days (SD 35.6 days), MD ‐2.10 days (95% confidence interval: ‐18.85 to 14.65 days)). There was insufficient information for further analysis of measures of family satisfaction. There is insufficient evidence to support or refute the use of telemedicine technology to support the parents of high‐risk newborn infants receiving intensive care. Clinical trials are needed to assess the application of telemedicine to support parents and families of infants in NICU with length of hospital stay and their perception of NICU care as the major outcomes. |
t42 | Cataract is a clouding of the lens in the eye, which most commonly occurs due to increasing age. This can only be treated with an operation, and the aim of this review was to assess two different surgical methods. The first, called manual small incision cataract surgery (MSICS) involves using instruments to remove the lens from the eye through a small incision. The second, phacoemulsification, involves using a high frequency ultrasound probe to fragment the lens, and this machine also removes the lens fragments from the eye. We searched the literature in July 2013 and identified eight randomised controlled trials that compared these two techniques. These included a total of 1708 participants randomly allocated to MSICS or phacoemulsification. The studies were carried out in India, Nepal and South Africa. Not all studies reported the outcomes of visual acuity that we aimed to assess, making it difficult to draw definite conclusions. Better uncorrected visual acuity was seen in the short term with phacoemulsification; however, there were no differences in best‐corrected visual acuity (i.e. after correction with spectacles). There appeared to be no significant difference regarding uncorrected visual acuity between the two techniques at six months in the one trial that reported at that time point. There was a lack of long‐term data (one year or more after surgery). Very few participants were reported to have poor visual outcomes or complications (such as posterior capsule rupture) from the surgery. The cost of phacoemulsification was documented in one study only, and this was more than four times the cost of MSICS. In this setting, the two techniques appear to be comparable in terms of visual acuity outcomes and complications. However further studies with a longer follow‐up period are needed to better assess these outcomes. | Age‐related cataract is a major cause of blindness and visual morbidity worldwide. 
It is therefore important to establish the optimal technique of lens removal in cataract surgery. Objectives To compare manual small incision cataract surgery (MSICS) and phacoemulsification techniques. Search methods We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (The Cochrane Library 2013, Issue 6), Ovid MEDLINE, Ovid MEDLINE In‐Process and Other Non‐Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to July 2013), EMBASE (January 1980 to July 2013), Latin American and Caribbean Literature on Health Sciences (LILACS) (January 1982 to July 2013), Web of Science Conference Proceedings Citation Index ‐ Science (CPCI‐S) (January 1970 to July 2013), the metaRegister of Controlled Trials (mRCT) (www.controlled‐trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 23 July 2013. Selection criteria We included randomised controlled trials (RCTs) for age‐related cataract that compared MSICS and phacoemulsification. Data collection and analysis Two authors independently assessed all studies. We defined two primary outcomes: 'good functional vision' (presenting visual acuity of 6/12 or better) and 'poor visual outcome' (best corrected visual acuity of less than 6/60). We collected data on these outcomes at three and 12 months after surgery. Complications such as posterior capsule rupture rates and other intra‐ and postoperative complications were also assessed. In addition, we examined cost effectiveness of the two techniques. Where appropriate, we pooled data using a random‐effects model. We included eight trials in this review with a total of 1708 participants. Trials were conducted in India, Nepal and South Africa.
Follow‐up ranged from one day to six months, but most trials reported at six to eight weeks after surgery. Overall the trials were judged to be at risk of bias due to unclear reporting of masking and follow‐up. No studies reported presenting visual acuity so data were collected on both best‐corrected (BCVA) and uncorrected (UCVA) visual acuity. Most studies reported visual acuity of 6/18 or better (rather than 6/12 or better) so this was used as an indicator of good functional vision. Seven studies (1223 participants) reported BCVA of 6/18 or better at six to eight weeks (pooled risk ratio (RR) 0.99, 95% confidence interval (CI) 0.98 to 1.01) indicating no difference between the MSICS and phacoemulsification groups. Three studies (767 participants) reported UCVA of 6/18 or better at six to eight weeks, with a pooled RR indicating a more favourable outcome with phacoemulsification (0.90, 95% CI 0.84 to 0.96). One trial (96 participants) reported UCVA at six months with a RR of 1.07 (95% CI 0.91 to 1.26). Regarding BCVA of less than 6/60: there were only 11/1223 events reported. The pooled Peto odds ratio was 2.48, indicating a more favourable outcome with phacoemulsification, but with wide confidence intervals (0.74 to 8.28), which means that we are uncertain of the true effect. The number of complications reported was also low for both techniques. Again this means the review is underpowered to detect a difference between the two techniques with respect to these complications. One study reported on cost, which was more than four times higher with phacoemulsification than with MSICS. On the basis of this review, removing cataract by phacoemulsification may result in better UCVA in the short term (up to three months after surgery) compared to MSICS, but similar BCVA. There is a lack of data on long‐term visual outcome. The review is currently underpowered to detect differences for rarer outcomes, including poor visual outcome.
In view of the lower cost of MSICS, this may be a favourable technique in the patient populations examined in these studies, where high volume surgery is a priority. Further studies are required with longer‐term follow‐up to better assess visual outcomes and complications which may develop over time such as posterior capsule opacification. |
t43 | Folate is an essential vitamin that is needed to make and repair DNA and for cell division. It has two main forms: folate, the natural form found in foods, and folic acid, the form that is used in supplements and fortified foods. Wheat and maize (corn) flour are staple crops consumed widely throughout the world. Fortification (i.e. the addition of vitamins and minerals to foods, to increase their nutritional value) of wheat or maize flour with folic acid has been introduced in over 80 countries to prevent neural tube defects among women of reproductive age. However, no previous systematic reviews have been conducted to evaluate the effects of folic acid‐fortified flour on folate status or other health outcomes in the general population. This review aimed to determine the benefits and safety of fortification of wheat and maize flour with folic acid (i.e. alone or with other vitamins and minerals), compared to wheat or maize flour without folic acid (or no intervention), on folate status and different measures of health in the general population. Six studies were conducted in upper‐middle‐income countries (China, Mexico, South Africa), one study was conducted in a lower‐middle‐income country (Bangladesh), and three studies were conducted in a high‐income country (Canada). Seven studies examined the effects of wheat flour fortified with folic acid alone (3 studies) or with other micronutrients (4 studies). Three studies assessed the effects of maize flour fortified with folic acid alone (1 study) or with other micronutrients (2 studies). Fortification of wheat flour with folic acid may reduce the likelihood of neural tube defects. Fortification of wheat or maize flour with folic acid (i.e. alone or with other vitamins and minerals) may increase folate status. | Folate is a B‐vitamin required for DNA synthesis, methylation, and cellular division.
Wheat and maize (corn) flour are staple crops consumed widely throughout the world and have been fortified with folic acid in over 80 countries to prevent neural tube defects. Folic acid fortification may be an effective strategy to improve folate status and other health outcomes in the overall population. Objectives To evaluate the health benefits and safety of folic acid fortification of wheat and maize flour (i.e. alone or in combination with other micronutrients) on folate status and health outcomes in the overall population, compared to wheat or maize flour without folic acid (or no intervention). Search methods We searched the following databases in March and May 2018: Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and MEDLINE In Process, Embase, CINAHL, Web of Science (SSCI, SCI), BIOSIS, Popline, Bibliomap, TRoPHI, ASSIA, IBECS, SCIELO, Global Index Medicus‐AFRO and EMRO, LILACS, PAHO, WHOLIS, WPRO, IMSEAR, IndMED, and Native Health Research Database. We searched the International Clinical Trials Registry Platform and ClinicalTrials.gov for ongoing or planned studies in June 2018, and contacted authors for further information. Selection criteria We included randomised controlled trials (RCTs), with randomisation at the individual or cluster level. We also included non‐RCTs and prospective observational studies with a control group; these studies were not included in meta‐analyses, although their characteristics and findings were described. Interventions included wheat or maize flour fortified with folic acid (i.e. alone or in combination with other micronutrients), compared to unfortified flour (or no intervention). Participants were individuals over two years of age (including pregnant and lactating women), from any country. Data collection and analysis Two review authors independently assessed study eligibility, extracted data, and assessed risk of bias. 
We included 10 studies: four provided data for quantitative analyses (437 participants); five studies were randomised trials (1182 participants); three studies were non‐RCTs (1181 participants, 8037 live births); two studies were interrupted time series (ITS) studies (1 study with a population of 2,242,438; population unreported in the other). Six studies were conducted in upper‐middle‐income countries (China, Mexico, South Africa), one study was conducted in a lower‐middle‐income country (Bangladesh), and three studies were conducted in a high‐income country (Canada). Seven studies examined wheat flour fortified with folic acid alone or with other micronutrients. Three studies included maize flour fortified with folic acid alone or with other micronutrients. The duration of interventions ranged from two weeks to 36 months, and the ITS studies included postfortification periods of up to seven years. Most studies had unclear risk of bias for randomisation, blinding, and reporting, and low/unclear risk of bias for attrition and contamination. Neural tube defects: none of the included RCTs reported neural tube defects as an outcome. In one non‐RCT, wheat flour fortified with folic acid and other micronutrients was associated with significantly lower occurrence of total neural tube defects, spina bifida, and encephalocoele, but not anencephaly, compared to unfortified flour (total neural tube defects risk ratio (RR) 0.32, 95% confidence interval (CI) 0.21 to 0.48; 1 study, 8037 births; low‐certainty evidence). Folate status: pregnant women who received folic acid‐fortified maize porridge had significantly higher erythrocyte folate concentrations (mean difference (MD) 238.90 nmol/L, 95% CI 149.40 to 328.40; 1 study, 38 participants; very low‐certainty evidence) and higher plasma folate (MD 14.98 nmol/L, 95% CI 9.63 to 20.33; 1 study, 38 participants; very low‐certainty evidence), compared to no intervention.
Women of reproductive age consuming maize flour fortified with folic acid and other micronutrients did not have higher erythrocyte folate (MD ‐61.80 nmol/L, 95% CI ‐152.98 to 29.38; 1 study, 35 participants; very low‐certainty evidence) or plasma folate (MD 0.00 nmol/L, 95% CI ‐0.00 to 0.00; 1 study, 35 participants; very low‐certainty evidence) concentrations, compared to women consuming unfortified maize flour. Adults consuming folic acid‐fortified wheat flour bread rolls had higher erythrocyte folate (MD 0.66 nmol/L, 95% CI 0.13 to 1.19; 1 study, 30 participants; very low‐certainty evidence) and plasma folate (MD 27.00 nmol/L, 95% CI 15.63 to 38.37; 1 study, 30 participants; very low‐certainty evidence), versus unfortified flour. In two non‐RCTs, serum folate concentrations were significantly higher among women who consumed flour fortified with folic acid and other micronutrients compared to women who consumed unfortified flour (MD 2.92 nmol/L, 95% CI 1.99 to 3.85; 2 studies, 657 participants; very low‐certainty evidence). Haemoglobin or anaemia: in a cluster‐randomised trial among children, there were no significant effects of fortified wheat flour flatbread on haemoglobin concentrations (MD 0.00 nmol/L, 95% CI ‐2.08 to 2.08; 1 study, 334 participants; low‐certainty evidence) or anaemia (RR 1.07, 95% CI 0.74 to 1.55; 1 study, 334 participants; low‐certainty evidence), compared to unfortified wheat flour flatbread. Fortification of wheat flour with folic acid may reduce the risk of neural tube defects; however, this outcome was only reported in one non‐RCT. Fortification of wheat or maize flour with folic acid (i.e. alone or with other micronutrients) may increase erythrocyte and serum/plasma folate concentrations. Evidence is limited for the effects of folic acid‐fortified wheat or maize flour on haemoglobin levels or anaemia. The effects of folic acid fortification of wheat or maize flour on other primary outcomes assessed in this review are not known.
No studies reported on the occurrence of adverse effects. Limitations of this review were the small number of studies and participants, weaknesses in study design, and the low certainty of evidence due to how the included studies were designed and reported. |
t44 | There is insufficient evidence to support the use of angioplasty for intracranial artery stenosis. Narrowing of the arteries inside the skull is a significant cause of stroke. Medical treatment for prevention consists of the control of risk factors such as high blood pressure, diabetes, and high cholesterol. Blood thinners are also used, but none has been demonstrated to be superior to another. Angioplasty, a procedure for opening narrowed arteries by means of a balloon or stent, is feasible but its safety and efficacy are not known. This review found no randomised controlled trials and no evidence to support the use of this procedure in routine practice. More research is needed to establish the role of this procedure in the treatment of this disease. | Intracranial artery stenosis causes up to 10% of all ischaemic strokes. The rate of recurrent vascular ischaemic events is very high. Angioplasty with or without stent placement is a feasible procedure to dilate the vessel affected. However, its safety and efficacy have not been systematically studied. Objectives To determine the efficacy and safety of angioplasty combined with best medical treatment compared with best medical treatment alone in patients with acute ischaemic stroke or transient ischaemic attack (TIA) resulting from intracranial artery stenosis for preventing recurrent ischaemic strokes, death, and vascular events. Search methods We searched the Cochrane Stroke Group Trials Register (last searched March 2006). In addition we searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 1, 2006), MEDLINE (1966 to March 2006), EMBASE (1980 to February 2006) and Science Citation Index (1945 to March 2006). To identify further published, unpublished and ongoing trials we searched reference lists of relevant articles and contacted authors and experts in the field.
Selection criteria Randomised or otherwise controlled studies comparing best medical care plus angioplasty of the intracranial cerebral arteries, with or without stent placement, with best medical care alone. Studies were only included if data for clinically significant endpoints such as recurrent ischaemic stroke, haemorrhagic stroke and death were available. Data collection and analysis Two review authors selected trials for inclusion, and independently assessed trial quality and extracted data. We calculated relative treatment effects, with subgroup analysis, where possible. No randomised controlled trials were found. There were 79 articles of interest consisting of open‐label case series with three or more cases. The safety profile of the procedure showed an overall perioperative rate of stroke of 7.9% (95% confidence intervals (CI) 5.5% to 10.4%), perioperative death of 3.4% (95% CI 2.0% to 4.8%), and perioperative stroke or death of 9.5% (95% CI 7.0% to 12.0%). No comments can be made on the effectiveness of the procedure. At present there is insufficient evidence to recommend angioplasty with or without stent placement in routine practice for the prevention of stroke in patients with intracranial artery stenosis. The descriptive studies show that the procedure is feasible, although it carries a significant morbidity and mortality risk. Evidence from randomised controlled trials is needed to assess the safety of angioplasty and its effectiveness in preventing recurrent stroke. |
t45 | We examine research on the effect of needle syringe programmes (NSP) and opioid substitution treatment (OST) in reducing the risk of becoming infected with the hepatitis C virus. There are around 114.9 million people living with hepatitis C and 3 to 4 million people newly infected each year. The main risk for becoming infected is sharing used needles/syringes. Almost half the people who inject drugs have hepatitis C. The provision of sterile injecting equipment through NSPs reduces the need for sharing equipment when preparing and injecting drugs. OST is taken orally and reduces frequency of injection and unsafe injecting practices. We examined whether NSP and OST, provided alone or together, are effective in reducing the chances of becoming infected with hepatitis C in people who inject drugs. We identified 28 research studies across Europe, Australia, North America and China. On average across the studies, the rate of new hepatitis C infections per year was 19.0 for every 100 people. Data from 11,070 people who inject drugs who were not infected with hepatitis C at the start of the study were combined in the analysis. Of the sample, 32% were female, 50% injected opioids, 51% injected daily, and 40% had been homeless. Our study was funded by the National Institute of Health Research's (NIHR) Public Health Research Programme, the Health Protection Research Unit in Evaluation of Interventions, and the European Commission Drug Prevention and Information Programme (DIPP), Treatment as Prevention in Europe: Model Projections. Current use of OST (defined as use at the time of survey or within the previous six months) may reduce risk of acquiring hepatitis C by 50%. 
We are uncertain whether high coverage NSP (defined as regular attendance at an NSP or all injections being covered by a new needle/syringe) reduces the risk of becoming infected with hepatitis C across all studies globally, but there was some evidence from studies in Europe that high NSP coverage may reduce the risk of hepatitis C infection by 76%. The combined use of high coverage NSP with OST may reduce risk of hepatitis C infection by 74%. | Needle syringe programmes and opioid substitution therapy for preventing hepatitis C transmission in people who inject drugs Needle syringe programmes (NSP) and opioid substitution therapy (OST) are the primary interventions to reduce hepatitis C (HCV) transmission in people who inject drugs. There is good evidence for the effectiveness of NSP and OST in reducing injecting risk behaviour and increasing evidence for the effectiveness of OST and NSP in reducing HIV acquisition risk, but the evidence on the effectiveness of NSP and OST for preventing HCV acquisition is weak. Objectives To assess the effects of needle syringe programmes and opioid substitution therapy, alone or in combination, for preventing acquisition of HCV in people who inject drugs. Search methods We searched the Cochrane Drug and Alcohol Register, CENTRAL, the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), MEDLINE, Embase, PsycINFO, Global Health, CINAHL, and the Web of Science up to 16 November 2015. We updated this search in March 2017, but we have not incorporated these results into the review yet. Where observational studies did not report any outcome measure, we asked authors to provide unpublished data. We searched publications of key international agencies and conference abstracts. We reviewed reference lists of all included articles and topic‐related systematic reviews for eligible papers. 
Selection criteria We included prospective and retrospective cohort studies, cross‐sectional surveys, case‐control studies and randomised controlled trials that measured exposure to NSP and/or OST against no intervention or a reduced exposure and reported HCV incidence as an outcome in people who inject drugs. We defined interventions as current OST (within previous 6 months), lifetime use of OST and high NSP coverage (regular attendance at an NSP or all injections covered by a new needle/syringe) or low NSP coverage (irregular attendance at an NSP or less than 100% of injections covered by a new needle/syringe) compared with no intervention or reduced exposure. Data collection and analysis We followed the standard Cochrane methodological procedures incorporating new methods for classifying risk of bias for observational studies. We described study methods against the following 'Risk of bias' domains: confounding, selection bias, measurement of interventions, departures from intervention, missing data, measurement of outcomes, selection of reported results; and we assigned a judgment (low, moderate, serious, critical, unclear) for each criterion. We identified 28 studies (21 published, 7 unpublished): 13 from North America, 5 from the UK, 4 from continental Europe, 5 from Australia and 1 from China, comprising 1817 incident HCV infections and 8806.95 person‐years of follow‐up. HCV incidence ranged from 0.09 cases to 42 cases per 100 person‐years across the studies. We judged only two studies to be at moderate overall risk of bias, while 17 were at serious risk and 7 were at critical risk; for two unpublished datasets there was insufficient information to assess bias. As none of the intervention effects were generated from RCT evidence, we typically categorised quality as low. 
We found evidence that current OST reduces the risk of HCV acquisition by 50% (risk ratio (RR) 0.50, 95% confidence interval (CI) 0.40 to 0.63, I² = 0%, 12 studies across all regions, N = 6361). The intervention effect remained significant in sensitivity analyses that excluded unpublished datasets and papers judged to be at critical risk of bias. We found evidence of differential impact by proportion of female participants in the sample, but not geographical region of study, the main drug used, or history of homelessness or imprisonment among study samples. Overall, we found very low‐quality evidence that high NSP coverage did not reduce risk of HCV acquisition (RR 0.79, 95% CI 0.39 to 1.61) with high heterogeneity (I² = 77%) based on five studies from North America and Europe involving 3530 participants. After stratification by region, high NSP coverage in Europe was associated with a 76% reduction in HCV acquisition risk (RR 0.24, 95% CI 0.09 to 0.62) with less heterogeneity (I² = 0%). We found low‐quality evidence of the impact of combined high coverage of NSP and OST, from three studies involving 3241 participants, resulting in a 74% reduction in the risk of HCV acquisition (RR 0.26, 95% CI 0.07 to 0.89). OST is associated with a reduction in the risk of HCV acquisition, which is strengthened in studies that assess the combination of OST and NSP. There was greater heterogeneity between studies and weaker evidence for the impact of NSP on HCV acquisition. High NSP coverage was associated with a reduction in the risk of HCV acquisition in studies in Europe. |
t46 | We looked for evidence about the effects of any treatment used to prevent or treat low‐back pain, pelvic pain or both during pregnancy. We also wanted to know whether treatments decreased disability or sick leave, and whether treatments caused any side effects for pregnant women. Pain in the lower back, pelvis, or both, is a common complaint during pregnancy and often gets worse as pregnancy progresses. This pain can disrupt daily activities, work and sleep for pregnant women. We wanted to find out whether any treatment, or combination of treatments, was better than usual prenatal care for pregnant women with these complaints. We included 34 randomised studies in this updated review, with 5121 pregnant women, aged 16 to 45 years. Women were from 12 to 38 weeks pregnant. Studies looked at different treatments for pregnant women with low‐back pain, pelvic pain or both types of pain. All treatments were added to usual prenatal care, and were compared with usual prenatal care alone in 23 studies. Studies measured women's symptoms in different ways, ranging from self‐reported pain and sick leave to the results of specific tests. When we combined the results from seven studies (645 women) that compared any land‐based exercise with usual prenatal care, exercise interventions (lasting from five to 20 weeks) improved women's levels of low‐back pain and disability. Pelvic pain There is less evidence available on treatments for pelvic pain. Two studies found that women who participated in group exercise and received information about managing their pain reported no difference in their pelvic pain compared with women who received usual prenatal care. Low‐back and pelvic pain The results of four studies combined (1176 women) showed that an eight‐ to 12‐week exercise program reduced the number of women who reported low‐back and pelvic pain. Land‐based exercise, in a variety of formats, also reduced low‐back and pelvic pain‐related sick leave in two studies (1062 women). 
However, two other studies (374 women) found that group exercise plus information was no better at preventing either pelvic or low‐back pain than usual prenatal care. There were a number of single studies that tested a variety of treatments. Findings suggested that craniosacral therapy, osteomanipulative therapy or a multi‐modal intervention (manual therapy, exercise and education) may be of benefit. When reported, there were no lasting side effects in any of the studies. | More than two‐thirds of pregnant women experience low‐back pain and almost one‐fifth experience pelvic pain. The two conditions may occur separately or together (low‐back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep. Objectives To update the evidence assessing the effects of any intervention used to prevent and treat low‐back pain, pelvic pain or both during pregnancy. Search methods We searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists. Selection criteria Randomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low‐back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy. Data collection and analysis Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks’ gestation. Fifteen RCTs examined women with low‐back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low‐back and pelvic pain (participants = 2385). 
Two studies also investigated low‐back pain prevention and four, low‐back and pelvic pain prevention. Diagnoses ranged from self‐reported symptoms to clinicians’ interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low‐back pain Results from meta‐analyses provided low‐quality evidence (study design limitations, inconsistency) that any land‐based exercise significantly reduced pain (standardised mean difference (SMD) ‐0.64; 95% confidence interval (CI) ‐1.03 to ‐0.25; participants = 645; studies = seven) and functional disability (SMD ‐0.56; 95% CI ‐0.89 to ‐0.23; participants = 146; studies = two). Low‐quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low‐back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic pain Results from a meta‐analysis provided low‐quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). For low‐back and pelvic pain Results from meta‐analyses provided moderate‐quality evidence (study design limitations) that: an eight‐ to 12‐week exercise program reduced the number of women who reported low‐back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land‐based exercise, in a variety of formats, significantly reduced low‐back and pelvic pain‐related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two). 
The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate‐quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low‐back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi‐modal intervention (manual therapy, exercise and education) for low‐back and pelvic pain. When reported, adverse effects were minor and transient. There is low‐quality evidence that exercise (any exercise on land or in water), may reduce pregnancy‐related low‐back pain and moderate‐ to low‐quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy‐related pelvic pain, and osteomanipulative therapy or a multi‐modal intervention (manual therapy, exercise and education) may also be of benefit. Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta‐analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out. Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly. |
t47 | We reviewed the evidence for the benefits and harms of a short course (typically up to 21 days) of corticosteroid given by mouth to people with chronic rhinosinusitis compared with giving a placebo or no treatment, or another type of treatment. Chronic rhinosinusitis is a common condition that is defined as inflammation of the nose and paranasal sinuses (a group of air‐filled spaces behind the nose, eyes and cheeks). Patients with chronic rhinosinusitis experience at least two or more of the following symptoms for at least 12 weeks: blocked nose, discharge from their nose or runny nose, pain or pressure in their face and/or a reduced sense of smell (hyposmia). Some people will also have nasal polyps, which are grape‐like swellings of the normal nasal lining inside the nasal passage and sinuses. Short courses of oral corticosteroids are a widely used treatment for chronic rhinosinusitis. They work by controlling the inflammatory response and when polyps are present they rapidly reduce the size of the polyps to improve symptoms. The adverse effects of corticosteroids can include insomnia, mood changes and gastrointestinal changes (such as stomach pain, heartburn, diarrhoea, constipation, nausea and vomiting). When given over the longer term, or through many repeated short courses, it is also possible to develop osteoporosis (fragile bones). We included eight randomised controlled trials with a total of 474 participants. All of the patients were adults who had chronic rhinosinusitis with nasal polyps. All of the studies followed patients until the end of treatment (two to three weeks) and three studies (210 participants) followed up people for three to six months after the initial treatment had ended. Five of the eight reports mentioned how the trial was funded. None of the funding sources were pharmaceutical companies. 
At the end of a two‐ or three‐week treatment course, people who took oral steroids may have had a better quality of life, less severe symptoms and smaller nasal polyps than people who had placebo or did not receive any treatment. After three to six months, there was little or no difference in quality of life, symptom severity or nasal polyps between the people who had oral steroids and the people who had placebo or no intervention. The people who took oral steroids may have had more gastrointestinal disturbances and insomnia than the people who had placebo or no intervention. It is not clear if the people who took oral steroids had more mood disturbances than the people who had placebo or no intervention. | This review is one of a suite of six Cochrane reviews looking at the primary medical management options for patients with chronic rhinosinusitis. Chronic rhinosinusitis is a common condition involving inflammation of the lining of the nose and paranasal sinuses. It is characterised by nasal blockage and nasal discharge, facial pressure/pain and loss of sense of smell. The condition can occur with or without nasal polyps. Oral corticosteroids are used to control the inflammatory response and improve symptoms. Objectives To assess the effects of oral corticosteroids compared with placebo/no intervention or other pharmacological interventions (intranasal corticosteroids, antibiotics, antifungals) for chronic rhinosinusitis. Search methods The Cochrane ENT Information Specialist searched the ENT Trials Register; Central Register of Controlled Trials (CENTRAL 2015, Issue 7); MEDLINE; EMBASE; ClinicalTrials.gov; ICTRP and additional sources for published and unpublished trials. The date of the search was 11 August 2015. Selection criteria Randomised controlled trials (RCTs) comparing a short course (up to 21 days) of oral corticosteroids with placebo or no treatment or compared with other pharmacological interventions. 
Data collection and analysis We used the standard methodological procedures expected by Cochrane. Our primary outcomes were disease‐specific health‐related quality of life (HRQL), patient‐reported disease severity, and the adverse event of mood or behavioural disturbances. Secondary outcomes included general HRQL, endoscopic nasal polyp score, computerised tomography (CT) scan score and the adverse events of insomnia, gastrointestinal disturbances and osteoporosis. We included eight RCTs (474 randomised participants), which compared oral corticosteroids with placebo or no intervention. All trials only recruited adults with chronic rhinosinusitis with nasal polyps. All trials reported outcomes at two to three weeks, at the end of the short‐course oral steroid treatment period. Three trials additionally reported outcomes at three to six months. Two of these studies prescribed intranasal steroids to patients in both arms of the trial at the end of the oral steroid treatment period. Oral steroids versus placebo or no intervention Disease‐specific health‐related quality of life was reported by one study. This study reported improved quality of life after treatment (two to three weeks) in the group receiving oral steroids compared with the group who received placebo (standardised mean difference (SMD) ‐1.24, 95% confidence interval (CI) ‐1.92 to ‐0.56, 40 participants, modified RSOM‐31), which corresponds to a large effect size. We assessed the evidence to be low quality (we are uncertain about the effect estimate; the true effect may be substantially different from the estimate of the effect). Disease severity as measured by patient‐reported symptom scores was reported by two studies, which allowed the four key symptoms used to define chronic rhinosinusitis (nasal blockage, nasal discharge, facial pressure, hyposmia) to be combined into one score. 
The results at the end of treatment (two to three weeks) showed an improvement in patients receiving oral steroids compared to placebo, both when presented as a mean final value (SMD ‐2.84, 95% CI ‐4.09 to ‐1.59, 22 participants) and as a change from baseline (SMD ‐2.28, 95% CI ‐2.76 to ‐1.80, 114 participants). These correspond to large effect sizes but we assessed the evidence to be low quality. One study (114 participants) followed patients for 10 weeks after the two‐week treatment period. All patients in both arms received intranasal steroids at the end of the oral steroid treatment period. The results showed that the initial results after treatment were not sustained (SMD ‐0.22, 95% CI ‐0.59 to 0.15, 114 participants, percentage improvement from baseline). This corresponds to a small effect size and we assessed the evidence to be low quality. There was an increase in adverse events in people receiving oral steroids compared with placebo for gastrointestinal disturbances (risk ratio (RR) 3.45, 95% CI 1.11 to 10.78; 187 participants; three studies) and insomnia (RR 3.63, 95% CI 1.10 to 11.95; 187 participants; three studies). There was no significant impact of oral steroids on mood disturbances at the dosage used in the included study (RR 2.50, 95% CI 0.55 to 11.41; 40 participants; one study). We assessed the evidence to be low quality due to the lack of definitions of the adverse events and the small number of events or sample size, or both. Other comparisons No studies that compared short‐course oral steroids with other treatment for chronic rhinosinusitis met the inclusion criteria. At the end of the treatment course (two to three weeks) there is an improvement in health‐related quality of life and symptom severity in patients with chronic rhinosinusitis with nasal polyps taking oral corticosteroids compared with placebo or no treatment. The quality of the evidence supporting this finding is low. 
At three to six months after the end of the oral steroid treatment period, there is little or no improvement in health‐related quality of life or symptom severity for patients taking an initial course of oral steroids compared with placebo or no treatment. The data on the adverse effects associated with short courses of oral corticosteroids indicate that there may be an increase in insomnia and gastrointestinal disturbances but it is not clear whether there is an increase in mood disturbances. All of the adverse events results are based on low quality evidence. More research in this area, particularly research evaluating patients with chronic rhinosinusitis without nasal polyps, longer‐term outcomes and adverse effects, is required. There is no evidence for oral steroids compared with other treatments. |
t48 | PRO 140 (a humanized form of the PA14 antibody, a monoclonal CCR5 antibody) is a laboratory‐made antibody that blocks the CCR5 receptor on CD4 cells. By blocking CCR5, PRO 140 prevents the HIV virus from infecting healthy cells. PRO 140 may be an effective new treatment drug because it has the potential to address the limitations of currently available therapies for HIV‐infected patients. PRO 140 has emerged as an important new therapy and has entered testing. We reviewed the efficacy, safety, clinical disease progression and immunologic (CD4 count/percentage) and virologic (plasma HIV RNA viral load) markers of PRO 140 for HIV‐infected patients. We included three randomized controlled trials (RCTs). These three RCTs were of unclear risk of bias, as the details of methodological items were not adequately reported. All patients in these three studies were adult HIV‐infected patients, and PRO 140 was administered by subcutaneous or intravenous infusion at different doses. These three studies addressed the immunologic (CD4 count/percentage) and virologic (plasma HIV RNA viral load) markers. There may be potential conflicts of interest in all studies, as some of the authors are current or past employees of Progenics Pharmaceuticals, the producer of PRO 140. Our systematic review showed that PRO 140 may offer significant short‐term dose‐dependent HIV‐1 RNA suppression with tolerable side effects. PRO 140 2 mg/kg, 5 mg/kg, 10 mg/kg, 162 mg weekly, 324 mg biweekly, and 324 mg weekly could reduce HIV‐1 RNA levels and demonstrate an antiviral response. PRO 140 5 mg/kg also showed greater change in CD4+ cell count on day eight. Headache, lymphadenopathy, diarrhoea, fatigue, hypertension, nasal congestion and pruritus were reported to be the most frequent adverse events. 
Even though available evidence from the three trials suggests that PRO 140 may be effective, the number of patients in these three studies was very small, and the results of these three studies may be influenced by potential biases. PRO 140 has been granted fast‐track approval status by the United States Food and Drug Administration (FDA), but the efficacy of PRO 140 still needs to be proven in large, long‐term, high‐quality RCTs. The three studies reviewed here only evaluated the short‐term (58 or 59 days) efficacy. The long‐term efficacy was not evaluated, and adverse events data were not reported adequately for each group. Whether PRO 140 could be used in clinical practice as a first‐line treatment for HIV‐infected patients depends on the results of future high‐quality RCTs. | PRO 140 (a humanized form of the PA14 antibody, a monoclonal CCR5 antibody) inhibits CCR5‐tropic (R5) type 1 human immunodeficiency virus (HIV). This may be an effective new treatment with the potential to address the limitations of currently available therapies for HIV‐infected patients. Objectives We aimed to assess the efficacy, safety, clinical disease progression and immunologic (CD4 count/percentage) and virologic (plasma HIV RNA viral load) markers of PRO 140 for HIV‐infected patients in randomized controlled trials (RCTs) and quasi‐randomized controlled trials (quasi‐RCTs). Search methods We searched databases including The Cochrane Central Register of Controlled Trials (The Cochrane Library 2014, Issue 4), MEDLINE (PubMed, January 1966 to April 2014), EMBASE (January 1978 to April 2014) and ISI Web of Knowledge (January 1966 to April 2014), online trials registries and other sources. We also screened the reference lists of related literature and eligible studies, and presentations from major HIV/AIDS (human immunodeficiency virus/acquired immunodeficiency syndrome) conferences. 
Selection criteria We included RCTs and quasi‐RCTs comparing PRO 140 with placebo or other antiretroviral drugs, or different doses of PRO 140 for individuals infected with HIV. Data collection and analysis Two reviewers (L Li and JH Tian) independently screened all retrieved citations and selected eligible studies. Two authors (P Zhang and WQ Jia) independently extracted data. Any disagreements when selecting studies and extracting data were adjudicated by the review mentor (KH Yang). We used Review Manager (RevMan) software for statistical analysis based on an intention‐to‐treat analysis. We examined heterogeneity using the Chi² statistic. We regarded I² estimates greater than 50% as moderate or high levels of heterogeneity. According to the level of heterogeneity, we used either a fixed or random‐effects model. If significant heterogeneity existed and the reasons could not be found, we reported the results qualitatively. We included three trials comparing PRO 140 with placebo in adult patients with HIV infection. Our review indicates that PRO 140 may offer significant dose‐dependent HIV‐1 RNA suppression with tolerable side effects. PRO 140 2 mg/kg, 5 mg/kg, 10 mg/kg, 162 mg weekly, 324 mg biweekly, and 324 mg weekly showed statistically significant differences in the changes of HIV‐1 RNA levels. HIV‐1 RNA levels were reduced by intravenous (IV) infusion of PRO 140 2 mg/kg or 5 mg/kg on day 10, 5 mg/kg or 10 mg/kg on day 12, and 162 mg weekly, 324 mg biweekly, or 324 mg weekly on day 22. PRO 140 2 mg/kg, 5 mg/kg, 10 mg/kg, 162 mg weekly, 324 mg biweekly, and 324 mg weekly demonstrated greater antiviral response. PRO 140 324 mg weekly, 5 mg/kg, and 10 mg/kg showed more patients with ≤ 400 copies/mL HIV‐1 RNA. Only PRO 140 5 mg/kg showed greater change in CD4+ cell count on day eight. Headache, lymphadenopathy, diarrhoea, fatigue, hypertension, nasal congestion and pruritus were reported to be the most frequent adverse events. 
Limited evidence from three small trials suggests that PRO 140 might demonstrate potent, short‐term, dose‐dependent, highly significant antiviral activity. However, as the evidence is insufficient, recommendations cannot yet be made. Larger, longer‐term, double‐blind RCTs are required to provide conclusive evidence. |
t49 | In future, as the population ages, the number of people in our communities suffering with dementia will rise dramatically. This will not only affect the quality of life of people with dementia but also increase the burden on family caregivers, community care, and residential care services. Exercise is one lifestyle factor that has been identified as a potential means of reducing or delaying progression of the symptoms of dementia. This review evaluated the results of 17 trials (search dates August 2012 and October 2013), including 1,067 participants, that tested whether exercise programs could improve cognition (which includes such things as memory, reasoning ability and spatial awareness), activities of daily living, behaviour and psychological symptoms (such as depression, anxiety and agitation) in older people with dementia. We also looked for effects on mortality, quality of life, caregivers' experience and use of healthcare services, and for any adverse effects of exercise. There was some evidence that exercise programs can improve the ability of people with dementia to perform daily activities, but there was a lot of variation among trial results that we were not able to explain. The studies showed no evidence of benefit from exercise on cognition, psychological symptoms, and depression. There was little or no evidence regarding the other outcomes listed above. There was no evidence that exercise was harmful for the participants. | This is an update of our previous 2013 review. Several recent trials and systematic reviews of the impact of exercise on people with dementia are reporting promising findings. Objectives Primary objective Do exercise programs for older people with dementia improve their cognition, activities of daily living (ADLs), neuropsychiatric symptoms, depression, and mortality? Secondary objectives Do exercise programs for older people with dementia have an indirect impact on family caregivers’ burden, quality of life, and mortality? 
Do exercise programs for older people with dementia reduce the use of healthcare services (e.g. visits to the emergency department) by participants and their family caregivers? Search methods We identified trials for inclusion in the review by searching ALOIS ( www.medicine.ox.ac.uk/alois ), the Cochrane Dementia and Cognitive Improvement Group’s Specialised Register, on 4 September 2011, on 13 August 2012, and again on 3 October 2013. Selection criteria In this review, we included randomized controlled trials in which older people, diagnosed with dementia, were allocated either to exercise programs or to control groups (usual care or social contact/activities) with the aim of improving cognition, ADLs, neuropsychiatric symptoms, depression, and mortality. Secondary outcomes related to the family caregiver(s) and included caregiver burden, quality of life, mortality, and use of healthcare services. Data collection and analysis Independently, at least two authors assessed the retrieved articles for inclusion, assessed methodological quality, and extracted data. We analysed data for summary effects. We calculated mean differences or standardized mean difference (SMD) for continuous data, and synthesized data for each outcome using a fixed‐effect model, unless there was substantial heterogeneity between studies, when we used a random‐effects model. We planned to explore heterogeneity in relation to severity and type of dementia, and type, frequency, and duration of exercise program. We also evaluated adverse events. Seventeen trials with 1067 participants met the inclusion criteria. However, the required data from three included trials and some of the data from a fourth trial were not published and not made available. The included trials were highly heterogeneous in terms of subtype and severity of participants' dementia, and type, duration, and frequency of exercise. Only two trials included participants living at home. 
Our meta‐analysis revealed that there was no clear evidence of benefit from exercise on cognitive functioning. The estimated standardized mean difference between exercise and control groups was 0.43 (95% CI ‐0.05 to 0.92, P value 0.08; 9 studies, 409 participants). There was very substantial heterogeneity in this analysis (I² value 80%), most of which we were unable to explain, and we rated the quality of this evidence as very low. We found a benefit of exercise programs on the ability of people with dementia to perform ADLs in six trials with 289 participants. The estimated standardized mean difference between exercise and control groups was 0.68 (95% CI 0.08 to 1.27, P value 0.02). However, again we observed considerable unexplained heterogeneity (I² value 77%) in this meta‐analysis, and we rated the quality of this evidence as very low. This means that there is a need for caution in interpreting these findings. In further analyses, in one trial we found that the burden experienced by informal caregivers providing care in the home may be reduced when they supervise the participation of the family member with dementia in an exercise program. The mean difference between exercise and control groups was ‐15.30 (95% CI ‐24.73 to ‐5.87; 1 trial, 40 participants; P value 0.001). There was no apparent risk of bias in this study. In addition, there was no clear evidence of benefit from exercise on neuropsychiatric symptoms (MD ‐0.60, 95% CI ‐4.22 to 3.02; 1 trial, 110 participants; P value 0.75), or depression (SMD 0.14, 95% CI ‐0.07 to 0.36; 5 trials, 341 participants; P value 0.16). We could not examine the remaining outcomes, quality of life, mortality, and healthcare costs, as either the appropriate data were not reported, or we did not retrieve trials that examined these outcomes. There is promising evidence that exercise programs may improve the ability to perform ADLs in people with dementia, although some caution is advised in interpreting these findings. 
The review revealed no evidence of benefit from exercise on cognition, neuropsychiatric symptoms, or depression. There was little or no evidence regarding the remaining outcomes of interest (i.e., mortality, caregiver burden, caregiver quality of life, caregiver mortality, and use of healthcare services). |
t50 | Visits to emergency departments and family doctors have increased. One possible way to decrease this demand is to provide telephone helplines, hotlines or consultations. People can speak with health care professionals, such as doctors and nurses, on the telephone and receive medical advice or a referral to an appropriate health service. Nine studies were found and analysed to determine whether telephone consultation was safe and effective. In general, at least half of the calls were handled by telephone only (without the need for face‐to‐face visits). It was found that telephone consultation appears to decrease the number of immediate visits to doctors and does not appear to increase visits to emergency departments. Telephone consultation also appears to be safe and people were just as satisfied using the telephone as going to see someone face‐to‐face. There are still questions about its effectiveness and more research into the use, cost, safety and satisfaction of telephone consultation is needed. | Telephone consultation is the process where calls are received, assessed and managed by giving advice or by referral to a more appropriate service. In recent years there has been a growth in telephone consultation, developed in part as a response to increased demand for general practitioner (GP) and accident and emergency (A&E) department care. Objectives To assess the effects of telephone consultation on safety, service usage and patient satisfaction and to compare telephone consultation by different health care professionals. Search methods We searched the Cochrane Central Register of Controlled Trials, the specialised register of the Cochrane Effective Practice and Organisation of Care (EPOC) group, PubMed, EMBASE, CINAHL, SIGLE, and the National Research Register. We checked reference lists of identified studies and review articles and contacted experts in the field. The search was not restricted by language or publication status. 
The searches were updated in 2007 and no new studies were found. Selection criteria Randomised controlled trials (RCTs), controlled studies, controlled before/after studies (CBAs) and interrupted time series (ITSs) of telephone consultation or triage in a general health care setting. Disease specific phone lines were excluded. Data collection and analysis Two review authors independently screened studies for inclusion in the review, extracted data and assessed study quality. Data were collected on adverse events, service usage, cost and patient satisfaction. Due to heterogeneity we did not pool studies in a meta‐analysis and instead present a narrative summary of the findings. Nine studies met our inclusion criteria, five RCTs, one CCT and three ITSs. Six studies compared telephone consultation versus normal care; four by a doctor, one by a nurse and one by a clinic clerk. Three studies compared telephone consultation by different types of health care workers; two compared nurses with doctors and one compared health assistants with doctors or nurses. Three of five studies found a decrease in visits to GPs but two found a significant increase in return consultations. In general, at least 50% of calls were handled by telephone advice alone. Seven studies looked at accident and emergency department visits, six showed no difference between the groups and one, of nurse telephone consultation, found an increase in visits. Two studies reported deaths and found no difference between nurse telephone triage and normal care. Telephone consultation appears to reduce the number of surgery contacts and out‐of‐hours visits by general practitioners. However, questions remain about its effect on service use and further rigorous evaluation is needed with emphasis on service use, safety, cost and patient satisfaction. |
t51 | Heparin in intravenous fluids may reduce IV tube changes needed for newborn babies in neonatal intensive care, but more research is needed to determine its safety. Babies in neonatal intensive care often need fluids intravenously (through a tube inserted into a vein). Sometimes the intravenous (IV) tube becomes blocked as the blood clots, and the skin becomes swollen. Bacteria can also enter and cause serious infection. Regularly changing the tube (and which vein is used) can reduce some problems, but babies have few usable veins. The drug heparin used in the IV fluids could reduce blockages by thinning the blood, but it can have serious adverse effects. The review of trials found that more research is needed to determine whether heparin in IV fluids is advantageous for neonates without causing side effects. | Mechanical or infectious complications often necessitate the removal and/or replacement of peripheral intravenous catheters. Heparin has been shown to be effective in prolonging the patency of peripheral arterial catheters and central venous catheters, but may result in life threatening complications, especially in preterm neonates. Objectives The primary objective was to determine the effectiveness of heparin versus placebo or no treatment on duration of peripheral intravenous (PIV) catheter patency in neonates requiring a PIV catheter. Search methods A literature search was performed using the following databases: MEDLINE (1966 to February 2005), EMBASE (1980 to February 2005), CINAHL (1982 to February 2005), Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 1, 2005), and abstracts from the annual meetings of the Society for Pediatric Research, American Pediatric Society and Pediatric Academic Societies published in Pediatric Research (1991 to 2004). No language restrictions were applied. This search was updated in 2010. 
Selection criteria Randomized or quasi‐randomised trials of heparin administered as flush or infusion versus placebo or no treatment were included. Studies which included a neonatal population and reported on at least one of the outcomes were included. Data collection and analysis Data collection and analysis was performed in accordance with the recommendations of the Cochrane Neonatal Review Group. Ten eligible studies were identified. Heparin was administered either as a flush solution or as an additive to the total parenteral nutrition solution. Five studies reported data on the duration of use of the first catheter. Two of these studies found no statistically significant effect of heparin; two studies showed a statistically significant increase and one study showed a statistically significant decrease in the duration of PIV catheter use in the heparin group. There were marked differences between the studies in terms of the methodological quality, the dose, the timing, the route of administration of heparin and the outcomes reported. The results were not combined for meta‐analysis. Individual studies did not report any significant differences between the heparin and the placebo/no treatment groups in the risks of infiltration, phlebitis and intracranial haemorrhage. There are insufficient data concerning the effect of heparin for prolonging PIV catheter use in neonates. Recommendations for heparin use in neonates with PIV catheters cannot be made. Further research on the effectiveness, the optimal dose, and the safety of heparin is required. |
t52 | Flail chest is a medical term describing multiple rib fractures in which ribs are broken or dislocated in more than one place and are no longer completely connected to the rest of the rib cage. When a person injured in this way breathes, the broken segment may move in a different way from the rest of the chest wall. Flail chest can cause a person to have difficulty breathing, in which case they may be given mechanical ventilation (machine‐assisted breathing). Surgery is sometimes performed in order to reconnect the broken ribs. The authors of this review aimed to evaluate the effects and safety of surgery compared with no surgery for people with flail chest. We searched scientific databases for studies comparing surgical treatment with nonsurgical treatment in adults or children with flail chest. We included three studies in this review, which involved 123 people. In these studies, people with flail chest were randomly allocated into the surgery or no surgery study groups. The results show that surgery to repair the broken ribs reduces pneumonia, chest deformity, tracheostomy, duration of mechanical ventilation and length of ICU stay. There was no difference in deaths between people treated with surgery or no surgery. Since only six people died across the three studies, due to a variety of causes, more research is needed in order to know for certain which treatment is better for reducing deaths. These three small studies have shown that surgical treatment is preferable to nonsurgical treatment in reducing pneumonia, chest deformity, tracheostomy, mechanical ventilation and length of stay in the ICU. More research is needed in order to know which treatment is better for reducing deaths. Three more studies are being undertaken by researchers in the USA and Canada at the moment, and the results will be incorporated into the review in the future. | Thoracic trauma (TT) is common among people with multiple traumatic injuries. 
One of the injuries caused by TT is the loss of thoracic stability resulting from multiple fractures of the rib cage, otherwise known as flail chest (FC). A person with FC can be treated conservatively with orotracheal intubation and mechanical ventilation (internal pneumatic stabilization) but may also undergo surgery to fix the costal fractures. Objectives To evaluate the effectiveness and safety of surgical stabilization compared with clinical management for people with FC. Search methods We ran the search on 12 May 2014. We searched the Cochrane Injuries Group's Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library), MEDLINE (OvidSP), EMBASE Classic and EMBASE (OvidSP), CINAHL Plus (EBSCO), ISI WOS (SCI‐EXPANDED, SSCI, CPCI‐S, and CPSI‐SSH), and clinical trials registers. We also screened reference lists and contacted experts. Selection criteria Randomized controlled trials of surgical versus nonsurgical treatment for people diagnosed with FC. Data collection and analysis Two review authors selected relevant trials, assessed their risk of bias, and extracted data. We included three studies that involved 123 people. The methods used for blinding the participants and researchers to the treatment group were not reported, but as the comparison is between surgical and medical treatment, this bias is hard to avoid. There was no description of concealment of the randomization sequence in two studies. All three studies reported on mortality, and deaths occurred in two studies. There was no clear evidence of a difference in mortality between treatment groups (risk ratio (RR) 0.56, 95% confidence interval (CI) 0.13 to 2.42); however, the analysis was underpowered to detect a difference between groups. Out of the 123 people randomized and treated, six people died; the causes of death were pneumonia, pulmonary embolism, mediastinitis, and septic shock. 
Among people randomized to surgery, there were reductions in pneumonia (RR 0.36, 95% CI 0.15 to 0.85; three studies, 123 participants), chest deformity (RR 0.13, 95% CI 0.03 to 0.67; two studies, 86 participants), and tracheostomy (RR 0.38, 95% CI 0.14 to 1.02; two studies, 83 participants). Duration of mechanical ventilation, length of intensive care unit (ICU) stay, and length of hospital stay were measured in the three studies. Due to differences in reporting, we could not combine the results and have listed them separately. Chest pain, chest tightness, bodily pain, and adverse effects were each measured in one study. There was some evidence from three small studies that showed surgical treatment was preferable to nonsurgical management in reducing pneumonia, chest deformity, tracheostomy, duration of mechanical ventilation, and length of ICU stay. Further well‐designed studies with a sufficient sample size are required to confirm these results and to detect possible surgical effects on mortality. |
t53 | Neurocysticercosis is a common infection of the brain caused by larvae of the pork tapeworm migrating to the brain. Seizures are the most common symptom, although some people may present with headache, vomiting or other symptoms of brain swelling. This review investigates the usefulness of antiepileptic drugs (AEDs) in preventing seizures in people who did not have seizures but presented with these other symptoms. We also examined the usefulness of the AEDs in people with epilepsy due to neurocysticercosis in terms of choice of drug, dosage, duration of treatment, cost, side effects and the quality of life. Four trials with a total of 466 participants were reviewed, focusing on the comparison of 'short duration' and 'long duration' AED therapy in people with a single cerebral lesion. These trials compared various durations of AED therapy: six to 12 months as short duration and 12 to 24 months as long‐duration therapy. No statistically significant benefit of one duration of AED over the other (six, 12 or 24 months) could be demonstrated. In people with calcified cysts, longer duration of therapy may be preferable. All four included trials enrolled people with a single brain lesion. The findings of our review cannot be extrapolated to people with multiple cysts or with cysts in unusual parts of the brain. | Neurocysticercosis is the most common parasitic infection of the brain. Epilepsy is the most common clinical presentation, though it may also present with headache, symptoms of raised intracranial pressure, hydrocephalus and ocular symptoms depending upon the localisation of the parasitic cysts. Anthelmintic drugs, anti‐oedema drugs, such as steroids, and antiepileptic drugs (AEDs) form the mainstay of treatment. This is an updated version of the original Cochrane Review published in 2015, Issue 10. Objectives To assess the effects (benefits and harms) of AEDs for the primary and secondary prevention of seizures in people with neurocysticercosis. 
For the question of primary prevention, we examined whether AEDs reduce the likelihood of seizures in patients who have neurocysticercosis but have not had a seizure. For the question of secondary prevention, we examined whether AEDs reduce the likelihood of further seizures in patients who have had at least one seizure due to neurocysticercosis. As part of primary prevention studies, we also aimed to examine which AED has been found to be beneficial in people with neurocysticercosis in terms of duration, dose and side‐effect profile. Search methods For the latest update of this review, we searched the following databases on 8 July 2019: Cochrane Register of Studies (CRS Web), MEDLINE (Ovid, 1946 to July 05, 2019) and LILACS (1982 onwards). CRS Web includes the Cochrane Epilepsy Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), and randomised or quasi‐randomised, controlled trials from Embase, ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform (ICTRP). We also checked the reference lists of identified studies, and contacted experts in the field and colleagues to search for additional studies and for information about ongoing studies. Selection criteria Randomised and quasi‐randomised controlled trials. Single‐blind, double‐blind or unblinded studies were eligible for inclusion. Data collection and analysis Two review authors screened all citations for eligibility (MS screened the initially identified 180 citations, MF and BDM screened the 48 citations identified for the purpose of this update). Two review authors independently extracted data and evaluated each study for risk of bias. We did not find any trials that investigated the role of AEDs in preventing seizures among people with neurocysticercosis, presenting with symptoms other than seizures. We did not find any trials evaluating individual AEDs in people with neurocysticercosis. 
We found one trial comparing two AEDs in people with solitary neurocysticercosis with seizures. However, we excluded this study from the review as it was of poor quality. We found four trials that compared the efficacy of short term versus longer term AED treatment for people with solitary neurocysticercosis (identified on computed tomography (CT) scan) presenting with seizures. In total, 466 people were enrolled. These studies compared various AED treatment durations, six, 12 and 24 months. The risk of seizure recurrence with six months treatment compared with 12 to 24 months treatment was not statistically significant (odds ratio (OR) 1.34 (95% confidence interval (CI) 0.73 to 2.47; three studies, 360 participants; low‐certainty evidence)). The risk of seizure recurrence with six to 12 months compared with 24 months treatment was not statistically significant (OR 1.36 (95% CI 0.72 to 2.57; three studies, 385 participants; low‐certainty evidence)). Two studies correlated seizure recurrence with CT findings and suggested that persistent and calcified lesions had a higher recurrence risk, warranting a longer duration of treatment with AEDs. One study reported no side effects, while the rest did not comment on side effects of drugs. None of the studies addressed the quality of life of the participants. These studies had certain methodological deficiencies such as a small sample size and a possibility of bias due to lack of blinding, which affect the results of this review. Despite neurocysticercosis being the most common cause of epilepsy worldwide, there is currently no evidence available regarding the use of AEDs as seizure prophylaxis among people presenting with symptoms other than seizures. For those presenting with seizures, there is no reliable evidence regarding the duration of treatment required. There is therefore a need for large scale randomised controlled trials to address these questions. |
t54 | Breaks (fractures) of the lower part of the thigh bone (distal femur) are debilitating and painful injuries. The reduced mobility after these injuries is also an important cause of ill‐health. Sometimes these fractures happen in people who have previously had a knee replacement; this can make treatment of the fracture more complicated. Many treatments have been used in the management of these injuries. Historically, people were treated in bed with weights holding the leg straight (traction). More recently, surgical fixation of the broken bone has become routine. Methods of surgical fixation include using plates and screws or rods inside the thigh bone to hold the fracture in place while it heals. The technology of these implants has become increasingly advanced with components that 'lock' together, forming a 'locked' device. Despite these advances, the best management of these injuries remains controversial. This review set out to evaluate the effects, primarily on function, of different methods for treating fractures of the lower end of the femur in adults. We searched the scientific literature up to September 2014 and found seven relevant studies with 444 participants with these fractures. One study compared surgery with non‐surgical treatment and the other six studies compared the use of different surgical implants. Each of the studies was small and was designed in a way that may affect the reliability of its findings. Most studies did not report on patient‐reported outcome measures of function. We judged the quality of the reported evidence to be very low and thus we are not certain that these results are true. The study comparing surgical fixation with non‐surgical intervention (traction and wearing a brace) did not confirm there was any difference between the two treatments in terms of re‐operations or repeat traction and bone healing. 
However, there were more complications such as pressure sores associated with prolonged immobilisation in the traction group, who stayed on average one month longer in hospital. Five studies compared one type of nail versus one of three different types of plate fixation. One study compared locked with non‐locked plate fixation. The evidence available for the four comparisons did not confirm that any of the surgical implants were superior to any other surgical implant for any outcomes, including re‐operation for complications such as lack of bone healing and infection. The review found that the available evidence was very limited and insufficient to inform current clinical practice. Further research comparing commonly used surgical treatments is needed. | Fractures of the distal femur (the part of the thigh bone nearest the knee) are a considerable cause of morbidity. Various different surgical and non‐surgical treatments have been used in the management of these injuries but the best treatment remains controversial. Objectives To assess the effects (benefits and harms) of interventions for treating fractures of the distal femur in adults. Search methods We searched the Cochrane Bone, Joint and Muscle Trauma Group Specialised Register (9 September 2014); the Cochrane Central Register of Controlled Trials ( The Cochrane Library , 2014, Issue 8); MEDLINE (1946 to August week 4 2014); EMBASE (1980 to 2014 week 36); World Health Organization (WHO) International Clinical Trials Registry Platform (January 2015); conference proceedings and reference lists without language restrictions. Selection criteria Randomised and quasi‐randomised controlled clinical trials comparing interventions for treating fractures of the distal femur in adults. Our primary outcomes were patient‐reported outcome measures (PROMs) of knee function and adverse events, including re‐operations. 
Data collection and analysis Two review authors independently selected studies and performed data extraction and risk of bias assessment. We assessed treatment effects using risk ratios (RR) or mean differences (MD) and, where appropriate, we pooled data using a fixed‐effect model. We included seven studies that involved a total of 444 adults with distal femur fractures. Each of the included studies was small and assessed to be at substantial risk of bias, with four studies being quasi‐randomised and none of the studies using blinding in outcome assessment. All studies provided an incomplete picture of outcome. One study compared surgical (dynamic condylar screw (DCS) fixation) and non‐surgical (skeletal traction) treatment in 42 older adults (mean age 79 years) with displaced fractures of the distal femur. This study, which did not report on PROMs, provided very low quality evidence of little between‐group difference in adverse events such as death (2/20 surgical versus 1/20 non‐surgical), re‐operation or repeat procedures (1/20 versus 3/20) and other adverse effects including delayed union. However, while none of the findings were statistically significant, there were more complications such as pressure sores (0/20 versus 4/20) associated with prolonged immobilisation in the non‐surgical group, who stayed on average one month longer in hospital. The other six studies compared different surgical interventions. Three studies, including 159 participants, compared retrograde intramedullary nail (RIMN) fixation versus DCS or blade‐plate fixation (fixed‐angle devices). None of these studies reported PROMs relating to function. None of the results for the reported adverse events showed a difference between the two implants. 
Thus, although there was very low quality evidence of a higher risk of re‐operation in the RIMN group, the 95% confidence interval (CI) also included the possibility of a higher risk of re‐operation for the fixed‐angle device (9/83 RIMN versus 4/96 fixed‐angle device; 3 studies: RR 1.85, 95% CI 0.62 to 5.57). There was no clinically important difference between the two groups found in quality of life assessed using the 36‐item Short Form in one study (23 fractures). One study (18 participants) provided very low quality evidence of there being little difference in adverse events between RIMN and non‐locking plate fixation. One study (53 participants) provided very low quality evidence of a higher risk of re‐operation after locking plate fixation compared with a single fixed‐angle device (6/28 locking plate versus 1/25 fixed‐angle device; RR 5.36, 95% CI 0.69 to 41.50); however, the 95% CI also included the possibility of a higher risk of re‐operation for the fixed‐angle device. Neither of these trials reported on PROMs. The largest included study, which reported outcomes in 126 participants at one‐year follow‐up, compared RIMN versus locking plate fixation; both implants are commonly used in current practice. None of the between‐group differences in the reported outcomes were statistically significant; thus the CIs crossed the line of no effect. There was very low quality evidence of better patient‐reported musculoskeletal function in the RIMN group based on Short Musculoskeletal Function Assessment (0 to 100: best function) scores (e.g. dysfunction index: MD ‐5.90 favouring RIMN, 95% CI ‐15.13 to 3.33) as well as quality of life using the EuroQoL‐5D Index (0 to 1: best quality of life) (MD 0.10 favouring RIMN, 95% CI ‐0.01 to 0.21). The CIs for both results included a clinically important effect favouring RIMN but also a clinically insignificant effect in favour of locking plate fixation. 
This review highlights the major limitations of the available evidence concerning current treatment interventions for fractures of the distal femur. The currently available evidence is incomplete and insufficient to inform current clinical practice. Priority should be given to a definitive, pragmatic, multicentre randomised controlled clinical trial comparing contemporary treatments such as locked plates and intramedullary nails. At minimum, these should report validated patient‐reported functional and quality‐of‐life outcomes at one and two years. All trials should be reported in full using the CONSORT guidelines. |
t55 | A defect in the abdominal wall through which organs can protrude is called a hernia. Hernias may occur spontaneously (primary hernia) or at the site of a previous surgical incision (incisional hernia). A hernia is usually recognized as a bulge or tear under the abdominal skin. Occasionally it causes no discomfort for the patient, but it can hurt while lifting heavy objects, coughing, or having bowel movements. It can also cause considerable discomfort after prolonged standing or sitting. Many different surgical techniques are in use for the repair of these hernias. The conventional technique is open repair, in which the defect in the abdominal wall is closed with either sutures or a mesh prosthesis. A mesh prosthesis is a synthetic material that reinforces the tissue or bridges the defect. Laparoscopic hernia repair, on the other hand, also closes the defect in the abdominal wall with a mesh, but uses small incisions and a laparoscope. In this case, the mesh is always placed in the abdominal cavity. This review analysed randomised controlled trials comparing the conventional, open technique with the laparoscopic technique. Based on the results of nearly 1000 adult patients, the laparoscopic technique appears to be effective at least in the short‐term evaluation. As laparoscopic surgery requires smaller incisions than open surgery, wound infection was fourfold less likely to occur in patients with laparoscopic repair. However, there is a rare but theoretically higher risk that intra‐abdominal organs may be injured during a laparoscopic procedure. Length of hospital stay after laparoscopic hernia repair was found to be shorter in the majority of trials. As most studies had evaluated only a follow‐up of 1 or 2 years, data on the long‐term effectiveness are still lacking. Most importantly, the risks of the hernia coming back (i.e. recurrence) are relatively unknown. 
Therefore, the authors of the review believe that further studies are necessary before laparoscopic repair can be considered a standard procedure for primary ventral or incisional hernia repair. | There are many different techniques currently in use for ventral and incisional hernia repair. Laparoscopic techniques have become more common in recent years, although the evidence is sparse. Objectives We compared laparoscopic with open repair in patients with (primary) ventral or incisional hernia. Search methods We searched the following electronic databases: MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, metaRegister of Controlled Trials. The last searches were conducted in July 2010. In addition, congress abstracts were searched by hand. Selection criteria We selected randomised controlled studies (RCTs), which compared the two techniques in patients with ventral or incisional hernia. Studies were included irrespective of language, publication status, or sample size. We did not include quasi‐randomised trials. Data collection and analysis Two authors assessed trial quality and extracted data independently. Meta‐analytic results are expressed as relative risks (RR) or weighted mean difference (WMD). We included 10 RCTs with a total of 880 patients suffering primarily from primary ventral or incisional hernia. No trials were identified on umbilical or parastomal hernia. The recurrence rate was not different between laparoscopic and open surgery (RR 1.22; 95% CI 0.62 to 2.38; I² = 0%), but patients were followed up for less than two years in half of the trials. Results on operative time were too heterogeneous to be pooled. The risk of intraoperative enterotomy was slightly higher in laparoscopic hernia repair (Peto OR 2.33; 95% CI 0.53 to 10.35), but this result stems from only 7 cases with bowel lesion (5 vs. 2). 
The most clear and consistent result was that laparoscopic surgery reduced the risk of wound infection (RR = 0.26; 95% CI 0.15 to 0.46; I 2 = 0%). Laparoscopic surgery shortened hospital stay significantly in 6 out of 9 trials, but again data were heterogeneous. Based on a small number of trials, it was not possible to detect any difference in pain intensity, both in the short‐ and long‐term evaluation. Laparoscopic repair apparently led to much higher in‐hospital costs. The short‐term results of laparoscopic repair in ventral hernia are promising. In spite of the risks of adhesiolysis, the technique is safe. Nevertheless, long‐term follow‐up is needed in order to elucidate whether laparoscopic repair of ventral/incisional hernia is efficacious. |
t56 | We reviewed the evidence about the effect of different interventions for raising breast cancer awareness in women. We found two randomised controlled trials, the highest quality of research evidence. Breast cancer is the most commonly diagnosed cancer in women. Early detection, diagnosis and treatment of breast cancer are key to better outcomes. Since many women will discover a breast symptom themselves, it is important that they are breast cancer aware i.e. that they have the knowledge, skills and confidence to notice any breast changes and visit their doctor promptly. A search for trials investigating interventions on breast cancer awareness in women was run in January 2016. We found two trials with a total of 997 women. The Promoting Early Presentation (PEP) study, funded by Breast Cancer UK, involved randomising 867 women to receive one of three interventions: (1) a written booklet and usual care, (2) a written booklet and usual care plus one‐to‐one discussion with a healthcare professional or (3) usual care only. Women were aged between 67 and 70 years and recruited into the study at breast cancer screening units in the UK. The Zahedan University of Medical Sciences (ZUMS) study involved randomising 130 women into two groups that received either: (1) an educational programme using written and oral materials that focused on "breast cancer preventive behaviours" (e.g. having a healthy diet and positive beliefs towards breast self‐examining behaviour) or (2) no intervention. Women were employed at ZUMS and aged between 35 and 39 years. The PEP study assessed outcomes at one month, one year and two years after the intervention. The ZUMS study measured outcomes at one month after the intervention. Since the studies were very different in terms of the participants' age, interventions, outcomes and time points measured, the results are reported separately. 
Knowledge of breast cancer symptoms In PEP: women's knowledge of breast cancer symptoms seemed to somewhat improve after receiving either the written booklet or written booklet plus verbal interaction. These results improved when compared to usual care at 2 years postintervention. In ZUMS: women’s awareness of breast cancer symptoms increased one month after the educational programme. In PEP: knowledge of age‐related risk increased for women who had received a written booklet and interacted with a healthcare professional compared to usual care at 2 years postintervention. For women who only received the booklet, there was less of a comparable increase in knowledge. In ZUMS: this study only measured if women perceived themselves to be at risk of getting breast cancer. This self‐perception of risk did increase at one month following the intervention. In PEP: women's reported monthly breast checking increased, but not significantly, at 2 years postintervention compared to usual care. In ZUMS: women's reported "breast cancer preventive behaviours" increased one month after the intervention. Specifically, this refers to their positive beliefs towards breast self‐examining behaviour. In PEP: women's breast cancer awareness overall did not change after receiving a booklet alone compared to usual care at 2 years after the intervention. However, breast cancer awareness increased in women who had received a written booklet and interacted with a healthcare professional. This behaviour change was in comparison to usual care at 2 years postintervention. In ZUMS: women's "breast cancer preventive behaviours" were reported to increase at one month. | Breast cancer continues to be the most commonly diagnosed cancer in women globally. Early detection, diagnosis and treatment of breast cancer are key to better outcomes. Since many women will discover a breast cancer symptom themselves, it is important that they are breast cancer aware i.e. 
have the knowledge, skills and confidence to detect breast changes and present promptly to a healthcare professional. Objectives To assess the effectiveness of interventions for raising breast cancer awareness in women. Search methods We searched the Cochrane Breast Cancer Group's Specialised Register (searched 25 January 2016), Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 12) in the Cochrane Library (searched 27 January 2016), MEDLINE OvidSP (2008 to 27 January 2016), Embase (Embase.com, 2008 to 27 January 2016), the World Health Organization’s International Clinical Trials Registry Platform (ICTRP) search portal and ClinicalTrials.gov (searched 27 February 2016). We also searched the reference lists of identified articles and reviews and the grey literature for conference proceedings and published abstracts. No language restriction was applied. Selection criteria Randomised controlled trials (RCTs) focusing on interventions for raising women’s breast cancer awareness i.e. knowledge of potential breast cancer symptoms/changes and the confidence to look at and feel their breasts, using any means of delivery, i.e. one‐to‐one/group/mass media campaign(s). Data collection and analysis Two authors selected studies, independently extracted data and assessed risk of bias. We reported the odds ratio (OR) and 95% confidence intervals (CIs) for dichotomous outcomes and mean difference (MD) and standard deviation (SD) for continuous outcomes. Since it was not possible to combine data from included studies due to their heterogeneity, we present a narrative synthesis. We assessed the quality of evidence using GRADE methods.
We included two RCTs involving 997 women: one RCT (867 women) randomised women to receive either a written booklet and usual care (intervention group 1), a written booklet and usual care plus a verbal interaction with a radiographer or research psychologist (intervention group 2) or usual care (control group); and the second RCT (130 women) randomised women to either an educational programme (three sessions of 60 to 90 minutes) or no intervention (control group). Knowledge of breast cancer symptoms In the first study, knowledge of non‐lump symptoms increased in intervention group 1 compared to the control group at two years postintervention, but not significantly (OR 1.1, 95% CI 0.7 to 1.6; P = 0.66; 449 women; moderate‐quality evidence). Similarly, at two years postintervention, knowledge of symptoms increased in the intervention group 2 compared to the control group but not significantly (OR 1.4, 95% CI 0.9 to 2.1; P = 0.11; 434 women; moderate‐quality evidence). In the second study, women’s awareness of breast cancer symptoms had increased one month post intervention in the educational group (MD 3.45, SD 5.11; 65 women; low‐quality evidence) compared to the control group (MD −0.68, SD 5.93; 65 women; P < 0.001), where there was a decrease in awareness. Knowledge of age‐related risk In the first study, women’s knowledge of age‐related risk of breast cancer increased, but not significantly, in intervention group 1 compared to control at two years postintervention (OR 1.8; 95% CI 0.9 to 3.5; P < 0.08; 447 women; moderate‐quality evidence). Women's knowledge of risk increased significantly in intervention group 2 compared to control at two years postintervention (OR 4.8, 95% CI 2.6 to 9.0; P < 0.001; 431 women; moderate‐quality evidence). 
In the second study, women’s perceived susceptibility (how at risk they considered themselves) to breast cancer had increased significantly one month post intervention in the educational group (MD 1.31, SD 3.57; 65 women; low‐quality evidence) compared to the control group (MD −0.55, SD 3.31; 65 women; P = 0.005), where a decrease in perceived susceptibility was noted. Frequency of Breast Checking In the first study, no significant change was noted for intervention group 1 compared to control at two years postintervention (OR 1.1, 95% CI 0.8 to 1.6; P = 0.54; 457 women; moderate‐quality evidence). Monthly breast checking increased, but not significantly, in intervention group 2 compared to control at two years postintervention (OR 1.3, 95% CI 0.9 to 1.9; P = 0.14; 445 women; moderate‐quality evidence). In the second study, women’s breast cancer preventive behaviours increased significantly one month post intervention in the educational group (MD 1.21, SD 2.54; 65 women; low‐quality evidence) compared to the control group (MD 0.15, SD 2.94; 65 women; P < 0.045). Breast Cancer Awareness Women’s overall breast cancer awareness did not change in intervention group 1 compared to control at two years postintervention (OR 1.8, 95% CI 0.6 to 5.30; P = 0.32; 435 women; moderate‐quality evidence) while overall awareness increased in the intervention group 2 compared to control at two years postintervention (OR 8.1, 95% CI 2.7 to 25.0; P < 0.001; 420 women; moderate‐quality evidence). In the second study, there was a significant increase in scores on the Health Belief Model (that included the constructs of awareness and perceived susceptibility) at one month postintervention in the educational group (mean 1.21, SD 2.54; 65 women) compared to the control group (mean 0.15, SD 2.94; 65 women; P = 0.045). 
Neither study reported outcomes relating to motivation to check their breasts, confidence to seek help, time from breast symptom discovery to presentation to a healthcare professional, intentions to seek help, quality of life, adverse effects of the interventions, stages of breast cancer, survival estimates or breast cancer mortality rates. Based on the results of two RCTs, a brief intervention has the potential to increase women’s breast cancer awareness. However, findings of this review should be interpreted with caution, as GRADE assessment identified moderate‐quality evidence in only one of the two studies reviewed. In addition, the included trials were heterogeneous in terms of the interventions, population studied and outcomes measured. Therefore, current evidence cannot be generalised to the wider context. Further studies including larger samples, validated outcome measures and longitudinal approaches are warranted. |
t57 | It is estimated that 350,000 women per year in the United Kingdom and millions more worldwide experience perineal stitches because of a childbirth‐related natural tear or cut (episiotomy). Sometimes the perineal wound breaks down (opens up). This may be because it becomes infected, which could lead to systemic infection and sepsis. The current management of broken down wounds varies widely between individual health practitioners and hospitals. For most women the broken down perineal wound is left to heal naturally (managed expectantly). This is a slow process and it can take several weeks for the wound to heal completely, resulting in persistent pain and discomfort at the perineal wound site, as well as possible urinary retention and defecation problems. The alternative is re‐stitching. Due to the lack of research evidence, we do not know the best way to treat this type of complication. This review looked at randomised controlled trials of re‐stitching broken down wounds compared with non‐stitching. As the studies were small and of poor quality, it is not possible to draw conclusions about the best way to manage wound breakdown after childbirth. Therefore, there is an urgent need to conduct further studies to compare fully the benefits and risks of both treatments. | Each year approximately 350,000 women in the United Kingdom and millions more worldwide experience perineal suturing following childbirth. The postpartum management of perineal trauma is a core component of routine maternity care. However, for those women whose perineal wound dehisces (breaks down), the management varies depending on individual practitioners' preferences as there is limited scientific evidence and no clear guidelines to inform best practice. For most women the wound will be managed expectantly whereas others may be offered secondary suturing.
Objectives To evaluate the therapeutic effectiveness of secondary suturing of dehisced perineal wounds compared to non‐suturing (healing by secondary intention, expectancy). Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 July 2013) and reference lists of retrieved studies. Selection criteria Randomised controlled trials of secondary suturing of dehisced perineal wounds (second‐, third‐ or fourth‐degree tear or episiotomy), following wound debridement and the removal of any remaining suture material within the first six weeks following childbirth compared with non‐suturing. Data collection and analysis Three review authors independently assessed trials for inclusion. Two review authors independently assessed trial quality and extracted data. Data were checked for accuracy. Two small studies of poor methodological quality including 52 women with a dehisced and/or infected episiotomy wound at point of entry have been included. Only one small study presented data in relation to wound healing at less than four weeks, (the primary outcome measure for this review), although no reference was made to demonstrate how healing was measured. There was a trend to favour this outcome in the resuturing group, however, this difference was not statistically significant (risk ratio (RR) 1.69, 95% confidence interval (CI) 0.73 to 3.88, one study, 17 women). Similarly, only one trial reported on rates of dyspareunia (a secondary outcome measure for this review) at two months and six months with no statistically significant difference between both groups; two months, (RR 0.44, 95% CI 0.18 to 1.11, one study, 26 women) and six months, (RR 0.39, 95% CI 0.04 to 3.87, one study 32 women). This trial also included data on the numbers of women who resumed sexual intercourse by two months and six months. 
Significantly more women in the secondary suturing group had resumed intercourse by two months (RR 1.78, 95% CI 1.10 to 2.89, one study, 35 women), although by six months there was no significant difference between the two groups (RR 1.08, 95% CI, 0.91 to 1.28). Neither of the trials included data in relation to the following prespecified secondary outcome measures: pain at any time interval; the woman's satisfaction with the aesthetic results of the perineal wound; exclusive breastfeeding; maternal anxiety or depression. Based on this review, there is currently insufficient evidence available to either support or refute secondary suturing for the management of broken down perineal wounds following childbirth. There is an urgent need for a robust randomised controlled trial to evaluate fully the comparative effects of both treatment options. |
t58 | Antibiotics are used to prevent life‐threatening complications for mother and baby when the amniotic fluid is infected, but it is not known which antibiotic is best. Amniotic fluid is the 'water' surrounding the baby inside the womb. If this fluid becomes infected, it can be life‐threatening for the mother and baby, and the baby should be born within 12 hours. Infection can come from bacteria entering the womb from the vagina, or from a medical procedure that penetrates the membranes ('bag' around baby and waters). Antibiotics reduce the risk of dangerous complications for both mother and baby. The review found there is not enough evidence from trials to show which antibiotic is best or whether it should be given before or after the baby is born. | Intraamniotic infection is associated with maternal morbidity and neonatal sepsis, pneumonia and death. Although antibiotic treatment is accepted as the standard of care, few studies have been conducted to examine the effectiveness of different antibiotic regimens for this infection and whether to administer antibiotics intrapartum or postpartum. Objectives To study the effects of different maternal antibiotic regimens for intraamniotic infection on maternal and perinatal morbidity and mortality. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (May 2002) and the Cochrane Controlled Trials Register (The Cochrane Library, Issue 2, 2002). We updated the search of the Cochrane Pregnancy and Childbirth Group's Trials Register on 30 April 2010 and added the results to the awaiting classification section of the review. Selection criteria Trials where there was a randomized comparison of different antibiotic regimens to treat women with a diagnosis of intraamniotic infection were included. The primary outcome was perinatal morbidity. Data collection and analysis Data were extracted from each publication independently by the authors. 
Two eligible trials (181 women) were included in this review. No trials were identified that compared antibiotic treatment with no treatment. Intrapartum treatment with antibiotics for intraamniotic infection was associated with a reduction in neonatal sepsis (relative risk (RR) 0.08; 95% confidence interval (CI) 0.00, 1.44) and pneumonia (RR 0.15; CI 0.01, 2.92) compared with treatment given immediately postpartum, but these results did not reach statistical significance (number of women studied = 45). There was no difference in the incidence of maternal bacteremia (RR 2.19; CI 0.25, 19.48). There was no difference in the outcomes of neonatal sepsis (RR 2.16; CI 0.20, 23.21) or neonatal death (RR 0.72; CI 0.12, 4.16) between a regimen with and without anaerobic activity (number of women studied = 133). There was a trend towards a decrease in the incidence of post‐partum endometritis in women who received treatment with ampicillin, gentamicin and clindamycin compared with ampicillin and gentamicin alone, but this did not reach statistical significance (RR 0.54; CI 0.19, 1.49). The conclusions that can be drawn from this meta‐analysis are limited due to the small number of studies. For none of the outcomes was a statistically significant difference seen between the different interventions. Current consensus is for the intrapartum administration of antibiotics when the diagnosis of intraamniotic infection is made; however, the results of this review neither support nor refute this although there was a trend towards improved neonatal outcomes when antibiotics were administered intrapartum. No recommendations can be made on the most appropriate antimicrobial regimen to choose to treat intraamniotic infection. [Note: The six citations in the awaiting classification section of the review may alter the conclusions of the review once assessed.] |
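The abstracts in these records report dichotomous treatment effects as risk ratios (RR) with 95% confidence intervals computed on the log scale. As an illustrative aside (the counts below are hypothetical, not taken from any trial above), that standard calculation can be sketched in Python:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Relative risk of group A vs group B with a 95% CI (log method)."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of ln(RR) from the 2x2 table counts
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical counts: 4/20 events in the treatment arm, 9/25 in control
rr, lower, upper = risk_ratio(4, 20, 9, 25)
print(f"RR {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# prints "RR 0.56 (95% CI 0.20 to 1.54)"
```

A CI that spans 1.0, as here, is what the abstracts describe as a difference that "did not reach statistical significance".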
t59 | Transient tachypnoea of the newborn is characterized by a high respiratory rate (more than 60 breaths per minute) and signs of respiratory distress (difficulty in breathing). It typically appears within the first two hours of life in babies born at or after 34 weeks' gestational age. Although transient tachypnoea of the newborn usually improves without treatment, it might be associated with wheezing in late childhood. The idea behind using steroids for transient tachypnoea of the newborn is based on studies showing that steroids can reduce fluid from small cavities within the lungs called the alveoli. In this Cochrane Review, we reported and critically analyzed the available evidence on the benefit and harms of steroids in the management of transient tachypnoea of the newborn. We identified and included one study, which compared steroids with placebo (dummy pill) in 49 newborns. The steroids were given to babies by inhalation. Results Steroids did not improve lung function or reduce the need for breathing support. Overall, we are uncertain as to whether steroids have an important effect on rapid breathing because the results are imprecise and based on only one small study. | Transient tachypnoea of the newborn (TTN) is characterized by tachypnoea and signs of respiratory distress. Transient tachypnoea typically appears within the first two hours of life in term and late preterm newborns. The administration of corticosteroids might compensate for the impaired hormonal changes which occur when infants are delivered late preterm, or at term but before the onset of spontaneous labour (elective caesarean section). Corticosteroids might improve the clearance of liquid from the lungs, thus reducing the effort required to breathe and improving respiratory distress. 
Objectives The objective of this review is to assess whether postnatal corticosteroids — compared to placebo, no treatment or any other drugs administered to treat TTN — are effective and safe in the treatment of TTN in infants born at 34 weeks' gestational age or more. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2019, Issue 2), MEDLINE (1996 to 19 February 2019), Embase (1980 to 19 February 2019) and CINAHL (1982 to 19 February 2019). We applied no language restrictions. We searched clinical trial registries for ongoing studies. Selection criteria We included randomized controlled trials, quasi‐randomized controlled trials and cluster‐randomized trials comparing postnatal corticosteroids versus placebo or no treatment or any other drugs administered to infants born at 34 weeks' gestational age or more and less than three days of age with TTN. Data collection and analysis For each of the included trials, two review authors independently extracted data (e.g. number of participants, birth weight, gestational age, duration of oxygen therapy, need for continuous positive airway pressure, need for mechanical ventilation, duration of mechanical ventilation, etc.) and assessed the risk of bias (e.g. adequacy of randomization and blinding, completeness of follow‐up). The primary outcomes considered in this review were need for nasal continuous positive airway pressure and need for mechanical ventilation. We used the GRADE approach to assess the certainty of the evidence. One trial, which included 49 infants, met the inclusion criteria. The trial compared the use of inhaled corticosteroids (budesonide) with placebo. We found no differences between groups in terms of need for nasal continuous positive airway pressure (risk ratio (RR) 1.27, 95% confidence interval (CI) 0.65 to 2.51; 1 study, 49 participants) and need for mechanical ventilation (RR 0.52, 95% CI 0.05 to 5.38; 1 study, 49 participants). 
The type of mechanical ventilation used in the included study was high‐frequency oscillation. Tests for heterogeneity were not applicable for any of the analyses as only one study was included. Out of the secondary outcomes we deemed to be of greatest importance to patients, the study only reported on duration of hospital stay, which was no different between groups. We identified no ongoing trials. Given the paucity and very low quality of the available evidence, we are unable to determine the benefits and harms of postnatal administration of either inhaled or systemic corticosteroids for the management of TTN. |
t60 | A team of Cochrane researchers investigated how well sulthiame worked when it was used as an add‐on antiepileptic medicine (medicines that reduce seizures) in people with any type of epilepsy. Epilepsy is a common neurological (brain) condition that is characterised by repeated seizures. Most people respond well to conventional antiepileptic medicines, however, about 30% continue to have seizures. These people are said to have drug‐resistant epilepsy. Sulthiame is an antiepileptic drug that is used widely in some European countries and in Israel. Sometimes it is used as an additional (add‐on) antiepileptic medicine for people with epilepsy, alongside an existing antiepileptic medicine. Randomised controlled trials produce the most reliable evidence for medicines. The team searched the medical literature for randomised controlled trials that compared sulthiame as an add‐on therapy to add‐on placebo (an inactive, dummy drug), or another antiepileptic medicine. The researchers found one relevant trial that included 37 infants, aged from three to 15 months, who had a diagnosis of West syndrome, a type of epilepsy. All infants were started on an antiepileptic medicine, pyridoxine, three days before they added sulthiame or placebo. The infants' parents did not know which add‐on therapy their children received. Very uncertain evidence from the trial suggests that sulthiame may stop seizures in people with West syndrome whose seizures do not stop with pyridoxine. Thirty per cent more infants had their seizures stop when they received add‐on sulthiame (6/20 participants) compared to add‐on placebo (0/17 participants). This difference was not statistically significant, mainly because there were so few infants included in the trial. More infants experienced somnolence (drowsiness) when they received add‐on sulthiame (4/20), compared to those who received add‐on placebo (1/17), but again, this was not statistically significant. 
| This is an updated version of the Cochrane Review previously published in the Cochrane Database of Systematic Reviews 2015, Issue 10. Epilepsy is a common neurological condition, characterised by recurrent seizures. Most people respond to conventional antiepileptic drugs, however, around 30% will continue to experience seizures, despite treatment with multiple antiepileptic drugs. Sulthiame, also known as sultiame, is a widely used antiepileptic drug in Europe and Israel. We present a summary of the evidence for the use of sulthiame as add‐on therapy in epilepsy. Objectives To assess the efficacy and tolerability of sulthiame as add‐on therapy for people with epilepsy of any aetiology compared with placebo or another antiepileptic drug. Search methods For the latest update, we searched the Cochrane Register of Studies (CRS Web), which includes the Cochrane Epilepsy Group's Specialized Register and CENTRAL (17 January 2019), MEDLINE Ovid (1946 to January 16, 2019), ClinicalTrials.gov and the WHO ICTRP Search Portal (17 January 2019). We imposed no language restrictions. We contacted the manufacturers of sulthiame, and researchers in the field to seek any ongoing or unpublished studies. Selection criteria Randomised controlled trials of add‐on sulthiame, with any level of blinding (single, double or unblinded) in people of any age, with epilepsy of any aetiology. Data collection and analysis Two review authors independently selected trials for inclusion, and extracted relevant data. We assessed these outcomes: (1) 50% or greater reduction in seizure frequency between baseline and end of follow‐up; (2) complete cessation of seizures during follow‐up; (3) mean seizure frequency; (4) time‐to‐treatment withdrawal; (5) adverse effects; and (6) quality of life. We used intention‐to‐treat for primary analyses. We presented results as risk ratios (RR) with 95% confidence intervals (CIs). However, due to the paucity of trials, we mainly conducted a narrative analysis. 
We included one placebo‐controlled trial that recruited 37 infants with newly diagnosed West syndrome. This trial was funded by DESITIN Pharma, Germany. During the study, sulthiame was given as an add‐on therapy to pyridoxine. No data were reported for the outcomes: 50% or greater reduction in seizure frequency between baseline and end of follow‐up; mean seizure frequency; or quality of life. For complete cessation of seizures during a nine‐day follow‐up period for add‐on sulthiame versus placebo, the RR was 11.14 (95% CI 0.67 to 184.47; very low‐certainty evidence), however, this difference was not shown to be statistically significant (P = 0.09). The number of infants experiencing one or more adverse events was not significantly different between the two treatment groups (RR 0.85, 95% CI 0.44 to 1.64; very low‐certainty evidence; P = 0.63). Somnolence was more prevalent amongst infants randomised to add‐on sulthiame compared to placebo, but again, the difference was not statistically significant (RR 3.40, 95% CI 0.42 to 27.59; very low‐certainty evidence; P = 0.25). We were unable to conduct meaningful analysis of time‐to‐treatment withdrawal and adverse effects due to incomplete data. Sulthiame may lead to a cessation of seizures when used as an add‐on therapy to pyridoxine in infants with West syndrome, however, we are very uncertain about the reliability of this finding. The included study was small and had a significant risk of bias, largely due to the lack of details regarding blinding and the incomplete reporting of outcomes. Both issues negatively impacted the certainty of the evidence. No conclusions can be drawn about the occurrence of adverse effects, change in quality of life, or mean reduction in seizure frequency. No evidence exists for the use of sulthiame as an add‐on therapy in people with epilepsy outside West syndrome. 
Large, multi‐centre randomised controlled trials are needed to inform clinical practice, if sulthiame is to be used as an add‐on therapy for epilepsy. |
t61 | Little evidence that antibiotics or alpha‐blocker drugs help to relieve chronic abacterial prostatitis, but heat treatments might be effective and more research is needed. Chronic abacterial prostatitis (CAP) involves inflammation of the prostate gland and commonly affects men of all ages. It can cause problems urinating, including discomfort and pain, increased frequency and urge, or problems emptying the bladder. Treatments for CAP include heat treatments (using microwaves) and several different types of drugs. The review found that there is little evidence to support the routine use of antibiotic or alpha‐blocking drugs for CAP. Heat treatments in comparison may be useful. | Chronic abacterial prostatitis is a common disabling but enigmatic condition with a symptom complex of pelvic area pain and lower urinary tract symptoms. The scope of treatments recommended for chronic abacterial prostatitis is a testament to how little is known about what causes the condition and how to treat it. As a result, chronic abacterial prostatitis often causes physician frustration, patient confusion and dissatisfaction, variable thresholds for referral, and potentially inappropriate antibiotic use. Objectives Examine the evidence regarding the effectiveness of therapies for chronic abacterial prostatitis. Search methods Studies were identified through a search of MEDLINE (1966 to 2000), the Cochrane Library, bibliographies of identified articles and reviews, and contact with an expert. Selection criteria Studies were eligible if they: (1) are randomized controlled trials (RCTs) or controlled clinical trials (CCTs) (2) involve men with chronic abacterial prostatitis (3) control group receives placebo, sham intervention, active pharmacologic or device therapy for chronic abacterial prostatitis and (4) outcomes data are provided. Eligibility was assessed by at least two independent observers. 
Data collection and analysis Study information on patients, interventions, and outcomes was extracted independently by two reviewers. The main outcome was the efficacy of treatment for chronic abacterial prostatitis versus control in improving urologic symptom scale scores or global report of urinary tract symptoms. Secondary outcomes included changes in the prostate examination, uroflowmetry, urodynamics, analysis of urine, expressed prostatic secretions and seminal fluid, and prostate ultrasonography. The 15 treatment trials involved: medications used to treat benign prostatic hyperplasia (n = 4 trials); anti‐inflammatory medications (n = 2 trials); antibiotics (n = 1 trial); thermotherapy (n = 5 trials); and miscellaneous medications (n = 3 trials). The disparity between studies did not permit quantitative analysis. There were a total of 600 enrollees (age range 38 to 45). All but one of the trials were done outside the United States. The treatment trials are few, weak methodologically, and involve small sample sizes. The routine use of antibiotics and alpha blockers for chronic abacterial prostatitis is not supported by the existing evidence. The small studies examining thermal therapy appear to demonstrate benefit of clinical significance and merit further evaluation. Additional treatment trials are required and they should report important patient characteristics (e.g., race), study design details and utilize clinically relevant and validated assessment measures. |
t62 | LTC is the name used for residential homes, which provide personal care, supervision with medications and help with day‐to‐day activities, and nursing homes, which provide 24‐hour nursing care. Delirium is a common and serious illness for older people living in LTC. Delirium is a condition that causes confusion, usually over a few hours or days. Some people with delirium become quiet and sleepy while others become agitated and disorientated, so it can be a very distressing condition. Delirium can increase the chances of being admitted to hospital and of developing dementia, and can increase the risk of death. Importantly, studies of people in hospital have shown that it is possible to prevent around a third of cases of delirium by providing an environment and care plan that target the main delirium risk factors, including providing better lighting and signs to avoid disorientation; avoiding unnecessary use of catheters to help prevent infection; and avoiding certain medications which increase the risk of delirium. This review has searched for and assessed research on preventing delirium in older people living in LTC. The evidence is current to February 2019. We found three studies that included 3851 participants. Two studies took place in the US and one study in the UK. One study tested whether delirium could be prevented by calculating how much fluid an older person in a care home needs each day and ensuring hydration was maintained. There were 98 people in the study, which lasted four weeks. One study tested the effect of a computer program which searched for prescriptions of medications that might increase the chance of developing delirium, to enable a pharmacist to adjust or stop them. There were 3538 people in the study, which lasted 12 months. One study tested an enhanced educational intervention which included learning sessions on delirium with care home staff and group meetings to identify targets for preventing delirium.
It was not possible to determine if the hydration intervention reduced the occurrence of delirium. The computerised medication search programme probably reduced delirium, but there was no clear reduction in hospital admissions, deaths or falls. A potential problem is that it might not be possible to use this computer program in countries that do not have similar computer systems available. It was not possible to determine if the enhanced education intervention reduced the occurrence of delirium, and there was no clear reduction in the number of deaths. The intervention was probably associated with a reduction in hospital admissions. | Delirium is a common and distressing mental disorder. It is often caused by a combination of stressor events in susceptible people, particularly older people living with frailty and dementia. Adults living in institutional long‐term care (LTC) are at particularly high risk of delirium. An episode of delirium increases risks of admission to hospital, development or worsening of dementia and death. Multicomponent interventions can reduce the incidence of delirium by a third in the hospital setting. However, it is currently unclear whether interventions to prevent delirium in LTC are effective. This is an update of a Cochrane Review first published in 2014. Objectives To assess the effectiveness of interventions for preventing delirium in older people in institutional long‐term care settings. Search methods We searched ALOIS (www.medicine.ox.ac.uk/alois), the Cochrane Dementia and Cognitive Improvement Group (CDCIG)'s Specialised Register of dementia trials (dementia.cochrane.org/our‐trials‐register), to 27 February 2019. The search was sufficiently sensitive to identify all studies relating to delirium.
We ran additional separate searches in the Cochrane Central Register of Controlled Trials (CENTRAL), major healthcare databases, trial registers and grey literature sources to ensure that the search was comprehensive. Selection criteria We included randomised controlled trials (RCTs) and cluster‐randomised controlled trials (cluster‐RCTs) of single and multicomponent, non‐pharmacological and pharmacological interventions for preventing delirium in older people (aged 65 years and over) in permanent LTC residence. Data collection and analysis We used standard methodological procedures expected by Cochrane. Primary outcomes were prevalence, incidence and severity of delirium; and mortality. Secondary outcomes included falls, hospital admissions and other adverse events; cognitive function; new diagnoses of dementia; activities of daily living; quality of life; and cost‐related outcomes. We used risk ratios (RRs) as measures of treatment effect for dichotomous outcomes, hazard ratios (HR) for time‐to‐event outcomes and mean difference (MD) for continuous outcomes. For each outcome, we assessed the overall certainty of the evidence using GRADE methods. We included three trials with 3851 participants. All three were cluster‐RCTs. Two of the trials were of complex, single‐component, non‐pharmacological interventions and one trial was a feasibility trial of a complex, multicomponent, non‐pharmacological intervention. Risk of bias ratings were mixed across the three trials. Due to the heterogeneous nature of the interventions, we did not combine the results statistically, but produced a narrative summary. It was not possible to determine the effect of a hydration‐based intervention on delirium incidence (RR 0.85, 95% confidence interval (CI) 0.18 to 4.00; 1 study, 98 participants; very low‐certainty evidence downgraded for risk of bias and very serious imprecision). This study did not assess delirium prevalence, severity or mortality. 
The introduction of a computerised system to identify medications that may contribute to delirium risk and trigger a medication review was probably associated with a reduction in delirium incidence (12‐month HR 0.42, CI 0.34 to 0.51; 1 study, 7311 participant‐months; moderate‐certainty evidence downgraded for risk of bias) but probably had little or no effect on mortality (HR 0.88, CI 0.66 to 1.17; 1 study, 9412 participant‐months; moderate‐certainty evidence downgraded for imprecision), hospital admissions (HR 0.89, CI 0.72 to 1.10; 1 study, 7599 participant‐months; moderate‐certainty evidence downgraded for imprecision) or falls (HR 1.03, CI 0.92 to 1.15; 1 study, 2275 participant‐months; low‐certainty evidence downgraded for imprecision and risk of bias). Delirium prevalence and severity were not assessed. In the enhanced educational intervention study, aimed at changing practice to address key delirium risk factors, it was not possible to determine the effect of the intervention on delirium incidence (RR 0.62, 95% CI 0.16 to 2.39; 1 study, 137 resident months; very low‐certainty evidence downgraded for risk of bias and serious imprecision) or delirium prevalence (RR 0.57, 95% CI 0.15 to 2.19; 1 study, 160 participants; very low‐certainty evidence downgraded for risk of bias and serious imprecision). There was probably little or no effect on mortality (RR 0.82, CI 0.50 to 1.34; 1 study, 215 participants; moderate‐certainty evidence downgraded for imprecision). The intervention was probably associated with a reduction in hospital admissions (RR 0.67, CI 0.57 to 0.79; 1 study, 494 participants; moderate‐certainty evidence downgraded due to indirectness). Our review identified limited evidence on interventions for preventing delirium in older people in LTC. 
A software‐based intervention to identify medications that could contribute to delirium risk and trigger a pharmacist‐led medication review probably reduces the incidence of delirium in older people in institutional LTC. This is based on one large RCT in the US and may not be practical in other countries or settings which do not have comparable information technology services available in care homes. In the educational intervention aimed at identifying risk factors for delirium and developing bespoke solutions within care homes, it was not possible to determine the effect of the intervention on delirium incidence, prevalence or mortality. This evidence is based on a small feasibility trial. Our review identified three ongoing trials of multicomponent delirium prevention interventions. We identified no trials of pharmacological agents. Future trials of multicomponent non‐pharmacological delirium prevention interventions for older people in LTC are needed to help inform the provision of evidence‐based care for this vulnerable group. |
t63 | High blood pressure increases risks of stroke and heart attack. In people with moderate elevations of blood pressure, drugs that lower blood pressure reduce the incidence of stroke and heart attack. It is not known whether blood pressure‐lowering drugs reduce sudden death (death of unknown cause within one hour of the onset of acute symptoms or within 24 hours of observation of the patient as alive and symptom free). We found 15 trials including 39,908 people that investigated whether blood pressure‐lowering drugs reduce sudden death. This review presents moderate‐quality evidence to show that blood pressure‐lowering drugs reduce heart attacks but do not appear to reduce sudden cardiac death. This suggests that sudden cardiac death may not be caused primarily by heart attack. Continued research is needed to determine the causes of sudden cardiac death. | High blood pressure is an important public health problem because of associated risks of stroke and cardiovascular events. Antihypertensive drugs are often used in the belief that lowering blood pressure will prevent cardiac events, including myocardial infarction and sudden death (death of unknown cause within one hour of the onset of acute symptoms or within 24 hours of observation of the patient as alive and symptom free). Objectives To assess the effects of antihypertensive pharmacotherapy in preventing sudden death, non‐fatal myocardial infarction and fatal myocardial infarction among hypertensive individuals. Search methods We searched the Cochrane Hypertension Specialised Register (all years to January 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online (2016, Issue 1), Ovid MEDLINE (1946 to January 2016), Ovid EMBASE (1980 to January 2016) and ClinicalTrials.gov (all years to January 2016). 
Selection criteria All randomised trials evaluating any antihypertensive drug treatment for hypertension, defined, when possible, as baseline resting systolic blood pressure of at least 140 mmHg and/or resting diastolic blood pressure of at least 90 mmHg. Comparisons included one or more antihypertensive drugs versus placebo, or versus no treatment. Data collection and analysis Review authors independently extracted data. Outcomes assessed were sudden death, fatal and non‐fatal myocardial infarction and change in blood pressure. We included 15 trials (39,908 participants) that evaluated antihypertensive pharmacotherapy for a mean duration of follow‐up of 4.2 years. This review provides moderate‐quality evidence to show that antihypertensive drugs do not reduce sudden death (risk ratio (RR) 0.96, 95% confidence interval (CI) 0.81 to 1.15) but do reduce both non‐fatal myocardial infarction (RR 0.85, 95% CI 0.74 to 0.98; absolute risk reduction (ARR) 0.3% over 4.2 years) and fatal myocardial infarction (RR 0.75, 95% CI 0.62 to 0.90; ARR 0.3% over 4.2 years). Withdrawals due to adverse effects were increased in the drug treatment group to 12.8%, as compared with 6.2% in the no treatment group. Although antihypertensive drugs reduce the incidence of fatal and non‐fatal myocardial infarction, they do not appear to reduce the incidence of sudden death. This suggests that sudden cardiac death may not be caused primarily by acute myocardial infarction. Continued research is needed to determine the causes of sudden cardiac death. |
t64 | Degenerative changes of the cervical spine are quite common and can cause severe neck pain, impairment and decreased quality of life. Degenerative disc disease of the cervical spine can result in severe pain, instability and radiculopathy (pain spreading down the arms and into the head), myelopathy (spasticity and weakness of arms or hands, which may include "numb and clumsy" hands) or both. Chinese oral and topical herbal medicines are being used to treat many neck disorders. Description of the trials Two Chinese oral herbal medications were tested in three randomized controlled trials that included 701 adults with chronic neck pain with radicular signs or symptoms or myelopathy. One oral herbal medication was compared with Mobicox (a non‐steroidal anti‐inflammatory medication) and Methycobal (a drug to reduce numbness and tingling in the arms), and the other (Compound Qishe Tablet) with placebo and Jingfukang. A topical herbal medicine (Compound Extractum Nucis Vomicae) was compared with Diclofenac Diethylamine Emulgel (a non‐steroidal anti‐inflammatory medication). Findings Oral herbal medications may reduce neck pain more than placebo and Jingfukang. A topical herbal medicine (Compound Extractum Nucis Vomicae) also relieved neck pain in the short term (four weeks), but the trial had a high risk of bias. Limitations All four included studies were in Chinese and two of these studies were unpublished. Half of the trials had a low risk of bias, but they only tested the effects of short‐term use (up to eight weeks). There is a need for trials with adequate numbers of participants that address the long‐term efficacy or effectiveness of Chinese herbal medicine compared to placebo. Conclusion For chronic neck pain with or without radicular symptoms, there is low quality evidence that Compound Qishe Tablet is more effective than placebo for pain relief, measured at the end of the treatment.
There is a need for trials with adequate numbers of participants that address long‐term efficacy or effectiveness of herbal medicine compared to placebo. | Chronic neck pain with radicular signs or symptoms is a common condition. Many patients use complementary and alternative medicine, including traditional Chinese medicine, to address their symptoms. Objectives To assess the efficacy of Chinese herbal medicines in treating chronic neck pain with radicular signs or symptoms. Search methods We electronically searched CENTRAL ( The Cochrane Library 2009, issue 3), MEDLINE, EMBASE, CINAHL and AMED (beginning to October 1, 2009), the Chinese Biomedical Database and related herbal medicine databases in Japan and South Korea (1979 to 2007). We also contacted content experts and handsearched a number of journals published in China. Selection criteria We included randomized controlled trials with adults with a clinical diagnosis of cervical degenerative disc disease, cervical radiculopathy or myelopathy supported by appropriate radiological findings. The interventions were Chinese herbal medicines, defined as products derived from raw or refined plants or parts of plants, minerals and animals that are used for medicinal purposes in any form. The primary outcome was pain relief, measured with a visual analogue scale, numeric scale or other validated tool. Data collection and analysis The data were independently extracted and recorded by two review authors on a pre‐developed form. Risk of bias and clinical relevance were assessed separately by two review authors using the twelve criteria and the five questions recommended by the Cochrane Back Review Group. Disagreements were resolved by consensus. All four included studies were in Chinese; two of which were unpublished. Effect sizes were not clinically relevant and there was low quality evidence for all outcomes due to study limitations and sparse data (single studies). 
Two trials (680 participants) found that Compound Qishe Tablets relieved pain better in the short‐term than either placebo or Jingfukang; one trial (60 participants) found that an oral herbal formula of Huangqi (Radix Astragali) 18 g, Dangshen (Radix Codonopsis) 9 g, Sanqi (Radix Notoginseng) 9 g, Chuanxiong (Rhizoma Chuanxiong) 12 g, Lujiao (Cornu Cervi Pantotrichum) 12 g, and Zhimu (Rhizoma Anemarrhenae) 12 g relieved pain better than Mobicox or Methycobal; and one trial (360 participants) showed that a topical herbal medicine, Compound Extractum Nucis Vomicae, relieved pain better than Diclofenac Diethylamine Emulgel. There is low quality evidence that an oral herbal medication, Compound Qishe Tablet, reduced pain more than placebo or Jingfukang and a topical herbal medicine, Compound Extractum Nucis Vomicae, reduced pain more than Diclofenac Diethylamine Emulgel. Further research is very likely to change both the effect size and our confidence in the results. |
t65 | The aim of this review was to evaluate the effectiveness of surgical and non‐surgical treatments for dissociated vertical deviation. Eye misalignment (strabismus) is the drifting of one or both eyes, which can be inward, outward, upward, or downward. This review evaluated the treatment for a specific type of upward drifting of one or both eyes known as dissociated vertical deviation (DVD). DVD can occur in both children and adults. For some people, DVD is controlled and is only detectable during testing. In others, DVD happens suddenly as the eye drifts up of its own accord. It can be hard for the person to gain control of the eye, which can cause distress in social situations. The condition also may cause double vision or eyestrain. Surgery is the common treatment for DVD; treatments that do not involve surgery are uncommon. There is limited evidence about the effectiveness of treatments (either surgical or non‐surgical) for DVD. We found four randomized controlled trials (RCTs) of surgical treatment for DVD. We found no studies evaluating non‐surgical treatments. One trial was conducted in Canada and compared a surgical repositioning procedure (anteriorization of the inferior oblique muscle) with or without resection; one in the USA compared surgical weakening of an eye muscle (superior rectus recession) with or without augmentation with a fixation suture; and two in the Czech Republic compared anteriorization of the inferior oblique muscle versus removal of a piece of the inferior oblique muscle (myectomy). Only one of the RCTs examined what we wanted to know: the proportion of participants who had surgical success. There was insufficient information available to determine the differences between any of the surgical procedures with respect to surgical success or any other outcome relevant to our review.
The most common adverse events from the surgical procedures were downward drifting of the eye after surgery (hypotropia), limited upward movement of the eye, and need for repeat surgery. | The term "strabismus" describes misalignment of the eyes. One or both eyes may deviate inward, outward, upward, or downward. Dissociated vertical deviation (DVD) is a well‐recognized type of upward drifting of one or both eyes, which can occur in children or adults. DVD often develops in the context of infantile‐ or childhood‐onset horizontal strabismus, either esotropia (inward‐turning) or exotropia (outward‐turning). For some individuals, DVD remains controlled and can only be detected during clinical testing. For others, DVD becomes spontaneously "manifest" and the eye drifts up of its own accord. Spontaneously manifest DVD can be difficult to control and often causes psychosocial concerns. Traditionally, DVD has been thought to be asymptomatic, although some individuals have double vision. More recently it has been suggested that individuals with DVD may also suffer from eyestrain. Treatment for DVD may be sought either due to psychosocial concerns or because of these symptoms. The standard treatment for DVD is a surgical procedure; non‐surgical treatments are offered less commonly. Although there are many studies evaluating different management options for the correction of DVD, a lack of clarity remains regarding which treatments are most effective. Objectives The objective of this review was to determine the effectiveness and safety of various surgical and non‐surgical interventions in randomized controlled trials of participants with DVD. 
Search methods We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register) (2015, Issue 8), Ovid MEDLINE, Ovid MEDLINE In‐Process and Other Non‐Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to August 2015), EMBASE (January 1980 to August 2015), PubMed (1948 to August 2015), Latin American and Caribbean Health Sciences Literature Database (LILACS) (1982 to August 2015), the metaRegister of Controlled Trials (mRCT) (www.controlled‐trials.com) (last searched 3 February 2014), ClinicalTrials.gov (www.clinicaltrials.gov), and the WHO International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 3 August 2015. Selection criteria We included randomized controlled trials (RCTs) of surgical and non‐surgical interventions for the correction of DVD. Data collection and analysis We used standard procedures expected by Cochrane. Two review authors independently completed eligibility screening, data abstraction, 'Risk of bias' assessment, and grading of the evidence. We found four RCTs eligible for inclusion in this review (248 eyes of 151 participants aged 6 months to 22 years). All trials were assessed as having unclear risk of bias overall due to insufficient reporting of study methods. One trial was conducted in Canada and compared anteriorization of the inferior oblique muscle with resection versus anteriorization of the inferior oblique muscle alone; one in the USA compared superior rectus recession with posterior fixation suture versus superior rectus recession alone; and two in the Czech Republic compared anteriorization of the inferior oblique muscle versus myectomy of the inferior oblique muscle. Only one trial reported data that allowed analysis of the primary outcome for this review, the proportion of participants with treatment success.
The difference between inferior oblique anteriorization plus resection versus inferior oblique anteriorization alone was uncertain when measured at least four months postoperatively (risk ratio 1.13, 95% confidence interval 0.60 to 2.11, 30 participants, very low‐quality evidence). Three trials measured the magnitude of hyperdeviation, but did not provide sufficient data for analysis. All four trials reported a relatively low rate of adverse events; hypotropia, limited elevation, and need for repeat surgery were reported as adverse events associated with some of the surgical interventions. No trials reported any other secondary outcome specified for our review. The four trials included in this review assessed the effectiveness of five different surgical procedures for the treatment of DVD. Nevertheless, insufficient reporting of study methods and data led to methodological concerns that undermine the conclusions of all studies. There is a pressing need for carefully executed RCTs of treatment for DVD in order to improve the evidence for the optimal management of this condition. |
t66 | The copper intrauterine device (copper IUD) is a highly‐effective non‐hormonal type of birth control, and is the most commonly used method in the world. However, use of the copper IUD is low in countries with relatively high rates of unintended pregnancy, such as the United Kingdom and United States. Our review looked at studies of different interventions to improve use of the copper IUD. We did computer searches for relevant studies and looked at the reference lists of study reports to identify more studies. Three studies on contraceptive counselling and referrals by community workers showed an increase in use of the copper IUD. Two studies on antenatal contraceptive counselling and one study on postnatal couple counselling, with provision of an information leaflet before being discharged from the maternity ward, also showed an increase in use of the copper IUD. A study on postnatal home visits and two studies on enhanced postabortion contraceptive counselling did not show an increase in use of the copper IUD. More high‐quality research is needed to look at the longer‐term effectiveness of interventions to improve use of the copper IUD. | Intrauterine devices (IUDs) are highly effective and are the most widely used reversible contraceptive method in the world. However, in developed countries IUDs are among the least common methods of contraception used. We evaluated the effect of interventions to increase uptake of the copper IUD, a long‐acting, reversible contraceptive method. Objectives To determine effectiveness of interventions to improve uptake and continuation of the copper IUD. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, POPLINE, PsycINFO, PubMed, ClinicalTrials.gov , International Clinical Trials Registry Platform (ICTRP) and OpenSIGLE. We also handsearched references of relevant reviews and included studies. 
Selection criteria We included randomised controlled trials (RCTs) and controlled before and after studies of interventions which measured use and uptake of contraception including the copper IUD as an outcome. Data collection and analysis Two authors independently screened the search results for relevant studies and extracted data from included studies. We used RevMan 5.1 to calculate Peto odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes. We conducted meta‐analysis by pooling data for similar types of intervention where possible. We used the GRADE system to evaluate the quality of evidence. Nine studies representing 7960 women met our inclusion criteria, including seven randomised controlled trials and two controlled before and after studies that reported IUD uptake postintervention. We evaluated the quality of evidence as moderate to low. Three studies on contraceptive counselling and referrals by community workers showed an increase in uptake of the IUD among intervention groups (Peto OR 2.00; 95% CI 1.40 to 2.85). Two studies on antenatal contraceptive counselling also favoured the intervention groups (Peto OR 2.33; 95% CI 1.39 to 3.91). One study on postnatal couple contraceptive counselling also showed an increase in IUD uptake compared to control (Peto OR 5.73; 95% CI 3.59 to 9.15). The results of one study evaluating postnatal home visits and two studies on enhanced postabortion contraceptive counselling did not reach statistical significance. Community‐based interventions and antenatal contraceptive counselling improved uptake of copper IUD contraception. Since the copper IUD is one of the most effective reversible contraceptive methods, primary care and family planning practitioners could consider adopting these interventions. Although our review suggests these interventions are clinically effective, a cost‐benefit analysis may be required to evaluate applicability. |
t67 | Pain is commonly experienced after surgical procedures. Acute postoperative pain of moderate or severe intensity is often used (as a model) to test whether or not drugs are effective painkillers. In this case we could find no studies that tested oral nefopam against placebo. It is possible that the studies were performed, but not reported, because they were used only to register nefopam with licensing authorities throughout the world. | Nefopam is a centrally‐acting but non‐opioid analgesic drug of the benzoxazocine chemical class, developed in the early 1970s. It is widely used, mainly in European countries, for the relief of moderate to severe pain as an alternative to opioid analgesic drugs, and used in rheumatic disease and other musculoskeletal disorders in the UK. This review sought to evaluate the efficacy and safety of oral nefopam in acute postoperative pain, using clinical studies of patients with established pain, and with outcomes measured primarily over 6 hours using standard methods. This type of study has been used for many decades to establish that drugs have analgesic properties. Objectives To assess the efficacy of single dose oral nefopam in acute postoperative pain, and any associated adverse events. Search methods We searched CENTRAL (Issue 2, 2009), MEDLINE (1966 to May 2009); EMBASE via Ovid (1980 to May 2009); the Oxford Pain Relief Database (1950 to 1994); and reference lists of studies found. Selection criteria Randomised, double‐blind, placebo‐controlled clinical trials of oral nefopam for relief of acute postoperative pain in adults. Data collection and analysis Two review authors independently assessed trial quality and extracted data. The area under the "pain relief versus time" curve was used to derive the proportion of participants with nefopam and placebo experiencing least 50% pain relief over 4 to 6 hours, using validated equations. 
The number‐needed‐to‐treat‐to‐benefit (NNT) was calculated using 95% confidence intervals (CIs). The proportion of participants using rescue analgesia over a specified time period, and time to use of rescue analgesia, were sought as additional measures of efficacy. Information on adverse events and withdrawals was also collected. No included studies were identified after examining in detail thirteen studies on oral nefopam in participants with established postoperative pain. In the absence of evidence of efficacy for oral nefopam in acute postoperative pain, its use in this indication is not justified. Because trials clearly demonstrating analgesic efficacy in the most basic of acute pain studies are lacking, use in other indications should be evaluated carefully. Given the large number of available drugs of this and similar classes, there is no urgent research agenda. |
t68 | The umbilical artery catheters (tubes) (UACs) commonly used in neonatal intensive care to monitor babies can sometimes cause them problems. They can be placed in high or low positions, and come in different materials and designs. The blood anticoagulant, heparin, theoretically helps prevent blood clots forming (thromboses), but high doses could lead to haemorrhage (bleeding). This review found that low heparin doses are effective in preventing catheters becoming blocked and needing to be re‐inserted. There is not enough evidence to rule out the possibility of adverse effects. Heparin does not seem to lower the rate of blood clots in the major artery. | Umbilical arterial catheters (UACs) are among the most commonly used monitoring methodologies in neonatal intensive care. There seems to be significant variance between neonatal intensive care units in exactly how these catheters are used. This variance involves heparin dosing, catheter materials and catheter design and positioning of the catheter. Objectives To determine whether the use of heparin in fluids infused through an umbilical arterial catheter in newborn infants influences the frequency of clinical ischemic events, catheter occlusion, aortic thrombosis, intraventricular hemorrhage, hypertension, death, or the duration of catheter usability. Search methods Randomized and quasi‐randomized controlled trials of umbilical catheterization use were obtained using the search methods of the Cochrane Neonatal Review Group. The Cochrane Library, MEDLINE (search via PubMed), CINAHL and EMBASE were searched from 1999 to 2009. Selection criteria Randomized trials in newborn infants of any birthweight or gestation. Comparison of heparinised to non heparinised infusion fluids, including comparison of heparin in the infusate to heparin just in the flush solution. Clinically important end points such as catheter occlusion or aortic thrombosis. 
Data collection and analysis There were five randomized controlled trials retrieved. All gave details of the incidence of catheter occlusion. Two also reported the incidence of aortic thrombosis. The intervention was reasonably consistent: heparin in the infusate at a concentration of 1 unit/mL was investigated in all trials except one which used a concentration of 0.25 units/mL. Studies generally included both term and preterm infants. Heparinization of the infusate decreases the incidence of catheter occlusion but does not affect the frequency of aortic thrombosis. Heparinization of the flush solution is not an adequate alternative. There does not appear to be an effect on frequency of intraventricular hemorrhage, death or clinical ischemic phenomena. Heparinization of the fluid infused through an umbilical arterial catheter decreases the likelihood of umbilical arterial catheters occluding. The lowest concentration tested so far (0.25 units/mL) has been shown to be effective. Heparinization of flushes without heparinizing the infusate is ineffective. The frequency of aortic thrombosis has not been shown to be affected; however, the confidence intervals for this effect are very wide. The frequency of intraventricular hemorrhage has not been shown to be affected by heparinization of the infusate, but again the confidence intervals are very wide and even a major increase in the incidence of grade 3 and 4 intraventricular hemorrhage would not have been detected. |
t69 | Undernutrition is a cause of child mortality; it contributed to the deaths of more than three million children in 2011. Furthermore, it can lead to higher risk of infection, poorer child development and school performance, and to chronic disease in adulthood. Evidence about the effectiveness of nutrition interventions for young children, therefore, is fundamentally important; not only for governments, funding agencies and nongovernmental organisations, but also for the children themselves. We included studies that compared children who were given supplementary feeding (food, drink) with those who did not receive any feeding. We looked carefully for factors that may have impacted on the results (child age, sex and disadvantage, family sharing of food, amount of energy given, etc.). We included 32 studies: 21 randomised controlled trials (RCTs), in which children were randomly assigned to receive either supplementary feeding (intervention group) or not (control group), and 11 controlled before‐and‐after studies (CBAs), in which outcomes were observed before and after treatment in children who were not randomly assigned to intervention and control groups. Most studies were from low‐ and middle‐income countries; three were from high‐income countries. We found that, in low‐ and middle‐income countries, providing additional food to children aged three months to five years led to small gains in weight (0.24 kg a year in both RCTs and CBAs) and height (0.54 cm a year in RCTs only; no evidence of an effect in other study designs), and moderate increases in haemoglobin. We also found positive impacts on psychomotor development (skills that involve mental and muscular activity). We found mixed evidence on effects of supplementary feeding on mental development. In high‐income countries, two studies found no benefits for growth.
We found that food was often redistributed ('leakage') within the family; when feeding was home‐delivered, children benefited from only 36% of the energy given in the supplement. However, when the supplementary food was given in day care centres or feeding centres, there was much less leakage; children took in 85% of the energy provided in the supplement. When we looked at different groups, supplementary food was more effective for younger children (under two years old) and for those who were poorer or less well‐nourished. Feeding programmes that were well‐supervised and those that provided a greater proportion of required daily food for energy were generally more effective. | Undernutrition contributes to five million deaths of children under five each year. Furthermore, throughout the life cycle, undernutrition contributes to increased risk of infection, poor cognitive functioning, chronic disease, and mortality. It is thus important for decision‐makers to have evidence about the effectiveness of nutrition interventions for young children. Objectives Primary objective 1. To assess the effectiveness of supplementary feeding interventions, alone or with co‐intervention, for improving the physical and psychosocial health of disadvantaged children aged three months to five years. Secondary objectives 1. To assess the potential of such programmes to reduce socio‐economic inequalities in undernutrition. 2. To evaluate implementation and to understand how this may impact on outcomes. 3. To determine whether there are any adverse effects of supplementary feeding. Search methods We searched CENTRAL, Ovid MEDLINE, PsycINFO, and seven other databases for all available years up to January 2014. We also searched ClinicalTrials.gov and several sources of grey literature. In addition, we searched the reference lists of relevant articles and reviews, and asked experts in the area about ongoing and unpublished trials.
Selection criteria Randomised controlled trials (RCTs), cluster‐RCTs, controlled clinical trials (CCTs), controlled before‐and‐after studies (CBAs), and interrupted time series (ITS) that provided supplementary food (with or without co‐intervention) to children aged three months to five years, from all countries. Adjunctive treatments, such as nutrition education, were allowed. Controls had to be untreated. Data collection and analysis Two or more review authors independently reviewed searches, selected studies for inclusion or exclusion, extracted data, and assessed risk of bias. We conducted meta‐analyses for continuous data using the mean difference (MD) or the standardised mean difference (SMD) with a 95% confidence interval (CI), correcting for clustering if necessary. We analysed studies from low‐ and middle‐income countries and from high‐income countries separately, and RCTs separately from CBAs. We conducted a process evaluation to understand which factors impact on effectiveness. We included 32 studies (21 RCTs and 11 CBAs); 26 of these (16 RCTs and 10 CBAs) were in meta‐analyses. More than 50% of the RCTs were judged to have low risk of bias for random selection and incomplete outcome assessment. We judged most RCTs to be unclear for allocation concealment, blinding of outcome assessment, and selective outcome reporting. Because children and parents knew that they were given food, we judged blinding of participants and personnel to be at high risk for all studies. Growth. Supplementary feeding had positive effects on growth in low‐ and middle‐income countries. Meta‐analysis of the RCTs showed that supplemented children gained an average of 0.12 kg more than controls over six months (95% confidence interval (CI) 0.05 to 0.18, 9 trials, 1057 participants, moderate quality evidence). In the CBAs, the effect was similar: 0.24 kg over a year (95% CI 0.09 to 0.39, 1784 participants, very low quality evidence).
In high‐income countries, one RCT found no difference in weight, but in a CBA with 116 Aboriginal children in Australia, the effect on weight was 0.95 kg (95% CI 0.58 to 1.33). For height, meta‐analysis of nine RCTs revealed that supplemented children grew an average of 0.27 cm more over six months than those who were not supplemented (95% CI 0.07 to 0.48, 1463 participants, moderate quality evidence). Meta‐analysis of seven CBAs showed no evidence of an effect (mean difference (MD) 0.52 cm, 95% CI ‐0.07 to 1.10, 7 trials, 1782 participants, very low quality evidence). Meta‐analyses of the RCTs demonstrated benefits for weight‐for‐age z‐scores (WAZ) (MD 0.15, 95% CI 0.05 to 0.24, 8 trials, 1565 participants, moderate quality evidence), and height‐for‐age z‐scores (HAZ) (MD 0.15, 95% CI 0.06 to 0.24, 9 trials, 4638 participants, moderate quality evidence), but not for weight‐for‐height z‐scores (WHZ) (MD 0.10, 95% CI ‐0.02 to 0.22, 7 trials, 4176 participants, moderate quality evidence). Meta‐analyses of the CBAs showed no effects on WAZ, HAZ, or WHZ (very low quality evidence). We found moderate positive effects for haemoglobin (SMD 0.49, 95% CI 0.07 to 0.91, 5 trials, 300 participants) in a meta‐analysis of the RCTs. Psychosocial outcomes. Eight RCTs in low‐ and middle‐income countries assessed psychosocial outcomes. Our meta‐analysis of two studies showed moderate positive effects of feeding on psychomotor development (SMD 0.41, 95% CI 0.10 to 0.72, 178 participants). The evidence of effects on cognitive development was sparse and mixed. We found evidence of substantial leakage. When feeding was given at home, children benefited from only 36% of the energy in the supplement. However, when the supplementary food was given in day care centres or feeding centres, there was less leakage; children took in 85% of the energy provided in the supplement.
Supplementary food was generally more effective for younger children (less than two years of age) and for those who were poorer/less well‐nourished. Results for sex were equivocal. Our results also suggested that feeding programmes which were given in day‐care/feeding centres and those which provided a moderate‐to‐high proportion of the recommended daily intake (% RDI) for energy were more effective. Feeding programmes for young children in low‐ and middle‐income countries can work, but good implementation is key. |
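The pooled estimates quoted above (for example, the 0.12 kg weight gain with 95% CI 0.05 to 0.18) come from inverse‐variance meta‐analysis. A minimal sketch of that arithmetic follows; the per‐study numbers are hypothetical, chosen only to illustrate the method, and are not data from the review.

```python
import math

def pool_mean_differences(estimates):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    estimates: list of (mean_difference, standard_error) tuples, one per study.
    Returns the pooled mean difference and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * md for (md, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical studies: (mean weight-gain difference in kg, standard error).
md, (lo, hi) = pool_mean_differences([(0.10, 0.05), (0.20, 0.10)])
print(round(md, 2), round(lo, 2), round(hi, 2))  # -> 0.12 0.03 0.21
```

Studies with smaller standard errors receive larger weights, which is why large trials dominate the pooled result.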
t70 | This systematic review sought to find out how well educational interventions worked for managing cancer‐related fatigue. Condition Fatigue is a common and problematic symptom for people with cancer that is greater than the tiredness experienced in everyday life. It can make the experience of other symptoms worse, negatively affect mood, interfere with the ability to carry out everyday activities, and negatively impact on quality of life. Interventions Education can provide people with information about what fatigue is and how to manage it. For example, managing fatigue may involve conserving energy throughout the day, and learning about the benefits of exercise, diet, relaxation, and good sleep routines. These approaches may help people to manage their fatigue and help them cope with its effects. In November 2016 we found 14 trials using education for cancer‐related fatigue compared to the usual care people received or to an attention control such as providing general information about cancer. These trials were undertaken with adults with any type or stage of cancer. Results The review found that education may have a small effect on reducing the intensity of fatigue, its interference in daily activities or relationships, and general (overall) fatigue. It could have a moderate effect on reducing distress from fatigue amongst people with non‐advanced cancer. There may also be beneficial effects on anxiety and overall quality of life, although it is unclear whether it reduces depression. It is unknown whether this result might differ between types of cancer treatment, or depending on whether the education is provided during or after cancer treatment. Not enough is known about the type of education that is most effective, when it is best provided, or whether it is effective for people with advanced cancer. | Cancer‐related fatigue is reported as the most common and distressing symptom experienced by patients with cancer.
It can exacerbate the experience of other symptoms, negatively affect mood, interfere with the ability to carry out everyday activities, and negatively impact on quality of life. Educational interventions may help people to manage this fatigue or to cope with this symptom, and reduce its overall burden. Despite the importance of education for managing cancer‐related fatigue there are currently no systematic reviews examining this approach. Objectives To determine the effectiveness of educational interventions for managing cancer‐related fatigue in adults. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), and MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, OTseeker and PEDro up to 1st November 2016. We also searched trials registries. Selection criteria We included randomised controlled trials (RCTs) of educational interventions focused on cancer‐related fatigue where fatigue was a primary outcome. Studies must have aimed to evaluate the effect of educational interventions designed specifically to manage cancer‐related fatigue, or to evaluate educational interventions targeting a constellation of physical symptoms or quality of life where fatigue was the primary focus. The studies could have compared educational interventions with no intervention or wait list controls, usual care or attention controls, or an alternative intervention for cancer‐related fatigue in adults with any type of cancer. Data collection and analysis Two review authors independently screened studies for inclusion and extracted data. We resolved differences in opinion by discussion. Trial authors were contacted for additional information. A third independent person checked the data extraction. The main outcome considered in this review was cancer‐related fatigue. We assessed the evidence using GRADE and created a 'Summary of Findings' table. We included 14 RCTs with 2213 participants across different cancer diagnoses. 
Four studies used only 'information‐giving' educational strategies, whereas the remainder used mainly information‐giving strategies coupled with some problem‐solving, reinforcement, or support techniques. Interventions differed in delivery, including: mode of delivery (face to face, web‐based, audiotape, telephone); group or individual interventions; number of sessions provided (ranging from 2 to 12 sessions); and timing of intervention in relation to completion of cancer treatment (during or after completion). Most trials compared educational interventions to usual care and meta‐analyses compared educational interventions to usual care or attention controls. Methodological issues that increased the risk of bias were evident including lack of blinding of outcome assessors, unclear allocation concealment in over half of the studies, and generally small sample sizes. Using the GRADE approach, we rated the quality of evidence as very low to moderate, downgraded mainly due to high risk of bias, unexplained heterogeneity, and imprecision. There was moderate quality evidence of a small reduction in fatigue intensity from a meta‐analysis of eight studies (1524 participants; standardised mean difference (SMD) ‐0.28, 95% confidence interval (CI) ‐0.52 to ‐0.04) comparing educational interventions with usual care or attention control. We found low quality evidence from twelve studies (1711 participants) that educational interventions had a small effect on general/overall fatigue (SMD ‐0.27, 95% CI ‐0.51 to ‐0.04) compared to usual care or attention control. There was low quality evidence from three studies (622 participants) of a moderate size effect of educational interventions for reducing fatigue distress (SMD ‐0.57, 95% CI ‐1.09 to ‐0.05) compared to usual care, and this could be considered clinically significant.
Pooled data from four studies (439 participants) found a small reduction in fatigue interference with daily life (SMD ‐0.35, 95% CI ‐0.54 to ‐0.16; moderate quality evidence). No clear effects on fatigue were found related to type of cancer treatment or timing of intervention in relation to completion of cancer treatment, and there were insufficient data available to determine the effect of educational interventions on fatigue by stage of disease, tumour type or group versus individual intervention. Three studies (571 participants) provided low quality evidence for a reduction in anxiety in favour of the intervention group (mean difference (MD) ‐1.47, 95% CI ‐2.76 to ‐0.18) which, for some, would be considered clinically significant. Two additional studies not included in the meta‐analysis also reported statistically significant improvements in anxiety in favour of the educational intervention, whereas a third study did not. Compared with usual care or attention control, educational interventions showed no significant reduction in depressive symptoms (four studies, 881 participants, SMD ‐0.12, 95% CI ‐0.47 to 0.23; very low quality evidence). Three additional trials not included in the meta‐analysis found no between‐group differences in the symptoms of depression. No between‐group difference was evident in the capacity for activities of daily living or physical function when comparing educational interventions with usual care (4 studies, 773 participants, SMD 0.33, 95% CI ‐0.10 to 0.75) and the quality of evidence was low. Pooled evidence of low quality from two of three studies examining the effect of educational interventions compared to usual care found an improvement in global quality of life on a 0‐100 scale (MD 11.47, 95% CI 1.29 to 21.65), which would be considered clinically significant for some. No adverse events were reported in any of the studies. 
Educational interventions may have a small effect on reducing fatigue intensity, fatigue's interference with daily life, and general fatigue, and could have a moderate effect on reducing fatigue distress. Educational interventions focused on fatigue may also help reduce anxiety and improve global quality of life, but it is unclear what effect they might have on capacity for activities of daily living or depressive symptoms. Additional studies undertaken in the future are likely to impact on our confidence in the conclusions. The incorporation of education for the management of fatigue as part of routine care appears reasonable. However, given the complex nature of this symptom, educational interventions on their own are unlikely to optimally reduce fatigue or help people manage its impact, and should be considered in conjunction with other interventions. Just how educational interventions are best delivered, and their content and timing to maximise outcomes, are issues that require further research. |
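The standardised mean differences (SMDs) reported throughout this record express a between‐group difference in units of pooled standard deviation, which is what makes effects on different fatigue scales comparable. A minimal sketch of that computation follows; the group statistics are made up for illustration and are not data from the review.

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Cohen's d): the difference in group
    means divided by the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical fatigue scores (lower is better): intervention mean 10
# (SD 2, n 50) versus control mean 12 (SD 2, n 50).
print(smd(10, 2, 50, 12, 2, 50))  # -> -1.0
```

By the rule of thumb used in reviews like this one, an absolute SMD around 0.2 is "small", 0.5 "moderate", and 0.8 "large".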
t71 | Researchers in the Cochrane Collaboration conducted a review to evaluate the effect of approaches to encourage health workers to work in particular healthcare facilities. After searching for all relevant studies, they were unable to find any studies that met their requirements for inclusion in this review. Many countries have a severe lack of health workers. In addition, the health workers that are available are often not distributed in the best possible way. Most health workers work in urban areas, leaving rural areas underserved. Problems also occur in urban areas as health workers here often prefer to work in the private healthcare sector, which is often too expensive for many people. In rural areas, governments may not have built health facilities and the only available health care in these areas may, therefore, be private. However, private facilities in rural areas are not only expensive but may also struggle to attract qualified health workers. To address these problems, governments need to find ways of ensuring that more health workers work in the areas and facilities where most people seek care. This might, for instance, involve encouraging health workers to work in public healthcare facilities in towns and cities or to work in public or private facilities in rural areas. One approach governments could take is to give extra incentives to health workers serving in particular facilities. These incentives could include higher salaries, special allowances, or higher retirement packages. Another approach is to give health workers bursaries or scholarships during training on the condition that they work in particular facilities for a fixed period of time after they have finished their training. Results Although these types of approaches are not uncommon, the review could not find any relevant studies that gave a reliable assessment of their impact. 
There is still a lot of work to be done to understand how governments can ensure that health workers serve in those health facilities that care for the majority of the population. | Health workers move between public and private organizations in both urban and rural areas during the course of their career. Depending on the proportion of the population served by public or private organizations in a particular setting, this movement may result in imbalances in the number of healthcare providers available relative to the population receiving care from that sector. However, both public and private organizations are needed as each sector has unique contributions to make to the effective delivery of health services. Objectives To assess the effects of financial incentives and movement restriction interventions to manage the movement of health workers between public and private organizations in low‐ and middle‐income countries. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (10 November 2012); EMBASE (7 June 2011); LILACS (9 June 2011); MEDLINE (10 November 2012); CINAHL (13 August 2012); and the British Nursing Index (13 August 2012). Selection criteria Randomized controlled trials and non‐randomized controlled trials; controlled before‐and‐after studies if pre‐ and post‐intervention periods for study and control groups were the same and there were at least two units included in both the intervention and control groups; uncontrolled and controlled interrupted time series studies if the point in time when the intervention occurred was clearly defined and there were at least three data points before and after the intervention. Interventions included payment of special allowances, increasing salaries, bonding health workers, offering bursary schemes, scholarships or lucrative terminal benefits, and hiring people on a contract basis.
Data collection and analysis Two review authors independently applied the criteria for inclusion and exclusion of studies to the titles and abstracts of all articles obtained from the search. The same two review authors independently screened the full reports of the selected citations. At each stage, we compared the results and resolved discrepancies through discussion with a third review author. We found no studies that were eligible for inclusion in this review. We identified no rigorous studies on the effects of interventions to manage the movement of health workers between public and private organizations in low‐ and middle‐income countries. Health worker availability is a key obstacle in delivery of health services. Interventions to make the health sector more responsive to the expectations of populations by having more health workers in the sector that serves most people would contribute to the more efficient use of the health workforce. More research is needed to assess the effect of increases in salaries, offering scholarships, or bonding on the movement of health workers in one sector compared with another. |
t72 | Pericarditis is the inflammation and swelling of the tissue covering the outer layer of the heart. Pericarditis causes severe and disabling chest pain and fever; however, the main issue is the repeated recurrence of pericarditis attacks. Colchicine is an ancient medication that has been used in the treatment of other inflammatory diseases such as gout. We wanted to discover whether colchicine alone or added to other medications is better or worse than alternative therapies in preventing recurrence of pericarditis. We have reviewed all randomised controlled trials about the effect of colchicine in preventing recurrence of pericarditis in people with pericarditis. We found four trials involving 564 participants, who were followed up for at least 18 months. Two studies examined the use of colchicine in people with recurrent pericarditis and two examined the use of colchicine in people with a first episode of pericarditis. The trials showed that people taking colchicine have a lower risk of developing pericarditis recurrence and a higher proportion experience symptom relief. It is expected that at 18 months, one pericarditis recurrence can be avoided for every four people receiving colchicine with NSAIDs rather than NSAIDs alone. Adverse effects were reported in all trials and affected 15 people (9%) of the 162 taking colchicine. Adverse effects included abdominal pain, nausea and vomiting. Two studies were designed so that participants knew the type of intervention they were taking and people in the comparison group had no dummy pill. The results of these studies could exaggerate the effects of the drug. The evidence suggests beneficial effects of colchicine in preventing recurrence of pericarditis; however, this is based on a limited number of small trials. | Pericarditis is the inflammation of the pericardium, the membranous sac surrounding the heart. Recurrent pericarditis is the most common complication of acute pericarditis, causing severe and disabling chest pains.
Recurrent pericarditis affects one in three patients with acute pericarditis within the first 18 months. Colchicine has been suggested to be beneficial in preventing recurrent pericarditis. Objectives To review all randomised controlled trials (RCTs) that assess the effects of colchicine alone or combined, compared to any other intervention to prevent further recurrences of pericarditis, in people with acute or recurrent pericarditis. Search methods We searched the following bibliographic databases on 4 August 2014: Cochrane Central Register of Controlled Trials (CENTRAL, Issue 7 of 12, 2014 on The Cochrane Library ), MEDLINE (OVID, 1946 to July week 4, 2014), EMBASE (OVID, 1947 to 2014 week 31), and the Conference Proceedings Citation Index ‐ Science on Web of Science (Thomson Reuters) 1990 to 1 Aug 2014. We did not apply any language or time restrictions. Selection criteria RCTs of people with acute or recurrent pericarditis who are receiving colchicine compared to any other treatment, in order to prevent recurrences. Data collection and analysis Two review authors independently selected trials for inclusion, extracted data and assessed the risk of bias. The first primary outcome was the time to recurrence, measured by calculating the hazard ratios (HRs). The second primary outcome was the adverse effects of colchicine. Secondary outcomes were the rate of recurrences at 6, 12 and 18 months, and symptom relief. We included four RCTs, involving 564 participants in this review. We compared the effects of colchicine in addition to a non‐steroidal anti‐inflammatory drug (NSAID) such as ibuprofen, aspirin or indomethacin to the effects of the NSAID alone. Two comparable trials studied the effects of colchicine in 204 participants with recurrent pericarditis and two trials studied 360 people with acute pericarditis. All trials had a moderate quality for the primary outcomes. 
We identified two on‐going trials; one of these trials examines acute pericarditis and the other assesses recurrent pericarditis. There was moderate quality evidence that colchicine reduces episodes of pericarditis in people with recurrent pericarditis over 18 months follow‐up (HR 0.37; 95% confidence interval (CI) 0.24 to 0.58). It is expected that at 18 months, the number needed to treat (NNT) is 4. In people with acute pericarditis, there was moderate quality evidence that colchicine reduces recurrence (HR 0.40; 95% CI 0.27 to 0.61) at 18 months follow‐up. Colchicine led to a greater chance of symptom relief at 72 hours (risk ratio (RR) 1.4; 95% CI 1.26 to 1.56; low quality evidence). Adverse effects were mainly gastrointestinal and included abdominal pain and diarrhoea. The pooled RR for adverse events was 1.26 (95% CI 0.75 to 2.12). While the number of people experiencing adverse effects was higher in the colchicine than the control groups (9% versus 7%), the quality of evidence was low owing to imprecision, and there was no statistically significant difference between the treatment groups (P = 0.42). There was moderate quality evidence that treatment with colchicine led to more people stopping treatment due to adverse events (RR 1.87; 95% CI 1.02 to 3.41). Colchicine, as adjunctive therapy to NSAIDs, is effective in reducing the number of pericarditis recurrences in patients with recurrent pericarditis or acute pericarditis. However, evidence is based on a limited number of small trials. Patients with multiple resistant recurrences were not represented in any published or on‐going trials, and it is these patients who are most in need of treatment. |
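The "number needed to treat (NNT) is 4" figure quoted above is the reciprocal of the absolute risk reduction. The recurrence risks in the sketch below are hypothetical, chosen only to show the arithmetic; the review reports the NNT itself, not these underlying rates.

```python
import math

def number_needed_to_treat(control_risk, treated_risk):
    """NNT = 1 / absolute risk reduction, rounded up to a whole patient."""
    return math.ceil(1 / (control_risk - treated_risk))

# Hypothetical 18-month recurrence risks: 50% on NSAIDs alone
# versus 25% with colchicine added to NSAIDs.
print(number_needed_to_treat(0.50, 0.25))  # -> 4
```

An NNT of 4 means that, on average, treating four patients prevents one recurrence that would otherwise have occurred.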
t73 | We reviewed the evidence about the effectiveness of negative pressure wound therapy (NPWT) for preventing surgical site infection (SSI). Surgical site infections are common wound infections that develop at the site of a surgical incision. The incidence of SSI may be as high as 40% for some types of surgery, and may also be higher for people with medical problems such as diabetes or cancer. Surgical site infections increase patient discomfort, length of hospital stay, and treatment costs. Negative pressure wound therapy involves a sealed wound dressing connected to vacuum pump that sucks up fluid from the wound, which is thought to promote wound healing and prevent infection. In an earlier 2014 version of this review, we found the effectiveness of NPWT to be unclear. In February 2018 we searched for randomised controlled trials (studies in which participants are assigned to one of two or more treatment groups using a random method) that compared NPWT with other dressings or with another type of NPWT for the prevention of SSI. We found 25 additional trials, resulting in a total of 30 trials (2957 participants), and two economic studies. The types of surgery included abdominal surgery, caesarean section, joint surgery, and others. The included trials were small, with most recruiting fewer than 100 participants. Evidence of low certainty shows that NPWT may reduce the incidence of SSI. We are uncertain if NPWT reduces the incidence of death, dehiscence (reopening of the wound), seroma (excessive fluid under a wound), haematoma (formation of blood clots), readmission to hospital, or repeat surgery. It is uncertain if NPWT results in more dressing‐related blisters, or whether the treatment costs more on average than a standard dressing. Results from one trial suggest that NPWT may be more cost‐effective than standard care when the impact of an SSI on length of hospital stay and other hospital costs is taken into account. 
| Indications for the use of negative pressure wound therapy (NPWT) are broad and include prophylaxis for surgical site infections (SSIs). While existing evidence for the effectiveness of NPWT remains uncertain, new trials necessitated an updated review of the evidence for the effects of NPWT on postoperative wounds healing by primary closure. Objectives To assess the effects of negative pressure wound therapy for preventing surgical site infection in wounds healing through primary closure. Search methods We searched the Cochrane Wounds Specialised Register, CENTRAL, Ovid MEDLINE (including In‐Process & Other Non‐Indexed Citations), Ovid Embase, and EBSCO CINAHL Plus in February 2018. We also searched clinical trials registries for ongoing and unpublished studies, and checked reference lists of relevant included studies as well as reviews, meta‐analyses, and health technology reports to identify additional studies. There were no restrictions on language, publication date, or setting. Selection criteria We included trials if they allocated participants to treatment randomly and compared NPWT with any other type of wound dressing, or compared one type of NPWT with another type of NPWT. Data collection and analysis Four review authors independently assessed trials using predetermined inclusion criteria. We carried out data extraction, 'Risk of bias' assessment using the Cochrane 'Risk of bias' tool, and quality assessment according to GRADE methodology. In this second update we added 25 intervention trials, resulting in a total of 30 intervention trials (2957 participants), and two economic studies nested in trials. Surgeries included abdominal and colorectal (n = 5); caesarean section (n = 5); knee or hip arthroplasties (n = 5); groin surgery (n = 5); fractures (n = 5); laparotomy (n = 1); vascular surgery (n = 1); sternotomy (n = 1); breast reduction mammoplasty (n = 1); and mixed (n = 1). 
In three key domains four studies were at low risk of bias; six studies were at high risk of bias; and 20 studies were at unclear risk of bias. We judged the evidence to be of low or very low certainty for all outcomes, downgrading the level of the evidence on the basis of risk of bias and imprecision. Primary outcomes Three studies reported mortality (416 participants; follow‐up 30 to 90 days or unspecified). It is uncertain whether NPWT has an impact on risk of death compared with standard dressings (risk ratio (RR) 0.63, 95% confidence interval (CI) 0.25 to 1.56; very low‐certainty evidence, downgraded once for serious risk of bias and twice for very serious imprecision). Twenty‐five studies reported on SSI. The evidence from 23 studies (2533 participants; 2547 wounds; follow‐up 30 days to 12 months or unspecified) showed that NPWT may reduce the rate of SSIs (RR 0.67, 95% CI 0.53 to 0.85; low‐certainty evidence, downgraded twice for very serious risk of bias). Fourteen studies reported dehiscence. We combined results from 12 studies (1507 wounds; 1475 participants; follow‐up 30 days to an average of 113 days or unspecified) that compared NPWT with standard dressings. It is uncertain whether NPWT reduces the risk of wound dehiscence compared with standard dressings (RR 0.80, 95% CI 0.55 to 1.18; very low‐certainty evidence, downgraded twice for very serious risk of bias and once for serious imprecision). Secondary outcomes We are uncertain whether NPWT increases or decreases reoperation rates when compared with a standard dressing (RR 1.09, 95% CI 0.73 to 1.63; 6 trials; 1021 participants; very low‐certainty evidence, downgraded for very serious risk of bias and serious imprecision) or if there is any clinical benefit associated with NPWT for reducing wound‐related readmission to hospital within 30 days (RR 0.86, 95% CI 0.47 to 1.57; 7 studies; 1271 participants; very low‐certainty evidence, downgraded for very serious risk of bias and serious imprecision). 
It is also uncertain whether NPWT reduces incidence of seroma compared with standard dressings (RR 0.67, 95% CI 0.45 to 1.00; 6 studies; 568 participants; very low‐certainty evidence, downgraded twice for very serious risk of bias and once for serious imprecision). It is uncertain if NPWT reduces or increases the risk of haematoma when compared with a standard dressing (RR 1.05, 95% CI 0.32 to 3.42; 6 trials; 831 participants; very low‐certainty evidence, downgraded twice for very serious risk of bias and twice for very serious imprecision). It is uncertain if there is a higher risk of developing blisters when NPWT is compared with a standard dressing (RR 6.64, 95% CI 3.16 to 13.95; 6 studies; 597 participants; very low‐certainty evidence, downgraded twice for very serious risk of bias and twice for very serious imprecision). Quality of life was not reported separately by group but was used in two economic evaluations to calculate quality‐adjusted life years (QALYs). There was no clear difference in incremental QALYs for NPWT relative to standard dressing when results from the two trials were combined (mean difference 0.00, 95% CI −0.00 to 0.00; moderate‐certainty evidence). One trial concluded that NPWT may be more cost‐effective than standard care, estimating an incremental cost‐effectiveness ratio (ICER) value of GBP 20.65 per QALY gained. A second cost‐effectiveness study estimated that when compared with standard dressings NPWT was cost saving and improved QALYs. We rated the overall quality of the reports as very good; we did not grade the evidence beyond this as it was based on modelling assumptions. Despite the addition of 25 trials, results are consistent with our earlier review, with the evidence judged to be of low or very low certainty for all outcomes.
Consequently, uncertainty remains about whether NPWT compared with a standard dressing reduces or increases the incidence of important outcomes such as mortality, dehiscence, seroma, or if it increases costs. Given the cost and widespread use of NPWT for SSI prophylaxis, there is an urgent need for larger, well‐designed and well‐conducted trials to evaluate the effects of newer NPWT products designed for use on clean, closed surgical incisions. Such trials should initially focus on wounds that may be difficult to heal, such as sternal wounds or incisions on obese patients. |
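The risk ratios and 95% confidence intervals quoted throughout these abstracts are derived from 2×2 event counts using a normal approximation on the log scale. As a minimal sketch of that calculation (the counts below are hypothetical, chosen only for illustration and not taken from any included trial):

```python
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio with a 95% CI computed on the log scale (normal approximation)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of ln(RR) from the 2x2 counts
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 40/600 events with treatment vs 60/600 with control
rr, lo, hi = risk_ratio(40, 600, 60, 600)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The interval is built on the log scale because ln(RR) is approximately normally distributed, then exponentiated back; this is why the CIs quoted above are asymmetric around the point estimate.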
t74 | We reviewed the evidence regarding effects of oxygen compared with air on breathlessness in people with chronic obstructive pulmonary disease (COPD) with only mildly or moderately decreased blood oxygen levels. People with COPD are sometimes prescribed oxygen therapy to reduce the severity of breathlessness. However, the use of oxygen in people who do not have severely reduced levels of oxygen in their bloodstream remains controversial, as little is known about its effectiveness. Additionally, oxygen is relatively costly and is not given without risk, particularly to smokers because of the risk of fire. We included studies of oxygen therapy versus air delivered through nasal prongs or mask during exertion, continuously, 'as needed' over a defined period or as short‐burst oxygen before exertion. Study participants were 18 years of age or older, had received a diagnosis of COPD, had low oxygen levels in the blood and did not receive long‐term oxygen therapy. We included a total of 44 studies (1195 participants) in this review. Compared with the previous review, which was published in 2011, we have added 14 studies (493 participants) to this review. We found that oxygen can modestly reduce breathlessness. To be effective, oxygen has to be given during exercise. Most studies evaluated oxygen given during exercise testing in the laboratory. Oxygen therapy during daily life had uncertain effects on breathlessness and did not clearly change patient quality of life. | Breathlessness is a cardinal symptom of chronic obstructive pulmonary disease (COPD). Long‐term oxygen therapy (LTOT) is given to improve survival time in people with COPD and severe chronic hypoxaemia at rest. The efficacy of oxygen therapy for breathlessness and health‐related quality of life (HRQOL) in people with COPD and mild or no hypoxaemia who do not meet the criteria for LTOT has not been established. 
Objectives To determine the efficacy of oxygen versus air in mildly hypoxaemic or non‐hypoxaemic patients with COPD in terms of (1) breathlessness; (2) HRQOL; (3) patient preference whether to continue therapy; and (4) oxygen‐related adverse events. Search methods We searched the Cochrane Airways Group Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE and Embase, to 12 July 2016, for randomised controlled trials (RCTs). We handsearched the reference lists of included articles. Selection criteria We included RCTs of the effects of non‐invasive oxygen versus air on breathlessness, HRQOL or patient preference to continue therapy among people with COPD and mild or no hypoxaemia (partial pressure of oxygen (PaO₂) > 7.3 kPa) who were not already receiving LTOT. Two review authors independently assessed articles for inclusion in the review. Data collection and analysis Two review authors independently collected and analysed data. We assessed risk of bias by using the Cochrane 'Risk of bias tool'. We pooled effects recorded on different scales as standardised mean differences (SMDs) with 95% confidence intervals (CIs) using random‐effects models. Lower SMDs indicated decreased breathlessness and reduced HRQOL. We performed subanalyses and sensitivity analyses and assessed the quality of evidence according to the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach. Compared with the previous review, which was published in 2011, we included 14 additional studies (493 participants), excluded one study and included data for meta‐analysis of HRQOL. In total, we included in this review 44 studies including 1195 participants, and we included 33 of these (901 participants) in the meta‐analysis. We found that breathlessness during exercise or daily activities was reduced by oxygen compared with air (32 studies; 865 participants; SMD ‐0.31, 95% CI ‐0.43 to ‐0.20; I² = 29%; low‐quality evidence). 
This translates to a decrease in breathlessness of about 0.7 points on a 0 to 10 numerical rating scale. In contrast, we found no effect of short‐burst oxygen given before exercise (four studies; 90 participants; SMD ‐0.03, 95% CI ‐0.28 to 0.22; I² = 0%; low‐quality evidence). Oxygen reduced breathlessness measured during exercise tests (30 studies; 591 participants; SMD ‐0.34, 95% CI ‐0.46 to ‐0.22; I² = 29%; moderate‐quality evidence), whereas evidence of an effect on breathlessness measured in daily life was limited (two studies; 274 participants; SMD ‐0.13, 95% CI ‐0.37 to 0.11; I² = 0%; low‐quality evidence). Oxygen did not clearly affect HRQOL (five studies; 267 participants; SMD 0.12, 95% CI ‐0.04 to 0.28; I² = 0%; low‐quality evidence). Patient preference and adverse events could not be analysed owing to insufficient data. We are moderately confident that oxygen can relieve breathlessness when given during exercise to mildly hypoxaemic and non‐hypoxaemic people with chronic obstructive pulmonary disease who would not otherwise qualify for home oxygen therapy. Most evidence pertains to acute effects during exercise tests, and no evidence indicates that oxygen decreases breathlessness in the daily life setting. Findings show that oxygen does not affect health‐related quality of life. |
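Translating a standardised mean difference back to scale points multiplies the SMD by the outcome's standard deviation in the population. A sketch of that arithmetic, where the SD of about 2.25 is a hypothetical value chosen to reproduce the review's "about 0.7 points" (the actual SD is not reported in this excerpt):

```python
# Converting a standardised mean difference (SMD) back to a raw scale:
# raw difference ≈ SMD × SD of the outcome measure.
smd = -0.31              # pooled SMD for breathlessness reported above
assumed_sd = 2.25        # hypothetical SD on the 0-10 breathlessness scale
raw_points = smd * assumed_sd
print(f"{raw_points:.2f} points on the 0-10 scale")  # about -0.7
```

The same conversion explains why the confidence limits (‐0.43 to ‐0.20) would map to roughly ‐1.0 to ‐0.45 points under the same assumed SD.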
t75 | A molar pregnancy (hydatidiform mole) develops following an abnormal process of conception, whereby placental tissue overgrows inside the womb (uterus). Molar pregnancies are classified as complete (CM) or partial (PM) based on their appearance (gross and microscopic), and their chromosome pattern. When present, moles are usually suspected at the early pregnancy scan and women often present with bleeding, similar to a miscarriage. The molar tissue is removed by evacuation of retained products of conception (ERPC), also known as dilatation and curettage (D&C), and women generally make a full recovery. However, some women go on to develop a cancer in the womb (about 1 in every 5 women with a CM and 1 in 200 with a PM). Women are generally at a higher risk of getting this cancer, which is known as gestational trophoblastic neoplasia (GTN), if they are over 40 years old, have a large increase in the size of the womb, have large cysts in the ovaries or have high initial levels of β‐human chorionic gonadotrophin (hCG) (the pregnancy hormone) in their blood. Although treatment of the cancer with chemotherapy (anti‐cancer drugs) is almost always effective, it has been suggested that routinely giving women anti‐cancer drugs (P‐Chem) before or after the removal of the molar tissue may reduce the risk of the cancerous tissue developing. By doing this review, we tried to assess the benefits and risks of giving anti‐cancer drugs (P‐Chem) to women with molar pregnancies, before or after ERPC. We found three randomised studies (randomised controlled trials (RCTs), where people are allocated at random, i.e. by chance alone) involving a total of 613 women. Two studies tested methotrexate in all women with a CM and one study tested dactinomycin in women with a CM who were at a high risk of getting GTN. The two methotrexate studies are older studies that used relatively poor research methods, therefore their findings cannot be relied upon. 
Overall the review findings suggest that P‐Chem reduces the number of women developing cancer after molar pregnancy; however, this is probably only true for women with high‐risk moles (i.e. CM). In addition, P‐Chem might make the time to diagnose the cancer longer and might increase the number of anti‐cancer treatments needed to cure the cancer if it develops. We were unable to assess the short‐ and long‐term side‐effects of P‐Chem in this review because there were not enough available data; however, we are concerned that the five‐ and eight‐day courses of P‐Chem used by researchers in these studies are too toxic to be given to women routinely. | This is an update of the original Cochrane Review published in Cochrane Library, Issue 10, 2012. Hydatidiform mole (HM), also called a molar pregnancy, is characterised by an overgrowth of foetal chorionic tissue within the uterus. HMs may be partial (PM) or complete (CM) depending on their gross appearance, histopathology and karyotype. PMs usually have a triploid karyotype, derived from maternal and paternal origins, whereas CMs are diploid and have paternal origins only. Most women with HM can be cured by evacuation of retained products of conception (ERPC) and their fertility preserved. However, in some women the growth persists and develops into gestational trophoblastic neoplasia (GTN), a malignant form of the disease that requires treatment with chemotherapy. CMs have a higher rate of malignant transformation than PMs. It may be possible to reduce the risk of GTN in women with HM by administering prophylactic chemotherapy (P‐Chem). However, P‐Chem given before or after evacuation of HM to prevent malignant sequelae remains controversial, as the risks and benefits of this practice are unclear. Objectives To evaluate the effectiveness and safety of P‐Chem to prevent GTN in women with a molar pregnancy. To investigate whether any subgroup of women with HM may benefit more from P‐Chem than others. 
Search methods For the original review we performed electronic searches in the Cochrane Gynaecological Cancer Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 2, 2012), MEDLINE (1946 to February week 4, 2012) and Embase (1980 to 2012, week 9). We developed the search strategy using free text and MeSH. For this update we searched the Cochrane Central Register of Controlled Trials (CENTRAL, Issue 5, 2017), MEDLINE (February 2012 to June week 1, 2017) and Embase (February 2012 to 2017, week 23). We also handsearched reference lists of relevant literature to identify additional studies and searched trial registries. Selection criteria We included randomised controlled trials (RCTs) of P‐Chem for HM. Data collection and analysis Two review authors independently assessed studies for inclusion in the review and extracted data using a specifically designed data collection form. Meta‐analyses were performed by pooling data from individual trials using Review Manager 5 (RevMan 5) software in line with standard Cochrane methodological procedures. The searches identified 161 records; after de‐duplication and title and abstract screening, 90 full‐text articles were retrieved. From these we included three RCTs with a combined total of 613 participants. One study compared prophylactic dactinomycin to no prophylaxis (60 participants); the other two studies compared prophylactic methotrexate to no prophylaxis (420 and 133 participants). All participants were diagnosed with CMs. We considered the two methotrexate studies to be of poor methodological quality. P‐Chem reduced the risk of GTN occurring in women following a CM (3 studies, 550 participants; risk ratio (RR) 0.37, 95% confidence interval (CI) 0.24 to 0.57; I² = 0%; P < 0.00001; low‐quality evidence). However, owing to the poor quality (high risk of bias) of two of the included studies, we performed sensitivity analyses excluding these two studies. 
This left only one small study of high‐risk women to contribute data for this primary outcome (59 participants; RR 0.28, 95% CI 0.10 to 0.73; P = 0.01); therefore we consider this evidence to be of low quality. The time to diagnosis was longer in the P‐Chem group than the control group (2 studies, 33 participants; mean difference (MD) 28.72, 95% CI 13.19 to 44.24; P = 0.0003; low‐quality evidence); and the P‐Chem group required more courses to cure subsequent GTN (1 poor‐quality study, 14 participants; MD 1.10, 95% CI 0.52 to 1.68; P = 0.0002; very low quality evidence). There were insufficient data to perform meta‐analyses for toxicity, overall survival, drug resistance and reproductive outcomes. P‐Chem may reduce the risk of progression to GTN in women with CMs who are at a high risk of malignant transformation; however, current evidence in favour of P‐Chem is limited by the poor methodological quality and small size of the included studies. As P‐Chem may increase drug resistance, delay treatment of GTN and expose women to toxic side effects, this practice cannot currently be recommended. |
t76 | Bronchiectasis is a long‐term respiratory condition. The airways in the lungs are damaged, and people are prone to infection. Symptoms are chronic cough and the production of sputum (coughed‐up material (phlegm) from the lower airways). Moreover, bronchiectasis is associated with a mortality rate more than twice that of the general population. Long‐term antibiotic therapy with macrolides (such as azithromycin, roxithromycin, erythromycin, and clarithromycin) may reduce the cycle of reinfection, reduce symptoms, and improve quality of life. We wanted to do this review to look at the evidence on use of macrolides in people with bronchiectasis. This review is intended to help people such as guideline producers, doctors, and patients make decisions about whether to use or recommend macrolides. We found 15 studies that compared macrolides with placebo (a substance or treatment with no benefit) or no intervention. Eleven studies involved 690 adults (aged 18 years and older) and four studies involved 190 children. Among adults, six used azithromycin, four roxithromycin, and one erythromycin. The four studies with children used azithromycin, clarithromycin, erythromycin, or roxithromycin. The studies on azithromycin reported improved quality of life in adults. Although we found only a few trials, they do show a possible increase in antibiotic resistance. Antibiotic resistance is seen when an antibiotic becomes less effective at killing the bacteria causing the chest infection. We know that macrolides are associated with higher risk of cardiovascular death and other serious adverse events when they are used to treat other conditions. The data in our review suggest it is possible that people with bronchiectasis are at risk for these adverse effects when taking macrolides. | Bronchiectasis is a chronic respiratory disease characterised by abnormal and irreversible dilatation and distortion of the smaller airways. 
Bacterial colonisation of the damaged airways leads to chronic cough and sputum production, often with breathlessness and further structural damage to the airways. Long‐term macrolide antibiotic therapy may suppress bacterial infection and reduce inflammation, leading to fewer exacerbations, fewer symptoms, improved lung function, and improved quality of life. Further evidence is required on the efficacy of macrolides in terms of specific bacterial eradication and the extent of antibiotic resistance. Objectives To determine the impact of macrolide antibiotics in the treatment of adults and children with bronchiectasis. Search methods We identified trials from the Cochrane Airways Trials Register, which contains studies identified through multiple electronic searches and handsearches of other sources. We also searched trial registries and reference lists of primary studies. We conducted all searches on 18 January 2018. Selection criteria We included randomised controlled trials (RCTs) of at least four weeks' duration that compared macrolide antibiotics with placebo or no intervention for the long‐term management of stable bronchiectasis in adults or children with a diagnosis of bronchiectasis by bronchography, plain film chest radiograph, or high‐resolution computed tomography. We excluded studies in which participants had received continuous or high‐dose antibiotics immediately before enrolment, or in which participants had a diagnosis of cystic fibrosis, sarcoidosis, or allergic bronchopulmonary aspergillosis. Our primary outcomes were exacerbation, hospitalisation, and serious adverse events. Data collection and analysis Two review authors independently screened the titles and abstracts of 103 records. We independently screened the full text of 40 study reports and included 15 trials from 30 reports. Two review authors independently extracted outcome data and assessed risk of bias for each study. 
We analysed dichotomous data as odds ratios (ORs) and continuous data as mean differences (MDs) or standardised mean differences (SMDs). We used standard methodological procedures as expected by Cochrane. We included 14 parallel‐group RCTs and one cross‐over RCT with interventions lasting from 8 weeks to 24 months. Of 11 adult studies with 690 participants, six used azithromycin, four roxithromycin, and one erythromycin. Four studies with 190 children used either azithromycin, clarithromycin, erythromycin, or roxithromycin. We included nine adult studies in our comparison between macrolides and placebo and two in our comparison with no intervention. We included one study with children in our comparison between macrolides and placebo and one in our comparison with no intervention. In adults, macrolides reduced exacerbation frequency to a greater extent than placebo (OR 0.34, 95% confidence interval (CI) 0.22 to 0.54; 341 participants; three studies; I² = 65%; moderate‐quality evidence). This translates to a number needed to treat for an additional beneficial outcome of 4 (95% CI 3 to 8). Data show no differences in exacerbation frequency between use of macrolides (OR 0.31, 95% CI 0.08 to 1.15; 43 participants; one study; moderate‐quality evidence) and no intervention. Macrolides were also associated with a significantly better quality of life compared with placebo (MD ‐8.90, 95% CI ‐13.13 to ‐4.67; 68 participants; one study; moderate‐quality evidence). 
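A number needed to treat (NNT) like the 4 quoted above can only be derived from an odds ratio once a control event rate is assumed. A sketch of that derivation, using a hypothetical 50% control exacerbation rate chosen for illustration (the review's assumed baseline rate is not stated in this excerpt):

```python
def nnt_from_or(odds_ratio, control_event_rate):
    """NNT implied by an odds ratio at an assumed control event rate."""
    cer = control_event_rate
    # Convert OR back to an experimental event rate at this baseline risk
    eer = odds_ratio * cer / (1 - cer + odds_ratio * cer)
    # NNT is the reciprocal of the absolute risk difference
    return 1 / abs(cer - eer)

# Hypothetical 50% baseline exacerbation rate, OR 0.34 as reported above
print(round(nnt_from_or(0.34, 0.50)))  # ≈ 4, matching the reported NNT
```

Because the NNT depends on the assumed baseline risk, the same OR yields larger NNTs in lower-risk populations, which is why the CI for the NNT (3 to 8) is wider than the point estimate suggests.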
We found no evidence of a reduction in hospitalisations (OR 0.56, 95% CI 0.19 to 1.62; 151 participants; two studies; I² = 0%; low‐quality evidence), in the number of participants with serious adverse events, including pneumonia, respiratory and non‐respiratory infections, haemoptysis, and gastroenteritis (OR 0.49, 95% CI 0.20 to 1.23; 326 participants; three studies; I² = 0%; low‐quality evidence), or in the number experiencing adverse events (OR 0.83, 95% CI 0.51 to 1.35; 435 participants; five studies; I² = 28%) in adults with macrolides compared with placebo. In children, exacerbation frequency was reduced more with macrolides than with placebo (incidence rate ratio (IRR) 0.50, 95% CI 0.35 to 0.71; 89 children; one study; low‐quality evidence). However, there was no significant difference in this age group with regard to hospitalisations (OR 0.28, 95% CI 0.07 to 1.11; 89 children; one study; low‐quality evidence), serious adverse events, defined within the study as exacerbations of bronchiectasis or investigations related to bronchiectasis (OR 0.43, 95% CI 0.17 to 1.05; 89 children; one study; low‐quality evidence), or adverse events (OR 0.78, 95% CI 0.33 to 1.83; 89 children; one study), in those receiving macrolides compared to placebo. The same study reported an increase in macrolide‐resistant bacteria (OR 7.13, 95% CI 2.13 to 23.79; 89 children; one study), an increase in resistance to Streptococcus pneumoniae (OR 13.20, 95% CI 1.61 to 108.19; 89 children; one study), and an increase in resistance to Staphylococcus aureus (OR 4.16, 95% CI 1.06 to 16.32; 89 children; one study) with macrolides compared with placebo. Quality of life was not reported in the studies with children. Long‐term macrolide therapy may reduce the frequency of exacerbations and improve quality of life, although supporting evidence is derived mainly from studies of azithromycin, rather than other macrolides, and predominantly among adults rather than children. 
However, macrolides should be used with caution, as limited data indicate an associated increase in microbial resistance. Macrolides are associated with increased risk of cardiovascular death and other serious adverse events in other populations, and available data cannot exclude a similar risk among patients with bronchiectasis. |
t77 | Yoga for secondary prevention of coronary heart disease Coronary heart disease (CHD) is a major cause of early cardiovascular‐related illness and death in most developed countries. Secondary prevention is a term used to describe interventions that aim to prevent repeat cardiac events and death in people with established CHD. Individuals with CHD are at the highest risk of coronary events and death. Lifestyle modifications play an important role in secondary prevention. Yoga has been regarded as both a type of physical activity and a stress management strategy. The physical and psychological benefits of yoga are well accepted, yet inappropriate practice of yoga may lead to musculoskeletal injuries, such as muscle soreness and strain. The aim of this systematic review was to determine the effectiveness of yoga for secondary prevention in CHD in terms of cardiac events, death, and health‐related quality of life. We found no randomised controlled trials which met the inclusion criteria for this review. Therefore, the effectiveness of yoga for secondary prevention in CHD remains uncertain. | Coronary heart disease (CHD) is the major cause of early morbidity and mortality in most developed countries. Secondary prevention aims to prevent repeat cardiac events and death in people with established CHD. Lifestyle modifications play an important role in secondary prevention. Yoga has been regarded as a type of physical activity as well as a stress management strategy. Growing evidence suggests the beneficial effects of yoga on various ailments. Objectives To determine the effectiveness of yoga for the secondary prevention of mortality and morbidity in, and on the health‐related quality of life of, individuals with CHD. Search methods This is an update of a review previously published in 2012. 
For this updated review, we searched the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library (Issue 1 of 12, 2014), MEDLINE (1948 to February week 1 2014), EMBASE (1980 to 2014 week 6), Web of Science (Thomson Reuters, 1970 to 12 February 2014), China Journal Net (1994 to May 2014), WanFang Data (1990 to May 2014), and Index to Chinese Periodicals of Hong Kong (HKInChiP) (from 1980). Ongoing studies were identified in the metaRegister of Controlled Trials (May 2014) and the World Health Organization International Clinical Trials Registry Platform (May 2014). We applied no language restrictions. Selection criteria We planned to include randomised controlled trials (RCTs) investigating the influence of yoga practice on CHD outcomes in men and women (aged 18 years and over) with a diagnosis of acute or chronic CHD. Studies were eligible for inclusion if they had a follow‐up duration of six months or more. We considered studies that compared one group practicing a type of yoga with a control group receiving either no intervention or interventions other than yoga. Data collection and analysis Two authors independently selected studies according to prespecified inclusion criteria. We resolved disagreements either by consensus or by discussion with a third author. We found no eligible RCTs that met the inclusion criteria of the review and thus we were unable to perform a meta‐analysis. The effectiveness of yoga for secondary prevention in CHD remains uncertain. Large RCTs of high quality are needed. |
t78 | The optimum treatment for Lennox‐Gastaut syndrome has yet to be established. Lennox‐Gastaut syndrome is a seizure (epilepsy) disorder that is commonly associated with behavioural and mental health problems. Many different treatments are currently used for this disorder and many more have been tried in the past, often with little success. The review of trials found no evidence to suggest that any one drug was more effective than another at controlling the different seizure types in this disorder. More research is needed to compare the therapies currently available. | The Lennox‐Gastaut syndrome (LGS) is an age‐specific disorder, characterised by epileptic seizures, a characteristic electroencephalogram (EEG), psychomotor delay and behavioural disorder. It occurs more frequently in males and onset is usually before the age of eight years, with a peak between three and five years of age. Late cases occurring in adolescence and early adulthood have rarely been reported. Language is frequently affected, with both slowness in ideation and expression in addition to difficulties of motor dysfunction. Severe behavioural disorders (e.g. hyperactivity, aggressiveness and autistic tendencies) and personality disorders are nearly always present. There is also a tendency for psychosis to develop with time. The long‐term prognosis is poor; although the epilepsy often improves, complete seizure freedom is rare and conversely the mental and psychiatric disorders tend to worsen with time. Objectives To compare the effects of pharmaceutical therapies used to treat LGS in terms of control of seizures and adverse effects. Many people who suffer from this syndrome will already be receiving other antiepileptic medications at the time of their entry into a trial. However, for the purpose of this review we will only consider the effect of the single therapeutic agent being trialled (often as add‐on therapy). 
Search methods We searched the Cochrane Epilepsy Group's Specialized Register (18 October 2012), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library Issue 10 of 12, 2012) and MEDLINE (1946 to October week 2, 2012). We also searched EMBASE (1980 to March 2003). We imposed no language restrictions. We searched the International Standard Randomised Controlled Trial Number (ISRCTN) register (18 October 2012) for ongoing trials and in addition, we contacted pharmaceutical companies and colleagues in the field to seek any unpublished or ongoing studies. Selection criteria All randomised controlled trials (RCTs) of the administration of drug therapy to patients with LGS. Data collection and analysis Two review authors independently extracted data. Analysis included assessing study quality, as well as statistical analysis of the effects on overall seizure rates and effects on specific seizure types (e.g. drop attacks), adverse effects and mortality. We found nine RCTs, but were unable to perform any meta‐analysis, because each trial looked at different populations, different therapies and considered different outcomes. The optimum treatment for LGS remains uncertain and no study to date has shown any one drug to be highly efficacious; rufinamide, lamotrigine, topiramate and felbamate may be helpful as add‐on therapy, clobazam may be helpful for drop seizures. Until further research has been undertaken, clinicians will need to continue to consider each patient individually, taking into account the potential benefit of each therapy weighed against the risk of adverse effects. |
t79 | Opioid antagonists are a type of drug which blunts the effects of narcotics such as heroin and morphine, and might help reduce nicotine addiction by blocking some of the rewarding effects of smoking. Our review identified eight trials of naltrexone, a long‐acting opioid antagonist. The trials included over 1200 smokers. Half the trials gave everyone nicotine replacement therapy (NRT) and tested whether naltrexone had any additional benefit. Compared with placebo, naltrexone did not increase the proportion of people who had stopped smoking at the end of treatment or at six months or more after treatment, either on its own or added to NRT. The available evidence does not suggest that opioid antagonists such as naltrexone assist smoking cessation. | The reinforcing properties of nicotine may be mediated through release of various neurotransmitters both centrally and systemically. People who smoke report positive effects such as pleasure, arousal, and relaxation as well as relief of negative affect, tension, and anxiety. Opioid (narcotic) antagonists are of particular interest to investigators as potential agents to attenuate the rewarding effects of cigarette smoking. Objectives To evaluate the efficacy of opioid antagonists in promoting long‐term smoking cessation. The drugs include naloxone and the longer‐acting opioid antagonist naltrexone. Search methods We searched the Cochrane Tobacco Addiction Group Specialised Register for trials of naloxone, naltrexone and other opioid antagonists and conducted an additional search of MEDLINE using 'Narcotic antagonists' and smoking terms in April 2013. We also contacted investigators, when possible, for information on unpublished studies. Selection criteria We considered randomised controlled trials comparing opioid antagonists to placebo or an alternative therapeutic control for smoking cessation. We included in the meta‐analysis only those trials which reported data on abstinence for a minimum of six months. 
We also reviewed, for descriptive purposes, results from short‐term laboratory‐based studies of opioid antagonists designed to evaluate psycho‐biological mediating variables associated with nicotine dependence. Data collection and analysis We extracted data in duplicate on the study population, the nature of the drug therapy, the outcome measures, method of randomisation, and completeness of follow‐up. The main outcome measure was abstinence from smoking after at least six months follow‐up in patients smoking at baseline. Abstinence at end of treatment was a secondary outcome. We extracted cotinine‐ or carbon monoxide‐verified abstinence where available. Where appropriate, we performed meta‐analysis, pooling risk ratios using a Mantel‐Haenszel fixed‐effect model. Eight trials of naltrexone met inclusion criteria for meta‐analysis of long‐term cessation. One trial used a factorial design so five trials compared naltrexone versus placebo and four trials compared naltrexone plus nicotine replacement therapy (NRT) versus placebo plus NRT. Results from 250 participants in one long‐term trial remain unpublished. No significant difference was detected between naltrexone and placebo (risk ratio (RR) 1.00; 95% confidence interval (CI) 0.66 to 1.51, 445 participants), or between naltrexone and placebo as an adjunct to NRT (RR 0.95; 95% CI 0.70 to 1.30, 768 participants). The estimate was similar when all eight trials were pooled (RR 0.97; 95% CI 0.76 to 1.24, 1213 participants). In a secondary analysis of abstinence at end of treatment, there was also no evidence of any early treatment effect, (RR 1.03; 95% CI 0.88 to 1.22, 1213 participants). No trials of naloxone or buprenorphine reported abstinence outcomes. 
Based on data from eight trials and over 1200 individuals, there was no evidence of an effect of naltrexone alone or as an adjunct to NRT on long‐term smoking abstinence, with a point estimate strongly suggesting no effect and confidence intervals that make a clinically important effect of treatment unlikely. Although further trials might narrow the confidence intervals they are unlikely to be a good use of resources. |
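The abstract above pools risk ratios "using a Mantel‐Haenszel fixed‐effect model". A compact sketch of that estimator, applied to hypothetical 2×2 counts for three small trials (illustrative only, not the review's data):

```python
def mh_pooled_rr(trials):
    """Mantel-Haenszel fixed-effect pooled risk ratio.

    trials: list of (events_tx, n_tx, events_ctrl, n_ctrl) tuples,
    one per study. Each study contributes weight proportional to its size.
    """
    num = sum(a * n_c / (n_t + n_c) for a, n_t, c, n_c in trials)
    den = sum(c * n_t / (n_t + n_c) for a, n_t, c, n_c in trials)
    return num / den

# Hypothetical counts: (events_tx, n_tx, events_ctrl, n_ctrl) per trial
trials = [(10, 100, 10, 100), (8, 120, 9, 110), (15, 200, 14, 190)]
print(round(mh_pooled_rr(trials), 2))
```

Unlike inverse-variance pooling, the Mantel‐Haenszel weights remain stable with sparse event counts, which is one reason Cochrane reviews default to it for dichotomous outcomes.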
t80 | Haemoglobin carries oxygen in the blood. In thalassaemia, a genetic disease, the body sometimes cannot produce enough haemoglobin. This can be managed with regular blood transfusions, but these may lead to excess iron in the body, which must be removed to prevent organ damage. Iron is removed by iron chelation therapy using a substance called an iron chelator. This works by sticking to excess iron molecules in the body. When patients go to the toilet, this excess iron leaves the body. Three iron chelators are commonly used. One (desferrioxamine) is injected and two (deferiprone and deferasirox) are taken orally. Deferasirox is licensed for use in children. Desferrioxamine is inconvenient and expensive, motivating researchers to find safe, effective oral iron chelators. We found 22 randomised controlled trials comparing iron chelators. These do not provide enough information about death or organ damage. However, they showed all three chelators performed similarly well in removing excess iron. Several trials found combining desferrioxamine and deferiprone removed more excess iron than using just one iron chelator. Trials showing side effects must be considered carefully. Side effects with desferrioxamine included pain or skin reactions at the injection site and joint pain. Side effects with deferiprone included joint pain, nausea, stomach upsets and low white blood cell count. Side effects with deferasirox included skin rashes, increases in liver enzymes and reduced kidney function. Low white blood cell count and reduced kidney function are important side effects, and these should be monitored regularly in patients receiving deferiprone or deferasirox. Patients were three times more likely to experience a side effect when deferiprone and desferrioxamine were combined, compared with desferrioxamine alone. Three studies showed that patients using deferiprone were two and a half times more likely to have joint pain compared with using desferrioxamine alone. 
We have found no evidence for changing current treatment recommendations, which state that deferiprone or deferasirox should be used to remove excess iron when desferrioxamine cannot be used or is inadequate. The Food and Drug Administration in the United States of America has approved deferiprone only as "last resort treatment of iron overload in thalassemia". Larger randomised controlled trials of iron chelation therapy are needed, using standardised agreed measures of iron levels and organ damage to allow comparison of such valuable treatments. | Thalassaemia major is a genetic disease characterised by a reduced ability to produce haemoglobin. Management of the resulting anaemia is through red blood cell transfusions. Repeated transfusions result in an excessive accumulation of iron in the body (iron overload), removal of which is achieved through iron chelation therapy. Desferrioxamine mesylate (desferrioxamine) is one of the most widely used iron chelators. Substantial data have shown the beneficial effects of desferrioxamine, although adherence to desferrioxamine therapy is a challenge. Alternative oral iron chelators, deferiprone and deferasirox, are now commonly used. Important questions exist about whether desferrioxamine, as monotherapy or in combination with an oral iron chelator, is the best treatment for iron chelation therapy. Objectives To determine the effectiveness (dose and method of administration) of desferrioxamine in people with transfusion‐dependent thalassaemia. To summarise data from trials on the clinical efficacy and safety of desferrioxamine for thalassaemia and to compare these with deferiprone and deferasirox. Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register. 
We also searched MEDLINE, EMBASE, CENTRAL (The Cochrane Library), LILACS and other international medical databases, plus ongoing trials registers and the Transfusion Evidence Library (www.transfusionevidencelibrary.com). All searches were updated to 5 March 2013. Selection criteria Randomised controlled trials comparing desferrioxamine with placebo, with another iron chelator, or comparing two schedules or doses of desferrioxamine, in people with transfusion‐dependent thalassaemia. Data collection and analysis Six authors working independently were involved in trial quality assessment and data extraction. For one trial, investigators supplied additional data upon request. A total of 22 trials involving 2187 participants (range 11 to 586 people) were included. These trials included eight comparisons between desferrioxamine alone and deferiprone alone; five comparisons between desferrioxamine combined with deferiprone and deferiprone alone; eight comparisons between desferrioxamine alone and desferrioxamine combined with deferiprone; two comparisons of desferrioxamine with deferasirox; and two comparisons of different routes of desferrioxamine administration (bolus versus continuous infusion). Overall, few trials measured the same or long‐term outcomes. Seven trials reported cardiac function or liver fibrosis as measures of end organ damage; none of these included a comparison with deferasirox. Five trials reported a total of seven deaths; three in patients who received desferrioxamine alone, two in patients who received desferrioxamine and deferiprone. A further death occurred in a patient who received deferiprone alone and another in a patient who received deferasirox alone. One trial reported five further deaths in patients who withdrew from randomised treatment (deferiprone with or without desferrioxamine) and switched to desferrioxamine alone. 
One trial planned five years of follow up but was stopped early due to the beneficial effects of a reduction in serum ferritin levels in those receiving combined desferrioxamine and deferiprone treatment compared with deferiprone alone. The results of this and three other trials suggest an advantage of combined therapy with desferrioxamine and deferiprone over monotherapy to reduce iron stores as measured by serum ferritin. There is, however, no evidence for the improved efficacy of combined desferrioxamine and deferiprone therapy compared with monotherapy from direct or indirect measures of liver iron. Earlier trials measuring the cardiac iron load indirectly by measurement of the magnetic resonance imaging T2* signal had suggested deferiprone may reduce cardiac iron more quickly than desferrioxamine. However, meta‐analysis of two trials showed a significantly lower left ventricular ejection fraction in patients who received desferrioxamine alone compared with those who received combination therapy using desferrioxamine with deferiprone. Adverse events were recorded by 18 trials. These occurred with all treatments, but were significantly less likely with desferrioxamine than deferiprone in one trial, relative risk 0.45 (95% confidence interval 0.24 to 0.84) and significantly less likely with desferrioxamine alone than desferrioxamine combined with deferiprone in two other trials, relative risk 0.33 (95% confidence interval 0.13 to 0.84). In particular, four studies reported permanent treatment withdrawal due to adverse events from deferiprone; only one of these reported permanent withdrawals associated with desferrioxamine. Adverse events also occurred at a higher frequency in patients who received deferasirox than desferrioxamine in one trial. Eight trials reported local adverse reactions at the site of desferrioxamine infusion including pain and swelling. 
Adverse events associated with deferiprone included joint pain, gastrointestinal disturbance, increases in liver enzymes and neutropenia; adverse events associated with deferasirox comprised increases in liver enzymes and renal impairment. Regular monitoring of white cell counts has been recommended for deferiprone and monitoring of liver and renal function for deferasirox. In summary, desferrioxamine and the oral iron chelators deferiprone and deferasirox produce significant reductions in iron stores in transfusion‐dependent, iron‐overloaded people. There is no evidence from randomised clinical trials to suggest that any one of these produces a greater reduction in clinically significant end organ damage, although in two trials, combination therapy with desferrioxamine and deferiprone showed a greater improvement in left ventricular ejection fraction than desferrioxamine used alone. Desferrioxamine is the recommended first‐line therapy for iron overload in people with thalassaemia major and deferiprone or deferasirox are indicated for treating iron overload when desferrioxamine is contraindicated or inadequate. Oral deferasirox has been licensed for use in children aged over six years who receive frequent blood transfusions and in children aged two to five years who receive infrequent blood transfusions. In the absence of randomised controlled trials with long‐term follow up, there is no compelling evidence to change this conclusion. Worsening iron deposition in the myocardium in patients receiving desferrioxamine alone would suggest a change of therapy by intensification of desferrioxamine treatment or the use of desferrioxamine and deferiprone combination therapy. Adverse events are increased in patients treated with deferiprone compared with desferrioxamine and in patients treated with combined deferiprone and desferrioxamine compared with desferrioxamine alone. 
People treated with all chelators must be kept under close medical supervision and treatment with deferiprone or deferasirox requires regular monitoring of neutrophil counts or renal function respectively. There is an urgent need for adequately‐powered, high‐quality trials comparing the overall clinical efficacy and long‐term outcomes of deferiprone, deferasirox and desferrioxamine. |
t81 | We reviewed the evidence on the effect of positive expiratory pressure (PEP) physiotherapy to clear the airways of people with cystic fibrosis (CF). CF affects approximately one in 3000 live births in white populations and causes frequent lung infection, due to mucus blocking the airways. Chest physiotherapy is often used to try to clear the mucus from the lungs. We wanted to discover whether using a PEP device (a form of chest physiotherapy) was better or worse than other forms of chest physiotherapy for clearing the mucus from the lungs in people with CF. A PEP device provides positive pressure behind the mucus to try to push it out of the lungs. The review includes 28 studies with 788 people (from infants to adults) with CF with mild to severe lung disease. The studies compared PEP to other methods of chest physiotherapy; the length of treatment ranged from a single session to two years of treatment. Generally, the efficacy of PEP is similar to other methods of chest physiotherapy such as postural drainage with percussion, active cycle of breathing techniques, autogenic drainage, oscillatory PEP devices such as the flutter and acapella, thoracic oscillating devices such as the 'Vest', and bilevel positive airway pressure (BiPaP) (typically used for ventilatory support, but by changing the inspiratory and expiratory pressures on the device and combining it with huffing, BiPaP has been used for airway clearance). We found no difference between PEP and other forms of chest physiotherapy in lung function, the amount of mucus cleared from the airways or its related effects on the health of people with CF. However, the rate of flare ups of respiratory symptoms decreased in people using PEP compared to other forms of physiotherapy such as a vibrating PEP device or a vibrating vest. There was some evidence that people with CF may prefer PEP to other chest physiotherapy methods. 
There was no evidence of PEP causing harm, except in one study where infants performing either PEP or percussion in various positions which use gravity to help drain secretions experienced some gastro‐oesophageal reflux (regurgitation of food) in head‐down positions; this was more severe in the group using postural drainage with percussion. In all the other trials PEP was performed in a sitting position. Ten of the 28 studies studied single PEP treatment sessions. The results from these studies are very limited as they could not report on the number of respiratory infections and lung function did not change with just one treatment. Two one‐year studies compared PEP to postural drainage and percussion; in the study with children, PEP improved their lung function, while in the adult study, lung function declined slightly with both PEP and postural drainage and percussion. Also, the method of performing PEP was different in the two age groups. Although PEP seems to have an advantage in reducing flare ups (based on the combined results of a few studies), different physiotherapy techniques and devices may be more or less effective at varying times and in different individuals during baseline function and chest flare ups. Each person should talk to their clinician to help choose which method of airway clearance is best for them and which they will adhere to, so as to provide the best quality of life and long‐term outcomes. | Chest physiotherapy is widely prescribed to assist the clearance of airway secretions in people with cystic fibrosis (CF). Positive expiratory pressure (PEP) devices provide back pressure to the airways during expiration. This may improve clearance by building up gas behind mucus via collateral ventilation and by temporarily increasing functional residual capacity. The developers of the PEP technique recommend using PEP with a mask in order to avoid air leaks via the upper airways and mouth. 
In addition, increasing functional residual capacity (FRC) has not been demonstrated using mouthpiece PEP. Given the widespread use of PEP devices, there is a need to determine the evidence for their effect. This is an update of a previously published review. Objectives To determine the effectiveness and acceptability of PEP devices compared to other forms of physiotherapy as a means of improving mucus clearance and other outcomes in people with CF. Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. The electronic database CINAHL was also searched from 1982 to 2017. Most recent search of the Group's CF Trials Register: 20 February 2019. Selection criteria Randomised controlled studies in which PEP was compared with any other form of physiotherapy in people with CF. This included postural drainage and percussion (PDPV), active cycle of breathing techniques (ACBT), oscillating PEP devices, thoracic oscillating devices, bilevel positive airway pressure (BiPaP) and exercise. Data collection and analysis Three authors independently applied the inclusion and exclusion criteria to publications, assessed the risk of bias of the included studies and assessed the quality of the evidence using the GRADE recommendations. A total of 28 studies (involving 788 children and adults) were included in the review; 18 studies involving 296 participants were cross‐over in design. Data were not published in sufficient detail in most of these studies to perform any meta‐analysis. In 22 of the 28 studies the PEP technique was performed using a mask, in three of the studies a mouthpiece was used with nose clips and in three studies it was unclear whether a mask or mouthpiece was used. 
These studies compared PEP to ACBT, autogenic drainage (AD), oral oscillating PEP devices, high‐frequency chest wall oscillation (HFCWO) and BiPaP and exercise. Forced expiratory volume in one second was the review's primary outcome and the most frequently reported outcome in the studies (24 studies, 716 participants). Single interventions or series of treatments that continued for up to three months demonstrated little or no difference in effect between PEP and other methods of airway clearance on this outcome (low‐ to moderate‐quality evidence). However, long‐term studies had equivocal or conflicting results regarding the effect on this outcome (low‐ to moderate‐quality evidence). A second primary outcome was the number of respiratory exacerbations. There was a lower exacerbation rate in participants using PEP compared to other techniques when used with a mask for at least one year (five studies, 232 participants; moderate‐ to high‐quality evidence). In one of the included studies which used PEP with a mouthpiece, it was reported (personal communication) that there was no difference in the number of respiratory exacerbations (66 participants, low‐quality evidence). Participant preference was reported in 10 studies; in all studies with an intervention period of at least one month, this was in favour of PEP. The results for the remaining outcome measures (including our third primary outcome of mucus clearance) were not examined or reported in sufficient detail to provide any high‐quality evidence; only very low‐ to moderate‐quality evidence was available for other outcomes. There was limited evidence reported on adverse events; these were measured in five studies, two of which found no events. In a study where infants performing either PEP or PDPV experienced some gastro‐oesophageal reflux, this was more severe in the PDPV group (26 infants, low‐quality evidence). 
In PEP versus oscillating PEP, adverse events were only reported in the flutter group (five participants complained of dizziness, which improved after further instructions on device use were provided) (22 participants, low‐quality evidence). In PEP versus HFCWO, from one long‐term high‐quality study (107 participants) there was little or no difference in terms of number of adverse events; however, those in the PEP group had fewer adverse events related to the lower airways when compared to HFCWO (high‐certainty evidence). Many studies had a risk of bias as they did not report how the randomisation sequence was either generated or concealed. Most studies reported the number of dropouts and also reported on all planned outcome measures. The evidence provided by this review is of variable quality, but suggests that all techniques and devices described may have a place in the clinical treatment of people with CF. Following meta‐analyses of the effects of PEP versus other airway clearance techniques on lung function and patient preference, this Cochrane Review demonstrated high‐quality evidence of a significant reduction in pulmonary exacerbations when PEP using a mask was compared with HFCWO. It is important to note that airway clearance techniques should be individualised throughout life according to developmental stages, patient preferences, pulmonary symptoms and lung function. This also applies as conditions vary between baseline function and pulmonary exacerbations. |
t82 | The class of drugs called ACE inhibitors is commonly used for the treatment of elevated blood pressure. This class includes drugs such as ramipril (brand name: Altace), captopril (Capoten), enalapril (Vasotec), fosinopril (Monopril), lisinopril (Prinivil, Zestril) and quinapril (Accupril). We asked how much this class of drugs lowers blood pressure and whether there is a difference between individual drugs within the class. The available scientific literature was searched to find all the trials that had assessed this question. We found 92 trials that randomly assigned participants to take either an ACE inhibitor or an inert substance (placebo). These trials evaluated the blood pressure lowering ability of 14 different ACE inhibitors in 12 954 participants. The trials followed participants for approximately 6 weeks (though people are typically expected to take anti‐hypertension drugs for the rest of their lives). The blood pressure lowering effect was modest. There was an 8‐point reduction in the upper number that signifies the systolic pressure and a 5‐point reduction in the lower number that signifies the diastolic pressure. Most of the blood pressure lowering effect (about 70%) can be achieved with the lowest recommended dose of the drugs. No ACE inhibitor drug appears to be any better or worse than others in terms of blood pressure lowering ability. Most of the trials in this review were funded by companies that make ACE inhibitors and serious adverse effects were not reported by the authors of many of these trials. This could mean that the drug companies are withholding unfavorable findings related to their drugs. Due to incomplete reporting of the number of participants who dropped out of the trials due to adverse drug reactions, as well as the short duration of these trials, this review could not provide a good estimate of the harms associated with this class of drugs. 
Prescribing the least expensive ACE inhibitor in lower doses will lead to substantial cost savings, and possibly a reduction in dose‐related adverse events. | ACE inhibitors are widely prescribed for hypertension so it is essential to determine and compare their effects on blood pressure (BP), heart rate and withdrawals due to adverse effects (WDAE). Objectives To quantify the dose‐related systolic and/or diastolic BP lowering efficacy of ACE inhibitors versus placebo in the treatment of primary hypertension. Search methods We searched CENTRAL (The Cochrane Library 2007, Issue 1), MEDLINE (1966 to February 2007), EMBASE (1988 to February 2007) and reference lists of articles. Selection criteria Double‐blind, randomized, controlled trials evaluating the BP lowering efficacy of fixed‐dose monotherapy with an ACE inhibitor compared with placebo for a duration of 3 to 12 weeks in patients with primary hypertension. Data collection and analysis Two authors independently assessed the risk of bias and extracted data. Study authors were contacted for additional information. WDAE information was collected from the trials. Ninety two trials evaluated the dose‐related trough BP lowering efficacy of 14 different ACE inhibitors in 12 954 participants with a baseline BP of 157/101 mm Hg. The data do not suggest that any one ACE inhibitor is better or worse at lowering BP. A dose of 1/8 or 1/4 of the manufacturer's maximum recommended daily dose (Max) achieved a BP lowering effect that was 60 to 70% of the BP lowering effect of Max. A dose of 1/2 Max achieved a BP lowering effect that was 90% of Max. ACE inhibitor doses above Max did not significantly lower BP more than Max. Combining the effects of 1/2 Max and higher doses gives an estimate of the average trough BP lowering efficacy for ACE inhibitors as a class of drugs of ‐8 mm Hg for SBP and ‐5 mm Hg for DBP. ACE inhibitors reduced BP measured 1 to 12 hours after the dose by about 11/6 mm Hg. 
There are no clinically meaningful BP lowering differences between different ACE inhibitors. The BP lowering effect of ACE inhibitors is modest; the magnitude of trough BP lowering at one‐half the manufacturers' maximum recommended dose and above is ‐8/‐5 mm Hg. Furthermore, 60 to 70% of this trough BP lowering effect occurs with recommended starting doses. The review did not provide a good estimate of the incidence of harms associated with ACE inhibitors because of the short duration of the trials and the lack of reporting of adverse effects in many of the trials. |
t83 | Cirrhosis is a chronic disorder of the liver where scar tissue replaces the normal liver. People with cirrhosis can develop a kidney disease known as hepatorenal syndrome. The disease may develop when the blood flow to the kidneys becomes insufficient. Increasing the blood flow to the kidneys may therefore benefit people with hepatorenal syndrome. There are two types of hepatorenal syndrome: type 1 occurs rapidly, and type 2 has a slower onset. Terlipressin is a drug that increases the blood flow to the kidneys by constricting blood vessels. The drug may therefore help people with cirrhosis and hepatorenal syndrome. The review includes nine randomised clinical trials (RCTs) and a total of 534 participants. The trials originated from six countries. Seven trials included only participants with type 1 hepatorenal syndrome. Two trials included a total of 96 participants with type 1 or type 2 hepatorenal syndrome. Study funding sources: Three RCTs reported funding from a pharmaceutical company. The remaining trials did not report funding or did not receive funding from pharmaceutical companies. People who received terlipressin had a lower risk of dying than people who received inactive placebo or no treatment. Terlipressin was also associated with a beneficial effect on renal function. Terlipressin increased the risk of serious circulation and heart problems (so‐called cardiovascular events). Other adverse events included diarrhoea and abdominal pain. The analyses mainly included people with type 1 hepatorenal syndrome. No beneficial or harmful effects of terlipressin were found when analysing participants with type 2 hepatorenal syndrome (possibly due to the small number of participants). | Hepatorenal syndrome is a potentially reversible renal failure associated with severe liver disease. The disease is relatively common among people with decompensated cirrhosis. Terlipressin is a drug that increases the blood flow to the kidneys by constricting blood vessels. 
The previous version of this systematic review found a potential beneficial effect of terlipressin on mortality and renal function in people with cirrhosis and hepatorenal syndrome. Objectives To assess the beneficial and harmful effects of terlipressin versus placebo/no intervention for people with cirrhosis and hepatorenal syndrome. Search methods We identified eligible trials through searches of the Cochrane Hepato‐Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library, MEDLINE, Embase, and Science Citation Index Expanded, and manual searches until 21 November 2016. Selection criteria Randomised clinical trials (RCTs) involving participants with cirrhosis and type 1 or type 2 hepatorenal syndrome allocated to terlipressin versus placebo or no intervention. We allowed co‐administration with albumin administered to both comparison groups. Data collection and analysis Two review authors independently extracted data from trial reports and undertook correspondence with the authors. Primary outcomes were mortality, hepatorenal syndrome, and serious adverse events. We conducted sensitivity analyses of RCTs in which participants received albumin, subgroup analyses of participants with type 1 or type 2 hepatorenal syndrome, and Trial Sequential Analyses to control random errors. We reported random‐effects meta‐analyses with risk ratios (RR) and 95% confidence intervals (CI). We assessed the risk of bias based on the Cochrane Hepato‐Biliary Group domains. We included nine RCTs with a total of 534 participants with cirrhosis and ascites. One RCT had a low risk of bias for mortality and a high risk of bias for the remaining outcomes. All included trials had a high risk of bias for non‐mortality outcomes. In total, 473 participants had type 1 hepatorenal syndrome. Seven RCTs specifically evaluated terlipressin and albumin. 
Terlipressin was associated with a beneficial effect on mortality when including all RCTs (RR 0.85, 95% CI 0.73 to 0.98; 534 participants; number needed to treat for an additional beneficial outcome (NNTB) 10.3 people; low‐quality evidence). Trial Sequential Analysis including all RCTs also found a beneficial effect of terlipressin. Additional analyses showed a beneficial effect of terlipressin and albumin on reversal of hepatorenal syndrome (RR 0.63, 95% CI 0.48 to 0.82; 510 participants; 8 RCTs; NNTB 4 people; low‐quality evidence). Terlipressin increased the risk of serious cardiovascular adverse events (RR 7.26, 95% CI 1.70 to 31.05; 234 participants; 4 RCTs), but it had no effect on the risk of serious adverse events when analysed as a composite outcome (RR 0.91, 95% CI 0.68 to 1.21; 534 participants; 9 RCTs; number needed to treat for an additional harmful outcome 24.5 people; low‐quality evidence). Non‐serious adverse events were mainly gastrointestinal, including diarrhoea (RR 5.76, 95% CI 2.19 to 15.15; 240 participants; low‐quality evidence) and abdominal pain (RR 1.54, 95% CI 0.97 to 2.43; 294 participants; low‐quality evidence). We identified one ongoing trial on terlipressin versus placebo in participants with cirrhosis, ascites, and hepatorenal syndrome type 1. Three RCTs reported funding from a pharmaceutical company. The remaining trials did not report funding or did not receive funding from pharmaceutical companies. This review suggests that terlipressin may be associated with beneficial effects on mortality and renal function in people with cirrhosis and type 1 hepatorenal syndrome, but it is also associated with serious adverse effects. We downgraded the strength of the evidence due to methodological issues including bias control, clinical heterogeneity, and imprecision. Consequently, additional evidence is needed. |
t84 | Chronic obstructive pulmonary disease (COPD) comprises two conditions: emphysema and chronic bronchitis. It has been recognised as a serious health problem and one of the main causes of death around the world. The World Health Organization (WHO) reports that the number of people with COPD continues to grow, and by 2030, COPD will become the world's third leading cause of death. Most of the people who have COPD also experience depression. Studies show that up to 80% of patients with more severe COPD can have symptoms of depression. Other findings show that patients with COPD are four times more likely to have depression than those without COPD. The number of people living with COPD is increasing, rather than decreasing, around the world. Depression in this population is commonly unrecognised, and patients rarely receive appropriate treatment. Untreated depression increases the risk of death, hospitalisation, readmissions, and healthcare costs. Currently, there is no strong evidence showing which psychological therapy is most effective for patients with COPD and depression. This review will be of interest to people who have COPD and depression, respiratory physicians, mental health specialists, respiratory nurses, other healthcare professionals, and policy makers. This review included 13 experimental studies (RCTs) with 1500 participants. Our main result shows that psychological therapies using a cognitive‐behavioural therapy (CBT) approach may, potentially, be effective in reducing depressive symptoms in patients with COPD. More experimental studies with larger numbers of participants are needed to confirm beneficial effects of CBT for patients with COPD‐related depression. | Chronic obstructive pulmonary disease (COPD) has been recognised as a global health concern, and one of the leading causes of morbidity and mortality worldwide. 
Projections of the World Health Organization (WHO) indicate that prevalence rates of COPD continue to increase, and by 2030, it will become the world's third leading cause of death. Depression is a major comorbidity amongst patients with COPD, with an estimated prevalence of up to 80% in severe stages of COPD. Prevalence studies show that patients who have COPD are four times as likely to develop depression compared to those without COPD. Regrettably, they rarely receive appropriate treatment for COPD‐related depression. Available findings from trials indicate that untreated depression is associated with worse compliance with medical treatment, poor quality of life, increased mortality rates, increased hospital admissions and readmissions, prolonged length of hospital stay, and subsequently, increased costs to the healthcare system. Given the burden and high prevalence of untreated depression, it is important to evaluate and update existing experimental evidence using rigorous methodology, and to identify effective psychological therapies for patients with COPD‐related depression. Objectives To assess the effectiveness of psychological therapies for the treatment of depression in patients with chronic obstructive pulmonary disease. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (2018, Issue 11), and Ovid MEDLINE, Embase and PsycINFO from June 2016 to 26 November 2018. Previously these databases were searched via the Cochrane Airways and Common Mental Disorders Groups' Specialised Trials Registers (all years to June 2016). We searched ClinicalTrials.gov, the ISRCTN registry, and the World Health Organization International Clinical Trials Registry Platform (ICTRP) to 26 November 2018 to identify unpublished or ongoing trials. Additionally, the grey literature databases and the reference lists of studies initially identified for full‐text screening were also searched. 
Selection criteria Eligible for inclusion were randomised controlled trials that compared the use of psychological therapies with either no intervention, education, or combined with a co‐intervention and compared with the same co‐intervention in a population of patients with COPD whose depressive symptoms were measured before or at baseline assessment. Data collection and analysis Two review authors independently assessed the titles and abstracts identified by the search to determine which studies satisfied the inclusion criteria. We assessed two primary outcomes: depressive symptoms and adverse events; and the following secondary outcomes: quality of life, dyspnoea, forced expiratory volume in one second (FEV1), exercise tolerance, hospital length of stay or readmission rate, and cost‐effectiveness. Potentially eligible full‐text articles were also independently assessed by two review authors. A PRISMA flow diagram was prepared to demonstrate the decision process in detail. We used the Cochrane 'Risk of bias' evaluation tool to examine the risk of bias, and assessed the quality of evidence using the GRADE framework. All outcomes were continuous; therefore, we calculated the pooled standardised mean difference (SMD) or mean difference (MD) with a corresponding 95% confidence interval (CI). We used a random‐effects model to calculate treatment effects. The findings are based on 13 randomised controlled trials (RCTs), with a total of 1500 participants. In some of the included studies, the investigators did not recruit participants with clinically confirmed depression but applied screening criteria after randomisation. Hence, across the studies, baseline scores for depressive symptoms varied from no symptoms to severe depression. The severity of COPD across the studies was moderate to severe. 
Primary outcomes There was a small effect showing the effectiveness of psychological therapies in improving depressive symptoms when compared to no intervention (SMD 0.19, 95% CI 0.05 to 0.33; P = 0.009; 6 studies, 764 participants), or to education (SMD 0.23, 95% CI 0.06 to 0.41; P = 0.010; 3 studies, 507 participants). Two studies compared psychological therapies plus a co‐intervention versus the co‐intervention alone (i.e. pulmonary rehabilitation (PR)). The results suggest that a psychological therapy combined with a PR programme can reduce depressive symptoms more than a PR programme alone (SMD 0.37, 95% CI ‐0.00 to 0.74; P = 0.05; 2 studies, 112 participants). We rated the quality of evidence as very low. Owing to the nature of psychological therapies, blinding of participants, personnel, and outcome assessment was a concern. None of the included studies measured adverse events. Secondary outcomes Quality of life was measured in four studies in the comparison with no intervention, and in three studies in the comparison with education. We found inconclusive results for improving quality of life. However, when we pooled data from two studies using the same measure, the result suggested that psychological therapy improved quality of life better than no intervention. One study measured hospital admission rates and cost‐effectiveness and showed significant reductions in the intervention group compared to the education group. We rated the quality of evidence as very low for the secondary outcomes. The findings from this review indicate that psychological therapies (using a CBT‐based approach) may be effective for treating COPD‐related depression, but the evidence is limited. Depressive symptoms improved more in the intervention groups compared to: 1) no intervention (attention placebo or standard care), 2) educational interventions, and 3) a co‐intervention (pulmonary rehabilitation). |
t85 | Phosphorus is a chemical element sometimes used in a military or industrial context. Phosphorus burns resulting from military or industrial injuries are chemical burns that can be fatal. Although rare, these burns are serious, often very deep and painful, and can be associated with lengthy periods of time in hospital for patients. The usual procedure for dealing with phosphorus burns is to remove any affected clothing and wash the wounds with water or saline solution. In addition, copper sulphate can be used to make the particles of phosphorus more visible and easier to remove; however, copper sulphate is poisonous and can in itself be fatal if absorbed into the body. This review found two retrospective studies (88 patients) that compared burns treated with or without copper sulphate. The review found no evidence that using copper sulphate improves the outcome of the burn; indeed, based upon the limited available evidence, the review authors suggest that copper sulphate should not be used in the treatment of phosphorus burns. | Phosphorus burns are rarely encountered in usual clinical practice and occur mostly in military and industrial settings. However, these burns can be fatal, even with minimal burn area, and are often associated with prolonged hospitalisation. Objectives To summarise the evidence of effects (beneficial and harmful) of all interventions for treating people with phosphorus burns. Search methods In October 2013 for this first update we searched the Cochrane Wounds Group Specialised Register; the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library); Ovid OLDMEDLINE; Ovid MEDLINE; Ovid MEDLINE (In‐Process & Other Non‐Indexed Citations); Ovid EMBASE; EBSCO CINAHL and Conference Proceedings Citation Index ‐ Science (CPCI‐S). We did not apply any methodological filters or restrictions on the basis of study design, language, date of publication or publication status.
Selection criteria Any comparisons of different ways of managing phosphorus burns including, but not restricted to, randomised trials. Data collection and analysis We found two non‐randomised comparative studies, both comparing patients treated with and without copper sulphate. These two comparative studies provide no evidence to support the use of copper sulphate in managing phosphorus burns. Indeed, the small amount of available evidence suggests that it may be harmful. First aid for phosphorus burns involves the common sense measures of acting promptly to remove the patient's clothes, irrigating the wound(s) with water or saline continuously, and removing phosphorus particles. There is no evidence that using copper sulphate to assist visualisation of phosphorus particles for removal is associated with better outcome, and some evidence that systemic absorption of copper sulphate may be harmful. We have so far been unable to identify any other comparisons relevant to informing other aspects of the care of patients with phosphorus burns. Future versions of this review will take account of information in articles published in languages other than English, which may contain additional evidence based on treatment comparisons.
t86 | Duchenne muscular dystrophy is a progressive wasting condition of muscles which starts in early childhood, leads to dependence on a wheelchair by the age of thirteen and respiratory failure by late teens. The condition is due to absence of dystrophin, a large muscle protein that has several functions within muscle cells. We know that calcium molecules build up in the muscle cells of people with Duchenne muscular dystrophy and this is associated with cell death. The rationale behind this review was to ascertain whether, in randomised controlled trials, drugs that block calcium entry into muscle reduced progression of the condition. Although these trials were conducted over ten years ago, a systematic review was not done at that time, and so a potential effect of calcium blocking drugs (antagonists) on the course of DMD may have been missed. If it were to exist, calcium antagonists might be an effective treatment in their own right or, more likely, could be used in combination with newer treatments such as corticosteroids or potential treatments such as gene related therapies. In the original review, eight studies were identified and five were of high enough quality to be included. In this update no new trials were identified and so five were still included in the review. These studies were of different types of calcium channel blocking drugs and measured a variety of outcomes such as muscle strength, scales of muscle function, biochemical changes in muscle and electrocardiographic findings. Only one study showed a beneficial effect, which was an increase in muscle strength, but in this study the drug used was also associated with cardiac side effects. Adverse effects noted were mostly known side effects of calcium channel blockers.
Limitations of the review were that a meta‐analysis could not be done as the trials used different calcium channel blockers and measured different outcomes, and all but one of the trials included only a small number of patients. In conclusion, the review did not find calcium antagonists to have a useful effect. | Duchenne muscular dystrophy (DMD) is a progressive muscle condition starting in childhood, leading to severe disability and a shortened life span. It is due to severe deficiency of the protein dystrophin which performs both structural and signalling roles within skeletal and cardiac myocytes. Calcium accumulates in dystrophic muscle cells and plays a role in cell damage. It has been hypothesised that use of calcium antagonists might reduce this calcium load and its toxic effect on muscle cells. This is an updated review, in which no new trials were found. Objectives To evaluate the effects of calcium antagonists on muscle function and muscle strength in people with DMD. Search methods We searched the Cochrane Neuromuscular Disease Group Trials Register (July 2010), MEDLINE (from January 1950 to July 2010) and EMBASE (from January 1947 to July 2010). We also searched bibliographies in reports of any trials. Selection criteria All randomised or quasi‐randomised controlled trials of any calcium antagonist in people with DMD. Data collection and analysis Both authors assessed all identified trials for inclusion in the study on the basis of whether they fulfilled the selection criteria. Both authors extracted data from the trials and assessed the methodological quality. Had there been more than one trial of the same intervention and outcome of sufficient methodological quality, we had planned to undertake a meta‐analysis. Five randomised or quasi‐randomised double‐blind trials fulfilled the selection criteria, but were not sufficiently comparable to undertake a meta‐analysis.
The drugs studied were verapamil (8 participants), diltiazem (56 participants), nifedipine (105 participants) and flunarizine (27 participants). There were limitations in the description of blinding and randomisation, and definition of outcome measures. One trial, using verapamil, showed a difference between groups in muscle force measured by ergometry, but also revealed cardiac side effects. The numbers of people included in the trials were low, and so the studies may not have included enough people for sufficient power to detect small differences in muscle force or function between placebo and control groups. In addition, calcium antagonists were in an early stage of development and some of the second generation drugs that have a better side effect profile, such as amlodipine, have not been studied. There is no evidence to show a significant beneficial effect of calcium antagonists on muscle function in DMD. |
t87 | In the last few years, there has been increasing interest in whether compression stockings (or 'flight socks') reduce the risk of deep vein thrombosis (DVT; blood clots in the legs) and other circulatory problems in airline passengers. The stockings are worn throughout the flight and are similar to those known to be effective in patients lying in bed after an operation. By applying a gentle pressure, to the ankle in particular, compression stockings help blood to flow. Pressure combined with leg movement helps blood in superficial (surface) veins to move to the deep veins and back to the heart. The blood is then less likely to clot in the deep veins, which could be fatal if the clot moves to the lungs. This review included eleven trials (2906 participants) and we were able to combine the data from nine trials with a total of 2637 participants (current to February 2016). Almost half of the participants were randomly assigned to wearing stockings for a flight lasting at least five hours while the other half did not wear stockings. None of the passengers developed a DVT with symptoms (slowly developing leg pain, swelling and increased temperature) and no serious events (a blood clot in their lungs (pulmonary embolism) or dying) were reported. Passengers were carefully assessed after the flight to detect any problems with the circulation of blood in their legs, even if they had not noticed any problems themselves. Passengers allocated to wear compression stockings had a large reduction in symptomless DVT compared with those allocated not to wear them. This difference in symptomless DVT between the two groups is equivalent to a reduction in the risk from a few tens per thousand passengers to two or three per thousand. People who wore stockings had less swelling in their legs (oedema) than those who did not wear them.
Fewer passengers developed superficial vein thrombosis when wearing compression stockings than those not wearing stockings. Not all the trials reported on possible problems with wearing stockings but in those that did, the researchers said that the stockings were well‐tolerated, without any problems. High‐quality evidence shows that airline passengers wearing compression stockings develop less symptomless DVT and low‐quality evidence shows that leg swelling is reduced when compared to not wearing compression stockings. | Air travel might increase the risk of deep vein thrombosis (DVT). It has been suggested that wearing compression stockings might reduce this risk. This is an update of the review first published in 2006. Objectives To assess the effects of wearing compression stockings versus not wearing them for preventing DVT in people travelling on flights lasting at least four hours. Search methods For this update the Cochrane Vascular Information Specialist (CIS) searched the Specialised Register (10 February 2016). In addition, the CIS searched the Cochrane Register of Studies (CENTRAL (2016, Issue 1)). Selection criteria Randomised trials of compression stockings versus no stockings in passengers on flights lasting at least four hours. Trials in which passengers wore a stocking on one leg but not the other, or those comparing stockings and another intervention were also eligible. Data collection and analysis Two review authors independently selected trials for inclusion and extracted data. We sought additional information from trialists where necessary. One new study that fulfilled the inclusion criteria was identified for this update. 
Eleven randomised trials (n = 2906) were included in this review: nine (n = 2821) compared wearing graduated compression stockings on both legs versus not wearing them; one trial (n = 50) compared wearing graduated compression tights versus not wearing them; and one trial (n = 35) compared wearing a graduated compression stocking on one leg for the outbound flight and on the other leg on the return flight. Eight trials included people judged to be at low or medium risk of developing DVT (n = 1598) and two included high‐risk participants (n = 1273). All flights had a duration of more than five hours. Fifty of 2637 participants with follow‐up data available in the trials of wearing compression stockings on both legs had a symptomless DVT; three wore stockings, 47 did not (odds ratio (OR) 0.10, 95% confidence interval (CI) 0.04 to 0.25, P < 0.001; high‐quality evidence). There were no symptomless DVTs in three trials. Sixteen of 1804 people developed superficial vein thrombosis, four wore stockings, 12 did not (OR 0.45, 95% CI 0.18 to 1.13, P = 0.09; moderate‐quality evidence). No deaths, pulmonary emboli or symptomatic DVTs were reported. Wearing stockings had a significant impact in reducing oedema (mean difference (MD) −4.72, 95% CI −4.91 to −4.52; based on six trials; low‐quality evidence). A further two trials showed reduced oedema in the stockings group but could not be included in the meta‐analysis as they used different methods to measure oedema. No significant adverse effects were reported. There is high‐quality evidence that airline passengers similar to those in this review can expect a substantial reduction in the incidence of symptomless DVT and low‐quality evidence that leg oedema is reduced if they wear compression stockings. Quality was limited by the way that oedema was measured. There is moderate‐quality evidence that superficial vein thrombosis may be reduced if passengers wear compression stockings. 
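The summary's "few tens per thousand down to two or three per thousand" figure follows from applying the pooled odds ratio to a baseline risk. A minimal sketch, assuming an illustrative baseline of roughly 30 symptomless DVTs per 1000 passengers without stockings (a figure consistent with the trial counts above, not one reported directly by the review):

```python
def or_to_risk(baseline_risk, odds_ratio):
    """Apply an odds ratio to a baseline risk; return the implied risk."""
    odds = baseline_risk / (1.0 - baseline_risk) * odds_ratio
    return odds / (1.0 + odds)

# Pooled OR 0.10 from the review; assumed ~30 per 1000 baseline risk.
risk_with_stockings = or_to_risk(0.030, 0.10) * 1000  # per 1000 passengers
print(round(risk_with_stockings, 1))  # about 3 per thousand
```

Because risks here are small, the odds ratio behaves almost like a risk ratio, which is why a tenfold reduction in odds lands close to a tenfold reduction in the per‐thousand rate.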
We cannot assess the effect of wearing stockings on death, pulmonary embolism or symptomatic DVT because no such events occurred in these trials. Randomised trials to assess these outcomes would need to include a very large number of people. |
t88 | Bedwetting (nocturnal enuresis) is the involuntary loss of urine at night without an underlying organic disease as the cause. It can result in social problems, sibling teasing and lowered self‐esteem. It affects around 15% to 20% of five‐year olds, and up to 2% of adults. Many different types of drugs have been used to treat children with bedwetting. There is not enough reliable evidence to show that drugs other than desmopressin or tricyclics reduce bedwetting in children during treatment when used in isolation, despite their risk of unwanted side effects. In other Cochrane reviews, alarms triggered by wetting, desmopressin and tricyclic drugs have been shown to work during treatment. However, alarms have a more sustained effect than desmopressin and tricyclics after treatment has finished. The adverse effects of alarm therapy (tiredness and waking other members of the family) are relatively benign and self‐limiting compared with the adverse effects of drugs. One class of drugs (anticholinergic drugs) appears to improve the efficacy of other established treatments such as tricyclics, bedwetting alarms and desmopressin. The cost of treating children with bedwetting with alarm therapy or drugs may vary in different countries. | Enuresis (bedwetting) is a socially stigmatising and stressful condition which affects around 15% to 20% of five‐year olds and up to 2% of young adults. Although there is a high rate of spontaneous remission, the social, emotional and psychological costs to the children can be great. Drugs (including desmopressin, tricyclics and other drugs) have often been tried to treat nocturnal enuresis. Objectives To assess the effects of drugs other than desmopressin and tricyclics on nocturnal enuresis in children and to compare them with other interventions.
Search methods We searched the Cochrane Incontinence Group Specialised Register of trials (searched 15 December 2011), which includes searches of MEDLINE and CENTRAL, to identify published and unpublished randomised and quasi‐randomised trials. The reference lists of relevant articles were also searched. Selection criteria All randomised trials of drugs (excluding desmopressin or tricyclics) for treating nocturnal enuresis in children up to the age of 16 years were included in the review. Trials were eligible for inclusion if children were randomised to receive drugs compared with placebo, other drugs or behavioral interventions for nocturnal enuresis. Studies which included children with daytime urinary incontinence or children with organic conditions were also included in this review if the focus of the study was on nocturnal enuresis. Trials focused solely on daytime wetting and trials of adults with nocturnal enuresis were excluded. Data collection and analysis Two review authors independently assessed the quality of the eligible trials and extracted data. Differences between review authors were settled by discussion with a third review author. A total of 40 randomised or quasi‐randomised controlled trials (10 new in this update) met the inclusion criteria, with 1780 of the 2440 enrolled children receiving an active drug other than desmopressin or a tricyclic. In all, 31 different drugs or classes of drugs were tested. The trials were generally small or of poor methodological quality. There was an overall paucity of data regarding outcomes after treatment was withdrawn. For drugs versus placebo, indomethacin (risk ratio [RR] 0.36, 95% CI 0.16 to 0.79), diazepam (RR 0.22, 95% CI 0.11 to 0.46), mestorelone (RR 0.32, 95% CI 0.17 to 0.62) and atomoxetine (RR 0.81, 95% CI 0.70 to 0.94) appeared to reduce the number of children failing to have 14 consecutive dry nights.
Although indomethacin and diclofenac were better than placebo during treatment, they were not as effective as desmopressin and there was a higher chance of adverse effects. None of the medications were effective in reducing relapse rates, although this was only reported in five placebo controlled trials. For drugs versus drugs, combination therapy with imipramine and oxybutynin was more effective than imipramine monotherapy (RR 0.68, 95% CI 0.50 to 0.94) and also had significantly lower relapse rates than imipramine monotherapy (RR 0.35, 95% CI 0.16 to 0.77). There was an overall paucity of data regarding outcomes after treatment was withdrawn. For drugs versus behavioural therapy, bedwetting alarms were found to be better than amphetamine (RR 2.2, 95% CI 1.12 to 4.29), oxybutynin (RR 3.25, 95% CI 1.77 to 5.98), and oxybutynin plus holding exercises (RR 3.3, 95% CI 1.84 to 6.18) in reducing the number of children failing to achieve 14 consecutive dry nights. Adverse effects of drugs were seen in 19 trials while 17 trials did not adequately report the occurrence of side effects. There was not enough evidence to judge whether or not the included drugs cured bedwetting when used alone. There was limited evidence to suggest that desmopressin, imipramine and enuresis alarms therapy were better than the included drugs to which they were compared. In other reviews, desmopressin, tricyclics and alarm interventions have been shown to be effective during treatment. There was also evidence to suggest that combination therapy with anticholinergic therapy increased the efficacy of other established therapies such as imipramine, desmopressin and enuresis alarms by reducing the relapse rates, by about 20%, although it was not possible to identify the characteristics of children who would benefit from combination therapy. Future studies should evaluate the role of combination therapy against established treatments in rigorous and adequately powered trials. |
t89 | Nearly three‐quarters of women suffer from period pain or menstrual cramps (dysmenorrhoea). Research has shown that women with severe period pain have high levels of prostaglandins, hormones known to cause cramping abdominal pain. NSAIDs are drugs which act by blocking prostaglandin production. NSAIDs include the common painkillers aspirin, naproxen, ibuprofen and mefenamic acid. Researchers in The Cochrane Collaboration reviewed the evidence about the safety and effectiveness of NSAIDs for period pain. We found 80 randomised controlled trials (RCTs), which included a total of 5820 women and compared 20 different types of NSAIDs with placebo (an inactive pill), paracetamol or each other. Most of the studies were commercially funded (59%), and a further 31% did not state their source of funding. The review found that NSAIDs appear to be very effective in relieving period pain. The evidence suggests that if 18% of women taking placebo achieve moderate or excellent pain relief, between 45% and 53% taking NSAIDs will do so. NSAIDs appear to work better than paracetamol, but it is unclear whether any one NSAID is safer or more effective than others. NSAIDs commonly cause adverse effects (side effects), including indigestion, headaches and drowsiness. The evidence suggests that if 10% of women taking placebo experience side effects, between 11% and 14% of women taking NSAIDs will do so. Based on two studies that made head‐to‐head comparisons, there was no evidence that newer types of NSAID (known as COX‐2‐specific inhibitors) are more effective for the treatment of dysmenorrhoea than traditional NSAIDs (known as non‐selective inhibitors), nor that there is a difference between them with regard to adverse effects. | Dysmenorrhoea is a common gynaecological problem consisting of painful cramps accompanying menstruation, which in the absence of any underlying abnormality is known as primary dysmenorrhoea. 
Research has shown that women with dysmenorrhoea have high levels of prostaglandins, hormones known to cause cramping abdominal pain. Nonsteroidal anti‐inflammatory drugs (NSAIDs) are drugs that act by blocking prostaglandin production. They inhibit the action of cyclooxygenase (COX), an enzyme responsible for the formation of prostaglandins. The COX enzyme exists in two forms, COX‐1 and COX‐2. Traditional NSAIDs are considered 'non‐selective' because they inhibit both COX‐1 and COX‐2 enzymes. More selective NSAIDs that solely target COX‐2 enzymes (COX‐2‐specific inhibitors) were launched in 1999 with the aim of reducing side effects commonly reported in association with NSAIDs, such as indigestion, headaches and drowsiness. Objectives To determine the effectiveness and safety of NSAIDs in the treatment of primary dysmenorrhoea. Search methods We searched the following databases in January 2015: Cochrane Menstrual Disorders and Subfertility Group Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL, November 2014 issue), MEDLINE, EMBASE and Web of Science. We also searched clinical trials registers (ClinicalTrials.gov and ICTRP). We checked the abstracts of major scientific meetings and the reference lists of relevant articles. Selection criteria All randomised controlled trial (RCT) comparisons of NSAIDs versus placebo, other NSAIDs or paracetamol, when used to treat primary dysmenorrhoea. Data collection and analysis Two review authors independently selected the studies, assessed their risk of bias and extracted data, calculating odds ratios (ORs) for dichotomous outcomes and mean differences for continuous outcomes, with 95% confidence intervals (CIs). We used inverse variance methods to combine data. We included 80 randomised controlled trials (5820 women). They compared 20 different NSAIDs (18 non‐selective and two COX‐2‐specific) versus placebo, paracetamol or each other. 
NSAIDs versus placebo Among women with primary dysmenorrhoea, NSAIDs were more effective for pain relief than placebo (OR 4.37, 95% CI 3.76 to 5.09; 35 RCTs, I² = 53%, low quality evidence). This suggests that if 18% of women taking placebo achieve moderate or excellent pain relief, between 45% and 53% taking NSAIDs will do so. However, NSAIDs were associated with more adverse effects (overall adverse effects: OR 1.29, 95% CI 1.11 to 1.51, 25 RCTs, I² = 0%, low quality evidence; gastrointestinal adverse effects: OR 1.58, 95% CI 1.12 to 2.23, 14 RCTs, I² = 30%; neurological adverse effects: OR 2.74, 95% CI 1.66 to 4.53, seven RCTs, I² = 0%, low quality evidence). The evidence suggests that if 10% of women taking placebo experience side effects, between 11% and 14% of women taking NSAIDs will do so. NSAIDs versus other NSAIDs When NSAIDs were compared with each other there was little evidence of the superiority of any individual NSAID for either pain relief or safety. However, the available evidence had little power to detect such differences, as most individual comparisons were based on very few small trials. Non‐selective NSAIDs versus COX‐2‐specific inhibitors Only two of the included studies utilised COX‐2‐specific inhibitors (etoricoxib and celecoxib). There was no evidence that COX‐2‐specific inhibitors were more effective or tolerable for the treatment of dysmenorrhoea than traditional NSAIDs; however, data were very scanty. NSAIDs versus paracetamol NSAIDs appeared to be more effective for pain relief than paracetamol (OR 1.89, 95% CI 1.05 to 3.43, three RCTs, I² = 0%, low quality evidence). There was no evidence of a difference with regard to adverse effects, though data were very scanty. Most of the studies were commercially funded (59%); a further 31% failed to state their source of funding. NSAIDs appear to be a very effective treatment for dysmenorrhoea, though women using them need to be aware of the substantial risk of adverse effects.
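The absolute figures quoted above (18% on placebo rising to 45% to 53% on NSAIDs; 10% rising to 11% to 14% for adverse effects) follow from applying the confidence-interval limits of each odds ratio to the stated baseline risk. A short sketch of that arithmetic:

```python
def or_to_risk(baseline_risk, odds_ratio):
    """Convert a baseline (placebo) risk plus an odds ratio
    into the corresponding risk in the treatment group."""
    odds = baseline_risk / (1.0 - baseline_risk) * odds_ratio
    return odds / (1.0 + odds)

# Pain relief: baseline 18%, OR 95% CI 3.76 to 5.09
relief_low, relief_high = or_to_risk(0.18, 3.76), or_to_risk(0.18, 5.09)
# Adverse effects: baseline 10%, OR 95% CI 1.11 to 1.51
ae_low, ae_high = or_to_risk(0.10, 1.11), or_to_risk(0.10, 1.51)
print(f"pain relief: {relief_low:.0%} to {relief_high:.0%}")  # 45% to 53%
print(f"adverse effects: {ae_low:.0%} to {ae_high:.0%}")      # 11% to 14%
```

Note that the conversion must go through odds rather than multiplying risks directly: with an OR of 5.09 a naive 0.18 × 5.09 would give 92%, whereas the correct odds-based figure is 53%.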
There is insufficient evidence to determine which (if any) individual NSAID is the safest and most effective for the treatment of dysmenorrhoea. |
t90 | Cardiovascular disease (CVD) is a global burden and varies between regions. This regional variation has been linked in part to dietary factors, and low fruit and vegetable intake has been associated with higher rates of CVD. This review assessed the effectiveness of increasing fruit and vegetable consumption, as a single intervention without other dietary or lifestyle modifications, for the prevention of CVD in healthy adults and those at high risk of CVD. We found 10 trials involving 1730 participants; six examined the provision of fruit and vegetables to increase intake and four examined dietary advice to increase fruit and vegetable intake. There were variations in the type of fruit and vegetable provided, but most interventions investigating provision involved only one fruit or vegetable component. There were also variations in the number of fruit and vegetables that participants were advised to eat. Some studies advised participants to eat at least five servings of fruit and vegetables a day while others advised at least eight or nine servings per day. The duration of the interventions ranged from three months to one year. Adverse effects were reported in three of the included trials and included increased bowel movements, bad breath and body odour. None of the included trials were long enough to examine the effect of increased fruit and vegetable consumption on cardiovascular disease events such as heart attacks. There was no strong evidence that provision of one type of fruit or vegetable had beneficial effects on blood pressure and lipid levels, but most trials were short term. There was some evidence to suggest beneficial effects of dietary advice to increase fruit and vegetable consumption, but this is based on findings from two trials. | There is increasing evidence that high consumption of fruit and vegetables is beneficial for cardiovascular disease (CVD) prevention.
Objectives The primary objective is to determine the effectiveness of i) advice to increase fruit and vegetable consumption ii) the provision of fruit and vegetables to increase consumption, for the primary prevention of CVD. Search methods We searched the following electronic databases: The Cochrane Library (2012, issue 9‐CENTRAL, HTA, DARE, NEED), MEDLINE (1946 to week 3 September 2012); EMBASE (1980 to 2012 week 39) and the Conference Proceedings Citation Index ‐ Science on ISI Web of Science (5 October 2012). We searched trial registers, screened reference lists and contacted authors for additional information where necessary. No language restrictions were applied. Selection criteria Randomised controlled trials with at least three months follow‐up (follow‐up was considered to be the time elapsed since the start of the intervention) involving healthy adults or those at high risk of CVD. Trials investigated either advice to increase fruit and vegetable intake (via any source or modality) or the provision of fruit and vegetables to increase intake. The comparison group was no intervention or minimal intervention. Outcomes of interest were CVD clinical events (mortality (CVD and all‐cause), myocardial infarction (MI), coronary artery bypass grafting (CABG) or percutaneous transluminal coronary angioplasty (PTCA), angiographically‐defined angina pectoris, stroke, carotid endarterectomy, peripheral arterial disease (PAD)) and major CVD risk factors (blood pressure, blood lipids, type 2 diabetes). Trials involving multifactorial lifestyle interventions (including different dietary patterns, exercise) or where the focus was weight loss were excluded to avoid confounding. Data collection and analysis Two review authors independently selected trials for inclusion, extracted data and assessed the risk of bias. Trials of provision of fruit and vegetables were analysed separately from trials of dietary advice. 
We identified 10 trials with a total of 1730 participants randomised, and one ongoing trial. Six trials investigated the provision of fruit and vegetables, and four trials examined advice to increase fruit and vegetable consumption. The ongoing trial is examining the provision of an avocado‐rich diet. The number and type of intervention components for provision, and the dietary advice provided differed between trials. None of the trials reported clinical events as they were all relatively short term. There was no strong evidence for effects of individual trials of provision of fruit and vegetables on cardiovascular risk factors, but trials were heterogeneous and short term. Furthermore, five of the six trials only provided one fruit or vegetable. Dietary advice showed some favourable effects on blood pressure (systolic blood pressure (SBP): mean difference (MD) ‐3.0 mmHg (95% confidence interval (CI) ‐4.92 to ‐1.09), diastolic blood pressure (DBP): MD ‐0.90 mmHg (95% CI ‐2.03 to 0.24)) and low‐density lipoprotein (LDL) cholesterol but analyses were based on only two trials. Three of the 10 included trials examined adverse effects, which included increased bowel movements, bad breath and body odour. There are very few studies to date examining provision of, or advice to increase the consumption of, fruit and vegetables in the absence of additional dietary interventions or other lifestyle interventions for the primary prevention of CVD. The limited evidence suggests advice to increase fruit and vegetables as a single intervention has favourable effects on CVD risk factors but more trials are needed to confirm this.
t91 | Epilepsy is one of the most common long‐term disorders of the nervous system, and despite several antiepileptic drugs being available, 30% of people continue having seizures (fits). Reports have suggested that melatonin can work in epilepsy with a good safety profile. Melatonin is produced by the body and is prescribed by doctors to treat sleep disorders and problems such as jet lag. We searched medical databases for clinical trials of melatonin added to another antiepileptic drug (add‐on treatment) compared with antiepileptic drug plus add‐on pretend treatment (placebo) or add‐on no treatment in people with epilepsy. The participants were of any age or sex and included children and adults with disabilities. The studies measured reduction of seizure frequency by half, proportion of people with no seizures (seizure freedom), side effects, and improvement in quality of life. We found six trials representing 125 participants for the present review. They reported two different comparisons: melatonin versus placebo and melatonin 5 mg versus melatonin 10 mg. Included trials did not evaluate seizure frequency, seizure freedom, and adverse events in a methodical way. Only one study reported seizure frequency, and none of the participants had a change in seizure frequency during the trial compared with before it. Only one trial evaluated the effect of melatonin on quality of life and found no improvement with add‐on melatonin compared with add‐on placebo. The included trials were of poor methodological quality and it was not possible to draw any definitive conclusions about the role of melatonin in reducing seizure frequency or improving the quality of life in people with epilepsy. | This is an updated version of the original Cochrane review published in Issue 6, 2012. Epilepsy is one of the most common chronic neurological disorders. Despite the plethora of antiepileptic drugs (AEDs) currently available, 30% of people continue having seizures.
This group of people requires more aggressive treatment, since monotherapy, the first‐choice approach, fails to control seizures. Nevertheless, polytherapy often results in a number of unwanted effects, including neurological disturbances (somnolence, ataxia, dizziness), psychiatric and behavioural symptoms, and metabolic alterations (osteoporosis, induction or inhibition of hepatic enzymes, etc.). The need for better tolerated AEDs is even more urgent in this group of people. Reports have suggested an antiepileptic role of melatonin with a good safety profile. Objectives To assess the efficacy and tolerability of melatonin as add‐on treatment for epilepsy. Search methods For the latest update, we searched the Cochrane Epilepsy Group's Specialized Register (12 January 2016), the Cochrane Central Register of Controlled Trials (CENTRAL) via the Cochrane Register of Studies Online (CRSO, 12 January 2016), and MEDLINE (Ovid, 11 January 2016). We searched the bibliographies of any identified study for further references. We handsearched selected journals and conference proceedings. We applied no language restrictions. In addition, we contacted melatonin manufacturers (i.e. Nathura) and original investigators to identify any unpublished studies. Selection criteria Randomized controlled trials; double, single, or unblinded trials; parallel group or cross‐over studies. People with epilepsy regardless of age and gender, including children and adults with disabilities. Administration of melatonin as add‐on treatment to any AED(s) compared to add‐on placebo or no add‐on treatment. Data collection and analysis Review authors independently selected trials for inclusion according to pre‐defined criteria, extracted relevant data, and evaluated the methodological quality of trials. We assessed the following outcomes: at least 50% seizure reduction, seizure freedom, adverse events, and quality of life. We included six publications, with 125 participants (106 aged under 18 years).
Two different comparisons were available: melatonin versus placebo and melatonin 5 mg versus melatonin 10 mg. Despite our primary intention, due to insufficient information on outcomes, we were unable to perform any meta‐analyses, but summarized data narratively. Four studies were randomized, double‐blind, cross‐over, placebo‐controlled trials and two were randomized, double‐blind, parallel, placebo‐controlled trials. Only two studies provided the exact number of seizures during the trial compared to the baseline: none of the participants with seizures during the trial had a change in seizure frequency compared with the baseline. Two studies systematically evaluated adverse effects (worsening of headache was reported in a child with migraine under melatonin treatment). Only one study systematically evaluated quality of life, showing no statistically significant improvement in quality of life in the add‐on melatonin group. Included studies were of poor methodological quality, and did not systematically evaluate seizure frequency and adverse events, so that it was impossible to summarize data in a meta‐analysis. It is not possible to draw any conclusion about the role of melatonin in reducing seizure frequency or improving quality of life in people with epilepsy. |
t92 | The aim of this Cochrane Review, first published in 1999, was to summarise research that looks at the effects of immunising healthy adults with influenza vaccines during influenza seasons. We used information from randomised trials comparing vaccines with dummy vaccines or nothing. We focused on the results of studies looking at vaccines based on inactivated influenza viruses, which are developed by killing the influenza virus with a chemical and are given by injection through the skin. We evaluated the effects of vaccines on reducing the number of adults with confirmed influenza and the number of adults who had influenza‐like symptoms such as headache, high temperature, cough, and muscle pain (influenza‐like illness, or ILI). We also evaluated hospital admission and harms arising from the vaccines. Observational data included in previous versions of the review have been retained for historical reasons but have not been updated due to their lack of influence on the review conclusions. Over 200 viruses cause ILI, which produces the same symptoms (fever, headache, aches, pains, cough, and runny nose) as influenza. Without laboratory tests, doctors cannot distinguish between ILI and influenza because both last for days and rarely cause serious illness or death. The types of virus contained in influenza vaccines are usually those that are expected to circulate in the following influenza seasons, according to recommendations of the World Health Organization (seasonal vaccine). Pandemic vaccine contains only the virus strain that is responsible for the pandemic (e.g. type A H1N1 for the 2009 to 2010 pandemic). We found 52 clinical trials of over 80,000 adults. We were unable to determine the impact of bias on about 70% of the included studies due to insufficient reporting of details. Around 15% of the included studies were well designed and conducted. We focused on reporting of results from 25 studies that looked at inactivated vaccines.
Injected influenza vaccines probably have a small protective effect against influenza and ILI (moderate‐certainty evidence), as 71 people would need to be vaccinated to avoid one influenza case, and 29 would need to be vaccinated to avoid one case of ILI. Vaccination may have little or no appreciable effect on hospitalisations (low‐certainty evidence) or number of working days lost. The protection provided to pregnant women against ILI and influenza by the inactivated influenza vaccine was uncertain, or at least very limited. The administration of seasonal vaccines during pregnancy showed no significant effect on abortion or neonatal death, but the evidence set was observational. Inactivated vaccines can reduce the proportion of healthy adults (including pregnant women) who have influenza and ILI, but their impact is modest. We are uncertain about the effects of inactivated vaccines on working days lost or serious complications of influenza during influenza season. | The consequences of influenza in adults are mainly time off work. Vaccination of pregnant women is recommended internationally. This is an update of a review published in 2014. Future updates of this review will be made only when new trials or vaccines become available. Observational data included in previous versions of the review have been retained for historical reasons but have not been updated due to their lack of influence on the review conclusions. Objectives To assess the effects (efficacy, effectiveness, and harm) of vaccines against influenza in healthy adults, including pregnant women. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 12), MEDLINE (January 1966 to 31 December 2016), Embase (1990 to 31 December 2016), the WHO International Clinical Trials Registry Platform (ICTRP; 1 July 2017), and ClinicalTrials.gov (1 July 2017), as well as checking the bibliographies of retrieved articles.
Selection criteria Randomised controlled trials (RCTs) or quasi‐RCTs comparing influenza vaccines with placebo or no intervention in naturally occurring influenza in healthy individuals aged 16 to 65 years. Previous versions of this review included observational comparative studies (cohort and case‐control studies) assessing serious and rare harms. Due to the uncertain quality of observational (i.e. non‐randomised) studies and their lack of influence on the review conclusions, we decided to update only randomised evidence. The searches for observational comparative studies are no longer updated. Data collection and analysis Two review authors independently assessed trial quality and extracted data. We rated certainty of evidence for key outcomes (influenza, influenza‐like illness (ILI), hospitalisation, and adverse effects) using GRADE. We included 52 clinical trials of over 80,000 people assessing the safety and effectiveness of influenza vaccines. We have presented findings from 25 studies comparing inactivated parenteral influenza vaccine against placebo or do‐nothing control groups as the most relevant to decision‐making. The studies were conducted over single influenza seasons in North America, South America, and Europe between 1969 and 2009. We did not consider studies at high risk of bias to influence the results of our outcomes except for hospitalisation. Inactivated influenza vaccines probably reduce influenza in healthy adults from 2.3% without vaccination to 0.9% (risk ratio (RR) 0.41, 95% confidence interval (CI) 0.36 to 0.47; 71,221 participants; moderate‐certainty evidence), and they probably reduce ILI from 21.5% to 18.1% (RR 0.84, 95% CI 0.75 to 0.95; 25,795 participants; moderate‐certainty evidence; 71 healthy adults need to be vaccinated to prevent one of them experiencing influenza, and 29 healthy adults need to be vaccinated to prevent one of them experiencing an ILI).
The difference between the two number needed to vaccinate (NNV) values depends on the different incidence of ILI and confirmed influenza among the study populations. Vaccination may lead to a small reduction in the risk of hospitalisation in healthy adults, from 14.7% to 14.1%, but the CI is wide and does not rule out a large benefit (RR 0.96, 95% CI 0.85 to 1.08; 11,924 participants; low‐certainty evidence). Vaccines may lead to little or no reduction in days off work (‐0.04 days, 95% CI ‐0.14 days to 0.06; low‐certainty evidence). Inactivated vaccines cause an increase in fever from 1.5% to 2.3%. We identified one RCT and one controlled clinical trial assessing the effects of vaccination in pregnant women. The efficacy of inactivated vaccine containing pH1N1 against influenza was 50% (95% CI 14% to 71%) in mothers (NNV 55), and 49% (95% CI 12% to 70%) in infants up to 24 weeks (NNV 56). No data were available on efficacy against seasonal influenza during pregnancy. Evidence from observational studies showed effectiveness of influenza vaccines against ILI in pregnant women to be 24% (95% CI 11% to 36%, NNV 94), and against influenza in newborns from vaccinated women to be 41% (95% CI 6% to 63%, NNV 27). Live aerosol vaccines have an overall effectiveness corresponding to an NNV of 46. The performance of one‐ or two‐dose whole‐virion 1968 to 1969 pandemic vaccines was higher against ILI (NNV 16) and against influenza (NNV 35). There was limited impact on hospitalisations in the 1968 to 1969 pandemic (NNV 94). The administration of both seasonal and 2009 pandemic vaccines during pregnancy had no significant effect on abortion or neonatal death, but this was based on observational data sets. Healthy adults who receive inactivated parenteral influenza vaccine rather than no vaccine probably experience less influenza, from just over 2% to just under 1% (moderate‐certainty evidence).
They also probably experience less ILI following vaccination, but the degree of benefit when expressed in absolute terms varied across different settings. Variation in protection against ILI may be due in part to inconsistent symptom classification. Certainty of evidence for the small reductions in hospitalisations and time off work is low. Protection against influenza and ILI in mothers and newborns was smaller than the effects seen in other populations considered in this review. Vaccines increase the risk of a number of adverse events, including a small increase in fever, but rates of nausea and vomiting are uncertain. The protective effect of vaccination in pregnant women and newborns is also very modest. We did not find any evidence of an association between influenza vaccination and serious adverse events in the comparative studies considered in this review. Fifteen included RCTs were industry funded (29%). |
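The number needed to vaccinate (NNV) figures quoted in the influenza review above are simply the reciprocal of the absolute risk reduction. A minimal illustrative sketch (the helper name `nnv` is mine, not the review's) reproduces the review's own quoted NNVs from its reported risks:

```python
def nnv(risk_unvaccinated: float, risk_vaccinated: float) -> float:
    """Number needed to vaccinate = 1 / absolute risk reduction."""
    return 1.0 / (risk_unvaccinated - risk_vaccinated)

# Confirmed influenza: risk falls from 2.3% to 0.9% with vaccination
print(round(nnv(0.023, 0.009)))  # 71
# Influenza-like illness (ILI): risk falls from 21.5% to 18.1%
print(round(nnv(0.215, 0.181)))  # 29
```

As the review notes, the two NNVs differ only because the baseline incidences of confirmed influenza and ILI differ across the study populations.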
t93 | Elective delivery in diabetic pregnant women: induction of labour at 38 weeks of pregnancy for women with diabetes treated with insulin lowers the chances of delivering a large baby. Women with diabetes or gestational diabetes are more likely to have a large baby, which can cause problems around birth. Early elective delivery (labour induction or caesarean section) aims to avoid these complications. However, early elective delivery can also cause problems. The review found only one trial of labour induction for women with diabetes treated with insulin. Induction of labour lowered the number of large babies without increasing the risk of caesarean section. | In pregnancies complicated by diabetes the major concerns during the third trimester are fetal distress and the potential for birth trauma associated with fetal macrosomia. Objectives The objective of this review was to assess the effect of a policy of elective delivery, as compared to expectant management, in term diabetic pregnant women, on maternal and perinatal mortality and morbidity. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (24 July 2004). We updated this search on 24 July 2009 and added the results to the awaiting classification section. Selection criteria All available randomized controlled trials of elective delivery, either by induction of labour or by elective caesarean section, compared to expectant management in diabetic pregnant women at term. Data collection and analysis The reports of the only available trial were analysed independently by the three co‐reviewers to retrieve data on maternal and perinatal outcomes. Results are expressed as relative risks (RR) and 95% confidence intervals (CI). The participants in the one trial included in this review were 200 insulin‐requiring diabetic women. Most had gestational diabetes, except 13 women with type 2 pre‐existing diabetes (class B).
The trial compared a policy of active induction of labour at 38 completed weeks of pregnancy, to expectant management until 42 weeks. The risk of caesarean section was not statistically different between groups (relative risk (RR) 0.81, 95% confidence interval (CI) 0.52 to 1.26). The risk of macrosomia was reduced in the active induction group (RR 0.56, 95% CI 0.32 to 0.98) and three cases of mild shoulder dystocia were reported in the expectant management group. No other perinatal morbidity was reported. The results of the single randomized controlled trial comparing elective delivery with expectant management at term in pregnant women with insulin‐requiring diabetes show that induction of labour reduces the risk of macrosomia. The risk of maternal or neonatal morbidity was not different between groups, but, given the rarity of maternal and neonatal morbidity, the number of women included is too small to permit firm conclusions. Women's views on elective delivery and on prolonged surveillance and treatment with insulin should be assessed in future trials. [Note: The one citation in the awaiting classification section of the review may alter the conclusions of the review once assessed.] |
t94 | Paracetamol is one of the most common drugs taken in overdose. Intentional or accidental poisoning with paracetamol is a common cause of liver injury. We looked for randomised clinical trials (studies where people are randomly put into one of two or more treatment groups) in which participants had come to medical attention because they had taken a paracetamol overdose, intentionally or by accident, regardless of the amount of paracetamol taken or the age, sex, or other medical conditions of the person involved. There are many different interventions that can be used to try to treat people with paracetamol poisoning. These interventions include decreasing the absorption of the paracetamol ingested and hence decreasing the amount absorbed into the bloodstream. The agents include activated charcoal (which binds paracetamol in the stomach), gastric lavage (stomach washout to remove as much paracetamol as possible), or ipecacuanha (a syrup that is swallowed and causes vomiting (being sick)). Paracetamol once absorbed into the bloodstream goes to the liver where the majority is broken down to harmless products. However, a small amount of the medicine is converted into a toxic product that the liver can normally handle but, when large amounts of paracetamol are taken, the liver is overwhelmed. As a consequence, the toxic product can damage the liver leading to liver failure, kidney failure, and in some cases death. Other interventions to treat paracetamol poisoning include medicines (antidotes) that may decrease the amount of the toxic products (such as a medicine called cimetidine) or break down the toxic products (including medicines called methionine, cysteamine, dimercaprol, or acetylcysteine). Finally, attempts can be made to remove paracetamol and its toxic products from the bloodstream using special blood cleansing equipment. We found 11 randomised clinical trials with 700 participants. Most of these trials looked at different treatments.
Activated charcoal, gastric lavage, and ipecacuanha may reduce absorption of paracetamol if started within one to two hours of paracetamol ingestion, but the clinical benefit was unclear. Activated charcoal seems to be the best choice if the person is able to take it. People may not be able to take charcoal if they are drowsy and some may dislike its taste or texture (or both). Of the treatments that remove the toxic products of paracetamol, acetylcysteine seems to reduce the rate of liver injury from paracetamol poisoning. Furthermore, it has fewer side effects than some other antidotes such as dimercaprol and cysteamine; its superiority to methionine was unclear. Acetylcysteine should be given to people with paracetamol poisoning at risk of liver damage; risk is determined by the dose ingested, time of ingestion, and investigations. More recent clinical trials have looked at ways to decrease side effects of intravenous (into a vein) acetylcysteine treatment, by altering the way it is given. These trials have shown that by using a slower infusion and lower initial dose of acetylcysteine, the proportion of side effects such as nausea (feeling sick) and vomiting, and allergy (the body's bad reaction to the medicine such as a rash) may be lowered. | Paracetamol (acetaminophen) is the most widely used non‐prescription analgesic in the world. Paracetamol is commonly taken in overdose either deliberately or unintentionally. In high‐income countries, paracetamol toxicity is a common cause of acute liver injury. There are various interventions to treat paracetamol poisoning, depending on the clinical status of the person. These interventions include inhibiting the absorption of paracetamol from the gastrointestinal tract (decontamination), removal of paracetamol from the vascular system, and antidotes to prevent the formation of, or to detoxify, metabolites.
Objectives To assess the benefits and harms of interventions for paracetamol overdosage irrespective of the cause of the overdose. Search methods We searched The Cochrane Hepato‐Biliary Group Controlled Trials Register (January 2017), CENTRAL (2016, Issue 11), MEDLINE (1946 to January 2017), Embase (1974 to January 2017), and Science Citation Index Expanded (1900 to January 2017). We also searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov database (US National Institute of Health) for any ongoing or completed trials (January 2017). We examined the reference lists of relevant papers identified by the search and other published reviews. Selection criteria Randomised clinical trials assessing benefits and harms of interventions in people who have ingested a paracetamol overdose. The interventions could have been gastric lavage, ipecacuanha, or activated charcoal, or various extracorporeal treatments, or antidotes. The interventions could have been compared with placebo, no intervention, or to each other in differing regimens. Data collection and analysis Two review authors independently extracted data from the included trials. We used fixed‐effect and random‐effects Peto odds ratios (OR) with 95% confidence intervals (CI) for analysis of the review outcomes. We used the Cochrane 'Risk of bias' tool to assess the risks of bias (i.e. systematic errors leading to overestimation of benefits and underestimation of harms). We used Trial Sequential Analysis to control risks of random errors (i.e. play of chance) and GRADE to assess the quality of the evidence and constructed 'Summary of findings' tables using GRADE software. We identified 11 randomised clinical trials (of which one acetylcysteine trial was abandoned due to low numbers recruited), assessing several different interventions in 700 participants. 
The variety of interventions studied included decontamination, extracorporeal measures, and antidotes to detoxify paracetamol's toxic metabolite, which included methionine, cysteamine, dimercaprol, or acetylcysteine. There were no randomised clinical trials of agents that inhibit cytochrome P‐450 to decrease the activation of the toxic metabolite N‐acetyl‐p‐benzoquinone imine. Of the 11 trials, only two had two common outcomes, and hence, we could only meta‐analyse two comparisons. Each of the remaining comparisons included outcome data from one trial only and hence their results are presented as described in the trials. All trial analyses lacked power to assess efficacy. Furthermore, all the trials were at high risk of bias. Accordingly, the quality of evidence was low or very low for all comparisons. Interventions that prevent absorption, such as gastric lavage, ipecacuanha, or activated charcoal were compared with placebo or no intervention and with each other in one four‐armed randomised clinical trial involving 60 participants with an uncertain randomisation procedure and hence very low quality. The trial presented results on lowering plasma paracetamol levels. Activated charcoal seemed to reduce the absorption of paracetamol, but the clinical benefits were unclear. Activated charcoal seemed to have the best risk:benefit ratio among gastric lavage, ipecacuanha, or supportive treatment if given within four hours of ingestion. There seemed to be no difference between gastric lavage and ipecacuanha, but gastric lavage and ipecacuanha seemed more effective than no treatment (very low quality of evidence). Extracorporeal interventions included charcoal haemoperfusion compared with conventional treatment (supportive care including gastric lavage, intravenous fluids, and fresh frozen plasma) in one trial with 16 participants. The mean cumulative amount of paracetamol removed was 1.4 g.
One participant from the haemoperfusion group, who had ingested 135 g of paracetamol, died. There were no deaths in the conventional treatment group. Accordingly, we found no benefit of charcoal haemoperfusion (very low quality of evidence). Acetylcysteine appeared superior to placebo and had fewer adverse effects when compared with dimercaprol or cysteamine. Acetylcysteine's superiority to methionine was unproven. One small trial (low quality evidence) found that acetylcysteine may reduce mortality in people with fulminant hepatic failure (Peto OR 0.29, 95% CI 0.09 to 0.94). The most recent randomised clinical trials studied different acetylcysteine regimens, with the primary outcome being adverse events. It was unclear which acetylcysteine treatment protocol offered the best efficacy, as most trials were underpowered to look at this outcome. One trial showed that a modified 12‐hour acetylcysteine regimen with a two‐hour acetylcysteine 100 mg/kg bodyweight loading dose was associated with significantly fewer adverse reactions compared with the traditional three‐bag 20.25‐hour regimen (low quality of evidence). All Trial Sequential Analyses showed lack of sufficient power. Children were not included in the majority of trials. Hence, the evidence pertains only to adults. These results highlight the paucity of randomised clinical trials comparing different interventions for paracetamol overdose and their routes of administration and the low or very low quality of the evidence that is available. Evidence from a single trial found activated charcoal seemed the best choice to reduce absorption of paracetamol. Acetylcysteine should be given to people at risk of toxicity including people presenting with liver failure. Further randomised clinical trials with low risk of bias and adequate number of participants are required to determine which regimen results in the fewest adverse effects with the best efficacy.
Current management of paracetamol poisoning worldwide involves the administration of intravenous or oral acetylcysteine, which is based mainly on observational studies. Results from these observational studies indicate that treatment with acetylcysteine seems to result in a decrease in morbidity and mortality. However, further evidence from randomised clinical trials comparing different treatments is needed. |
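The paracetamol review above reports single‐trial results as Peto odds ratios with 95% confidence intervals (e.g. Peto OR 0.29, 95% CI 0.09 to 0.94 for acetylcysteine in fulminant hepatic failure). As a hedged sketch of how such a figure is derived from a single 2×2 table using the one‐step Peto (observed‐minus‐expected) method; the event counts below are hypothetical illustrations, not data from any included trial:

```python
import math

def peto_or(events_trt, n_trt, events_ctl, n_ctl):
    """One-step Peto odds ratio and 95% CI for a single 2x2 table."""
    n = n_trt + n_ctl
    events = events_trt + events_ctl
    # Expected events in the treatment arm under the null hypothesis
    expected = events * n_trt / n
    # Hypergeometric variance of the observed treatment-arm event count
    variance = events * (n - events) * n_trt * n_ctl / (n ** 2 * (n - 1))
    log_or = (events_trt - expected) / variance  # (O - E) / V
    se = 1.0 / math.sqrt(variance)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Hypothetical counts: 2/25 deaths with an antidote vs 10/25 with control
or_est, (lo, hi) = peto_or(2, 25, 10, 25)
```

An odds ratio below 1 whose confidence interval excludes 1, as in the acetylcysteine trial quoted above, is what the review treats as a statistically significant benefit.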
t95 | The removal of dental plaque by daily toothbrushing plays a major role in preventing tooth decay and gum disease, the two main causes of tooth loss. Toothbrushing is a skill that can be difficult for people with intellectual disabilities (ID); they may require help and people who care for them may need training in how to help them. This review included 34 studies that involved 1795 people with ID and 354 carers. Nineteen studies randomly allocated participants to two or more groups (i.e. randomised controlled trials (RCTs)), and 15 were non‐randomised studies (NRS). The studies assessed different ways to improve the oral hygiene of people with ID: special manual toothbrushes; electric toothbrushes; oral hygiene training for carers; oral hygiene training for people with ID; varying the scheduled intervals between dental visits and supervising toothbrushing; using discussion of clinical photographs as a motivator; varying how frequently the teeth of people with ID were brushed; using a plaque‐disclosing agent and using individualised oral care plans. The studies evaluated gingival inflammation (red and swollen gums) and plaque. Some studies evaluated carer knowledge, behaviour, attitude and self‐efficacy (belief in their competence) in terms of oral hygiene, as well as the oral hygiene behaviour and skills of people with ID. We grouped the studies according to when the outcomes were measured: short term (six weeks or less), medium term (between six weeks and 12 months) and long term (more than 12 months). A special manual toothbrush (the Superbrush), used by carers, may be better at reducing levels of gingival inflammation and possibly plaque in people with ID than an ordinary manual toothbrush in the medium term, though this was not seen in the short term.
We found no difference between electric and manual toothbrushes used by people with ID or their carers in terms of gingival inflammation or plaque in the medium term, and the short‐term results were unclear. Training carers to brush the teeth of people with ID may have improved carers' oral hygiene knowledge in the medium term. Training people with ID to brush their own teeth may have reduced the amount of plaque on their teeth in the short term. Regularly scheduled dental recall visits and carers supervising toothbrushing between visits may have been more likely than usual care to reduce gingival inflammation and plaque in the long term. Discussing clinical photographs of plaque on participants' teeth, shown up by a disclosing agent, to motivate better toothbrushing did not seem to reduce plaque. Daily toothbrushing by a dental student may be more effective for reducing plaque levels in the short term than once or twice weekly professional toothbrushing. Toothpaste with a plaque‐disclosing agent and individualised oral care plans were each evaluated in one non‐randomised study that suggested they may be beneficial. Only one study set out to formally measure negative side effects; however, most studies commented that there were none. Some studies found that some people had difficulties with the electric or special manual toothbrushes. Certainty of the evidence: although some oral hygiene interventions for people with ID show scientific evidence of benefits, what these benefits actually mean for an individual's oral hygiene or oral health is unclear. The certainty of the evidence is mainly low or very low so future research may change our findings. Moderate‐certainty evidence is available for only one finding: electric and manual toothbrushes are probably similarly effective for reducing gingival inflammation in people with ID in the medium term.
More and better research is needed to fully evaluate interventions that show promise for improving the oral hygiene of people with ID, and to confirm which interventions are ineffective. In the meantime, changes to current habits based on this review should be made cautiously, and decisions about oral hygiene care should be based on professional expertise and the needs and preferences of people with ID and their carers. | Periodontal (gum) disease and dental caries (tooth decay) are the most common causes of tooth loss; dental plaque plays a major role in the development of these diseases. Effective oral hygiene involves removing dental plaque, for example, by regular toothbrushing. People with intellectual disabilities (ID) can have poor oral hygiene and oral health outcomes. Objectives To assess the effects (benefits and harms) of oral hygiene interventions, specifically the mechanical removal of plaque, for people with intellectual disabilities (ID). Search methods Cochrane Oral Health's Information Specialist searched the following databases to 4 February 2019: Cochrane Oral Health's Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL; Cochrane Register of Studies), MEDLINE Ovid, Embase Ovid and PsycINFO Ovid. ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform were searched for ongoing trials. The Embase search was restricted by date due to the Cochrane Centralised Search Project, which makes available clinical trials indexed in Embase through CENTRAL. We handsearched specialist conference abstracts from the International Association of Disability and Oral Health (2006 to 2016). 
Selection criteria We included randomised controlled trials (RCTs) and some types of non‐randomised studies (NRS) (non‐RCTs, controlled before‐after studies, interrupted time series studies and repeated measures studies) that evaluated oral hygiene interventions targeted at people with ID or their carers, or both. We used the definition of ID in the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD‐10). We defined oral hygiene as the mechanical removal of plaque. We excluded studies that evaluated chemical removal of plaque, or mechanical and chemical removal of plaque combined. Data collection and analysis At least two review authors independently screened search records, identified relevant studies, extracted data, assessed risk of bias and judged the certainty of the evidence according to GRADE criteria. We contacted study authors for additional information if required. We reported RCTs and NRSs separately. We included 19 RCTs and 15 NRSs involving 1795 adults and children with ID and 354 carers. Interventions evaluated were: special manual toothbrushes, electric toothbrushes, oral hygiene training, scheduled dental visits plus supervised toothbrushing, discussion of clinical photographs showing plaque, varied frequency of toothbrushing, plaque‐disclosing agents and individualised care plans. We categorised results as short (six weeks or less), medium (between six weeks and 12 months) and long term (more than 12 months). Most studies were small; all were at overall high or unclear risk of bias. None of the studies reported quality of life or dental caries. We present below the evidence available from RCTs (or NRS if the comparison had no RCTs) for gingival health (inflammation and plaque) and adverse effects, as well as knowledge and behaviour outcomes for the training studies. 
Very low‐certainty evidence suggested a special manual toothbrush (the Superbrush) reduced gingival inflammation (GI), and possibly plaque, more than a conventional toothbrush in the medium term (GI: mean difference (MD) –12.40, 95% CI –24.31 to –0.49; plaque: MD –0.44, 95% CI –0.93 to 0.05; 1 RCT, 18 participants); brushing was carried out by the carers. In the short term, neither toothbrush showed superiority (GI: MD –0.10, 95% CI –0.77 to 0.57; plaque: MD 0.20, 95% CI –0.45 to 0.85; 1 RCT, 25 participants; low‐ to very low‐certainty evidence). Moderate‐ and low‐certainty evidence found no difference between electric and manual toothbrushes for reducing GI or plaque, respectively, in the medium term (GI: MD 0.02, 95% CI –0.06 to 0.09; plaque: standardised mean difference 0.29, 95% CI –0.07 to 0.65; 2 RCTs, 120 participants). Short‐term findings were inconsistent (4 RCTs; low‐ to very low‐certainty evidence). Low‐certainty evidence suggested training carers in oral hygiene care had no detectable effect on levels of GI or plaque in the medium term (GI: MD –0.09, 95% CI –0.63 to 0.45; plaque: MD –0.07, 95% CI –0.26 to 0.13; 2 RCTs, 99 participants). Low‐certainty evidence suggested oral hygiene knowledge of carers was better in the medium term after training (MD 0.69, 95% CI 0.31 to 1.06; 2 RCTs, 189 participants); this was not found in the short term, and results for changes in behaviour, attitude and self‐efficacy were mixed. One RCT (10 participants) found that training people with ID in oral hygiene care reduced plaque but not GI in the short term (GI: MD –0.28, 95% CI –0.90 to 0.34; plaque: MD –0.47, 95% CI –0.92 to –0.02; very low‐certainty evidence). One RCT (304 participants) found that scheduled dental recall visits (at 1‐, 3‐ or 6‐month intervals) plus supervised daily toothbrushing were more likely than usual care to reduce GI (pocketing but not bleeding) and plaque in the long term (low‐certainty evidence). 
One RCT (29 participants) found that motivating people with ID about oral hygiene by discussing photographs of their teeth with plaque highlighted by a plaque‐disclosing agent did not reduce plaque in the medium term (very low‐certainty evidence). One RCT (80 participants) found daily toothbrushing by dental students was more effective for reducing plaque in people with ID than once‐ or twice‐weekly toothbrushing in the short term (low‐certainty evidence). A benefit to gingival health was found by one NRS that evaluated toothpaste with a plaque‐disclosing agent and one that evaluated individualised oral care plans (very low‐certainty evidence). Most studies did not report adverse effects; of those that did, only one study considered them as a formal outcome. Some studies reported participant difficulties using the electric or special manual toothbrushes. Although some oral hygiene interventions for people with ID show benefits, the clinical importance of these benefits is unclear. The evidence is mainly low or very low certainty. Moderate‐certainty evidence was available for only one finding: electric and manual toothbrushes were similarly effective for reducing gingival inflammation in people with ID in the medium term. Larger, higher‐quality RCTs are recommended to endorse or refute the findings of this review. In the meantime, oral hygiene care and advice should be based on professional expertise and the needs and preferences of the individual with ID and their carers. |
t96 | Bronchiolitis is a common illness affecting the lower (smaller) respiratory airways in infants (younger than 24 months of age). Usually caused by a viral infection, it results in breathing problems, including cough, fast breathing and wheezing, and can cause poor feeding. It is a major cause of hospitalisation in infants. Current treatment involves supporting infants to breathe until the infection clears. An emerging method to support breathing is using blended, heated, humidified air and oxygen, through nasal cannulae (tubes) at flow rates higher than two litres per minute, which is the maximum for conventional dry oxygen delivery. This is known as high‐flow nasal cannula therapy and it allows the comfortable delivery of high flow rates of an air/oxygen blend which may improve ventilation. This may lead to a reduced need for invasive respiratory support (e.g. intubation) and may have a clinical advantage over other treatments by preventing drying of the upper airway. This review assessed the effects of high‐flow nasal cannula therapy, compared with other respiratory support, in the treatment of infants with bronchiolitis. One study (19 participants) met our inclusion criteria. It showed that high‐flow nasal cannula therapy is well tolerated as a treatment for bronchiolitis. Oxygen saturations (blood oxygen levels) were better at eight and 12 hours in participants receiving high‐flow nasal cannula therapy than in those receiving oxygen therapy via a head box, but were similar between groups at 24 hours, although this may have been due to higher oxygen flow rates in the high‐flow nasal cannula group. There was no clear evidence of a difference between the two groups in the duration of oxygen therapy, length of hospitalisation and time to discharge. There is insufficient evidence to determine the effectiveness of high‐flow nasal cannula therapy for treating bronchiolitis in infants. 
The included study provides some indication that HFNC therapy is feasible and well tolerated. However, our evidence is based on one low‐quality, small study with uncertainty about the effects and some possibility of bias arising from the study methods. | Bronchiolitis is a common lower respiratory tract illness, usually of viral aetiology, affecting infants younger than 24 months of age and is a frequent cause of hospitalisation. It causes airway inflammation, mucus production and mucous plugging, resulting in airway obstruction. Effective pharmacotherapy is lacking and bronchiolitis is a major cause of morbidity and mortality. Conventional treatment consists of supportive therapy in the form of fluids, supplemental oxygen and respiratory support. Traditionally oxygen delivery is as a dry gas at 100% concentration via low‐flow nasal prongs. However, the use of heated, humidified, high‐flow nasal cannula (HFNC) therapy enables delivery of higher inspired gas flows of an air/oxygen blend, up to 12 L/min in infants and 30 L/min in children. Its use provides some level of continuous positive airway pressure to improve ventilation in a minimally invasive manner. This may reduce the need for invasive respiratory support thus potentially lowering costs, with clinical advantages and fewer adverse effects. Objectives To assess the effects of HFNC therapy compared with conventional respiratory support in the treatment of infants with bronchiolitis. Search methods We searched CENTRAL (2013, Issue 4), MEDLINE (1946 to May week 1, 2013), EMBASE (January 2010 to May 2013), CINAHL (1981 to May 2013), LILACS (1982 to May 2013) and Web of Science (1985 to May 2013). In addition we consulted ongoing trial registers and experts in the field to identify ongoing studies, checked reference lists of relevant articles and searched conference abstracts. 
Selection criteria We included randomised controlled trials (RCTs) or quasi‐RCTs which assessed the effects of HFNC (delivering oxygen or oxygen/room air blend at flow rates greater than 4 L/min) compared to conventional treatment in infants (< 24 months) with a clinical diagnosis of bronchiolitis. Data collection and analysis Two review authors independently used a standard template to assess trials for inclusion and extract data on study characteristics, 'Risk of bias' elements and outcomes. We contacted trial authors to request missing data. Outcome measures included the need for invasive respiratory support and time until discharge, clinical severity measures, oxygen saturation, duration of oxygen therapy and adverse events. We included one RCT which was a pilot study with 19 participants that compared HFNC therapy with oxygen delivery via a head box. In this study, we judged the risk of selection, attrition and reporting bias to be low, and we judged the risk of performance and detection bias to be unclear due to lack of blinding. The median oxygen saturation (SpO 2 ) was higher in the HFNC group at eight hours (100% versus 96%, P = 0.04) and at 12 hours (99% versus 96%, P = 0.04) but similar at 24 hours. There was no clear evidence of a difference in total duration of oxygen therapy, time to discharge or total length of stay between groups. No adverse events were reported in either group and no participants in either group required further respiratory support. Five ongoing trials were identified but no data were available in May 2013. We were not able to perform a meta‐analysis. There is insufficient evidence to determine the effectiveness of HFNC therapy for treating infants with bronchiolitis. The current evidence in this review is of low quality, from one small study with uncertainty about the estimates of effect and an unclear risk of performance and detection bias. 
The included study provides some indication that HFNC therapy is feasible and well tolerated. Further research is required to determine the role of HFNC in the management of bronchiolitis in infants. The results of the ongoing studies identified will contribute to the evidence in future updates of this review. |
t97 | Most trials follow people up to collect data through personal contact after they have been recruited. Some trials get data from other sources, such as routinely collected data or disease registers. There are many ways to collect data from people in trials, and these include using letters, the internet, telephone calls, text messaging, face‐to‐face meetings or the return of medical test kits. Most trials have missing data, for example, because people are too busy to reply, are unable to attend a clinic, have moved or no longer want to participate. Sometimes data have not been recorded at study sites, or are not sent to the trial co‐ordinating centre. Researchers call this 'loss to follow‐up', 'drop out' or 'attrition' and it can affect the trial's results. For example, if the people with the most or least severe symptoms do not return questionnaires or attend a follow‐up visit, this will bias the findings of the trial. Many methods are used by researchers to keep people in trials. These encourage people to send back data by questionnaire, return to a clinic or hospital for trial‐related tests, or be seen by a health or community care worker. This review identified methods that encouraged people to stay in trials. We searched scientific databases for randomised studies (where people are allocated to one of two or more possible treatments in a random manner) or quasi‐randomised studies (where allocation is not really random, e.g. based on date of birth, order in which they attended clinic) that compared methods of increasing retention in trials. We included trials of participants from any age, gender, ethnic, cultural, language and geographic groups. The methods that appeared to work were offering or giving a small amount of money for return of a completed questionnaire and enclosing a small amount of money with a questionnaire with the promise of a further small amount of money for return of a completed questionnaire. 
The effect of other ways to keep people in trials is still not clear and more research is needed to see if these really do work. Such methods are shorter questionnaires, sending questionnaires by recorded delivery, using a trial design where people know which treatment they will receive, sending specially designed letters with a self‐addressed stamped reply envelope followed by a number of reminders, offering a donation to charity or entry into a prize draw, sending a reminder to the study site about participants to follow‐up, sending questionnaires close to the time the patient was last followed‐up, managing people's follow‐up, conducting follow‐up by telephone and changing the order of questionnaire questions. | Loss to follow‐up from randomised trials can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to reduce loss to follow‐up and improve retention but few have been formally evaluated. Objectives To quantify the effect of strategies to improve retention on the proportion of participants retained in randomised trials and to investigate if the effect varied by trial strategy and trial setting. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, PreMEDLINE, EMBASE, PsycINFO, DARE, CINAHL, Campbell Collaboration’s Social, Psychological, Educational and Criminological Trials Register, and ERIC. We handsearched conference proceedings and publication reference lists for eligible retention trials. We also surveyed all UK Clinical Trials Units to identify further studies. Selection criteria We included eligible retention trials of randomised or quasi‐randomised evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance. 
Data collection and analysis We contacted authors to supplement or confirm data that we had extracted. For retention trials, we recorded data on the method of randomisation, type of strategy evaluated, comparator, primary outcome, planned sample size, numbers randomised and numbers retained. We used risk ratios (RR) to evaluate the effectiveness of the addition of strategies to improve retention. We assessed heterogeneity between trials using the Chi 2 and I 2 statistics. For main trials that hosted retention trials, we extracted data on disease area, intervention, population, healthcare setting, sequence generation and allocation concealment. We identified 38 eligible retention trials. Included trials evaluated six broad types of strategies to improve retention. These were incentives, communication strategies, new questionnaire format, participant case management, behavioural and methodological interventions. For 34 of the included trials, retention was response to postal and electronic questionnaires with or without medical test kits. For four trials, retention was the number of participants remaining in the trial. Included trials were conducted across a spectrum of disease areas, countries, healthcare and community settings. Strategies that improved trial retention were addition of monetary incentives compared with no incentive for return of trial‐related postal questionnaires (RR 1.18; 95% CI 1.09 to 1.28, P value < 0.0001), addition of an offer of monetary incentive compared with no offer for return of electronic questionnaires (RR 1.25; 95% CI 1.14 to 1.38, P value < 0.00001) and an offer of a GBP20 voucher compared with GBP10 for return of postal questionnaires and biomedical test kits (RR 1.12; 95% CI 1.04 to 1.22, P value < 0.005). 
The evidence that shorter questionnaires are better than longer questionnaires was unclear (RR 1.04; 95% CI 1.00 to 1.08, P value = 0.07) and the evidence for questionnaires relevant to the disease/condition was also unclear (RR 1.07; 95% CI 1.01 to 1.14). Although each was based on the results of a single trial, recorded delivery of questionnaires seemed to be more effective than telephone reminders (RR 2.08; 95% CI 1.11 to 3.87, P value = 0.02) and a 'package' of postal communication strategies with reminder letters appeared to be better than standard procedures (RR 1.43; 95% CI 1.22 to 1.67, P value < 0.0001). An open trial design also appeared more effective than a blind trial design for return of questionnaires in one fracture prevention trial (RR 1.37; 95% CI 1.16 to 1.63, P value = 0.0003). There was no good evidence that the addition of a non‐monetary incentive, an offer of a non‐monetary incentive, 'enhanced' letters, letters delivered by priority post, additional reminders, or questionnaire question order either increased or decreased trial questionnaire response/retention. There was also no evidence that a telephone survey was either more or less effective than a monetary incentive and a questionnaire. As our analyses are based on single trials, the effect on questionnaire response of using offers of charity donations, sending reminders to trial sites and when a questionnaire is sent, may need further evaluation. Case management and behavioural strategies used for trial retention may also warrant further evaluation. Most of the retention trials that we identified evaluated questionnaire response. There were few evaluations of ways to improve participants returning to trial sites for trial follow‐up. Monetary incentives and offers of monetary incentives increased postal and electronic questionnaire response. Some other strategies evaluated in single trials looked promising but need further evaluation. 
Application of the findings of this review would depend on trial setting, population, disease area, data collection and follow‐up procedures. |
t98 | A majority of patients with advanced colorectal cancer cannot be cured of their disease. This is because it has spread widely throughout the body and is therefore not resectable. In many of these patients, the original cancer that caused the problem is relatively asymptomatic and the patient is not aware of it. Most of these patients will be treated with a combination of chemotherapy and possibly radiotherapy. From a clinical perspective, a major problem in dealing with these patients is what to do with the primary cancer. Some studies have suggested that resecting the primary cancer can prolong survival and prevent complications arising from the cancer, such as obstruction or bleeding. This review addresses the question of whether surgically removing the primary cancer is beneficial to patients with advanced and unresectable colorectal cancer. | In a majority of patients with stage IV colorectal cancer, the metastatic disease is not resectable and the focus of management is on how best to palliate the patient. How to manage the primary tumour is an important part of palliation. A small proportion of these patients present with either obstructing or perforating cancers and require urgent surgical care. However, a majority are relatively asymptomatic from their primary cancer. Chemotherapy has been shown to prolong survival in this group of patients, and a majority of patients would be treated this way. Nonetheless, a recent meta‐analysis ( Stillwell 2010 ) suggests an improved overall survival and reduced requirement for emergency surgery in those patients who undergo primary tumour resection. This review was also able to quantify the mortality and morbidity associated with surgery to remove the primary. Objectives To determine if there is an improvement in overall survival following resection of the primary cancer in patients with unresectable stage IV colorectal cancer and an asymptomatic primary who are treated with chemo/radiotherapy. 
Search methods In January 2012 we searched for published randomised and non‐randomised controlled clinical trials without language restrictions using the following electronic databases: CENTRAL (the Cochrane Library (latest issue)), MEDLINE (1966 to date), EMBASE (1980 to date), Science Citation Index (1981 to date), ISI Proceedings (1990 to date), Current Controlled Trials MetaRegister (latest issue), Zetoc (latest issue) and CINAHL (1982 to date). Selection criteria Randomised controlled trials and non‐randomised controlled studies evaluating the influence on overall survival of primary tumour resection versus no resection in asymptomatic patients with unresectable stage IV colorectal cancer who are treated with palliative chemo/radiotherapy. Data collection and analysis We conducted the review according to the recommendations of The Cochrane Collaboration and the Cochrane Colorectal Group. “Review Manager 5” software was used. A total of 798 studies were identified following the initial search. No published or unpublished randomised controlled trials comparing primary tumour resection versus no resection in asymptomatic patients with unresectable stage IV colorectal cancer who were treated with chemo/radiotherapy were identified. Seven non‐randomised studies, potentially eligible for inclusion, were identified: 2 case‐matched studies, 2 CCTs and 3 retrospective cohort studies. Overall, these trials included 1086 patients (722 patients treated with primary tumour resection, and 364 patients managed first with chemotherapy and/or radiotherapy). Resection of the primary tumour in asymptomatic patients with unresectable stage IV colorectal cancer who are managed with chemo/radiotherapy is not associated with a consistent improvement in overall survival. In addition, resection does not significantly reduce the risk of complications from the primary tumour (i.e. obstruction, perforation or bleeding). 
Yet there is enough doubt with regard to the published literature to justify further clinical trials in this area. The results from an ongoing high quality randomised controlled trial will help to answer this question. |
t99 | Key‐hole removal of the gallbladder (laparoscopic cholecystectomy) is currently the preferred treatment for people with symptoms related to gallstones in the gallbladder. This is generally performed by distending the tummy (abdomen) using carbon dioxide gas (pneumoperitoneum) so that there is adequate space for instruments and to visualise the structures within the abdomen. This enables the surgeons to identify and divide the appropriate structures. However, distending the abdominal wall can result in various physiological changes that affect the functioning of the heart or lungs. These changes are more pronounced at higher pressures of the gas used to distend the abdomen. They are generally tolerated well in people with a low risk of anaesthetic problems. However, those with pre‐existing illnesses may not tolerate this distension of the abdomen well. So, an alternative method of enabling the surgeons to visualise the structures in the abdomen and to use instruments by lifting up the abdominal wall using special devices (abdominal wall lift) has been suggested for people undergoing laparoscopic cholecystectomy. We reviewed all the relevant information from randomised trials (a type of study which provides the best information on whether one treatment is better than the other, if conducted properly) in the literature to find out if abdominal wall lift is better than distending the abdomen using carbon dioxide gas. We adopted methods to identify all the possible studies and used methods that decrease the errors in data collection. Abdominal wall lift with pneumoperitoneum versus pneumoperitoneum A total of 130 participants (all with low anaesthetic risk) were included in five trials which compared abdominal wall lift combined with very low pressure pneumoperitoneum and standard pneumoperitoneum. All five trials had a high risk of bias (introducing the possibility of overestimating benefits or underestimating the harms of abdominal wall lift). 
There was no significant difference in the rate of serious complications related to the surgery. None of the trials reported quality of life, the proportion of people discharged as laparoscopic cholecystectomy day‐patients, or pain between four and eight hours after the operation. None required conversion of key‐hole surgery to an open operation using a larger incision. There was no significant difference in the operating time between the two groups. Abdominal wall lift versus pneumoperitoneum A total of 774 participants (the majority with low anaesthetic risk) who underwent planned laparoscopic cholecystectomy were included in 18 trials which compared abdominal wall lift and standard pneumoperitoneum. There was no significant difference in the rate of serious complications related to surgery. None of the trials reported quality of life or pain between four and eight hours after the operation. There was no significant difference in the rate of serious adverse events, the proportion of people who underwent an open operation using a larger incision, or the proportion discharged on the same day of surgery. The operating time was about seven minutes longer on average if the operation was performed using abdominal wall lift rather than pneumoperitoneum. In summary, abdominal wall lift does not seem to offer an advantage over pneumoperitoneum in any of the patient‐oriented outcomes for laparoscopic cholecystectomy in people with low anaesthetic risk. Abdominal wall lift may increase costs by increasing the operating time. Hence it cannot be recommended routinely. The safety of abdominal wall lift is yet to be established. More randomised clinical trials on the topic are needed since the possibility of arriving at erroneous conclusions due to bias and due to the play of chance was high because of the design of the trials. Future trials should include people at high risk during anaesthesia. Furthermore, such trials should employ blinded assessments of outcome measures. 
| Laparoscopic cholecystectomy (key‐hole removal of the gallbladder) is now the most often used method for treatment of symptomatic gallstones. Several cardiopulmonary changes (decreased cardiac output, pulmonary compliance, and increased peak airway pressure) occur during pneumoperitoneum, which is now introduced to allow laparoscopic cholecystectomy. These cardiopulmonary changes may not be tolerated in individuals with poor cardiopulmonary reserve. Objectives To assess the benefits and harms of abdominal wall lift compared to pneumoperitoneum in patients undergoing laparoscopic cholecystectomy. Search methods We searched the Cochrane Hepato‐Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library , MEDLINE, EMBASE, and Science Citation Index Expanded until February 2013. Selection criteria We included all randomised clinical trials comparing abdominal wall lift (with or without pneumoperitoneum) versus pneumoperitoneum. Data collection and analysis We calculated the risk ratio (RR), rate ratio (RaR), or mean difference (MD) with 95% confidence intervals (CI) based on intention‐to‐treat analysis with both the fixed‐effect and the random‐effects models using the Review Manager (RevMan) software. For abdominal wall lift with pneumoperitoneum versus pneumoperitoneum, a total of 130 participants (all with low anaesthetic risk) scheduled for elective laparoscopic cholecystectomy were randomised in five trials to abdominal wall lift with pneumoperitoneum (n = 53) versus pneumoperitoneum only (n = 52). One trial which included 25 people did not state the number of participants in each group. All five trials had a high risk of bias. There was no mortality or conversion to open cholecystectomy in any of the participants in the trials that reported these outcomes. 
There was no significant difference in the rate of serious adverse events between the two groups (two trials; 2/29 events (0.069 events per person) versus 2/29 events (0.069 events per person); rate ratio 1.00; 95% CI 0.17 to 5.77). None of the trials reported quality of life, the proportion of people discharged as day‐patient laparoscopic cholecystectomies, or pain between four and eight hours after the operation. There was no significant difference in the operating time between the two groups (four trials; 53 participants versus 54 participants; 13.39 minutes longer (95% CI 2.73 less to 29.51 minutes longer) in the abdominal wall lift with pneumoperitoneum group and 100 minutes in the pneumoperitoneum group). For abdominal wall lift versus pneumoperitoneum, a total of 774 participants (the majority with low anaesthetic risk) scheduled for elective laparoscopic cholecystectomy were randomised in 18 trials to abdominal wall lift without pneumoperitoneum (n = 332) versus pneumoperitoneum (n = 358). One trial which included 84 people did not state the number in each group. All the trials had a high risk of bias. There was no mortality in any of the trials that reported this outcome. There was no significant difference in the proportion of participants with serious adverse events (six trials; 5/172 (weighted proportion 2.4%) versus 2/171 (1.2%); RR 2.01; 95% CI 0.52 to 7.80). There was no significant difference in the rate of serious adverse events between the two groups (three trials; 5/99 events (weighted number of events per person = 0.346 events) versus 2/99 events (0.020 events per person); rate ratio 1.73; 95% CI 0.35 to 8.61). None of the trials reported quality of life or pain between four and eight hours after the operation. There was no significant difference in the proportion of people who underwent conversion to open cholecystectomy (11 trials; 5/225 (weighted proportion 2.3%) versus 7/235 (3.0%); RR 0.76; 95% CI 0.26 to 2.21). 
The operating time was significantly longer in the abdominal wall lift group than in the pneumoperitoneum group (16 trials; 6.87 minutes longer (95% CI 4.74 minutes to 9.00 minutes longer) in the abdominal wall lift group versus 75 minutes in the pneumoperitoneum group). There was no significant difference in the proportion of people discharged as laparoscopic cholecystectomy day‐patients (two trials; 15/31 (weighted proportion 48.5%) versus 9/31 (29%); RR 1.67; 95% CI 0.85 to 3.26). Abdominal wall lift with or without pneumoperitoneum does not seem to offer an advantage over pneumoperitoneum in any of the patient‐oriented outcomes for laparoscopic cholecystectomy in people with low anaesthetic risk. Hence it cannot be recommended routinely. The safety of abdominal wall lift is yet to be established. More research on the topic is needed because of the risk of bias in the included trials and because of the risk of type I and type II random errors due to the few participants included in the trials. Future trials should include people at higher anaesthetic risk. Furthermore, such trials should include blinded assessment of outcomes. |