Target_Summary_ID | Target_Sentence | Original_Abstract |
---|---|---|
t0 | The skin patch and the vaginal (birth canal) ring are two methods of birth control. Both methods contain the hormones estrogen and progestin. The patch is a small, thin, adhesive square that is applied to the skin. The contraceptive vaginal ring is a flexible, lightweight device that is inserted into the vagina. Both methods release drugs like those in birth control pills. These methods could be used more consistently than pills because they do not require a daily dose. This review looked at how well the methods worked to prevent pregnancy, if they caused bleeding problems, if women used them as prescribed, and how safe they were. Through February 2013, we did computer searches for randomized controlled trials of the skin patch or vaginal ring compared to pills for birth control. Pills included types with both estrogen and progestin. We wrote to researchers to find other trials. We found 18 trials. Of six patch trials, five compared the marketed patch to birth control pills and one studied a patch being developed. Of 12 ring trials, 11 looked at the marketed ring and pills while one studied a ring being developed. The methods compared had similar pregnancy rates. Patch users reported using their method more consistently than the pill group did. Only half of the patch studies had data on pregnancy or whether the women used the method correctly. However, most of the ring studies had those data. Patch users were more likely than pill users to drop out early from the trial. Ring users were not more likely to drop out early. Compared to pill users, users of the marketed patch had more breast discomfort, painful periods, nausea, and vomiting. Ring users had more vaginal irritation and discharge than pill users but less nausea, acne, irritability, depression, and emotional changes. Ring users often had fewer bleeding problems than pill users. The quality of information was classed as low for the patch trials and moderate for the ring studies. 
Lower quality was due to not reporting how groups were assigned or not having good outcome measures. Other issues were high losses and taking assigned women out of the analysis. Studies of the patch and ring should provide more detail on whether women used the method correctly. | The delivery of combination contraceptive steroids from a transdermal contraceptive patch or a contraceptive vaginal ring offers potential advantages over the traditional oral route. The transdermal patch and vaginal ring could require a lower dose due to increased bioavailability and improved user compliance. Objectives To compare the contraceptive effectiveness, cycle control, compliance (adherence), and safety of the contraceptive patch or the vaginal ring versus combination oral contraceptives (COCs). Search methods Through February 2013, we searched MEDLINE, POPLINE, CENTRAL, LILACS, ClinicalTrials.gov, and ICTRP for trials of the contraceptive patch or the vaginal ring. Earlier searches also included EMBASE. For the initial review, we contacted known researchers and manufacturers to identify other trials. Selection criteria We considered randomized controlled trials comparing a transdermal contraceptive patch or a contraceptive vaginal ring with a COC. Data collection and analysis Data were abstracted by two authors and entered into RevMan. For dichotomous variables, the Peto odds ratio (OR) with 95% confidence intervals (CI) was calculated. For continuous variables, the mean difference was computed. We also assessed the quality of evidence for this review. We found 18 trials that met our inclusion criteria. Of six patch studies, five examined the marketed patch containing norelgestromin plus ethinyl estradiol (EE); one studied a patch in development that contains levonorgestrel (LNG) plus EE. Of 12 vaginal ring trials, 11 examined the same marketed ring containing etonogestrel plus EE; one studied a ring being developed that contains nestorone plus EE.
Contraceptive effectiveness was not significantly different for the patch or ring versus the comparison COC. Compliance data were limited. Patch users showed better compliance than COC users in three trials. For the norelgestromin plus EE patch, ORs were 2.05 (95% CI 1.83 to 2.29) and 2.76 (95% CI 2.35 to 3.24). In the levonorgestrel plus EE patch report, patch users were less likely to have missed days of therapy (OR 0.36; 95% CI 0.25 to 0.51). Of four vaginal ring trials, one found ring users had more noncompliance (OR 3.99; 95% CI 1.87 to 8.52), while another showed more compliance with the regimen (OR 1.67; 95% CI 1.04 to 2.68). More patch users discontinued early than COC users. ORs from two meta‐analyses were 1.59 (95% CI 1.26 to 2.00) and 1.56 (95% CI 1.18 to 2.06) and another trial showed OR 2.57 (95% CI 0.99 to 6.64). Patch users also had more discontinuation due to adverse events than COC users. Users of the norelgestromin‐containing patch reported more breast discomfort, dysmenorrhea, nausea, and vomiting. In the levonorgestrel‐containing patch trial, patch users reported less vomiting, headaches, and fatigue. Of 11 ring trials with discontinuation data, two showed the ring group discontinued less than the COC group: OR 0.32 (95% CI 0.16 to 0.66) and OR 0.52 (95% CI 0.31 to 0.88). Ring users were less likely to discontinue due to adverse events in one study (OR 0.32; 95% CI 0.15 to 0.70). Compared to the COC users, ring users had more vaginitis and leukorrhea but less vaginal dryness. Ring users also reported less nausea, acne, irritability, depression, and emotional lability than COC users. For cycle control, only one trial showed a significant difference. Women in the patch group were less likely to have breakthrough bleeding and spotting. Seven ring studies had bleeding data; four trials showed the ring group generally had better cycle control than the COC group. Effectiveness was not significantly different for the methods compared.
Pregnancy data were available from half of the patch trials but two‐thirds of ring trials. The patch could lead to more discontinuation than the COC. The patch group had better compliance than the COC group. Compliance data came from half of the patch studies and one‐third of the ring trials. Patch users had more side effects than the COC group. Ring users generally had fewer adverse events than COC users but more vaginal irritation and discharge. The main reasons for downgrading were lack of information on the randomization sequence generation or allocation concealment, the outcome assessment methods, high losses to follow up, and exclusions after randomization. |
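The compliance and discontinuation comparisons in the row above were pooled as Peto odds ratios. As a rough illustration only (not the review's Peto method, which pools (O − E)/V across trials), an unadjusted cross-product odds ratio from a single trial's 2×2 table can be sketched like this; the counts shown are hypothetical, not from any included trial:

```python
def odds_ratio(events_a, no_events_a, events_b, no_events_b):
    """Unadjusted (cross-product) odds ratio from a 2x2 table.

    events_a / no_events_a: counts in the patch or ring group,
    events_b / no_events_b: counts in the COC group.
    """
    return (events_a * no_events_b) / (no_events_a * events_b)

# Hypothetical counts, for illustration only: 60 of 100 patch users
# compliant versus 40 of 100 pill users.
print(odds_ratio(60, 40, 40, 60))  # → 2.25
```

An OR above 1 here means the first group had higher odds of the event (compliance, or dropout), matching how the ORs of 2.05 and 2.76 for patch compliance are read in the abstract.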
t1 | Excess body weight has become a health problem around the world. Being overweight or obese may affect how well some birth control methods work to prevent pregnancy. Hormonal birth control includes pills, the skin patch, the vaginal ring, implants, injectables, and hormonal intrauterine contraception (IUC). Until 4 August 2016, we did computer searches for studies of hormonal birth control among women who were overweight or obese. We looked for studies that compared overweight or obese women with women of normal weight or body mass index (BMI). The formula for BMI is weight (kg) / height (m)². We included all study designs. For the original review, we wrote to investigators to find other studies we might have missed. With 8 studies added in this update, we had 17 with a total of 63,813 women. We focus here on 12 studies with high, moderate, or low quality results. Most did not show more pregnancies for overweight or obese women. Two of five studies using birth control pills found differences between BMI groups. In one, overweight women had a higher pregnancy risk. The other found a lower pregnancy rate for obese women versus nonobese women. The second study also tested a new skin patch. Obese women in the patch group had a higher pregnancy rate. Of five implant studies, two showed differences among weight groups. They studied the older six‐capsule implant. One study showed a higher pregnancy rate in years 6 and 7 combined for women weighing 70 kg or more. The other reported pregnancy differences in year 5 among the lower weight groups only. Results for other methods of birth control did not show overweight or obesity related to pregnancy rate. Those methods included an injectable, hormonal IUC, and the two‐rod and single‐rod implants. These studies generally did not show an association of BMI or weight with the effect of hormonal methods. We found few studies for most methods.
Studies using BMI rather than weight can show whether body fat is related to how well birth control prevents pregnancy. The methods studied here work very well when used according to directions. | Obesity has reached epidemic proportions around the world. Effectiveness of hormonal contraceptives may be related to metabolic changes in obesity or to greater body mass or body fat. Hormonal contraceptives include oral contraceptives (OCs), injectables, implants, hormonal intrauterine contraception (IUC), the transdermal patch, and the vaginal ring. Given the prevalence of overweight and obesity, the public health impact of any effect on contraceptive efficacy could be substantial. Objectives To examine the effectiveness of hormonal contraceptives in preventing pregnancy among women who are overweight or obese versus women with a lower body mass index (BMI) or weight. Search methods Until 4 August 2016, we searched for studies in PubMed (MEDLINE), CENTRAL, POPLINE, Web of Science, ClinicalTrials.gov, and ICTRP. We examined reference lists of pertinent articles to identify other studies. For the initial review, we wrote to investigators to find additional published or unpublished studies. Selection criteria All study designs were eligible. The study could have examined any type of hormonal contraceptive. Reports had to contain information on the specific contraceptive methods used. The primary outcome was pregnancy. Overweight or obese women must have been identified by an analysis cutoff for weight or BMI (kg/m²). Data collection and analysis Two authors independently extracted the data. One entered the data into RevMan and a second verified accuracy. The main comparisons were between overweight or obese women and women of lower weight or BMI. We examined the quality of evidence using the Newcastle‐Ottawa Quality Assessment Scale. Where available, we included life‐table rates.
We also used unadjusted pregnancy rates, relative risk (RR), or rate ratio when those were the only results provided. For dichotomous variables, we computed an odds ratio with 95% confidence interval (CI). With 8 studies added in this update, 17 met our inclusion criteria and had a total of 63,813 women. We focus here on 12 studies that provided high, moderate, or low quality evidence. Most did not show a higher pregnancy risk among overweight or obese women. Of five COC studies, two found BMI to be associated with pregnancy but in different directions. With an OC containing norethindrone acetate and ethinyl estradiol (EE), pregnancy risk was higher for overweight women, i.e. with BMI ≥ 25 versus those with BMI < 25 (reported relative risk 2.49, 95% CI 1.01 to 6.13). In contrast, a trial using an OC with levonorgestrel and EE reported a Pearl Index of 0 for obese women (BMI ≥ 30) versus 5.59 for nonobese women (BMI < 30). The same trial tested a transdermal patch containing levonorgestrel and EE. Within the patch group, obese women in the "treatment‐compliant" subgroup had a higher reported Pearl Index than nonobese women (4.63 versus 2.15). Of five implant studies, two that examined the six‐capsule levonorgestrel implant showed differences in pregnancy by weight. One study showed higher weight was associated with higher pregnancy rate in years 6 and 7 combined (reported P < 0.05). In the other, pregnancy rates differed in year 5 among the lower weight groups only (reported P < 0.01) and did not involve women weighing 70 kg or more. Analysis of data from other contraceptive methods indicated no association of pregnancy with overweight or obesity. These included depot medroxyprogesterone acetate (subcutaneous), levonorgestrel IUC, the two‐rod levonorgestrel implant, and the etonogestrel implant. The evidence generally did not indicate an association between higher BMI or weight and effectiveness of hormonal contraceptives. 
However, we found few studies for most contraceptive methods. Studies using BMI, rather than weight alone, can provide information about whether body composition is related to contraceptive effectiveness. The contraceptive methods examined here are among the most effective when used according to the recommended regimen. We considered the overall quality of evidence to be low for the objectives of this review. More recent reports provided evidence of varying quality, while the quality was generally low for older studies. For many trials the quality would be higher for their original purpose rather than the non‐randomized comparisons here. Investigators should consider adjusting for potential confounding related to BMI or contraceptive effectiveness. Newer studies included a greater proportion of overweight or obese women, which helps in examining effectiveness and side effects of hormonal contraceptives within those groups. |
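The comparisons in the row above rest on the BMI formula weight (kg) / height (m)² and the conventional cutoffs (overweight BMI ≥ 25, obese BMI ≥ 30). A minimal sketch; the function names are illustrative, not from the review:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_group(value):
    """Classify a BMI value with the cutoffs used in the review's comparisons."""
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "lower BMI"

print(round(bmi(70, 1.75), 1))   # → 22.9
print(bmi_group(bmi(95, 1.70)))  # → obese
```

Because BMI normalizes weight by height squared, two women of the same weight can fall in different BMI groups, which is why the review distinguishes studies that cut on BMI from those that cut on weight alone.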
t2 | Cluster headaches are excruciating headaches of extreme intensity. They can last for several hours, are usually on one side of the head only, and affect men more than women. Multiple headaches can occur over several days. Fast pain relief is important because of the intense nature of the pain with cluster headache. Triptans are a type of drug used to treat migraine. Although migraine is different from cluster headache, there are reasons to believe that some forms of these drugs could be useful in cluster headache. Triptans can be given by injection under the skin (subcutaneously) or by a spray into the nose (intranasally) to produce fast pain relief. The review found six studies examining two different triptans. Within 15 minutes of using subcutaneous sumatriptan 6 mg, almost 8 in 10 participants had no worse than mild pain, and 5 in 10 were pain‐free. Within 15 minutes of using intranasal zolmitriptan 5 mg, about 3 in 10 had no worse than mild pain, and 1 in 10 was pain‐free. Adverse events were more common with a triptan than with placebo but they were generally of mild to moderate severity. | This is an updated version of the original Cochrane review published in Issue 4, 2010 ( Law 2010 ). Cluster headache is an uncommon, severely painful, and disabling condition, with rapid onset. Validated treatment options are limited; first‐line therapy includes inhaled oxygen. Other therapies such as intranasal lignocaine and ergotamine are not as commonly used and are less well studied. Triptans are successfully used to treat migraine attacks and they may also be useful for cluster headache. Objectives To assess the efficacy and tolerability of the triptan class of drugs compared to placebo and other active interventions in the acute treatment of episodic and chronic cluster headache in adult patients. 
Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, ClinicalTrials.gov, and reference lists for studies from inception to 22 January 2010 for the original review, and from 2009 to 4 April 2013 for this update. Selection criteria Randomised, double‐blind, placebo‐controlled studies of triptans for acute treatment of cluster headache episodes. Data collection and analysis Two review authors independently assessed study quality and extracted data. Numbers of participants with different levels of pain relief, requiring rescue medication, and experiencing adverse events and headache‐associated symptoms in treatment and control groups were used to calculate relative risk and numbers needed to treat for benefit (NNT) and harm (NNH). New searches in 2013 did not identify any relevant new studies. All six included studies used a single dose of triptan to treat an attack of moderate to severe pain intensity. Subcutaneous sumatriptan was given to 131 participants at a 6 mg dose, and 88 at a 12 mg dose. Oral or intranasal zolmitriptan was given to 231 participants at a 5 mg dose, and 223 at a 10 mg dose. Placebo was given to 326 participants. Triptans were more effective than placebo for headache relief and pain‐free responses. By 15 minutes after treatment with subcutaneous sumatriptan 6 mg, 48% of participants were pain‐free and 75% had no pain or mild pain (17% and 32% respectively with placebo). NNTs for subcutaneous sumatriptan 6 mg were 3.3 (95% CI 2.4 to 5.0) and 2.4 (1.9 to 3.2) respectively. Intranasal zolmitriptan 10 mg was of less benefit, with 12% of participants pain‐free and 28% with no or mild pain (3% and 7% respectively with placebo). NNTs for intranasal zolmitriptan 10 mg were 11 (6.4 to 49) and 4.9 (3.3 to 9.2) respectively. Based on limited data, subcutaneous sumatriptan 6 mg was superior to intranasal zolmitriptan 5 mg or 10 mg for rapid (15 minute) responses, which are important in this condition. 
Oral routes of administration are not appropriate. |
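The NNTs quoted in the abstract above follow from the absolute difference in response rates between triptan and placebo. A minimal sketch; the published NNTs of 3.3 and 2.4 were computed from exact patient counts, so the last decimal differs slightly from these percentage-based figures:

```python
def nnt(p_active, p_placebo):
    """Number needed to treat: reciprocal of the absolute risk difference."""
    return 1.0 / (p_active - p_placebo)

# Subcutaneous sumatriptan 6 mg at 15 minutes, rates from the abstract above:
print(round(nnt(0.48, 0.17), 1))  # pain-free: → 3.2 (reported 3.3)
print(round(nnt(0.75, 0.32), 1))  # no or mild pain: → 2.3 (reported 2.4)
```

The same arithmetic explains why intranasal zolmitriptan 10 mg has a much larger NNT of 11 for pain-free response: its absolute benefit over placebo (12% versus 3%) is far smaller.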
t3 | Sugar‐sweetened beverages (SSBs) are cold and hot drinks with added sugar. Common SSBs are non‐diet soft drinks, regular soda, iced tea, sports drinks, energy drinks, fruit punches, sweetened waters, and sweetened tea and coffee. Research shows that people who drink a lot of SSBs often gain weight. Drinking a lot of SSBs can also increase the risk of diabetes, heart disease, and dental decay. Doctors therefore recommend that children, teenagers and adults drink fewer SSBs. Governments, businesses, schools and workplaces have taken various measures to support healthier beverage choices. We wanted to find out whether the measures taken so far have been successful in helping people to drink fewer SSBs to improve their health. We focused on measures that change the environment in which people make beverage choices. We did not look at studies on educational programmes or on SSB taxes, as these are examined in separate reviews. We searched for all available studies meeting clearly‐defined criteria to answer this question. We found 58 studies, which included more than one million adults, teenagers and children. Most studies lasted about one year, and were done in schools, stores or restaurants. Some studies used methods that are not very reliable. For example, in some studies participants were simply asked how much SSB they drank, which is not very reliable, as people sometimes forget how much SSB they drank. Some of the findings of our review may therefore change when more and better studies become available. We have found some evidence that some of the measures implemented to help people drink fewer SSBs have been successful, including the following: Labels which are easy to understand, such as traffic‐light labels, and labels which rate the healthfulness of beverages with stars or numbers. Limits to the availability of SSB in schools (e.g. replacing SSBs with water in school cafeterias). Price increases on SSBs in restaurants, stores and leisure centres. 
Children’s menus in chain restaurants which include healthier beverages as their standard beverage. Promotion of healthier beverages in supermarkets. Government food benefits (e.g. food stamps) which cannot be used to buy SSBs. Community campaigns focused on SSBs. Measures that improve the availability of low‐calorie beverages at home, e.g. through home deliveries of bottled water and diet beverages. We have also found some evidence that improved availability of drinking water and diet beverages at home can help people lose weight. There are also other measures which may influence how much SSB people drink, but for these the available evidence is less certain. Some, but not all studies found that such measures can have effects which were not intended and which may be negative. Some studies reported that profits of stores and restaurants decreased when the measures were implemented, but other studies showed that profits increased or stayed the same. Children who get free drinking water in schools may drink less milk. Some studies reported that people were unhappy with the measures. We also looked at studies on sugar‐sweetened milk. We found that small prizes for children who chose plain milk in their school cafeteria, as well as emoticon labels, may help children drink less sugar‐sweetened milk. However, this may also drive up the share of milk which is wasted because children choose but do not drink it. Our review shows that measures which change the environment in which people make beverage choices can help people drink less SSB. Based on our findings we suggest that such measures may be used more widely. Government officials, business people and health professionals implementing such measures should work together with researchers to find out more about their effects in the short and long term. | Frequent consumption of excess amounts of sugar‐sweetened beverages (SSB) is a risk factor for obesity, type 2 diabetes, cardiovascular disease and dental caries. 
Environmental interventions, i.e. interventions that alter the physical or social environment in which individuals make beverage choices, have been advocated as a means to reduce the consumption of SSB. Objectives To assess the effects of environmental interventions (excluding taxation) on the consumption of sugar‐sweetened beverages and sugar‐sweetened milk, diet‐related anthropometric measures and health outcomes, and on any reported unintended consequences or adverse outcomes. Search methods We searched 11 general, specialist and regional databases from inception to 24 January 2018. We also searched trial registers, reference lists and citations, scanned websites of relevant organisations, and contacted study authors. Selection criteria We included studies on interventions implemented at an environmental level, reporting effects on direct or indirect measures of SSB intake, diet‐related anthropometric measures and health outcomes, or any reported adverse outcome. We included randomised controlled trials (RCTs), non‐randomised controlled trials (NRCTs), controlled before‐after (CBA) and interrupted‐time‐series (ITS) studies, implemented in real‐world settings with a combined length of intervention and follow‐up of at least 12 weeks and at least 20 individuals in each of the intervention and control groups. We excluded studies in which participants were administered SSB as part of clinical trials, and multicomponent interventions which did not report SSB‐specific outcome data. We excluded studies on the taxation of SSB, as these are the subject of a separate Cochrane Review. Data collection and analysis Two review authors independently screened studies for inclusion, extracted data and assessed the risks of bias of included studies. We classified interventions according to the NOURISHING framework, and synthesised results narratively and conducted meta‐analyses for two outcomes relating to two intervention types. 
We assessed our confidence in the certainty of effect estimates with the GRADE framework as very low, low, moderate or high, and presented ‘Summary of findings’ tables. We identified 14,488 unique records, and assessed 1030 in full text for eligibility. We found 58 studies meeting our inclusion criteria, including 22 RCTs, 3 NRCTs, 14 CBA studies, and 19 ITS studies, with a total of 1,180,096 participants. The median length of follow‐up was 10 months. The studies included children, teenagers and adults, and were implemented in a variety of settings, including schools, retailing and food service establishments. We judged most studies to be at high or unclear risk of bias in at least one domain, and most studies used non‐randomised designs. The studies examine a broad range of interventions, and we present results for these separately. Labelling interventions (8 studies): We found moderate‐certainty evidence that traffic‐light labelling is associated with decreasing sales of SSBs, and low‐certainty evidence that nutritional rating score labelling is associated with decreasing sales of SSBs. For menu‐board calorie labelling reported effects on SSB sales varied. Nutrition standards in public institutions (16 studies): We found low‐certainty evidence that reduced availability of SSBs in schools is associated with decreased SSB consumption. We found very low‐certainty evidence that improved availability of drinking water in schools and school fruit programmes are associated with decreased SSB consumption. Reported associations between improved availability of drinking water in schools and student body weight varied. Economic tools (7 studies): We found moderate‐certainty evidence that price increases on SSBs are associated with decreasing SSB sales. For price discounts on low‐calorie beverages reported effects on SSB sales varied. 
Whole food supply interventions (3 studies): Reported associations between voluntary industry initiatives to improve the whole food supply and SSB sales varied. Retail and food service interventions (7 studies): We found low‐certainty evidence that healthier default beverages in children’s menus in chain restaurants are associated with decreasing SSB sales, and moderate‐certainty evidence that in‐store promotion of healthier beverages in supermarkets is associated with decreasing SSB sales. We found very low‐certainty evidence that urban planning restrictions on new fast‐food restaurants and restrictions on the number of stores selling SSBs in remote communities are associated with decreasing SSB sales. Reported associations between promotion of healthier beverages in vending machines and SSB intake or sales varied. Intersectoral approaches (8 studies): We found moderate‐certainty evidence that government food benefit programmes with restrictions on purchasing SSBs are associated with decreased SSB intake. For unrestricted food benefit programmes reported effects varied. We found moderate‐certainty evidence that multicomponent community campaigns focused on SSBs are associated with decreasing SSB sales. Reported associations between trade and investment liberalisation and SSB sales varied. Home‐based interventions (7 studies): We found moderate‐certainty evidence that improved availability of low‐calorie beverages in the home environment is associated with decreased SSB intake, and high‐certainty evidence that it is associated with decreased body weight among adolescents with overweight or obesity and a high baseline consumption of SSBs. 
Adverse outcomes reported by studies, which may occur in some circumstances, included negative effects on revenue, compensatory SSB consumption outside school when the availability of SSBs in schools is reduced, reduced milk intake, stakeholder discontent, and increased total energy content of grocery purchases with price discounts on low‐calorie beverages, among others. The certainty of evidence on adverse outcomes was low to very low for most outcomes. We analysed interventions targeting sugar‐sweetened milk separately, and found low‐ to moderate‐certainty evidence that emoticon labelling and small prizes for the selection of healthier beverages in elementary school cafeterias are associated with decreased consumption of sugar‐sweetened milk. We found low‐certainty evidence that improved placement of plain milk in school cafeterias is not associated with decreasing sugar‐sweetened milk consumption. The evidence included in this review indicates that effective, scalable interventions addressing SSB consumption at a population level exist. Implementation should be accompanied by high‐quality evaluations using appropriate study designs, with a particular focus on the long‐term effects of approaches suitable for large‐scale implementation. |
t4 | A baby may be in this situation because the placenta is no longer functioning well and this means the baby may be short of nutrition or oxygen. We asked in this Cochrane review if it is better to induce labour or do a caesarean section (both ways of ensuring the baby is born earlier) rather than letting the pregnancy continue until labour starts by itself. Sometimes, when a healthy pregnant woman gets towards the end of pregnancy, there may be signs that her baby may be having difficulty coping. Some of these babies are born sick, very occasionally they do not survive, or they have problems in their later development. A baby may not be growing normally and so is smaller than expected (this is termed intrauterine growth restriction ‐ IUGR). The baby may show decreased movements, which may indicate the placenta is no longer functioning well. Fetal heart monitoring (known as cardiotocography or CTG) may show up a possible problem. Ultrasound can also measure amniotic fluid and blood flow in order to assess the baby’s well‐being. Induction of labour or caesarean section might help these babies by taking them out of the uterus. But intervening early in this way may mean that these babies’ lungs are not mature enough to deal well with the outside world, and they might be better to continue inside the uterus. It is not clear which option is best for mothers and babies. We found three trials involving 546 pregnant women and their babies at term. All three trials looked at using induction of labour for an early birth. Two trials looked at babies thought to have growth restriction and one trial looked at babies thought to have a small volume of amniotic fluid (oligohydramnios). All three trials were of reasonable quality and most of the evidence comes from the largest trial which compared babies who were growth restricted. There is no information about funding sources for these trials. 
Overall, we found no major differences between these two strategies in terms of the babies’ survival, the numbers of very sick babies nor in the numbers of babies with problems in development. We looked at many other outcomes, too, including how many caesarean sections there were, and how many operative vaginal births (with forceps or ventouse). We also need research into better tests to identify babies who are not coping well towards the end of pregnancy. Women should discuss their specific circumstances with their caregivers when coming to a decision. | Fetal compromise in the term pregnancy is suspected when the following clinical indicators are present: intrauterine growth restriction (IUGR), decreased fetal movement (DFM), or when investigations such as cardiotocography (CTG) and ultrasound reveal results inconsistent with standard measurements. Pathological results would necessitate the need for immediate delivery, but the management for ‘suspicious’ results remains unclear and varies widely across clinical centres. There is clinical uncertainty as to how to best manage women presenting with a suspected term compromised baby in an otherwise healthy pregnancy. Objectives To assess, using the best available evidence, the effects of immediate delivery versus expectant management of the term suspected compromised baby on neonatal, maternal and long‐term outcomes. Search methods We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2015) and reference lists of retrieved studies. Selection criteria Randomised or quasi‐randomised controlled trials comparing expectant management versus planned early delivery for women with a suspected compromised fetus from 37 weeks' gestation or more. Data collection and analysis Two review authors independently assessed trials for inclusion and assessed trial quality. Two review authors independently extracted data. Data were checked for accuracy. 
We assessed the quality of the evidence using the GRADE approach. Of the 20 reports identified by the search strategy, we included three trials (546 participants: 269 to early delivery and 277 to expectant management), which met our inclusion criteria. Two of the trials compared outcomes in 492 pregnancies with IUGR of the fetus, and one in 54 pregnancies with oligohydramnios. All three trials were of reasonable quality and at low risk of bias. The level of evidence was graded moderate, low or very low, downgrading mostly for imprecision and for some indirectness. Overall, there was no difference in the primary neonatal outcomes of perinatal mortality (no deaths in either group, one trial, 459 women, evidence graded moderate), major neonatal morbidity (risk ratio (RR) 0.15, 95% confidence interval (CI) 0.01 to 2.81, one trial, 459 women, evidence graded low), or neurodevelopmental disability/impairment at two years of age (RR 2.04, 95% CI 0.62 to 6.69, one trial, 459 women, evidence graded low). There was no difference in the risk of necrotising enterocolitis (one trial, 333 infants) or meconium aspiration (one trial, 459 infants). There was also no difference in the reported primary maternal outcomes: maternal mortality (RR 3.07, 95% CI 0.13 to 74.87, one trial, 459 women, evidence graded low), and significant maternal morbidity (RR 0.92, 95% CI 0.38 to 2.22, one trial, 459 women, evidence graded low). The gestational age at birth was on average 10 days earlier in women randomised to early delivery (mean difference (MD) ‐9.50, 95% CI ‐10.82 to ‐8.18, one trial, 459 women) and women in the early delivery group were significantly less likely to have a baby beyond 40 weeks' gestation (RR 0.10, 95% CI 0.01 to 0.67, one trial, 33 women). Significantly more infants in the planned early delivery group were admitted to intermediate care nursery (RR 1.28, 95% CI 1.02 to 1.61, two trials, 491 infants).
There was no difference in the risk of respiratory distress syndrome (one trial, 333 infants), Apgar score less than seven at five minutes (three trials, 546 infants), resuscitation required (one trial, 459 infants), mechanical ventilation (one trial, 337 infants), admission to neonatal intensive care unit (NICU) (RR 0.88, 95% CI 0.35 to 2.23, three trials, 545 infants, evidence graded very low), length of stay in NICU/SCN (one trial, 459 infants), and sepsis (two trials, 366 infants). Babies in the expectant management group were more likely to be < 2.3rd centile for birthweight (RR 0.51, 95% CI 0.36 to 0.73, two trials, 491 infants), however there was no difference in the proportion of babies with birthweight < 10th centile (RR 0.98, 95% CI 0.88 to 1.10). There was no difference in any of the reported maternal secondary outcomes including: caesarean section rates (RR 1.02, 95% CI 0.65 to 1.59, three trials, 546 women, evidence graded low), placental abruption (one trial, 459 women), pre‐eclampsia (one trial, 459 women), vaginal birth (three trials, 546 women), assisted vaginal birth (three trials, 546 women), breastfeeding rates (one trial, 218 women), and number of weeks of breastfeeding after delivery (one trial, 124 women). There was an expected increase in induction in the early delivery group (RR 2.05, 95% CI 1.78 to 2.37, one trial, 459 women). No data were reported for the pre‐specified secondary neonatal outcomes of the number of days of mechanical ventilation, moderate‐severe hypoxic ischaemic encephalopathy or need for therapeutic hypothermia. Likewise, no data were reported for secondary maternal outcomes of postnatal infection, maternal satisfaction or views of care. A policy for planned early delivery versus expectant management for a suspected compromised fetus at term does not demonstrate any differences in major outcomes of perinatal mortality, significant neonatal or maternal morbidity or neurodevelopmental disability.
In women randomised to planned early delivery, the gestational age at birth was on average 10 days earlier, women were less likely to have a baby beyond 40 weeks' gestation, they were more likely to be induced and infants were more likely to be admitted to intermediate care nursery. There was also a significant difference in the proportion of babies with a birthweight centile < 2.3rd, however this did not translate into a reduction in morbidity. The review is informed by only one large trial and two smaller trials assessing fetuses with IUGR or oligohydramnios and therefore cannot be generalised to all term pregnancies with suspected fetal compromise. There are other indications for suspecting compromise in a fetus at or near term such as maternal perception of DFM, and ultrasound and/or CTG abnormalities. Future randomised trials need to assess effectiveness of timing of delivery for these indications. |
t5 | Rapid tests for diagnosing malaria caused by Plasmodium vivax or other less common parasites. This review summarises trials evaluating the accuracy of rapid diagnostic tests (RDTs) for diagnosing malaria due to Plasmodium vivax or other non‐falciparum species. After searching for relevant studies up to December 2013, we included 47 studies, enrolling 22,862 adults and children. What are rapid tests and why do they need to be able to distinguish Plasmodium vivax malaria? RDTs are simple‐to‐use, point‐of‐care tests, suitable for use in rural settings by primary healthcare workers. RDTs work by using antibodies to detect malaria antigens in the patient's blood. A drop of blood is placed on the test strip where the antibodies and antigen combine to create a distinct line indicating a positive test. Malaria can be caused by any one of five species of Plasmodium parasite, but P. falciparum and P. vivax are the most common. In some areas, RDTs need to be able to distinguish which species is causing the malaria symptoms as different species may require different treatments. Unlike P. falciparum, P. vivax has a liver stage which can cause repeated illness every few months unless it is treated with primaquine. The most common types of RDTs for P. vivax use two test lines in combination; one line specific to P. falciparum, and one line which can detect any species of Plasmodium. If the P. falciparum line is negative and the 'any species' line is positive, the illness is presumed to be due to P. vivax (but could also be caused by P. malariae or P. ovale). More recently, RDTs have been developed which specifically test for P. vivax. What does the research say? RDTs testing for non‐falciparum malaria were very specific (range 98% to 100%), meaning that only 1% to 2% of patients who test positive would actually not have the disease.
However, they were less sensitive (range 78% to 89%), meaning between 11% and 22% of people with non‐falciparum malaria would actually get a negative test result. RDTs which specifically tested for P. vivax were more accurate with a specificity of 99% and a sensitivity of 95%, meaning that only 5% of people with P. vivax malaria would have a negative test result. | In settings where both Plasmodium vivax and Plasmodium falciparum infection cause malaria, rapid diagnostic tests (RDTs) need to distinguish which species is causing the patients' symptoms, as different treatments are required. Older RDTs incorporated two test lines to distinguish malaria due to P. falciparum, from malaria due to any other Plasmodium species (non‐falciparum). These RDTs can be classified according to which antibodies they use: Type 2 RDTs use HRP‐2 (for P. falciparum ) and aldolase (all species); Type 3 RDTs use HRP‐2 (for P. falciparum ) and pLDH (all species); Type 4 use pLDH (from P. falciparum ) and pLDH (all species). More recently, RDTs have been developed to distinguish P. vivax parasitaemia by utilizing a pLDH antibody specific to P. vivax . Objectives To assess the diagnostic accuracy of RDTs for detecting non‐falciparum or P. vivax parasitaemia in people living in malaria‐endemic areas who present to ambulatory healthcare facilities with symptoms suggestive of malaria, and to identify which types and brands of commercial test best detect non‐falciparum and P. vivax malaria. Search methods We undertook a comprehensive search of the following databases up to 31 December 2013: Cochrane Infectious Diseases Group Specialized Register; MEDLINE; EMBASE; MEDION; Science Citation Index; Web of Knowledge; African Index Medicus; LILACS; and IndMED. 
Selection criteria Studies comparing RDTs with a reference standard (microscopy or polymerase chain reaction) in blood samples from a random or consecutive series of patients attending ambulatory health facilities with symptoms suggestive of malaria in non‐falciparum endemic areas. Data collection and analysis For each study, two review authors independently extracted a standard set of data using a tailored data extraction form. We grouped comparisons by type of RDT (defined by the combinations of antibodies used), and combined them in meta‐analyses where appropriate. Average sensitivities and specificities are presented alongside 95% confidence intervals (95% CI). We included 47 studies enrolling 22,862 participants. Patient characteristics, sampling methods and reference standard methods were poorly reported in most studies. RDTs detecting 'non‐falciparum' parasitaemia Eleven studies evaluated Type 2 tests compared with microscopy, 25 evaluated Type 3 tests, and 11 evaluated Type 4 tests. In meta‐analyses, average sensitivities and specificities were 78% (95% CI 73% to 82%) and 99% (95% CI 97% to 99%) for Type 2 tests, 78% (95% CI 69% to 84%) and 99% (95% CI 98% to 99%) for Type 3 tests, and 89% (95% CI 79% to 95%) and 98% (95% CI 97% to 99%) for Type 4 tests, respectively. Type 4 tests were more sensitive than both Type 2 (P = 0.01) and Type 3 tests (P = 0.03). Five studies compared Type 3 tests with PCR; in meta‐analysis, the average sensitivity and specificity were 81% (95% CI 72% to 88%) and 99% (95% CI 97% to 99%) respectively. RDTs detecting P. vivax parasitaemia Eight studies compared pLDH tests to microscopy; the average sensitivity and specificity were 95% (95% CI 86% to 99%) and 99% (95% CI 99% to 100%), respectively. RDTs designed to detect P. vivax specifically, whether alone or as part of a mixed infection, appear to be more accurate than older tests designed to distinguish P. falciparum malaria from non‐falciparum malaria.
Compared to microscopy, these tests fail to detect around 5% of P. vivax cases. This Cochrane Review, in combination with other published information about in vitro test performance and stability in the field, can assist policy‐makers to choose between the available RDTs. 12 April 2019: no update planned. Review superseded: this Cochrane Review has been superseded by Choi 2019 (https://doi.org/10.1002/14651858.CD013218). |
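The sensitivity and specificity figures reported for these RDTs can be turned into expected error counts for a cohort. Here is a minimal sketch of that arithmetic, not taken from the review: the 95%/99% figures are the review's pooled estimates for P. vivax‐specific tests, but the cohort sizes are hypothetical and chosen only for illustration.

```python
# Hypothetical worked example: how sensitivity and specificity translate
# into missed cases and false alarms. The 0.95/0.99 figures are the
# review's pooled estimates for P. vivax-specific RDTs; the cohort
# sizes are assumptions for illustration only.

def expected_errors(n_with_disease, n_without_disease, sensitivity, specificity):
    """Return (expected false negatives, expected false positives)."""
    false_negatives = n_with_disease * (1 - sensitivity)      # infected people the RDT misses
    false_positives = n_without_disease * (1 - specificity)   # uninfected people flagged positive
    return false_negatives, false_positives

fn, fp = expected_errors(1000, 1000, 0.95, 0.99)
print(round(fn), round(fp))  # 50 missed cases and 10 false positives per 1000 in each group
```

This is the same reasoning as the summary's "only 5% of people with P. vivax malaria would have a negative test result": the false-negative rate is simply one minus the sensitivity.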
t6 | This summary presents what we know from research about the effect of exercise therapy in JIA. The review shows that in children with JIA, exercise may not lead to any difference in a child's ability to function or move their joints fully, the number of joints with swelling, quality of life, overall wellbeing, pain or aerobic capacity. Aerobic capacity is the amount of oxygen the body consumes during exercise. If a person has low aerobic capacity, it generally means he or she is able to do less physical activity and may tire easily. The number of joints with pain was not measured in these studies. We often do not have precise information about side effects and complications. This is particularly true for rare but serious side effects. No short‐term adverse effects of exercise therapy were found in the studies that make up this review. Juvenile idiopathic arthritis (JIA) is the most common chronic rheumatic disease in children and is an important cause of short‐term and long‐term disability. In JIA the cause of the arthritis is unknown. It generally begins in children younger than age 16 years. It always lasts for at least six weeks. A physician will rule out other conditions that may be causing the symptoms before diagnosing JIA. Several types of exercise therapy are described in this review, for example, physical training programs such as strength training for improving muscle strength and endurance exercise for improving overall fitness (either land based or in a pool). Other studies state that a change of 0.13 on the score of the Childhood Health Assessment Questionnaire (CHAQ) is a clinically important improvement from the perspective of children and their parents. | Exercise therapy is considered an important component of the treatment of arthritis. The efficacy of exercise therapy has been reviewed in adults with rheumatoid arthritis but not in children with juvenile idiopathic arthritis (JIA). 
Objectives To assess the effects of exercise therapy on functional ability, quality of life and aerobic capacity in children with JIA. Search methods The Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Database of Systematic Reviews ( The Cochrane Library ), MEDLINE (January 1966 to April 2007), CINAHL (January 1982 to April 2007), EMBASE (January 1966 to October 2007), PEDro (January 1966 to October 2007), SportDiscus (January 1966 to October 2007), Google Scholar (to October 2007), AMED (Allied and Alternative Medicine) (January 1985 to October 2007), Health Technologies Assessment database (January 1988 to October 2007), ISI Web Science Index to Scientific and Technical Proceedings (January 1966 to October 2007) and the Chartered Society of Physiotherapy website (http://www.cps.uk.org) were searched and references tracked. Selection criteria Randomised controlled trials (RCTs) of exercise treatment in JIA. Data collection and analysis Potentially relevant references were evaluated and all data were extracted by two review authors working independently. Three out of 16 identified studies met the inclusion criteria, with a total of 212 participants. All the included studies fulfilled at least seven of 10 methodological criteria. The outcome data of the following measures were homogenous and were pooled in a meta‐analysis: functional ability (n = 198; WMD ‐0.07, 95% CI ‐0.22 to 0.08), quality of life (CHQ‐PhS: n = 115; WMD ‐3.96, 95% CI ‐8.91 to 1.00) and aerobic capacity (n = 124; WMD 0.04, 95% CI ‐0.11 to 0.19). The results suggest that the outcome measures all favoured the exercise therapy but none were statistically significant. None of the studies reported negative effects of the exercise therapy. Overall, based on 'silver‐level' evidence (www.cochranemsk.org) there was no clinically important or statistically significant evidence that exercise therapy can improve functional ability, quality of life, aerobic capacity or pain. 
The low number of available RCTs limits the generalisability. The included and excluded studies were all consistent about the adverse effects of exercise therapy; no short‐term detrimental effects of exercise therapy were found in any study. Both included and excluded studies showed that exercise does not exacerbate arthritis. The large heterogeneity in outcome measures, as seen in this review, emphasises the need for a standardised assessment or a core set of functional and physical outcome measurements suited for health research to generate evidence about the possible benefits of exercise therapy for patients with JIA. Although the short‐term effects look promising, the long‐term effect of exercise therapy remains unclear. |
t7 | The aim of this Cochrane Review was to find out if adjustable sutures (stitches) are better than non‐adjustable sutures for strabismus (squint) surgery. Cochrane researchers collected and analysed all relevant studies to answer this question and found one study. The review shows that there is an evidence gap on this topic. The Cochrane researchers found only one small study to answer this question and the results were uncertain. Strabismus occurs when the eye deviates (moves) from its normally perfect alignment. This is commonly known as a squint. Strabismus can be corrected by surgery on the muscles surrounding the eye. A variety of surgical techniques are available, including the use of adjustable or non‐adjustable sutures. There is uncertainty as to which of these suture techniques results in a better alignment of the eye and whether there are any disadvantages to the techniques. Cochrane researchers found one relevant study from Egypt. Sixty children under the age of 12 years took part in the study which compared adjustable with non‐adjustable sutures and followed participants for six months. Clinically, there may be a small increased chance of a successful outcome with adjustable sutures, but the results showed no statistical difference. | Strabismus, or squint, can be defined as a deviation from perfect ocular alignment and can be classified in many ways according to its aetiology and presentation. Treatment can be broadly divided into medical and surgical options, with a variety of surgical techniques being available, including the use of adjustable or non‐adjustable sutures for the extraocular muscles. There exists an uncertainty as to which of these techniques produces a better surgical outcome, and an opinion that the adjustable suture technique may be of greater benefit in certain situations. 
Objectives To determine if either an adjustable suture or non‐adjustable suture technique is associated with a more accurate long‐term ocular alignment and to identify specific situations in which it would be of benefit to use a particular method. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (which contains the Cochrane Eyes and Vision Trials Register) (2017, Issue 5); Ovid MEDLINE; Ovid Embase; LILACS; the ISRCTN registry; ClinicalTrials.gov and the ICTRP. The date of the search was 13 June 2017. We contacted experts in the field for further information. Selection criteria We included only randomised controlled trials (RCTs) comparing adjustable to non‐adjustable sutures for strabismus surgery. Data collection and analysis We used standard procedures recommended by Cochrane. Two review authors independently screened search results and extracted data. We graded the certainty of the evidence using the GRADE approach. We identified one RCT comparing adjustable and non‐adjustable sutures in primary horizontal strabismus surgeries in 60 children aged less than 12 years in Egypt. The study was not masked and we judged it at high risk of detection bias. Ocular alignment was defined as orthophoria or a horizontal tropia of 8 prism dioptres (PD) or less at near and far distances. At six months, there may be a small increased chance of ocular alignment with adjustable sutures compared with non‐adjustable sutures clinically, however, the confidence intervals (CIs) were wide and were compatible with an increased chance of ocular alignment in the non‐adjustable sutures group, so there was no statistical difference (risk ratio (RR) 1.18, 95% CI 0.91 to 1.53). We judged this to be low‐certainty evidence, downgrading for imprecision and risk of bias. At six months, 730 per 1000 children in the non‐adjustable sutures group had ocular alignment. The study authors reported that there were no complications during surgery. 
The trial did not assess patient satisfaction, resource use or costs. We could reach no reliable conclusions regarding which technique (adjustable or non‐adjustable sutures) produced a more accurate long‐term ocular alignment following strabismus surgery, or in which specific situations one technique is of greater benefit than the other, given the low‐certainty evidence from just one study. More high‐quality RCTs are needed to obtain clinically valid results and to clarify these issues. Such trials should ideally 1. recruit participants with any type of strabismus or specify the subgroup of participants to be studied, for example, thyroid, paralytic, non‐paralytic, paediatric; 2. randomise all consenting participants to have either adjustable or non‐adjustable surgery prospectively; 3. have at least six months of follow‐up data; and 4. include reoperation rates as an outcome measure. |
t8 | Gout is caused by crystal formation in the joints due to high uric acid levels in the blood. People have attacks of painful, warm and swollen joints, often in the big toe. Some people develop large accumulations of crystals just beneath the skin known as tophi. Cure can be achieved if uric acid levels in blood return to normal for a prolonged time, allowing the crystal deposits to dissolve. Dietary supplements are preparations such as vitamins, essential minerals, prebiotics, etc. Few studies evaluate their benefits and some might not be free of harm. We found two studies. The first study (120 participants) compared enriched skim milk powder (with peptides with probable anti‐inflammatory effect) to standard skim milk and to lactose powder, and the second study (40 participants) compared vitamin C with allopurinol. In the first study, the enriched milk aimed to reduce the frequency of gout attacks, while in the second study the vitamin C aimed to reduce the uric acid levels in blood. People with gout enrolled in both studies were predominantly middle‐aged men; in the skim milk study, participants' gout appeared severe, as they had very frequent attacks and 20% to 43% presented with tophi, while in the vitamin C study, participants appeared representative of people with gout in general. Withdrawals due to adverse events: 4 more people out of 100 who consumed enriched skim milk powder discontinued the supplement at three months (4% more withdrawals). Pain reduction, serum uric acid (sUA) levels and physical function were uncertain. Effect on tophus regression was not measured. People who consumed vitamin C showed an sUA level reduction of 0.014 mmol/L after eight weeks (or 2.8% sUA reduction). People who were administered allopurinol showed an sUA level reduction of 0.118 mmol/L after eight weeks (or 23.6% sUA reduction). There were no reports of side effects or withdrawals due to side effects in the vitamin C or allopurinol treatment groups.
Effects of vitamin C on gout attacks, pain reduction, physical function and tophus regression were not measured. We do not have precise information about side effects and complications, but possible side effects may include nausea or diarrhoea. Compared with the commonly used medicine allopurinol, low‐quality evidence from one study indicated the effect of vitamin C in reducing sUA levels is smaller and probably clinically unimportant. Other possible benefits of vitamin C are uncertain, as they were not evaluated in the study. | Dietary supplements are frequently used for the treatment of several medical conditions, both prescribed by physicians or self administered. However, evidence of benefit and safety of these supplements is usually limited or absent. Objectives To assess the efficacy and safety of dietary supplementation for people with chronic gout. Search methods We performed a search in the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and CINAHL on 6 June 2013. We applied no date or language restrictions. In addition, we performed a handsearch of the abstracts from the 2010 to 2013 American College of Rheumatology (ACR) and European League against Rheumatism (EULAR) conferences, checked the references of all included studies and trial registries. Selection criteria We considered all published randomised controlled trials (RCTs) or quasi‐RCTs that compared dietary supplements with no supplements, placebo, another supplement or pharmacological agents for adults with chronic gout for inclusion. Dietary supplements included, but were not limited to, amino acids, antioxidants, essential minerals, polyunsaturated fatty acids, prebiotic agents, probiotic agents and vitamins. The main outcomes were reduction in frequency of gouty attacks and trial participant withdrawal due to adverse events. We also considered pain reduction, health‐related quality of life, serum uric acid (sUA) normalisation, function (i.e. 
activity limitation), tophus regression and the rate of serious adverse events. Data collection and analysis We used standard methodological procedures expected by The Cochrane Collaboration. We identified two RCTs (160 participants) that fulfilled our inclusion criteria. As these two trials evaluated different diet supplements (enriched skim milk powder (SMP) and vitamin C) with different outcomes (gout flare prevention for enriched SMP and sUA reduction for vitamin C), we reported the results separately. One trial including 120 participants, at moderate risk of bias, compared SMP enriched with glycomacropeptides (GMP) with unenriched SMP and with lactose over three months. Participants were predominantly men aged in their 50's who had severe gout. The frequency of acute gout attacks, measured as the number of flares per month, decreased in all three groups over the study period. The effects of enriched SMP (SMP/GMP/G600) compared with the combined control groups (SMP and lactose powder) at three months in terms of mean number of gout flares per month were uncertain (mean ± standard deviation (SD) flares per month: 0.49 ± 1.52 in SMP/GMP/G60 group versus 0.70 ± 1.28 in control groups; mean difference (MD) ‐0.21, 95% confidence interval (CI) ‐0.76 to 0.34; low‐quality evidence). The number of withdrawals due to adverse effects was similar in both groups although again the results were imprecise (7/40 in SMP/GMP/G600 group versus 11/80 in control groups; risk ratio (RR) 1.27, 95% CI 0.53 to 3.03; low‐quality evidence). The findings for adverse events were also uncertain (2/40 in SMP/GMP/G600 group versus 3/80 in control groups; RR 1.33, 95% CI 0.23 to 7.66; low‐quality evidence). Gastrointestinal events were the most commonly reported adverse effects. 
Pain from self reported gout flares (measured on a 10‐point Likert scale) improved slightly more in the SMP/GMP/G600 group compared with controls (mean ± SD reduction ‐1.97 ± 2.28 points in SMP/GMP/G600 group versus ‐0.94 ± 2.25 in control groups; MD ‐1.03, 95% CI ‐1.96 to ‐0.10; low‐quality evidence). This was an absolute reduction of 10% (95% CI 20% to 1% reduction), which may not be of clinical relevance. Results were imprecise for the outcome improvement in physical function (mean ± SD Health Assessment Questionnaire (HAQ)‐II (scale 0 to 3, 0 = no disability): 0.08 ± 0.23 in SMP/GMP/G60 group versus 0.11 ± 0.31 in control groups; MD ‐0.03, 95% CI ‐0.14 to 0.08; low‐quality evidence). Similarly, results for sUA reduction were imprecise (mean ± SD reduction: ‐0.025 ± 0.067 mmol/L in SMP/GMP/G60 group versus ‐0.010 ± 0.069 in control groups; MD ‐0.01, 95% CI ‐0.04 to 0.01; low‐quality evidence). The study did not report tophus regression and health‐related quality of life impact. One trial including 40 participants, at moderate to high risk of bias, compared vitamin C alone with allopurinol and with allopurinol plus vitamin C in a three‐arm trial. We only compared vitamin C with allopurinol in this review. Participants were predominantly middle‐aged men, and their severity of gout was representative of gout in general. The effect of vitamin C on the rate of gout attacks was not assessed. Vitamin C did not lower sUA as much as allopurinol (‐0.014 mmol/L in vitamin C group versus ‐0.118 mmol/L in allopurinol group; MD 0.10, 95% CI 0.06 to 0.15; low‐quality evidence). The study did not assess tophus regression, pain reduction or disability or health‐related quality of life impact. The study reported no adverse events and no participant withdrawal due to adverse events. While dietary supplements may be widely used for gout, this review has shown a paucity of high‐quality evidence assessing dietary supplementation. |
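The summary pairs each absolute sUA drop with a percentage (0.014 mmol/L as 2.8%, 0.118 mmol/L as 23.6%); the two sets of figures agree if one assumes a baseline sUA of about 0.5 mmol/L. Here is a minimal sketch of that arithmetic; the 0.5 mmol/L baseline is an inference made for illustration, not a value the review reports.

```python
# Hypothetical worked example: converting the absolute sUA reductions into
# the percentage reductions quoted in the summary. The 0.5 mmol/L baseline
# is an assumption inferred from those figures, not a value the review reports.

def percent_reduction(absolute_drop_mmol_l, baseline_mmol_l=0.5):
    """Percentage fall in serum uric acid for a given absolute drop."""
    return round(100 * absolute_drop_mmol_l / baseline_mmol_l, 1)

print(percent_reduction(0.014))  # 2.8 (vitamin C)
print(percent_reduction(0.118))  # 23.6 (allopurinol)
```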
t9 | Priapism (the prolonged painful erection of the penis) is common in males with sickle cell disease. The length of time priapism lasts differs for different types and so does the medical treatment for it. Self‐management approaches may be helpful. We looked for randomised controlled trials of different treatments to find the best option. We found three trials set in Jamaica, Nigeria and the UK involving 102 people. In the trials, four different drug treatments (stilboestrol, sildenafil, ephedrine and etilefrine) were compared to placebo. The trials all looked at whether the treatments reduced how often attacks of priapism occurred. There was no difference between any of the treatments compared to placebo. Due to lack of evidence, we are not able to conclude the best treatment of priapism in sickle cell disease. We considered the quality of evidence to be low to very low as all of the trials were at risk of bias and all had low participant numbers. | Sickle cell disease comprises a group of genetic haemoglobin disorders. The predominant symptom associated with sickle cell disease is pain resulting from the occlusion of small blood vessels by abnormally 'sickle‐shaped' red blood cells. There are other complications, including chronic organ damage and prolonged painful erection of the penis, known as priapism. Severity of sickle cell disease is variable, and treatment is usually symptomatic. Priapism affects up to half of all men with sickle cell disease, however, there is no consistency in treatment. We therefore need to know the best way of treating this complication in order to offer an effective interventional approach to all affected individuals. Objectives To assess the benefits and risks of different treatments for stuttering (repeated short episodes) and fulminant (lasting for six hours or more) priapism in sickle cell disease. 
Search methods We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register, which comprises references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. We also searched trial registries. Date of the most recent search of the Group's Haemoglobinopathies Trials Register: 15 September 2017. Date of most recent search of trial registries and of Embase: 12 December 2016. Selection criteria All randomised or quasi‐randomised controlled trials comparing non‐surgical or surgical treatment with placebo or no treatment, or with another intervention for stuttering or fulminant priapism. Data collection and analysis The authors independently extracted data and assessed the risk of bias of the trials. Three trials with 102 participants were identified and met the criteria for inclusion in this review. These trials compared stilboestrol to placebo, sildenafil to placebo and ephedrine or etilefrine to placebo, and ranged in duration from two weeks to six months. All of the trials were conducted in an outpatient setting in Jamaica, Nigeria and the UK. None of the trials measured our first primary outcome, detumescence, but all three trials reported on the reduction in frequency of stuttering priapism, our second primary outcome. No significant effect of any of the treatments was seen compared to placebo. Immediate side effects were not found to be significantly different from placebo in the two trials where this information was reported. We considered the quality of evidence to be low to very low as all of the trials were at risk of bias and all had low participant numbers. There is a lack of evidence for the benefits or risks of the different treatments for both stuttering and fulminant priapism in sickle cell disease.
This systematic review has clearly identified the need for well‐designed, adequately‐powered, multicentre randomised controlled trials assessing the effectiveness of specific interventions for priapism in sickle cell disease. |
t10 | Reducing blood pressure with drugs has been a strategy used in patients suffering an acute event in the heart or in the brain, such as a heart attack or stroke. There is controversy over whether these drugs should be used in the immediate period after these events, and which type of drug renders the most benefit. This review looked at all studies in which patients were randomized to one of these drugs or to placebo during this period. One class of blood pressure lowering drug, the so‐called nitrates, demonstrated a reduction in mortality in patients with heart attack: for every 1000 patients treated, 4 to 8 deaths were prevented during the first 2 days of this acute event. The ACE‐inhibitor class also decreases mortality when treatment is continued for 10 days (3 to 5 deaths prevented per 1000). | Acute cardiovascular events represent a therapeutic challenge. Blood pressure lowering drugs are commonly used and recommended in the early phase of these settings. This review analyses randomized controlled trial (RCT) evidence for this approach. Objectives To determine the effect of immediate and short‐term administration of anti‐hypertensive drugs on all‐cause mortality, total non‐fatal serious adverse events (SAE) and blood pressure, in patients with an acute cardiovascular event, regardless of blood pressure at the time of enrollment. Search methods MEDLINE, EMBASE, and the Cochrane clinical trial register from Jan 1966 to February 2009 were searched. Reference lists of articles were also browsed. In case of missing information from retrieved articles, authors were contacted. Selection criteria Randomized controlled trials (RCTs) comparing an anti‐hypertensive drug with placebo or no treatment administered to patients within 24 hours of the onset of an acute cardiovascular event. Data collection and analysis Two reviewers independently extracted data and assessed risk of bias. A fixed effects model with 95% confidence intervals (CI) was used.
Sensitivity analyses were also conducted. Sixty‐five RCTs (N=166,206) were included, evaluating four classes of anti‐hypertensive drugs: ACE inhibitors (12 trials), beta‐blockers (20), calcium channel blockers (18) and nitrates (18). Acute stroke was studied in 6 trials (all involving CCBs). Acute myocardial infarction was studied in 59 trials. In the latter setting, immediate nitrate treatment (within 24 hours) reduced all‐cause mortality during the first 2 days (RR 0.81, 95% CI [0.74, 0.89], p<0.0001). No further benefit was observed with nitrate therapy beyond this point. ACE inhibitors did not reduce mortality at 2 days (RR 0.91, 95% CI [0.82, 1.00]), but did after 10 days (RR 0.93, 95% CI [0.87, 0.98], p=0.01). No other blood pressure lowering drug administered as an immediate or short‐term treatment produced a statistically significant mortality reduction at 2, 10 or ≥30 days. There were not enough data on acute stroke, and there were no RCTs evaluating other acute cardiovascular events. Nitrates reduce mortality (4‐8 deaths prevented per 1000) at 2 days when administered within 24 hours of symptom onset of an acute myocardial infarction. No mortality benefit was seen when treatment continued beyond 48 hours. The mortality benefit of immediate treatment with ACE inhibitors post MI did not reach statistical significance at 2 days but was significant at 10 days (3‐5 deaths prevented per 1000). There is good evidence for lack of a mortality benefit with immediate or short‐term treatment with beta‐blockers and calcium channel blockers for acute myocardial infarction. |
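The "deaths prevented per 1000" figures combine the relative risk with a baseline mortality risk. Here is a minimal sketch of that conversion; RR 0.81 is the review's estimate for immediate nitrates, but the 2% and 4% baseline 2‐day mortality risks are assumptions chosen to reproduce the review's 4 to 8 per 1000 range, not values the review reports.

```python
# Hypothetical worked example: absolute risk reduction per 1000 patients
# from a relative risk. RR 0.81 is the review's estimate for immediate
# nitrates; the baseline 2-day mortality risks are assumptions for
# illustration only.

def deaths_prevented_per_1000(baseline_risk, relative_risk):
    """Deaths averted per 1000 treated = 1000 * baseline * (1 - RR)."""
    return round(1000 * baseline_risk * (1 - relative_risk), 1)

print(deaths_prevented_per_1000(0.02, 0.81))  # 3.8
print(deaths_prevented_per_1000(0.04, 0.81))  # 7.6
```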
t11 | Venous leg ulcers are a common and recurring type of chronic wound. Compression therapy (bandages or stockings) is used to treat venous leg ulcers. Dressings which aim to protect the wound and provide an environment that will help it to heal are used underneath compression. Protease‐modulating dressings are one of several types of dressing available. Wounds that are slower to heal are thought to have higher levels of proteases (enzymes that break down proteins). Protease‐modulating dressings are designed to lower protease activity and help wounds to heal. A test to detect high levels of protease activity has also been introduced. A 'test and treat' strategy involves testing for elevated proteases and then using protease‐modulating treatments in ulcers which show elevated protease levels. It is important to know if using both the test and the treatment together can improve healing of leg ulcers. What we found In January 2016 we searched for as many relevant studies as possible that were randomised controlled trials, and which compared a 'test and treat' strategy with another treatment in people with venous leg ulcers. We did not find any eligible randomised studies. We found one ongoing study which might be relevant but could not obtain any more information on this. Research is still needed to find out if it is helpful to test venous leg ulcers for high levels of protease activity and then treat high levels using protease‐modulating treatments. This review is part of a set of reviews investigating different aspects of using protease‐modulating treatments in people with venous leg ulcers. | Venous leg ulcers are a common and recurring type of complex wound. They can be painful, malodorous, prone to infection and slow to heal. Standard treatment includes compression therapy and a dressing. The use of protease‐modulating treatments for venous leg ulcers is increasing. 
These treatments are based on some evidence that a proportion of slow to heal ulcers have elevated protease activity in the wound. Point‐of‐care tests which aim to detect elevated protease activity are now available. A 'test and treat' strategy involves testing for elevated proteases and then using protease‐modulating treatments in ulcers which show elevated protease levels. Objectives To determine the effects on venous leg ulcer healing of a 'test and treat' strategy involving detection of high levels of wound protease activity and treatment with protease‐modulating therapies, compared with alternative treatment strategies such as using the same treatment for all participants or using a different method of treatment selection. Search methods We searched the following electronic databases to identify reports of relevant randomised clinical trials: the Cochrane Wounds Group Specialised Register (January 2016); the Cochrane Central Register of Controlled Trials (CENTRAL; The Cochrane Library 2015, Issue 12); Ovid MEDLINE (1946 to January 2016); Ovid MEDLINE (In‐Process & Other Non‐Indexed Citations, January 2016); Ovid EMBASE (1974 to January 2016); EBSCO CINAHL (1937 to January 2016). We also searched three clinical trials registers, reference lists and the websites of regulatory agencies. There were no restrictions with respect to language, date of publication or study setting. Selection criteria Published or unpublished RCTs which assessed a test and treat strategy for elevated protease activity in venous leg ulcers in adults, compared with an alternative treatment strategy. The test and treat strategy needed to be the only systematic difference between the groups. Data collection and analysis Two review authors independently performed study selection; we planned that two authors would also assess risk of bias and extract data. We did not identify any studies which met the inclusion criteria for this review.
We identified one ongoing study; it was unclear whether this would be eligible for inclusion. Currently there is no randomised evidence on the impact of a test and treat policy for protease levels on outcomes in people with venous leg ulcers. |
t12 | LBP is very common. While most back pain gets better without medical treatment, about 10% of cases last for three months or more. There are many therapies that are used to treat the pain and improve the lives of individuals with back pain. Massage is one of these treatments. In total we included 25 RCTs and 3096 participants in this review update. Only one trial included patients with acute LBP (pain duration less than four weeks), while all the others included patients with sub‐acute (four to 12 weeks) or chronic LBP (12 weeks or longer). In three studies, massage was applied using a mechanical device (such as a metal bar to increase the compression to the skin or a vibrating instrument), and in the remaining trials it was done using the hands. Pain intensity and quality were the most common outcomes measured in these studies, followed by back‐related function, such as walking, sleeping, bending and lifting weights. Study funding sources: seven studies did not report their sources of funding; sixteen studies were funded by not‐for‐profit organizations; one study reported not receiving any funding; and one study was funded by a College of Massage Therapists. There were eight studies comparing massage to interventions that are not expected to improve outcomes (inactive controls) and 13 studies comparing massage to other interventions expected to improve outcomes (active controls). Massage was better than inactive controls for pain and function in the short‐term, but not in the long‐term follow‐up. Massage was better than active controls for pain at both the short and long‐term follow‐ups, but we found no differences for function at either follow‐up. There were no reports of serious adverse events in any of these trials. The most common adverse event was increased pain intensity, reported in 1.5% to 25% of participants. | Low‐back pain (LBP) is one of the most common and costly musculoskeletal problems in modern society.
It is experienced by 70% to 80% of adults at some time in their lives. Massage therapy has the potential to minimize pain and speed return to normal function. Objectives To assess the effects of massage therapy for people with non‐specific LBP. Search methods We searched PubMed to August 2014, and the following databases to July 2014: MEDLINE, EMBASE, CENTRAL, CINAHL, LILACS, Index to Chiropractic Literature, and Proquest Dissertation Abstracts. We also checked reference lists. There were no language restrictions. Selection criteria We included only randomized controlled trials of adults with non‐specific LBP classified as acute, sub‐acute or chronic. Massage was defined as soft‐tissue manipulation using the hands or a mechanical device. We grouped the comparison groups into two types: inactive controls (sham therapy, waiting list, or no treatment), and active controls (manipulation, mobilization, TENS, acupuncture, traction, relaxation, physical therapy, exercises or self‐care education). Data collection and analysis We used standard Cochrane methodological procedures and followed CBN guidelines. Two independent authors performed article selection, data extraction and critical appraisal. In total we included 25 trials (3096 participants) in this review update. The majority were funded by not‐for‐profit organizations. One trial included participants with acute LBP, and the remaining trials included people with sub‐acute or chronic LBP (CLBP). In three trials massage was done with a mechanical device, and the remaining trials used only the hands. The most common types of bias in these studies were performance and measurement bias, because it is difficult to blind participants, massage therapists and those measuring outcomes. We judged the quality of the evidence to be "low" to "very low", and the main reasons for downgrading the evidence were risk of bias and imprecision. There was no suggestion of publication bias.
For acute LBP, massage was found to be better than inactive controls for pain (SMD ‐1.24, 95% CI ‐1.85 to ‐0.64; participants = 51; studies = 1) in the short‐term, but not for function (SMD ‐0.50, 95% CI ‐1.06 to 0.06; participants = 51; studies = 1). For sub‐acute and chronic LBP, massage was better than inactive controls for pain (SMD ‐0.75, 95% CI ‐0.90 to ‐0.60; participants = 761; studies = 7) and function (SMD ‐0.72, 95% CI ‐1.05 to ‐0.39; participants = 725; studies = 6) in the short‐term, but not in the long‐term; however, when compared to active controls, massage was better for pain, both in the short‐term (SMD ‐0.37, 95% CI ‐0.62 to ‐0.13; participants = 964; studies = 12) and long‐term follow‐up (SMD ‐0.40, 95% CI ‐0.80 to ‐0.01; participants = 757; studies = 5), but no differences were found for function (in either the short or long‐term). There were no reports of serious adverse events in any of these trials. Increased pain intensity was the most common adverse event, reported in 1.5% to 25% of the participants. We have very little confidence that massage is an effective treatment for LBP. Participants with acute, sub‐acute and chronic LBP had improvements in pain outcomes with massage only in the short‐term follow‐up. Functional improvement was observed in participants with sub‐acute and chronic LBP when compared with inactive controls, but only in the short‐term follow‐up. There were only minor adverse effects with massage. |
t13 | Individuals with mildly elevated blood pressures, but no previous cardiovascular events, make up the majority of those considered for and receiving antihypertensive therapy. The decision to treat this population has important consequences for both the patients (e.g. adverse drug effects, lifetime of drug therapy, cost of treatment, etc.) and any third party payer (e.g. high cost of drugs, physician services, laboratory tests, etc.). In this review, existing evidence comparing the health outcomes between treated and untreated individuals is summarized. Data from the limited number of available trials and participants showed no difference between treated and untreated individuals in heart attack, stroke, and death. About 9% of patients treated with drugs discontinued treatment due to adverse effects. Therefore, the benefits and harms of antihypertensive drug therapy in this population need to be investigated by further research. | People with no previous cardiovascular events or cardiovascular disease represent a primary prevention population. The benefits and harms of treating mild hypertension in primary prevention patients are not known at present. This review examines the existing randomised controlled trial (RCT) evidence. Objectives Primary objective: To quantify the effects of antihypertensive drug therapy on mortality and morbidity in adults with mild hypertension (systolic blood pressure (BP) 140‐159 mmHg and/or diastolic BP 90‐99 mmHg) and without cardiovascular disease. Search methods We searched The Cochrane Central Register of Controlled Trials (CENTRAL) 2013 Issue 9, MEDLINE (1946 to October 2013), EMBASE (1974 to October 2013), ClinicalTrials.gov (all dates to October 2013), and reference lists of articles.
The Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effectiveness (DARE) were searched for previous reviews and meta‐analyses of anti‐hypertensive drug treatment compared to placebo or no treatment trials until the end of 2011. Selection criteria RCTs of at least 1 year duration. Data collection and analysis The outcomes assessed were mortality, stroke, coronary heart disease (CHD), total cardiovascular events (CVS), and withdrawals due to adverse effects. Of the 11 RCTs identified, 4 were included in this review, with 8,912 participants. Treatment for 4 to 5 years with antihypertensive drugs as compared to placebo did not reduce total mortality (RR 0.85, 95% CI 0.63, 1.15). In 7,080 participants treatment with antihypertensive drugs as compared to placebo did not reduce coronary heart disease (RR 1.12, 95% CI 0.80, 1.57), stroke (RR 0.51, 95% CI 0.24, 1.08), or total cardiovascular events (RR 0.97, 95% CI 0.72, 1.32). Withdrawals due to adverse effects were increased by drug therapy (RR 4.80, 95% CI 4.14, 5.57); absolute risk increase (ARI) 9%. Antihypertensive drugs used in the treatment of adults (primary prevention) with mild hypertension (systolic BP 140‐159 mmHg and/or diastolic BP 90‐99 mmHg) have not been shown to reduce mortality or morbidity in RCTs. Treatment caused 9% of patients to discontinue treatment due to adverse effects. More RCTs are needed in this prevalent population to know whether the benefits of treatment exceed the harms. |
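The 9% figure above is an absolute risk increase (ARI), which ties the risk ratio for withdrawals to the control-group withdrawal rate via ARI = control risk × (RR − 1). A small sketch; the placebo-group withdrawal rate below is inferred for illustration and is not a figure reported in the review:

```python
def absolute_risk_increase(rr, control_risk):
    """Absolute risk increase implied by a risk ratio (RR) and the
    control-group event risk: ARI = control_risk * (RR - 1)."""
    return control_risk * (rr - 1)

# RR 4.80 for withdrawals due to adverse effects (reported in the review);
# a placebo-group withdrawal rate of about 2.4% is an inferred assumption.
print(round(absolute_risk_increase(4.80, 0.024) * 100, 1))  # -> 9.1 (%)
```

An assumed placebo-group rate near 2.4% is what makes an RR of 4.80 correspond to roughly the 9% ARI quoted in the abstract.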
t14 | Depression affects 350 million people worldwide, impacting on quality of life, work, relationships and physical health. Medication and talking therapies are not always suitable or available. Dance movement therapy (DMT) uses bodily movements to explore and express emotions with groups or individuals. This is the first review of the effectiveness of DMT for depression and will add to the evidence base regarding depression treatments. Databases were searched for all published and unpublished randomised controlled studies of DMT for depression up to October 2014, with participants of any age, gender or ethnicity. Three studies (147 participants) met inclusion criteria: two of adults (men and women); and one of adolescents (females only). Due to the low number of studies and low quality of evidence, it was not possible to draw firm conclusions about the effectiveness of DMT for depression. It was not possible to compare DMT with medication, talking therapies or physical treatments, or to compare types of DMT, due to lack of available evidence. Overall, there is no evidence for or against DMT as a treatment for depression. There is some evidence to suggest DMT is more effective than standard care for adults, but this effect did not reach clinical significance. DMT is no more effective than standard care for young people. Evidence from just one study of low methodological quality suggested that drop‐out rates from the DMT group were not significantly different from controls, and there is no reliable effect in either direction for quality of life or self‐esteem. A large positive effect was observed for social functioning, but since this was from one study of low methodological quality the result is imprecise. | Depression is a debilitating condition affecting more than 350 million people worldwide ( WHO 2012 ) with a limited number of evidence‐based treatments.
Drug treatments may be inappropriate due to side effects and cost, and not everyone can use talking therapies. There is a need for evidence‐based treatments that can be applied across cultures and with people who find it difficult to verbally articulate thoughts and feelings. Dance movement therapy (DMT) is used with people from a range of cultural and intellectual backgrounds, but effectiveness remains unclear. Objectives To examine the effects of DMT for depression with or without standard care, compared to no treatment or standard care alone, psychological therapies, drug treatment, or other physical interventions. Also, to compare the effectiveness of different DMT approaches. Search methods The Cochrane Depression, Anxiety and Neurosis Review Group's Specialised Register (CCDANCTR‐Studies and CCDANCTR‐References) and CINAHL were searched (to 2 Oct 2014) together with the World Health Organization's International Clinical Trials Registry Platform (WHO ICTRP) and ClinicalTrials.gov. The review authors also searched the Allied and Complementary Medicine Database (AMED), the Education Resources Information Center (ERIC) and Dissertation Abstracts (to August 2013), handsearched bibliographies, contacted professional associations, educational programmes and dance therapy experts worldwide. Selection criteria Inclusion criteria were: randomised controlled trials (RCTs) studying outcomes for people of any age with depression as defined by the trialist, with at least one group being DMT. DMT was defined as: participatory dance movement with clear psychotherapeutic intent, facilitated by an individual with a level of training that could be reasonably expected within the country in which the trial was conducted. For example, in the USA this would either be a trainee, or qualified and credentialed by the American Dance Therapy Association (ADTA).
In the UK, the therapist would either be in training with, or accredited by, the Association for Dance Movement Psychotherapy (ADMP, UK). Similar professional bodies exist in Europe, but in some countries (e.g. China) where the profession is in development, a lower level of qualification would mirror the situation some decades previously in the USA or UK. Hence, the review authors accepted a relevant professional qualification (e.g. nursing or psychodynamic therapies) plus a clear description of the treatment that would indicate its adherence to published guidelines including Levy 1992, ADMP UK 2015, Meekums 2002, and Karkou 2006. Data collection and analysis Study methodological quality was evaluated and data were extracted independently by the first two review authors using a data extraction form, the third author acting as an arbitrator. Three studies totalling 147 participants (107 adults and 40 adolescents) met the inclusion criteria. Seventy‐four participants took part in DMT treatment, while 73 comprised the control groups. Two studies included male and female adults with depression. One of these studies included outpatient participants; the other study was conducted with inpatients at an urban hospital. The third study reported findings with female adolescents in a middle‐school setting. All included studies collected continuous data using two different depression measures: the clinician‐completed Hamilton Depression Rating Scale (HAM‐D); and the Symptom Checklist‐90‐R (SCL‐90‐R) (self‐rating scale). Statistical heterogeneity was identified between the three studies. There was no reliable effect of DMT on depression (SMD ‐0.67, 95% CI ‐1.40 to 0.05; very low quality evidence). A planned subgroup analysis indicated a positive effect in adults (two studies, 107 participants), but this failed to meet clinical significance (SMD ‐7.33, 95% CI ‐9.92 to ‐4.73).
One adult study reported drop‐out rates, found to be non‐significant (odds ratio 1.82, 95% CI 0.35 to 9.45; low quality evidence). One study measured social functioning, demonstrating a large positive effect (MD ‐6.80, 95% CI ‐11.44 to ‐2.16; very low quality evidence), but this result was imprecise. One study showed no effect in either direction for quality of life (0.30, 95% CI ‐0.60 to 1.20; low quality evidence) or self‐esteem (1.70, 95% CI ‐2.36 to 5.76; low quality evidence). The low‐quality evidence from three small trials with 147 participants does not allow any firm conclusions to be drawn regarding the effectiveness of DMT for depression. Larger trials of high methodological quality are needed to assess DMT for depression, with economic analyses and acceptability measures, and for all age groups. |
t15 | The aim of this Cochrane Review is to find out whether certain antibiotics are more effective in treating scrub typhus. We collected and analysed all relevant studies to answer this question and included seven studies. Tetracycline, doxycycline, azithromycin, and rifampicin are effective antibiotics for scrub typhus treatment that have led to few treatment failures. For specific outcomes, some low‐certainty evidence suggests there may be little or no difference between tetracycline, doxycycline, and azithromycin. Healthcare workers should not use rifampicin as a first‐line treatment. Researchers should standardize the way they diagnose and assess scrub typhus. Scrub typhus is an important cause of fever in Asia. We studied people with scrub typhus diagnosed by health professionals and confirmed by laboratory tests. We compared different antibiotic treatments. We looked at whether choice of antibiotic made a difference in the number of people who experienced failed treatment, and we determined the proportions who had resolution of fever at 48 hours. We found seven relevant studies. Only one study included children younger than 15 years. We are uncertain whether doxycycline compared to tetracycline affects treatment failure, as the certainty of the evidence is very low. Studies looked at resolution of fever within five days. Doxycycline compared to tetracycline may make little or no difference in the proportion of patients with resolution of fever within 48 hours and in time to defervescence. Studies did not formally report serious adverse events. We are uncertain whether macrolides compared to doxycycline affect treatment failure, resolution of fever within 48 hours, time to defervescence, or serious adverse events, as the certainty of the evidence is very low. Macrolides compared to doxycycline may make little or no difference in the proportion of patients with resolution of fever within five days.
We are uncertain whether rifampicin compared to doxycycline affects treatment failure, proportion of patients with resolution of fever within 48 hours, or time to defervescence, as the certainty of evidence is very low. | Scrub typhus, an important cause of acute fever in Asia, is caused by Orientia tsutsugamushi, an obligate intracellular bacterium. Antibiotics currently used to treat scrub typhus include tetracyclines, chloramphenicol, macrolides, and rifampicin. Objectives To assess and compare the effects of different antibiotic regimens for treatment of scrub typhus. Search methods We searched the following databases up to 8 January 2018: the Cochrane Infectious Diseases Group specialized trials register; CENTRAL, in the Cochrane Library (2018, Issue 1); MEDLINE; Embase; LILACS; and the meta Register of Controlled Trials (mRCT). We checked references and contacted study authors for additional data. We applied no language or date restrictions. Selection criteria Randomized controlled trials (RCTs) or quasi‐RCTs comparing antibiotic regimens in people with the diagnosis of scrub typhus based on clinical symptoms and compatible laboratory tests (excluding the Weil‐Felix test). Data collection and analysis For this update, two review authors re‐extracted all data and assessed the certainty of evidence. We meta‐analysed data to calculate risk ratios (RRs) for dichotomous outcomes when appropriate, and elsewhere tabulated data to facilitate narrative analysis. We included six RCTs and one quasi‐RCT with 548 participants; they took place in the Asia‐Pacific region: Korea (three trials), Malaysia (one trial), and Thailand (three trials). Only one trial included children younger than 15 years (N = 57). We judged five trials to be at high risk of performance and detection bias owing to inadequate blinding. Trials were heterogeneous in terms of dosing of interventions and outcome measures. Across trials, treatment failure rates were low.
Two trials compared doxycycline to tetracycline. For treatment failure, the difference between doxycycline and tetracycline is uncertain (very low‐certainty evidence). Doxycycline compared to tetracycline may make little or no difference in resolution of fever within 48 hours (risk ratio (RR) 1.14, 95% confidence interval (CI) 0.90 to 1.44; 55 participants; one trial; low‐certainty evidence) and in time to defervescence (116 participants; one trial; low‐certainty evidence). We were unable to extract data for other outcomes. Three trials compared doxycycline versus macrolides. For most outcomes, including treatment failure, resolution of fever within 48 hours, time to defervescence, and serious adverse events, we are uncertain whether study results show a difference between doxycycline and macrolides (very low‐certainty evidence). Macrolides compared to doxycycline may make little or no difference in the proportion of patients with resolution of fever within five days (RR 1.05, 95% CI 0.99 to 1.10; 185 participants; two trials; low‐certainty evidence). Another trial compared azithromycin versus doxycycline or chloramphenicol in children, but we were not able to disaggregate data for the doxycycline/chloramphenicol group. One trial compared doxycycline versus rifampicin. For all outcomes, we are uncertain whether study results show a difference between doxycycline and rifampicin (very low‐certainty evidence). Of note, this trial deviated from the protocol after three out of eight patients who had received doxycycline and rifampicin combination therapy experienced treatment failure. Across trials, mild gastrointestinal side effects appeared to be more common with doxycycline than with comparator drugs. Tetracycline, doxycycline, azithromycin, and rifampicin are effective treatment options for scrub typhus and have resulted in few treatment failures. Chloramphenicol also remains a treatment option, but we could not include this among direct comparisons in this review.
Most available evidence is of low or very low certainty. For specific outcomes, some low‐certainty evidence suggests there may be little or no difference between tetracycline, doxycycline, and azithromycin as treatment options. Given very low‐certainty evidence for rifampicin and the risk of inducing resistance in undiagnosed tuberculosis, clinicians should not regard this as a first‐line treatment option. Clinicians could consider rifampicin as a second‐line treatment option after exclusion of active tuberculosis. Further research should consist of additional adequately powered trials of doxycycline versus azithromycin or other macrolides, trials of other candidate antibiotics including rifampicin, and trials of treatments for severe scrub typhus. Researchers should standardize diagnostic techniques and reporting of clinical outcomes to allow robust comparisons. As of 11 April 2019, this review was up to date: all eligible published studies found in the most recent search (8 January 2018) were included, and four ongoing studies had been identified (see the 'Characteristics of ongoing studies' section). |
t16 | We reviewed the evidence on the effects of dietary interventions on pain in children aged between five and 18 years with recurrent abdominal pain (RAP). Recurrent abdominal pain, or RAP, is a term used for unexplained episodes of stomachache or abdominal pain in children. Recurrent abdominal pain is a common condition, and most children are likely to be helped by simple measures. However, a range of treatments have been recommended to relieve abdominal pain, including making changes to the child's eating habits by adding supplements or excluding certain foods. Nineteen studies met our inclusion criteria, including 13 studies of probiotics and four studies of fibre interventions. We also found one study of a diet low in substances known as FODMAPs (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) and one study of a fructose‐restricted diet. All of the studies compared dietary interventions to a placebo or control. The trials were carried out in eight countries and included a total of 1453 participants, aged between five and 18 years. Most children were recruited from outpatient clinics. Most interventions lasted four to six weeks. Probiotics We found evidence from 13 studies suggesting that probiotics might be effective in improving pain in the shorter term. Most studies did not report on other areas such as quality of daily life. We judged this evidence to be of moderate or low quality because some studies were small, showed varying results, or were at risk of bias. Fibre supplements We found no clear evidence of improvement of pain from four studies of fibre supplements. Most studies did not report on other areas such as quality of daily life. There were few studies of fibre supplements, and some of these studies were at risk of bias. Low FODMAP diets We found only one study evaluating the effectiveness of low FODMAP diets in children with RAP. 
Fructose‐restricted diets We found only one study evaluating the effectiveness of fructose‐restricted diets in children with RAP. We found some evidence suggesting that probiotics may be helpful in relieving pain in children with RAP in the short term. Clinicians may therefore consider probiotic interventions as part of the management strategy for RAP. | This is an update of the original Cochrane review, last published in 2009 (Huertas‐Ceballos 2009). Recurrent abdominal pain (RAP), including children with irritable bowel syndrome, is a common problem affecting between 4% and 25% of school‐aged children. For the majority of such children, no organic cause for their pain can be found on physical examination or investigation. Many dietary interventions have been suggested to improve the symptoms of RAP. These may involve either excluding ingredients from the diet or adding supplements such as fibre or probiotics. Objectives To examine the effectiveness of dietary interventions in improving pain in children of school age with RAP. Search methods We searched CENTRAL, Ovid MEDLINE, Embase, eight other databases, and two trials registers, together with reference checking, citation searching and contact with study authors, in June 2016. Selection criteria Randomised controlled trials (RCTs) comparing dietary interventions with placebo or no treatment in children aged five to 18 years with RAP or an abdominal pain‐related, functional gastrointestinal disorder, as defined by the Rome III criteria (Rasquin 2006). Data collection and analysis We used standard methodological procedures expected by Cochrane. We grouped dietary interventions together by category for analysis. We contacted study authors to ask for missing information and clarification, when needed. We included 19 RCTs, reported in 27 papers with a total of 1453 participants. Fifteen of these studies were not included in the previous review. All 19 RCTs had follow‐up ranging from one to five months.
Participants were aged between four and 18 years from eight different countries and were recruited largely from paediatric gastroenterology clinics. The mean age at recruitment ranged from 6.3 years to 13.1 years. Girls outnumbered boys in most trials. Fourteen trials recruited children with a diagnosis under the broad umbrella of RAP or functional gastrointestinal disorders; five trials specifically recruited only children with irritable bowel syndrome. The studies fell into four categories: trials of probiotic‐based interventions (13 studies), trials of fibre‐based interventions (four studies), trials of low FODMAP (fermentable oligosaccharides, disaccharides, monosaccharides and polyols) diets (one study), and trials of fructose‐restricted diets (one study). We found that children treated with probiotics reported a greater reduction in pain frequency at zero to three months postintervention than those given placebo (standardised mean difference (SMD) ‐0.55, 95% confidence interval (CI) ‐0.98 to ‐0.12; 6 trials; 523 children). There was also a decrease in pain intensity in the intervention group at the same time point (SMD ‐0.50, 95% CI ‐0.85 to ‐0.15; 7 studies; 575 children). However, we judged the evidence for these outcomes to be of low quality using GRADE due to an unclear risk of bias from incomplete outcome data and significant heterogeneity. We found that children treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (odds ratio (OR) 1.63, 95% CI 1.07 to 2.47; 7 studies; 722 children). The estimated number needed to treat for an additional beneficial outcome (NNTB) was eight, meaning that eight children would need to receive probiotics for one to experience improvement in pain in this timescale. We judged the evidence for this outcome to be of moderate quality due to significant heterogeneity. 
Children with a symptom profile defined as irritable bowel syndrome treated with probiotics were more likely to experience improvement in pain at zero to three months postintervention than those given placebo (OR 3.01, 95% CI 1.77 to 5.13; 4 studies; 344 children). Children treated with probiotics were more likely to experience improvement in pain at three to six months postintervention compared to those receiving placebo (OR 1.94, 95% CI 1.10 to 3.43; 2 studies; 224 children). We judged the evidence for these two outcomes to be of moderate quality due to small numbers of participants included in the studies. We found that children treated with fibre‐based interventions were not more likely to experience an improvement in pain at zero to three months postintervention than children given placebo (OR 1.83, 95% CI 0.92 to 3.65; 2 studies; 136 children). There was also no reduction in pain intensity compared to placebo at the same time point (SMD ‐1.24, 95% CI ‐3.41 to 0.94; 2 studies; 135 children). We judged the evidence for these outcomes to be of low quality due to an unclear risk of bias, imprecision, and significant heterogeneity. We found only one study of low FODMAP diets and only one trial of fructose‐restricted diets, meaning no pooled analyses were possible. We were unable to perform any meta‐analyses for the secondary outcomes of school performance, social or psychological functioning, or quality of daily life, as not enough studies included these outcomes or used comparable measures to assess them. With the exception of one study, all studies reported monitoring children for adverse events; no major adverse events were reported. Overall, we found moderate‐ to low‐quality evidence suggesting that probiotics may be effective in improving pain in children with RAP. Clinicians may therefore consider probiotic interventions as part of a holistic management strategy. 
However, further trials are needed to examine longer‐term outcomes and to improve confidence in estimating the size of the effect, as well as to determine the optimal strain and dosage. Future research should also explore the effectiveness of probiotics in children with different symptom profiles, such as those with irritable bowel syndrome. We found only a small number of trials of fibre‐based interventions, with overall low‐quality evidence for the outcomes. There was therefore no convincing evidence that fibre‐based interventions improve pain in children with RAP. Further high‐quality RCTs of fibre supplements involving larger numbers of participants are required. Future trials of low FODMAP diets and other dietary interventions are also required to facilitate evidence‐based recommendations. |
t17 | Obesity is associated with many health problems and a higher risk of death. Bariatric surgery for obesity is usually only considered when other treatments have failed. We aimed to compare surgical interventions with non‐surgical interventions for obesity (such as drugs, diet and exercise) and to compare different surgical procedures. Bariatric surgery can be considered for people with a body mass index (BMI = kg/m²) greater than 40, or for those with a BMI less than 40 and obesity‐related diseases such as diabetes. We included 22 studies comparing surgery with non‐surgical interventions, or comparing different types of surgery. Altogether 1496 participants were allocated to surgery and 302 participants to non‐surgical interventions. Most studies followed participants for 12 to 36 months; the longest follow‐up was 10 years. The majority of participants were women and, on average, in their early 30s to early 50s. Seven studies compared surgery with non‐surgical interventions. Due to differences in the way that the studies were designed we decided not to generate an average of their results. The direction of the effect indicated that people who had surgery achieved greater weight loss one to two years afterwards compared with people who did not have surgery. Improvements in quality of life and diabetes were also found. No deaths occurred; reoperations in the surgical intervention groups ranged between 2% and 13%, as reported in five studies. Three studies found that gastric bypass (GB) achieved greater weight loss up to five years after surgery compared with adjustable gastric band (AGB): the BMI at the end of the studies was on average five units less. The GB procedure resulted in greater duration of hospitalisation and a greater number of late major complications. AGB required high rates of reoperation for removal of the gastric band. Seven studies compared GB with sleeve gastrectomy (SG).
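The eligibility rule quoted above (BMI greater than 40, or below 40 with an obesity‐related disease) can be sketched as a simple check. This is an illustration of the arithmetic only, not a clinical decision tool; the function names are invented for the example, and real guidelines typically also set a lower BMI bound for the comorbidity route, which the summary does not state.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bariatric_candidate(bmi_value, has_comorbidity):
    """Simplified sketch of the rule described in the summary: BMI > 40,
    or BMI below 40 with an obesity-related disease such as diabetes.
    Illustrative only; clinical guidelines are more detailed."""
    return bmi_value > 40 or has_comorbidity

# Example: 120 kg at 1.70 m gives a BMI of about 41.5, above the 40 threshold.
print(round(bmi(120, 1.70), 1))
```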
Overall there were no important differences for weight loss, quality of life, comorbidities and complications, although gastro‐oesophageal reflux disease improved in more patients following GB in one study. One death occurred in the GB group. Serious adverse events occurred in 5% of the GB group and 1% of the SG group, as reported in one study. Two studies reported 7% to 24% of people with GB and 3% to 34% of those with SG requiring reoperations. Two studies found that biliopancreatic diversion with duodenal switch resulted in greater weight loss than GB after two or four years in people with a relatively high BMI. BMI at the end of the studies was on average seven units lower. One death occurred in the biliopancreatic diversion group. Reoperations were higher in the biliopancreatic diversion group (16% to 28%) than in the GB group (4% to 8%). One study comparing duodenojejunal bypass with SG versus GB found weight loss outcomes and rates of remission of diabetes and hypertension were similar at 12 months follow‐up. No deaths occurred in either group; reoperation rates were not reported. One study found that BMI was reduced by 10 units more following SG at three years follow‐up compared with AGB. Reoperations occurred in 20% of the AGB group and in 10% of the SG group. One study found no relevant difference in weight‐loss outcomes following gastric imbrication compared with SG. No deaths occurred; 17% of participants in the gastric imbrication group required reoperation. From the information that was available to us about the studies, we were unable to assess how well designed they were. Adverse events and reoperation rates were not consistently reported in the publications of the studies. Most studies followed participants for only one or two years; therefore, the long‐term effects of surgery remain unclear. Few studies assessed the effects of bariatric surgery in treating comorbidities in participants with a lower BMI.
There is therefore a lack of evidence for the use of bariatric surgery in treating comorbidities in people who are overweight or who do not meet standard criteria for bariatric surgery. | Bariatric (weight loss) surgery for obesity is considered when other treatments have failed. The effects of the available bariatric procedures compared with medical management and with each other are uncertain. This is an update of a Cochrane review first published in 2003 and most recently updated in 2009. Objectives To assess the effects of bariatric surgery for overweight and obesity, including the control of comorbidities. Search methods Studies were obtained from searches of numerous databases, supplemented with searches of reference lists and consultation with experts in obesity research. Date of last search was November 2013. Selection criteria Randomised controlled trials (RCTs) comparing surgical interventions with non‐surgical management of obesity or overweight or comparing different surgical procedures. Data collection and analysis Data were extracted by one review author and checked by a second review author. Two review authors independently assessed risk of bias and evaluated overall study quality utilising the GRADE instrument. Twenty‐two trials with 1798 participants were included; sample sizes ranged from 15 to 250. Most studies followed participants for 12, 24 or 36 months; the longest follow‐up was 10 years. The risk of bias across all domains of most trials was uncertain; just one was judged to have adequate allocation concealment. All seven RCTs comparing surgery with non‐surgical interventions found benefits of surgery on measures of weight change at one to two years follow‐up. Improvements for some aspects of health‐related quality of life (QoL) (two RCTs) and diabetes (five RCTs) were also found. Five studies reported data on mortality; no deaths occurred.
Serious adverse events (SAEs) were reported in four studies and ranged from 0% to 37% in the surgery groups and 0% to 25% in the no surgery groups. Between 2% and 13% of participants required reoperations in the five studies that reported these data. Three RCTs found that laparoscopic Roux‐en‐Y gastric bypass (LRYGB) achieved significantly greater weight loss and body mass index (BMI) reduction up to five years after surgery compared with laparoscopic adjustable gastric banding (LAGB). Mean end‐of‐study BMI was lower following LRYGB compared with LAGB: mean difference (MD) ‐5.2 kg/m² (95% confidence interval (CI) ‐6.4 to ‐4.0; P < 0.00001; 265 participants; 3 trials; moderate quality evidence). Evidence for QoL and comorbidities was very low quality. The LRYGB procedure resulted in greater duration of hospitalisation in two RCTs (4 versus 2 days and 3.1 versus 1.5 days) and a greater number of late major complications (26.1% versus 11.6%) in one RCT. In one RCT the LAGB required high rates of reoperation for band removal (9 patients, 40.9%). Open RYGB, LRYGB and laparoscopic sleeve gastrectomy (LSG) led to losses of weight and/or BMI but there was no consistent picture as to which procedure was better or worse in the seven included trials. MD was ‐0.2 kg/m² (95% CI ‐1.8 to 1.3; 353 participants; 6 trials; low quality evidence) in favour of LRYGB. No statistically significant differences in QoL were found (one RCT). Six RCTs reported mortality; one death occurred following LRYGB. SAEs were reported by one RCT and were higher in the LRYGB group (4.5%) than the LSG group (0.9%). Reoperations ranged from 6.7% to 24% in the LRYGB group and 3.3% to 34% in the LSG group. Effects on comorbidities, complications and additional surgical procedures were neutral, except gastro‐oesophageal reflux disease improved following LRYGB (one RCT).
One RCT of people with a BMI of 25 to 35 and type 2 diabetes found laparoscopic mini‐gastric bypass resulted in greater weight loss and improvement of diabetes compared with LSG, and had similar levels of complications. Two RCTs found that biliopancreatic diversion with duodenal switch (BDDS) resulted in greater weight loss than RYGB in morbidly obese patients. End‐of‐study mean BMI loss was greater following BDDS: MD ‐7.3 kg/m² (95% CI ‐9.3 to ‐5.4; P < 0.00001; 107 participants; 2 trials; moderate quality evidence). QoL was similar on most domains. In one study, between 82% and 100% of participants with diabetes had an HbA1c of less than 5% three years after surgery. Reoperations were higher in the BDDS group (16.1% to 27.6%) than the LRYGB group (4.3% to 8.3%). One death occurred in the BDDS group. One RCT comparing laparoscopic duodenojejunal bypass with sleeve gastrectomy versus LRYGB found BMI, excess weight loss, and rates of remission of diabetes and hypertension were similar at 12 months follow‐up (very low quality evidence). QoL, SAEs and reoperation rates were not reported. No deaths occurred in either group. One RCT comparing laparoscopic isolated sleeve gastrectomy (LISG) versus LAGB found greater improvement in weight‐loss outcomes following LISG at three years follow‐up (very low quality evidence). QoL, mortality and SAEs were not reported. Reoperations occurred in 20% of the LAGB group and in 10% of the LISG group. One RCT (unpublished) comparing laparoscopic gastric imbrication with LSG found no statistically significant difference in weight loss between groups (very low quality evidence). QoL and comorbidities were not reported. No deaths occurred. Two participants in the gastric imbrication group required reoperation. Surgery results in greater improvement in weight loss outcomes and weight‐associated comorbidities compared with non‐surgical interventions, regardless of the type of procedures used.
When compared with each other, certain procedures resulted in greater weight loss and improvements in comorbidities than others. Outcomes were similar between RYGB and sleeve gastrectomy, and both of these procedures had better outcomes than adjustable gastric banding. For people with very high BMI, biliopancreatic diversion with duodenal switch resulted in greater weight loss than RYGB. Duodenojejunal bypass with sleeve gastrectomy and laparoscopic RYGB had similar outcomes; however, this is based on one small trial. Isolated sleeve gastrectomy led to better weight‐loss outcomes than adjustable gastric banding after three years follow‐up. This was based on one trial only. Weight‐related outcomes were similar between laparoscopic gastric imbrication and laparoscopic sleeve gastrectomy in one trial. Across all studies adverse event rates and reoperation rates were generally poorly reported. Most trials followed participants for only one or two years; therefore, the long‐term effects of surgery remain unclear. |
t18 | When women go to their doctor with a mass that could be ovarian cancer, they are normally referred for surgery, since the mass may need to be removed and examined microscopically in a laboratory in a procedure known as paraffin section histopathology. A third of women with ovarian cancer present with a cyst or mass without any visible evidence of spread elsewhere. However, in these apparently early‐stage cancers (confined to the ovary) surgical staging is required to decide if chemotherapy is required. This staging consists of sampling tissues within the abdomen, including lymph nodes. Different staging strategies exist. One is to perform surgical staging for all women who might have a cancer, to get information about spread. This may result in complications due to additional surgical procedures that may turn out to be unnecessary in approximately two thirds of women. A second strategy is to perform an operation to remove just the suspicious mass and await the paraffin section diagnosis. This may result in needing a further operation in one third of women if cancer is confirmed, putting them at increased risks from another operation. A third strategy is to send the mass to the laboratory during the operation for a quick diagnosis, known as 'frozen section'. This helps the surgeon decide if further surgical treatment is required during a single operation. Frozen section is not as accurate as the traditional slower paraffin section examination, and it entails a risk of incorrect diagnosis, meaning that some women may not have all the samples taken at the initial surgery and may need to undergo a second operation; and others may undergo unnecessary surgical sampling. We searched all available studies reporting use of frozen section in women with suspicious ovarian masses. We excluded studies without an English translation and studies without enough information to allow us to analyse the data. 
We included 38 studies (11,181 women), reporting three types of diagnosis from the frozen section test: cancer, which occurred in an average of 29% of women; borderline tumour, which occurred in 8% of women; and benign disease, which accounted for the remainder. In a hypothetical group of 1000 patients where 290 have cancer and 80 have a borderline tumour, 261 women would receive a correct diagnosis of a cancer and 706 women would be correctly diagnosed without a cancer based on a frozen section result. However, 4 women would be incorrectly diagnosed as having a cancer where none existed (false positive), and 29 women with cancer would be missed and potentially need further treatment (false negative). If surgeons used a frozen section result of either a cancer or a borderline tumour to diagnose cancer, 280 women would be correctly diagnosed with a cancer and 635 women would be correctly diagnosed without a cancer. However, 75 women would be incorrectly diagnosed as having a cancer, and 10 women with cancer would be missed on the initial test and found to have a cancer after surgery. If the frozen section result reported the mass as benign or malignant, the final diagnosis would remain the same in, on average, 94% and 99% of the cases, respectively. In cases where the frozen section diagnosis was a borderline tumour, there is a chance that the final diagnosis would turn out to be a cancer in, on average, 21% of women. Where the frozen section diagnosis is a borderline tumour, the diagnosis is less accurate than for benign or malignant tumours. Surgeons may choose to perform additional surgery in this group of women at the time of their initial surgery in order to reduce the need for a second operation if the final diagnosis turns out to be a cancer, as it would on average in one out of five of these women. | Women with suspected early‐stage ovarian cancer need surgical staging which involves taking samples from areas within the abdominal cavity and retroperitoneal lymph nodes in order to inform further treatment.
One potential strategy is to surgically stage all women with suspicious ovarian masses, without any histological information during surgery. This avoids incomplete staging, but puts more women at risk of potential surgical over‐treatment. A second strategy is to perform a two‐stage procedure to remove the pelvic mass and subject it to paraffin sectioning, which involves formal tissue fixing with formalin and paraffin embedding, prior to ultrathin sectioning and multiple site sampling of the tumour. Surgeons may then base further surgical staging on this histology, reducing the rate of over‐treatment, but conferring additional surgical and anaesthetic morbidity. A third strategy is to perform a rapid histological analysis on the ovarian mass during surgery, known as 'frozen section'. Tissues are snap frozen to allow fine tissue sections to be cut and basic histochemical staining to be performed. Surgeons can perform or avoid the full surgical staging procedure depending on the results. However, this is a relatively crude test compared to paraffin sections, which take many hours to perform. With frozen section there is therefore a risk of misdiagnosing malignancy and understaging women subsequently found to have a presumed early‐stage malignancy (false negative), or overstaging women without a malignancy (false positive). Therefore it is important to evaluate the accuracy and usefulness of adding frozen section to the clinical decision‐making process. Objectives To assess the diagnostic test accuracy of frozen section (index test) to diagnose histopathological ovarian cancer in women with suspicious pelvic masses as verified by paraffin section (reference standard). Search methods We searched MEDLINE (January 1946 to January 2015), EMBASE (January 1980 to January 2015) and relevant Cochrane registers. 
Selection criteria Studies that used frozen section for intraoperative diagnosis of ovarian masses suspicious of malignancy, provided there were sufficient data to construct 2 x 2 tables. We excluded articles without an available English translation. Data collection and analysis Authors independently assessed the methodological quality of included studies using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS‐2) domains: patient selection, index test, reference standard, flow and timing. Data extraction converted 3 x 3 tables of per‐patient results presented in articles into 2 x 2 tables, for two index test thresholds. All studies were retrospective, and the majority reported consecutive sampling of cases. Sensitivity and specificity results were available from 38 studies involving 11,181 participants (3200 with invasive cancer, 1055 with borderline tumours and 6926 with benign tumours, determined by paraffin section as the reference standard). The median prevalence of malignancy was 29% (interquartile range (IQR) 23% to 36%, range 11% to 63%). We assessed test performance using two thresholds for the frozen section test. Firstly, we used a test threshold for frozen sections, defining positive test results as invasive cancer and negative test results as borderline and benign tumours. The average sensitivity was 90.0% (95% confidence interval (CI) 87.6% to 92.0%; with most studies typically reporting a range of 71% to 100%), and average specificity was 99.5% (95% CI 99.2% to 99.7%; range 96% to 100%). Similarly, we analysed sensitivity and specificity using a second threshold for frozen section, where both invasive cancer and borderline tumours were considered test positive and benign cases were classified as negative. Average sensitivity was 96.5% (95% CI 95.5% to 97.3%; typical range 83% to 100%), and average specificity was 89.5% (95% CI 86.6% to 91.9%; typical range 58% to 99%).
Results were available from the same 38 studies, including the subset of 3953 participants with a frozen section result of either borderline or invasive cancer, based on final diagnosis of malignancy. Studies with small numbers of disease‐negative cases (borderline cases) had more variation in estimates of specificity. Average sensitivity was 94.0% (95% CI 92.0% to 95.5%; range 73% to 100%), and average specificity was 95.8% (95% CI 92.4% to 97.8%; typical range 81% to 100%). Our additional analyses showed that, if the frozen section showed a benign or invasive cancer, the final diagnosis would remain the same in, on average, 94% and 99% of cases, respectively. In cases where the frozen section diagnosis was a borderline tumour, on average 21% of the final diagnoses would turn out to be invasive cancer. In three studies, the same pathologist interpreted the index and reference standard tests, potentially causing bias. No studies reported blinding pathologists to index test results when reporting paraffin sections. In heterogeneity analyses, there were no statistically significant differences between studies with pathologists of different levels of expertise. In a hypothetical population of 1000 patients (290 with cancer and 80 with a borderline tumour), if a frozen section positive test result for invasive cancer alone was used to diagnose cancer, on average 261 women would have a correct diagnosis of a cancer, and 706 women would be correctly diagnosed without a cancer. However, 4 women would be incorrectly diagnosed with a cancer (false positive), and 29 with a cancer would be missed (false negative). If a frozen section result of either an invasive cancer or a borderline tumour was used as a positive test to diagnose cancer, on average 280 women would be correctly diagnosed with a cancer and 635 would be correctly diagnosed without. However, 75 women would be incorrectly diagnosed with a cancer and 10 women with a cancer would be missed. 
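The hypothetical 1000‐patient figures above follow directly from the reported average sensitivities and specificities at the two frozen section thresholds. A short sketch (illustrative only; the function name is invented for the example) reproduces the arithmetic:

```python
# Hypothetical cohort of 1000 women: 290 with invasive cancer,
# 710 without (80 borderline plus benign masses).
cohort, cancers = 1000, 290
non_cancers = cohort - cancers  # 710

def two_by_two(sens, spec, pos, neg):
    """True positives, false negatives, true negatives and false positives
    implied by an average sensitivity and specificity in a cohort with
    `pos` diseased and `neg` non-diseased patients."""
    tp = round(sens * pos)
    fn = pos - tp
    tn = round(spec * neg)
    fp = neg - tn
    return tp, fn, tn, fp

# Threshold 1: only 'invasive cancer' on frozen section counts as positive
# (sensitivity 90.0%, specificity 99.5%).
print(two_by_two(0.900, 0.995, cancers, non_cancers))  # (261, 29, 706, 4)

# Threshold 2: 'invasive cancer' or 'borderline' counts as positive
# (sensitivity 96.5%, specificity 89.5%).
print(two_by_two(0.965, 0.895, cancers, non_cancers))  # (280, 10, 635, 75)
```

The two tuples match the 261/29/706/4 and 280/10/635/75 figures quoted in both the summary and the abstract.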
The largest discordance is within the reporting of frozen section borderline tumours. Investigation into factors leading to discordance within centres and standardisation of criteria for reporting borderline tumours may help improve accuracy. Some centres may choose to perform surgical staging in women with frozen section diagnosis of a borderline ovarian tumour to reduce the number of false positives. In their interpretation of this review, readers should evaluate results from studies most typical of their population of patients. |
t19 | The aim of this Cochrane Review was to find out if anti‐vascular endothelial growth factor (called anti‐VEGF) treatment of new blood vessels in people with severe myopia (also known as nearsightedness or shortsightedness) prevents vision loss. Cochrane researchers collected and analysed all relevant studies to answer this question and found six studies. People with severe myopia and growth of new blood vessels at the back of the eye may benefit from treatment with anti‐VEGF. It may prevent vision loss. Side effects (harms) occur rarely. Myopia occurs when the eyeball becomes too long. If the myopia is severe, sometimes the retina (light‐sensitive tissue at the back of the eye) becomes too thin and new blood vessels grow. These new blood vessels can leak and cause vision loss. Anti‐vascular endothelial growth factor (anti‐VEGF) is a drug that may slow down the growth of these new vessels. Doctors can inject anti‐VEGF into the eye of people who have severe myopia and signs of new blood vessels growing at the back of the eye. This may prevent vision loss. The Cochrane researchers found six relevant studies. These studies took place in multiple clinical centres across three continents (Europe, Asia and North America). Three studies compared anti‐VEGF treatment with photodynamic therapy (PDT; a treatment with a light‐sensitive medicine and a light source that destroys abnormal cells); one study compared anti‐VEGF with laser treatment; one study compared anti‐VEGF with no treatment; and two studies compared different types of anti‐VEGF to each other. In some of the studies, the comparison group received anti‐VEGF after a short period, which may mean that the results underestimate the beneficial effect of anti‐VEGF. People with severe myopia who have anti‐VEGF treatment probably achieve better vision than people receiving PDT, laser or no treatment (moderate‐ and low‐certainty evidence).
| Choroidal neovascularisation (CNV) is a common complication of pathological myopia. Once developed, most eyes with myopic CNV (mCNV) experience a progression to macular atrophy, which leads to irreversible vision loss. Anti‐vascular endothelial growth factor (anti‐VEGF) therapy is used to treat diseases characterised by neovascularisation and is increasingly used to treat mCNV. Objectives To assess the effects of anti‐vascular endothelial growth factor (anti‐VEGF) therapy for choroidal neovascularisation (CNV), compared with other treatments, sham treatment or no treatment, in people with pathological myopia. Search methods We searched a number of electronic databases including CENTRAL and Ovid MEDLINE, ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any date or language restrictions in the electronic searches for trials. Electronic databases were last searched on 16 June 2016. Selection criteria We included randomised controlled trials (RCTs) and quasi‐RCTs comparing anti‐VEGF therapy with another treatment (e.g. photodynamic therapy (PDT) with verteporfin, laser photocoagulation, macular surgery, another anti‐VEGF), sham treatment or no treatment in participants with mCNV. Data collection and analysis We used standard methodological procedures expected by Cochrane. Two authors independently screened records, extracted data, and assessed risk of bias. We contacted trial authors for additional data. We analysed outcomes as risk ratios (RRs) or mean differences (MDs). We graded the certainty of the evidence using GRADE. The present review included six studies which provided data on the comparison between anti‐VEGF with PDT, laser, sham treatment and another anti‐VEGF treatment, with 594 participants with mCNV.
Three trials compared bevacizumab or ranibizumab with PDT, one trial compared bevacizumab with laser, one trial compared aflibercept with sham treatment, and two trials compared bevacizumab with ranibizumab. Pharmaceutical companies conducted two trials. The trials were conducted at multiple clinical centres across three continents (Europe, Asia and North America). In all these six trials, one eye for each participant was included in the study. When compared with PDT, people treated with anti‐VEGF agents (ranibizumab (one RCT), bevacizumab (two RCTs)), were more likely to regain vision. At one year of follow‐up, the mean visual acuity (VA) in participants treated with anti‐VEGFs was ‐0.14 logMAR better, equivalent of seven Early Treatment Diabetic Retinopathy Study (ETDRS) letters, compared with people treated with PDT (95% confidence interval (CI) ‐0.20 to ‐0.08, 3 RCTs, 263 people, low‐certainty evidence). The RR for proportion of participants gaining 3+ lines of VA was 1.86 (95% CI 1.27 to 2.73, 2 RCTs, 226 people, moderate‐certainty evidence). At two years, the mean VA in people treated with anti‐VEGFs was ‐0.26 logMAR better, equivalent of 13 ETDRS letters, compared with people treated with PDT (95% CI ‐0.38 to ‐0.14, 2 RCTs, 92 people, low‐certainty evidence). The RR for proportion of people gaining 3+ lines of VA at two years was 3.43 (95% CI 1.37 to 8.56, 2 RCTs, 92 people, low‐certainty evidence). People treated with anti‐VEGFs showed no obvious reduction (improvement) in central retinal thickness at one year compared with people treated with PDT (MD ‐17.84 μm, 95% CI ‐41.98 to 6.30, 2 RCTs, 226 people, moderate‐certainty evidence). There was low‐certainty evidence that people treated with anti‐VEGF were more likely to have CNV angiographic closure at 1 year (RR 1.24, 95% CI 0.99 to 1.54, 2 RCTs, 208 people). 
One study allowed ranibizumab treatment as of month 3 in participants randomised to PDT, which may have led to an underestimate of the benefits of anti‐VEGF treatment. When compared with laser photocoagulation, there was more improvement in VA among bevacizumab‐treated people than among laser‐treated people after one year (MD ‐0.22 logMAR, equivalent of 11 ETDRS letters, 95% CI ‐0.43 to ‐0.01, 1 RCT, 36 people, low‐certainty evidence) and after two years (MD ‐0.29 logMAR, equivalent of 14 ETDRS letters, 95% CI ‐0.50 to ‐0.08, 1 RCT, 36 people, low‐certainty evidence). When compared with sham treatment, people treated with aflibercept had better vision at one year (MD ‐0.19 logMAR, equivalent of 9 ETDRS letters, 95% CI ‐0.27 to ‐0.12, 1 RCT, 121 people, moderate‐certainty evidence). The fact that this study allowed for aflibercept treatment at 6 months in the control group might cause an underestimation of the benefit with anti‐VEGF. People treated with ranibizumab had similar improvement in VA recovery compared with people treated with bevacizumab after one year (MD ‐0.02 logMAR, equivalent of 1 ETDRS letter, 95% CI ‐0.11 to 0.06, 2 RCTs, 80 people, moderate‐certainty evidence). Of the included six studies, two studies reported no adverse events in either group and two industry‐sponsored studies reported both systemic and ocular adverse events. In the control group, there were no systemic or ocular adverse events reported in 149 participants. Fifteen people reported systemic serious adverse events among 359 people treated with anti‐VEGF agents (15/359, 4.2%). Five people reported ocular adverse events among 359 people treated with anti‐VEGF agents (5/359, 1.4%). 
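The ETDRS‐letter equivalents quoted alongside each logMAR difference in the results above use the standard relationship of 0.02 logMAR per letter (one 5‐letter chart line = 0.1 logMAR). A small sketch, assuming the review truncates fractional letters (an inference from the quoted figures, e.g. ‐0.19 logMAR reported as 9 letters), reproduces the conversions; the function name is invented for the example:

```python
from fractions import Fraction
from math import floor

def logmar_to_letters(logmar_diff):
    """Convert a logMAR difference to ETDRS letters (1 letter = 0.02 logMAR).
    Exact decimal fractions avoid floating-point error; fractional letters
    are truncated, matching the figures quoted in the review (assumption)."""
    return floor(abs(Fraction(str(logmar_diff))) / Fraction("0.02"))

for md in (-0.14, -0.26, -0.22, -0.29, -0.19, -0.02):
    print(md, "logMAR ->", logmar_to_letters(md), "letters")
# 0.14 -> 7, 0.26 -> 13, 0.22 -> 11, 0.29 -> 14, 0.19 -> 9, 0.02 -> 1
```

These match the seven, 13, 11, 14, 9 and one letter equivalents stated in the text.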
The number of adverse events was low, and the estimate of RR was uncertain regarding systemic serious adverse events (4 RCTs, 15 events in 508 people, RR 4.50, 95% CI 0.60 to 33.99, very low‐certainty evidence) and serious ocular adverse events (4 RCTs, 5 events in 508 people, RR 1.82, 95% CI 0.23 to 14.71, very low‐certainty evidence). There were no reports of mortality or cases of endophthalmitis or retinal detachment. There was sparse reporting of data for vision‐related quality of life (in favour of anti‐VEGF) in only one trial at one year of follow‐up. The studies did not report data for other outcomes, such as percentage of participants with newly developed chorioretinal atrophy. There is low to moderate‐certainty evidence from RCTs for the efficacy of anti‐VEGF agents to treat mCNV at one year and two years. Moderate‐certainty evidence suggests ranibizumab and bevacizumab are equivalent in terms of efficacy. Adverse effects occurred rarely and the trials included here were underpowered to assess these. Future research should be focused on the efficacy and safety of different drugs and treatment regimens, the efficacy on different location of mCNV, as well as the effects on practice in the real world. |
t20 | We wanted to see whether talking therapies reduce drinking in adult users of illicit drugs (mainly opioids and stimulants). We also wanted to find out whether one type of therapy is more effective than another. Drinking alcohol above the low‐risk drinking limits can lead to serious alcohol use problems or disorders. Drinking above those limits is common in people who also have problems with other drugs. It worsens their physical and mental health. Talking therapies aim to identify an alcohol problem and motivate an individual to do something about it. Talking therapies can be given by trained doctors, nurses, counsellors, psychologists, etc. Talking therapies may help reduce alcohol use, but we wanted to find out whether they can also help people who have problems with other drugs. We found seven studies that examined five talking therapies among 825 people with drug problems. Cognitive‐behavioural coping skills training (CBCST) is a talking therapy that focuses on changing the way people think and act. The twelve‐step programme is based on theories from Alcoholics Anonymous and aims to motivate the person to develop a desire to stop using drugs or alcohol. Motivational interviewing (MI) helps people to explore and resolve doubts about changing their behaviour. It can be delivered in group, individual and intensive formats. Brief motivational interviewing (BMI) is a shorter form of MI that takes 45 minutes to three hours. Brief interventions are based on MI but take only five to 30 minutes and are often delivered by a non‐specialist. Six of the studies were funded by the National Institutes of Health or by the Health Research Board; one study did not report its funding source. We found that the talking therapies led to no differences, or only small differences, for the outcomes assessed. These included abstinence, reduced drinking, and substance use. One study found that there may be no difference between CBCST and the twelve‐step programme.
Three studies found that there may be no difference between brief intervention and usual treatment. Three studies found that there may be no difference between MI and usual treatment or education only. One study found that BMI is probably better at reducing alcohol use than usual treatment (needle exchange), but found no differences in other outcomes. One study found that intensive MI may be somewhat better than standard MI at reducing the severity of alcohol use disorder among women, but not among men, and found no differences in other outcomes. It remains uncertain whether talking therapies reduce alcohol and drug use in people who also have problems with other drugs. | Problem alcohol use is common among people who use illicit drugs (PWID) and is associated with adverse health outcomes. It is also an important factor contributing to a poor prognosis among drug users with hepatitis C virus (HCV), as it affects progression to hepatic cirrhosis and the risk of opioid overdose in PWID. Objectives To assess the effectiveness of psychosocial interventions to reduce alcohol consumption in PWID (users of opioids and stimulants). Search methods We searched the Cochrane Drugs and Alcohol Group trials register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, CINAHL, and PsycINFO, from inception up to August 2017, and the reference lists of eligible articles. We also searched: 1) conference proceedings (online archives only) of the Society for the Study of Addiction, International Harm Reduction Association, International Conference on Alcohol Harm Reduction and American Association for the Treatment of Opioid Dependence; and 2) online registers of clinical trials: Current Controlled Trials, ClinicalTrials.gov, Center Watch and the World Health Organization International Clinical Trials Registry Platform.
Selection criteria We included randomised controlled trials comparing psychosocial interventions with other psychosocial treatment, or treatment as usual, in adult PWID (aged at least 18 years) with concurrent problem alcohol use. Data collection and analysis We used the standard methodological procedures expected by Cochrane. We included seven trials (825 participants). We judged the majority of the trials to have a high or unclear risk of bias. The psychosocial interventions considered in the studies were: cognitive‐behavioural coping skills training (one study), twelve‐step programme (one study), brief intervention (three studies), motivational interviewing (two studies), and brief motivational interviewing (one study). Two studies were considered in two comparisons. There were no data for the secondary outcome, alcohol‐related harm. The results were as follows. Comparison 1: cognitive‐behavioural coping skills training versus twelve‐step programme (one study, 41 participants) There was no significant difference between groups for either of the primary outcomes (alcohol abstinence assessed with Substance Abuse Calendar and breathalyser at one year: risk ratio (RR) 2.38 (95% confidence interval [CI] 0.10 to 55.06); and retention in treatment, measured at end of treatment: RR 0.89 (95% CI 0.62 to 1.29)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was very low. Comparison 2: brief intervention versus treatment as usual (three studies, 197 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured as scores on the Alcohol Use Disorders Identification Test (AUDIT) or Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) at three months: standardised mean difference (SMD) 0.07 (95% CI ‐0.24 to 0.37); and retention in treatment, measured at three months: RR 0.94 (95% CI 0.78 to 1.13)), or for any of the secondary outcomes reported.
The quality of evidence for the primary outcomes was low. Comparison 3: motivational interviewing versus treatment as usual or educational intervention only (three studies, 462 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured as scores on the AUDIT or ASSIST at three months: SMD 0.04 (95% CI ‐0.29 to 0.37); and retention in treatment, measured at three months: RR 0.93 (95% CI 0.60 to 1.43)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was low. Comparison 4: brief motivational intervention (BMI) versus assessment only (one study, 187 participants) More people reduced alcohol use (by seven or more days in the past month, measured at six months) in the BMI group than in the control group (RR 1.67; 95% CI 1.08 to 2.60). There was no difference between groups for the other primary outcome, retention in treatment, measured at end of treatment: RR 0.98 (95% CI 0.94 to 1.02), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was moderate. Comparison 5: motivational interviewing (intensive) versus motivational interviewing (one study, 163 participants) There was no significant difference between groups for either of the primary outcomes (alcohol use, measured using the Addiction Severity Index‐alcohol score (ASI) at two months: MD 0.03 (95% CI 0.02 to 0.08); and retention in treatment, measured at end of treatment: RR 17.63 (95% CI 1.03 to 300.48)), or for any of the secondary outcomes reported. The quality of evidence for the primary outcomes was low. We found low‐ to very low‐quality evidence to suggest that there is no difference in effectiveness between different types of psychosocial interventions to reduce alcohol consumption among people who use illicit drugs, and that brief interventions are not superior to assessment‐only or to treatment as usual.
No firm conclusions can be made because of the paucity of the data and the low quality of the retrieved studies. |
t21 | Cognitive impairment is when people have problems remembering, learning, concentrating and making decisions. People with mild cognitive impairment (MCI) generally have more memory problems than other people of their age, but these problems are not severe enough to be classified as dementia. Studies have shown that people with MCI and loss of memory are more likely to develop Alzheimer's disease dementia (approximately 10% to 15% of cases per year) than people without MCI (1% to 2% per year). Currently, the only reliable way of diagnosing Alzheimer's disease dementia is to follow people with MCI and assess cognitive changes over the years. Magnetic resonance imaging (MRI) may detect changes in the brain structures that indicate the beginning of Alzheimer's disease. Early diagnosis of MCI due to Alzheimer's disease is important because people with MCI could benefit from early treatment to prevent or delay cognitive decline. In this review, we assessed the diagnostic accuracy of MRI for the early diagnosis of dementia due to Alzheimer's disease in people with MCI. Thirty‐three studies were eligible, in which 3935 participants with MCI were included and followed up for two or three years to see whether they developed Alzheimer's disease dementia. About a third of them converted to Alzheimer's disease dementia; the others did not, or developed other types of dementia. The volume of several brain regions was measured with MRI. Most studies (22 studies, 2209 participants) measured the volume of the hippocampus, a region of the brain associated primarily with memory. We found that MRI is not accurate enough to identify people with MCI who will develop dementia due to Alzheimer's disease. A correct prediction of Alzheimer's disease would be missed in 81 out of 300 people with MCI (false negatives), and a wrong prediction of Alzheimer's disease would be made in 203 out of 700 people with MCI (false positives).
As a result, people with a false‐negative diagnosis would be falsely reassured and would not prepare themselves to cope with Alzheimer's disease, while those with a false‐positive diagnosis would suffer needless distress from a diagnosis that was wrongly anticipated. The included studies diagnosed Alzheimer's disease dementia by assessing all participants with standard clinical criteria after two or three years' follow‐up. We had some concerns about how the studies were conducted, since the participants were mainly selected from clinical registries and referral centres, and we also had concerns about how studies interpreted MRI. Moreover, the studies were conducted differently from each other, and they used different methods to select people with MCI and perform MRI. The results do not apply to people with MCI in the community, but only to people with MCI who attend memory clinics or referral centres. MRI, as a single test, is not accurate for the early diagnosis of dementia due to Alzheimer's disease in people with MCI since one in three or four participants received a wrong diagnosis of Alzheimer's disease. Future research should not focus on a single test (such as MRI), but rather on combinations of tests to improve an early diagnosis of Alzheimer's disease dementia. | Mild cognitive impairment (MCI) due to Alzheimer's disease is the symptomatic predementia phase of Alzheimer's disease dementia, characterised by cognitive and functional impairment not severe enough to fulfil the criteria for dementia. In clinical samples, people with amnestic MCI are at high risk of developing Alzheimer's disease dementia, with annual rates of progression from MCI to Alzheimer's disease estimated at approximately 10% to 15% compared with the base incidence rates of Alzheimer's disease dementia of 1% to 2% per year.
Objectives To assess the diagnostic accuracy of structural magnetic resonance imaging (MRI) for the early diagnosis of dementia due to Alzheimer's disease in people with MCI versus the clinical follow‐up diagnosis of Alzheimer's disease dementia as a reference standard (delayed verification). To investigate sources of heterogeneity in accuracy, such as the use of qualitative visual assessment or quantitative volumetric measurements, including manual or automatic (MRI) techniques, or the length of follow‐up, and age of participants. MRI was evaluated as an add‐on test in addition to clinical diagnosis of MCI to improve early diagnosis of dementia due to Alzheimer's disease in people with MCI. Search methods On 29 January 2019 we searched Cochrane Dementia and Cognitive Improvement's Specialised Register and the databases, MEDLINE, Embase, BIOSIS Previews, Science Citation Index, PsycINFO, and LILACS. We also searched the reference lists of all eligible studies identified by the electronic searches. Selection criteria We considered cohort studies of any size that included prospectively recruited people of any age with a diagnosis of MCI. We included studies that compared the diagnostic test accuracy of baseline structural MRI versus the clinical follow‐up diagnosis of Alzheimer's disease dementia (delayed verification). We did not exclude studies on the basis of length of follow‐up. We included studies that used either qualitative visual assessment or quantitative volumetric measurements of MRI to detect atrophy in the whole brain or in specific brain regions, such as the hippocampus, medial temporal lobe, lateral ventricles, entorhinal cortex, medial temporal gyrus, lateral temporal lobe, amygdala, and cortical grey matter. Data collection and analysis Four teams of two review authors each independently reviewed titles and abstracts of articles identified by the search strategy. 
Two teams of two review authors each independently assessed the selected full‐text articles for eligibility, extracted data and solved disagreements by consensus. Two review authors independently assessed the quality of studies using the QUADAS‐2 tool. We used the hierarchical summary receiver operating characteristic (HSROC) model to fit summary ROC curves and to obtain overall measures of relative accuracy in subgroup analyses. We also used these models to obtain pooled estimates of sensitivity and specificity when sufficient data sets were available. We included 33 studies, published from 1999 to 2019, with 3935 participants of whom 1341 (34%) progressed to Alzheimer's disease dementia and 2594 (66%) did not. Of the participants who did not progress to Alzheimer's disease dementia, 2561 (99%) remained stable MCI and 33 (1%) progressed to other types of dementia. The median proportion of women was 53% and the mean age of participants ranged from 63 to 87 years (median 73 years). The mean length of clinical follow‐up ranged from 1 to 7.6 years (median 2 years). Most studies were of poor methodological quality due to risk of bias for participant selection or the index test, or both. Most of the included studies reported data on the volume of the total hippocampus (pooled mean sensitivity 0.73 (95% confidence interval (CI) 0.64 to 0.80); pooled mean specificity 0.71 (95% CI 0.65 to 0.77); 22 studies, 2209 participants). This evidence was of low certainty due to risk of bias and inconsistency. Seven studies reported data on the atrophy of the medial temporal lobe (mean sensitivity 0.64 (95% CI 0.53 to 0.73); mean specificity 0.65 (95% CI 0.51 to 0.76); 1077 participants) and five studies on the volume of the lateral ventricles (mean sensitivity 0.57 (95% CI 0.49 to 0.65); mean specificity 0.64 (95% CI 0.59 to 0.70); 1077 participants). This evidence was of moderate certainty due to risk of bias. 
Four studies with 529 participants analysed the volume of the total entorhinal cortex and four studies with 424 participants analysed the volume of the whole brain. We did not estimate pooled sensitivity and specificity for the volume of these two regions because available data were sparse and heterogeneous. We could not statistically evaluate the volumes of the lateral temporal lobe, amygdala, medial temporal gyrus, or cortical grey matter assessed in small individual studies. We found no evidence of a difference between studies in the accuracy of the total hippocampal volume with regard to duration of follow‐up or age of participants, but the manual MRI technique was superior to automatic techniques in mixed (mostly indirect) comparisons. We did not assess the relative accuracy of the volumes of different brain regions measured by MRI because only indirect comparisons were available, studies were heterogeneous, and the overall accuracy of all regions was moderate. The volume of the hippocampus or medial temporal lobe, the most studied brain regions, showed low sensitivity and specificity and did not qualify structural MRI as a stand‐alone add‐on test for an early diagnosis of dementia due to Alzheimer's disease in people with MCI. This is consistent with international guidelines, which recommend imaging to exclude non‐degenerative or surgical causes of cognitive impairment and not to diagnose dementia due to Alzheimer's disease. In view of the low quality of most of the included studies, the findings of this review should be interpreted with caution. Future research should not focus on a single biomarker, but rather on combinations of biomarkers to improve an early diagnosis of Alzheimer's disease dementia. |
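The false‐negative and false‐positive counts quoted in the plain‐language summary above (81 of 300 missed; 203 of 700 wrongly predicted) follow arithmetically from the pooled hippocampal‐volume accuracy reported in the abstract (sensitivity 0.73, specificity 0.71). A minimal sketch of that arithmetic, under the illustrative assumption of a 1000‐person MCI cohort in which about 300 progress (roughly the one‐third conversion rate reported):

```python
def fn_fp_counts(sensitivity, specificity, n_progress, n_stable):
    """Counts of missed diagnoses (false negatives) among people who progress,
    and wrong predictions (false positives) among people who do not,
    implied by a test's sensitivity and specificity."""
    false_negatives = round(n_progress * (1 - sensitivity))
    false_positives = round(n_stable * (1 - specificity))
    return false_negatives, false_positives

# Pooled accuracy of total hippocampal volume from the review, applied to an
# illustrative (assumed) cohort of 1000 people with MCI, ~300 of whom progress.
fn, fp = fn_fp_counts(sensitivity=0.73, specificity=0.71, n_progress=300, n_stable=700)
print(fn, fp)  # 81 203
```

The same two inputs reproduce both headline numbers, which is why the summary can translate test accuracy directly into people misdiagnosed per 1000 assessed.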
t22 | Non‐muscle invasive bladder cancer (NMIBC) is a cancer (tumour) of the inner lining of the bladder that can be removed from the inside using small instruments and a light source, so‐called endoscopic surgery. These tumours can come back over time and spread into the deeper layers of the bladder wall. We know that different types of medicines that we can put into the bladder help prevent this. Investigators have looked at the use of an electrical current to make medicines work better. In this review, we wanted to discover whether using an electrical current was better or worse than not using an electrical current. We found three studies that were conducted between 1994 and 2003 with 672 participants that compared five different ways of giving this treatment. Mitomycin (MMC) was the only medicine used together with electrical current. We are very unsure whether the use of an electrical current to give a course of MMC after endoscopic surgery is better or worse compared to giving a course of Bacillus Calmette‐Guérin (BCG; vaccine usually used in tuberculosis) or MMC without electrical current. MMC given with electrical current together with BCG given over a long period of time may be better than BCG alone at delaying the tumour from coming back and from spreading into the deeper layers of the bladder wall. Giving one dose of MMC with electrical current before endoscopic surgery may be better than one dose of MMC without electrical current after surgery, or surgery alone without further treatment. | Electromotive drug administration (EMDA) is the use of electrical current to improve the delivery of intravesical agents to reduce the risk of recurrence in people with non‐muscle invasive bladder cancer (NMIBC). It is unclear how effective this is in comparison to other forms of intravesical therapy. Objectives To assess the effects of intravesical EMDA for the treatment of NMIBC.
Search methods We performed a comprehensive search using multiple databases (CENTRAL, MEDLINE, EMBASE), two clinical trial registries and a grey literature repository. We searched reference lists of relevant publications and abstract proceedings. We applied no language restrictions. The last search was February 2017. Selection criteria We searched for randomised studies comparing EMDA of any intravesical agent used to reduce bladder cancer recurrence in conjunction with transurethral resection of bladder tumour (TURBT). Data collection and analysis Two review authors independently screened the literature, extracted data, assessed risk of bias and rated quality of evidence (QoE) according to GRADE on a per outcome basis. We included three trials with 672 participants that described five distinct comparisons. The same principal investigator conducted all three trials. All studies used mitomycin C (MMC) as the chemotherapeutic agent for EMDA. 1. Postoperative MMC‐EMDA induction versus postoperative Bacillus Calmette‐Guérin (BCG) induction: based on one study with 72 participants with carcinoma in situ (CIS) and concurrent pT1 urothelial carcinoma, we are uncertain (very low QoE) about the effect of MMC‐EMDA on time to recurrence (risk ratio (RR) 1.06, 95% confidence interval (CI) 0.64 to 1.76; corresponding to 30 more per 1000 participants, 95% CI 180 fewer to 380 more). There was no disease progression in either treatment arm at three months' follow‐up. We are uncertain (very low QoE) about serious adverse events (RR 0.75, 95% CI 0.18 to 3.11). 2. Postoperative MMC‐EMDA induction versus MMC‐passive diffusion (PD) induction: based on one study with 72 participants with CIS and concurrent pT1 urothelial carcinoma, postoperative MMC‐EMDA may (low QoE) reduce disease recurrence (RR 0.65, 95% CI 0.44 to 0.98; corresponding to 147 fewer per 1000 participants, 95% CI 235 fewer to 8 fewer). There was no disease progression in either treatment arm at three months' follow‐up. 
We are uncertain (very low QoE) about the effect of MMC‐EMDA on serious adverse events (RR 1.50, 95% CI 0.27 to 8.45). 3. Postoperative MMC‐EMDA with sequential BCG induction and maintenance versus postoperative BCG induction and maintenance: based on one study with 212 participants with pT1 urothelial carcinoma of the bladder with or without CIS, postoperative MMC‐EMDA with sequential BCG may result (low QoE) in a longer time to recurrence (hazard ratio (HR) 0.51, 95% CI 0.34 to 0.77; corresponding to 181 fewer per 1000 participants, 95% CI 256 fewer to 79 fewer) and time to progression (HR 0.36, 95% CI 0.17 to 0.75; corresponding to 63 fewer per 1000 participants, 95% CI 82 fewer to 24 fewer). We are uncertain (very low QoE) about the effect of MMC‐EMDA on serious adverse events (RR 1.02, 95% CI 0.21 to 4.94). 4. Single‐dose, preoperative MMC‐EMDA versus single‐dose, postoperative MMC‐PD: based on one study with 236 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC‐EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.47, 95% CI 0.32 to 0.69; corresponding to 247 fewer per 1000 participants, 95% CI 341 fewer to 130 fewer) for a median follow‐up of 86 months. We are uncertain (very low QoE) about the effect of MMC‐EMDA on time to progression (HR 0.81, 95% CI 0.00 to 259.93; corresponding to 34 fewer per 1000 participants, 95% CI 193 fewer to 807 more) and serious adverse events (RR 0.79, 95% CI 0.30 to 2.05). 5. Single‐dose, preoperative MMC‐EMDA versus TURBT alone: based on one study with 233 participants with primary pTa and pT1 urothelial carcinoma, preoperative MMC‐EMDA likely (moderate QoE) results in a longer time to recurrence (HR 0.40, 95% CI 0.28 to 0.57; corresponding to 304 fewer per 1000 participants, 95% CI 390 fewer to 198 fewer) for a median follow‐up of 86 months. 
We are uncertain (very low QoE) about the effect of MMC‐EMDA on time to progression (HR 0.74, 95% CI 0.00 to 247.93; corresponding to 49 fewer per 1000 participants, 95% CI 207 fewer to 793 more) or serious adverse events (HR 1.74, 95% CI 0.52 to 5.77). While the use of EMDA to administer intravesical MMC may result in a delay in time to recurrence in select patient populations, we are uncertain about its impact on serious adverse events in all settings. Common reasons for downgrading the QoE were study limitations and imprecision. A potential role for EMDA‐based administration of MMC may lie in settings where more established agents (such as BCG) are not available. In the setting of low or very low QoE for most comparisons, our confidence in the effect estimates is limited and the true effect sizes may be substantially different from those reported here. |
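The "fewer/more per 1000 participants" figures in the abstract above pair each risk ratio with a control‐group risk. For example, for MMC‐EMDA versus MMC‐PD induction (RR 0.65, "147 fewer per 1000"), a control‐group recurrence risk of about 0.42 is implied; that baseline risk is back‐derived for illustration, not stated in the abstract. A minimal sketch of the conversion:

```python
def per_1000_difference(rr, control_risk):
    """Absolute risk difference per 1000 participants implied by a risk ratio
    and an assumed control-group risk (negative means fewer events)."""
    return round((rr - 1) * control_risk * 1000)

# MMC-EMDA vs MMC-PD induction: RR 0.65, with an assumed (back-derived)
# control-group recurrence risk of 0.42.
diff = per_1000_difference(rr=0.65, control_risk=0.42)
print(diff)  # -147, i.e. 147 fewer recurrences per 1000 participants
```

The confidence limits of the RR convert to the "95% CI … fewer to … fewer" ranges in the same way, which is why those absolute ranges shift whenever a different baseline risk is assumed.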
t23 | Video communication software like Skype and FaceTime allows counsellors to see and hear people over the Internet to help them quit smoking. Video counselling could help large numbers of people to quit smoking because more than four billion people use the Internet, and video communication software is free. Our main focus was to learn if video counselling delivered individually or to a group could help people quit smoking and to learn how it compared with other types of support to help people quit. We also studied the effect of real‐time video counselling on the number of times people tried to quit, the number of sessions they completed, their satisfaction with the counselling, their relationship or bond with the counsellor and the costs of using video communication to help people quit smoking. We found two studies, with 615 people in total. Both studies took place in the USA, and included people from rural areas or women with HIV. Both studies gave one‐to‐one video sessions to individuals. There were eight video sessions in one study and four video sessions in the other study. Both studies compared video counselling to telephone counselling and looked at whether people quit smoking, the number of sessions they completed and their satisfaction with the programme. One study examined the number of times people tried to quit and one study looked at the relationship or bond with the counsellor. It is unclear how video counselling compares with telephone counselling in terms of helping people to quit smoking. People who used video counselling were more likely than those who used telephone counselling to recommend the programme to a friend or someone in their family, but we found no differences in how satisfied they were, the number of video or telephone sessions completed, whether all sessions were completed and in the relationship or bond with the counsellor.
| Real‐time video communication software such as Skype and FaceTime transmits live video and audio over the Internet, allowing counsellors to provide support to help people quit smoking. There are more than four billion Internet users worldwide, and Internet users can download free video communication software, rendering a video counselling approach both feasible and scalable for helping people to quit smoking. Objectives To assess the effectiveness of real‐time video counselling delivered individually or to a group in increasing smoking cessation, quit attempts, intervention adherence, satisfaction and therapeutic alliance, and to provide an economic evaluation regarding real‐time video counselling. Search methods We searched the Cochrane Tobacco Addiction Group Specialised Register, CENTRAL, MEDLINE, PubMed, PsycINFO and Embase to identify eligible studies on 13 August 2019. We searched the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov to identify ongoing trials registered by 13 August 2019. We checked the reference lists of included articles and contacted smoking cessation researchers for any additional studies. Selection criteria We included randomised controlled trials (RCTs), randomised trials, cluster RCTs or cluster randomised trials of real‐time video counselling for current tobacco smokers from any setting that measured smoking cessation at least six months following baseline. The real‐time video counselling intervention could be compared with a no intervention control group or another smoking cessation intervention, or both. Data collection and analysis Two authors independently extracted data from included trials, assessed the risk of bias and rated the certainty of the evidence using the GRADE approach. We performed a random‐effects meta‐analysis for the primary outcome of smoking cessation, using the most stringent measure of smoking cessation measured at the longest follow‐up. 
Analysis was based on the intention‐to‐treat principle. We considered participants with missing data at follow‐up for the primary outcome of smoking cessation to be smokers. We included two randomised trials with 615 participants. Both studies delivered real‐time video counselling for smoking cessation individually, compared with telephone counselling. We judged one study at unclear risk of bias and one study at high risk of bias. There was no statistically significant treatment effect for smoking cessation (using the strictest definition and longest follow‐up) across the two included studies when real‐time video counselling was compared to telephone counselling (risk ratio (RR) 2.15, 95% confidence interval (CI) 0.38 to 12.04; 2 studies, 608 participants; I² = 66%). We judged the overall certainty of the evidence for smoking cessation as very low due to methodological limitations, imprecision in the effect estimate reflected by the wide 95% CIs and inconsistency of cessation rates. There were no significant differences between real‐time video counselling and telephone counselling reported for number of quit attempts among people who continued to smoke (mean difference (MD) 0.50, 95% CI –0.60 to 1.60; 1 study, 499 participants), mean number of counselling sessions completed (MD –0.20, 95% CI –0.45 to 0.05; 1 study, 566 participants), completion of all sessions (RR 1.13, 95% CI 0.71 to 1.79; 1 study, 43 participants) or therapeutic alliance (MD 1.13, 95% CI –0.24 to 2.50; 1 study, 398 participants). Participants in the video counselling arm were more likely than their telephone counselling counterparts to recommend the programme to a friend or family member (RR 1.06, 95% CI 1.01 to 1.11; 1 study, 398 participants); however, there were no between‐group differences on satisfaction score (MD 0.70, 95% CI –1.16 to 2.56; 1 study, 29 participants). There is very little evidence about the effectiveness of real‐time video counselling for smoking cessation.
The existing research does not suggest a difference between video counselling and telephone counselling for assisting people to quit smoking. However, given the very low GRADE rating due to methodological limitations in the design, imprecision of the effect estimate and inconsistency of cessation rates, the smoking cessation results should be interpreted cautiously. High‐quality randomised trials comparing real‐time video counselling to telephone counselling are needed to increase the confidence of the effect estimate. Furthermore, there is currently no evidence comparing real‐time video counselling to a control group. Such research is needed to determine whether video counselling increases smoking cessation. |
t24 | Lumbar puncture involves getting a sample of spinal fluid through a needle inserted into the lower back. Post‐dural puncture headache (PDPH) is the most common side effect of a lumbar puncture. The main symptom of PDPH is a constant headache that gets worse when upright and improves when lying down. Many drugs are used to treat PDPH, so the aim of this review was to assess the effectiveness of these drugs. We included 13 small randomised clinical trials (RCTs), with a total of 479 participants. The trials assessed eight drugs: caffeine, sumatriptan, gabapentin, hydrocortisone, theophylline, adrenocorticotropic hormone, pregabalin and cosyntropin. Caffeine proved to be effective in decreasing the number of people with PDPH and those requiring extra drugs (2 or 3 in 10 with caffeine compared to 9 in 10 with placebo). Gabapentin, theophylline and hydrocortisone also proved to be effective, relieving pain better than placebo or conventional treatment alone. More people had better pain relief with theophylline (9 in 10 with theophylline compared to 4 in 10 with conventional treatment). No important side effects of these drugs were reported. The quality of the studies was difficult to assess due to the lack of information available. | This is an updated version of the original Cochrane review published in Issue 8, 2011, on 'Drug therapy for treating post‐dural puncture headache'. Post‐dural puncture headache (PDPH) is the most common complication of lumbar puncture, an invasive procedure frequently performed in the emergency room. Numerous pharmaceutical drugs have been proposed to treat PDPH but there are still some uncertainties about their clinical effectiveness. Objectives To assess the effectiveness and safety of drugs for treating PDPH in adults and children.
Search methods The searches included the Cochrane Central Register of Controlled Trials (CENTRAL 2014, Issue 6), MEDLINE and MEDLINE in Process (from 1950 to 29 July 2014), EMBASE (from 1980 to 29 July 2014) and CINAHL (from 1982 to July 2014). There were no language restrictions. Selection criteria We considered randomised controlled trials (RCTs) assessing the effectiveness of any pharmacological drug used for treating PDPH. Outcome measures considered for this review were: PDPH persistence of any severity at follow‐up (primary outcome), daily activity limited by headache, conservative supplementary therapeutic option offered, epidural blood patch performed, change in pain severity scores, improvements in pain severity scores, number of days participants stay in hospital, any possible adverse events and missing data. Data collection and analysis Review authors independently selected studies, assessed risk of bias and extracted data. We estimated risk ratios (RR) for dichotomous data and mean differences (MD) for continuous outcomes. We calculated a 95% confidence interval (CI) for each RR and MD. We did not undertake meta‐analysis because the included studies assessed different sorts of drugs or different outcomes. We performed an intention‐to‐treat (ITT) analysis. We included 13 small RCTs (479 participants) in this review (at least 274 participants were women, with 118 parturients after a lumbar puncture for regional anaesthesia). In the original version of this Cochrane review, only seven small RCTs (200 participants) were included. Pharmacological drugs assessed were oral and intravenous caffeine, subcutaneous sumatriptan, oral gabapentin, oral pregabalin, oral theophylline, intravenous hydrocortisone, intravenous cosyntropin and intramuscular adrenocorticotropic hormone (ACTH). Two RCTs reported data for PDPH persistence of any severity at follow‐up (primary outcome). 
Caffeine reduced the number of participants with PDPH at one to two hours when compared to placebo. Treatment with caffeine also decreased the need for a conservative supplementary therapeutic option. Treatment with gabapentin resulted in better visual analogue scale (VAS) scores after one, two, three and four days when compared with placebo and also when compared with ergotamine plus caffeine at two, three and four days. Treatment with hydrocortisone plus conventional treatment showed better VAS scores at six, 24 and 48 hours when compared with conventional treatment alone and also when compared with placebo. Treatment with theophylline showed better VAS scores compared with acetaminophen at two, six and 12 hours and also compared with conservative treatment at eight, 16 and 24 hours. Theophylline also showed a lower mean "sum of pain" when compared with placebo. Sumatriptan and ACTH did not show any relevant effect for this outcome. Theophylline resulted in a higher proportion of participants reporting an improvement in pain scores when compared with conservative treatment. There were no clinically significant drug adverse events. The rest of the outcomes were not reported by the included RCTs or did not show any relevant effect. None of the new included studies have provided additional information to change the conclusions of the last published version of the original Cochrane review. Caffeine has shown effectiveness for treating PDPH, decreasing the proportion of participants with PDPH persistence and those requiring supplementary interventions, when compared with placebo. Gabapentin, hydrocortisone and theophylline have been shown to decrease pain severity scores. Theophylline has also been shown to increase the proportion of participants that report an improvement in pain scores when compared with conventional treatment. There is a lack of conclusive evidence for the other drugs assessed (sumatriptan, adrenocorticotropic hormone, pregabalin and cosyntropin). 
These conclusions should be interpreted with caution, due to the lack of information to allow correct appraisal of risk of bias, the small sample sizes of the studies and also their limited generalisability, as nearly half of the participants were postpartum women in their 30s. |
t25 | We reviewed the evidence about the effect of bracing on pulmonary disorders (lung diseases), disability, back pain, quality of life, and psychological and cosmetic issues in adolescent with idiopathic scoliosis. We looked at randomized controlled trials (RCTs) and prospective controlled cohort studies (CCTs). Scoliosis is a condition where the spine is curved in three dimensions (from the back the spine appears to be shaped like an 's' and the trunk is deformed). It is often idiopathic, which means the cause is unknown. The most common type of scoliosis is generally discovered around 10 years of age or older, and is defined as a curve that measures at least 10° (called a Cobb angle; measured on x‐ray). Because of the unknown cause and the age of diagnosis, it is called adolescent idiopathic scoliosis (AIS). While there are usually no symptoms, the appearance of AIS frequently has a negative impact on adolescents. Increased curvature of the spine can present health risks in adulthood and in older people. Braces are one intervention that may stop further progression of the curve. They generally need to be worn full time, with treatment lasting until the end of growth (most frequently, from a minimum of two to four/five years). However, bracing for this condition is still controversial, and questions remain about how effective it is. This review included seven studies, with a total of 662 adolescents of both genders. AIS from 15° to more than 45° curves were considered. Elastic, rigid (polyethylene), and very rigid (polycarbonate) braces were studied. Quality of life was not affected during brace treatment (very low quality evidence); quality of life, back pain, and psychological and cosmetic issues did not change in the long term (very low quality evidence). 
Rigid bracing seems effective in 20° to 40° curves (low quality evidence), elastic bracing in 15° to 30° curves (low quality evidence), and very rigid bracing in high degree curves above 45° (very low quality evidence); rigid was more successful than an elastic bracing (low quality evidence), and a pad pressure control system did not increase results (very low quality evidence). Primary outcomes such as pulmonary disorders, disability, back pain, psychological and cosmetic issues, and quality of life should be better evaluated in the future. Side effects, as well as the usefulness of exercises and other adjunctive treatments to bracing should be studied too. | Idiopathic scoliosis is a three‐dimensional deformity of the spine. The most common form is diagnosed in adolescence. While adolescent idiopathic scoliosis (AIS) can progress during growth and cause a surface deformity, it is usually not symptomatic. However, in adulthood, if the final spinal curvature surpasses a certain critical threshold, the risk of health problems and curve progression is increased. Objectives To evaluate the efficacy of bracing for adolescents with AIS versus no treatment or other treatments, on quality of life, disability, pulmonary disorders, progression of the curve, and psychological and cosmetic issues. Search methods We searched CENTRAL, MEDLINE, EMBASE, five other databases, and two trials registers up to February 2015 for relevant clinical trials. We also checked the reference lists of relevant articles and conducted an extensive handsearch of grey literature. Selection criteria Randomized controlled trials (RCTs) and prospective controlled cohort studies comparing braces with no treatment, other treatment, surgery, and different types of braces for adolescent with AIS. Data collection and analysis We used standard methodological procedures expected by The Cochrane Collaboration. We included seven studies (662 participants). 
Five were planned as RCTs and two as prospective controlled trials. One RCT failed completely, another was continued as an observational study, reporting also the results of the participants that had been randomized. There was very low quality evidence from one small RCT (111 participants) that quality of life (QoL) during treatment did not differ significantly between rigid bracing and observation (mean difference (MD) ‐2.10, 95% confidence interval (CI) ‐7.69 to 3.49). There was very low quality evidence from a subgroup of 77 adolescents from one prospective cohort study showing that QoL, back pain, psychological, and cosmetic issues did not differ significantly between rigid bracing and observation in the long term (16 years). Results of the secondary outcomes showed that there was low quality evidence that rigid bracing compared with observation significantly increased the success rate in 20° to 40° curves at two years' follow‐up (one RCT, 116 participants; risk ratio (RR) 1.79, 95% CI 1.29 to 2.50). There was low quality evidence that elastic bracing increased the success rate in 15° to 30° curves at three years' follow‐up (one RCT, 47 participants; RR 1.88, 95% CI 1.11 to 3.20). There is very low quality evidence from two prospective cohort studies with a control group that rigid bracing increases the success rate (curves not evolving to 50° or above) at two years' follow‐up (one study, 242 participants; RR 1.50, 95% CI 1.19 to 1.89) and at three years' follow‐up (one study, 240 participants; RR 1.75, 95% CI 1.42 to 2.16). There was very low quality evidence from a prospective cohort study (57 participants) that very rigid bracing increased the success rate (no progression of 5° or more, fusion, or waiting list for fusion) in adolescents with high degree curves (above 45°) (one study, 57 adolescents; RR 1.79, 95% CI 1.04 to 3.07 in the intention‐to‐treat (ITT) analysis). 
There was low quality evidence from one RCT that a rigid brace was more successful than an elastic brace at curbing curve progression when measured in Cobb degrees in low degree curves (20° to 30°), with no significant differences between the two groups in the subjective perception of daily difficulties associated with wearing the brace (43 girls; risk of success at four years' follow‐up: RR 1.40, 1.03 to 1.89). Finally, there was very low quality evidence from one RCT (12 participants) that a rigid brace with a pad pressure control system is no better than a standard brace in reducing the risk of progression. Only one prospective cohort study (236 participants) assessed adverse events: neither the percentage of adolescents with any adverse event (RR 1.27, 95% CI 0.96 to 1.67) nor the percentage of adolescents reporting back pain, the most common adverse event, were different between the groups (RR 0.72, 95% CI 0.47 to 1.10). Due to the important clinical differences among the studies, it was not possible to perform a meta‐analysis. Two studies showed that bracing did not change QoL during treatment (low quality), and QoL, back pain, and psychological and cosmetic issues in the long term (16 years) (very low quality). All included papers consistently showed that bracing prevented curve progression (secondary outcome). However, due to the strength of evidence (from low to very low quality), further research is very likely to have an impact on our confidence in the estimate of effect. The high rate of failure of RCTs demonstrates the huge difficulties in performing RCTs in a field where parents reject randomization of their children. This challenge may prevent us from seeing increases in the quality of the evidence over time. 
Other designs need to be implemented and included in future reviews, including 'expertise‐based' trials, prospective controlled cohort studies, prospective studies conducted according to pre‐defined criteria such as the Scoliosis Research Society (SRS) and the international Society on Scoliosis Orthopedic and Rehabilitation Treatment (SOSORT) criteria. Future studies should increase their focus on participant outcomes, adverse effects, methods to increase compliance, and usefulness of physiotherapeutic scoliosis specific exercises added to bracing. |
t26 | People are living longer, however, the very old often have many health problems and disabilities which result in them living and eventually dying in care homes. Residents of such homes are highly likely to die there, making these places where palliative care is needed. Palliative care provides relief from pain and other distressing symptoms experienced by people reaching the end of life. Palliative care hopes to help people live as actively as possible until death, and their families cope with the illness and bereavement. The aim of this review was to see how effective palliative care interventions in care homes are, and to describe the outcome measures used in the studies. We found only three suitable studies (735 participants), all from the USA. There was little evidence that interventions to improve palliative care for older people in care homes improved outcomes for residents. One study found that palliative care increased bereaved family members' perceptions of the quality of care and another found lower discomfort for residents with dementia who were dying. There were problems with both of these findings. Two studies found that palliative care improved some of the ways in which care was given in the care home, however, we do not know if this resulted in better outcomes for residents. There is a need for more high quality research, particularly outside the USA. | Residents of nursing care homes for older people are highly likely to die there, making these places where palliative care is needed. Objectives The primary objective was to determine effectiveness of multi‐component palliative care service delivery interventions for residents of care homes for older people. The secondary objective was to describe the range and quality of outcome measures. 
Search methods The grey literature and the following electronic databases were searched: Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness (all issue 1, 2010); MEDLINE, EMBASE, CINAHL, British Nursing Index, (1806 to February 2010), Science Citation Index Expanded & AMED (all to February 2010). Key journals were hand searched and a PubMed related articles link search was conducted on the final list of articles. Selection criteria We planned to include Randomised Clinical Trials (RCTs), Controlled Clinical Trials (CCTs), controlled before‐and‐after studies and interrupted time series studies of multi‐component palliative care service delivery interventions for residents of care homes for older people. These usually include the assessment and management of physical, psychological and spiritual symptoms and advance care planning. We did not include individual components of palliative care, such as advance care planning. Data collection and analysis Two review authors independently assessed studies for inclusion, extracted data, and assessed quality and risk of bias. Meta analysis was not conducted due to heterogeneity of studies. The analysis comprised a structured narrative synthesis. Outcomes for residents and process of care measures were reported separately. Two RCTs and one controlled before‐and‐after study were included (735 participants). All were conducted in the USA and had several potential sources of bias. Few outcomes for residents were assessed. One study reported higher satisfaction with care and the other found lower observed discomfort in residents with end‐stage dementia. Two studies reported group differences on some process measures. 
Both reported higher referral to hospice services in their intervention group, one found fewer hospital admissions and days in hospital in the intervention group, the other found an increase in do‐not‐resuscitate orders and documented advance care plan discussions. We found few studies, and all were in the USA. Although the results are potentially promising, high quality trials of palliative care service delivery interventions which assess outcomes for residents are needed, particularly outside the USA. These should focus on measuring standard outcomes, assessing cost‐effectiveness, and reducing bias. |
t27 | Acute heart attacks and severe angina (heart pain) are usually due to blockages in the arteries supplying the heart (coronary arteries). These problems are collectively referred to as 'acute coronary syndrome' (ACS). ACS is very common and may lead to severe complications including death. Hyperbaric oxygen therapy (HBOT) involves people breathing pure oxygen at high pressures in a specially designed chamber. It is sometimes used as a treatment to increase the supply of oxygen to the damaged heart in an attempt to reduce the area of the heart that is at risk of dying. We searched the medical literature for any studies that reported the outcome of patients with ACS when treated with HBOT. All studies included patients with heart attack and some also included patients with severe angina. The dose of hyperbaric oxygen was similar in most studies. Overall, we found some evidence that people with ACS are less likely to die or to have major adverse events, and to have more rapid relief from their pain if they receive hyperbaric oxygen therapy as part of their treatment. However, our conclusions are based on relatively small randomised trials. Our confidence in these findings is further reduced because in most of these studies both the patients and researchers were aware of who was receiving HBOT and it is possible a 'placebo effect' has biased the result in favour of HBOT. HBOT was generally well‐tolerated. Some patients complained of claustrophobia when treated in small (single person) chambers and there was no evidence of important toxicity from oxygen breathing in any subject. One individual suffered damage to the eardrum from pressurisation. While HBOT may reduce the risk of dying, time to pain relief and the chance of adverse heart events in people with heart attack and unstable angina, more work is needed to be sure that HBOT should be recommended. 
| Acute coronary syndrome (ACS), includes acute myocardial infarction and unstable angina, is common and may prove fatal. Hyperbaric oxygen therapy (HBOT) will improve oxygen supply to the threatened heart and may reduce the volume of heart muscle that perishes. The addition of HBOT to standard treatment may reduce death rate and other major adverse outcomes. This an update of a review previously published in May 2004 and June 2010. Objectives The aim of this review was to assess the evidence for the effects of adjunctive HBOT in the treatment of ACS. We compared treatment regimens including adjunctive HBOT against similar regimens excluding HBOT. Where regimens differed significantly between studies this is clearly stated and the implications discussed. All comparisons were made using an intention to treat analysis where this was possible. Efficacy was estimated from randomised trial comparisons but no attempt was made to evaluate the likely effectiveness that might be achieved in routine clinical practice. Specifically, we addressed: Does the adjunctive administration of HBOT to people with acute coronary syndrome (unstable angina or infarction) result in a reduction in the risk of death? Does the adjunctive administration of HBOT to people with acute coronary syndrome result in a reduction in the risk of major adverse cardiac events (MACE), that is: cardiac death, myocardial infarction, and target vessel revascularization by operative or percutaneous intervention? Is the administration of HBOT safe in both the short and long term? Search methods We updated the search of the following sources in September 2014, but found no additional relevant citations since the previous search in June 2010 (CENTRAL), MEDLINE, EMBASE, CINAHL and DORCTHIM. Relevant journals were handsearched and researchers in the field contacted. We applied no language restrictions. 
Selection criteria Randomised studies comparing the effect on ACS of regimens that include HBOT with those that exclude HBOT. Data collection and analysis Three authors independently evaluated the quality of trials using the guidelines of the Cochrane Handbook and extracted data from included trials. Binary outcomes were analysed using risk ratios (RR) and continuous outcomes using the mean difference (MD) and both are presented with 95% confidence intervals. We assessed the quality of the evidence using the GRADE approach. No new trials were located in our most recent search in September 2014. Six trials with 665 participants contributed to this review. These trials were small and subject to potential bias. Only two reported randomisation procedures in detail and in only one trial was allocation concealed. While only modest numbers of participants were lost to follow‐up, in general there is little information on the longer‐term outcome for participants. Patients with acute coronary syndrome allocated to HBOT were associated with a reduction in the risk of death by around 42% (RR: 0.58, (95% CI 0.36 to 0.92), 5 trials, 614 participants; low quality evidence). In general, HBOT was well‐tolerated. No patients were reported as suffering neurological oxygen toxicity and only a single patient was reported to have significant barotrauma to the tympanic membrane. One trial suggested a significant incidence of claustrophobia in single occupancy chambers of 15% (RR of claustrophobia with HBOT 31.6, 95% CI 1.92 to 521). For people with ACS, there is some evidence from small trials to suggest that HBOT is associated with a reduction in the risk of death, the volume of damaged muscle, the risk of MACE and time to relief from ischaemic pain. 
In view of the modest number of patients, methodological shortcomings and poor reporting, this result should be interpreted cautiously, and an appropriately powered trial of high methodological rigour is justified to define those patients (if any) who can be expected to derive most benefit from HBOT. The routine application of HBOT to these patients cannot be justified from this review. |
t28 | The aim of this Cochrane Review was to find out what methods of skin preparation before caesarean section were most effective in preventing infection after the operation. We collected and analysed all studies that assessed the effectiveness of antiseptics used to prepare the skin before making an incision (or cut) for the caesarean section. We only included analysis of preparations that were used to prepare the surgical site on the abdomen before caesarean section; we did not look at handwashing by the surgical team, or bathing the mother. Infections of surgical incisions are the third most frequently reported hospital‐acquired infections. Women who give birth by caesarean section are exposed to infection from germs already present on the mother's own skin, or from external sources. The risk of infection following a caesarean section can be 10 times that of vaginal birth. Therefore, preventing infection by properly preparing the skin before the incision is made is an important part of the overall care given to women prior to caesarean birth. An antiseptic is a substance applied to remove bacteria that can cause harm to the mother or baby when they multiply. Antiseptics include iodine or povidone iodine, alcohol, chlorhexidine, and parachlorometaxylenol. They can be applied as liquids or powders, scrubs, paints, swabs, or on impregnated 'drapes' that stick to the skin, which the surgeon then cuts through. Non‐impregnated drapes can also be applied, once the skin has been scrubbed or swabbed, with the aim of reducing the spread of any remaining bacteria during surgery. It is important to know if some of these antiseptics or methods work better than others.
The review looked at what was best for women and babies when it came to important outcomes including: infection of the site where the surgeon cut the woman to perform the caesarean section; inflammation of the lining of the womb (metritis and endometritis); how long the woman stayed in hospital; and any other adverse effects, such as irritation of the woman's skin, or any reported impact on the baby. The evidence suggested that there was probably little or no difference between the various antiseptics in the incidence of surgical site infection, endometritis, skin irritation, or allergic skin reaction in the mother. However, in one study, there was a reduction in bacterial growth on the skin at 18 hours after caesarean section for women who received a skin preparation with chlorhexidine gluconate compared with women who received the skin preparation with povidone iodine, but more data are needed to see if this actually reduces infections for women. The available evidence from the trials that have been conducted was insufficient to tell us the best type of skin preparation for preventing surgical site infection following caesarean section. | The risk of maternal mortality and morbidity (particularly postoperative infection) is higher for caesarean section (CS) than for vaginal birth. With the increasing rate of CS, it is important to minimise the risks to the mother as much as possible. This review focused on different forms and methods of preoperative skin preparation to prevent infection. This review is an update of a review that was first published in 2012, and updated in 2014. Objectives To compare the effects of different antiseptic agents, different methods of application, or different forms of antiseptic used for preoperative skin preparation for preventing postcaesarean infection. 
Search methods For this update, we searched Cochrane Pregnancy and Childbirth’s Trials Register, ClinicalTrials.gov , the WHO International Clinical Trials Registry Platform ( ICTRP ) (27 November 2017), and reference lists of retrieved studies. Selection criteria Randomised and quasi‐randomised trials, evaluating any type of preoperative skin preparation agents, forms, and methods of application for caesarean section. Comparisons of interest in this review were between different antiseptic agents used for CS skin preparation (e.g. alcohol, povidone iodine), different methods of antiseptic application (e.g. scrub, paint, drape), different forms of antiseptic (e.g. powder, liquid), and also between different skin preparations, such as a plastic incisional drape, which may or may not be impregnated with antiseptic agents. Only studies involving the preparation of the incision area were included. This review did not cover studies of preoperative handwashing by the surgical team or preoperative bathing. Data collection and analysis Three review authors independently assessed all potential studies for inclusion, assessed risk of bias, and extracted the data using a predesigned form. We checked data for accuracy. We assessed the quality of the evidence using the GRADE approach. For this update, we included 11 randomised controlled trials (RCTs), with a total of 6237 women who were undergoing CS. Ten trials (6215 women) contributed data to this review. All included studies were individual RCTs. We did not identify any quasi‐ or cluster‐RCTs. The trial dates ranged from 1983 to 2016. Six trials were conducted in the USA, and the remainder in Nigeria, South Africa, France, Denmark, and Indonesia. The included studies were broadly methodologically sound, but raised some specific concerns regarding risk of bias in a number of cases. 
Drape versus no drape This comparison investigated the use of a non‐impregnated drape versus no drape, following preparation of the skin with antiseptics. For women undergoing CS, low‐quality evidence suggested that using a drape before surgery compared with no drape, may make little or no difference to the incidence of surgical site infection (risk ratio (RR) 1.29, 95% confidence interval (CI) 0.97 to 1.71; 2 trials, 1294 women), or length of stay in the hospital (mean difference (MD) 0.10 day, 95% CI ‐0.27 to 0.46 1 trial, 603 women). One‐minute alcohol scrub with iodophor drape versus five‐minute iodophor scrub without drape One trial compared an alcohol scrub and iodophor drape with a five‐minute iodophor scrub only, and reported no surgical site infection in either group (79 women, very‐low quality evidence). We were uncertain whether the combination of a one‐minute alcohol scrub and a drape reduced the incidence of endomyometritis when compared with a five‐minute scrub, because the quality of the evidence was very low (RR 1.62, 95% CI 0.29 to 9.16; 1 trial, 79 women). The available evidence from the trials that have been conducted was insufficient to tell us the best type of skin preparation for preventing surgical site infection following caesarean section. More high‐quality research is needed. We found four studies that were still ongoing. We will incorporate the results of these studies into this review in future updates. |
---
license: apache-2.0
task_categories:
- summarization
language:
- en
tags:
- biomedical
- health
- NLP
- summarization
- LLM
size_categories:
- 1K<n<10K
---
PlainFact-summary is a high-quality, human-annotated dataset designed for plain language summarization tasks, released together with the PlainQAFact factuality evaluation framework. It is collected from the Cochrane database, sampled from the CELLS dataset (Guo et al., 2024).

We also provide a sentence-level version, PlainFact, which splits the summaries into sentences with fine-grained explanation annotations. In total, there are 200 plain language summary-abstract pairs.
Here are explanations for the headings:
- Target_Summary_ID: The unique identifier of each summary-abstract pair.
- Target_Sentence: The plain language sentence/summary.
- Original_Abstract: The scientific abstract corresponding to each sentence/summary.
You can load our dataset as follows:
```python
from datasets import load_dataset

plainfact = load_dataset("uzw/PlainFact-summary")
```
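Once loaded, each row exposes the fields described above. The sketch below uses a mock record (the field values are illustrative placeholders, not actual dataset content) to show the expected layout, plus a naive sentence split of the kind the sentence-level PlainFact variant provides through human annotation:

```python
# Mock record illustrating the PlainFact-summary layout; real rows come
# from load_dataset("uzw/PlainFact-summary") instead.
record = {
    "Target_Summary_ID": "t0",
    "Target_Sentence": (
        "The skin patch is a small, thin, adhesive square. "
        "The vaginal ring is a flexible device inserted into the vagina."
    ),
    "Original_Abstract": "The contraceptive patch and vaginal ring ...",
}

# PlainFact (the sentence-level variant) pairs each summary sentence with
# its abstract; a naive split on ". " roughly approximates that step.
parts = [p.strip() for p in record["Target_Sentence"].split(". ") if p.strip()]
sentences = [p if p.endswith(".") else p + "." for p in parts]

print(len(sentences))  # → 2
```

Note that the naive split is only an approximation for demonstration; the released PlainFact provides human-verified sentence segmentation with explanation annotations.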
For detailed information regarding the dataset or the factuality evaluation framework, please refer to our GitHub repo and paper.
## Citation

If you use data from PlainFact or PlainFact-summary, please cite with the following BibTeX entry: