Columns: instruction (string, lengths 10-664) · context (string, lengths 1-5.66k) · response (string, lengths 1-3.34k) · category (1 class: closed_qa)
Detection of thyroid dysfunction in early pregnancy: Universal screening or targeted high-risk case finding?
Maternal subclinical hypothyroidism during pregnancy is associated with various adverse outcomes. Recent consensus guidelines do not advocate universal thyroid function screening during pregnancy but recommend testing high-risk pregnant women with a personal history of thyroid or other autoimmune disorders or with a family history of thyroid disorders. The objective of the study was to assess the efficacy of the targeted high-risk case-finding approach in identifying women with thyroid dysfunction during early pregnancy. DESIGN: This was a single-center cohort study. PATIENTS: We prospectively analyzed TSH, free T4 and free T3 in 1560 consecutive pregnant women during their first antenatal visit (median gestation 9 wk). We tested thyroperoxidase antibodies in 1327 (85%). We classified 413 women (26.5%), who had a personal history of thyroid or other autoimmune disorders or a family history of thyroid disorders, as a high-risk group. We examined whether testing only such a high-risk group would pick up most pregnant women with thyroid dysfunction. Forty women (2.6%) had raised TSH (>4.2 mIU/liter). The prevalence of raised TSH was higher in the high-risk group [6.8 vs. 1% in the low-risk group, relative risk (RR) 6.5, 95% confidence interval (CI) 3.3-12.6, P<0.0001]. Presence of a personal history of thyroid disease (RR 12.2, 95% CI 6.8-22, P<0.0001) or other autoimmune disorders (RR 4.8, 95% CI 1.3-18.2, P = 0.016), thyroperoxidase antibodies (RR 8.4, 95% CI 4.6-15.3, P<0.0001), and a family history of thyroid disorders (RR 3.4, 95% CI 1.8-6.2, P<0.0001) increased the risk of raised TSH. However, 12 of the 40 women with raised TSH (30%) were in the low-risk group.
Targeted thyroid function testing of only the high-risk group would miss about one third of pregnant women with overt/subclinical hypothyroidism.
closed_qa
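The headline figures in the thyroid-screening abstract above can be checked with simple arithmetic. A minimal sketch in Python, assuming (as the abstract implies) that 28 of the 40 raised-TSH cases fell in the 413-woman high-risk group and the remaining 12 in the low-risk group:

```python
# Recompute the relative risk and the missed fraction from the reported counts.
# Assumed counts, derived from the abstract: 40 women with raised TSH overall,
# 12 of them in the low-risk group; 413 of 1560 women classified as high risk.
total_n, high_risk_n = 1560, 413
total_cases, low_risk_cases = 40, 12

high_risk_cases = total_cases - low_risk_cases  # 28 cases in the high-risk group
low_risk_n = total_n - high_risk_n              # 1147 women in the low-risk group

# Relative risk = incidence in high-risk group / incidence in low-risk group.
rr = (high_risk_cases / high_risk_n) / (low_risk_cases / low_risk_n)
# Fraction of raised-TSH cases that targeted (high-risk-only) testing would miss.
missed = low_risk_cases / total_cases

print(round(rr, 1))      # 6.5 -- matches the reported relative risk
print(round(missed, 2))  # 0.3 -- about one third missed, as the answer states
```

This is only a consistency check on the published summary statistics, not a re-analysis of the study data.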
Multidetector CT in abdominal aortic aneurysm treated with endovascular repair: are unenhanced and delayed phase enhanced images effective for endoleak detection?
To retrospectively determine the sensitivity and specificity of unenhanced, delayed enhanced phase (DEP), and arterial enhanced phase (AEP) multi-detector row computed tomography (CT) for depicting endoleaks during follow-up of endovascular aneurysm repair. Fifty patients (two women, 48 men; mean age, 72 years) underwent follow-up multi-detector row CT 1, 6, and 12 months after endovascular aneurysm repair. Unenhanced CT was performed with 2.5-mm collimation; 1-mm collimation was used with AEP and DEP examinations. Two independent readers assessed the presence of endoleak in three reading sessions: AEP (session A), unenhanced and AEP (session B), and AEP and DEP (session C). At 6- and 12-month follow-up, a fourth set was included: 1-month unenhanced and AEP (session D). Sensitivity, specificity, and positive predictive value of each session were calculated. Triple-phase multi-detector row CT was the reference standard. At 1 month, sensitivity, specificity, and positive predictive value, respectively, were 79%, 75%, and 55% for session A; 93%, 97%, and 93% for session B; and 93%, 78%, and 62% for session C. At 6 months, sensitivity, specificity, and positive predictive value, respectively, were 92%, 68%, and 48% for session A; 92%, 100%, and 100% for session B; and 100%, 84%, and 67% for session C. At 12 months, sensitivity, specificity, and positive predictive value, respectively, were 80%, 80%, and 50% for session A; 90%, 98%, and 90% for session B; and 100%, 80%, and 56% for session C. Sensitivity did not significantly differ (P>.05) among reading sessions A, B, and C, whereas specificity and positive predictive values in session B were significantly higher (P<.001). For 6- and 12-month follow-up, no significant differences (P>.05) were found between sessions D and B.
The combination of AEP and unenhanced imaging performed at 1-month follow-up offers improved specificity and positive predictive values compared with AEP alone. DEP imaging does not significantly increase sensitivity for detection of endoleaks, but it does depict low-flow endoleaks not seen at AEP.
closed_qa
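Sensitivity, specificity, and positive predictive value, as reported per reading session in the endoleak study above, all derive from a 2x2 confusion matrix. A hedged sketch in Python; the counts below are illustrative (chosen to land near session B's 1-month figures), not taken from the study:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity and PPV from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate among patients with endoleak
    specificity = tn / (tn + fp)  # true-negative rate among patients without
    ppv = tp / (tp + fp)          # fraction of positive readings that are true
    return sensitivity, specificity, ppv

# Illustrative counts only: 14 patients with endoleak (13 detected, 1 missed)
# and 36 without (35 read correctly, 1 false positive).
sens, spec, ppv = diagnostic_metrics(tp=13, fp=1, tn=35, fn=1)
print(round(sens, 2), round(spec, 2), round(ppv, 2))  # 0.93 0.97 0.93
```

The same three formulas apply to every session/time-point combination in the abstract.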
Is the determination of the soluble interleukin-2 receptor after application of interleukin-2 receptor antibodies still appropriate for immunological monitoring after renal transplantation?
The use of monoclonal antibodies against the alpha-chain of the membrane-bound interleukin-2 receptor (IL-2Ralpha) as immune suppressants causes characteristic changes in the levels of soluble interleukin-2 receptor (sIL-2R) in serum and urine. Thirty-eight kidney transplant patients were included in this study. Twenty-eight of them received induction therapy with the IL-2R antibody basiliximab (Simulect) in addition to standard immunosuppression; 10 patients constituted the control group. Time courses of sIL-2R levels of Simulect patients with and without complications after transplantation were compared. Of a total of 18 cases with complications, 15 could be identified by their elevated sIL-2R levels, corresponding to a sensitivity of 83%.
Acute rejection, CMV infection, extrarenal bacterial infection and pyelonephritis in the transplant all cause a significant increase of the sIL-2R level even after application of Simulect.
closed_qa
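The sensitivity quoted in the sIL-2R study above follows directly from the reported case counts; a one-line check in Python (counts from the abstract: 15 of 18 complications flagged):

```python
# Reported: 15 of 18 post-transplant complications were accompanied by
# elevated sIL-2R levels.
detected, complications = 15, 18
sensitivity = detected / complications
print(f"{sensitivity:.0%}")  # 83% -- matches the reported sensitivity
```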
Are lower urinary tract symptoms associated with erectile dysfunction in aging males of Taiwan?
This study was conducted to evaluate the relationship between lower urinary tract symptoms (LUTS) and erectile dysfunction (ED) in aging males of Taiwan. A free health screening for aging males (≥45 years old) was conducted in Kaohsiung Medical University Chung-Ho Memorial Hospital in August 2004. LUTS and ED were assessed by validated symptom scales: the International Prostate Symptom Score (IPSS) and the International Index of Erectile Function-5 (IIEF-5). The subjects also completed a health and demographics questionnaire and underwent detailed physical examination, serum prostate-specific antigen level determination, and transrectal ultrasonography. The final study population consisted of 141 patients with a mean age of 59.8 years. The severity of LUTS and ED increased with age. After controlling for comorbidities, age (p<0.001) and IPSS score (p<0.001) were significantly associated with the IIEF-5 score. Furthermore, men with moderate to high IPSS scores were more likely to have ED as compared with those with mild symptoms after age adjustment (age-adjusted odds ratio 3.27, p=0.002).
ED and LUTS are highly prevalent in our study population, and this prevalence increases with age. ED is significantly associated with the severity of LUTS after controlling for age and comorbidities. These results highlight the clinical importance of evaluating LUTS in patients with ED and the need to consider sexual issues in the management of patients with benign prostatic hyperplasia.
closed_qa
Consumption of gluten-free products: should the threshold value for trace amounts of gluten be at 20, 100 or 200 p.p.m.?
The threshold of gluten contamination in gluten-free products of both dietary and normal consumption is under debate. The objective of this study was to gather information on consumption of gluten-free products intended for dietary use by people under a gluten-free diet. This information is essential to ascertain the exposure of coeliac patients to gluten through their diet and deduce the maximum gluten content that these products should contain to guarantee a safe diet. A diet diary of consumption of gluten-free products intended for dietary use was distributed to the coeliac societies of two typical Mediterranean countries (Italy and Spain) and two Northern countries (Norway and Germany). The diet diary included a self-weigh table of the selected food items and a 10-day consumption table. Results were reported in percentiles as distributions were clearly right-skewed. The respondents included in the study numbered 1359 in Italy, 273 in Spain, 226 in Norway and 56 in Germany. Gluten-free products intended for dietary use contributed significantly to the diet of coeliac patients in Italy, Germany and Norway and to a lesser degree in Spain. The most consumed gluten-free product in all countries was bread, and its consumption was twice as high in the Northern countries (P<0.001). Mediterranean countries showed consumption of a wider variety of gluten-free foods, and pasta was eaten to a large degree in Italy.
The differences between Northern and Mediterranean countries were not in the total amount of gluten-free products but in the type of products consumed. The observed daily consumption of gluten-free products results in the exposure to rather large amounts of gluten, thus the limit of 200 p.p.m. should be revised. A limit of 20 p.p.m. for products naturally gluten-free and of 100 p.p.m. for products rendered gluten-free is proposed to guarantee a safe diet and to enable coeliac patients to make an informed choice. These limits should be revised as new data become available.
closed_qa
Do early ionized calcium levels really matter in trauma patients?
Age, injury severity, and base deficit are commonly used prognostic indicators in trauma. This study investigates the relationship between ionized calcium (iCa) levels drawn on arrival to the emergency department and injury severity, acidosis, hypotension, and mortality. Adult trauma team activations requiring the highest level of response were identified retrospectively from January 2000 to December 2002. Patients were stratified into two groups: iCa ≤1 and iCa >1 mmol/L. The relationship between iCa and injury severity (Trauma Injury Severity Score, Injury Severity Score [ISS], Revised Trauma Scale, Glasgow Coma Scale), age, sampling time, shock (systolic blood pressure [SBP] <90 at the scene, during transport, and on admission; base deficit), resource utilization (hospital and intensive care unit length of stay, ventilator days) and mortality was examined. Statistical analysis included χ² tests, Wilcoxon rank sum tests, p<0.05 versus iCa >1, median (25th-75th percentile), and odds ratio (OR). In all, 396 of 2,367 patients were identified. Mortality was significantly increased in the iCa ≤1 group (26.4% versus 16.7%, p<0.05; OR 1.92). Time to death in the iCa ≤1 group was significantly shorter, 0.50 (0-1) versus 1.0 (0-6) days. Mortality was predicted using iCa ≤1 alone (p<0.02, OR 3.28), iCa ≤1 + base deficit (p<0.02, OR 2.00), and base deficit alone (p = 0.06, OR 1.5). Low iCa was associated with SBP <90 at the scene and during transport (p<0.01). The incidence of base deficit was higher in the iCa ≤1 group (p<0.05).
Low iCa is associated with prehospital hypotension regardless of age, ISS, or sampling time and is a better predictor of mortality than base deficit. Since acidosis reduces calcium binding to serum protein and actually increases iCa, the association between base deficit and iCa in this study requires further investigation.
closed_qa
Is fecal diversion necessary for nondestructive penetrating extraperitoneal rectal injuries?
Current management of penetrating extraperitoneal rectal injury includes diversion of the fecal stream. The purpose of this study is to assess whether nondestructive penetrating extraperitoneal rectal injuries can be managed successfully without diversion of the fecal stream. This study was performed at an urban Level I trauma center during a 28-month period from February 2003 through June 2005. All patients who suffered nondestructive penetrating extraperitoneal rectal injuries were managed with a diagnosis and treatment protocol that excluded fecal stream diversion. Patients were placed in one of two management arms based upon clinical suspicion for intraperitoneal injury. In the first arm, patients with suspicion for rectal injury and a positive clinical examination for intraperitoneal injuries were delivered to the operating room for exploratory laparotomy. Proctoscopy was performed before exploratory laparotomy. Extraperitoneal rectal injuries were left to heal by secondary intention. Intraperitoneal rectal injuries were repaired primarily. Patients did not receive fecal diversion or perineal drainage. In the second management arm, patients with a negative clinical examination for intraperitoneal injury and a wounding-agent trajectory suspicious for rectal injury underwent diagnostic peritoneal lavage (DPL), cystography, and proctoscopy in the emergency room. Positive DPL or cystography warranted laparotomy as above. Patients with positive proctoscopy alone were admitted and placed on a clear liquid diet. Barium enema was performed 5 to 7 days postinjury for all rectal injuries, with diets advanced accordingly. A matched historical control group of rectal injury patients who underwent fecal diversion was compared with the nondiversion protocol group. Patients from both groups were matched for penetrating abdominal trauma index (PATI), age and mechanism of injury.
There were 14 consecutive patients diagnosed with penetrating rectal injury placed in the nondiversion management protocol. Of these, 9 (64%) patients in the nondiversion group required laparotomy. The average age in the diversion historical control group was 30.5 years and 29.3 years in the nondiversion group. The average PATI in the diversion group was 15.3 and 16.1 in the nondiversion protocol group. The average length of stay for the diversion and nondiversion groups was 9.8 days (range, 7-15) and 7.2 days (range, 4-10), respectively. There were no complications associated with rectal injuries in either group.
Nondestructive penetrating rectal injuries can be managed successfully without fecal diversion. A randomized prospective study will be necessary to assess this management method.
closed_qa
Do trauma centers have the capacity to respond to disasters?
Concern has been raised about the capacity of trauma centers to absorb large numbers of additional patients from mass casualty events. Our objective was to examine the capacity of current centers to handle an increased load from a mass casualty disaster. This was a cross-sectional study of Level I and II trauma centers. They were contacted by mail and asked to respond to questions about their surge capacity as of July 4, 2005. Data were obtained from 133 centers. On July 4, 2005 there were a median of 77 beds available in Level I and 84 in Level II trauma centers. Fifteen percent of the Level I and 12.2% of the Level II centers had a census at 95% capacity or greater. In the first 6 hours, each Level I center would be able to operate on 38 patients, while each Level II center would be able to operate on 22 patients. Based on available data, there are 10 trauma centers available to an average American within 60 minutes. Given the available bed capacity, a total of 812 beds would be available within a 60-minute transport distance in a mass casualty event.
There is capacity to care for the number of seriously but non-fatally injured patients resulting from the types of mass casualties recently experienced. If the shift of uninsured patients to trauma centers and the fiscally driven closure of trauma centers continue, surge capacity could be severely compromised.
closed_qa
Safety belt use by law enforcement officers on reality television: a missed opportunity for injury prevention?
Although safety belt usage rates are increasing nationwide, motor vehicle crashes (MVCs) remain a leading cause of death for young people and are emerging as a leading cause for police officers specifically. A content analysis was performed on the television show, COPS, to determine on-air safety belt usage rates. A sample of 63 unique episodes of the reality-based television series, COPS, was viewed during a 4-month period (September 1, 2005 to January 1, 2006). Episodes had original airing dates ranging from 1990 to 2004. Safety belt usage status was determined per police officer per driving scene (N = 250). A driving scene represented a continuous trip (start to finish) with a total on-camera time exceeding 5 seconds. Scenes with indeterminate safety belt status were excluded. High-speed driving, officer gender, and officer race were also recorded. Of the 203 scenes included, 77 (38%) demonstrated safety belt usage. High-speed driving scenes had higher safety belt usage rates compared with low-speed (48% versus 29%, p = 0.005). More contemporary episodes (1999 to 2004) had higher safety belt usage rates as well (51% versus 28%, p = 0.001). Officer gender and race revealed no significant differences in safety belt usage rates (p = 0.930 and p = 0.900, respectively).
In this popular, reality-based television series, safety belt usage by police officers is extremely low. These findings suggest the need to increase safety belt usage by police officers, especially those filmed for television.
closed_qa
Is there significant regional variation in hospital antibiotic consumption in Germany?
Outpatient antibiotic use in Germany differs substantially between the eastern and southern parts of the country (low use) and the western part (high use). Whether similar regional variation exists in hospital antibiotic consumption is not known. We investigated this issue using a convenience sample of 145 hospitals providing data for the year 2003. Data on hospital consumption of systemic antibiotics in Anatomical Therapeutic Chemical (ATC) class J01 were obtained from acute care hospitals that participated in an IMS survey and had complete data (dispensed drugs and patient-days per year) for at least one non-pediatric, non-psychiatric department or ward. A total of 275 non-ICU surgical departments/wards, 229 non-ICU non-surgical (general medicine, haematology-oncology, neurology/stroke) departments/wards, and 184 ICUs were analysed. Data were expressed in DDD (WHO/ATC definition version 2003) or daily doses adapted for recommendations in hospitalized patients (RDD) per 100 patient-days (DDD/100 and RDD/100). The weighted mean over all departments/wards was 49.6 DDD/100 or 31.3 RDD/100, respectively. As expected, ICU antibiotic use density was much higher than use in non-ICU areas, and use in haematology-oncology was higher than in other non-surgical departments/wards. In univariate analyses, region, hospital bed-size category, university affiliation and haematology-oncology as specialty were associated with use density, but these associations were only partly confirmed in multivariate logistic regression analyses of factors associated with excess (≥75%) use density, which showed university affiliation and haematology-oncology, but not hospital location, to be independently associated with comparatively high use.
Antibiotic use density in German acute care hospitals does not appear to differ significantly between regions. Overall hospital consumption of antibiotics in this country appears to be similar to what has been described from other parts of Europe. In comparative analyses of hospital antibiotic consumption, data need to be adjusted at least for university affiliation and haematology-oncology.
closed_qa
Health-related quality of life after laparoscopically assisted vaginal hysterectomy: is uterine weight a major factor?
To assess uterine size, symptom severity, and hemoglobin level as determinants of health-related quality of life (HRQOL) in women subsequently undergoing laparoscopically assisted vaginal hysterectomy (LAVH). Sixty-one consecutive women with uterine leiomyoma or adenomyosis undergoing LAVH were studied using a prospective cohort design. The Chinese version of the Uterine Fibroid Symptom and Quality of Life (UFS-QOL) questionnaire was assessed preoperatively. The Taiwan brief version of the World Health Organization Quality of Life (WHOQOL-BREF) questionnaire and a self-assessment of perceived health status were assessed preoperatively and 1 day, 1 week, 12 weeks, and 18 months postoperatively. Women with a greater uterine weight did not report greater symptom severity than those with a lower uterine weight. Women with more severe symptoms had lower preoperative hemoglobin levels and were usually younger. Their perceived health status and their scores in the physical domain of the WHOQOL-BREF were also significantly lower, indicating a poorer HRQOL. The mixed-effects model found that a normal (higher) baseline hemoglobin level and greater symptom severity were associated with a significant improvement of HRQOL in the physical domain of the WHOQOL-BREF following LAVH.
Preoperative symptom severity, but not uterine weight, was associated with long-term improvement in HRQOL after LAVH. Women with severe symptoms could be considered for LAVH before development of anemia.
closed_qa
Are factors associated with subjective quality of life in people with severe mental illness consistent over time?
To investigate the cross-sectional relationship between subjective quality of life and sociodemographic, clinical and social factors over three points of assessment during a 6-year follow-up, and to investigate longitudinal predictors of subjective quality of life. We investigated a sample of people with severe mental illness (n = 92), mainly with a psychosis diagnosis, at baseline and at an 18-month and a 6-year follow-up. Measures included the Lancashire quality of life profile, Manchester short assessment of quality of life, Symptom Check List 90, Camberwell Assessment of Needs and the Interview Schedule for Social Interaction. Cross-sectionally, subjective quality of life was associated with self-reported symptoms, social network and unmet needs. However, these determinants varied in importance between points of assessment. Longitudinal predictors of subjective quality of life were changes in self-reported symptoms and social network.
There was a rather consistent set of determinants of subjective quality of life over time. Social network seems to be an important factor for improvements in subjective quality of life, though one largely overlooked in earlier studies in the field.
closed_qa
Do access experiences affect parents' decisions to enroll their children in Medicaid and SCHIP?
The Covering Kids and Families (CKF) program seeks to expand health insurance coverage for children by supporting community-based outreach and enrollment. For the evaluation of CKF, researchers conducted focus groups to explore parents' experiences accessing health care for their children, and to assess whether these experiences affected decisions to enroll their children in Medicaid or the State Children's Health Insurance Program (SCHIP). In May and June 2003, 13 focus groups were conducted in 5 cities--Everett, MA; Denver, CO; Los Angeles, CA; Mena, AR; and San Antonio, TX. In each community, groups were conducted with parents of children insured under Medicaid or SCHIP and parents of uninsured children. Three groups were conducted with Spanish-speaking parents in two communities--Denver and Los Angeles. Access to primary care was considered good by most parents with children in Medicaid and SCHIP. Among parents of uninsured children, there was more variation in perceptions of access to care. For parents of both uninsured and insured children, access to dentists and specialists was more problematic. Spanish-speaking families reported numerous barriers to care due to language differences and perceived discrimination. All focus group participants said that they placed great value on health insurance.
Even when parents encountered problems accessing care, very few indicated that this discouraged them from enrolling their children into Medicaid or SCHIP, or from renewing their children's public coverage.
closed_qa
Could a rural lifestyle decrease the prevalence of erectile dysfunction?
To determine the prevalence of erectile dysfunction (ED) in a specific population and explore potential correlates with lifestyle. This prospective observational study, covering the population of a very small rural town, included 2000 men aged ≥20 years from a total population of 121 831 (51% female and 49% male). The International Index of Erectile Function was completed by each of the 2000 men at their homes over a 1-year period. Another questionnaire assessing socio-economic status and health-related determinants of ED was also completed. All 2000 men completed the questionnaires; overall, only 34 reported ED (1.7%). The frequency of mild, mild to moderate, moderate and severe ED was 12%, 29%, 20% and 38%, respectively. Significantly more men aged >51 years had ED than those aged <41 years (0.05% and 0.45%, respectively; P<0.001). There was no difference in ED with salary levels.
The prevalence of ED in this particular rural population of Brazil was very low, at only 1.7%. Although ED increases with age, this association was not apparent for all age groups. It seems that several other factors, e.g. lifestyle, culture and diet, could be important for the onset of ED.
closed_qa
Functional urinary and fecal incontinence in neurologically normal children: symptoms of one 'functional elimination disorder'?
To clarify the relationship between disordered defecation and non-neuropathic bladder-sphincter dysfunction (NNBSD) by comparing the prevalence of symptoms of disordered defecation in children with NNBSD before and after treatment for urinary incontinence (UI), and assessing the effect of such symptoms on the cure rate for UI. In the European Bladder Dysfunction Study, a prospective multicentre study comparing treatment plans for children with NNBSD, 202 children completed questionnaires on voiding and on defecation, at entry and after treatment for UI. Four symptoms of disordered defecation were evaluated: low defecation frequency, painful defecation, fecal soiling, and encopresis. At entry, 17 of the 179 children with complete data sets had low defecation frequency and/or painful defecation (9%), classified as functional constipation (FC). Of the 179 children, 57 had either isolated fecal soiling or soiling with encopresis (32%), classified as functional fecal incontinence (FFI). After treatment for UI, FFI decreased to 38/179 (21%) (statistically significant, P = 0.035); for FC there were too few children for analysis. After treatment for UI, 19 of the 179 children (11%) reported de novo FFI. Symptoms of disordered defecation did not influence the cure rate of treatment for UI.
FFI improved significantly after treatment for UI only, but not in relation to the outcome of such treatment. FFI did not influence the cure rate for UI. There was little to support a causal relation between disordered defecation and NNBSD ('functional elimination syndrome').
closed_qa
Is it possible to use urodynamic variables to predict upper urinary tract dilatation in children with neurogenic bladder-sphincter dysfunction?
To investigate the possibility of using urodynamic variables to predict upper urinary tract dilatation (UUTD) in children with neurogenic bladder-sphincter dysfunction (NBSD). The study included 200 children with NBSD, of whom 103 had UUTD and 97 did not; they were examined using routine urological, neurological and urodynamic methods. The group with UUTD was divided into three subgroups (groups 1-3, from mild to severe hydronephrosis). A urodynamic risk score (URS) was calculated, including a detrusor leak-point pressure (DLPP) of >40 cmH2O, a bladder compliance (BC) of <9 mL/cmH2O and evidence of acontractile detrusor (ACD). The postvoid residual urine volume (PVR), DLPP, and incidences of ACD and of DLPP >40 cmH2O were greater, and the BC significantly less, in groups 1-3 than in the control group. Moreover, the BC decreased, while the PVR, DLPP and the incidence of DLPP >40 cmH2O were significantly higher, in group 3 than in group 2. The relative safe cystometric capacity of groups 2 and 3 was lower, respectively, than that of the control group and group 1, and the relative unsafe cystometric capacity (RUCC) and relative risk rate of cystometric capacity (RRRCC) increased significantly with the severity of UUTD. The maximum detrusor pressure on voiding or at maximum flow rate, and the Abrams-Griffiths number for voluntary contractile bladders, of the UUTD group were significantly higher than those of the control group. There was a positive correlation between URS and UUTD.
The selective use of urodynamic variables might be valuable for predicting the risk of UUTD in children with NBSD. Decreased BC, and increased DLPP and ACD are the main urodynamic risk factors, and they reciprocally increase the occurrence and grades of UUTD. The grades of UUTD are compatible with increases in RUCC, RRRCC and URS.
closed_qa
Does the clinical efficacy of vardenafil correlate with its effect on the endothelial function of cavernosal arteries?
To investigate whether the results of the ultrasonographic (US) measurement of post-occlusive changes in the diameters of cavernosal arteries after administering the phosphodiesterase type 5 (PDE-5) inhibitor vardenafil could be associated with the response to vardenafil in patients with erectile dysfunction (ED), as currently there are no reliable methods for predicting the success rate of oral PDE-5 inhibitors. The study included 122 men with ED; after a complex evaluation, the endothelial function of the cavernosal arteries was assessed in all patients before and 1 h after oral ingestion of vardenafil (20 mg), using our modification of the US assessment of post-occlusive changes in the diameter of cavernosal arteries. After the evaluation, all patients received vardenafil 20 mg on demand for 4 weeks. A successful response was defined using two endpoints, i.e. the normalization of the International Index of Erectile Function Erectile Function domain score (≥26) and positive answers to both Sexual Encounter Profile questions 2 and 3 on ≥75% of occasions, based on the diary data collected. In all patients the mean (sd) initial percentage increase in the cavernosal artery diameter (PICAD) in responders and nonresponders was not statistically different, at 49 (24) and 43 (26), respectively (P = 0.168), but PICAD values after vardenafil were significantly greater in responders, at 73 (16) vs 55 (23) (P<0.001). Analysis of data from patients with different causes of ED showed statistically significant differences in PICAD between responders and nonresponders only in those with arteriogenic ED. The sensitivity and specificity of a PICAD of ≥50% after taking vardenafil 20 mg for predicting a positive response to the same dose of the drug in patients with arteriogenic ED were 94.9% and 91.3%, respectively.
The results of the US assessment of post-occlusive changes in the diameter of cavernosal arteries after vardenafil administration are significantly associated with the clinical efficacy of the drug in patients with arteriogenic ED.
closed_qa
Is early age-related macular degeneration related to carotid artery stiffness?
Atherosclerosis and vascular stiffness have been implicated in the pathogenesis of age-related macular degeneration (AMD). The association of carotid artery stiffness, a measure of arterial elasticity reflecting early atherosclerosis, with early AMD was examined in this study. A population-based, cross-sectional study of 9954 middle-aged people (age range 51-72 years). The presence of AMD signs was determined from fundus photographs according to the Wisconsin grading protocol. Carotid arterial stiffness was measured by high-resolution ultrasonic echo tracking of the left common carotid artery, and was defined as the adjusted arterial diameter change (AADC, in μm). A smaller AADC reflects greater carotid artery stiffness. The associations of pulse pressure and carotid artery intima-media thickness (IMT) with early AMD signs were also analysed. In the study population, 454 (4.6%) had early AMD. The mean (SD) AADC was 403 (127) μm. After adjusting for age, sex, race/centre, education, cigarette smoking, fasting glucose, lipid profile and inflammatory markers, a smaller AADC was found not to be associated with early AMD (odds ratio 0.94; 95% confidence interval 0.71 to 1.25) or its component lesions. Other measures of arterial stiffness (pulse pressure) and atherosclerosis (carotid IMT) were also not associated with early AMD.
Carotid artery stiffness was not associated with signs of early AMD in this middle-aged population. These data provide no evidence of a link between age-related elastoid changes and early atherosclerotic processes in the carotid arteries and early AMD.
closed_qa
Spinal fusion surgery in children with non-idiopathic scoliosis: is there a need for routine postoperative ventilation?
The perioperative management of children with non-idiopathic scoliosis undergoing spinal deformity surgery has not been standardized, and the current practice is to routinely ventilate these patients in the postoperative period. This study reports the experience from a single institution and evaluates the need and reasons for postoperative ventilation. Details of ventilated patients are presented. All patients undergoing spinal fusion surgery for non-idiopathic scoliosis were recorded prospectively (2003-4). Patients were anaesthetized according to a standardized technique. Physical characteristics, cardiopulmonary function, intraoperative blood loss and fluid requirement, postoperative need for ventilation and all perioperative adverse events were recorded on a computer database. A total of 76.2% of patients were safely extubated at the end of surgery without any further complications or need for re-ventilation; 23.8% of patients required postoperative ventilation, with half of these cases planned preoperatively, and 40% of all patients with Duchenne muscular dystrophy (DMD) required postoperative ventilation. No specific factors predicted the need for postoperative ventilation, although children with DMD and those with a preoperative forced vital capacity of <30% showed an increased tendency to require it.
Early extubation can be safely performed after spinal deformity surgery for non-idiopathic scoliosis. The use of short-acting anaesthetics, drugs to reduce blood loss, experienced spinal anaesthetists and the availability of intensive care support are all essential for a good outcome in patients with neuromuscular disease and cardiopulmonary co-morbidity.
closed_qa
Does quality of life of COPD patients as measured by the generic EuroQol five-dimension questionnaire differentiate between COPD severity stages?
To assess the discriminative properties of the EuroQol five-dimension questionnaire (EQ-5D) with respect to COPD severity according to Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria in a large multinational study. Baseline EQ-5D visual analog scale (VAS) scores, EQ-5D utility scores, and St. George Respiratory Questionnaire scores were obtained from a subset of patients in the Understanding the Potential Long-term Impact on Function with Tiotropium trial, which was a 4-year placebo-controlled trial designed to assess the effect of tiotropium on the rate of decline in FEV1 in COPD patients aged ≥40 years, with an FEV1 of <70% predicted, an FEV1/FVC ratio of ≤70%, and a smoking history of ≥10 pack-years. A total of 1,235 patients (mean postbronchodilator FEV1, 48.8% predicted) from 13 countries completed the EQ-5D. The EQ-5D VAS and utility scores differed significantly among patients in GOLD stages 2, 3, and 4, also after correction for age, sex, smoking, body mass index (BMI), and comorbidity (p<0.001). The mean EQ-5D VAS scores for patients in GOLD stages 2, 3, and 4 were 68 (SD, 16), 62 (SD, 17), and 58 (SD, 16), respectively. The mean utility scores were 0.79 (SD, 0.20) for patients in GOLD stage 2, 0.75 (SD, 0.21) for patients in GOLD stage 3, and 0.65 (SD, 0.23) for patients in GOLD stage 4. Effect sizes for the difference in utility scores between patients in GOLD stages 3 and 4 were more than twice as high as those for the difference between patients in GOLD stages 2 and 3. Gender, postbronchodilator FEV1 percent predicted, the number of hospital admissions and emergency department visits in the year prior to baseline measurements, measures of comorbidity, and BMI were independently associated with EQ-5D utility. EQ-5D utility scores also differed between patients from different countries. French patients especially had lower utility scores than US patients.
Utility scores calculated with the US value set were on average 5% higher than those calculated with the UK value set.
Increasing severity of COPD was associated with a significant decline in EQ-5D VAS scores and utility scores. These results demonstrate that a generic instrument can assess COPD impact on quality of life and that the scores discriminate between patient groups of known severity. These utility scores will be useful in cost-effectiveness assessments.
closed_qa
mini-PAT (Peer Assessment Tool): a valid component of a national assessment programme in the UK?
To design, implement and evaluate a multisource feedback instrument to assess Foundation trainees across the UK. mini-PAT (Peer Assessment Tool) was modified from SPRAT (Sheffield Peer Review Assessment Tool), an established multisource feedback (360°) instrument used to assess more senior doctors, as part of a blueprinting exercise of instruments suitable for assessment in Foundation programmes (first 2 years postgraduation). mini-PAT's content validity was assured by a mapping exercise against the Foundation Curriculum. Trainees' clinical performance was then assessed using 16 questions rated against a six-point scale on two occasions in the pilot period. Responses were analysed to determine internal structure, potential sources of bias and measurement characteristics. Six hundred and ninety-three mini-PAT assessments were undertaken for 553 trainees across 12 Deaneries in England, Wales and Northern Ireland. Two hundred and nineteen trainees were F1s or PRHOs and 334 were F2s. Trainees identified 5544 assessors, of whom 67% responded. The mean score for F2 trainees was 4.61 (SD = 0.43) and for F1s was 4.44 (SD = 0.56). An independent t test showed that the mean scores of these 2 groups were significantly different (t = -4.59, df 390, p<0.001). Forty-three F1s (19.6%) and 19 F2s (5.6%) were assessed as being below expectations for F2 completion. The factor analysis produced 2 main factors: one concerned clinical performance, the other humanistic qualities. Seventy-four percent of F2 trainees could have been assessed by as few as 8 assessors (95% CI +/-0.6) as they either scored an overall mean of 4.4 or above or 3.6 and below. Fifty-three percent of F1 trainees could have been assessed by as few as 8 assessors (95% CI +/-0.5) as they scored an overall mean of 4.5 or above or 3.5 and below.
Hierarchical regression controlling for the grade of trainee (year of the Foundation Programme) showed that bias related to the length of the working relationship, occupation of the assessor and the working environment explained 7% of the variation in mean scores (R squared change = 0.06, F change = 8.5, significant F change<0.001).
As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help reliably assess Foundation trainees.
closed_qa
Is radiation a risk factor for atherosclerosis?
The aim of the present paper was to study the role of irradiation in the atherosclerotic process in patients affected by Hodgkin and non-Hodgkin lymphoma. We studied 84 subjects, 42 with Hodgkin or non-Hodgkin disease and 42 controls. All 42 cases had been irradiated and were comparable in terms of risk factors for atherosclerosis. All 84 subjects underwent echo-color Doppler of the arterial axis (carotids, abdominal aorta, and femoral arteries), and the intima-media thickness was measured. The irradiated cases had a greater intima-media thickness in the carotid district, even after dividing them according to age and sex; males were affected more than females. The irradiated patients were at greater risk of developing cardiovascular events than the controls.
An echo-color Doppler of the carotid district is advisable in all patients who have been submitted to radiotherapy, and the patients with a significantly greater than normal intima-media thickness need a strict follow-up, and antioxidant or antiaggregant therapy should be considered.
closed_qa
Should patients with anemia and low normal or normal serum ferritin undergo colonoscopy?
Patients with unexplained iron deficiency anemia have a greater prevalence of colonic neoplasia, and should be evaluated for a colonoscopy. The approach to patients with anemia without iron deficiency remains unclear. To compare the prevalence of colonic neoplasia in anemic patients with normal ferritin (>50 ng/mL), to those with ferritin ≤50 ng/mL, and nonanemic individuals. Patients referred for colonoscopy for anemia evaluation were stratified into 3 groups: ferritin ≤50 ng/mL, 51-100 ng/mL, and >100 ng/mL. We compared these groups to each other, and to asymptomatic nonanemic individuals undergoing screening colonoscopy. The prevalence of advanced colonic neoplasia was determined for each group using existing records. During the study period, 414 patients who underwent colonoscopy for anemia evaluation and 323 nonanemic individuals who underwent colonoscopy for cancer screening met inclusion criteria. Study subjects were mostly men. The prevalence of advanced colonic neoplasia in subjects with ferritin 51-100 ng/mL was 7.2% (95% CI 2.4-17.9%), similar to 7.9% (95% CI 5.1-11.9%) in those with ferritin ≤50 ng/mL. The incidence of advanced colonic neoplasia in subjects with ferritin >100 ng/mL was 1.7% (95% CI 0.1-6.6%), similar to 1.2% (95% CI 0.4-3.3%) in the asymptomatic nonanemic group. After adjusting for age, patients with ferritin ≤50 ng/mL and 51-100 ng/mL were almost 5 times more likely to harbor advanced colonic neoplasia than the other groups. The addition of other laboratory parameters did not improve the predictive value of ferritin.
A ferritin cutoff of 100 ng/mL can be used to determine the need for colonoscopy in men with anemia.
closed_qa
Does RV lead positioning provide additional benefit to cardiac resynchronization therapy in patients with advanced heart failure?
The left ventricular (LV) stimulation site is currently recommended to position the lead at the lateral wall. However, little is known as to whether right ventricular (RV) lead positioning is also important for cardiac resynchronization therapy. This study compared the acute hemodynamic response to biventricular pacing (BiV) at two different RV stimulation sites: RV high septum (RVHS) and RV apex (RVA). Using micro-manometer-tipped catheter, LV pressure was measured during BiV pacing at RV (RVA or RVHS) and LV free wall in 33 patients. Changes in LV dP/dt(max) and dP/dt(min) from baseline were compared between RVA and RVHS. BiV pacing increased dP/dt(max) by 30.3 +/- 1.2% in RVHS and by 33.3 +/- 1.7% in RVA (P = n.s.), and decreased dP/dt(min) by 11.4 +/- 0.7% in RVHS and by 13.0 +/- 1.0% in RVA (P = n.s.). To explore the optimal combination of RV and LV stimulation sites, we assessed separately the role of RV positioning with LV pacing at anterolateral (AL), lateral (LAT), or posterolateral (PL) segment. When the LV was paced at AL or LAT, the increase in dP/dt(max) with RVHS pacing was smaller than that with RVA pacing (AL: 12.2 +/- 2.2% vs 19.3 +/- 2.1%, P<0.05; LAT: 22.0 +/- 2.7% vs 28.5 +/- 2.2%, P<0.05). There was no difference in dP/dt(min) between RVHS- and RVA pacing in individual LV segments.
RVHS stimulation has no overall advantage over RVA as an alternative stimulation site during BiV pacing. RVHS was equivalent to RVA in combination with the PL LV site, whereas RVA was superior to RVHS in combination with the AL or LAT LV site.
closed_qa
Is inducibility of atrial fibrillation after radio frequency ablation really a relevant prognostic factor?
The study was intended to assess the prognostic value of inducibility of atrial fibrillation (AF) after radio frequency ablation. Two hundred and thirty four patients with drug-resistant paroxysmal (n=165) or persistent AF (n=69) underwent either Lasso-guided segmental pulmonary vein isolation (n=83) or CARTO-guided left atrial circumferential ablation (n=151). After ablation, two attempts to induce AF (>1 min) by decremental coronary sinus stimulation were performed. Patients were followed for at least 6 months (median: 12.7 months). At 6 months of follow-up, 67% of patients with paroxysmal and 48% of patients with persistent AF were AF-free. Inducibility of AF was a significant predictor of AF recurrence in univariate [hazard ratio (HR)=2.32, P<0.001] and multivariable (HR=2.19, P<0.001) Cox regression analyses. The prognostic value of inducibility was present in both patients with paroxysmal (HR=2.38, P=0.001) and persistent AF (HR=1.91, P=0.034) and did not significantly differ between both ablation techniques. The sensitivity, specificity, positive, and negative predictive values of the AF induction test to predict the 6-month ablation outcome were 46.7, 75, 53.8, and 69.2%, respectively.
Inducibility of AF after ablation is a significant predictor of recurrent AF. However, owing to the low diagnostic accuracy of the AF induction test, non-inducibility does not qualify as reliable procedural endpoint.
closed_qa
Racial disparities in living kidney donation: is there a lack of willing donors or an excess of medically unsuitable candidates?
Live kidney donation is safe for healthy donors and an effective treatment for patients with end-stage renal disease. Many potential donors are referred for live kidney donation, but only a small percentage donate. This study aims to determine reasons for nondonation and establish if racial differences exist. A retrospective database and chart review of all patients that were referred for potential live kidney donation from January 1, 2000 to December 31, 2004 was conducted. In all, 30.3% of referred potential live kidney donors were lost to follow-up. Primary reasons for nondonation (n=1,050) included unsuitable donor health (43.1%) and recipient-based causes (41.3%). Immunologic incompatibility accounted for 9.7% of all nondonations. Racial differences indicated more African Americans had incompatible blood types (P=0.01) or ineligible recipients (26.7% vs. 14.4%, P<0.01). More non-African Americans donated (13.2% vs. 4.6%, P<0.01) or were halted because the potential recipient received another organ (living/cadaveric) (20.0% versus 7.9%, P<0.01). Nondonation due to overall donor health (including diabetes and hypertension) did not differ between races, but subanalysis indicated more African American nondonation was due to high body mass index (P=0.01).
Determining the reason behind nondonation is a first step towards understanding low rates of live kidney donation. More African American donor referrals are lost to follow-up, while rates of nondonation for other reasons were similar between races. This may indicate that African Americans are not more frequently medically unsuitable, but that the divergence in rates of live kidney donation is caused by a disparity in willingness to donate among African Americans.
closed_qa
Pain relief in early rehabilitation of rotator cuff tendinitis: any role for indirect suprascapular nerve block?
A total of 40 potential study subjects who complained of shoulder pain from rotator cuff tendinitis (RCT) were enrolled and randomly assigned to standard rehabilitation treatment plus SSNB (Group A) or to standard rehabilitation treatment alone (Group B). The UCLA shoulder rating scale was used to assess shoulder mobility on admission and discharge, and to calculate the percentage of potential improvement achieved during rehabilitation (effectiveness). A pain visual analog scale was used to serially assess pain. At the end of the trial, a self-report questionnaire evaluated whether patients could sleep and carry out everyday activities better than they could before treatment. Forty patients suffering from RCT entered the study. Those receiving the nerve block from the beginning of treatment, in addition to standard rehabilitation therapy, reported significantly less pain during physiotherapy and better final outcomes. During treatment with SSNBs, patients reported a greater reduction in the intensity of pain and better relief of pain during sleep and rehabilitation exercises in comparison with standard therapy alone. A statistically significant inverse correlation was found between shoulder pain and mobility.
The results indicate that combining nerve block with standard rehabilitative therapy may improve the final outcome of painful RCT. It decreased the severity and frequency of the perceived pain, restored more normal sleep patterns, and improved compliance with physiotherapy and the rehabilitation program. This combined approach thus appears to be an effective, safe and inexpensive therapeutic option for patients suffering from painful disabling shoulder tendinitis.
closed_qa
Is intraoperative touch imprint cytology of sentinel lymph nodes in patients with breast cancer cost effective?
Sentinel lymph nodes (SLNs) are generally evaluated postoperatively, requiring 5-7 days for assessment. SLNs can also be evaluated intraoperatively by using touch imprint cytology (TIC), thus providing the surgeon immediate feedback and allowing for concurrent completion node dissection (CND) for positive SLNs. The authors hypothesized that TIC, when compared with standard postoperative SLN assessment alone, would permit a cost-effective evaluation of SLNs in patients with clinically node-negative breast cancer. A decision-analysis model was created to compare TIC with standard postoperative SLN assessment alone. Sensitivity and specificity of TIC were determined prospectively from 342 patients who underwent SLN biopsy assessed by both techniques. Short-term health states associated with surgical staging were defined, and utilities were estimated using EuroQol-5D. Base-case analysis was performed to estimate quality-adjusted life years and the incremental cost-effectiveness ratio. Sensitivity analyses were performed to examine stability of model parameters. For each tumor stage, TIC was cost effective, and for patients with larger tumors (T3 and T4), TIC was the dominant strategy. The analysis was robust to changes in sensitivity and specificity of TIC, prevalence of metastasis, probability of complications, and cost. However, when utility associated with standard SLN assessment was 0.9 or greater, this became the preferred strategy.
TIC is cost effective for assessing SLN metastasis intraoperatively. For patients with larger tumors, it is not only more effective, but also less costly than standard SLN assessment alone. TIC may be particularly useful for patients who experience significant anxiety while awaiting results of standard SLN assessment.
closed_qa
Prevalence of asthma and allergic diseases in Sanliurfa, Turkey, and the relation to environmental and socioeconomic factors: is the hygiene hypothesis enough?
The prevalence of asthma and allergic diseases has been reported to be higher in urban than in rural areas between developed and underdeveloped countries and within any given country. Studies in Turkey have yielded different results for different regions. This study aimed to investigate the prevalence of asthma and atopy in Sanliurfa, Turkey, and the influence of environmental factors. We recruited 1108 children from different areas of Sanliurfa and administered the questionnaire of the International Study of Asthma and Allergies in Childhood. Items asking for socioeconomic data were also included. Skin prick and purified protein derivative tests were performed on the children. Measles antibodies were determined and feces were analyzed for parasites. The total prevalence of atopic diseases was 8.6% (n = 95/1108), asthma 1.9% (n = 21/1108), allergic rhinitis 2.9% (n = 32/1108), and allergic conjunctivitis 3.8% (n = 42/1108). The rate of atopic diseases was 5.6% (n = 32/573) in children attending schools in peripheral, less urban, slum areas, while it was 11.8% (n = 63/535) in those attending city-center schools (OR, 2.2; 95% confidence interval [CI], 1.4-3.5; P<.001). Skin prick test positivity was observed in 3.9% (n = 43/1108) overall; at schools in slum areas it was 1.9% (n = 11/573), whereas at central schools the rate was 6% (n = 32/535) (OR, 4.08; 95% CI, 2.03-8.20; P<.001). The prevalence of asthma and atopic diseases was significantly higher in children who have a family history of atopy, attend a central school, live in an apartment, have more rooms in their homes, and enjoy better economic conditions.
We found associations between asthma and various factors suggested by the hygiene hypothesis, together with very low prevalence rates of asthma and atopic diseases both in Sanliurfa as a whole, in comparison with the more developed western regions, and in its peripheral slum areas. The hygiene hypothesis is helpful in explaining these observations.
closed_qa
Chronic hepatitis C virus infection: is there a correlation between HCV genotypes and the level of viremia?
Hepatitis C virus (HCV) RNA status and HCV genotypes have become extremely important for exact diagnosis, prognosis, duration of treatment and monitoring of antiviral therapy of chronic HCV infection. For the purpose of precise and objective assessment of virologic analyses, such as the determination of the number of virus copies and virus genotypes, 110 patients with chronic HCV infection were tested. Genotyping of HCV isolates and HCV RNA quantification were performed by using the PCR method. Genotype 1b infection was verified in 49.1% of patients, genotype 3a infection was found in 28.2%, genotype 4 in 9.1%, genotype 2 in 4.5%, while mixed genotype infections were diagnosed in 9.1% of cases. Patients infected by genotype 1b had significantly higher serum HCV RNA levels than patients infected by other genotypes (p<0.05). Over 70% of patients infected by genotype 1b had more than 2 × 10^6 virus copies in 1 ml of blood, while in genotypes 2, 3a and 4, the percentage was 40%, 38.5% and 30%, respectively. Male patients had approximately 7.7 × 10^6 virus copies in 1 ml of blood, which was significantly higher in comparison with female patients (2.3 × 10^6 copies/ml; p<0.05).
Our results are in concordance with those of other authors reporting that genotype 1b is predominant in Europe, as well as a significantly higher level of viremia in patients with genotype 1b infection relative to other HCV genotypes. Based on these results, we can conclude that our patients most commonly present with a severe clinical course of chronic HCV infection and require longer treatment (48 weeks), which causes economic problems.
closed_qa
Complex sleep apnea syndrome: is it a unique clinical syndrome?
Some patients with apparent obstructive sleep apnea hypopnea syndrome (OSAHS) have elimination of obstructive events but emergence of problematic central apneas or Cheyne-Stokes breathing pattern. Patients with this sleep-disordered breathing problem, which for the sake of study we call the "complex sleep apnea syndrome," are not well characterized. We sought to determine the prevalence of complex sleep apnea syndrome and hypothesized that the clinical characteristics of patients with complex sleep apnea syndrome would more nearly resemble those of patients with central sleep apnea syndrome (CSA) than those of patients with OSAHS. This was a retrospective review conducted at a sleep disorders center of 223 adults consecutively referred over 1 month, plus 20 consecutive patients diagnosed with CSA. Prevalence of complex sleep apnea syndrome, OSAHS, and CSA in the 1-month sample was 15%, 84%, and 0.4%, respectively. Patients with complex sleep apnea syndrome differed in gender from patients with OSAHS (81% vs 60% men, p<.05) but were otherwise similar in sleep and cardiovascular history. Patients with complex sleep apnea syndrome had fewer maintenance-insomnia complaints (32% vs 79%; p<.05) than patients with CSA but were otherwise not significantly different clinically. Diagnostic apnea-hypopnea index for patients with complex sleep apnea syndrome, OSAHS, and CSA was 32.3 +/- 26.8, 20.6 +/- 23.7, and 38.3 +/- 36.2, respectively (p = .005). Continuous positive airway pressure suppressed obstructive breathing, but residual apnea-hypopnea index, mostly from central apneas, remained high in patients with complex sleep apnea syndrome and CSA (21.7 +/- 18.6 in complex sleep apnea syndrome, 32.9 +/- 30.8 in CSA vs 2.14 +/- 3.14 in OSAHS; p<.001).
Patients with complex sleep apnea syndrome are mostly similar to those with OSAHS until one applies continuous positive airway pressure, after which they are left with very disrupted breathing and sleep. Clinical risk factors do not predict the emergence of complex sleep apnea syndrome, and the best treatment is not known.
closed_qa
Does living near heavy industry cause lung cancer in women?
The incidence of lung cancer among women is high in the highly industrialised area of Teesside in north-east England. Previous research has implicated industrial pollution as a possible cause. A study was undertaken to investigate whether prolonged residence close to heavy industry is associated with lung cancer among women in Teesside. Two hundred and four women aged <80 years with incident primary lung cancer and 339 age-matched community controls were recruited to a population-based case-control study. Life course residential, occupational, and active and passive smoking histories were obtained using an interviewer-administered questionnaire. The age-adjusted odds ratio (OR) for lung cancer among people living >25 years vs 0 years near (within 0-5 km) heavy industry in Teesside was 2.13 (95% CI 1.34 to 3.38). After adjustment for confounding factors the OR was 1.83 (95% CI 0.82 to 4.08) for >25 years or 1.10 (95% CI 0.96 to 1.26) for an additional 10 years living near industry. ORs were similar after residence near heavy industry outside Teesside was also included, and when latency was allowed for by disregarding residential exposures within the last 20 years. Adjustment for active smoking had the greatest effect on the OR.
This population-based study, using life grid interviews for life course exposure assessment, has addressed many deficiencies in the design of previous studies. The findings are consistent with most of the international literature, which reports a modestly raised risk of lung cancer with prolonged residence close to heavy industry, although the confidence intervals were wide. The effect of air pollution on the incidence of lung cancer merits continued study.
closed_qa
Can computers improve patient care by primary health care workers in India?
The objective was to test whether a decision support technology for non-physicians can increase health care utilization and quality. Before-and-after measurements were taken from a systematic random sample of patients and staff at randomly assigned intervention and control facilities. The study took place at primary health facilities in rural Tamil Nadu, India. One thousand two hundred and eighty-six patients and 82 staff were interviewed. A computer-assisted decision support technology was introduced to assist with patient screening. Outcome measures included new patient visits per month, a Global Patient Assessment of Care Index, and health worker attitude variables. There was a difference-in-differences of 430 new patient visits per month at the intervention sites (P = 0.005), an increase from baseline of 18% at intervention sites compared with a decline of 5% at control sites. The intervention was associated with significant improvements in the Global Patient Assessment of Care Index (mean difference-in-differences 7.9, P<0.001). The largest gains were made in patient communication, technical quality, and general satisfaction with care. The attitudes of public health workers toward the new technology and their jobs did not change.
Decision support technologies have considerable potential to improve coverage and quality of health care for the poor and where there is no doctor, but the unreceptive attitude of public health workers would need to be overcome. Application of these technologies should take advantage of their popularity with patients and the opportunity to work through the private sector.
closed_qa
Standardization of pelvic lymphadenectomy performed at radical cystectomy: can we establish a minimum number of lymph nodes that should be removed?
The number of lymph nodes (LNs) removed during radical cystectomy (RC) for transitional cell carcinoma (TCC) of the bladder affects overall and disease-specific survival, but no consensus exists regarding the minimum number of LNs that should be removed. The goal of the current study was to determine if a threshold number of nodes exists, above which taking additional LNs has no clinical benefit. A total of 1121 patients were identified who underwent RC for clinically localized TCC of the bladder between January 1990 and April 2004. To determine the relation of LNs removal and overall survival, a Cox proportional hazards model was used with pathologic stage, age, and comorbidity as covariates. A dose-response curve, adjusted for covariates, was modeled to assess the impact of an increasing number of LNs removed on overall survival. A median of 9 LNs were removed (range, 0-53 LNs). In multivariable analysis, all covariates (number of LNs removed, age, stage of disease, and comorbidity) were found to be predictive of survival. The dose-response curve for number of LNs versus survival revealed that, when adjusted for covariates, the probability of survival did not plateau but instead continued to rise as the number of LNs removed increased.
No evidence was found that a minimum number of LNs is sufficient for optimizing bladder cancer outcomes when a limited or extended pelvic LN dissection is performed during RC. Instead, the probability of survival continues to rise as the number of LNs removed increases. This study supports a more extended LN dissection at the time of RC, and highlights the challenges of interpreting retrospective LN dissection data.
closed_qa
Is the ankle-brachial index a useful screening test for subclinical atherosclerosis in asymptomatic, middle-aged adults?
Measurement of the ankle-brachial index (ABI) is recommended as a screening test for cardiovascular risk prediction in individuals ≥50 years old; however, there is little data regarding the utility of the ABI as a screening test in individuals for whom physicians actually order non-invasive testing for cardiovascular risk prediction. This study included 493 consecutive asymptomatic patients without known atherosclerotic vascular disease who were referred by their physician for measurement of the ABI and ultrasound measurement of carotid intima-media thickness (CIMT). ABI values were classified as "reduced" (<0.9), "normal" (0.9-1.3), and "increased" (>1.3). The mean age of the patients was 55.3 (standard deviation 7.5) years. Only 1 patient had a reduced ABI (0.2%). ABI values tended to be higher in those with increased CIMT (P=0.051); however, CIMT was not significantly different between those with normal and increased ABI values (P=0.802). There were no significant differences in the prevalence of traditional cardiovascular risk factors or carotid plaque presence among the ABI groups.
Despite recommendations, the ABI is not sensitive as a screening tool for detecting subclinical atherosclerosis in asymptomatic middle-aged individuals.
closed_qa
Does N-acetylcysteine prevent contrast-induced nephropathy during endovascular AAA repair?
To examine if N-acetylcysteine (NAC) reduces the incidence of contrast nephropathy during endovascular abdominal aortic aneurysm repair (EVAR) as evidenced by changes in markers of renal function. Twenty consecutive men (mean age 72 years, range 65-79) undergoing EVAR were randomized to receive standard intravenous fluid hydration or standard fluid hydration and NAC (600 mg BID orally, 4 doses). Venous blood and urine were collected prior to the procedure and for 5 postoperative days and analyzed blindly for serum creatinine, urinary retinol-binding protein (RBP), and albumin/creatinine ratio (ACR). There were no significant differences in baseline demographics between the groups. No patient developed acute renal failure. In both groups, urinary RBP rose significantly from baseline (median 15 microg/mmol to peak 699 microg/mmol in controls versus 17 to 648 microg/mmol in the treatment group, p<0.003). There were similar significant rises in ACR (p<0.02). There was, however, no significant difference in the postoperative RBP or ACR between the groups at any time point.
EVAR causes significant acute renal injury in most patients. This was not attenuated by N-acetylcysteine. The causes of renal injury are probably multifactorial, the long-term clinical significance of which is unclear.
closed_qa
Does perioperative blood transfusion affect survival in patients with cervical cancer treated with radical hysterectomy?
To determine if blood transfusions during or after radical hysterectomy adversely affect survival in patients with invasive cervical carcinoma. Two hundred eighty-three women with stage IA2-IIA cervical cancer were treated with radical hysterectomy and pelvic lymphadenectomy from 1980-1989. Thirteen were lost to follow-up, and five others received adjuvant chemotherapy. Among the remaining 265 patients, 131 were given blood transfusions during surgery or within 30 days, whereas 134 were not. The clinical and pathologic characteristics of these two groups were reviewed and analyzed statistically. Transfused and nontransfused patients did not differ with respect to mean age (45.0 versus 43.4 years, respectively), stage, grade, cell type, depth of invasion, or prevalence of nodal metastasis. Transfused patients more frequently received adjuvant pelvic irradiation than did nontransfused (47% versus 33%, respectively; chi-square, P<.05). After a mean follow-up of 51 months (range 13-125), 19 women (14%) in each group were diagnosed as having recurrent disease, predominantly in the pelvis. Using life-table analysis, the calculated 5-year survival was 86% for transfused and 84% for nontransfused patients, a nonsignificant difference. Disease-free survival was also similar. In the study population, grade, depth of invasion, and nodal status predicted survival. When patients were stratified according to age, cell type, stage, depth of invasion, nodal involvement, and use of adjuvant radiation, blood transfusion still did not adversely influence survival. Using the Cox proportional hazards model, only nodal status was an independent predictor of death.
Perioperative blood transfusion does not impact overall survival or time to recurrence after radical hysterectomy.
closed_qa
Vaccine information pamphlets: more information than parents want?
To assess the information needs of parents regarding childhood immunizations, and their satisfaction with the Vaccine Information Pamphlets (VIPs). Verbally administered, forced-choice survey of a representative sample. Urban teaching hospital-primary care center (N = 73), neighborhood health center (N = 75), and a suburban private practice (N = 75). Parents or guardians of children scheduled for routine checkups, aged 1 month to 18 years, presenting for routine health care maintenance visits. Of 227 parents, 223 completed the survey. Almost all (98%) had prior experience with their children's immunizations, and 7% reported a history of a "bad" experience. Most parents stated that it was "very important" to receive information about immunizations regarding: diseases prevented by the immunizations (89%); common side effects (91%); serious side effects (89%); contraindications (91%). Eighty percent of parents indicated they wanted immunization information discussed with each vaccination. Forty-three percent of the parents were familiar with the VIPs; of these, 88% reported that the amount of information was "just right," and 94% thought the VIPs were helpful. However, 29% thought the VIPs were either too long, or somewhat too long.
Parents indicate that they want information about many aspects of immunizations, and those familiar with the VIPs report high levels of satisfaction with the pamphlets.
closed_qa
Features of good consultation in general practice: is time important?
To relate specifically defined 'good consultations' (GC) to length of consultation, continuity, patients' age and sex, and different doctors. A questionnaire about consultation length, communication and problem character, given to doctors and patients immediately after consultations. The number of GCs for different doctors in relation to time, continuity, and patients' age and sex were calculated. 581 consultations were registered with six male general practitioners working at three different health centres in Umeå, a university town in northern Sweden. A significant difference in the number of GCs was found only between the doctors (p<0.01). Length of consultation, patients' age and sex, and continuity had no impact on the GC frequency.
The doctor as a person and his working style are most important in achieving good consultations in general practice. Length of consultation is less influential.
closed_qa
Do pregnant women who report a reduction in cigarette consumption consume less tobacco?
To investigate the relationship between changes in self-reported cigarette consumption and changes in serum thiocyanate among pregnant and non-pregnant women who participated in a smoking cessation trial. Intervention study. General practitioners in western Norway. 146 pregnant and 102 non-pregnant women who were daily smokers at inclusion. Self-reported cigarette consumption and serum thiocyanate were recorded at inclusion and after 12 months. Women who smoked in the first trimester of pregnancy reported 21% less cigarette consumption than non-pregnant women. This was in accordance with the serum thiocyanate values. Twelve months later the mean values of serum thiocyanate had increased irrespective of whether the postpartum women reported that they had reduced, increased, or not changed their cigarette consumption. However, among those who reported that they had stopped smoking, analyses of serum thiocyanate confirmed their statements, with very few exceptions. Among nonpregnant women, the serum thiocyanate changed in accordance with the reported changes in cigarette consumption in all groups.
Women smoking daily in the first trimester of pregnancy had a lower exposure to tobacco than daily smoking non-pregnant women. Twelve months later (six months after delivery) analyses of serum thiocyanate indicated that postpartum women underestimated their tobacco consumption.
closed_qa
Prenatal and postpartum Pap smears: do we need both?
The need to perform a Pap smear at the time of entry to prenatal care, as well as at the postpartum check-up, is questionable. A comparison of the rates of recovery of endocervical cells and the incidence of dysplasia on the prenatal and postpartum Pap smears may be helpful in determining an optimal preventive care protocol for patients who are pregnant. Demographic and clinical data were collected from the records of 1,377 obstetrical patients at a midwest family practice residency. The yield of endocervical cells and the incidence of dysplasia were determined for both the prenatal and the postpartum Pap smears performed for this group of patients. In women having both exams, endocervical cells were recovered in 44.1% of prenatal Pap smears compared to 82.0% of postpartum smears. The incidence of dysplasia was 2.6% on prenatal Pap smears and 4.8% on postpartum smears. In this study population, 33% of women did not return for their postpartum check-up.
The postpartum Pap smear is of value due to a significant yield of dysplasia. The sensitivity of the prenatal Pap test may be less than desired. Efforts directed toward increased patient compliance regarding the postpartum check-up are needed.
closed_qa
Conference attendance: do we meet the new residency review committee requirements?
To characterize the attendance at and presenters of conferences given to emergency medicine residents and to determine the ability of emergency medicine residents to attend conferences while working in the emergency department and on off-service rotations. Descriptive study of an anonymous mail survey. Residency directors of all approved emergency medicine residency programs in the United States. Seventy-six of 95 questionnaires (80%) were returned. We defined "high attendance" at emergency medicine conferences as a reported average of at least 75% attendance by emergency medicine resident physicians. Fifty percent of respondents reported high attendance. Conversely, 17% of programs reported poor attendance, which we defined as an average attendance by 50% or fewer emergency medicine resident physicians. Forty-eight percent of programs reported that emergency medicine faculty conducted more than 50% of the conferences, and 16% reported that the faculty conducted 25% or fewer conferences. Ninety-six percent of programs allowed residents to attend conferences during off-service rotations. Ninety-two percent of programs relieved residents of clinical responsibilities during scheduled shifts in the emergency department so that they might attend lectures.
We found that a sizable proportion of programs may not have met the new Residency Review Committee requirements for lecture attendance at the time the guidelines were issued. The vast majority of programs met guidelines for relief of clinical duties, and a large proportion of programs exceeded the requirements for percentage of lectures given by emergency medicine faculty.
closed_qa
Does genetic anticipation occur in familial rheumatoid arthritis?
To determine if there is evidence for genetic anticipation in rheumatoid arthritis (RA) by analysing the possibility that parental disease status and age at proband conception influence the age of onset and disease severity of the proband. RA outpatients were identified and data were also taken from Newcastle multicase RA pedigrees. Comparisons of age of onset and parental age at proband conception were made for pedigrees grouped according to the disease status of the parents. Correlation coefficients and linear regression models were calculated for the age of RA onset in the probands. Measures of disease severity were compared in RA mother-proband pairs. The results were similar in both the outpatient (n = 153) and multicase pedigree (n = 15) samples. Significant results were confined to pedigrees in which the mother had RA (20 of the outpatient probands and seven of the multicase group). Probands in these sibships had a younger age of RA onset than their affected mothers (38.3 years (95% confidence interval (CI) 33.8 to 42.8) versus 53.7 (47.3 to 60.0) (p = 0.002) in the outpatient sample; 32.4 years (25.3 to 39.6) versus 43.4 years (29.0 to 57.9) (p = 0.1) in the multicase pedigrees). In the maternal RA group, both the maternal and paternal age at proband conception showed significant negative correlations (r = -0.65, p = 0.002 and r = -0.60, p = 0.005, respectively in the outpatient sample) and linear regression coefficients with age of proband disease onset. In seven affected mother-proband pairs, the probands had a tendency to more severe disease, despite shorter disease duration and younger age.
This preliminary analysis has suggested that within pedigrees in which the mother has RA, the features of genetic anticipation and observations consistent with premutation models may prevail.
closed_qa
Comparative assessment of phenolphthalein and phenolphthalein glucuronide: is phenolphthalein glucuronide a better laxative?
Phenolphthalein is widely used as a safe and effective laxative. After oral administration, phenolphthalein is absorbed in the small bowel and is conjugated in the liver to phenolphthalein glucuronide which passes into the colon where it is deconjugated and the active compound, phenolphthalein, is released. Since phenolphthalein glucuronide does not undergo enterohepatic circulation it should theoretically have a more rapid onset of action and a lower threshold dose for laxation. The present study was designed to examine this issue. Ten normal healthy subjects volunteered for the study. All subjects were administered placebo, phenolphthalein (at doses of 15, 30, 45, 60, 75 and 90 mg) or phenolphthalein glucuronide (at equivalent doses of 24, 48, 72, 96, 120 and 144 mg) in a random order. Stool weight, the frequency and consistency of stools, and the development of symptoms were recorded at 12-h intervals for 84 h. There was a significant increase in the mean stool weight obtained within the first 24 h of administration of a 30 mg dose of phenolphthalein and its glucuronide equivalent compared to the values obtained with placebo. A further increase in the dose did not improve the therapeutic response. There was no difference between phenolphthalein and phenolphthalein glucuronide with respect to the rapidity of action, the threshold dose, effectiveness of laxation, or the frequency of adverse effects.
The therapeutic response and side effect profile of the different doses favoured 30 mg phenolphthalein as the optimum laxative dose. Although theoretically superior, phenolphthalein glucuronide was not found to be a more effective laxative compared to phenolphthalein in normal subjects.
closed_qa
Is the three year breast screening interval too long?
To report the detection rate of interval cancers in women screened by the NHS breast screening programme. Detection of interval cancers by computer linkage of records held by the screening centres in the North Western Regional Health Authority with breast cancer registrations at the regional cancer registry. North Western Regional Health Authority. 137,421 women screened between 1 March 1988 and 31 March 1992 who had a negative screening result. 297 invasive interval cancers were detected. The rate of detection of interval cancers expressed as a proportion of the underlying incidence was 31% in the first 12 months after screening, 52% between 12 and 24 months, and 82% between 24 and 36 months.
The incidence of interval cancers in the third year after breast screening approaches that which would have been expected in the absence of screening and suggests that the three year interval between screens is too long.
closed_qa
Endoscopic ultrasonography in the treatment of oesophageal varices by endoscopic sclerotherapy and band ligation: do we need it?
To assess the role of endoscopic ultrasonography (EUS) in monitoring the treatment of oesophageal varices by endoscopic sclerotherapy and band ligation. We studied 35 patients with portal hypertension undergoing elective treatment for oesophageal varices by injection sclerotherapy with absolute ethanol (group 1, n = 19) or by endoscopic variceal ligation (EVL; group 2, n = 16). All patients were examined by EUS before treatment to assess the status of their oesophago-gastric varices and the presence of collateral and perforating veins. Evaluation with EUS was repeated to confirm variceal eradication whenever endoscopy suggested successful obliteration, or to determine the reason for failure when treatment did not appear to be successful. Depending on the endosonographic findings, treatment was continued until EUS showed complete variceal eradication. After treatment, EUS showed insufficient variceal thrombosis in six (17%) patients who appeared to have variceal eradication at endoscopy. EUS was also superior to endoscopy for diagnosing gastric varices and showed patent vessels in 26 (74%) out of 35 patients. Gastric varices observed on EUS were detected at endoscopy in only 60% of cases. Endoscopic sclerotherapy and EVL had induced characteristic changes on EUS evaluation, and oesophageal fibrosis was observed more frequently in endoscopic sclerotherapy than in EVL-treated patients.
EUS provides valuable information on the status of oesophago-gastric varices and can be used to assess the efficacy of endoscopic sclerotherapy and EVL.
closed_qa
Cisapride in chronic idiopathic constipation: can the colon be re-educated?
To investigate whether dose tapering and, potentially, withdrawal of cisapride is possible without loss of therapeutic effect. A total of 119 patients with chronic constipation (less than three spontaneous, i.e., not laxative-induced, stools per week). Randomized double-blind study. Group A (n = 56) was treated with cisapride 20 mg twice daily for 12 weeks. Treatment was continued for a further 12 weeks during which the patients were allowed to take a maximum of four tablets containing 5 mg cisapride each (maximum daily dose, 20 mg). Group B (n = 63) was treated with cisapride 20 mg twice daily for 6 weeks and then with cisapride 10 mg for 6 weeks. Treatment was then stopped and follow-up was continued for a further 12 weeks. Stool frequency was increased in both groups during active treatment and was not reduced when the dose was decreased from 20 mg to 10 mg twice daily in group B but was maintained in group A. Laxative intake fell by 50% in both groups, but this effect was maintained during follow-up in group A only. Group A patients took nearly the maximum dosage of cisapride tablets allowed during follow-up (3.3 tablets per day +/- 0.2 SEM).
This study confirmed the efficacy of cisapride in chronic idiopathic constipation. Dose tapering below 15-20 mg per day, however, does not appear to be possible.
closed_qa
Does lipophilicity of angiotensin converting enzyme inhibitors selectively influence autonomic neural function in human hypertension?
Angiotensin II has both central nervous system and peripheral effects on autonomic function. Ramipril is among the more lipophilic angiotensin converting enzyme (ACE) inhibitors, and hence can penetrate the central nervous system readily. We investigated whether ramipril has selective effects on autonomic control of the circulation in human hypertension, compared with the more hydrophilic ACE inhibitor enalapril. Blood pressure, hemodynamics and measurements of autonomic function were obtained in 13 essential hypertensive subjects after 10 days on placebo, and after crossover monotherapy with 10 days on enalapril versus 10 days on ramipril. Both enalapril and ramipril lowered systolic, diastolic and mean arterial blood pressures significantly, with no reflex increase in heart rate. Plasma renin activity increased substantially on each of the ACE inhibitors. There were no significant effects of either agent on plasma catecholamines (norepinephrine or epinephrine) or chromogranin A, biochemical indices of efferent sympatho-adrenal outflow. There were also no significant changes after either agent in baroreflex sensitivity (to high- and low-pressure stimuli), the response to cold stress or sympathetic (alpha-adrenergic) participation in blood pressure maintenance. There was a marginal effect of ACE inhibition on alpha 1-adrenergic pressor sensitivity, but the two compounds did not differ significantly in this respect.
Autonomic control of circulatory function was maintained well after either lipophilic (ramipril) or hydrophilic (enalapril) ACE inhibitors, and the lipophilic compound ramipril had no additional effects on autonomic function beyond those shown by the hydrophilic agent enalapril.
closed_qa
Neonatal mortality rate: is further improvement possible?
To determine whether improvement in neonatal and infant mortality rates is possible or likely. Regional neonatal intensive care unit. Experience during a decade (1982-1991) was evaluated. We determined postnatal age at death and birth weight-specific and gestational age-specific mortality rates. Neonatal deaths (deaths before discharge) were categorized as "possibly preventable" or "probably unpreventable." Deaths occurring after 28 days ("postponed" deaths) contributed 9% of the total for the decade, and 5% for those with extremely low birth weight (ELBW; <1000 gm) during the last 6 years; 47% of all deaths and 65% of deaths of ELBW infants occurred within 24 hours of birth. Congenital malformations accounted for 7%, 54%, and 66% of deaths when birth weight was 500 to 1499 gm, 1500 to 2499 gm, and ≥2500 gm, respectively. In infants with birth weight ≥1000 gm, probably unpreventable deaths (predominantly from congenital malformations, but also including hydrops and inborn errors of metabolism) accounted for 61% of deaths. Of deaths of ELBW infants, extreme prematurity (500 to 750 gm) accounted for 58%; major malformations and pulmonary hypoplasia contributed an additional 9%.
During the decade, the gestational age at which there was a 50% survival rate fell from 26 weeks to 24 weeks, and a marked increase in the survival rate occurred at birth weights <1500 gm (VLBW) after the introduction of exogenous surfactant therapy. The number of possibly preventable deaths is now very small. For any substantial impact on mortality rates, it will be necessary to lower VLBW and ELBW rates.
closed_qa
Do infiltrating leukocytes contribute to the adaptation of human gastric mucosa to continued aspirin administration?
Aspirin (ASA)-induced gastropathy decreases with continued ASA ingestion due to the development of gastric mucosal tolerance. However, the mechanism of the gastric mucosal adaptation to repeated ASA challenge is unknown. The aim of the present study was to determine the density of leukocytes infiltrating the gastric mucosa in healthy subjects during prolonged treatment with ASA. In eight healthy volunteers ASA treatment (2 g/day) was continued for 14 days. Endoscopy was performed before medication, on the 3rd, 7th, and 14th day of ASA treatment, and on the 16th and 18th day (2 and 4 days after medication was stopped). Gastric damage was scored (Lanza score), and gastric biopsy specimens were taken from both the oxyntic and antral mucosa. ASA administration resulted in the development of hemorrhagic erosions, which were most severe on the 3rd day of the medication; later significant reduction of severity of the damage was observed. ASA administration caused an increased mucosal infiltration of leukocytes; leukocyte margination and adherence to endothelia were commonly observed in the gastric mucosa, particularly on the 3rd day of ASA treatment but not later on. Mast cell density increased significantly on the 3rd day of ASA treatment. Density of mast cells later decreased in the antral mucosa but continued to be significantly increased in the oxyntic mucosa up to the 14th day. There was a striking correspondence between mast cell density and endoscopic score of the mucosal damage. Eosinophil density increased significantly during ASA treatment and remained high even after medication was withdrawn.
1) Initial mucosal damage by ASA is followed by gastric adaptation on continuous exposure to this agent; 2) infiltrating leukocytes appear to contribute to the development of gastric mucosal adaptation to ASA; and 3) mast cell density reflects the endoscopic score of gastric damage by ASA.
closed_qa
Chronic, unexplained diarrhea: are biopsies necessary if colonoscopy is normal?
We sought to determine the frequency of clinically important histological abnormalities in patients with chronic, unexplained diarrhea who had macroscopically normal colonic endoscopies. Of 855 consecutive patients undergoing colonoscopy (595 cases) or flexible proctosigmoidoscopy (260 cases) by one endoscopist, biopsies were taken in 111 cases of unexplained diarrhea of at least 4-6 weeks duration in which the colorectal mucosa appeared grossly normal. All biopsies were blindly reviewed by one pathologist. In this group of patients with macroscopically normal colons, we identified no cases of Crohn's disease or ulcerative colitis or any definite cases of collagenous colitis (CC) or lymphocytic colitis (LC). There was one case classified as "possible CC" and 13 cases classified as "some features of LC". There were five cases of melanosis coli, one case of cytomegalovirus colitis (in an immunosuppressed patient), and one case of radiation injury. Ninety-one cases were classified as no pathological diagnosis or minimal histological change. Patients with abnormal histology were contacted to see if they had persistence or resolution of diarrhea. For the cases of "possible CC" and "some features of LC," diarrhea had resolved spontaneously in the majority. Interestingly, only one of the five melanosis coli patients admitted to laxative use, raising the question of surreptitious abuse.
We conclude that the yield of biopsies in diarrhea patients with macroscopically normal colons at endoscopy is low. It may be reasonable to obtain biopsies in patients with relatively severe or debilitating symptoms, with diarrhea that sounds "organic" (e.g., nocturnal stools, frequent watery stools, weight loss, elevated sedimentation rate), or in patients who are immunosuppressed. When biopsies are taken at colonoscopy, we suggest taking about six from throughout the colon and placing them into just one specimen container to help minimize costs.
closed_qa
Gastrospirillum hominis ("Helicobacter heilmanii"): a cause of gastritis, sometimes transient, better diagnosed by touch cytology?
Besides Helicobacter pylori, another spirillar microorganism, provisionally called Gastrospirillum hominis, has been described in the human stomach in association with gastritis. The aim of this study was to assess the role of cytology in the diagnosis, to assess the gastritis associated with this infection, and to approach its natural history. Charts and endoscopic biopsies and smears (touch cytology) from 28 patients with G. hominis seen between 1986 and 1992 were reviewed and compared with biopsies and smears from 28 patients with H. pylori gastritis. G. hominis was seen on smears from all 28 patients but diagnosed in only 15 of the corresponding sets of biopsies. No patient had evidence of H. pylori colonization. All patients had chronic antral gastritis with lymphoplasmocytes, and neutrophils were present in 13 patients. In addition, reactive changes were frequent: foveolar hyperplasia (n = 25), vasodilation (n = 23), lamina propria edema (n = 23), and increased intracytoplasmic mucin (n = 19). In contrast, intestinal metaplasia (n = 3) and glandular atrophy (n = 2) were infrequent, and lymphoid nodules were not seen. In patients with H. pylori, reactive changes were mild, and the lymphoplasmocytic infiltration was more intense (p<0.005). Eleven patients had at least two endoscopic examinations with biopsies, with persistent colonization in only four. Seven patients cleared the infection with a concomitant regression of gastritis.
G. hominis is more often detected in smears than biopsies. It is seen in association with a peculiar form of gastritis combining chronic and reactive changes. Colonization may be a transient phenomenon and is never associated with H. pylori.
closed_qa
Mast cells in the placenta. Is there a relation to the development of atopic disease in infants before 18 months of age?
To disclose a relation between the amount of mast cells in placenta and the development of atopic disease in children before 18 months of age. A prospective, descriptive study. Two obstetric departments at university hospitals. 67 pairs of mothers and their newborn infants. Family history of atopic disease was taken. The amount of mast cells in placenta was counted. Follow-up questionnaires of the children were evaluated after 18 months. The follow-up rate was 84%. The number of mast cells in placenta did not differ between atopic and non-atopic children.
A new predictor, the amount of mast cells in the placenta, was evaluated. In this study, the amount of mast cells in the placenta was not predictive of the development of atopic disease before 18 months of age.
closed_qa
Subcutaneous urinary diversion utilizing a nephrovesical stent: a superior alternative to long-term external drainage?
The use of external percutaneous nephrostomy drainage in patients with end-stage ureteral obstruction in whom ureteral stenting has failed presents significant compromises in the patient's quality of life. Toward this end, we present the initial experience in the United States with an intracorporeal nephrovesical stent. We performed successful subcutaneous urinary diversion in 2 patients with malignant, metastatic periureteral obstruction. Both patients had previously been managed with a chronic percutaneous nephrostomy that was both painful and inconvenient. The nephrovesical stent was inserted utilizing percutaneous access to both the kidney and bladder followed by creation of a subcutaneous tunnel between the two sites. The nephrovesical stents are patent at 6 and 9 weeks postoperatively and both patients have had their nephrostomy tubes removed. Both patients have noted a marked improvement in their overall comfort and quality of life since the stent has been in place.
Subcutaneous urinary diversion with a nephrovesical stent provides effective urinary drainage and may improve the quality of life of patients with malignant metastatic ureteral obstruction. Further long-term studies are needed.
closed_qa
Is there value in audition extramurals?
It has become common for fourth-year medical students interested in surgical careers to leave their parent university to take extramural elective rotations in surgery at other institutions. These "audition extramurals," while of some educational value, are often repetitions of prior clerkships and may not broaden the student's educational horizons. Instead, they are intended to enhance a student's competitiveness in the match. While recent opinions and questionnaires have suggested that such extramural rotations are not valuable in general surgery, no study has formally evaluated the effect of extramural electives on the residency match. Over a 6-year period, the authors reviewed the outcome in 99 students who took extramural elective rotations in surgery. Of the 99 students, 28 were from the authors' institution who left to do extramural rotations elsewhere and 71 were outside students who came to the University of South Florida for an elective. While the elective rotation increased the probability of an interview, it did not alter ranking or probability of matching. For general surgery students, the elective rotation may actually decrease competitiveness, while for specialty students, it appears necessary but not sufficient to improve match outcome. The elective might facilitate placement for students who did not match, but did not do so predictably.
The authors conclude that extramural elective rotations should be taken for educational value only and not as auditions for residency.
closed_qa
Steroid induced osteoporosis: an opportunity for prevention?
To determine the frequency with which osteoporosis prophylaxis is given to corticosteroid treated hospital inpatients. All patients receiving systemic corticosteroids in a large teaching hospital over a three month period were identified through routine prescription monitoring by hospital ward pharmacists. Coprescription of antiosteoporotic therapy was recorded, along with other relevant details such as steroid dose, actual, or intended duration of therapy, and indication for therapy. Corticosteroids were prescribed to 214 patients over the study period, giving an average rate of 2.5 new prescriptions each day. Indications included: chest disease (n = 84; 39.2%), cancer (n = 17; 7.9%), inflammatory bowel disease (n = 16; 7.5%), rheumatoid arthritis/connective tissue disease (n = 16; 7.5%), and renal diseases (n = 7; 3.3%). One hundred and twelve patients (52.3%) were receiving short term steroid therapy (less than four months); 66 (37%) were receiving medium/long term steroid therapy (four months or more). In 36 cases (16.8%) the duration of therapy was unknown. Only 12 of the 214 patients (5.6%) received any form of osteoporosis prophylaxis. The prevalence of prophylaxis was similarly low in postmenopausal women (six of 93; 6.4%) and in patients receiving high dose long term steroid therapy (two of 25; 8%).
Systemic corticosteroids are used frequently in hospital practice for a wide range of indications, but few patients receive co-prescription of prophylaxis against osteoporosis. This is true even in high risk groups such as postmenopausal women and those on high dose long term steroid therapy. Identification of individuals by the mechanism used in this study provides an opportunity by which all corticosteroid treated patients could be detected and offered osteoporosis prophylaxis before serious loss of bone density has occurred.
closed_qa
Does grandma need condoms?
To examine the relationship between age and condom use among women who are typically seen in the primary care setting. Survey of a population using a self-administered questionnaire. Four community-based family practice clinics located in a low-income, racially mixed geographical area. All consenting patients (N = 995) during their visits for routine Papanicolaou tests. The mean age of patients was 35 years, with a range of 75 years (12 to 87 years). Respondents were predominantly black (63.2%), 39.2% were single, and over 65% had incomes no greater than $15,000/y. The outcome measure of condom use is reported. Data analysis of patients' sexual behavior revealed that older women might be at risk for sexually transmitted diseases (STDs). The hypothesis that condom use is related to age emerged during data collection. Condom use is related to being younger (<31 years), having had an STD, having a sexual partner in whom an STD was diagnosed, having a lower income, or being single or black. In multivariate models, marital status (single), age (<31 years), and having a partner with an STD remain significant. Among unmarried women, the effects of age, race, and a partner with STD remain, and being a nonsmoker is also significant. In the multivariate analysis for unmarried women, only age (<31 years) is significantly related to condom use. An independent random sample of charts revealed that almost 45% of the patients aged 45 years or younger received condom counseling, whereas condoms were discussed with none of those older than 45 years.
Because older patients (those beyond child-bearing years) are less likely to use condoms and evidently receive little education about condom use, older patients must be educated about the need for condoms.
closed_qa
Sex differences in mortality after myocardial infarction. Is there evidence for an increased risk for women?
A number of studies have indicated that women who have a myocardial infarction have higher mortality rates than men. The purpose of the present study was to review the literature on sex differences in mortality after myocardial infarction to determine whether female sex is independently associated with lower survival. Reports were identified mainly through a MEDLINE search of the English-language literature from January 1966 through June 1994. Studies included were those comparing mortality after myocardial infarction between men and women, controlling at least for age and with more than 30 outcome events. After duplicate patient series were eliminated, 27 reports were included in our review. Crude rates were higher in women than in men during the early phase (in-hospital or first month), but control for age alone or in combination with other factors reduced sex differences in almost all studies. Unadjusted mortality rates among the survivors of the early phase were similar for men and women in most studies, and control for age and other factors resulted in an increased survival rate in women compared with men in several investigations, particularly those with a follow-up of >1 year.
Much of the increased early mortality after myocardial infarction in women is explained by the older age and more unfavorable risk characteristics of the women. In the long run, when differences in age and other risk factors are controlled for, women tend to have an improved survival compared with men.
closed_qa
Natural history of minute sessile colonic adenomas based on radiographic findings. Is endoscopic removal of every colonic adenoma necessary?
With the development of colonoscopy and double-contrast barium enema, detection of minute sessile colonic adenomas has increased. We evaluated progression of these lesions radiologically and attempted to clarify the natural history. A total of 125 minute sessile adenomas (≤5 mm in size) with histologic confirmation were examined by double-contrast barium enema at an interval of more than one year. The average follow-up period was 24 (range, 12-36; standard deviation, 9.4) months. To allow for differences in magnification, adenomas increasing in size by 2 mm or more were defined as growing, and the other lesions were defined as unchanged. Eighty-six adenomas showed no interval change in size. Four adenomas decreased 1 mm in size, and 27 adenomas increased 1 mm in size. The remaining eight adenomas (6 percent) increased by 2 or 3 mm in size. None of the adenomas showed any morphologic changes. There was also no difference in degree of histologic atypia between growing and unchanged adenomas. None of the adenomas developed into carcinomas during the follow-up period.
These data show that most minute sessile adenomas remain unchanged in size and morphology over the long term. Accordingly, these adenomas probably should be followed up radiologically or endoscopically to avoid excessive polypectomy.
closed_qa
Is clozapine a mood stabilizer?
Clozapine has been increasingly shown to be effective in the acute and maintenance treatment of bipolar disorders. For this reason, we studied whether clozapine alone is effective as a mood stabilizer in patients with refractory bipolar disorders. Subjects were part of a long-term follow-up study cohort of 193 patients with refractory mood disorders who were treated with clozapine at McLean Hospital prior to July 1, 1992. Patients included in this study were those older than 16 years with bipolar disorder (manic or mixed) and schizoaffective disorder, bipolar type, discharged taking clozapine alone (N = 17). Hospital records on all patients were reviewed by trained raters blind to "best-estimate" diagnoses. Response to clozapine was determined by the Clinical Global Impressions-Improvement (CGI-I) scale. Patients were contacted at least 6 months after clozapine initiation for semistructured follow-up interviews by raters blind to diagnosis and baseline information. Seventeen subjects were contacted 16.1 +/- 5.6 months after clozapine initiation. Most of the 17 patients had previously failed trials of lithium, valproate, carbamazepine, neuroleptics, combinations of these, and electroconvulsive therapy; or had tardive dyskinesia. Of these patients, 65% (11/17) continued to be on clozapine therapy alone at follow-up and had no subsequent rehospitalization or affective episode. At follow-up, there was a significant decrease in the rehospitalization rate compared with before starting clozapine (p = .025) and a significant improvement in CGI-I scores (p = .02).
Clozapine monotherapy is an effective mood stabilizer, reducing both the number of affective episodes and rehospitalizations in patients with severe refractory bipolar illness.
closed_qa
Helicobacter heilmannii (formerly Gastrospirillum hominis) gastritis: an infection transmitted by animals?
The source of infection with Helicobacter heilmannii (formerly Gastrospirillum hominis), a relatively rare causative agent of gastritis in humans, is not clear. It has long been known that this organism occurs in the stomach of domestic animals and pets. By performing an epidemiologic investigation on possible contact of patients with Helicobacter heilmannii gastritis with such animals, we made an attempt to gain further information about the source of infection. Of 125 patients with confirmed H. heilmannii infection, 111 provided us with information about contact with animals. Some 70.3% of the patients had contact with one or more animals (as compared with 37% in the 'normal' population); 73% were males, and 1.6% had concomitant infection with H. pylori.
Our analysis indicates that H. heilmannii gastritis results from transmission to humans by domestic animals or pets. Concomitant infections by H. heilmannii and H. pylori are very rare, and it is possible that H. heilmannii might protect from infection with H. pylori. However, the results of our retrospective analysis will have to be tested against those of a prospective study investigating the day-to-day situation of the individual patients in greater detail and also be compared with patients not infected with H. heilmannii.
closed_qa
Maximal inspiratory pressure: does reproducibility indicate full effort?
Maximal inspiratory pressure (MIP) is often relied upon as an index of inspiratory muscle strength, and reproducibility of MIP taken to indicate maximal effort. This study was designed to determine whether reproducibility is a valid indicator of maximal effort. Ten normal subjects were studied, all of whom were familiar with the MIP test but none was an experienced subject. They were told that the purpose was to measure how accurately they could generate 50% of their MIP. Each performed nine MIP efforts and nine submaximal efforts. Means and coefficients of variation of peak negative inspiratory pressure (Pmax) and the ranges of the best three efforts were calculated for each type. Mean (SE) Pmax averaged -93.8 (6.0) cm H2O for the maximal efforts and -60.6 (7.7) cm H2O for the submaximal trials, with coefficients of variation averaging 8.71 (1.75)% and 14.58 (2.63)%, respectively, and the ranges averaging 6.5 (1.1)% and 13.4 (3.5)%, respectively. There was no clear separation between the coefficients of variation or ranges of maximal and submaximal efforts. In four cases the ranges of the best three submaximal efforts were less than 5 cm H2O and less than 5%, criteria that have been used to validate MIP results. These four subjects had lower ranges for submaximal than maximal efforts, even when expressed as percentages of the means.
Reproducibility should not be relied upon to indicate a valid MIP test, especially for research purposes when relatively small changes in inspiratory muscle strength must be discriminated.
closed_qa
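The reproducibility criteria discussed in the MIP study above (coefficient of variation of nine efforts, and the range of the best three efforts, both expressed relative to the mean) can be sketched as follows. The nine pressure values are invented for illustration only; they are not data from the study:

```python
import statistics

def coefficient_of_variation(values):
    """CV as a percentage: sample SD divided by the absolute mean."""
    return statistics.stdev(values) / abs(statistics.fmean(values)) * 100

def range_of_best_three(values):
    """Range of the three most negative (strongest) inspiratory efforts,
    in cm H2O and as a percentage of their mean magnitude."""
    best = sorted(values)[:3]           # most negative pressures first
    spread = max(best) - min(best)      # cm H2O
    return spread, spread / abs(statistics.fmean(best)) * 100

# Hypothetical nine MIP efforts (cm H2O; negative = inspiratory pressure)
efforts = [-95, -92, -96, -90, -94, -93, -91, -97, -95]
cv = coefficient_of_variation(efforts)
spread_cmh2o, spread_pct = range_of_best_three(efforts)
print(round(cv, 2), spread_cmh2o, round(spread_pct, 2))
```

A subject like this one would pass both commonly used cut-offs (range of best three under 5 cm H2O and under 5%), which, as the study argues, does not by itself prove the efforts were maximal.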
Women and myocardial infarction: agism rather than sexism?
To determine whether women with myocardial infarction are treated differently from men of the same age and to assess the effect of changes in the coronary care unit admission policy. Clinical audit. The coronary care unit and general medical wards of a teaching hospital. In 1990 the age limit for admission to coronary care was 65 years. This age limit was removed in 1991. 539 female and 977 male patients admitted with myocardial infarction between 1990 and 1992. Admission to the coronary care unit, administration of thrombolysis, and in-hospital mortality. 409 men and 254 women were admitted with myocardial infarction in 1990 and 568 men and 285 women in 1992. Removal of the age limit for admission to the coronary care unit resulted in an increase in the numbers of both sexes admitted with myocardial infarction. In both years, however, proportionately more men with infarction were admitted to coronary care: 226 men (55%) and 96 women (38%) (P<0.01) (95% CI 7 to 28) in 1990 and 459 men (81%) and 200 women (70%) (P<0.01) (95% CI 2 to 19) in 1992. Some 246 men (60%) and 133 women (52%) with infarction (P<0.01) received thrombolytic treatment in 1990 compared with 319 men (56%) and 130 women (46%) (P<0.01) in 1992. The mean age of women sustaining a myocardial infarction was significantly greater in both years studied. In 1992 a total of 78 men (7%) and 34 women (4%) (P<0.05) admitted with chest pain underwent cardiac catheterisation before discharge from hospital.
Differences in admission rates to the coronary care unit and the rate of thrombolysis between the sexes can be explained by the older age of women sustaining infarction. The application of age limits for admission to coronary care or administration of thrombolysis places elderly patients at a disadvantage. As women sustain myocardial infarctions at an older age they are placed at a greater disadvantage.
closed_qa
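The sex comparisons in the audit above rest on differences between two proportions. A minimal sketch of a normal-approximation (Wald) 95% CI for such a difference, using the 1990 admission figures from the abstract (226/409 men vs. 96/254 women); the paper's reported interval (7 to 28) may have been derived with a slightly different method, so exact agreement is not expected:

```python
from math import sqrt

def diff_of_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Difference between two proportions, in percentage points,
    with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

# 1990 coronary care admissions: men 226/409 (55%), women 96/254 (38%)
diff, lo, hi = diff_of_proportions_ci(226, 409, 96, 254)
print(round(diff, 1), round(lo, 1), round(hi, 1))
```

The interval excludes zero, consistent with the reported P<0.01 for the 1990 comparison.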
Treatment of blunt injury of the spleen: is there a place for mesh wrapping?
To describe our experience of treatment of blunt injury to the spleen and to assess the contribution of observation and mesh wrapping to outcome. Retrospective study. Teaching hospital, Norway. 50 consecutive patients with blunt injuries to the spleen treated between 1987 and 1992. 36 of the 50 were operated on (15 of whom had other injuries as well). Nineteen underwent splenectomy and the others had various conservation measures, including 8 who had absorbable mesh wrapping applied. Fourteen patients were successfully treated by observation alone (mean 8 days, range 4-14). Twenty-four were operated on within 3 hours, and 12 after a period of observation, by a total of 18 surgeons (9 of whom dealt with only one such injury each). Half of the eight patients who had absorbable mesh applied developed complications (rebleeding and pleural effusion, n = 2 each). Two patients died: one was a baby with an associated severe head injury and one had cirrhosis of the liver and had initially refused treatment but was admitted in a critical condition and died of coagulopathy after splenectomy.
Conservation with absorbable mesh wrapping should be attempted more often, but experience is necessary to do it properly. Given a protocol with explicit criteria for operation, we suggest that haemodynamically stable patients who require little or no blood transfusion should be observed carefully in the first instance.
closed_qa
Is stated income a valid indicator of the socioeconomic status?
1) To analyse the validity of replies to the question on income included in health questionnaires. 2) To identify other valid indicators of socio-economic status. Cross-sectional, retrospective observational study. Community (Molina de Segura Health District). 1,071 people over 18, selected by simple random sampling. Analysis of 16 questions relating to home furnishings, consumer goods and stated income. We used factor analysis and the chi-squared test. We identified three levels by means of the factor analysis: a) basic--consisting of hot water, washing machine, absence of damp patches in the house, individual heating and television. b) Intermediate--consisting of car, video and telephone. c) High--with dishwasher, domestic help and air-conditioning. On analysing the relationship between stated income and socio-economic status we detected an association at p<0.001. The basic level answered more often than expected that their income was less than 50,000 pesetas; the intermediate, between 50,000 and 100,000; and the high, over 101,000.
1) It is possible to identify the socio-economic profile of the general population from ownership of selected consumer items. 2) Stated income is a reliable indicator of socio-economic status. Despite a tendency toward low response to the income question, income is not generally under-declared.
closed_qa
Content of advertisements for junior doctors: is there sufficient detail?
To determine whether employers follow BMA guidelines on advertisements when advertising for junior doctors. Survey of advertisements for junior doctors in the BMJ's classified advertisements supplement from 12 March to 14 May 1994. 300 advertisements for substantive posts for junior doctors. Compliance with BMA guidelines, compared by grade, specialty, and employer (trust or regional health authority); observation of any useful information not included in the guidelines. Only eight advertisements included all the recommended information. Amount of information given was related to grade, specialty, or employer in only one respect: advertisements for basic trainees were more likely than those for higher specialist trainees to include information on pay and hours of work (P<0.001).
Advertisements for junior doctors in the BMJ do not comply with BMA guidelines and often contain little useful information for potential applicants.
closed_qa
Frequent and characteristic K-ras activation in aberrant crypt foci of colon. Is there preference among K-ras mutants for malignant progression?
To investigate very early lesions of colorectal cancer, K-ras activation and nuclear p53 accumulation were studied in aberrant crypt foci (ACF). ACF were microscopically identified in grossly normal mucosa of patients with colorectal cancer who underwent surgery. Each ACF was microdissected from the surgical specimen and divided into two pieces, one for histologic and immunohistochemical examinations and the other for K-ras activation. K-ras mutations in codons 12 and 13 were sequenced after being screened by polymerase chain reaction amplification followed by restriction fragment length polymorphism analysis. Intranuclear accumulation of p53 protein was immunostained with the avidin-biotin complex method. ACF were predominantly distributed in the sigmoid colon and rectum, and their incidence increased with age. Unexpectedly, ACF were very rare in the colons of three patients with hereditary nonpolyposis colorectal carcinoma. K-ras mutations were detected in 58% (33 of 57) of ACF cases and in 44% (11 of 25) of adenocarcinoma cases. Although GTT mutation in codon 12 was predominantly observed in adenocarcinoma (10 of 11), GAT mutation (12 of 33) was as frequent as GTT mutation (11 of 33) in ACF, together with mutation at codon 13 (7 of 33). No accumulation of p53 protein was found in any ACF.
ACF were not diagnosed as neoplasms histologically, but they were considered to be neoplastic lesions, and K-ras activation is one of the key events in ACF formation. The G-T substitution in K-ras codon 12 may undergo malignant progression more readily than the G-A substitution in colorectal carcinogenesis.
closed_qa
Are there predictors for failed expectant management of cervical intraepithelial neoplasia 1?
To identify the short-term natural history of cervical intraepithelial neoplasia (CIN) 1 and the potential risk factors for its progression, regression and persistence and to identify any characteristics of patients who were lost to follow-up. All colposcopic specimens from July 2001 through December 2004 were evaluated for the presence of CIN 1. Adequate follow-up was defined as 24 months of surveillance with Pap smears every 4-6 months. The chi-square and Student's t tests were performed for analysis. Three hundred sixty women who had colposcopic specimens with the presence of CIN 1 were evaluated. Persistence of CIN 1 and progression to CIN 2 and 3 were associated with pregnancy at the time of colposcopy (p = 0.04), history of sexually transmitted diseases (p = 0.007) and age at first intercourse (p = 0.04). Age (p = 0.001) and no prior history of abnormal Pap smears (p = 0.001) were associated with the rate of loss to follow-up.
Expectant management for the majority of patients with biopsy-proven CIN 1 is appropriate, but some risk factors might influence that decision. In this study, age at first intercourse was the only independent predictor of failure to resolve CIN 1 on multivariate analysis.
closed_qa
Do socio-economic factors, elderly population size and service development factors influence the development of specialist mental health programs for older people?
Despite the increase in the proportion of older people in the population, little is known about factors that facilitate the development of specialist mental health services for older people. The relationship between the presence of specialist mental health programs for older people and elderly population size, proportion of older people in the population, gross national domestic product (GDP), and various parameters of health funding, mental health funding and mental health service provision was examined in an ecological study using data from the World Health Organization. The presence of specialist mental health programs for older people was significantly associated with higher GDP, higher expenditure on healthcare and mental healthcare, the presence of a national mental health policy and a national mental health program, the availability of mental health care in primary care and the community, and higher density of psychiatric beds, psychiatrists, psychiatric nurses, psychologists and social workers.
The challenge will be to persuade policy-makers in low and medium income countries, where the increase in the elderly population is most rapid, to develop specialist mental health services for older people.
closed_qa
Recent trends in mammography utilization in the Medicare population: is there a cause for concern?
Recent published reports have shown a decline in the mammography screening rate in women over age 40, but it is not known whether this trend is a reason for concern in the Medicare population. To study recent trends in mammography utilization in the Medicare population and determine how the newer digital mammography may be affecting mammography utilization. The Centers for Medicare & Medicaid Services Physician/Supplier Procedure Summary Master Files for 1996 through 2005 were examined to determine overall trends in mammography utilization, as well as trends in screening vs diagnostic and conventional screen-film vs newer digital examinations. Medicare Limited Datasets for 2002 to 2004 were used to determine 2-year mammography and multiple imaging rates in individual patients. Mammography utilization. Overall, the mammography utilization rate increased from 26,646 per 100,000 in 1996 to 39,363 per 100,000 in 2005, a 48% increase. The diagnostic mammography rate decreased by 39% (from 15,314 to 9,301), whereas the rate for screening mammography increased by 166% (from 11,332 to 30,062). Digital mammography increased from 2.2% of all mammography in 2002 to 10.4% in 2005. In both digital and film mammography, screening increased more rapidly than diagnostic mammography.
The utilization rate of all mammography showed a substantial 48% increase between 1996 and 2005, and an 11% increase in screening mammography was seen between 2000 and 2005. Although the increase in mammography utilization is encouraging, the 2005 rate of 39,363 per 100,000 female Medicare beneficiaries seems to be well below American Cancer Society recommendations.
closed_qa
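The percentage changes quoted in the mammography abstract above are plain rate arithmetic. A quick check of the overall and diagnostic figures, using the per-100,000 rates reported in the abstract:

```python
def pct_change(old, new):
    """Percent change from an old rate to a new rate (per 100,000)."""
    return (new - old) / old * 100

overall = pct_change(26_646, 39_363)     # all mammography, 1996 -> 2005
diagnostic = pct_change(15_314, 9_301)   # diagnostic mammography, 1996 -> 2005
print(round(overall), round(diagnostic))
```

Both recomputed values match the abstract's reported 48% increase and 39% decrease.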
Does functional appliance treatment truly improve stability of mandibular vertical distraction osteogenesis in hemifacial microsomia?
Ten children were treated by combined orthodontic-distraction treatment, and seven by distraction osteogenesis (DO) alone. Only the vertical changes in the mandible and maxilla on the panoramic and postero-anterior cephalometric X-rays were measured. All of the patients showed a gradual return of the asymmetry with growth. Occlusal plane correction and, to a much lesser extent, mandibular vertical ramus height correction were better maintained over 5 years post-DO in the orthopaedic group.
Although orthopaedic treatment allows for a more stable occlusal plane and for a slower return of the mandibular vertical asymmetry, it has mainly a dento-alveolar effect. Therefore, the decision of applying an orthopaedic treatment associated with distraction, should be taken by surgeon and orthodontist together, considering both the advantages and the disadvantages of this treatment.
closed_qa
Can validated wrist devices with position sensors replace arm devices for self-home blood pressure monitoring?
Electronic devices that measure blood pressure (BP) at the arm level are regarded as more accurate than wrist devices and are preferred for home BP (HBP) monitoring. Recently, wrist devices with position sensors have been successfully validated using established protocols. This study assessed whether HBP values measured with validated wrist devices are sufficiently reliable to be used for making patient-related decisions in clinical practice. This randomized crossover study compared HBP measurements taken using validated wrist devices (wrist-HBP, Omron R7 with position sensor) with those taken using arm devices (arm-HBP, Omron 705IT), and also with measurements of awake ambulatory BP (ABP, SpaceLabs), in 79 subjects (36 men and 43 women) with hypertension. The mean age of the study population was 56.7 +/- 11.8 years, and 33 of the subjects were not under treatment for hypertension. The average arm-HBP was higher than the average wrist-HBP (mean difference, systolic 5.2 +/- 9.1 mm Hg, P<0.001, and diastolic 2.2 +/- 6.7, P<0.01). Twenty-seven subjects (34%) had a ≥10 mm Hg difference between systolic wrist-HBP and arm-HBP and twelve subjects (15%) showed similar levels of disparity in diastolic HBP readings. Strong correlations were found between arm-HBP and wrist-HBP (r 0.74/0.74, systolic/diastolic, P<0.0001). However, ABP was more strongly correlated with arm-HBP (r 0.73/0.76) than with wrist-HBP (0.55/0.69). The wrist-arm HBP difference was associated with systolic ABP (r 0.34) and pulse pressure (r 0.29), but not with diastolic ABP, sex, age, arm circumference, and wrist circumference.
There might be important differences in HBP measured using validated wrist devices with position sensor vs. arm devices, and these could impact decisions relating to the patient in clinical practice. Measurements taken using arm devices are more closely related to ABP values than those recorded by wrist devices. More research is needed before recommending the widespread use of wrist monitors in clinical practice. American Journal of Hypertension (2008); 21, 753-758. doi:10.1038/ajh.2008.176.
closed_qa
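The significance of the systolic wrist-arm difference in the study above can be checked from the summary statistics alone (mean difference 5.2 mm Hg, SD 9.1, n = 79). A one-sample t statistic this far above roughly 3.4 (the two-sided critical value near P = 0.001 for 78 degrees of freedom) is consistent with the reported P<0.001; this is a back-of-the-envelope check, not a reanalysis of the raw data:

```python
from math import sqrt

def one_sample_t(mean_diff, sd, n):
    """t statistic for testing a paired mean difference against zero,
    computed from summary statistics only."""
    return mean_diff / (sd / sqrt(n))

# Systolic arm-HBP minus wrist-HBP: 5.2 +/- 9.1 mm Hg in 79 subjects
t = one_sample_t(5.2, 9.1, 79)
print(round(t, 2))  # df = n - 1 = 78
```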
Is inflammation a significant predictor of bile duct injury during laparoscopic cholecystectomy?
Bile duct injuries (BDI) have been reported to occur more frequently during laparoscopic cholecystectomy (LC) compared to open cholecystectomy (OC). Several studies have demonstrated various potential predisposing factors for BDI. However, there is a controversy as to whether gallbladder inflammation is a significant predictor for BDI. Therefore, our primary aim was to investigate the relationship between inflammation and BDI at LC, and secondarily to present the management and clinical outcome of BDI. We recorded all consecutive LC performed between 1993 and 2005 in our institution by nine staff surgeons. BDI were classified according to Strasberg's classification. Simple and multivariate logistic regression analysis was performed to evaluate the association between inflammation and BDI occurrence during LC. There were 2,184 patients. Among those, 344 had inflammation (16%). The conversion rate was 5% and was higher among men, older patients, and those with inflammation. The BDI incidence was 0.69% (0.14% for major and 0.55% for minor injuries) and it was significantly higher in those with inflammation compared to those without inflammation (p = 0.01). In particular, the risk for BDI was almost 3.5 times higher in those with inflammation (OR = 3.61, 95% CI 1.27-10.21). Inflammation remained an independent risk factor for BDI even after adjustment for potential confounders. Among patients sustaining injury, one died and two have recurrent cholangitis. No association was observed between clinical outcome and management of BDI, time of diagnosis, sex, and inflammation.
We found that inflammation is an independent predictor of BDI occurrence during LC. Therefore, it would be advisable for surgeons not to hesitate to convert an LC to an OC in the presence of inflammation.
closed_qa
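The inflammation-BDI association above is reported as an odds ratio with a log-scale (Woolf) confidence interval. A sketch of that calculation follows; the 2x2 counts are hypothetical, chosen only to be consistent with the abstract's totals (2,184 patients, 344 with inflammation, ~15 injuries) and its reported OR of 3.61 (95% CI 1.27-10.21), and small rounding differences from the published interval are expected:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical split: 6 BDI among 344 with inflammation,
# 9 BDI among 1,840 without inflammation
or_, lo, hi = odds_ratio_ci(6, 338, 9, 1831)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The wide interval reflects the small number of injuries; the lower bound above 1 is what supports calling inflammation a significant predictor.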
Evaluation of the tumor board as a Continuing Medical Education (CME) activity: is it useful?
Although it has been previously reported that offering continuing medical education (CME) credit is not a major factor in tumor board attendance, the results/utility of the Accreditation Council for Continuing Medical Education mandated evaluations of those tumor boards offering CME credit has not been studied. We reviewed the CME evaluations of our University Gastrointestinal Tumor Board; this meeting was chosen because it is multidisciplinary, well attended, and offers CME credit contingent on completing a standard CME evaluation form each session. Of the 2736 attendees, 660 (24%) at the 79 consecutive conferences studied completed the evaluation for CME credit. Reported satisfaction was high; the average response on the 4-question satisfaction survey was 5 (Excellent) on a 5-point Likert scale, only 6% of attendees perceived any commercial bias, and only 3 attendees stated that the conference did not achieve the stated objectives. Of the respondents, 42% indicated that the tumor board information would change their practice, although few specific examples were given. A minority of responders provided specific feedback.
A minority of attendees at this tumor board utilized CME credit. Although satisfaction and impact ratings were high, potential response set bias, lack of specific feedback, and nonresponse bias were limitations to the evaluations.
closed_qa
Professionalism in residency training: is there a generation gap?
Teaching and evaluating professionalism is part of the Accreditation Council for Graduate Medical Education's training requirements for postgraduate education. Defining what constitutes professional behavior is the first step in this endeavor. Difficulty in teaching and evaluating professionalism may stem from generational differences between teachers and trainees in their definition of professional behavior. We sought to explore the magnitude of generational differences by asking faculty and residents to evaluate behaviors along a continuum of professionalism. A questionnaire composed of 16 vignettes describing unprofessional behaviors was distributed to a sample of internal medicine trainees and faculty. For each specific behavior described, participants were asked to rate the severity of the infraction on a 4-point scale. Within each group, responses were distributed across severity categories for most vignettes. There were no significant differences in the responses of trainees versus faculty for any of the vignettes except two.
There is little consensus for determining the severity of unprofessional behaviors among faculty and trainees at one urban university training program. However, this lack of consensus does not appear to have a generational basis. Attributing difficulties in teaching and assessing professionalism cannot be blamed on differences between the generations.
closed_qa
Should forensic autopsies be a source for medical education?
Practical anatomy sessions including dissection of cadavers are essential for anatomy courses. There are many difficulties in obtaining cadavers. In addition, hardened and discolored cadavers that are fixed with formaldehyde look unrealistic and generate apathy among students. We considered that forensic autopsies may be used as ancillary and supportive practice in anatomy education. We invited the participation of Year 2 medical students in suitable forensic autopsy cases during the course of one year. Specialists of forensic medicine and anatomy provided theoretical support through talks in their specialized fields during the autopsy. At the end of the semester, feedback questionnaire forms were prepared and the students were asked to evaluate these sessions. Forty students participated in the evaluation by completing the questionnaire. Students made positive statements about adequacy of the time of the application, consistency of the structures with theoretical and practical issues shown in anatomy lectures, and necessary explanations of the lecturers during and after the application.
We think that forensic autopsies are an attractive supplementary educational model, and we have decided to continue the forensic autopsy practices. We believe that further studies on the evaluation of the sessions using a larger student population will lead to more conclusive results.
closed_qa
Is initial management of primary nonrefluxing megaureter with a double-J stent effective?
Initial management of patients with primary nonrefluxing megaureter (PNRM) associated with impaired renal function, or at high risk of declining kidney function, can be a dilemma. We present our experience with double-J stents in these patients to evaluate their role in decompressing the system, preventing loss of function, or deferring surgical treatment. Between 1996 and 2006, 27 patients were diagnosed with PNRM. Patients were classified into two groups according to the initial treatment received: those managed conservatively and those managed with double-J stent insertion for 6 months. Complete reassessment was performed one year after diagnosis in the first group and three months after stent removal in the second. Patients underwent ureteral reimplantation if, at reassessment, an obstructed excretion pattern was found on diuretic testing. The following data were studied in each case: age at diagnosis, sex, renal function before and after treatment, morbidity associated with double-J stent insertion, excretion pattern on diuretic testing after initial management, surgical technique, ureteral tapering, outcome, and duration of follow-up. Fifteen cases were managed conservatively. After 12 months, an obstructed excretion pattern was found on diuretic testing in eight patients (53.3%) and an unobstructed pattern in seven (46.7%). Differential function of the affected kidney worsened in two cases (15.4%), falling below 10% in one of them. Eight patients underwent surgery (53.3%): ureteral reimplantation in seven and nephrectomy in one. Four cases required ureteral tapering (57.1%). Twelve patients were selected to undergo double-J stent insertion for a 6-month period. Stent-related complications developed in 5 cases (41.7%), including upper migration in two, distal migration in two, and breakthrough infection in one patient. 
At reassessment three months after stent removal, 6 patients (50%) presented an obstructed pattern and the remaining cases an unobstructed pattern (50%). In one of the patients with impaired function, kidney function worsened to less than 10%. Six patients underwent surgical treatment (50%): one nephrectomy and five ureteral reimplantations. No case needed ureteral tapering. Mean follow-up was 7 months in the first group and 3 years and 3 months in the second.
Double-J stent insertion in patients with PNRM is associated with high morbidity, and final outcomes did not differ between the two groups; therefore, stringent prognostic indicators need to be defined to justify the use of the double-J stent as a means of temporizing surgical treatment.
closed_qa
Do pain specialists meet the needs of the referring physician?
To study the factors that influence the use of opioids in the management of chronic noncancer pain (CNCP) by primary care providers (PCPs) for patients returning from a pain specialist. A survey of PCPs. Two physician groups in the Minneapolis-St. Paul metropolitan area. Two hundred seventy-six PCPs were surveyed, and 80 surveys were returned. Participants rated the importance of specific concerns regarding the role of pain specialists and the use of opioids in the management of CNCP. Past experience with pain specialists, comfort using opioids, and opinions regarding a trilateral opioid agreement were also examined. The top concerns for PCPs were as follows: the use of opioids in patients with chemical dependency or psychological issues, the escalation of opioid dosing, and the use of opioids in pain states without objective findings. They also ranked highly the importance of coordinating the return of patients from a pain specialist with explicit opioid instructions and the availability of consultation by phone or a timely follow-up visit. PCPs were supportive of the concept of a trilateral opioid agreement.
PCPs have significant concerns regarding the prescribing of opioids in CNCP. They desire closer collaboration with pain specialists, including more explicit plans of care when patients are transferred back to them. The trilateral agreement may provide one framework for better collaboration.
closed_qa
Residential facilities and day centres in mental health. Is there any difference?
We wanted to investigate to what extent and in what characteristics the patients cared for in psychiatric residential facilities (RF) were similar to those in day centres (DC), and whether 6-month improvements in the two settings were comparable. We described 141 patients admitted to the RF and 180 to the DC of three mental health service networks in and around Milan. They were evaluated again after six months. In both groups, we identified subgroups receiving more intensive treatment: 45% of those in residential treatment were in high-intensity rehabilitation facilities, and 53% followed a residential programme of >12 hours/week. The mean duration of treatment was 40 months (SD 55.7) in residential treatment and 49.6 months (SD 49.3) in DC. The two groups differed in overall HoNOS scores, with differences emerging in the subscales relating to daily life activities and living conditions. Among those in RF, about half had a house, versus 99% among those in DC. After six months, clinically significant changes were small in both groups.
Residential patients had more needs than DC patients. It is possible that some of the residential patients might be treated with an intensive DC program, but the absence of a home for the majority of residential facility patients makes this unlikely.
closed_qa
Does HIV serostatus affect outcomes of dually diagnosed opiate dependents in residential treatment?
Little is known about the specific treatment needs of mentally ill clients who abuse substances and are infected with HIV. The major gap concerns residential programmes. To explore differences in outcomes between seropositive and seronegative dually diagnosed opiate-dependent clients who participated in a residential therapy programme. Data were gathered on 154 clients treated in a therapeutic community in Milan between October 1999 and September 2004. Odds ratios with 95% confidence intervals were used to study the association between HIV serostatus and outcome. At 12-month follow-up, seropositive clients were more likely to relapse.
The impact of HIV seropositivity on behavioural outcomes should be taken into consideration when planning residential programmes for the HIV (+) dually diagnosed population. Further research could test the need to incorporate dedicated treatments into existing programmes.
closed_qa
Do hospitalists or physicians with greater inpatient HIV experience improve HIV care in the era of highly active antiretroviral therapy?
Little is known about the effect of provider type and experience on outcomes, resource use, and processes of care of hospitalized patients with human immunodeficiency virus (HIV) infection. Hospitalists are caring for this population with increasing frequency. Data from a natural experiment in which patients were assigned to physicians on the basis of the call cycle were used to study the effects of provider type (hospitalist vs. nonhospitalist) and HIV-specific inpatient experience on resource use, outcomes, and selected measures of processes of care at 6 academic institutions. Administrative data, inpatient interviews, 30-day follow-up interviews, and the National Death Index were used to measure outcomes. A total of 1207 patients were included in the analysis. There were few differences in resource use, outcomes, and processes of care by provider type and experience with HIV-infected inpatients. Patients who received hospitalist care demonstrated a trend toward increased length of hospital stay compared with patients who did not receive hospitalist care (6.0 days vs. 5.2 days; P = .13). Inpatient providers with moderate experience with HIV-infected patients were more likely to coordinate care with outpatient providers (odds ratio, 2.40; P = .05) than were those with the least experience with HIV-infected patients, but this pattern did not extend to providers with the highest level of experience.
Provider type and attending physician experience with HIV-infected inpatients had minimal effect on the quality of care of HIV-infected inpatients. Approaches other than provider experience, such as the use of multidisciplinary inpatient teams, may be better targets for future studies of the outcomes, processes of care, and resource use of HIV-infected inpatients.
closed_qa
Long-term prognosis of cirrhotics with an upper gastrointestinal bleeding episode: does infection play a role?
We evaluated the effect of infection on the short- and long-term outcome of cirrhotic patients with upper gastrointestinal bleeding (UGIB) in a series of patients not given antibiotic prophylaxis. Cirrhotic patients hospitalized for UGIB were prospectively followed up until the last visit, death, or transplantation. A standard screening protocol for bacterial infection was used at admission. In total, 205 patients were included in the study. Antibiotics were administered in 79 (38.5%) patients, and an infection was documented in 64 (31.4%). In total, 130 (63.4%) patients died after a mean (SD) follow-up of 23.8 (30.9) months. Six-week mortality was higher in the infected patients (P<0.0001). The mortality of patients who were alive 6 weeks after admission was not different between the infected and non-infected patients. Antibiotic use or bacterial infection, the Child-Pugh score, hepatocellular carcinoma, and creatinine were independent predictors of 6-week mortality. Age and the Child-Pugh score were the only predictors of mortality among patients who had survived more than 6 weeks after acute bleeding. In total, 51 (24.9%) patients rebled, 37 (18.1%) within 5 days of admission. Rebleeding was more frequent (41.8% vs 14.3%, P<0.0001) in infected patients, mostly due to differences in early rebleeding (31.6% vs 9.5%, P = 0.0001).
Bacterial infection is associated with failure to control UGIB and early mortality in cirrhotic patients, but does not seem to affect the outcome of patients who overcome the bleeding episode.
closed_qa
Is there any relationship between metabolic parameters and left ventricular functions in type 2 diabetic patients without evident heart disease?
The aim of the present study was to evaluate left ventricular (LV) systolic and diastolic function, using tissue Doppler echocardiography (TDE) and color M-mode flow propagation velocity, in relation to blood glucose status in normotensive patients with type 2 diabetes mellitus (T2DM) who had no clinical evidence of heart disease. Seventy-two patients with T2DM (mean age 49.1 +/- 9.8 years) without symptoms, signs, or history of heart disease and hypertension, and 50 age-matched healthy controls (mean age 46.1 +/- 9.8 years) underwent echocardiography. Systolic and diastolic LV function was assessed using conventional echocardiography, TDE, and mitral color M-mode flow propagation velocity (V(E)). Fasting blood glucose level (FBG) after an 8-hour fast, postprandial blood glucose level (PPG), and HbA(1C) level were determined. The association of FBG, PPG, and HbA(1C) with the echocardiographic parameters was investigated. Although systolic function was similar in the two groups, diastolic function was significantly impaired in diabetics. No relation of FBG or PPG to systolic or diastolic function was found. However, HbA(1C) was related to diastolic parameters such as E/A, Em/Am, V(E), and E/V(E) (beta=-0.314, P<0.05; beta=-0.230, P<0.05; beta=-0.602, P<0.001; beta=0.387, P<0.005, respectively). In addition to HbA(1C), LV diastolic function was also correlated with age and diabetes duration.
Diastolic LV dysfunction may develop even in the absence of ischemia, hypertension, and LVH in T2DM. FBG and PPG have no effect on LV function, but HbA(1C) levels may affect diastolic parameters.
closed_qa
The relative contribution of genes and environment to alcohol use in early adolescents: are similar factors related to initiation of alcohol use and frequency of drinking?
The present study assessed the relative contribution of genes and environment to individual differences in initiation of alcohol use and frequency of drinking among early adolescents and examined the extent to which the same genetic and environmental factors influence both. Questionnaire data collected by the Netherlands Twin Register were available for 694 twin pairs aged 12 to 15 years. Bivariate genetic model-fitting analyses were conducted in Mx. We modeled the variance of initiation of alcohol use and frequency of drinking as a function of three influences: genetic effects, common environmental effects, and unique environmental effects. Analyses were performed conditional on sex. Findings indicated that genetic factors were most important for variation in early initiation of alcohol use (83% explained variance in males and 70% in females). There was a small contribution of common environment (2% in males, 19% in females). In contrast, common environmental factors explained most of the variation in frequency of drinking (82% in both males and females). In males, the association between initiation and frequency was explained by common environmental factors influencing both phenotypes. In females, there was a large contribution of common environmental factors that influenced frequency of drinking only. There was no evidence that different genetic or common environmental factors operated in males and females.
Different factors were involved in individual differences in early initiation of alcohol use and frequency of drinking once adolescents have started to use alcohol.
closed_qa
Distribution of Langerhans cells and mast cells within the human oral mucosa: new application sites of allergens in sublingual immunotherapy?
Sublingual immunotherapy (SLIT) represents an alternative to subcutaneous immunotherapy. While antigen-presenting cells such as Langerhans cells (LCs) are thought to contribute to the effectiveness of SLIT, mast cells (MCs) most likely account for adverse reactions such as sublingual edema. As little is known about LCs and MCs within the oral cavity, we investigated their distribution in search of mucosal sites with the highest LC and lowest MC density. Biopsies were taken simultaneously from the human vestibulum, bucca, palatum, lingua, sublingua, gingiva, and skin. Immunohistochemistry and flow cytometry were used to detect MCs, LCs, and expression of the high-affinity receptor for IgE (FcepsilonRI) on LCs. Mixed lymphocyte reactions were performed to assess their stimulatory capacity. The highest density of MCs was detected within the gingiva, while the lowest was found within the palatum and lingua. However, sublingual MCs were located within glands, which might explain the swelling of the sublingual caruncle in some SLIT patients. The highest density of LCs was detected within the vestibular region, with the lowest density in the sublingual region. The highest expression of FcepsilonRI was detected on LCs within the vestibulum. Furthermore, LCs from different regions displayed similar stimulatory capacity towards allogeneic T cells.
In view of our data, different mucosal regions such as the vestibulum might represent alternative SLIT application sites with potent allergen uptake. Our data might serve as a basis for new application strategies for SLIT to enhance efficiency and reduce local adverse reactions.
closed_qa
Do quality of life, participation and environment of older adults differ according to level of activity?
Activity limitation is one of the most frequent geriatric clinical syndromes and has significant individual and societal impacts. People living with activity limitations may have fewer opportunities to be satisfied with life or experience happiness, which can have a negative effect on their quality of life. Participation and environment are also important modifiable variables that influence community living and are targeted by health interventions. However, little is known about how quality of life, participation, and environment differ according to activity level. This study examines whether quality of life, participation (level and satisfaction), and perceived quality of the environment (facilitators or obstacles in the physical or social environment) of community-dwelling older adults differ according to level of activity. A cross-sectional design was used with a convenience sample of 156 older adults (mean age = 73.7; 76.9% women), living at home and having good cognitive function, recruited according to three levels of activity limitation (none, slight to moderate, and moderate to severe). Quality of life was estimated with the Quality of Life Index, participation with the Assessment of Life Habits, and environment with the Measure of the Quality of the Environment. Analysis of variance (ANOVA) or the Welch F-ratio indicated whether the main variables differed according to activity level. Quality of life and satisfaction with participation were greater with a higher activity level (p<0.001). However, these differences were clinically significant only between participants without activity limitations and those with moderate to severe activity limitations. When activity level was more limited, participation level was further restricted (p<0.001) and the physical environment was perceived as having more obstacles (p<0.001). No differences were observed for facilitators in the physical and social environment or for obstacles in the social environment.
This study suggests that older adults' participation level and obstacles in the physical environment differ according to level of activity. Quality of life and satisfaction with participation also differ but only when activity level is sufficiently disrupted. The study suggests the importance of looking beyond activity when helping older adults live in the community.
closed_qa
Radiographic assessment of skeletal maturation stages for orthodontic patients: hand-wrist bones or cervical vertebrae?
The skeletal maturation status of a growing patient can influence the selection of orthodontic treatment procedures. Either lateral cephalometric or hand-wrist radiography can be used to assess skeletal development. In this study, we examined the correlation between the maturation stages of cervical vertebrae and hand-wrist bones in Taiwanese individuals. The study group consisted of 330 male and 379 female subjects ranging in age from 8 to 18 years. A total of 709 hand-wrist and 709 lateral cephalometric radiographs were analyzed. Hand-wrist maturation stages were assessed using the National Taiwan University Hospital Skeletal Maturation Index (NTUH-SMI). Cervical vertebral maturation stages were determined by the latest Cervical Vertebral Maturation Stage (CVMS) Index. Spearman's rank correlation was used to correlate the respective maturation stages assessed from the hand-wrist bones and the cervical vertebrae. Spearman's rank correlation coefficients were 0.910 for males and 0.937 for females. These data confirmed a strong and significant correlation between the CVMS and NTUH-SMI systems (p<0.001). After comparing the mean ages of subjects in the different stages of the CVMS and NTUH-SMI systems, we found that CVMS I corresponded to NTUH-SMI stages 1 and 2, CVMS II to NTUH-SMI stage 3, CVMS III to NTUH-SMI stage 4, CVMS IV to NTUH-SMI stage 5, CVMS V to NTUH-SMI stages 6, 7, and 8, and CVMS VI to NTUH-SMI stage 9.
Our results indicate that cervical vertebral maturation stages can be used to replace hand-wrist bone maturation stages for evaluation of skeletal maturity in Taiwanese individuals.
closed_qa
Worsening renal function in children hospitalized with decompensated heart failure: evidence for a pediatric cardiorenal syndrome?
The purpose of this study was to determine the incidence of renal insufficiency in children hospitalized with acute decompensated heart failure and whether worsening renal function is associated with adverse cardiovascular outcome. Prospective observational cohort study. Single-center children's hospital. All pediatric patients from birth to age 21 yrs admitted to our institution with acute decompensated heart failure from October 2003 to October 2005. None. Acute decompensated heart failure was defined as new-onset or acute exacerbation of heart failure signs or symptoms requiring hospitalization and inpatient treatment. We required that heart failure be attributable to ventricular dysfunction only. Worsening renal function was defined as an increase in serum creatinine of >=0.3 mg/dL during hospitalization. Sixty-three patients (35 male, 28 female) accounted for 73 patient hospitalizations. Median age at admission was 10 yrs (range 0.1-20.3 yrs). Median serum creatinine at admission was 0.6 mg/dL (range 0.2-3.5 mg/dL), and median creatinine clearance was 103 mL/min/1.73 m2 (range 22-431 mL/min/1.73 m2). Serum creatinine increased during 60 of 73 (82%) patient hospitalizations (median increase 0.2 mg/dL, range 0.1-2.7 mg/dL), and worsening renal function occurred in 35 of 73 (48%) patient hospitalizations. Clinical variables associated with worsening renal function included admission serum creatinine (p = .009) and blood urea nitrogen (p = .04) and, during hospitalization, continuous infusions of dopamine (p = .028) or nesiritide (p = .007). Worsening renal function was independently associated with the combined end point of in-hospital death or need for mechanical circulatory support (adjusted odds ratio 10.2; 95% confidence interval 1.7-61.2, p = .011). Worsening renal function was also associated with longer observed length of stay (33 +/- 30 days vs. 18 +/- 25 days, p<.03).
These data suggest that an important cardiorenal interaction occurs in children hospitalized for acute decompensated heart failure. Renal function commonly worsens in such patients and is associated with prolonged hospitalization and in-hospital death or the need for mechanical circulatory assistance.
closed_qa
Does high field MRI allow an earlier diagnosis of multiple sclerosis?
High field magnetic resonance imaging (MRI) provides higher lesion load measurements in patients presenting with clinically isolated syndromes (CIS) suggestive of demyelination and has an impact on the classification of these syndromes and, potentially, on the diagnosis of multiple sclerosis (MS). To investigate whether high field MRI can provide an earlier diagnosis of definite MS within the International Panel (IP) and Swanton criteria. Forty patients presenting with CIS suggestive of MS were included. All patients received multi-sequence MRI at 1.5 Tesla (T) and 3T as well as a neurological assessment at baseline. Follow-up visits including MRI at both field strengths and neurological examinations were scheduled 3-4 and 6-7 months after the first clinical event. Based on MRI and clinical findings, fulfillment of the IP criteria as well as the Swanton criteria was analysed. At baseline, the higher detection rate of inflammatory lesions using high field MRI led to higher classifications according to the Swanton criteria in 15% of the patients. One additional patient was diagnosed with dissemination in space according to the Swanton and IP criteria. During follow-up, an earlier diagnosis of definite MS could not be achieved according to either the IP or the Swanton criteria.
Although high field MRI shows a higher detection rate of inflammatory brain lesions in CIS and MS patients, with an influence on classification according to MRI criteria, this does not lead to an earlier diagnosis of lesion dissemination in time and therefore of definite MS.
closed_qa
Do cosmetic surgeons consider estrogen-containing drugs to be of significant risk in the development of thromboembolism?
Well-documented evidence shows that estrogen increases the risk of deep vein thrombosis (DVT), and that the effects of DVT are compounded by the stress of surgery and an anesthetic. This study sought to determine the current views and practice of plastic surgeons regarding the combined oral contraceptive and surgery. In the United Kingdom, 285 consultant plastic surgeons were identified, and postal questionnaires were distributed to each surgeon. Of the postal questionnaires distributed, 53% were returned and analyzed. Most of the surgeons considered the combination of the combined oral contraceptive and surgery to be a risk factor for DVT, although only 54% discontinued it before surgery. Approximately 50% believed hormone-replacement therapy (HRT) is a risk, but fewer than one-fourth of surgeons stopped its use before surgery. There was a range in the length of time for which HRT was discontinued for surgery. The majority of consultants discontinue HRT for 5 to 6 weeks before surgery and until full ambulation after surgery. The data retrieved were compared with documented evidence relating the combined oral contraceptive and surgery to DVT.
This survey shows that the management of patients taking estrogen-containing medication before plastic surgery varies, and guidelines regarding this should be sought.
closed_qa
Screening for postpartum depression with the Edinburgh Postnatal Depression Scale in an indigent population: does a directed interview improve detection rates compared with the standard self-completed questionnaire?
The Edinburgh Postnatal Depression Scale (EPDS) is a well-validated screening tool for the detection of patients at risk for postpartum depression. It was postulated that screening with the EPDS in a directed interview would increase the detection rate compared with a self-completed EPDS in an indigent population. To compare the results of a self-completed EPDS with those of a directed interview utilizing the EPDS in the identification of patients at increased risk for postpartum depression. All patients undergoing a 6-week postpartum evaluation in the obstetric clinic at a community teaching hospital between November 1, 2003 and March 31, 2004 were screened for postpartum depression using the self-completed EPDS. This was followed by a directed interview, consisting of an EPDS verbally administered by a social worker blinded to the results of the self-completed EPDS. A positive screen was defined as an EPDS score of >=12 by either method. The number of patients with a positive screen on the self-completed EPDS, the directed interview EPDS, or both was recorded. The two techniques were compared by the McNemar chi-square test. The self-completed and directed interview EPDS scores were compared by Pearson's correlation coefficient to examine differences between screening techniques. Demographic data and characteristics in each group were examined. Among the 134 patients evaluated, 24 (17.9%) screened positive for being at increased risk of postpartum depression. The self-completed EPDS and directed interview EPDS detection rates were not different, identifying 23 (17.2%) and 22 (16.4%) patients, respectively (p = 1.0). Use of the self-completed EPDS and the directed interview EPDS in parallel detected one additional subject (0.7%; p = 0.99). The self-completed EPDS and directed interview EPDS scores correlated significantly (r = 0.94; p = 0.01).
The demographics and characteristics of patients with a positive screen were not different from those with a negative screen.
The self-completed EPDS and directed interview EPDS are equivalent screening techniques for postpartum depression. There is no evidence to suggest that parallel screening improves detection. Either technique should be incorporated into the postpartum visit to screen for postpartum depression.
closed_qa
Pretransplant inflammation: a risk factor for delayed graft function?
Inflammation plays an important role in the pathogenesis of ischemic acute kidney injury (IAKI). In this study, we hypothesized that transplant recipients with pretransplant inflammation may have a greater chance of developing delayed graft function (DGF), an example of IAKI. We analyzed 178 patients who had undergone a first transplant from a cadaveric donor. Blood samples were extracted from transplant recipients prior to transplantation. C-reactive protein (CRP) (nephelometry); interleukin 6 (IL-6) and tumor necrosis factor alpha (TNF-alpha) (automated enzyme chemiluminescence immunometric assay); and pregnancy-associated plasma protein A (PAPP-A) (enzyme-linked immunosorbent assay) were determined using the pretransplant blood samples. The risk factors analyzed included cold ischemia, type and duration of dialysis, donor and recipient age, and HLA compatibility. Sixty-one patients (34.3%) developed DGF. Pretransplant TNF-alpha (9.31 +/- 2.57 vs. 10.56 +/- 3.82 pg/mL; p=0.039) and PAPP-A (1.25 +/- 0.74 vs. 1.90 +/- 1.56 mU/L; p=0.002) were significantly elevated in the group of patients with DGF. Univariate analysis showed that PAPP-A, TNF-alpha, cold ischemia, type of dialysis (hemodialysis), and donor age were associated with DGF. Multivariate analysis showed that PAPP-A (p=0.006), cold ischemia (p=0.009), and type of dialysis (p=0.046) were independent risk factors for DGF.
Pretransplant inflammation (TNF-alpha, PAPP-A) in transplant recipients could be a risk factor for the development of DGF.
closed_qa
Is therapy with calcium and vitamin D and parathyroid autotransplantation useful in total thyroidectomy for preventing hypocalcemia?
To determine whether routine calcium and vitamin D administration and routine autotransplantation of parathyroid glands can prevent hypocalcemia after total thyroidectomy. Routine autotransplantation of 1 or more parathyroid glands and oral calcium and vitamin D supplementation were used in 252 patients. One, 2, or 3 parathyroid glands were autotransplanted in 223, 27, and 2 patients, respectively. Oral calcium and vitamin D were routinely administered in the postoperative period in all patients. Postoperative hypocalcemia occurred in 17%, of whom 1.6% had minor symptoms related to hypocalcemia. No patient developed permanent hypocalcemia during the follow-up period. The postoperative stay was 1 day in 93.6% of the cases. The incidence of postoperative hypocalcemia and the hospital stay were higher in patients who underwent autotransplantation of more than 1 parathyroid gland.
Routine oral calcium and vitamin D supplementation and autotransplantation of at least 1 parathyroid gland effectively reduced symptomatic hypocalcemia and permanent hypoparathyroidism after total thyroidectomy.
closed_qa
Patient preferences: do they contribute to healthcare disparities?
The purpose of this study was to examine the effect of race on whether or not a patient would accept an invasive cardiac procedure when referred by a physician. A retrospective longitudinal review of medical records at a public health hospital in southeastern Louisiana was conducted to determine differences in acceptance/rejection among cardiovascular patients. Patient charts were examined using specific indicators (type of pain, laboratory values, blood pressure, and radiographic tests) to determine which patients were eligible to be referred. In order to be selected, each medical record had to have documentation of a physician referral for an invasive cardiac procedure. Medical charts without this referral were deemed ineligible for the cohort. Patient preferences were similar for both minorities and Caucasians, despite the fact that the study controlled for disease severity, age, income, sex, race, social support, diagnosis, and family history.
Race did not contribute to disparate acceptance and rejection rates among African Americans and Caucasians. A possible reason for this occurrence is that the site was a teaching hospital, which may indicate more physician oversight and better articulation of treatment options. Future studies should delve deeper into physician and institutional bias in non-teaching facilities during patient/physician interactions.
closed_qa
Is universal newborn hearing screening more efficient with auditory evoked potentials compared to otoacoustic emissions?
Cost-effectiveness of universal newborn hearing screening programmes is under constant review. In this context, the aim of the present study was to evaluate the performance of brainstem response audiometry (BERA) compared to otoacoustic emissions (OAE) as screening tools. This was an observational, retrospective study of a universal screening programme started in 1998. We performed a comparative analysis between two groups of newborns evaluated in consecutive periods of time, analyzing outcome measures of the programme as a measure of effectiveness and the dedicated resources to weigh the costs. We compared a group of 862 newborns from 2003, screened with transient evoked OAE using a clinical device, with a group of 2300 newborns from 2005 and 2006, screened with automated BERA. We found a statistically significant difference in the percentage passing the first step, favoring BERA (99.7% vs 91.8%; P<.0005). The median exploration time with BERA was 276 seconds. Cost evaluation points to a progressively decreasing difference between the two tools.
There are data indicating that BERA could be more cost-effective as an initial screening tool. This advantage adds to the already known, more comprehensive evaluation of the auditory pathway, which could lead to a recommendation for its preferential use in auditory screening programmes.
closed_qa
Drugs used in paediatric outpatients: do we have enough information available?
To analyse the drugs taken by paediatric outpatients and the information available on these drugs. A cross-sectional, observational, descriptive study was carried out. The study involved a sample of children under 14 years seen in the Emergency Room of the HGUV from June 2005 to August 2006. The medicines they received were quantified and classified, and the information on these drugs available in the Vademecum International Medicom and in the Summary of Product Characteristics was analysed. Of the 462 children (mean age 5.2 years; 95% CI 4.9-5.6) included, 336 received 667 medicines (152 different medicines) containing 864 drugs (161 different drugs). In 34.3% of cases it was self-medication. Children under 4 years received more drugs than the older group (80.2% in the younger group and 67.4% in the older). Patients received from 1 to 7 medicines (mean 2.0). Children receiving 2 or 3 medicines were younger than those receiving one. Five therapeutic groups of the Anatomical Therapeutic Chemical (ATC) classification included 93.1% of the drugs administered (R: 26.5%; M: 23.8%; N: 22.8%; J: 10.6%; and A: 10.0%). In the information sources consulted there was no information available on paediatric use for 40 of the 152 medicines used.
Almost 75 % of patients seen in the Emergency Room were already receiving drugs before they arrived at the hospital, in many cases as a result of self-medication. The information available on the paediatric use of drugs is deficient. Clinical research is required to study the effects of pharmacological treatment on children and to improve the information on their use.
closed_qa
Is there value for serial ultrasonographic assessment of cervical lengths after a cerclage?
The objective of the study was to determine the value of serial ultrasonographic cervical length (CL) measurements after cerclage in predicting preterm delivery. Retrospective ultrasonographic and outcome data from singleton pregnancies with cerclage were reviewed. Using transvaginal ultrasound (TVS), overall CL measured before cerclage placement, 2 weeks after cerclage, and before delivery was compared between women who delivered preterm (less than 37 weeks) and at term. The overall CL, including CL above (CLA) and below (CLB) the cerclage, was compared using the SAS program. Cerclage was placed at 15.7 +/- 3.6 weeks (mean +/- SD) in 57 women. The overall CL before cerclage, 2 weeks after cerclage, and at the last TVS before delivery did not differ between preterm and term births. The odds ratio for preterm delivery with a measurable CLA on TVS was 0.87 (95% confidence interval 0.78 to 0.95). Thirty-two patients (56%) had an absent CLA at 26.7 +/- 4.4 weeks. Of these, 16 (50%) were delivered for preterm premature rupture of membranes (PPROM) and chorioamnionitis (sensitivity 100%, specificity 61%, positive predictive value 50%, and negative predictive value 100%).
Although the overall cervical length by serial TVS after cerclage did not predict preterm birth, absent CLA is associated with preterm delivery, chorioamnionitis, and PPROM.
closed_qa
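The screening metrics quoted in the cerclage entry above follow from a standard 2x2 table. A minimal sketch, with the cell counts reconstructed from the abstract (16 true positives among the 32 women with absent CLA, no false negatives, and the remaining 25 women with measurable CLA taken as true negatives — an assumption consistent with the reported 100% sensitivity and NPV):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Reconstructed counts for absent CLA as the test, preterm PPROM/
# chorioamnionitis delivery as the outcome (57 women total).
m = screening_metrics(tp=16, fp=16, fn=0, tn=25)
```

This reproduces the reported sensitivity of 100%, specificity of 61%, PPV of 50%, and NPV of 100%.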
Caustic ingestion in children: is endoscopy always indicated?
The ingestion of caustic substances can represent a serious medical problem in children. Whether or not an urgent endoscopy should be performed is still a matter of debate, particularly in asymptomatic patients. We conducted a multicenter observational study to investigate the predictive value of signs and symptoms in detecting severe esophageal lesions. The records of 162 children who presented with accidental caustic substance ingestion were analyzed. Signs and symptoms were divided into minor (oral and/or oropharyngeal lesions and vomiting) and major (dyspnea, dysphagia, drooling, and hematemesis). An endoscopy was performed in all patients within 12 to 24 hours of the substance being ingested. The types of substance ingested, signs and symptoms, age, sex, and severity of esophageal injury were correlated. Mild esophageal lesions were identified in 143 of 162 patients (88.3%), and severe (third-degree) esophageal lesions in 19 patients (11.7%). The risk of severe esophageal lesions in the absence of signs and/or symptoms was very low (odds ratio [OR] 0.13 [95% CI, 0.02-0.62], P = .002). Conversely, the presence of 3 or more symptoms was an important predictor of severe esophageal lesions (OR 11.97 [95% CI, 3.49-42.04], P = .0001). Multivariate analysis showed that the presence of symptoms was the most significant predictor of severe esophageal lesions (OR 2.3 [95% CI, 1.57-3.38], P = .001).
The results demonstrated that the incidence of patients with third-degree lesions without any early symptoms and/or signs is very low, and an endoscopy could be avoided. The risk of severe damage increases proportionally with the number of signs and symptoms, and an endoscopy is always mandatory in symptomatic patients.
closed_qa
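Several entries in this section report odds ratios with 95% confidence intervals, as in the caustic-ingestion study above. The usual computation is the Woolf (log-based) interval on a 2x2 table. A minimal sketch with hypothetical counts (the abstracts do not report their underlying tables, so the numbers below are for illustration only):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Woolf confidence interval for a 2x2 table:
    a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical table: 10/20 exposed vs 5/40 unexposed.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

For these counts the odds ratio is 4.0 with an interval of roughly 1.2 to 13.3; an interval excluding 1.0 corresponds to a significant association.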
Proximity of fast food restaurants to schools: do neighborhood income and type of school matter?
To investigate the proximity of fast food restaurants to public schools and examine proximity by neighborhood income and school level (elementary, middle, or high school). Geocoded school and restaurant databases from 2005 and 2003, respectively, were used to determine the percentage of schools with one or more fast food restaurants within 400 m and 800 m of all public schools in Los Angeles County, California. Single-factor analysis of variance (ANOVA) models were run to examine fast food restaurant proximity to schools by median household income of the surrounding census tract and by school level. Two-factor ANOVA models were run to assess the additional influence of neighborhood level of commercialization. Overall, 23.3% and 64.8% of schools had one or more fast food restaurants located within 400 m and 800 m, respectively. Fast food restaurant proximity was greater for high schools than for middle and elementary schools, and was inversely related to neighborhood income for schools in the highest commercial areas. No association with income was observed in less commercial areas.
Fast food restaurants are located in close proximity to many schools in this large metropolitan area, especially high schools and schools located in low income highly commercial neighborhoods. Further research is needed to assess the relationship between fast food proximity and student dietary practices and obesity risk.
closed_qa
Is the association of hypertension and panic disorder explained by clustering of autonomic panic symptoms in hypertensive patients?
Autonomic nervous system dysfunction may be implicated in the association of hypertension with panic attacks and panic disorder. We hypothesised that panic symptoms of autonomic origin are more common in attacks experienced by hypertensive than normotensive patients, that autonomic panic symptoms cluster together as a distinct factor, and that this factor is more prevalent in hypertensive patients with panic than in normotensives. We analysed all 346 structured questionnaires completed by primary care and hospital clinic patients who had reported experiencing full (n=287) or limited-symptom panic attacks (n=59) (268 with hypertension, and 78 never having had hypertension). Frequency of sweating, flushes, and racing heart, the symptoms selected prospectively as most likely of autonomic origin, was compared between hypertensive and normotensive patients. Principal component analysis was performed with varimax orthogonal rotation. Using logistic regression, odds ratios were calculated for the association of factor scores with hypertension. Sweating and flushes were significantly more common among hypertensive patients than normotensives (sweating: 65% vs 46%, p=0.003; flushes: 55% vs 40%, p=0.019). There was no significant difference between groups in the frequency of racing heart or of any of the remaining panic symptoms analysed as secondary endpoints. Principal component analysis yielded four factors with eigenvalues > 1.0. Factor 1 was dominated by autonomic symptoms, notably sweating and flushes, which had loadings of 0.68 and 0.61. On regression, only this autonomic factor showed a significant association with hypertension, with an odds ratio of 1.37 (95% C.I. 1.05 to 1.77, p=0.018).
These findings support the possibility that autonomic dysfunction contributes to the association of hypertension with panic.
closed_qa
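The panic-symptom study above retains principal components with eigenvalues greater than 1.0, the Kaiser criterion. The idea can be illustrated analytically: for a correlation matrix built from independent 2x2 blocks [[1, r], [r, 1]] (pairs of symptoms that correlate only with each other), each block contributes eigenvalues 1 + r and 1 - r, so each tight pair yields exactly one retained factor. A toy sketch under that assumption (the study's actual correlation matrix is not reported):

```python
def kaiser_retained(pair_correlations):
    """Factors retained by the Kaiser (eigenvalue > 1) rule for a
    correlation matrix of independent 2x2 blocks [[1, r], [r, 1]],
    whose eigenvalues are 1 + r and 1 - r per block."""
    eigvals = []
    for r in pair_correlations:
        eigvals += [1 + r, 1 - r]
    return sum(e > 1.0 for e in eigvals)

# Toy structure: two tight symptom pairs (r = 0.8 each), e.g. sweating
# with flushes, and two other co-occurring symptoms.
n_factors = kaiser_retained([0.8, 0.8])
```

Here the two correlated pairs produce two eigenvalues of 1.8 and two of 0.2, so two factors are retained.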